\section{Introduction}
Collective behaviour, such as swarming \cite{Buhl_From_2006, Inherent_2009, Bazazi2012, Attanasi_Finite_2014, Attanasi2014, Murakami2014}, fish schooling \cite{Ioannou2012, Strandburg-Peshkin2013, Berdahl2013, Murakami2015, Niizato2017} and bird flocking \cite{Ballerini2008, Cavagna2010, Cavagna2013, Bialek2014, Attanasi2015, Mora2016}, has been widely observed in nature \cite{Couzin2007, Couzin2009, Sumpter_Collective_2010, Vicsek2012}. In some instances, individuals respond to the changing environment rapidly as one collective \cite{Cavagna2013, Bialek2014, Attanasi2015} and, in other cases, relatively good decision-making is achieved as a group \cite{Franks2003, Dyer2009, Bose2017}. Conflicts among individuals, as seen by an external observer, do not necessarily lead to group disruption; instead, they show the way to a more effective response as a group \cite{Couzin2011, Pinkoviezky2018}. The unity of this kind of animal behaviour remains one of the mysteries of nature \cite{Couzin2007}.
Self-organised criticality (SOC) has been a good metaphor for interpreting these collective animal behaviours. If the group is in an intermediate state between order and disorder, it becomes possible to achieve enough flexibility and robustness as one system \cite{Bak1988, Tetzlaff2010, Niizato2012, Gunji2014, Gunji2014b, Niizato2018}. For example, the perturbations of flocks (or swarms) in SOC models optimise the effective correlation range of each bird and make it possible to accomplish fast information transfer \cite{Cavagna2010, Cavagna2013, Bialek2014, Attanasi2015}. However, when it comes to considering small groups, the same method cannot be applied because it is hard to assume that the interactions of individuals are homogeneous \cite{Herbert-Read2013, Jolles2017}. In particular, with regard to the subject of this study, it is conceivable that the interactions of two- and three-fish groups may be different \cite{Katz2011, Gautrais2012}. Many researchers, therefore, have considered information transfer (or causal relationships) among individuals in small groups \cite{Staniek2008, Butail2016, Crosato2018}. The (local) transfer entropy is the preferred measure in this case \cite{Lizier2008, Lizier2012, Sun2014, James2016, Tomaru2016, YagmurErten2017}. For example, Crosato et al. \cite{Crosato2018} showed that the transfer of misinformation happens in a five-fish school when the whole school changes direction. Other studies suggest that active information storage can predict the timing at which nontrivial information transfer happens \cite{Wang2012, Wang2011}. Although the latter approaches promise to give us a tremendous amount of information about what is happening in the group, they will not tell us what the system of collective behaviour is \cite{Albantakis2015}. The SOC approach certainly captures some aspects of what the system of collective behaviour is, but it gives little information about the causal structures inside the groups.
Before we go into detail about the difference between what is happening and what the system is, we need to introduce the concept of integrated information theory (IIT). IIT, which Tononi and other researchers have proposed, has been a rapidly developing area over the last two decades \cite{Balduzzi2009, Tononi2010, Barrett2011, Oizumi2014, Oizumi2015, Oizumi2016, Mayner2018}. The original aim was to estimate the degree of consciousness from brain activity \cite{Balduzzi2009, Tononi2010}. Recent studies suggest that IIT can capture and discriminate between various states of lost consciousness, such as dreamless sleep \cite{Massimini2005}, general anaesthesia \cite{Alkire2008} or vegetative states \cite{Gosseries2014}. Although IIT has several versions, its core concept is the same in principle, that is, the integrated information ($\Phi$) is defined as the degree of information loss caused by a certain partition of the system \cite{Tononi2010, Oizumi2016} (in the case of Barrett and Seth's version of IIT, $\Phi$ is the degree of increase of uncertainty caused by a certain partition \cite{Barrett2011}; a computational comparison of many versions of IIT has been made by Mediano \cite{Mediano2018}). It is worth noting that Ito \cite{Ito2018, Ito2018b} has pointed out that there are intimate relations between the second law of information thermodynamics and IIT in terms of a projection onto a local reversible manifold. These structural resemblances suggest the possibility of unifying the concepts of non-equilibrium thermodynamics and IIT.
The key concept of IIT is that the whole cannot be reduced to its separated parts, because the lost information would contain synergetic information produced by those parts. In this respect, the concept of IIT resonates with that of complex systems \cite{Bertalanffy1969}, for which the statement ``the whole is more than the sum of its parts'' has long been a slogan \cite{Hooker2011}. Since the intrinsic causal structures make the system irreducible to its parts, the integrated information (or $\Phi$) can also be a measure of the degree of wholeness of a single autonomous system \cite{Farnsworth2018}.
There have been some applications of IIT to cellular automata \cite{Albantakis2015}, animats \cite{Edlund2011} and Boolean networks \cite{Marshall2017}. For example, Albantakis et al. \cite{Albantakis2015} showed that average $\Phi$ values for 5 to 6 cells correlated well with their complexity, such as class III and IV, despite the very small number of cell sets. (The behaviours of 5 and 6 cells can hardly be discriminated on the basis of the behaviours of their constituent cells and, in general, the behaviours of small numbers of cellular automata look very similar to an external observer.) They also showed that all class IV rules have concepts of all orders (i.e. irreducible subsets of the system), unlike other classes.
The example of cellular automata illuminates the meaning of intrinsic properties for IIT. IIT reveals the differences among systems arising from different intrinsic causal structures (rules), rather than considering differences based on external behaviour. That is why we said that previous approaches (especially transfer entropy) capture not what the system is but what is happening. Now we can ask the following question: how do collective behaviours differ from the intrinsic causal structure perspective? In this paper, we ask the following: does the number of agents in a system make its intrinsic properties different? In other words, if the group size changes, what remains the same (continuous) and what changes (discontinuous) in the group? Also, are any new factors introduced which were not present before? This kind of question is rarely asked in animal collective behaviour, but one study suggests that schools of three fish and schools of two fish have different kinds of interactions \cite{Katz2011, Gautrais2012}. Another suggests that the search strategies of fish in groups of different sizes are essentially different when they are in an unfamiliar environment \cite{Niizato2017}. However, all these studies constrain the number of individuals in the group to three or fewer, and their methods are difficult to generalise to larger groups. Furthermore, these methods never indicate any differences in terms of the group’s intrinsic causal structure.
In this paper, we apply IIT (in particular, IIT 3.0 using PyPhi \cite{Oizumi2014, Mayner2018}) to schools of two to five fish ({\it Plecoglossus altivelis}) and show the intrinsic differences between these groups. To apply IIT to the collective behaviour of animals, we propose a simple hypothesis, namely, that a living system evolves to raise its integrated information. This hypothesis is not a peculiar one because some studies have suggested that, for some artificial systems selected by their fitness, $\Phi$ values were correlated with fitness \cite{Albantakis2015}. Thus, to raise $\Phi$ means to raise fitness in a given environment. Adopting this hypothesis, we found that there is a kind of continuity and discontinuity with respect to school size. The main finding is a discontinuity between three- and four-fish schools, a difference that has received little attention previously. Interestingly, the difference between these two systems corresponds to the existence of leadership (more precisely, reducing the field of view in each fish's recognition introduces leadership). Furthermore, our results are not replicated by a Boids-type model under the same conditions.
\section{Results}
\label{sec:Results}
\subsection{Definition to apply IIT to fish schools}
To apply IIT 3.0, we define ON and OFF states of an individual in a fish school. In this paper, the ON state means some interaction would occur in a given context. For example, if two individuals are within a certain radius, the states of both individuals are ON (some information transfer would occur between them). This is a symmetric interaction. In the same way, we consider two other interactions to define ON and OFF states for fish in the school: visual field and turning rate interactions (see Fig. \ref{Figure_1}). A visual field interaction means the individual is in the ON state when some other agents are within its visual field. This allows us to consider asymmetric relations, in contrast to the symmetric distance condition. The turning rate interaction is one in which a direction change above a certain value puts the individual in the ON state. This ON state transfers information to other agents in the next time step, so the interaction between individuals is a delayed one. The direction changing rate is a very important measure for collective behaviour, empirically and theoretically \cite{Couzin2002, Vicsek2012, Strandburg-Peshkin2013}.
In this paper, we assume a fish always evaluates these three kinds of information simultaneously. So, we take the conjunction (i.e. AND) of the obtained 3 bits of information (for instance, IF Distance: ON, Visual field: ON, Turning rate: OFF, THEN state OFF) to produce an overall state for a fish. Applying the same process to each fish at a time $t$, we obtain the time series of the states of the $n$ fish. Then we can compute $\Phi$ and other values (e.g. the number of concepts) from the obtained time series. One time step, in this paper, is defined as 0.05 (0.10, 0.20) s. This value roughly corresponds to the fish's reaction timescale \cite{Crosato2018}.
To compute $\Phi$, we also define the network structure of the school. In this paper, we postulate a completely connected network without self-loops. This assumption comes from the experimental fact that each fish has some contact with (or falls within the visual field of) all other individuals in the group during the long series of recorded events (10-15 min). Therefore, it is natural to assume that some interactions happened among all members. (In Table S1, we give the minimal distances throughout the events; the data show that all fish come within 5 mm of each other.)
Before we go into detail about our analysis, it is necessary to understand what the states ON and OFF mean for the fish. Biological information systems, such as the brain, have an explicit ON state, that is, firing neurons. In contrast, the ON state for each fish is its recognition of a certain environment, that is, it is the state of a characteristic factor to which each fish pays attention. Since there are various kinds of information to take into account, there is no explicit ON state in a fish school. (This kind of ambiguity is not a demerit of our analysis. We will come back to this issue in the Discussion.)
\subsection{$\Phi$ values for local parameter settings}
First, we confirmed that $\Phi$ increases with group size (from two to five fish) on average. This trend is also observed in the Boids model (with the same parameter settings, see Table S2), but its values are higher than those for real fish schools. This result is natural because the degree of integrity becomes high when each agent keeps its distance almost constant and moves as one collective throughout the series of events. Compared with the Boids model, fish in real schools connect more loosely with each other. As a result, $\Phi$ for real fish schools is smaller than in the model.
Fig. \ref{Figure_2} shows that a qualitative change occurs when the group size increases from three to four. Apparently, $\Phi$ values in two- and three-fish groups depend only on the distance threshold and not on the visual field. It appears that the leadership relation is not so important for fish groups smaller than four (an enlarged version of Fig. \ref{Figure_2} is given as Fig. \ref{fig:sfig31}). Leadership emerges when the group size is four or more. Interestingly, this trend is observed neither in the Boids model nor in the mutual information model. (See Figs. \ref{sfig1} and \ref{fig:sfig22a}. Fig. \ref{fig:sfig32} shows other parameter settings.) Furthermore, if we take a time step of 0.1 or 0.2 s instead of 0.05 s, the same tendency is observed in almost all cases (Figs. \ref{fig:sfig33} to \ref{fig:sfig33-2}). Two-fish groups show high values around Field of View $= (1/5)\pi$ ({\it rad}) and Distance $= 200$ ({\it mm}) (Figs. \ref{fig:sfig33} and \ref{fig:sfig33-2}). The leadership relation that emerges here is, however, essentially different from that in larger groups. This kind of behaviour may be called ``followership'' because the very narrow visual field leads individuals to target the fish swimming ahead of them.
Fig. \ref{Figure_3} shows an example of a time series of $\Phi$. The abrupt reductions of $\Phi$ values correspond to the emergence of leadership. In IIT 3.0, the emergence of leadership never raises the $\Phi$ value; it always decreases it. Leadership decreases the integrity of the school because, if we cut between a leader and its followers, the integrity of the whole is disrupted. Thus, the recognition that admits leadership raises $\Phi$ values on average (the highest $\Phi$ value occurs when all fish are in the ON state); however, leadership states themselves decrease $\Phi$ values as single states.
We also find that the turning rate is not so important for determining $\Phi$ values on a short timescale ($\Delta t = 0.05$ s). However, the turning rate becomes important for long-timescale events (see Figs. \ref{fig:sfig33} and \ref{fig:sfig33-2}). Over a short timescale, relative positional information seems to be the most important for raising $\Phi$ values.
The other intriguing measure is the number of concepts. Concepts are one of the critical notions in IIT 3.0 because $\Phi$ values are determined by their distribution in a conceptual space. A concept is, in short, the ability of a subsystem to make ``a difference that makes a difference''. (Further explanation is given in the ``Integrated information $\Phi$'' and ``Concept'' sections in the Supporting Information.) If a system contains many concepts (up to $2^n - 1$ concepts exist for $n$ elements), that system has many irreducible components (i.e. components that cannot be decomposed into their parts) as subsystems. The importance of the number of concepts can be observed in elementary cellular automata: rules which show class IV behaviour have concepts of all orders, unlike other classes \cite{Albantakis2015}.
We found that there are areas that are rich in concepts despite low $\Phi$ values (Figs. S10 and S11). Combined with the results shown in Fig. \ref{Figure_2}, we can distinguish three types of combination: low $\Phi$ and few concepts, high $\Phi$ and many concepts, and low $\Phi$ and many concepts (there are no examples of high $\Phi$ and few concepts in our study). The most interesting case is the combination of low $\Phi$ and many concepts. These areas tend to attain high $\Phi$ values if the number of fish increases. This observation suggests that low $\Phi$ values with many concepts provide the possibility of evolution if the condition (or environment) changes. (We also examined other measures; see Figs. \ref{fig:sfig35} to \ref{fig:sfig36}.)
\subsection{$\Phi$ values for global parameter settings}
Next, we defined the ON and OFF states globally rather than locally. That is, the states are determined by global measures of interaction rather than the local ones used previously. For this, we considered the average direction and the centre of mass. When the difference between a fish's direction and the average direction of the fish school is within a certain specified value, its state is ON (see Fig. \ref{Figure_4}). Similarly, when each fish's distance from the centre of mass of the school is smaller than a certain specified value, its state is ON. The main difference from the previous state definition is that these parameters require the existence of a single group to be postulated {\it a priori}. These values make no sense if the group is divided into two groups. (It is possible that two independent coherent groups will be incoherent when considered as a whole.)
As in the local case, $\Phi$ values rise with group size (Fig. \ref{Figure_5}). This tendency is also observed in the Boids model. (Note, in particular, that the distribution for two-fish groups in the Boids model is very different from that of real two-fish schooling; see Fig. \ref{fig:sfig22b}.) The main difference between the local and global measures is that the discontinuity occurs at a different point, namely between two and three fish. The discontinuity between three- and four-fish schools is never observed for the global parameters. In this sense, three- and four-fish schools are continuous with respect to the global parameters.
\section{Discussion}
In this study, we applied IIT 3.0 to real fish schools and compared the results with those for another measure (mutual information) and another model (Boids) under the same conditions. Our results suggest the degree of integration $\Phi$ might pick up some unique information about real fish schools. From the $\Phi$ distributions derived with a certain set of parameters, we found a discontinuity between three- and four-fish schools with the local parameter settings but continuity with the global settings: the recognition of leadership raises the degree of integrity for four or more fish but not for three or fewer. Changing the timescale from 0.05 s to 0.2 s, we confirmed the emergence of ``followership'' rather than leadership in two-fish groups (Figs. \ref{fig:sfig33} and \ref{fig:sfig33-2}).
Therefore, their intrinsic causal structures are clearly distinct in terms of IIT, although two- and four-fish schools may exhibit leadership as a group.
This result is consistent with Albantakis's argument that IIT captures ``what a dynamical system is from its own intrinsic perspective'' (or ``how much and in which way it exists for itself, independent of an external observer'') rather than ``what is happening in a system from the extrinsic perspective of an observer'' \cite{Albantakis2015}. Along the lines of this statement, we can say the emergence of leadership represents what the system of a fish school is with respect to its group size. It is worth noting that IIT discriminates between three- and four-fish groups, a comparison that is rarely considered in the context of collective animal behaviour, although some studies suggest a difference between two- and three-fish groups in terms of each fish's interactions with others (i.e. a difference in what is happening in the system from an extrinsic perspective) \cite{Katz2011, Gautrais2012, Niizato2017}.
Finally, we comment on the relation between animal recognition systems and the evolution of collective animal behaviour. In this paper, we have hypothesised that living systems evolve to raise their $\Phi$ value. This hypothesis itself is not a peculiar one because some studies have shown that the fitness of artificial systems, such as animats and genetic Boolean networks, is correlated with $\Phi$ \cite{Edlund2011, Albantakis2015, Marshall2017}. Simple biological systems also show some connections between their functional units and $\Phi$ values (or their concepts) \cite{Marshall2017}. For example, in our study, the emergence of leadership in groups of four fish or more means each individual chooses to reduce its field of view in the group to raise $\Phi$ values (indeed, Fig.~\ref{Figure_2} shows that the peak of $\Phi$ values for a five-fish group shifts toward a smaller field of view than that of a four-fish group).
In our analysis, the factor which determines what is ON and OFF is a fish's recognition of its environment. In contrast with brain systems, ON states are dominant in a fish school. This means the OFF states are more informative than the ON states. The ON states, especially for the local parameter settings, are important because the all-ON state for a school means all fish recognise that they are part of the same group. That is why the state of leadership (one fish in the group being in the OFF state) reduces $\Phi$.
We have confirmed that leadership never raises $\Phi$ when the group size is three or less. This indicates that two- and three-fish groups tend to show fission-fusion behaviour rather than leadership. In addition, three-fish schools can be said to be a kind of tipping point from a local to a global collective. From the local perspective (local parameter settings), there seems to be no advantage for the group when a two-fish school becomes a three-fish school, because $\Phi$ values never rise in this condition. On the other hand, from the global perspective (global parameter settings), increasing the group size from two to three means increasing $\Phi$ values. Therefore, changing the recognition of what is ON or OFF in these systems would change the $\Phi$ values radically and help the group find its way to other optimal states of $\Phi$ under other modes of recognition. Our results suggest that the evolution of real autonomous systems could be understood through IIT.
In this study, we avoided going deeply into the problem of timescale (we only used a relatively small timescale, roughly equal to a typical fish's reaction time). Over longer timescales, other patterns of continuity and discontinuity may be found. Increasing the number of individuals may also give other results. However, the present practical computational limit of IIT 3.0 is around 7 or 8 individuals/neurons \cite{Mayner2018}, so some approximations will be needed for further analysis. Another area we did not address is network structure. We assumed a fully connected network without self-loops in this paper because all fish came into contact with each other throughout the event. This will not always be true for large groups. Furthermore, some studies suggest that the network structure of real schools of fish is radically different from that of the Boids model, and that real schools form a stable network called the $\alpha$-lattice \cite{Olfati-Saber2006, Olfati-Saber2007}. This type of network may prevent the $\Phi$-raising trends observed in the Boids model.
\section{Methods}
\subsection{Ethics statement}
This study was carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Committee on the Ethics of Animal Experiments of the University of Tsukuba (Permit Number: 14-386). All efforts were made to minimize suffering.
\subsection{$\Phi$ computation}
All computations in this paper were performed using the PyPhi software package with the CUT{\_}ONE{\_}APPROXIMATION to $\Phi$.
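As an illustration, the sketch below shows one plausible way to set up such a computation with PyPhi, assuming a state-by-node transition probability matrix estimated from the empirical transition frequencies of the binary time series; the estimation routine, placeholder data and variable names are our illustration rather than the authors' exact pipeline.
\begin{verbatim}
import numpy as np
import pyphi

def empirical_tpm(states):
    # State-by-node TPM estimated from a binary time series of shape
    # (T, n); rows are indexed in PyPhi's little-endian state order.
    # States never visited default to 0.5 (an arbitrary choice).
    T, n = states.shape
    counts = np.zeros((2 ** n, n))
    visits = np.zeros(2 ** n)
    for t in range(T - 1):
        idx = int(np.dot(states[t], 2 ** np.arange(n)))
        counts[idx] += states[t + 1]
        visits[idx] += 1
    tpm = np.full((2 ** n, n), 0.5)
    tpm[visits > 0] = counts[visits > 0] / visits[visits > 0][:, None]
    return tpm

n = 4                                        # school size
cm = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)  # no self-loops
states = np.random.randint(0, 2, (1000, n))  # placeholder time series

pyphi.config.CUT_ONE_APPROXIMATION = True
network = pyphi.Network(empirical_tpm(states), cm=cm)
state = tuple(int(s) for s in states[-1])
subsystem = pyphi.Subsystem(network, state, range(n))
print(pyphi.compute.phi(subsystem))       # integrated information
print(len(pyphi.compute.ces(subsystem)))  # number of concepts
\end{verbatim}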
\subsection{Experimental Settings}
We studied {\it ayus} ({\it Plecoglossus altivelis}), also known as sweetfish, which live throughout Japan and are widely farmed there. Juvenile {\it ayus} (approximately 7-14 {\it cm} in body length) display typical schooling behaviour, though adult {\it ayus} tend to show territorial behaviour in environments where fish density is low. We purchased juveniles from Tarumiyoushoku (Kasumigaura, Ibaraki, Japan) and housed them in a controlled laboratory. Approximately 150 fish lived in a 0.8 $m^3$ tank of continuously filtered and recycled fresh water with a temperature maintained at 16.4${}^\circ$C, and were fed commercial food pellets. Immediately before each experiment was conducted, randomly chosen fish were separated to form a school of each size and were moved to an experimental arena without pre-training. The experimental arena consisted of a $3 \times 3\ m^2$ shallow white tank. The water depth was approximately 15 $cm$ so that schools would be approximately 2D. The fish were recorded with an overhead grey-scale video camera (Library GE 60; Library Co. Ltd., Tokyo, Japan) at a spatial resolution of $640 \times 480$ pixels and a temporal resolution of 120 frames per second.
\subsection{The definition of ON and OFF state for each parameter}
We define a function for each parameter that returns either 0 (OFF) or 1 (ON) for given input values. Generally, we denote a function as $F_{i}^{t}(\cdot)$, where $F$ is the name of the function, $i$ is the index of the individual and $t$ is the time. The arguments of the function can be either the position vectors $\bm{x}_i(t)$ or the velocity vectors $\bm{v}_i(t)$ of each individual at time $t$. In general, the dimensions of these vectors are $d\leq 3$; the experimental setup used here gives $d=2$. The number of individuals is $n$.
\subsubsection{Local parameters}
\begin{itemize}
\item Distance function $D_{i}^{t}(\bm{x}_{1}(t), \bm{x}_{2}(t), \cdots, \bm{x}_{n}(t))$: $\mathbb{R}^{d} \times \mathbb{R}^{d} \times \cdots \times \mathbb{R}^{d} \xrightarrow{} \{ 0, 1 \}$
For each individual $i$ we obtain a set $S_{i}^{t}= \{j | d(\bm{x}_{i}(t), \bm{x}_{j}(t)) < \zeta, j \neq i \}$ of all other individuals within a specified distance $\zeta$. Here $d(\bm{x}, \bm{y})$ gives the Euclidean distance between $\bm{x}$ and $\bm{y}$. Then, $D_{i}^{t}(\bm{x}_{1}(t), \bm{x}_{2}(t), \ldots, \bm{x}_{n}(t)) = 1$ when $|S_{i}^{t}|>0$ and is 0 otherwise, where $|S|$ denotes the number of elements of a set $S$.
\item Blind sight function $B_{i}^{t}(\bm{v}_{1}(t), \bm{v}_{2}(t), \cdots, \bm{v}_{n}(t)):\mathbb{R}^{d} \times \mathbb{R}^{d} \times \cdots \times \mathbb{R}^{d} \xrightarrow{} \{ 0, 1 \}$
For each individual we form the set $O_{i}^{t} = \{j|$ arg($\bm{v}_{i}(t)$, $\bm{v}_{j}(t)$) $< \eta$, $j \neq i \}$ of all other individuals whose velocity vectors point in a direction within an angle $\eta$ of that of the focal individual. The function arg($\bm{v}_{1}(t)$, $\bm{v}_{2}(t)$) gives the angle between two vectors. Then, $B_{i}^{t}(\bm{v}_{1}(t), \bm{v}_{2}(t), \cdots, \bm{v}_{n}(t)) = 1$ when $|O_{i}^{t}| > 0$ and is 0 otherwise.
\item Turning rate function $T_{i}^{t}(\bm{v}_{i}(t), \bm{v}_{i}(t-\Delta t)):\mathbb{R}^{d} \times \mathbb{R}^{d} \xrightarrow{} \{ 0, 1 \}$
The turning rate function returns 1 when an individual's turning rate exceeds a specified threshold $\delta$. That is, $T_{i}^{t}(\bm{v}_{i}(t), \bm{v}_{i}(t-\Delta t))= 1$ when arg($\bm{v}_{i}(t)$, $\bm{v}_{i}(t-\Delta t)) \geq \delta$ and is 0 otherwise. The time step used in this paper is $\Delta t = 0.05$, $0.1$ or $0.2$ s.
To obtain the states of the fish school, we take a conjunction of these results, that is, $D_{i}^{t}(\bm{x}_{1}(t), \bm{x}_{2}(t), \cdots, \bm{x}_{n}(t)) \wedge B_{i}^{t}(\bm{v}_{1}(t), \bm{v}_{2}(t), \cdots, \bm{v}_{n}(t)) \wedge T_{i}^{t}(\bm{v}_{i}(t), \bm{v}_{i}(t-\Delta t))$ for each individual $i$. The conjunction is given as $\wedge : \{ 0, 1 \}^2 \xrightarrow{} \{ 0, 1 \}$ where $1 \wedge 1 = 1$ and is 0 otherwise. Thus the state of each individual $i$ at time $t$ is $s_{i}(t; \zeta,\eta, \delta) \in \{0,1\}$, which depends on the triplet of parameter values $(\zeta,\eta, \delta)$. The state of the school at time $t$ is then a vector $\bm{s}(t) = (s_1(t), s_2(t), \ldots, s_n(t)) \in \{0,1\}^n$, where the parameter dependence has been omitted for simplicity. (A numerical sketch of these local state functions is given after this list.)
\end{itemize}
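As an illustration, the following minimal sketch implements the three local state functions above, assuming positions and velocities are stored as $(n, d)$ NumPy arrays; all function and variable names are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def angle(u, v):
    # Angle between two vectors, in radians.
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def local_state(i, x, v, v_prev, zeta, eta, delta):
    # s_i(t) = D AND B AND T for fish i, following the definitions
    # above; x, v are (n, d) arrays at time t, v_prev at t - dt.
    n = len(x)
    others = [j for j in range(n) if j != i]
    D = any(np.linalg.norm(x[i] - x[j]) < zeta for j in others)
    B = any(angle(v[i], v[j]) < eta for j in others)
    T = angle(v[i], v_prev[i]) >= delta
    return int(D and B and T)
\end{verbatim}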
\subsubsection{Global parameters}
\begin{itemize}
\item Average direction function
$Avd_{i}^{t}(\bm{V}(t), \bm{v}_{i}(t)):\mathbb{R}^{d} \times \mathbb{R}^{d} \xrightarrow{} \{ 0, 1 \}$
$\bm{V}(t)$ is the average of $\{ \bm{v}_{1}(t), \bm{v}_{2}(t), ..., \bm{v}_{n}(t)\}$. If an individual's direction of motion deviates from the average by more than a threshold amount $\Theta$ then the individual is in the OFF state: that is, $Avd_{i}^{t}(\bm{V}(t), \bm{v}_{i}(t)) = 1$ when arg($\bm{V}(t)$, $\bm{v}_{i}(t)) \leq \Theta$, and is 0 otherwise.
\item Centre of mass function
$Com_{i}^{t}(\bm{X}(t), \bm{x}_{i}(t)):\mathbb{R}^{d} \times \mathbb{R}^{d} \xrightarrow{} \{ 0, 1 \}$
$\bm{X}(t)$ is the average of $\{ \bm{x}_{1}(t), \bm{x}_{2}(t), \cdots, \bm{x}_{n}(t)\}$. If an individual is further from $\bm{X}(t)$ than a specified threshold $\Omega$ then the individual is in the OFF state: that is, $Com_{i}^{t}(\bm{X}(t), \bm{x}_{i}(t)) = 1$ when $d(\bm{X}(t)$, $\bm{x}_{i}(t)) \leq \Omega$ and is 0 otherwise.
To obtain the state of the fish school, we take a conjunction of these results to obtain a state for each individual which depends on the pair $(\Theta,\Omega)$: $s_i(t; \Theta, \Omega) = Avd_{i}^{t}(\bm{V}(t), \bm{v}_{i}(t)) \wedge Com_{i}^{t}(\bm{X}(t),\bm{x}_{i}(t)) \in \{0,1\}$. The state of the school at time $t$ is then a vector $\bm{s}(t) = (s_1(t), s_2(t), \ldots, s_n(t)) \in \{0,1\}^n$, where the parameter dependence has been omitted for simplicity. (A numerical sketch of these global state functions follows this list.)
\end{itemize}
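The corresponding sketch for the global state functions, reusing the angle() helper from the previous sketch, could read as follows (again, an illustrative sketch rather than the authors' code):
\begin{verbatim}
def global_state(i, x, v, Theta, Omega):
    # s_i(t) = Avd AND Com for fish i; x, v are (n, d) arrays at
    # time t; angle() is the helper defined in the previous sketch.
    V = v.mean(axis=0)  # average direction of the school
    X = x.mean(axis=0)  # centre of mass
    Avd = angle(V, v[i]) <= Theta
    Com = np.linalg.norm(X - x[i]) <= Omega
    return int(Avd and Com)
\end{verbatim}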
\bibliographystyle{unsrt}
\section{Introduction}
Quantum field theory predicts that a detector accelerating in empty Minkowski space shall observe a particle bath with a spectrum dependent on the proper acceleration
of the detector. In particular, if the motion is linear with constant proper acceleration, the particle bath is thermal with a temperature proportional to the acceleration \cite{Unruh1,Crispino}. This extremely minute physical phenomenon is called the Unruh effect. Despite being difficult to detect directly, the effect could prove to be significant in various scenarios such as centripetal acceleration in rotating frames \cite{UnruhRotatingElectrons}. Moreover, there exist several proposals for observing and simulating the Unruh effect in laboratory conditions \cite{SSH,RCPR, VanMat, MFM,PenSud,Cozzella,Jin}.
Since it has not been detected directly, its very existence and meaning have also been questioned \cite{FordOC,MatVan}. From the theoretical point of view, the Unruh effect is also closely related to Hawking radiation (for a detailed discussion on the subject see Ref. \cite{Crispino}).
Since a constantly accelerated detector experiences an effective thermal background, it is possible to model it as a two-level system interacting with a bosonic environment with a Planckian spectrum. This model has been studied extensively within the framework of open quantum systems theory, both invoking the Born-Markov approximation \cite{YuZhang,Benatti}, and in more general non-Markovian settings \cite{RavalHuKoks, LinHu, MoustosAnastopoulos}. In all these previous works, both Markovian and non-Markovian, an eternally and constantly accelerating Unruh-DeWitt detector is considered.
In this paper we focus on the more realistic case of a finite-size detector starting its constant acceleration at a finite time, while still considering weak coupling between the detector and the field.
The master equation describing the dynamics of the detector in this situation becomes a time-local master equation with time-dependent decay rates which may take temporarily negative values. This time-local structure highlights the departure from the Markovian semigroup dynamics described by the well-known Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation. However, our approach differs from Refs. \cite{RavalHuKoks, LinHu, MoustosAnastopoulos} for two distinct reasons. First, the time-dependent decay rates are always directly dependent on the detector's trajectory, which in our case is different from the standard eternally accelerated case considered in Refs. \cite{RavalHuKoks, LinHu, MoustosAnastopoulos}. Second, we use a modified Wightman function to take into account the detector's profile, as proposed in Ref. \cite{ASatz2}.
During the last decade, a new paradigm in the description of open quantum systems has emerged. Specifically, a formal and rigorous information-theoretical approach was introduced and used to define Markovian and non-Markovian dynamics in order to give a clear physical interpretation, as well as an operational definition, to memory effects \cite{BLP,NMDOQS,ReviewRHP,ReviewLHW,ReviewJ}. Markovian dynamics is characterized by a continuous and monotonic loss of information from the open system to the environment while non-Markovian dynamics occurs when part of the information previously lost into the environment comes back due to memory effects, namely information backflow occurs.
For the system studied in this paper, the time-dependent decay rates appearing in the master equation are obtained from the underlying microscopic Hamiltonian model of system (detector) plus environment (quantum field). Such coefficients are directly linked to the trajectory of the detector in Minkowski space. Interestingly, we have identified the relevant physical parameter ruling the appearance of information backflow and shown under which condition memory effects may occur. This provides new physical insight into the understanding of the Unruh effect and paves the way to the exploration of relativistic quantum phenomena in terms of quantum information exchange between system and environment.
The structure of the paper is as follows. In Sec. II we review the concept of information backflow and how it is related to memory effects and non-Markovian dynamics. In Sec. III we present our results, namely, (i) we discuss the form of the time-local master equation obtained in the weak coupling limit for a finite-size detector which starts to accelerate at $t=0$; (ii) we study the presence or absence of information backflow and its interpretation; and (iii) we investigate the regions of validity of our approximated master equation by looking at the CP conditions. Finally, in Sec. IV we discuss our results and present conclusions.
\section{Non-Markovianity and information backflow}
The concept of a Markovian or non-Markovian stochastic process has a clear and rigorous formulation in the classical domain \cite{breuer-2002}. The extension to quantum processes, however, is not straightforward. Open quantum systems, indeed, may display dynamical features which do not have a classical counterpart, such as recoherence, information trapping, entanglement sudden death and revivals, and so on. For this reason, the generalization of the definition of Markovian/non-Markovian processes from classical to quantum is still the subject of an intense debate (for reviews see Refs. \cite{NMDOQS,ReviewRHP,ReviewLHW,ReviewJ}). Generally speaking, there are two approaches to the definition of quantum non-Markovianity. The first one focuses on the properties of the master equation or the corresponding dynamical map, while the second one emphasizes the need of a more physical approach, identifying memory effects with the occurrence of information backflow. The latter approach does not require knowledge of the explicit form of either the master equation or the dynamical map, and has been pioneered by Breuer, Laine, and Piilo (BLP), who introduced the now famous BLP non-Markovianity measure \cite{BLP}. In the following we review both perspectives and recall their connection.
\subsection{Non-Markovianity as nondivisibility}
Historically, Markovian open quantum dynamics was identified with the GKSL form of the master equation and was extensively used due to its powerful property of guaranteeing complete positivity (CP), and hence physicality, of the density matrix at all times. A straightforward extension of the GKSL theorem \cite{Lindblad,GKS} to time-local master equations identifies Markovian and non-Markovian dynamics with the properties of the dynamical map $\Phi_\tau: \rho(\tau)=\Phi_\tau \rho(0)$ characterizing the open system evolution. More precisely, the dynamics is said to be Markovian whenever the dynamical map possesses the property of being CP divisible, namely whenever the propagator $V_{\tau,s}$, defined by $\Phi_\tau = V_{\tau,s} \Phi_s$, is CP \cite{Rivas}. This occurs iff the time-dependent decay rates appearing in the master equation are positive at all times $\tau$. On the contrary, non-Markovian dynamics occurs when the dynamical map $\Phi_\tau$ is not CP divisible. This is signaled by the fact that at least one of the time-dependent decay rates of the master equation attains negative values for certain time intervals.
\subsection{Non-Markovianity as information backflow}
The evolution of a quantum system interacting with its surrounding environment, be it classical or quantum, relativistic or nonrelativistic, can be described in terms of exchange of energy and/or information between the two interacting parties. While the concept of energy is uniquely defined in quantum systems, a unique definition of information is lacking. Indeed, in principle, there are a number of useful and rigorous choices for quantifying information, and hence information flow, and such choices obviously depend on which "type" of information one is interested in. Quantum information theory deals with the study of information quantifiers, their properties, their dynamics, and their usefulness in quantum computation, communication, metrology and sensing.
The first attempt to quantify system-environment information flow, and connect it to the Markovian or non-Markovian nature of the dynamics was based on the concept of trace distance between two states $\rho_1$ and $\rho_2$ of an open system,
\begin{equation}
D(\rho_1,\rho_2) = \tfrac{1}{2} \text{tr}\vert \rho_1-\rho_2 \vert .
\end{equation}
The trace distance is invariant under unitary transformations and contractive for CP dynamical maps, i.e., given two initial open-system states $\rho_1(0)$ and $\rho_2(0)$, the trace distance between the time-evolved states never exceeds its initial value: $D[\rho_1(t),\rho_2(t)] \leq D[\rho_1(0),\rho_2(0)]$.
Trace distance is a measure of information content of the open quantum system since it is simply related to the maximum probability $P_D$ to distinguish two quantum states in a single-shot experiment, namely $P_D=\frac{1}{2}[1+D(\rho_1,\rho_2)]$ \cite{NielsenChuang}. Therefore, an increase in trace distance signals an increase in our information about which one of the two possible states the system is in. Following Ref. \cite{BLP}, one can define information flow as the derivative of trace distance as follows:
\begin{equation}
\sigma(t)= \frac{d}{dt} D[\rho_1(t),\rho_2(t)].
\end{equation}
Even though the trace distance cannot increase under CP maps, it need not behave monotonically as a function of time. Specifically, whenever the trace distance decreases monotonically, the information flow is negative, meaning that the system continuously loses information due to the presence of the environment. On the other hand, if for certain time intervals the information flow becomes positive, then this signals a partial and temporary increase of distinguishability and, correspondingly, a partial recovery of information. This information backflow has been proposed as the physical manifestation of memory effects and non-Markovianity. This idea is known as BLP non-Markovianity.
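As a concrete illustration, the short sketch below computes the trace distance via the eigenvalues of the Hermitian difference of two density matrices and estimates the information flow $\sigma(t)$ by a finite-difference derivative; it is a generic numerical sketch, not code from this work.
\begin{verbatim}
import numpy as np

def trace_distance(rho1, rho2):
    # D = (1/2) tr|rho1 - rho2|; the trace norm of a Hermitian
    # matrix is the sum of the absolute values of its eigenvalues.
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho1 - rho2)))

def information_flow(rho1_t, rho2_t, dt):
    # sigma(t) = dD/dt along two evolved trajectories, given as
    # lists of density matrices sampled with spacing dt.
    D = np.array([trace_distance(a, b)
                  for a, b in zip(rho1_t, rho2_t)])
    return np.gradient(D, dt)
\end{verbatim}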
Note that, whenever the dynamical map is BLP non-Markovian, i.e., in the presence of information backflow, it is also CP nondivisible. However, the converse is not true, namely, there exist systems that are CP nondivisible but BLP Markovian. In general, the concept of nondivisibility and the concept of BLP non-Markovianity, or information backflow quantified by trace distance, do not coincide and their relationship has been the subject of numerous studies (see, e.g., Refs. \cite{ReviewLHW,ReviewJ} for reviews).
\subsection{Connection between nondivisibility and information backflow}
The difference between the concept of CP divisibility and the concept of memory effects due to information backflow, as signaled by an increase of distinguishability, can be overcome if one allows for a more general definition of distinguishability between states. More precisely, distinguishability based on the trace distance relies on the idea of equal probabilities of preparing the two states, i.e., the preparation is uniformly random and there is no prior additional information on which one of the two states is prepared. One can, however, generalize this concept by introducing the Helstrom matrix $\Delta$,
\begin{equation}
\Delta=p_1 \rho_1 - p_2 \rho_2
\end{equation}
where $p_1$ and $p_2$ are the prior probabilities of the corresponding states. The information interpretation in terms of the one-shot two-state discrimination problem is valid also in this more general setting \cite{DarekRivas}.
In more detail, one now considers two states and their corresponding ancilla evolving under the completely positive, trace preserving dynamical map $\Phi_{\tau}$ as follows
\begin{eqnarray}
\tilde{\rho}_{1,2}(t)= (\Phi_{\tau}\otimes {\cal I}_d) \tilde{\rho}_{1,2}(0),
\end{eqnarray}
with $ \tilde{\rho}_{1,2}$ the combined system-ancilla state, ${\cal I}_d$ the identity map, and $d$ the dimension of the Hilbert space of the system, which in this case is equal to the one of the ancilla.
It has been recently shown in Ref. \cite{DarekRivas} that, for bijective maps, the trace norm of the Helstrom matrix, defined as
\begin{eqnarray}
E(t) = |\Delta(t)| = |p_1 \tilde{\rho}_1(t) - p_2 \tilde{\rho}_2(t)|,
\end{eqnarray}
is monotonically decreasing iff the map is CP divisible. This result has been generalized to nonbijective maps in Ref. \cite{DarekPRL2018}.
This allows one to interpret lack of CP divisibility in terms of information backflow for system and ancilla, when having prior information on the state of the system, or in our case of the detector.
Finally, one can relax the assumption of prior information and prove that, if one uses a $(d+1)$-dimensional ancilla, then the dynamical map $\Phi_{\tau}$ is CP divisible if and only if the trace distance $D$ decreases or remains constant as a function of time for all pairs of initial system-ancilla states \cite{Bogna}. Therefore, also in this case, one can interpret the loss of CP divisibility in terms of information backflow for the system-ancilla pair. For further details on the connection between CP divisibility and information backflow we refer the reader to the recent perspective article \cite{ReviewJ}.
In this paper we will apply these approaches to our physical system, and study memory effects and information backflow by looking at the time evolution of the time-dependent decay rates defined by Eq. (\ref{eq:decayrates}). We note that, for the form of master equation considered in this paper, the behavior of the decay rates can be directly connected to the presence or absence of BLP non-Markovianity, and of several other non-Markovianity indicators based on the behavior of other quantifiers of information, as demonstrated by some of the authors of this paper in Ref. \cite{Jose}. Specifically, BLP non-Markovianity can be inferred from the violation of certain sets of inequalities involving the decay rates \cite{Jose}. We will use these results in the discussions that follow.
\section{Results}
\subsection{The master equation}
In Ref. \cite{Benatti} a microscopic derivation of the master equation describing the dynamics of a two-level detector weakly interacting with a scalar field in the Minkowski vacuum was presented. The derivation relies on the standard Born-Markov approximation \cite{breuer-2002}. The authors consider an eternally and uniformly accelerated detector parametrized by its proper time, i.e., following the well-known hyperbolic path \cite{Unruh1}. Here we relax this unrealistic assumption and consider instead a different trajectory in Minkowski space, assuming that the detector is inertial until a certain time, after which it experiences a uniform acceleration. Under these conditions the environment correlation function is no longer time-translation invariant, and this leads to decay rates which are now time dependent. Moreover, we generalize the description of the detector from pointlike to finite size. We show in the appendix that, with these generalizations, following the same lines as Refs. \cite{Benatti} and \cite{breuer-2002}, the master equation describing the dynamics of the detector takes the form $\dot{\rho} = -i [H_{\mathrm{eff}}, \rho] + \mathcal{L}(\rho)$, where the dissipator $\mathcal{L}$, in the instantaneous rest frame of the detector, is given by
\begin{equation}\label{eq:meLank}
\mathcal{L}(\rho) = \frac{\gamma_1(\tau)}{2} L_1(\rho) + \frac{\gamma_2(\tau)}{2} L_2(\rho) + \frac{\gamma_3(\tau)}{2} L_3(\rho),
\end{equation}
and where the effective Hamiltonian is $H_{\mathrm{eff}} = \omega \sigma_z/2 + \Omega (\tau)$, with $\Omega(\tau)$ a generally time-dependent renormalized frequency.
The dissipator is given by the sum of three terms, $L_i(\rho)$, describing, in order, heating, dissipation and dephasing, and having the following form
\begin{equation}
\begin{split}
L_1(\rho) &= \sigma_+ \rho \sigma_- - \frac{1}{2} \left\{ \sigma_- \sigma_+ ,\rho \right\} \\
L_2(\rho) &= \sigma_- \rho \sigma_+ - \frac{1}{2} \left\{ \sigma_+ \sigma_- ,\rho \right\} \\
L_3(\rho) &= \sigma_z \rho \sigma_z - \rho. \\
\end{split}
\end{equation}
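For reference, a minimal numerical sketch of this dissipator acting on a qubit density matrix, with the rates $\gamma_i(\tau)$ defined below supplied as numbers, could look as follows; the helper names are ours.
\begin{verbatim}
import numpy as np

sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma_+
sm = sp.conj().T                                # sigma_-
sz = np.diag([1.0 + 0j, -1.0])                  # sigma_z

def dissipator(rho, g1, g2, g3):
    # L(rho) = (g1/2) L1 + (g2/2) L2 + (g3/2) L3 as defined above.
    def lind(A, r):
        AdA = A.conj().T @ A
        return A @ r @ A.conj().T - 0.5 * (AdA @ r + r @ AdA)
    return (0.5 * g1 * lind(sp, rho)
            + 0.5 * g2 * lind(sm, rho)
            + 0.5 * g3 * (sz @ rho @ sz - rho))
\end{verbatim}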
The coefficients $\gamma_1(\tau)$, $\gamma_2(\tau)$ and $\gamma_3(\tau)$ are the absorption, emission and dephasing rates, respectively, with an implicit $\omega$ dependence. They are simply related to the proper-time ($\tau$) derivative of the correlation function $F_\tau(\omega)$ through the equations
\begin{equation} \label{eq:decayrates}
\gamma_1(\tau) = 4 \dot{F}_\tau(-\omega),\
\gamma_2(\tau) = 4 \dot{F}_\tau(\omega),\
\gamma_3(\tau) = 2 \dot{F}_\tau(0).
\end{equation}
Note that in this paper we use units $c=\hbar=1$ and Minkowski spacetime signature (+,\,-,\,-,\,-).
For any detector the correlation function is related to the Wightman function
$W(\tau,\tau')= \langle \phi (\mathtt{x}(\tau ))\phi (\mathtt{x}(\tau'))\rangle$ on the detector worldline $\mathtt{x}(\tau)$
as follows \cite{BirrellDavies}:
\begin{equation}
F_{\tau}(\omega) = \int_{\tau_0}^\tau d \tau' \int_{\tau_0}^\tau d \tau'' e^{- i \omega(\tau' - \tau'')} W(\tau',\tau''),
\end{equation}
where $\phi(\mathtt{x})$ is a massless scalar field at Minkowski space point $\mathtt{x} = (t,x,y,z)$. Hence, the proper time derivative $\dot{F}_\tau(\omega)$, for an always-on detector, i.e., for $\tau_0 \rightarrow -\infty$, in its rest frame, reads as
\begin{equation}
\dot{F}_\tau(\omega) = 2 \int_0^{\infty} ds \Re\left( e^{-i\omega s} W(\tau, \tau - s)\right).
\end{equation}
The Wightman function is most easily calculated for a pointlike detector. However, a pointlike detector is not physically realistic and leads to problems, e.g., with Lorentz invariance \cite{Schlicht,Satz}.
These problems can be circumvented by assuming that the detector has a finite size instead of being pointlike. The spatial shape of the detector can be defined by the Lorentzian smearing function given in terms of the Fermi coordinates {\boldmath{${\xi}$}} (momentarily normal coordinates) \cite{Schlicht} as
\begin{equation}
f(\text{\boldmath$\xi$})=\frac{1}{\pi^2} \frac{\epsilon^2}{\left( |\text{\boldmath$\xi$}|^2 + \epsilon^2 \right)^2},
\end{equation}
but the detector profile is eventually irrelevant at least if it satisfies some smoothness conditions \cite{ASatz2}. Following the same reference, the transition rate for a pointlike always-on detector is given by
\begin{equation}\label{eq:transrategeneral}
\dot{F}_\tau(\omega) = - \frac{\omega}{4 \pi} + \frac{1}{2 \pi^2}\int_0^\infty \mathrm{d}s \left( \frac{\cos(\omega s)}{(\Delta \mathtt{x})^2} + \frac{1}{s^2}\right),
\end{equation}
while the transition rate for a finite-size detector of characteristic size $\epsilon$ is obtained from the integral expression for $\dot{F}_\tau(\omega)$ above with
\begin{equation}\label{eq:W2}
W(\tau,\tau') = \frac{-1/4\pi^2}{\left( \mathtt{x}(\tau) - \mathtt{x}(\tau') - i \epsilon \left( \dot {\mathtt{x}}(\tau ) - \dot {\mathtt{x}}(\tau') \right)\right)^2},
\end{equation}
where $\Delta \mathtt{x} := \mathtt{x}(\tau)-\mathtt{x}(\tau-s)$.
This finite-size correlator is more physical and appears to have much more regular properties; it is therefore used in our study.
In this paper we consider a detector at rest for $\tau \leq 0$ and uniformly accelerated for $\tau > 0$, following the path given by
\begin{equation}
\begin{split}
t(\tau) &= \theta(-\tau) \tau + \theta (\tau) \alpha \sinh \left( \frac{\tau}{\alpha} \right),\\
x(\tau) &= \alpha \theta(-\tau) + \alpha \theta(\tau) \cosh \left( \frac{\tau}{\alpha} \right),\\
y(\tau) &= z(\tau)=0,
\end{split}
\end{equation}
where the proper acceleration experienced by the detector is $1/\alpha$, and $\theta(\tau)$ is the Heaviside step function.
These more realistic assumptions allow us to perform calculations and obtain explicit expressions for the decay rates. By inserting Eq. (\ref{eq:W2}) and the path into Eq. (\ref{eq:transrategeneral}) we obtain
\begin{equation} \label{eq:impo1}
\begin{split}
2 \pi \alpha \dot{F}_{\bar{\tau}}(\bar{\omega}) =& \frac{\bar{\omega}}{e^{2 \pi \bar{\omega}} - 1} + \Delta \dot{F}_{\bar{\tau}}(\bar{\omega}) \\
\equiv& \frac{\bar{\omega}}{e^{2 \pi \bar{\omega}} - 1}\\
& + \frac{1}{\pi} \int_{\bar{\tau}}^\infty \mathrm{d}\bar{s} \cos(\bar{\omega} \bar{s}) \left( \frac{1}{\left( \Delta \mathtt{x}\right)^2_> } - \frac{1}{\left( \Delta \mathtt{x}\right)^2_< }\right),
\end{split}
\end{equation}
where
\begin{equation} \label{eq:impo2}
\begin{split}
\left(\Delta \mathtt{x}\right)^2_> &:= -\left( \sinh(\bar{\tau}) - (\bar{\tau} - \bar{s})\right)^2 + \left( \cosh (\bar{\tau}) - 1\right)^2 \\
\left(\Delta \mathtt{x}\right)^2_< &:= -4 \sinh^2(\bar{s}/2),
\end{split}
\end{equation}
with $\bar{\omega} = \omega \alpha$,
$\bar{\tau} = \frac{\tau}{\alpha}$ and $\bar{s} = \frac{s}{\alpha}$.
For negative times $\bar\tau<0$ the rate of an inertial detector, $\dot{F}_{\bar{\tau}}(\bar{\omega})= -\frac{\omega}{2\pi}\theta(-\omega)$, is recovered, reflecting the fact that only emission can happen.
For positive times $\bar\tau>0$ the transition rate is the sum of the Planckian equilibrium part ${\bar{\omega}}/({e^{2 \pi \bar{\omega}} - 1})$ and a dynamical correction
$\Delta \dot{F}_{\bar{\tau}}(\bar{\omega})$ which tends to zero in the asymptotic limit $\bar\tau \rightarrow\infty$. In this limit we obtain the same Lindblad master equation as in Ref. \cite{Benatti}.
Equations (\ref{eq:impo1}) and (\ref{eq:impo2}) allow us to obtain the expression of the decay rates by means of Eq. (\ref{eq:decayrates}) and thus show their connection with the detector trajectory. We note that the behavior of the decay rates crucially depends on the scaled angular frequency $\bar\omega = \omega\alpha$, and hence on both the detector energy $\hbar \omega$ and the proper acceleration; in particular, for fixed $\omega$, larger values of $\bar \omega$ correspond to smaller proper acceleration, i.e., smaller deviation from the inertial case. Also, since the proper acceleration is proportional to the effective Unruh temperature $T_U$, $\bar{\omega}$ can be seen as the ratio between the detector energy and the effective bath thermal energy $k_B T_U$. We will see that this parameter rules the appearance of information backflow in the Unruh effect.
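As an illustration of how the rates can be evaluated in practice, the rough numerical sketch below integrates the dynamical correction of Eqs. (\ref{eq:impo1}) and (\ref{eq:impo2}) with a finite upper cutoff and assembles the rates via Eq. (\ref{eq:decayrates}); the cutoff, tolerances and function names are our choices, not those of the authors.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def delta_Fdot(tau_bar, w_bar, s_max=200.0):
    # Dynamical correction: (1/pi) times the integral over s_bar
    # from tau_bar to infinity, truncated here at s_max.
    def integrand(s):
        dx2_gt = (-(np.sinh(tau_bar) - (tau_bar - s)) ** 2
                  + (np.cosh(tau_bar) - 1.0) ** 2)
        dx2_lt = -4.0 * np.sinh(s / 2.0) ** 2
        return np.cos(w_bar * s) * (1.0 / dx2_gt - 1.0 / dx2_lt)
    val, _ = quad(integrand, tau_bar, s_max, limit=500)
    return val / np.pi

def rates(tau_bar, w_bar, alpha=1.0):
    # gamma_1 = 4 Fdot(-w), gamma_2 = 4 Fdot(w), gamma_3 = 2 Fdot(0),
    # with 2 pi alpha Fdot = w/(exp(2 pi w) - 1) + Delta Fdot.
    def Fdot(w):
        planck = (w / np.expm1(2.0 * np.pi * w)) if w != 0.0 \
                 else 1.0 / (2.0 * np.pi)  # limit w -> 0
        return (planck + delta_Fdot(tau_bar, w)) / (2.0 * np.pi * alpha)
    return 4 * Fdot(-w_bar), 4 * Fdot(w_bar), 2 * Fdot(0.0)
\end{verbatim}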
\begin{figure}
\begin{center}
\vspace{1cm}
\includegraphics[trim={52 0 0 0},clip,width=0.48\textwidth]{wightman8_plots_v118_gam1_050_020_005_gam3}
\caption{Absorption rate $\gamma_1(\bar{\tau})$ for $\bar{\omega} = 0.50\ {\rm (red)}, 0.20\ {\rm (green)}, 0.05\ {\rm (yellow)}$ and dephasing rate $\gamma_3(\bar{\tau})$. \label{fig:wightman8_plots_v118_gam1_050_020_005_gam3}}
\end{center}
\end{figure}
\subsection{Decay rates and information backflow}
In this section we analyze in detail the behavior of the time-dependent decay rates with the aim of understanding the time evolution of information exchange between system and environment. We recall that, if at least one of the coefficients becomes negative at some time, then the map is not CP divisible and therefore information flows back into the system-ancilla pair. However, the system can still be BLP Markovian, meaning that there is no information backflow into the system only, but information does return to a larger Hilbert space which includes an ancilla living in a Hilbert space of dimension $d$ (prior information on the state present) or $d+1$ (no prior information on the state present).
The dephasing rate can be calculated explicitly and has the form
\begin{equation}\label{eq:gamma3}
\pi \alpha \gamma_3(\bar{\tau}) = \frac{1}{2 \pi}\frac{\bar{\tau} - \sinh({\bar{\tau}})}{1 - \cosh({\bar{\tau}})}.
\end{equation}
From this equation we see that $\gamma_3(\bar{\tau})$ is always non-negative for our system. The absorption and emission rates, defined for $\bar{\omega} \neq 0$, require numerical approaches.
In Fig. \ref{fig:wightman8_plots_v118_gam1_050_020_005_gam3} we plot sample curves of the absorption and dephasing rates $\gamma_1(\bar\tau)$ and $\gamma_3(\bar\tau)$, scaled by the inverse acceleration factor $\alpha$. These examples illustrate our extensive numerical investigations, which show that these rates remain positive at all times.
The emission rate $\gamma_2(\bar{\tau})$ displays a more interesting temporal behavior, since it can attain negative values for $\bar{\omega} \ge 1$, as shown in Fig. \ref{fig:gamma2}. The parameter $\bar{\omega}$, therefore, controls the transition between CP divisibility and CP nondivisibility, with $\bar{\omega} \approx 1$ the transition value. In the intervals of time where $\gamma_2(\bar{\tau})$ is negative, the system-ancilla pair experiences information backflow and memory effects. This happens approximately when the detector energy becomes greater than the thermal energy of the effective bath, i.e., for small Unruh temperatures (or small proper accelerations).
\begin{figure}
\vspace{1cm}
\includegraphics[trim={48 0 0 0},clip,width=0.48\textwidth]{wightman8_plots_v118_gam2_09_10_16_40}
\caption{Emission rate $\gamma_2(\bar{\tau})$ for $\bar{\omega} = 0.9\ {\rm (blue)}, 1.0\ {\rm (yellow)}, 1.6\ {\rm (green)}, 4.0\ {\rm (red)}$, starting from the top, showing non-Markovian regions above the $\bar{\omega} \approx 1$ threshold. \label{fig:gamma2}}
\end{figure}
We now conclude our analysis by looking at the behavior of other non-Markovianity indicators. In Ref. \cite{Jose} we established conditions for detecting memory effects using a number of indicators common in the literature, including BLP non-Markovianity, by means of inequalities involving the decay rates. Since the numerical values of the emission rate are at all times much higher than those of the absorption rate, as seen from Eq. (\ref{eq:impo1}), the inequalities derived in Ref. \cite{Jose} allow us to conclude immediately that the BLP measure \cite{BLP}, the geometric measure \cite{geometric} and the relative entropy of coherence measure \cite{coherence} do not detect information backflow for any value of $\bar{\omega}$.
This is consistent with the fact that these three quantities are only indicators of CP nondivisibility; therefore they may not always detect violation of such property. In other words, in the framework of the system studied, information never returns to the detector only but it will return to a larger system formed by the detector, which interacts with the environment, and an ancilla which does not interact directly with the environment. The ancilla could physically represent, for example, other electronic levels of an atom, if the detector is actually a single atom, or more in general other degrees of freedom which are not explicitly taken into account in the two-state description of the detector and which are not explicitly coupled to the environment.
\subsection{Complete positivity}
We now explore the conditions for complete positivity of the time-local master equation for the Unruh effect discussed in this paper. This is particularly relevant since we know that when the decay rates become negative, and hence the dynamics non-Markovian, we cannot rely anymore on the GKSL theorem to guarantee physicality (i.e., complete positivity) of the solution of the master equation.
In Ref. \cite{Lankinen}, necessary and sufficient conditions for complete positivity for a master equation such as the one considered here have been derived. These conditions are expressed in terms of four inequalities involving the decay rates. By using these inequalities it is straightforward to see that, since in our case $\gamma_3(\tau) > 0$ (Eq. (\ref{eq:gamma3})) at all times, the condition $\tilde{\Gamma}(\tau) = \int_0^\tau ds \gamma_3(s) \geq 0$ is always satisfied. Therefore in our system the complete positivity conditions reduce to the simpler positivity conditions, given by
\begin{equation}\label{eq:cp_conditions}
\begin{split}
P_1(\tau) \equiv e^{-\Gamma(\tau)}\left[ G(\tau)+1\right] \in &\left[0,1\right] \\
P_0(\tau) \equiv e^{-\Gamma(\tau)} G(\tau) \in &\left[0,1\right],
\end{split}
\end{equation}
where
\begin{equation}
\begin{split}
\Gamma(\tau) &= \frac{1}{2} \int_0^\tau \mathrm{d}s \left( \gamma_1(s) + \gamma_2(s)\right) \\
G(\tau) &= \frac{1}{2} \int_0^\tau \mathrm{d}s e^{\Gamma(s)}\gamma_2(s).
\end{split}
\end{equation}
Moreover, $P_{0,1}(\tau)$ can be identified as the ground state probability with initial conditions $P(0)$ equal to 0 or 1, respectively. The positivity conditions of Eq. (\ref{eq:cp_conditions}) can be seen as upper and lower bounds to the ground state probability, respectively.
Taking the derivative of Eqs. (\ref{eq:cp_conditions}) with respect to $\tau$ we arrive at the same differential equation, with two different boundary values:
\begin{equation}\label{eq:cp_derivatives}
\begin{split}
P'_{1,0}(\tau) &= -P_{1,0}(\tau) \Gamma'(\tau)+\frac{1}{2} \gamma_2(\tau) \\
P_1(0) &= 1 \\
P_0(0) &= 0.
\end{split}
\end{equation}
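As an aside, Eq. (\ref{eq:cp_derivatives}) is straightforward to integrate numerically. The following minimal sketch (ours, for illustration) uses SciPy with constant placeholder rates mimicking the Markovian asymptotics, with $\gamma_2\gg\gamma_1$ as observed in our numerics; the genuine time-dependent rates would simply replace the two lambdas.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

g1 = lambda tau: 0.2   # absorption rate (placeholder constant)
g2 = lambda tau: 1.0   # emission rate (placeholder constant)

def rhs(tau, p):
    # P' = -P * Gamma' + gamma_2 / 2, with Gamma' = (gamma_1 + gamma_2) / 2
    return -p * 0.5 * (g1(tau) + g2(tau)) + 0.5 * g2(tau)

taus = np.linspace(0.0, 10.0, 201)
for p_init in (1.0, 0.0):          # boundary values of P_1 and P_0
    sol = solve_ivp(rhs, (0.0, 10.0), [p_init], t_eval=taus, rtol=1e-8)
    ok = np.all((sol.y[0] >= 0.0) & (sol.y[0] <= 1.0))
    print(f"P(0)={p_init}: CP bounds satisfied: {ok}, "
          f"asymptote ~ {sol.y[0][-1]:.3f}")
\end{verbatim}
With these constant rates both solutions indeed approach the common value $\gamma_2/(\gamma_1+\gamma_2)$, in line with the asymptotic behavior discussed below.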
The upper bounds $P_{0,1}(\tau)\leq 1$ can be studied using Eq. (\ref{eq:cp_derivatives}):
\begin{equation}
\begin{split}
P'_1(0) &= - \frac{1}{2} \gamma_1(0) < 0 \\
P'_0(0) &= \frac{1}{2} \gamma_2(0) > 0,
\end{split}
\end{equation}
as $\gamma_{1,2}(0) = \dot{F}_P (\mp \omega) > 0$, where $\dot F_P(\omega)$ is the Planckian spectrum. Thus, $P_1(\tau)$ is equal to 1 and decreasing at $\tau=0$, while $P_0(\tau)$ is equal to 0 and increasing at $\tau=0$. Also, both $P_{0}(\tau)$ and $P_{1}(\tau)$ tend to the same finite asymptotic value in $(0,1)$ as $\tau \rightarrow \infty$, because both $\Gamma'$ and $\gamma_2$ have constant positive asymptotic limits.
Suppose now that $P_{0}(\tau)$ or $P_{1}(\tau)$ is increasing at some time $\tau_1>0$ at which it reaches the value 1, so that it would violate the complete positivity upper bound for $\tau > \tau_1$. At $\tau = \tau_1$, Eq. \eqref{eq:cp_derivatives} reduces to
\begin{equation}
\begin{split}
P'_{0,1} (\tau_1) &= - \Gamma'(\tau_1) + \frac{1}{2} \gamma_2(\tau_1) \\
&= -\frac{1}{2} (\gamma_1(\tau_1) + \gamma_2(\tau_1)) + \frac{1}{2} \gamma_2(\tau_1) \\
&= - \frac{1}{2} \gamma_1(\tau_1).
\end{split}
\end{equation}
However, the numerical evidence [see, e.g., Fig. \ref{fig:wightman8_plots_v118_gam1_050_020_005_gam3}]
indicates that $\gamma_1(\tau) > 0 \ \forall \tau > 0$, i.e., the function $P_{0,1}(\tau)$ decreases at the point $\tau_1$ where its value is 1, which contradicts the assumption that the function is increasing.
Therefore neither function $P_{0}(\tau)$ nor $P_{1}(\tau)$ can reach the value 1 for any positive time. Thus, $\forall \tau\geq 0$ we have $P_{0,1}(\tau)\leq 1$, and the upper bounds of the complete positivity conditions are satisfied.
The lower bounds can only be studied numerically. Fortunately, because $P_1 (\tau) > P_0 (\tau)$, only the condition $P_0(\tau )\geq 0$ is relevant. In Fig. \ref{fig:CP_mark} we show the dynamics of the ground state probabilities, i.e., the functions appearing in the conditions (\ref{eq:cp_conditions}), for some values of
$\bar{\omega}$. At first sight it seems that the dynamics are completely positive for all times and all considered values of $\bar{\omega}$. However, for parameter values $\bar{\omega}>1.0$, where the decay rate $\gamma_2 (\tau )$ already exhibits nonpositivity, numerical investigations reveal that the CP condition is violated, i.e.\ $P_0(\bar{\tau})<0$, when $\bar\omega\gtrsim 1.53$ (Fig. \ref{fig:CP_G}). This indicates the breakdown of the approximations used in the derivation of the master equation.
\subsection{Reversed path}
A similar approach to the one in the sections above can be applied directly to the case in which the detector decelerates from infinity to rest at a constant negative acceleration rate. This yields the same equation as (\ref{eq:impo1}), now with
\begin{equation}
\begin{split}
(\Delta x)^2_{>} =& -\left( \bar{\tau} - \sinh \left( \bar{\tau} - \bar{s}\right) \right)^2 \\
& + \left( 1 - \cosh \left( \bar{\tau} - \bar{s}\right) \right)^2, \\
(\Delta x)^2_{<} =& \ \bar{s}^2.
\end{split}
\end{equation}
Further analysis, however, shows that the complete positivity conditions fail for all times $\tau > 0$. This is consistent with the fact that the derivation of the master equation with the Lindbladian dissipator in Eq. (\ref{eq:meLank}) assumes complete separability for the initial global state at $\tau = 0$, which is not the case if there has been interaction between the open system and a thermal, $T>0$, environment for all of $\tau < 0$.
\begin{figure}
\vspace{1cm}
\includegraphics[trim={50 0 0 0},clip,width=0.48\textwidth]{wightman8_plots_v220_p0p1}
\caption{$P_0(\bar{\tau})$ and $P_1(\bar{\tau})$. The ground state probabilities for $\bar{\omega}=0.05\ {\rm (blue)}, 0.2\ {\rm (yellow)},\, 1.53\ {\rm (red)}$. Dashed lines represent the Markovian behavior without the time-dependent $\Delta \dot F_{\bar\tau}(\bar\omega)$ contribution, corresponding to an eternally accelerated detector with ground state probability 0 or 1 at $\bar \tau =0$. \label{fig:CP_mark}}
\end{figure}
\begin{figure}
\begin{center}
\vspace{1cm}
\includegraphics[trim={50 0 0 0},clip,width=0.48\textwidth]{wightman8_plots_v220_p0z}
\caption{The ground state probabilities $P_0(\bar{\tau})$
for $\bar{\omega} = 1.20\ {\rm (light blue)}, 1.53\ {\rm (dark blue)}, 2.0\ {\rm (violet)}$ starting from top, where $P_0(\bar{\tau}) < 0$ indicates CP violation. Dashed lines represent the Markovian behavior without the time-dependent contribution.
\label{fig:CP_G}}
\end{center}
\end{figure}
\section{Discussion and Conclusions}
When considering the dynamics of the system under study it is worth recalling that, while the accelerated detector undergoes emission and absorption, an inertial detector does not undergo spontaneous excitations. Indeed, more elaborate calculations on the system show that the energy-momentum tensor describing the particle content of the space vanishes in any coordinate system, in particular in the inertial frame as well as in the rest frame of the accelerated detector \cite{BirrellDavies}. This simply means that the particles detected by the accelerated detector are not real but rather ``fictitious'' particles.
The source of energy for the excitation of the accelerating detector is, indeed, its direct coupling to the surrounding vacuum field \cite{Crispino, BirrellDavies, UnWa}. As the detector accelerates, it feels resistance, and work is done on it by the external system. This work not only accelerates the detector but also excites it: the part spent to overcome the resistance is converted into the thermal field affecting the noninertial detector. Thus the energy is not provided by any external particle field, but rather originates from the unspecified force keeping the detector in the state of accelerating motion.
In this paper we show that, relaxing the assumptions of an eternally accelerated, pointlike detector, the dynamics may display memory effects and information backflow. The corresponding master equation is time-local, with time-dependent decay rates directly linked to the detector worldline. For small enough accelerations the detector keeps memory of the initial time when the acceleration began, and the time evolution becomes non-CP-divisible, displaying information backflow as defined in Ref. \cite{Bogna}. The same parameter ($\bar{\omega}$) which drives the crossover between the presence or absence of information backflow also controls the range of validity of the master equation, as shown by our study of the CP conditions.
Our results shed light on the dynamics of information exchange between the detector and its environment, and specifically on the occurrence of information backflow, in the framework of the Unruh effect. We believe that cross-fertilization between relativistic quantum field theory, open quantum system theory and quantum information theory, may pave the way to a better understanding of a number of open problems by introducing new tools, diverse approaches and original perspectives.
\section{Acknowledgements}
The authors acknowledge financial support from the Academy of Finland via the Centre of Excellence program (Project No. 312058) as well as Project No. 287750, and from the Finnish Academy of Science and Letters. J. L. thanks the University of Turku for hospitality in the early stage of this work. J. L. was supported in part by Science and Technology Facilities Council (Theory Consolidated Grant ST/P000703/1). B. S. thanks the Jenny and Antti Wihuri foundation for financial support.
\section{Appendix}
\subsection{Microscopic derivation of the master equation}
In the microscopic approach to open quantum systems dynamics we start by modeling the total closed system, whose Hilbert space is $\mathcal{H}_S \otimes \mathcal{H}_E$, by means of the microscopic Hamiltonian
\begin{eqnarray}
H=H_{S} \otimes {\rm I} _E +H_{E} \otimes {\rm I} _S +H_{I},
\end{eqnarray}
where $H_{S}$ and $H_{E}$ are the free Hamiltonians of the system and of the environment, respectively, and $H_{I}$ is the interaction term. The initial state of the total system is assumed to be separable, i.e. no correlations between system and environment are initially present.
As the total system is closed, we can write its unitary evolution as
\begin{equation}
\varrho_{SE} (\tau) = U(\tau) \, \varrho_{S} (0) \otimes \varrho_{E} \, U^{\dag} (\tau),
\end{equation}
with $U(\tau)= \exp [-i H \tau]$. If we now take the partial trace over the environment in the equation above, we have:
\begin{equation}
\begin{split}
\varrho_{S} (\tau) =& \, {\rm Tr}_E \{ U(\tau) \, \varrho_{S} (0) \otimes \varrho_{E} \, U^{\dag} (\tau)\} \\
\equiv & \, \Lambda_\tau \varrho_{S} (0) ,
\end{split}
\end{equation}
where $\Lambda_\tau$ is the dynamical map. In the following we will describe the assumptions that allow us, starting from a microscopic description of system plus environment, to derive a physically meaningful master equation.
Let us consider the dynamics of the overall density operator $\varrho_{SE}$ given by the von Neumann equation which, in units of $\hbar$ and in the interaction picture, reads as follows
\begin{equation}
\frac{d{\varrho}_{SE} (\tau)}{d\tau}=-i[H_{I}(\tau),\varrho_{SE} (\tau)], \label{eq:1}
\end{equation}
where we omit for simplicity of notation the subscript $I$ which we should use to indicate the density matrix in the interaction picture.
The integral form of this equation is
\begin{equation}
\varrho_{SE}(\tau)= \varrho_{SE}(0) - i \int_{0}^{\tau}ds [H_{I}(s),\varrho_{SE}(s)].
\label{eq:2}
\end{equation}
Inserting Eq. (\ref{eq:2}) into Eq. (\ref{eq:1}) and taking the partial trace over the environmental degrees of freedom we get
\begin{equation}
\frac{d\varrho_{S}(\tau)}{d\tau}=-\int_{0}^{\tau}ds\,\textrm{Tr}_{E}\{[H_{I}(\tau),[H_{I}(s),\varrho_{SE}(s)]]\},
\label{micro1}
\end{equation}
where we have assumed $\textrm{Tr}_{E}[H_{I}(\tau),\varrho_{SE}(0)]=0$.
We assume now that system and environment are weakly coupled (Born approximation). This approximation amounts to assuming that the correlations established between system and environment are negligible at all times (initially zero), i.e.,
$$
\varrho_{SE}(\tau)\approx\varrho_{S}(\tau)\otimes\varrho_{E}
$$
Within this approximation we get a closed integro-differential equation for $\varrho_{S}(\tau)$
\begin{equation}
\frac{d\varrho_{S} (\tau)}{d\tau}=-\int_{0}^{\tau}ds\,\textrm{Tr}_{E}\{[H_{I}(\tau),[H_{I}(s),\varrho_{S}(s)\otimes\varrho_{E}]]\}
\label{micro2}
\end{equation}
Note that, in the equation above, the future evolution of the system, described by $\frac{d\varrho_{S}(\tau)}{d\tau}$, depends through the integral on the past states of the system $\varrho_{S}(s)$ for times $s < \tau$.
A further simplification of this equation is obtained by assuming that we can replace $\varrho_{S}(s)$ appearing inside the integral with its value at time $\tau$, $\varrho_{S}(\tau)$, which is possible if the density matrix does not change significantly in the time interval $0 \le s \le \tau$.
This is the case in many physical situations in which this integrand (or rather the part of it describing the environment correlations) quickly decays to zero after a short characteristic correlation time $\tau_E$. This timescale quantifies the memory time of the reservoir. Hence, if the density matrix of the system does not change appreciably within the correlation time $\tau_E$, then we can approximate $\varrho_{S}(s)$ with $\varrho_{S}(\tau)$ in Eq. (\ref{micro2}). The resulting equation is known as the Redfield equation
\begin{equation}
\frac{d\varrho_{S} (\tau)}{d\tau}=-\int_{0}^{\tau}ds\,\textrm{Tr}_{E}\{[H_{I}(\tau),[H_{I}(s),\varrho_{S}(\tau)\otimes\varrho_{E}]]\}.
\label{micro4}
\end{equation}
Equation (\ref{micro4}) is local in time, i.e., the future evolution of the state of the system does not depend on its past state. However, it still retains memory of the initial state $\varrho_{S} (0)$.\\
Until now we have assumed the density matrix does not change much within the correlation time $\tau_E$. The next step will be to neglect such a change altogether by performing a coarse graining in time. This is mathematically achieved by replacing the upper limit of the integral in Eq. (\ref{micro4}) with $\infty$,
\begin{equation}
\frac{d\varrho_{S}(\tau)}{d\tau}=-\int_{0}^{\infty}ds\,\textrm{Tr}_{E}\{[H_{I}(\tau),[H_{I}(\tau-s),\varrho_{S}(\tau)\otimes\varrho_{E}]]\},
\label{micro44}
\end{equation}
where we have replaced for the sake of convenience $s$ with $\tau-s$.
The two-step approximation described in Eqs. (\ref{micro4}) and (\ref{micro44}) is known as the Markov approximation. We say that Eq. (\ref{micro44}) is derived from a microscopic model under the Born-Markov approximation, i.e., for weak coupling and quickly decaying reservoir correlations (memoryless dynamics).
Let us decompose the interaction Hamiltonian $H_{I}$ in terms of operators of the system and of the environment:
$$
H_{I}=\sum_{\alpha}A_{\alpha}\otimes B_{\alpha}
$$
with $A_{\alpha} \, (B_{\alpha})$ Hermitian operators of the system (environment). In our case of a two-level system interacting with a scalar field this can be rewritten as
$$
H_{I}=\sum_{\alpha}\sigma_{\alpha}\otimes \phi_{\alpha}
$$
Let us assume that $H_{S}$ has a discrete spectrum; let us indicate with $\epsilon$ its eigenvalues and with $\Pi(\epsilon)$ the projectors onto the corresponding eigenspaces. We define the eigenoperators of the system as follows
\begin{equation}
\sigma_{\alpha}(\omega)=\sum_{\epsilon'-\epsilon=\omega}\Pi(\epsilon)\sigma_{\alpha}\Pi(\epsilon').
\end{equation}
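As a concrete illustration (a standard textbook example, not necessarily the specific coupling of our model), consider a two-level system with $H_{S}=\frac{\omega_0}{2}\sigma_z$ and coupling operator $\sigma_x=\sigma_++\sigma_-$; then
\begin{align*}
\sigma_x(\omega_0) &= \Pi(-\tfrac{\omega_0}{2})\,\sigma_x\,\Pi(\tfrac{\omega_0}{2}) = \sigma_-\,, &
\sigma_x(-\omega_0) &= \Pi(\tfrac{\omega_0}{2})\,\sigma_x\,\Pi(-\tfrac{\omega_0}{2}) = \sigma_+\,,
\end{align*}
and in the interaction picture these eigenoperators simply acquire the phases $e^{\mp i\omega_0\tau}$.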
We can rewrite the interaction Hamiltonian in terms of eigenoperators of $H_{S}$, and then pass to the interaction picture, exploiting the fact that the system eigenoperators have a simple time dependence in this picture. The environment operators in the interaction picture are simply given by $\phi_{\alpha}(\tau) = e^{i H_E \tau} \phi_{\alpha} e^{- i H_E \tau}$.
After some algebra, we can rewrite the master equation in the following form
\begin{equation}
\begin{split}
\frac{d\varrho_{S}(\tau)}{d\tau}=\sum_{\omega,\omega'}\sum_{\alpha,\beta}e^{i(\omega'-\omega)\tau}&\Gamma_{\alpha\beta}(\omega) [\sigma_{\beta}(\omega)\varrho_{S}(\tau)\sigma_{\alpha}^{\dagger}(\omega') \\ & -
\sigma_{\alpha}^{\dagger}(\omega')\sigma_{\beta}(\omega)\varrho_{S}(\tau)]+\textrm{h.c.}
\end{split}
\label{micro5}
\end{equation}
where we introduced
$$
\Gamma_{\alpha\beta}(\omega)\equiv\int_{0}^{\infty}dse^{i\omega s}\langle \phi_{\alpha}^{\dagger}(\tau)\phi_{\beta}(\tau-s)\rangle ,
$$
with the reservoir correlation functions given by
$$
\langle \phi_{\alpha}^{\dagger}(\tau)\phi_{\beta}(\tau-s)\rangle\equiv\textrm{Tr}_{E}\{\phi_{\alpha}^{\dagger}(\tau)\phi_{\beta}(\tau-s)\varrho_{E}\}.
$$
Such correlation functions are homogeneous in time if the reservoir is stationary, i.e.
$$
\langle \phi_{\alpha}^{\dagger}(\tau)\phi_{\beta}(\tau-s)\rangle=\langle \phi_{\alpha}^{\dagger}(s)\phi_{\beta}(0)\rangle,
$$
however, this is not true in our case, as the field $\phi$ is not invariant under time translations; this is one of the crucial differences from the time-independent case of Ref. \cite{Benatti}.
We now make the last approximation, known as the secular approximation. First we define $\tau_{S}$ as the characteristic intrinsic evolution time of the system. This timescale is generally of the order of $\tau_{S}\approx|\omega'-\omega|^{-1}, \omega'\ne\omega$. We indicate with $\tau_{R}$ the relaxation time of the open system. If $\tau_{S}\ll\tau_{R}$ we can neglect all the exponential terms oscillating at frequency $|\omega'-\omega| \ne0$, as they oscillate very rapidly (averaging out to zero) over the timescale $\tau_R$ over which $\varrho_{S}$ changes appreciably. We then decompose the environment correlation functions into their real and imaginary parts
$$
\Gamma_{\alpha\beta}(\omega)=\frac{1}{2}\gamma_{\alpha\beta}(\omega)+iS_{\alpha\beta}(\omega),
$$
where, for fixed $\omega$,
$$
\gamma_{\alpha\beta}(\omega)=\Gamma_{\alpha\beta}(\omega)+\Gamma_{\beta\alpha}^{*}(\omega)=\int_{-\infty}^{+\infty}dse^{i\omega s}\langle \phi_{\alpha}^{\dagger}(\tau-s)\phi_{\beta}(\tau)\rangle,
$$
form a positive matrix and
$$
S_{\alpha\beta}(\omega)=\frac{1}{2i} [\Gamma_{\alpha\beta}(\omega)-\Gamma_{\beta\alpha}^{*}(\omega)],
$$
form a Hermitian matrix.
With these definitions we finally arrive at the interaction picture master equation
\begin{equation}
\frac{d\varrho_{S}(\tau)}{d\tau}=-i[H_{LS},\varrho_{S}(\tau)]+\mathcal{L}(\varrho_{S}(\tau))
\label{micro6}
\end{equation}
where
$$
H_{LS}=\sum_{\omega}\sum_{\alpha,\beta}S_{\alpha\beta}(\omega)\sigma_{\alpha}^{\dagger}(\omega)\sigma_{\beta}(\omega)
$$
is a Lamb shift term, which provides a Hamiltonian contribution to the dynamics, and
$$
\mathcal{L}(\varrho_{S})=\sum_{\omega}\sum_{\alpha,\beta}\gamma_{\alpha\beta}(\omega)\left[\sigma_{\beta}(\omega)\varrho_{S}\sigma_{\alpha}^{\dagger}(\omega)-\frac{1}{2}\{\sigma_{\alpha}^{\dagger}(\omega)\sigma_{\beta}(\omega),\varrho_{S}\}\right].
$$
This form of the dissipator (the generator of the dynamics) $\mathcal{L}$ is known as the first standard form. Diagonalizing the positive matrix $\gamma_{\alpha\beta}(\omega)$ we get the GKSL Markovian master equation
$$
\mathcal{L}(\varrho_{S})=\sum_{\omega}\sum_{\alpha}\gamma_{\alpha}(\omega)\left[\bar{\sigma}_{\alpha}(\omega)\varrho_{S}\bar{\sigma}_{\alpha}^{\dagger}(\omega)-\frac{1}{2}\{\bar{\sigma}_{\alpha}^{\dagger}(\omega)\bar{\sigma}_{\alpha}(\omega),\varrho_{S}\}\right],
$$
where $\{\bar{\sigma}_\alpha\}_{\alpha = 0..3} = \{ I, \sigma_+, \sigma_-, \sigma_z \}$.
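As a small numerical sketch (ours, for illustration) of this final diagonalization step, with a hypothetical $2\times 2$ Kossakowski block at fixed $\omega$:
\begin{verbatim}
import numpy as np

# Hypothetical Hermitian positive block gamma_{alpha beta}(omega).
gamma = np.array([[1.0, 0.3j],
                  [-0.3j, 0.5]])

rates, U = np.linalg.eigh(gamma)    # gamma = U diag(rates) U^dagger
print("GKSL rates gamma_alpha:", rates)   # nonnegative iff gamma >= 0

# The diagonal jump operators are then
#   sigma_bar_a = sum_beta conj(U[beta, a]) * sigma_beta.
\end{verbatim}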
\section{Introduction}
In this paper we consider $\lambda Y$-calculus, which is an extension of the simply typed $\lambda$-calculus by a fixed-point operator $Y$.
A term $P$ of $\lambda Y$-calculus that is of sort\footnote{%
We use the word ``sort'' instead of the usual ``type'' to avoid confusion with intersection types introduced in this paper.}
$o$ can be used to generate an infinite tree $\mathit{BT}(P)$, called the B\"ohm tree of $P$.
Trees generated by terms of $\lambda Y$-calculus can be used to faithfully represent the control flow of programs in languages with higher-order functions.
Traditionally, Higher-Order Recursion Schemes (HORSes) are used for this purpose~\cite{Damm82,KNU-hopda,Ong-hoschemes,kobayashiOng2009type}; this formalism is equivalent to $\lambda Y$-calculus,
and the translation between them is rather straightforward~\cite{MSC:9613738}.
Collapsible Pushdown Systems \cite{collapsible} and Ordered Tree-Pushdown Systems \cite{DBLP:conf/fsttcs/ClementePSW15} are other equivalent formalisms.
Intersection type systems were intensively used in the context of HORSes, for several purposes like
model-checking \cite{kobayashi2009types-popl,kobayashiOng2009type,DBLP:conf/csl/BroadbentK13,DBLP:conf/popl/RamsayNO14},
pumping \cite{koba-pumping},
transformations of HORSes \cite{context-sensitive-2,downward-closure}, etc.
Interestingly, constructions very similar to intersection types were used also on the side of collapsible pushdown systems, namely alternating stack automata \cite{saturation} and types of stacks \cite{ho-new,Kar-Par-pumping}.
In this paper we show how intersection types can be used for deciding quantitative properties of trees generated by $\lambda Y$-terms.
We concentrate on the language finiteness problem for nondeterministic HORSes:
given a nondeterministic HORS, decide whether the set of all finite trees generated by this HORS is finite.
This problem can be restated in the world of $\lambda Y$-terms (or standard, deterministic HORSes), which generate a single infinite tree.
Here, instead of resolving nondeterministic choices during the generation process, we leave them in the resulting tree.
Those nondeterministic choices are denoted by a distinguished $\mathsf{br}$ (``branch'') symbol, below which we put options that could be chosen.
Then to obtain a finite tree generated by the original HORS we just need to recursively choose in every $\mathsf{br}$-labeled node which of the two subtrees we want to consider.
Thus, in this setting, the language finiteness problem asks whether the set of all finite trees obtained this way is finite.
The difficulty of this problem lies in the fact that sometimes the same finite tree may be found in infinitely many different places of $\mathit{BT}(P)$ (i.e., generated by a nondeterministic HORS in many ways);
thus the actual property to decide is whether there is a common bound on the sizes of all these trees.
This makes the problem inaccessible to standard methods used for analyzing HORSes, as they usually concern only regular properties of the B\"ohm tree, while boundedness is a problem of a different kind.
The same difficulty was observed in \cite{koba-pumping}, where the authors prove a pumping lemma for deterministic HORSes, while admitting (Remark 2.2) that their method is too weak to reason about nondeterministic HORSes.
In order to solve the language finiteness problem, we present an appropriate intersection type system, where derivations are annotated by flags and markers of multiple kinds.
The key property of this type system is that the number of flags in a type derivation for a $\lambda Y$-term $P$
approximates the size of some finite tree obtained by resolving nondeterministic choices in the infinite tree $\mathit{BT}(P)$.
In consequence, there are type derivations using arbitrarily many flags if, and only if, the answer to the language finiteness problem is ``no''.
The language finiteness problem was first attacked in \cite{achim-pumping} (for safe HORSes only), but their algorithm turned out to be incorrect \cite{achim-erratum}.
To our knowledge, the only known solution of this problem follows from a recent decidability result for the diagonal problem \cite{diagonal-safe, downward-closure}.
This problem asks, given a nondeterministic HORS and a set of letters $\Sigma$, whether for every $n\in\mathbb{N}$ the HORS generates a finite tree in which every letter from $\Sigma$ appears at least $n$ times.
Clearly, a nondeterministic HORS generates arbitrarily large trees exactly when for some letter $a$ it generates trees having arbitrarily many $a$ letters, i.e., when the answer to the diagonal problem for $\Sigma=\{a\}$ is ``yes''.
Our type system is, to some extent, motivated by the algorithm of \cite{downward-closure} solving the diagonal problem.
This algorithm works by repeating two kinds of transformations of HORSes.
The first of them turns the HORS into a HORS generating trees having only a fixed number of branches, one for each letter from $\Sigma$ (i.e., one branch in our case of $|\Sigma|=1$).
The branches are chosen nondeterministically out of some tree generated by the original HORS; for every $a\in\Sigma$ there is a choice witnessing that $a$ appeared many times in the original tree.
Then such a HORS of the special form is turned into a HORS whose order is lower by one,
and which generates trees having the same nodes as trees generated by the original HORS, but arranged differently (in particular, the new trees may again have arbitrarily many branches).
After finitely many repetitions of this procedure, a HORS of order $0$ is obtained, and the diagonal problem becomes easily decidable.
In some sense we want to do the same, but instead of applying all these transformations one by one, we simulate all of them simultaneously in a single type derivation.
In this derivation, for each order $n$, we allow one marker ``of order $n$'' to be placed at an arbitrary position; this corresponds to the nondeterministic choice of one branch in the $n$-th step of the previous algorithm.
We also place some flags ``of order $n$'', in places that correspond to nodes remaining after the $n$-th step of the previous algorithm.
The idea of using intersection types for counting is not completely new.
Paper \cite{jfp-numerals} presents a type system that, essentially, allows one to estimate the size of the $\beta$-normal form of a $\lambda$-term just by looking at (the number of some flags in) a derivation of a type for this term.
A similar idea, but for higher-order pushdown automata, is present in \cite{ho-new}, where we can estimate the number of $\sharp$ symbols appearing on a particular, deterministically chosen branch of the generated tree.
This previous approach also uses intersection types, where the derivations are marked with just one kind of flag, denoting ``productive'' places of a $\lambda$-term
(in contrast to our approach, where we have different flags for different orders, and we also have markers).
The trouble with the ``one-flag'' approach is that it works well only in a completely deterministic setting, where looking independently at each node of the B\"ohm tree we know how it contributes to the result;
the method stops working (or at least we do not know how to prove that it works) in our situation, where we first nondeterministically perform some guesses in the B\"ohm tree, and only after that we want to count something that depends on the chosen values.
\paragraph{Acknowledgements.}
I would like to thank Szymon Toruńczyk for stimulating discussions, and anonymous reviewers for useful comments.
\section{Preliminaries}
\paragraph{Trees.}
Let $\Sigma$ be a \emph{ranked alphabet}, i.e., a set of symbols together with a rank function assigning a nonnegative integer to each of the symbols.
We assume that $\Sigma$ contains a distinguished symbol $\mathsf{br}$ of rank $2$, used to denote nondeterministic choices.
A \emph{$\Sigma$-labeled} tree is a tree that is rooted (there is a distinguished root node),
node-labeled (every node has a label from $\Sigma$),
ranked (a node with label of rank $n$ has exactly $n$ children),
and ordered (children of a node of rank $n$ are numbered from $1$ to $n$).
When $t$ is a $\Sigma$-labeled tree, by $\mathcal{L}(t)$ we denote the set of all finite trees that can be obtained by choosing in every $\mathsf{br}$-labeled node of $t$ which of the two subtrees we want to consider.
More formally, we consider the following relation $\to_\mathsf{br}$: we have $t\to_\mathsf{br} u$ if $u$ can be obtained from $t$ by choosing in $t$ a $\mathsf{br}$-labeled node $x$ and its child $y$,
and replacing the subtree starting in $x$ by the subtree starting in $y$ (which removes $x$ and the other subtree of $x$).
Let $\to_\mathsf{br}^*$ be the reflexive transitive closure of $\to_\mathsf{br}$.
Then $\mathcal{L}(t)$ contains all trees $u$ that do not use the $\mathsf{br}$ label, are finite, and such that $t\to_\mathsf{br}^*u$.
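For finite trees this definition transcribes directly into code; the following sketch (ours, with trees encoded as nested tuples $(\mathit{label},\mathit{child}_1,\dots,\mathit{child}_r)$) enumerates $\mathcal{L}(t)$:
\begin{verbatim}
from itertools import product

def resolve(t):
    # Enumerate L(t) for a *finite* tree t: all br-free trees obtained
    # by choosing one child in every br-labeled node.
    label, children = t[0], t[1:]
    if label == 'br':
        for child in children:
            yield from resolve(child)
    else:
        for picked in product(*(resolve(c) for c in children)):
            yield (label, *picked)

t = ('br', ('a', ('e',)), ('a', ('a', ('e',))))
print(list(resolve(t)))   # [('a', ('e',)), ('a', ('a', ('e',)))]
\end{verbatim}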
\paragraph{Infinitary $\lambda$-calculus.}
The set of \emph{sorts} (a.k.a.~simple types), constructed from a unique basic sort $o$ using a binary operation ${\to}$, is defined as usual.
The order of a sort is defined by: $\mathit{ord}(o)=0$, and $\mathit{ord}(\alpha{\to}\beta)=\max(1+\mathit{ord}(\alpha),\mathit{ord}(\beta))$.
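This definition also transcribes directly (a small sketch of ours, with the sort $\alpha{\to}\beta$ encoded as the pair $(\alpha,\beta)$ and the basic sort as the string 'o'):
\begin{verbatim}
def ord_sort(sort):
    # ord(o) = 0, ord(alpha -> beta) = max(1 + ord(alpha), ord(beta))
    if sort == 'o':
        return 0
    alpha, beta = sort
    return max(1 + ord_sort(alpha), ord_sort(beta))

print(ord_sort(('o', 'o')))          # o -> o        : 1
print(ord_sort((('o', 'o'), 'o')))   # (o -> o) -> o : 2
print(ord_sort(('o', ('o', 'o'))))   # o -> o -> o   : 1
\end{verbatim}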
We consider infinitary, sorted $\lambda$-calculus.
\emph{Infinitary $\lambda$-terms} (or just \emph{$\lambda$-terms}) are defined by coinduction, according to the following rules:
\begin{itemize}
\item if $a\in\Sigma$ is a symbol of rank $r$, and $P_1^o,\dots,P_r^o$ are $\lambda$-terms, then $(a\,P_1^o\,\dots\,P_r^o)^o$ is a $\lambda$-term,
\item for every sort $\alpha$ there are infinitely many variables $x^\alpha,y^\alpha,z^\alpha,\dots$; each of them is a $\lambda$-term,
\item if $P^{\alpha{\to}\beta}$ and $Q^\alpha$ are $\lambda$-terms, then $(P^{\alpha{\to}\beta}\,Q^\alpha)^\beta$ is a $\lambda$-term, and
\item if $P^\beta$ is a $\lambda$-term and $x^\alpha$ is a variable, then $(\lambda x^\alpha.P^\beta)^{\alpha{\to}\beta}$ is a $\lambda$-term.
\end{itemize}
We naturally identify $\lambda$-terms differing only in names of bound variables.
We often omit the sort annotations of $\lambda$-terms, but we keep in mind that every $\lambda$-term (and every variable) has a particular sort.
A $\lambda$-term $P$ is \emph{closed} if it has no free variables.
Notice that, for technical convenience, a symbol of positive rank is not a $\lambda$-term itself, but always comes with arguments.
This is not a restriction, since e.g.~instead of a unary symbol $a$ one may use the term $\lambda x.a\,x$.
The order of a $\lambda$-term is just the order of its sort.
The \emph{complexity} of a $\lambda$-term $P$ is the smallest number $m$ such that the order of every subterm of $P$ is at most $m$.
We restrict ourselves to $\lambda$-terms that have finite complexity.
A $\beta$-reduction is defined as usual.
We say that a $\beta$-reduction $P\to_\beta Q$ \emph{is of order $n$} if it concerns a redex $(\lambda x.R)\,S$ such that $\mathit{ord}(\lambda x.R)=n$.
In this situation the order of $x$ is at most $n-1$, but may be smaller (when other arguments of $R$ are of order $n-1$).
\paragraph{B\"ohm Trees.}
We consider B\"ohm trees only for closed $\lambda$-terms of sort $o$.
For such a term $P$, its \emph{B\"ohm tree} $\mathit{BT}(P)$ is constructed by coinduction, as follows:
if there is a sequence of $\beta$-reductions from $P$ to a $\lambda$-term of the form $a\,P_1\,\ldots\,P_r$ (where $a$ is a symbol),
then the root of $\mathit{BT}(P)$ has label $a$ and $r$ children, and the subtree starting in the $i$-th child is $\mathit{BT}(P_i)$.
If there is no sequence of $\beta$-reductions from $P$ to a $\lambda$-term of the above form, then $\mathit{BT}(P)$ is the full binary tree with all nodes labeled by $\mathsf{br}$.\footnote{%
Usually one uses a special label $\bot$ of rank $0$ for this purpose, but from the perspective of our problem both definitions are equivalent.}
By $\mathcal{L}(P)$ we denote $\mathcal{L}(\mathit{BT}(P))$.
\paragraph{$\lambda Y$-calculus.}
The syntax of $\lambda Y$-calculus is the same as that of finite $\lambda$-calculus, extended by symbols $Y^{(\alpha{\to}\alpha){\to}\alpha}$, for each sort $\alpha$.
A term of $\lambda Y$-calculus is seen as a term of infinitary $\lambda$-calculus
if we replace each symbol $Y^{(\alpha{\to}\alpha){\to}\alpha}$ by the unique infinite $\lambda$-term $Z$ such that $Z$ is syntactically the same as $\lambda x^{\alpha{\to}\alpha}.x\,(Z\,x)$.
In this way, we view $\lambda Y$-calculus as a fragment of infinitary $\lambda$-calculus.
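As a side remark, the same infinite unfolding $Z=\lambda x.x\,(Z\,x)$ underlies the usual programming-language fixed point; a call-by-value rendering (our sketch; the inner abstraction is the $\eta$-expansion that keeps the unfolding lazy) is:
\begin{verbatim}
def Y(f):
    # Z = lambda x. x (Z x): unfold one step, delaying the rest.
    return f(lambda *args: Y(f)(*args))

fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120
\end{verbatim}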
It is standard to convert a nondeterministic HORS $\mathcal{G}$ into a closed $\lambda Y$-term $P^o$ such that $\mathcal{L}(P)$ is exactly the set of all finite trees generated by $\mathcal{G}$.
The following theorem, which is our main result, states that the \emph{language finiteness problem} is decidable.
\begin{theorem}\label{thm:main}
Given a closed $\lambda Y$-term $P$ of sort $o$, one can decide whether $\mathcal{L}(P)$ is finite.
\end{theorem}
\section{Intersection Type System}
In this section we introduce a type system that allows us to determine the desired property: whether $\mathcal{L}(P)$ contains arbitrarily large trees.
\paragraph{Intuitions.}\label{para:intuitions}
The main novelty of our type system is in using flags and markers, which may label nodes of derivation trees.
To every flag and marker we assign a number, called an order.
While deriving a type for a $\lambda$-term of complexity $m$, we may place in every derivation tree at most one marker of each order $n\in\{0,\dots,m-1\}$, and arbitrarily many flags of each order $n\in\{0,\dots,m\}$.
Consider first a $\lambda$-term $M_0$ of complexity $0$.
Such a term actually equals its B\"ohm tree.
Our aim is to describe some finite tree $t$ in $\mathcal{L}(M_0)$, i.e., obtained from $M_0$ by resolving nondeterministic choices in some way.
We thus just put flags of order $0$ in all those (appearances of) symbols in $M_0$ that contribute to this tree $t$;
the type system ensures that indeed all symbols of some finite tree in $\mathcal{L}(M_0)$ are labeled by a flag.
Then clearly we have the desired property that there is a derivation with arbitrarily many flags if, and only if, there are arbitrarily large trees in $\mathcal{L}(M_0)$.
Next, consider a $\lambda$-term $M_1$ that is of complexity $1$, and reduces to $M_0$.
Of course every finite tree from $\mathcal{L}(M_0)$ is composed of symbols appearing already in $M_1$;
we can thus already in $M_1$ label (by order-$0$ flags) all symbols that contribute to some tree $t\in\mathcal{L}(M_0)$ (and an intersection type system can easily check correctness of such labeling).
There is, however, one problem: a single appearance of a symbol in $M_1$ may result in many appearances in $M_0$ (since a function may use its argument many times).
Due to this, the number of order-$0$ flags in $M_1$ does not correspond to the size of $t$.
We rescue ourselves in the following way.
In $t$ we choose one leaf, we label it by an order-$0$ marker, and on the path leading from the root to this marker we place order-$1$ flags.
On the one hand, $\mathcal{L}(M_0)$ contains arbitrarily large trees if, and only if, it contains trees with arbitrarily long paths, i.e., trees with arbitrarily many order-$1$ flags.
On the other hand, we can perform the whole labeling (and the type system can check its correctness) already in $M_1$, and the number of order-$1$ flags in $M_1$ will be precisely the same as it would be in $M_0$.
Indeed, in $M_1$ we have only order-$1$ functions, i.e., functions that take trees and use them as subtrees of larger trees;
although a tree coming as an argument may be duplicated, the order-$0$ marker can be placed in at most one copy.
This means that, while reducing $M_1$ to $M_0$, every symbol of $M_1$ can result in at most one symbol of $M_0$ lying on the selected path to the order-$0$ marker
(besides arbitrarily many symbols outside this path).
This procedure can be repeated for $M_2$ of complexity $2$ that reduces to $M_1$ via $\beta$-reductions of order $2$ (and so on for higher orders).
We now place a marker of order $1$ in some leaf of $M_1$;
afterwards, we place an order-$2$ flag in every node that is on the path to the marked leaf and that has a child outside this path with a descendant labeled by an order-$1$ flag.
In effect, for some choice of a leaf to be marked, the number of order-$2$ flags approximates the number of order-$1$ flags, up to logarithm.
Moreover, the whole labeling can be done in $M_2$ instead of in $M_1$, without changing the number of order-$2$ flags.
In this intuitive description we have talked about labeling ``nodes of a $\lambda$-term'', but formally, in our type system, we label nodes of a derivation tree deriving a type for the term.
Every such node contains a type judgment for some subterm of the term.
\paragraph{Type Judgments.}
For every sort $\alpha$ we define the set $\mathcal{T}^\alpha$ of \emph{types} of sort $\alpha$,
and the set $\mathcal{F}^\alpha$ of \emph{full types} of sort $\alpha$.
This is done as follows, where $\mathcal{P}$ denotes the powerset:
\begin{align*}
&\mathcal{T}^{\alpha{\to}\beta}=\mathcal{P}(\mathcal{F}_{\mathit{ord}(\alpha{\to}\beta)}^\alpha)\times\mathcal{T}^\beta\,,\qquad
\mathcal{T}^o=o\,,\\
&\mathcal{F}_k^\alpha=\{(k,F,M,\tau)\mid F,M\subseteq\{0,\dots,k-1\},\,F\cap M=\emptyset,\,\tau\in\mathcal{T}^\alpha\}\,,\qquad\mathcal{F}^\alpha=\bigcup_{k\in\mathbb{N}}\mathcal{F}_k^\alpha\,.
\end{align*}
Notice that the sets $\mathcal{T}^\alpha$ and $\mathcal{F}_k^\alpha$ are finite (unlike $\mathcal{F}^\alpha$).
A type $(T,\tau)\in\mathcal{T}^{\alpha{\to}\beta}$ is denoted as $T{\to}\tau$.
A full type $\hat\tau=(k,F,M,\tau)\in\mathcal{F}_k^\alpha$ consists of its order $k$, a set $F$ of flag orders, a set $M$ of marker orders, and a type $\tau$;
we write $\mathit{ord}(\hat\tau)=k$.
In order to distinguish types from full types, the latter are denoted by letters with a hat, like $\hat\tau$.
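To see the finiteness claim concretely (and how quickly these sets grow), the definitions transcribe directly; this sketch (ours) reuses ord_sort from the earlier sketch:
\begin{verbatim}
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return list(chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1)))

def full_types(sort, k):
    # F^sort_k: all (k, F, M, tau) with F, M disjoint subsets of {0..k-1}.
    return [(k, frozenset(F), frozenset(M), tau)
            for F in subsets(range(k))
            for M in subsets(range(k))
            if not set(F) & set(M)
            for tau in types(sort)]

def types(sort):
    # T^sort, by recursion on the sort; always a finite set.
    if sort == 'o':
        return ['o']
    alpha, beta = sort
    k = ord_sort(sort)
    return [(frozenset(T), tau)
            for T in subsets(full_types(alpha, k))
            for tau in types(beta)]

print(len(types(('o', 'o'))))   # 8 types of sort o -> o
\end{verbatim}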
A \emph{type judgment} is of the form $\Gamma\vdash P:\hat\tau\triangleright c$, where $\Gamma$, called a \emph{type environment},
is a function that maps every variable $x^\alpha$ to a subset of $\mathcal{F}^\alpha$,
$P$ is a $\lambda$-term, $\hat\tau$ is a full type of the same sort as $P$ (i.e., $\hat\tau\in\mathcal{F}^\beta$ when $P$ is of sort $\beta$), and $c\in\mathbb{N}$.
As usual for intersection types, the intuitive meaning of a type $T{\to}\tau$ is that a $\lambda$-term having this type can return a $\lambda$-term having type $\tau$, while taking an argument for which we can derive all full types from $T$.
Moreover, in $\mathcal{T}^o$ there is just one type $o$, which can be assigned to every $\lambda$-term of sort $o$.
Suppose that we have derived a type judgment $\Gamma\vdash P:\hat\tau\triangleright c$ with $\hat\tau=(m,F,M,\tau)$.
Then
\begin{itemize}
\item $\tau$ is the type derived for $P$;
\item $\Gamma$ contains full types that could be used for free variables of $P$ in the derivation;
\item $m$ bounds the order of flags and markers that could be used in the derivation: flags could be of order at most $m$, and markers of order at most $m-1$;
\item $M\subseteq\{0,\dots,m-1\}$ contains the orders of markers used in the derivation, together with those provided by free variables
(i.e., we imagine that some derivations, specified by the type environment, are already substituted in our derivation for free variables);
we, however, do not include markers provided by arguments of the term (i.e., coming from the sets $T_i$ when $\tau=T_1{\to}\dots{\to} T_k{\to} o$);
\item $F$ contains those numbers $n\in\{0,\dots,m-1\}$ (excluding $n=m$) for which a flag of order $n$ is placed in the derivation itself, or provided by a free variable, or provided by an argument;
for technical convenience we, however, remove $n$ from $F$ whenever $n\in M$
(when $n\in M$, the information about order-$n$ flags results in placing an order-$(n+1)$ flag, and need not be further propagated);
\item $c$, called a \emph{flag counter}, counts the number of order-$m$ flags present in the derivation.
\end{itemize}
\paragraph{Type System.}
Before giving rules of the type system, we need a few definitions.
We use the symbol $\uplus$ to denote disjoint union.
When $A\subseteq\mathbb{N}$ and $n\in\mathbb{N}$, we write $A{\restriction}_{<n}$ for $\{k\in A\mid k<n\}$, and similarly $A{\restriction}_{\geq n}$ for $\{k\in A\mid k\geq n\}$.
By $\varepsilon$ we denote the type environment mapping every variable to $\emptyset$,
and by $\Gamma[x\mapsto T]$ the type environment mapping $x$ to $T$ and every other variable $y$ to $\Gamma(y)$.
Let us now say how a type environment $\Gamma$ from the conclusion of a rule may be split into type environments $(\Gamma_i)_{i\in I}$ used in premisses of the rule:
we say that $\mathit{Split}(\Gamma\mid(\Gamma_i)_{i\in I})$ holds if and only if for every variable $x$ we have $\Gamma_i(x)\subseteq\Gamma(x)$ for every $i\in I$,
and every full type from $\Gamma(x)$ providing some markers (i.e., $(k,F,M,\tau)$ with $M\neq\emptyset$) appears in some $\Gamma_i(x)$.
Full types with empty $M$ may be discarded and duplicated freely.
This definition forbids to discard full types with nonempty $M$, and from elsewhere it will follow that they cannot be duplicated.
As a special case $\mathit{Split}(\Gamma\mid\Gamma')$ describes how a type environment can be weakened.
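The predicate itself is easy to implement (a sketch of ours, with type environments as dictionaries from variable names to sets of full types, and missing keys standing for the empty set):
\begin{verbatim}
def split_ok(gamma, parts):
    keys = set(gamma).union(*(set(g) for g in parts))
    for x in keys:
        whole = gamma.get(x, set())
        if any(not g.get(x, set()) <= whole for g in parts):
            return False      # each part may only shrink gamma
        for ft in whole:
            k, F, M, tau = ft
            if M and not any(ft in g.get(x, set()) for g in parts):
                return False  # marker-providing full types must survive
    return True

rho1 = (1, frozenset(), frozenset({0}), 'o')     # rho_1 of the examples below
print(split_ok({'x': {rho1}}, [{'x': {rho1}}]))  # True
print(split_ok({'x': {rho1}}, []))               # False: rho1 has a marker
\end{verbatim}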
All type derivations are assumed to be finite
(although we derive types mostly for infinite $\lambda$-terms, each type derivation analyzes only a finite part of a term).
Rules of the type system will guarantee that the order $m$ of derived full types will be the same in the whole derivation (although in type environments there may be full types of different orders).
We are ready to give the first three rules of our type system:
\begin{mathpar}
\inferrule*[right=(Br)]{
\Gamma\vdash P_i:\hat\tau\triangleright c
\\
i\in\{1,2\}
}
{\Gamma\vdash \mathsf{br}\,P_1\,P_2:\hat\tau\triangleright c}
\and
\inferrule*[right=(Var)]{
\mathit{Split}(\Gamma\mid\varepsilon[x\mapsto\{(k,F,M',\tau)\}])
\\
M{\restriction}_{<k}=M'
}
{\Gamma\vdash x:(m,F,M,\tau)\triangleright 0}
\end{mathpar}
\begin{mathpar}
\inferrule*[right=($\lambda$)]{\Gamma'[x\mapsto T]\vdash P:(m,F,M,\tau)\triangleright c
\\
\mathit{Split}(\Gamma\mid\Gamma')
\\
\Gamma'(x)=\emptyset}
{\Gamma\vdash\lambda x.P:(m,F,M\setminus\bigcup{}_{(k,F',M',\sigma)\in T}M',T{\to}\tau)\triangleright c}
\end{mathpar}
We see that to derive a type for the nondeterministic choice $\mathsf{br}\,P_1\,P_2$, we need to derive it either for $P_1$ or for $P_2$.
The \TirName{(Var)}\xspace rule allows the resulting set $M$ to contain some numbers that do not come from the set $M'$ assigned to $x$ by the type environment; these are the orders of markers placed in the leaf using this rule.
Notice, however, that we allow here only orders not smaller than $k$ (which is the order of the superterm $\lambda x.P$ binding this variable $x$).
This is consistent with the intuitive description of the type system (page \pageref{para:intuitions}),
which says that a marker of order $n$ can be put in a place that will be a leaf after performing all $\beta$-reductions of orders greater than $n$.
Indeed, the variable $x$ remains a leaf after performing $\beta$-reductions of orders greater than $k$, but while performing $\beta$-reductions of order $k$ this leaf will be replaced by a subterm substituted for $x$.
Recall also that, by definition of a type judgment, we require that $(k,F,M',\tau)\in\mathcal{F}^\alpha_k$ and $(m,F,M,\tau)\in\mathcal{F}^\alpha_m$, for appropriate sort $\alpha$;
this introduces a bound on maximal numbers that may appear in the sets $F$ and $M$.
\begin{example}\label{ex:var}
Denoting $\hat\rho_1=(1,\emptyset,\{0\},o)$ we can derive:
\begin{mathpar}
\inferrule*[Right=(Var)]{ }{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0\},o)\triangleright 0
}
\and
\inferrule*[Right=(Var)]{ }{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0,1\},o)\triangleright 0
}
\end{mathpar}
%
In the derivation on the right, the marker of order $1$ is placed in the conclusion of the rule.
\end{example}
The \TirName{($\lambda$)}\xspace rule allows the variable $x$ to be used (in a subderivation concerning the $\lambda$-term $P$) with all full types given in the set $T$.
When the sort of $\lambda x.P$ is $\alpha{\to}\beta$, by definition of $\mathcal{T}^{\alpha{\to}\beta}$ we have that all full types in $T$ have the same order $k=\mathit{ord}(\alpha{\to}\beta)$ (since $(T{\to}\tau)\in\mathcal{T}^{\alpha{\to}\beta}$).
Recall that we intend to store in the set $M$ the markers contained in the derivation itself and those provided by free variables, but not those provided by arguments.
Because of this, in the conclusion of the rule we remove from $M$ the markers provided by $x$.
This operation makes sense only because there is at most one marker of each order, so markers provided by $x$ cannot be provided by any other free variable nor placed in the derivation itself.
The set $F$, unlike $M$, stores also flags provided by arguments, so we do not need to remove anything from $F$.
\begin{example}\label{ex:lambda}
The \TirName{($\lambda$)}\xspace rule can be used, e.g., in the following way (where $a$ is a symbol of rank $1$):
\begin{mathpar}
\inferrule*[Right=($\lambda$)]{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash a\,x:(2,\{1\},\{0\},o)\triangleright 0
}{
\varepsilon\vdash\lambda x.a\,x:(2,\{1\},\emptyset,\{\hat\rho_1\}{\to} o)\triangleright 0
}
\and
\inferrule*[Right=($\lambda$)]{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash a\,x:(2,\emptyset,\{0,1\},o)\triangleright 1
}{
\varepsilon\vdash\lambda x.a\,x:(2,\emptyset,\{1\},\{\hat\rho_1\}{\to} o)\triangleright 1
}
\end{mathpar}
%
Notice that in the conclusion of the rule, in both examples, we remove $0$ from the set of marker orders, because the order-$0$ marker is provided by $x$.
\end{example}
The next two rules use a predicate $\mathit{Comp}_m$, saying how flags and markers from premisses contribute to the conclusion.
It takes ``as input'' pairs $(F_i,c_i)$ for $i\in I$; each of them consists of the set of flag orders $F_i$ and of the flag counter $c_i$ from some premiss.
Moreover, the predicate takes a set of marker orders $M$ from the current type judgment (it contains orders of markers used in the derivation, including those provided by free variables).
The goal is to compute the set of flag orders $F$ and the flag counter $c$ that should be placed in the current type judgment.
First, for each $n\in\{1,\dots,m\}$ consecutively, we decide whether a flag of order $n$ should be placed on the current type judgment.
We follow here the rules mentioned in the intuitive description.
Namely, we place a flag of order $n$ if we are on the path leading to the marker of order $n-1$ (i.e., if $n-1\in M$), and simultaneously we receive an information about a flag of order $n-1$.
By receiving this information we mean that either a flag of order $n-1$ was placed on the current type judgment, or $n-1$ belongs to some set $F_i$.
Actually, we place multiple flags of order $n$: one for each flag of order $n-1$ placed on the current type judgment, and one for each set $F_i$ containing $n-1$.
Then, we compute $F$ and $c$.
In $c$ we store the number of flags of the maximal order $m$: we sum all the numbers $c_i$, and we add the number of order-$m$ flags placed on the current type judgment.
In $F$ we keep elements of all $F_i$, and we add the orders $n$ of flags that were placed on the current type judgment.
We, however, remove from $F$ all elements of $M$.
This is because every flag of some order $n-1$ should result in creating at most one flag of order $n$, in the closest ancestor that lies on the path leading to the marker of order $n-1$.
If we have created an order-$n$ flag on the current type judgment, i.e., if $n-1\in M$, we do not want to do this again in the parent.
Below we give a formal definition, in which $f_n'$ contains the number of order-$n$ flags placed on the current type judgment,
while $f_n$ additionally counts the number of premisses for which $n\in F_i$.
We say that $\mathit{Comp}_m(M;\allowbreak((F_i,c_i))_{i\in I})=(F,c)$ when
\begin{align*}
&F=\{n\in\{0,\dots,m-1\}\mid f_n>0\land n\not\in M\}\,,&&c=f'_m+\sum_{i\in I}c_i\,,\qquad\mbox{where, for $n\in\{0,\dots,m\}$,}\\
&f_n=f_n'+\sum_{i\in I}|F_i\cap\{n\}|,&&f_n'=\left\{\begin{array}{ll}f_{n-1}&\mbox{if }n-1\in M,\\0&\mbox{otherwise.}\end{array}\right.
\end{align*}
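Since the definition is fully explicit, it transcribes directly into code (a sketch of ours); the two calls below reproduce the flag bookkeeping of Example \ref{ex:con}:
\begin{verbatim}
def comp(m, M, contributions):
    # Comp_m(M; (F_i, c_i)_i) -> (F, c), following the definition above;
    # contributions is a list of (F_i, c_i) pairs.
    f, fprime = {}, {}
    for n in range(m + 1):
        fprime[n] = f[n - 1] if (n - 1) in M else 0
        f[n] = fprime[n] + sum(1 for F_i, _ in contributions if n in F_i)
    F = {n for n in range(m) if f[n] > 0 and n not in M}
    c = fprime[m] + sum(c_i for _, c_i in contributions)
    return F, c

print(comp(2, {0}, [({0}, 0), (set(), 0)]))      # ({1}, 0)
print(comp(2, {0, 1}, [({0}, 0), (set(), 0)]))   # (set(), 1)
\end{verbatim}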
We now present a rule for constants other than $\mathsf{br}$:
\begin{mathpar}
\inferrule*[right=(Con)]{
\Gamma_i\vdash P_i:(m,F_i,M_i,o)\triangleright c_i\mbox{ for each }i\in\{1,\dots,r\}
\\
M=M'\uplus M_1\uplus\dots\uplus M_r
\\
(m=0)\Rightarrow(F'=\emptyset\land c'=1)
\\
(m>0)\Rightarrow(F'=\{0\}\land c'=0)
\\
(r>0)\Rightarrow(M'=\emptyset)
\\
a\neq\mathsf{br}
\\
\mathit{Split}(\Gamma\mid\Gamma_1,\dots,\Gamma_r)
\\
\mathit{Comp}_m(M;\allowbreak(F',c'),(F_1,c_1),\dots,(F_r,c_r))=(F,c)
}
{\Gamma\vdash a\,P_1\,\dots\,P_r:(m,F,M,o)\triangleright c}
\end{mathpar}
Here, the conditions in the second line say that in a node using the \TirName{(Con)}\xspace rule we always place a flag of order $0$ (via $F'$ or via $c'$, depending on $m$),
and that if the node is a leaf (i.e., $r=0$), then we are allowed to place markers of arbitrary order (via $M'$).
Then to the $\mathit{Comp}_m$ predicate, besides the pairs $(F_i,c_i)$ coming from premisses, we also pass the information $(F',c')$ about the order-$0$ flag placed in the current node;
this predicate decides whether we should place also some flags of positive orders.
Let us emphasize that in this rule (and similarly in the next rule) we have a disjoint union $M'\uplus M_1\uplus\dots\uplus M_r$,
which ensures that a marker of any order may be placed only in one node of a derivation.
\begin{example}\label{ex:con}
The \TirName{(Con)}\xspace rule may be instantiated in the following way:
\begin{mathpar}
\inferrule*[Right=(Con)]{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0\},o)\triangleright 0
}{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash a\,x:(2,\{1\},\{0\},o)\triangleright 0
}
\and
\inferrule*[Right=(Con)]{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0,1\},o)\triangleright 0
}{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash a\,x:(2,\emptyset,\{0,1\},o)\triangleright 1
}
\end{mathpar}
%
In the left example, flags of order $0$ and $1$ are placed in the conclusion of the rule
(a flag of order $0$ is created because we are in a constant; since the marker of order $0$ is visible, we do not put $0$ into the set of flag orders, but instead we create a flag of order $1$).
In the right example, a marker of order $1$ is visible, which causes flags of order $0$, $1$, and $2$ to be placed this time in the conclusion of the \TirName{(Con)}\xspace rule
(again, we do not put $0$ nor $1$ into the set of flag orders, because of $0$ and $1$ in the set of marker orders).
\end{example}
The next rule describes application:
\begin{mathpar}
\inferrule*[right=(\!@\!)]{
\Gamma'\vdash P:(m,F',M',\{(\mathit{ord}(P),F_i{\restriction}_{<\mathit{ord}(P)},M_i{\restriction}_{<\mathit{ord}(P)},\tau_i)\mid i\in I\}{\to}\tau)\triangleright c'
\\
\Gamma_i\vdash Q:(m,F_i,M_i,\tau_i)\triangleright c_i\mbox{ for each }i\in I
\\
M=M'\uplus\biguplus{}_{i\in I}M_i
\\
\mathit{ord}(P)\leq m
\\
\mathit{Split}(\Gamma\mid\Gamma',(\Gamma_i)_{i\in I})
\\
\mathit{Comp}_m(M;\allowbreak(F',c'),((F_i{\restriction}_{\geq\mathit{ord}(P)},c_i))_{i\in I})=(F,c)
}
{\Gamma\vdash P\,Q:(m,F,M,\tau)\triangleright c}
\end{mathpar}
In this rule, it is allowed (but in fact useless) that for two different $i\in I$ the full types $(m,F_i,M_i,\tau_i)$ are equal.
It is also allowed that $I=\emptyset$, in which case no type needs to be derived for $Q$.
Observe how flags and markers coming from premisses concerning $Q$ are propagated: only flags and markers of order $n<\mathit{ord}(P)$ are visible to $P$, while only flags of order $n\geq\mathit{ord}(P)$ are passed to the $\mathit{Comp}_m$ predicate.
This can be justified if we recall the intuitions behind the type system (see page \pageref{para:intuitions}).
Indeed, while considering flags and markers of order $n$, we should imagine the $\lambda$-term obtained from the current $\lambda$-term by performing all $\beta$-reductions of all orders greater than $n$;
the distribution of flags and markers of order $n$ in the current $\lambda$-term actually simulates their distribution in this imaginary $\lambda$-term.
Thus, if $n<\mathit{ord}(P)$, then our application will disappear in this imaginary $\lambda$-term, and $Q$ will be already substituted somewhere in $P$;
for this reason we need to pass the information about flags and markers of order $n$ from $Q$ to $P$.
Conversely, if $n\geq\mathit{ord}(P)$, then in the imaginary $\lambda$-term the considered application will be still present,
and in consequence the subterm corresponding to $P$ will not see flags and markers of order $n$ placed in the subterm corresponding to $Q$.
\begin{example}\label{ex:app}
Denote by $\hat\tau_\mathsf{f}$ and $\hat\tau_\mathsf{m}$ the types derived in Example \ref{ex:lambda}:
\begin{align*}
&\hat\tau_\mathsf{f}=(2,\{1\},\emptyset,\{\hat\rho_1\}{\to} o)\,,&
&\mbox{and}&
&\hat\tau_\mathsf{m}=(2,\emptyset,\{1\},\{\hat\rho_1\}{\to} o)\,.
\end{align*}
Then, using the \TirName{(\!@\!)}\xspace rule, we can derive (where $e$ is a symbol of rank $0$, and $f$ a variable):
\begin{mathpar}
\inferrule*[Right=(\!@\!)]{
\inferrule*[right=(Var)]{ }{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{m}\}]\vdash f:\hat\tau_\mathsf{m}\triangleright 0
}
\and
\inferrule*[Right=(Con)]{ }{
\varepsilon\vdash e:(2,\{1\},\{0\},o)\triangleright 0
}
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\}]\vdash f\,e:(2,\emptyset,\{0,1\},o)\triangleright 1
}
\end{mathpar}
%
Recall that $\hat\rho_1=(1,\emptyset,\{0\},o)$.
In the conclusion of the \TirName{(\!@\!)}\xspace rule the information about a flag of order $1$ (from the second premiss) meets the information about the marker of order $1$ (from the first premiss),
and thus a flag of order $2$ is placed, which increases the flag counter.
Notice that we have discarded the full type $\hat\tau_\mathsf{f}$ assigned to $f$ in the type environment;
this is allowed because $\hat\tau_\mathsf{f}$ provides no markers (equally well $\hat\tau_\mathsf{f}$ could be assigned to $f$ also in one or two of the premisses, and discarded there).
On the other hand, the full type $\hat\tau_\mathsf{m}$ provides markers, so it cannot be discarded nor duplicated (in particular, we could not pass it to the conclusion of the \TirName{(Con)}\xspace rule).
\end{example}
The key property of the type system is described by the following theorem.
\begin{theorem}\label{thm:types-ok}
Let $P$ be a closed $\lambda$-term of sort $o$ and complexity $m$.
Then $\mathcal{L}(P)$ is infinite if and only if for arbitrarily large $c$ we can derive $\varepsilon\vdash P:\hat\rho_m\triangleright c$, where $\hat\rho_m=(m,\emptyset,\{0,\dots,m-1\},o)$.
\end{theorem}
The left-to-right implication of Theorem \ref{thm:types-ok} (completeness of the type system) is shown in Section \ref{sec:compl}, while the opposite implication (soundness of the type system) in Section \ref{sec:sound}.
In Section \ref{sec:effective} we discuss how Theorem \ref{thm:main} follows from Theorem \ref{thm:types-ok}.
Before all that, we give a few more examples of derivations, illustrating the type system and Theorem \ref{thm:types-ok}.
\begin{example}\label{ex:large-1}
In this example we analyze the $\lambda$-term $P_1=R\,(\lambda x.a\,x)$, where $R$ is defined by coinduction as $R=(\lambda f.\mathsf{br}\,(f\,e)\,(R\,(\lambda x.f\,(f\,x))))$.
As previously, $a$ and $e$ are symbols of rank $1$ and $0$, respectively.
In $\mathcal{L}(P_1)$ there are trees that consist of a branch of $a$ symbols ending with an $e$ symbol, but only those where the number of $a$ symbols is $2^k$ for some $k\in\mathbb{N}$.
Notice that the complexity of $P_1$ is $2$.
Continuing Example \ref{ex:app}, we derive the full type $\hat\sigma_R=(2,\emptyset,\{0\},\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\}{\to} o)$ for $R$:
%
\begin{mathpar}
\inferrule*[Right=($\lambda$)]{
\inferrule*[Right=(Br)]{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\}]\vdash f\,e:(2,\emptyset,\{0,1\},o)\triangleright 1
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\}]\vdash \mathsf{br}\,(f\,e)\,(R\,(\lambda x.f\,(f\,x))):(2,\emptyset,\{0,1\},o)\triangleright 1
}
}{
\varepsilon\vdash R:\hat\sigma_R\triangleright 1
}
\end{mathpar}
Next, we derive the same full type for $R$, but using the second argument of the $\mathsf{br}$ symbol; this results in greater values of the flag counter.
We start by deriving the full type $\hat\tau_\mathsf{f}$ for the subterm $\lambda x.f\,(f\,x)$:
%
\begin{mathpar}
\inferrule*[right=($\lambda$)]{
\inferrule*[Right=(\!@\!)]{
\inferrule*{ }{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f}\}]\vdash f:\hat\tau_\mathsf{f}\triangleright 0
}
\and
\inferrule*[Right=(\!@\!)]{
\inferrule*{ }{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f}\}]\vdash f:\hat\tau_\mathsf{f}\triangleright 0
}
\and
\inferrule*{ }{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0\},o)\triangleright 0
}
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f}\},x\mapsto\{\hat\rho_1\}]\vdash f\,x:(2,\{1\},\{0\},o)\triangleright 0
}
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f}\},x\mapsto\{\hat\rho_1\}]\vdash f\,(f\,x):(2,\{1\},\{0\},o)\triangleright 0
}
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f}\}]\vdash\lambda x.f\,(f\,x):\hat\tau_\mathsf{f}\triangleright 0
}
\end{mathpar}
%
In the above derivation there are no flags nor markers.
Next, we derive $\hat\tau_\mathsf{m}$ for the same subterm:
%
\begin{mathpar}
\inferrule*[right=($\lambda$)]{
\inferrule*[Right=(\!@\!)]{
\inferrule*{ }{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f}\}]\vdash f:\hat\tau_\mathsf{f}\triangleright 0
}
\and
\inferrule*[Right=(\!@\!)]{
\inferrule*{ }{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{m}\}]\vdash f:\hat\tau_\mathsf{m}\triangleright 0
}
\and
\inferrule*{ }{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0\},o)\triangleright 0
}
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{m}\},x\mapsto\{\hat\rho_1\}]\vdash f\,x:(2,\emptyset,\{0,1\},o)\triangleright 0
}
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\},x\mapsto\{\hat\rho_1\}]\vdash f\,(f\,x):(2,\emptyset,\{0,1\},o)\triangleright 1
}
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\}]\vdash\lambda x.f\,(f\,x):\hat\tau_\mathsf{m}\triangleright 1
}
\end{mathpar}
%
Below the lower \TirName{(\!@\!)}\xspace rule the information about a flag of order $1$ meets the information about the marker of order $1$, and thus a flag of order $2$ is placed, which increases the flag counter.
We continue with the $\lambda$-term $R$:
%
\begin{mathpar}
\inferrule*[right=($\lambda$)]{
\inferrule*[Right=(Br)]{
\inferrule*[Right=(\!@\!)]{
\varepsilon\vdash R:\hat\sigma_R\triangleright c
\and
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f}\}]\vdash\lambda x.f\,(f\,x):\hat\tau_\mathsf{f}\triangleright 0
\and
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\}]\vdash\lambda x.f\,(f\,x):\hat\tau_\mathsf{m}\triangleright 1
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\}]\vdash R\,(\lambda x.f\,(f\,x)):(2,\emptyset,\{0,1\},o)\triangleright c+1
}
}{
\varepsilon[f\mapsto\{\hat\tau_\mathsf{f},\hat\tau_\mathsf{m}\}]\vdash \mathsf{br}\,(f\,e)\,(R\,(\lambda x.f\,(f\,x))):(2,\emptyset,\{0,1\},o)\triangleright c+1
}
}{
\varepsilon\vdash R:\hat\sigma_R\triangleright c+1
}
\end{mathpar}
%
In this fragment of a derivation no flag nor marker is placed.
In particular, there is no order-$2$ flag in the conclusion of the \TirName{(\!@\!)}\xspace rule, although its second premiss provides a flag of order $1$ while the third premiss provides the marker of order $1$.
We recall from the definition of the \TirName{(\!@\!)}\xspace rule that the information about flags and markers coming from the arguments is divided into two parts.
Numbers smaller than the order of the operator ($\mathit{ord}(R)=2$ in our case) are passed to the operator, while only greater numbers ($\geq 2$ in our case) contribute to creating new flags via the $\mathit{Comp}$ predicate.
By composing the above fragments of a derivation, we can derive $\varepsilon\vdash R:\hat\sigma_R\triangleright c$ for every $c\geq 1$.
Recall that in Examples \ref{ex:var}-\ref{ex:con} we have derived $\varepsilon\vdash\lambda x.a\,x:\hat\tau_\mathsf{f}\triangleright 0$ and $\varepsilon\vdash\lambda x.a\,x:\hat\tau_\mathsf{m}\triangleright 1$.
Together with the above, this allows us to derive for $P_1$ the full type $\hat\rho_2=(2,\emptyset,\{0,1\},o)$ (appearing in Theorem \ref{thm:types-ok}):
%
\begin{mathpar}
\inferrule*[Right=(\!@\!)]{
\varepsilon\vdash R:\hat\sigma_R\triangleright c
\and
\varepsilon\vdash\lambda x.a\,x:\hat\tau_\mathsf{f}\triangleright 0
\and
\varepsilon\vdash\lambda x.a\,x:\hat\tau_\mathsf{m}\triangleright 1
}{
\varepsilon\vdash P_1:\hat\rho_2\triangleright c+1
}
\end{mathpar}
%
We can notice a correspondence between a derivation with flag counter $c+1$ and a tree in $\mathcal{L}(P_1)$ of size $2^{c-1}+1$; for instance, the derivation with flag counter $2$, built from the non-recursive fragment only, corresponds to the tree $a\,e$ of size $2^0+1=2$.
We remark that in each of these derivations only three flags of order $0$ and only three flags of order $1$ are present, in the three nodes using the \TirName{(Con)}\xspace rule.
\end{example}
\begin{example}
Consider a similar $\lambda$-term $P_2=R\,(\lambda x.b\,x\,x)$, where $R$ is as previously, and $b$ is a symbol of rank $2$.
In $\mathcal{L}(P_2)$ we have, for every $k\in\mathbb{N}$, a full binary tree in which every branch consists of $2^k$ symbols $b$ and ends with an $e$ symbol.
This time for the subterm $\lambda x.b\,x\,x$ we need to derive three full types:
\begin{align*}
&\hat\tau_0'=(2,\{0\},\emptyset,\{(1,\{0\},\emptyset,o)\}{\to} o)\,,\\
&\hat\tau_\mathsf{f}'=(2,\{1\},\emptyset,\{(1,\{0\},\emptyset,o),\hat\rho_1\}{\to} o)\,,\qquad\mbox{and}\\
&\hat\tau_\mathsf{m}'=(2,\emptyset,\{1\},\{(1,\{0\},\emptyset,o),\hat\rho_1\}{\to} o)\,.
\end{align*}
The last one is derived with flag counter $1$.
Notice that $\hat\tau_\mathsf{f}'$ and $\hat\tau_\mathsf{m}'$ now need two full types for the argument $x$; the new one $(1,\{0\},\emptyset,o)$ describes the subtree that is not on the path to the order-$0$ marker.
We also have a new full type $\hat\tau_0'$ that describes the use of $\lambda x.b\,x\,x$ outside of the path to the order-$0$ marker.
Then, as in the previous example, for every $c\geq 1$ we can derive $\varepsilon\vdash R:\hat\sigma_R'\triangleright c$,
where $\hat\sigma_R'=(2,\emptyset,\{0\},\{\hat\tau_0',\hat\tau_\mathsf{f}',\hat\tau_\mathsf{m}'\}{\to} o)$.
Again, this allows us to derive $\varepsilon\vdash P_2:\hat\rho_2\triangleright c+1$.
This time a derivation with flag counter $c+1$ corresponds to a tree in $\mathcal{L}(P_2)$ of size $2^h-1$ with $h=2^{c-1}+1$.
\end{example}
\begin{example}
Next, consider the $\lambda$-term $P_3=R\,(\lambda x.\,x)$.
The only tree in $\mathcal{L}(P_3)$ consists of a single $e$ node.
Let us see how the derivation from Example \ref{ex:large-1} has to be modified.
The full type $\hat\tau_\mathsf{m}$ can still be derived for $\lambda x.\,x$ (although with flag counter $0$ now),
but instead of $\hat\tau_\mathsf{f}$ we have to use $\hat\tau_\mathsf{f}''=(2,\emptyset,\emptyset,\{\hat\rho_1\}{\to} o)$ that provides no flag of order $1$:
%
\begin{mathpar}
\inferrule*[Right=($\lambda$)]{
\inferrule*[Right=(Var)]{ }{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0\},o)\triangleright 0
}
}{
\varepsilon\vdash\lambda x.x:\hat\tau_\mathsf{f}''\triangleright 0
}
\and
\inferrule*[Right=($\lambda$)]{
\inferrule*[Right=(Var)]{ }{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0,1\},o)\triangleright 0
}
}{
\varepsilon\vdash\lambda x.x:\hat\tau_\mathsf{m}\triangleright 0
}
\end{mathpar}
Next, for $R$ we want to derive the full type $\hat\sigma_R''=(2,\emptyset,\{0\},\{\hat\tau_\mathsf{f}'',\hat\tau_\mathsf{m}\}{\to} o)$.
We can easily adapt each of the previous derivations for $\varepsilon\vdash R:\hat\sigma_R\triangleright c$: we basically replace every $\hat\tau_\mathsf{f}$ by $\hat\tau_\mathsf{f}''$.
The key point is that while deriving the full type $\hat\tau_\mathsf{m}$ for the subterm $\lambda x.f\,(f\,x)$, previously in the lower \TirName{(\!@\!)}\xspace rule we have received information about an order-$1$ flag,
and thus we have created an order-$2$ flag and increased the flag counter;
this time there is no information about an order-$1$ flag, and thus we do not create an order-$2$ flag and do not increase the flag counter.
In consequence, even if this part of the derivation is repeated arbitrarily many times, the value of the flag counter of the whole derivation remains $1$.
\end{example}
\begin{example}
Finally, consider the $\lambda$-term $P_4=(\lambda g.P_3)\,(\lambda x.a\,(a\,(\dots\,(a\,x)\dots)))$, which $\beta$-reduces to $P_3$.
Notice that we can create the following derivation:
%
\begin{mathpar}
\inferrule*[Right=($\lambda$)]{
\inferrule*[Right=(Con)]{
\inferrule*[Right=(Con)]{
\inferrule*[Right=(Con)]{
\inferrule*[Right=(Var)]{ }{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash x:(2,\emptyset,\{0\},o)\triangleright 0
}
}{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash a\,x:(2,\{1\},\{0\},o)\triangleright 0
}
}{
\vdots
}
}{
\varepsilon[x\mapsto\{\hat\rho_1\}]\vdash a\,(a\,(\dots\,(a\,x)\dots)):(2,\{1\},\{0\},o)\triangleright 0
}
}{
\varepsilon\vdash\lambda x.a\,(a\,(\dots\,(a\,x)\dots)):\hat\tau_\mathsf{f}\triangleright 0
}
\end{mathpar}
Every \TirName{(Con)}\xspace rule used in this derivation places in its conclusion an order-$0$ flag and an order-$1$ flag.
This derivation can be used as a part of a derivation for $P_4$:
%
\begin{mathpar}
\inferrule*[Right=(\!@\!)]{
\inferrule*[right=($\lambda$)]{
\varepsilon[g\mapsto\{\hat\tau_\mathsf{f}\}]\vdash P_3:\hat\rho_2\triangleright 1
}{
\varepsilon\vdash\lambda g.P_3:(2,\emptyset,\{0,1\},\{\hat\tau_\mathsf{f}\}{\to} o)\triangleright 1
}
\and
\varepsilon\vdash\lambda x.a\,(a\,(\dots\,(a\,x)\dots)):\hat\tau_\mathsf{f}\triangleright 0
}{
\varepsilon\vdash P_4:\hat\rho_2\triangleright 1
}
\end{mathpar}
Because $\hat\tau_\mathsf{f}$ provides no markers, it can be removed from the type environment and thus for $P_3$ we can use the derivation from the previous example.
We thus obtain a derivation for $P_4$ in which there are many order-$0$ and order-$1$ flags (but only one flag of order $2$).
This shows that in the flag counter we indeed need to count only the number of flags of the maximal order (not, say, the total number of flags of all orders).
\end{example}
\section{Completeness}\label{sec:compl}
The proof of the left-to-right implication of Theorem \ref{thm:types-ok} is divided into the following three lemmata.
Recall that a $\beta$-reduction $P\to_\beta Q$ is of order $n$ if it concerns a redex $(\lambda x.R)\,S$ such that $\mathit{ord}(\lambda x.R)=n$.
The number of nodes of a tree $t$ is denoted $|t|$.
As in Theorem \ref{thm:types-ok}, we denote $\hat\rho_m=(m,\emptyset,\{0,\dots,m-1\},o)$.
\begin{lemma}\label{lem:base}
Let $P$ be a closed $\lambda$-term of sort $o$ and complexity $m$, and let $t\in\mathcal{L}(P)$.
Then there exist $\lambda$-terms $Q_m,Q_{m-1},\dots,Q_0$ such that $P=Q_m$, and for every $k\in\{1,\dots,m\}$ the term $Q_{k-1}$ can be reached from $Q_k$ using only $\beta$-reductions of order $k$,
and we can derive $\varepsilon\vdash Q_0:\hat\rho_0\triangleright|t|$.
\end{lemma}
\begin{lemma}\label{lem:increase-m}
Suppose that we can derive $\varepsilon\vdash P:\hat\rho_m\triangleright c$.
Then we can also derive $\varepsilon\vdash P:\hat\rho_{m+1}\triangleright c'$ for some $c'\geq\log_2 c$.
\end{lemma}
\begin{lemma}\label{lem:c-step}
Suppose that $P\to_\beta Q$ is a $\beta$-reduction of order $m$, and we can derive $\Gamma\vdash Q:\hat\tau\triangleright c$ with $\mathit{ord}(\hat\tau)=m$.
Then we can also derive $\Gamma\vdash P:\hat\tau\triangleright c$.
\end{lemma}
Now the left-to-right implication of Theorem \ref{thm:types-ok} easily follows.
Indeed, take a closed $\lambda$-term $P$ of sort $o$ and complexity $m$ such that $\mathcal{L}(P)$ is infinite, and take any $c\in\mathbb{N}$.
By $\log^k_2$ we denote the $k$-fold application of the logarithm: $\log^0_2 x=x$ and $\log^{k+1}_2 x=\log_2(\log_2^k x)$.
Since $\mathcal{L}(P)$ is infinite, it contains a tree $t$ so big that $\log_2^m|t|\geq c$.
We apply Lemma \ref{lem:base} to this tree, obtaining $\lambda$-terms $Q_m,Q_{m-1},\dots,Q_0$ and a derivation of $\varepsilon\vdash Q_0:\hat\rho_0\triangleright|t|$.
Then repeatedly for every $k\in\{1,\dots,m\}$ we apply Lemma \ref{lem:increase-m}, obtaining a derivation of $\varepsilon\vdash Q_{k-1}:\hat\rho_k\triangleright c_k$ for some $c_k\geq\log^k_2|t|$,
and Lemma \ref{lem:c-step} for every $\beta$-reduction (of order $k$) between $Q_k$ and $Q_{k-1}$, obtaining a derivation of $\varepsilon\vdash Q_k:\hat\rho_k\triangleright c_k$.
We end with a derivation of $\varepsilon\vdash P:\hat\rho_m\triangleright c_m$, where $c_m\geq\log^m_2|t|\geq c$, as needed.
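For concreteness, here is a small worked instance of this chain (our own illustration): for complexity $m=2$ and a target value $c=3$ it suffices to take a tree $t$ with
\begin{align*}
\log_2\log_2|t|\geq 3,\qquad\mbox{i.e.,}\qquad|t|\geq 2^{2^3}=256;
\end{align*}
Lemma \ref{lem:base} then gives a derivation with flag counter $|t|\geq 256$, and the two applications of Lemma \ref{lem:increase-m} yield counters $c_1\geq\log_2 256=8$ and $c_2\geq\log_2 8=3$.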
In the remaining part of this section we prove the three lemmata.
\begin{proof}[Proof of Lemma \ref{lem:base} (sketch)]
Recall that $t\in\mathcal{L}(P)$ is a finite tree, thus it can be found in some finite prefix of the B\"ohm tree of $P$.
By definition, this prefix will be already expanded after performing some finite number of $\beta$-reductions from $P$.
We need to observe that these $\beta$-reductions can be rearranged, so that those of higher order are performed first.
\label{page:rearrange-beta}
The key point is to observe that when we perform a $\beta$-reduction of some order $k$, then no new $\beta$-redexes of higher order appear in the term.
Indeed, suppose that $(\lambda x.R)\,S$ is changed into $R[S/x]$ somewhere in a term, where $\mathit{ord}(\lambda x.R)=k$.
A new redex may appear when $R$ starts with a $\lambda$ and some argument is applied to the whole $R[S/x]$; this redex is of order $\mathit{ord}(R)\leq k$.
Other redexes may appear when $S$ starts with a $\lambda$ and is substituted for an appearance of $x$ to which some argument is applied; but such a redex is of order $\mathit{ord}(S)<k$.
We can thus find a sequence of $\beta$-reductions, arranged according to their order, that leads from $P$ to some $Q_0$ such that $t$ can be found in the prefix of $Q_0$ that is already expanded to a tree.
It is now routine to use the rules of our type system and derive $\varepsilon\vdash Q_0:\hat\rho_0\triangleright|t|$:
in every $\mathsf{br}$-labeled node we choose the subtree in which $t$ continues, and this results in counting the number of nodes of $t$ in the flag counter.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:increase-m}]
Consider some derivation of $\varepsilon\vdash P:\hat\rho_m\triangleright c$.
In this derivation we choose a leaf in which we will put the order-$m$ marker, as follows.
Starting from the root of the derivation, we repeatedly go to the premiss in which the flag counter is the greatest (arbitrarily in the case of a tie).
In every node that is not on the path to the selected leaf, we replace the current type judgment $\Gamma\vdash Q:(m,F,M,\tau)\triangleright d$ by $\Gamma\vdash Q:(m+1,F',M,\tau)\triangleright 0$,
where $F'=F\cup\{m\}$ if $d>0$, and $F'=F$ otherwise.
In the selected leaf and all its ancestors, we change the order from $m$ to $m+1$, we add $m$ to the set of marker orders, and we recalculate the flag counter.
Let us see how such transformation changes the flag counter on the path to the selected leaf.
We will prove (by induction) that the previous value $d$ and the new value $d'$ of the flag counter in every node on this path satisfy $d'\geq\log_2 d$.
In the selected leaf itself, the flag counter (being either $0$ or $1$) remains unchanged; we have $d'=d\geq\log_2 d$.
Next, consider any proper ancestor of the selected node.
Let $k$ be the number of those of its children in which the flag counter was positive, plus the number of order-$m$ flags placed in the considered node itself.
Let also $d_{\max}$ and $d_{\max}'$ be the previous value and the new value of the flag counter in this child that is in the direction of the selected leaf.
By construction, the flag counter in this child was maximal, which implies $k\cdot d_{\max}\geq d$, while by the induction assumption $d'_{\max}\geq\log_2 d_{\max}$.
For $d'$ we take the flag counter only from the child in the direction of the selected leaf, while every other child with a positive flag counter contributes $1$, i.e., $d'=k-1+d'_{\max}$.
Altogether we obtain $d'=k-1+d'_{\max}\geq k-1+\log_2d_{\max}\geq\log_2(k\cdot d_{\max})\geq\log_2 d$, where the middle inequality uses $k-1\geq\log_2 k$ for every integer $k\geq 1$, as required.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:c-step}]
We consider the base case when $P=(\lambda x.R)\,S$ and $Q=R[S/x]$; the general situation (redex being deeper in $P$) is easily reduced to this one.
In the derivation of $\Gamma\vdash Q:\hat\tau\triangleright c$ we identify the set $I$ of places (nodes) where we derive a type for $S$ substituted for $x$.
For $i\in I$, let $\Sigma_i\vdash S:\hat\sigma_i\triangleright d_i$ be the type judgment in $i$.
We change the nodes in $I$ into leaves, where we instead derive $\varepsilon[x\mapsto\{\hat\sigma_i\}]\vdash x:\hat\sigma_i\triangleright 0$.
It should be clear that we can repair the rest of the derivation, by changing type environments, replacing $S$ by $x$ in $\lambda$-terms, and decreasing flag counters.
In this way we obtain derivations of $\Sigma_i\vdash S:\hat\sigma_i\triangleright d_i$ for every $i\in I$, and a derivation of $\Sigma'\vdash R:\hat\tau\triangleright d$,
where $\Sigma'=\Sigma[x\mapsto\{\hat\sigma_i\mid i\in I\}]$ with $\Sigma(x)=\emptyset$,
and $\mathit{Split}(\Gamma\mid\Sigma,(\Sigma_i)_{i\in I})$, and $c=d+\sum_{i\in I}d_i$.
To the latter type judgment we apply the \TirName{($\lambda$)}\xspace rule, and then we merge it with the type judgments for $S$ using the \TirName{(\!@\!)}\xspace rule, which results in a derivation for $\Gamma\vdash P:\hat\tau\triangleright c$.
We remark that different $i\in I$ may give identical type judgments for $S$ (as long as the set of markers in $\hat\sigma_i$ is empty); this is not a problem.
The \TirName{(\!@\!)}\xspace rule requires that $\mathit{ord}(\hat\sigma_i)=\mathit{ord}(\lambda x.R)$; we have that $\mathit{ord}(\hat\sigma_i)=\mathit{ord}(\hat\tau)$, and $\mathit{ord}(\hat\tau)=m=\mathit{ord}(\lambda x.R)$ by assumption.
\end{proof}
\section{Soundness}\label{sec:sound}
In this section we sketch the proof of the right-to-left implication of Theorem \ref{thm:types-ok}.
Essentially, we need to reverse the proof from the previous section.
The following new fact is now needed.
\begin{lemma}\label{lem:zero-when-no-marker}
If we can derive $\Gamma\vdash P:(m,F,M,\tau)\triangleright c$ with $m-1\not\in M$ and $\mathit{ord}(P)\leq m-1$, then $c=0$.
\end{lemma}
A simple inductive proof is based on the following idea:
flags of order $m$ are created only when a marker of order $m-1$ is visible;
the derivation itself (together with free variables) does not provide it ($m-1\not\in M$), and the arguments, i.e.~sets $T_1,\dots,T_k$ in $\tau=T_1{\to}\dots{\to} T_k{\to} o$,
may provide only markers of order at most $\mathit{ord}(P)-1\leq m-2$ (see the definition of a type), thus no flags of order $m$ can be created.
We say that a $\lambda$-term of the form $P\,Q$ is an application \emph{of order $n$} when $\mathit{ord}(P)=n$, and that an \TirName{(\!@\!)}\xspace rule is \emph{of order $n$} if it derives a type for an application of order $n$.
We can successively remove applications of the maximal order from a type derivation.
\begin{lemma}\label{lem:s-step}
Suppose that $\varepsilon\vdash P:\hat\rho_m\triangleright c$ for $m>0$ is derived by a derivation $D$ in which the \TirName{(\!@\!)}\xspace rule of order $m$ is used $n$ times.
Then there exists $Q$ such that $P\to_\beta Q$ and $\varepsilon\vdash Q:\hat\rho_m\triangleright c$ can be derived by a derivation $D'$ in which the \TirName{(\!@\!)}\xspace rule of order $m$ is used less than $n$ times.
\end{lemma}
Recall from the definition of the type system that \TirName{(\!@\!)}\xspace rules of order higher than $m$ cannot be used while deriving a full type of order $m$.
Thus in $D$ we have type judgments only for subterms of $P$ of order at most $m$ (although $P$ may also have subterms of higher orders),
and in type environments we only have variables of order at most $m-1$.
In order to prove Lemma \ref{lem:s-step} we choose in $P$ a subterm $R\,S$ with $\mathit{ord}(R)=m$ such that there is a type judgment for $R\,S$ in some nodes of $D$ (at least one),
but no descendants of those nodes use the \TirName{(\!@\!)}\xspace rule of order $m$.
Since $R$ is of order $m$, it cannot be an application (otherwise we would have chosen that application instead of $R\,S$) nor a variable; thus $R=\lambda x.R'$.
We obtain $Q$ by reducing the redex $(\lambda x.R')\,S$; the derivation $D'$ is obtained by performing a surgery on $D$ similar to that in the proof of Lemma \ref{lem:c-step} (but in the opposite direction).
Notice that every full type $(m,F,M,\tau)$ (derived for $S$) with nonempty $M$ is used for exactly one appearance of $x$ in the derivation for $R'$;
full types with empty $M$ may be used many times, or not used at all, but thanks to Lemma \ref{lem:zero-when-no-marker} duplicating or removing the corresponding derivations for $S$ does not change the flag counter.
In the derivations for $R'[S/x]$ no \TirName{(\!@\!)}\xspace rule of order $m$ may appear, and the application $R\,S$ disappears, so the total number of \TirName{(\!@\!)}\xspace rules of order $m$ decreases.
When all \TirName{(\!@\!)}\xspace rules of order $m$ are eliminated, we can decrease $m$.
\begin{lemma}\label{lem:s-zero}
Suppose that $\varepsilon\vdash P:\hat\rho_m\triangleright c$ for $m>0$ is derived by a derivation $D$ in which the \TirName{(\!@\!)}\xspace rule of order $m$ is not used.
Then we can also derive $\varepsilon\vdash P:\hat\rho_{m-1}\triangleright c'$ for some $c'\geq c$.
\end{lemma}
The proof is easy; we simply decrease the order $m$ of all derived full types by $1$, and we ignore flags of order $m$ and markers of order $m-1$.
To obtain the inequality $c'\geq c$ we observe that when no \TirName{(\!@\!)}\xspace rule of order $m$ is used, the information about flags of order $m-1$ goes only from descendants to ancestors,
and thus every flag of order $m$ is created because of a different flag of order $m-1$.
By repeatedly applying the two above lemmata, out of a derivation of $\varepsilon\vdash P:\hat\rho_m\triangleright c$ we obtain a derivation of $\varepsilon\vdash Q:\hat\rho_0\triangleright c'$, where $P\to_\beta^*Q$ and $c'\geq c$.
Since $\hat\rho_0$ is of order $0$, using the latter derivation it is easy to find in the already expanded part of $Q$ (and thus in $\mathcal{L}(Q)=\mathcal{L}(P)$) a tree $t$ such that $|t|=c'\geq c$.
\section{Effectiveness}\label{sec:effective}
Finally, we show how Theorem \ref{thm:main} follows from Theorem \ref{thm:types-ok}, i.e., how given a $\lambda Y$-term $P$ of complexity $m$ we can check whether $\varepsilon\vdash P:\hat\rho_m\triangleright c$ can be derived for arbitrarily large $c$.
We say that two type judgments are equivalent if they differ only in the value of the flag counter.
\label{page:effective-def}
Let us consider a set $\mathcal{D}$ of all derivations of $\varepsilon\vdash P:\hat\rho_m\triangleright c$ in which on each branch (i.e., each root-leaf path) there are at most three type judgments from every equivalence class,
and among premisses of each \TirName{(\!@\!)}\xspace rule there is at most one type judgment from every equivalence class.
These derivations use only type judgments $\Gamma\vdash Q:\hat\tau\triangleright d$ with $Q$ being a subterm of $P$ and with $\Gamma(x)\neq\emptyset$ only for variables $x$ appearing in $P$.
Since a finite $\lambda Y$-term, even when seen as an infinitary $\lambda$-term, has only finitely many subterms,
this introduces a common bound on the height of all derivations in $\mathcal{D}$, and on their degree (i.e., on the maximal number of premisses of a rule).
It follows that there are only finitely many derivations in $\mathcal{D}$, and thus we can compute all of them.
We claim that $\varepsilon\vdash P:\hat\rho_m\triangleright c$ can be derived for arbitrarily large $c$ if and only if in $\mathcal{D}$ there is a derivation in which on some branch
there are two equivalent type judgments with different values of the flag counter (and the latter condition can be easily checked).
Indeed, having such a derivation, we can repeat its fragment between the two equivalent type judgments,
obtaining derivations of $\varepsilon\vdash P:\hat\rho_m\triangleright c$ with arbitrarily large $c$.
We use here an additivity property of our type system: if out of $\Gamma\vdash Q:\hat\tau\triangleright d$ we can derive $\Gamma'\vdash Q':\hat\tau'\triangleright d'$,
then out of $\Gamma\vdash Q:\hat\tau\triangleright d+k$ we can derive $\Gamma'\vdash Q':\hat\tau'\triangleright d'+k$, for every $k\geq-d$.
Conversely, take a derivation of $\varepsilon\vdash P:\hat\rho_m\triangleright c$ for some large enough $c$.
Suppose that some of its \TirName{(\!@\!)}\xspace rules uses two equivalent premisses.
These premisses concern the argument subterm, which is of smaller order than the operator subterm, and thus of order at most $m-1$.
The set of marker orders in these premisses has to be empty, as the sets of marker orders from all premisses have to be disjoint.
Thus, by Lemma \ref{lem:zero-when-no-marker}, the flag counter in our two premisses is $0$.
\label{page:effective-narrow}
In consequence, we can remove one of the premisses, without changing anything in the remaining part of the derivation, even the flag counters.
In this way we clean the whole derivation, so that at the end among premisses of each \TirName{(\!@\!)}\xspace rule there is at most one type judgment from every equivalence class.
The degree is now bounded, and at each node the flag counter grows only by a constant above the sum of flag counters from the children.
Thus, if $c$ is large enough, we can find on some branch two equivalent type judgments with different values of the flag counter.
Then, for some pairs of equivalent type judgments, we remove the part of the derivation between these type judgments (and we adapt the flag counters in the remaining part appropriately).
It is not difficult to perform this cleaning so that the resulting derivation will be in $\mathcal{D}$, and simultaneously on some branch there will remain two equivalent type judgments with different values of the flag counter.
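To make the shape of this final check concrete, below is a minimal sketch (our own illustration, not part of the original development; it assumes the finitely many derivations in $\mathcal{D}$ have already been computed and abstracted into trees): each node carries the equivalence class of its type judgment together with the flag counter, and we test whether some root-leaf branch contains two equivalent judgments with different counters.
\begin{verbatim}
# Minimal sketch (our illustration): detect, on some root-leaf branch,
# two equivalent type judgments with different flag counters.
from dataclasses import dataclass, field
from typing import Hashable, List

@dataclass
class Node:
    judgment: Hashable      # equivalence class (everything but the counter)
    counter: int            # flag counter of this judgment
    children: List["Node"] = field(default_factory=list)

def can_pump(root: Node) -> bool:
    def walk(node, seen):
        prev = seen.get(node.judgment)
        if prev is not None and prev != node.counter:
            return True     # two equivalent judgments, different counters
        seen[node.judgment] = node.counter
        found = any(walk(child, seen) for child in node.children)
        # undo the update, so that 'seen' describes a single branch
        if prev is None:
            del seen[node.judgment]
        else:
            seen[node.judgment] = prev
        return found
    return walk(root, {})
\end{verbatim}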
\section{Conclusions}
In this paper, we have shown an approach for expressing quantitative properties of B\"ohm trees using an intersection type system, on the example of the finiteness problem.
It is an ongoing work to apply this approach to the diagonal problem, which should give a better complexity than that of the algorithm from \cite{downward-closure}.
Another ongoing work is to obtain an algorithm for model checking B\"ohm trees with respect to the Weak MSO+U logic \cite{DBLP:conf/stacs/BojanczykT12}.
This logic extends Weak MSO by a new quantifier U, expressing that a subformula holds for arbitrarily large finite sets.
Furthermore, it seems feasible that our methods may help in proving a pumping lemma for nondeterministic HORSes.
\bibliographystyle{eptcs}
\section{Introduction}
A pattern that is sometimes seen in collimated stellar outflows
is a high-velocity, compact ``clump'', joined to the outflow
source by fainter emission with a linear ramp of increasing
velocity as a function of distance from the source. This
results in striking ``position-velocity'' (PV) diagrams (obtained,
e.g., from long-slit, high resolution spectra or from millimetre
interferometric ``position-velocity cubes'') with a linear
ramp ending in a bright, high-velocity condensation.
Alcolea et al. (2001) proposed that clumps with ``Hubble law tails''
(observed in the CO emission of a collimated, protoplanetary nebula
outflow) are produced in
``explosive events'' (i.e., with a duration much shorter than the
evolutionary time of the outflow). A ``velocity sorting''
mechanism (with higher velocity material racing ahead of slower
ejecta) would then produce the observed linear velocity vs. position
structure of the tails.
The most dramatic example of ``Hubble law tail clumps'' is of
course found in the molecular fingers pointing away from the
Orion BN-KL region (see, e.g., Allen \& Burton 1993; Zapata et al. 2011;
Bally et al. 2017). The $\sim 100$ fingers
all show CO emission with linearly increasing radial velocities away
from the outflow centre, and terminate in compact clumps (observed
in H$_2$ and in optical atomic/ionic lines).
Dennis et al. (2008) presented numerical simulations of variable jets
and of outflows composed of discrete ``clumps'', and conclude
that the clump-like outflows produce a compact ``head'' (i.e.,
the clump), followed by a tail of decreasing velocity material.
They favour this ``clump scenario'' for explaining the
observed ``Hubble law'' PV diagrams of clumps in planetary nebulae (PNe).
However, even though they obtain trails of decreasing velocity
material (between the clumps and the outflow source), these
trails do not show either the length or the very dramatic
linear velocity vs. position signatures of the observed clumps.
In the present paper we explore a scenario similar to the one of
Dennis et al. (2008), but instead of imagining a ``clump''
ejected from the source (with a well defined ejection velocity), we
propose a ``single pulse''-type ejection velocity (and density)
variability. Basically, during a finite time the source ejects material
first at increasing velocities, then reaching a maximum ejection velocity,
and finally decreasing down to zero. In principle, within this
``ejection episode'', the density of the ejected material could
also vary in an arbitrary way.
In sections 2-5
we present a simple analytic model of the
resulting ``head/tail plasmon'' flow,
calculate its time-evolution and obtain predicted
PV diagrams. We also compute an
axisymmetric numerical simulation of this
flow (with parameters appropriate for a clump in a PN),
and compare it with our analytic model (section 6).
\section{The plasmon model}
\subsection{Centre of mass equation of motion}
Let us consider a cylindrical outflow, with an ejection ``pulse''
beginning at an ejection time $\tau=-\tau_0$ and ending at $\tau=\tau_0$.
This pulse has an arbitrary ejection density $\rho_0(\tau)$ and
an ejection velocity $u_0(\tau)=0$ for $|\tau|\geq\tau_0$
and $u_0(\tau)>0$ for $|\tau|<\tau_0$. This ejection
travels into a stationary environment of uniform density $\rho_a$.
Clearly, as the ejection pulse evolves, the faster material ejected
at later times catches up with the slower, earlier ejection, producing
a shock wave. Also, a second shock wave (i.e., the bow shock) is
produced in the interaction of the jet with the surrounding
environment. This working surface is the ``head'' of the plasmon.
At later times, the ``tail'' region between the ``head'' and the source
is filled with the material ejected in the tail of the ejection pulse,
and has a velocity that increases out to the position of the ``head''.
We call this flow (shown in the schematic diagram of Figure 1)
the ``head/tail plasmon'' in order to distinguish
it from the plasmon of De Young \& Axford (1967).
\begin{figure}
\centering
\includegraphics[width=5cm]{fig1.eps}
\caption{Schematic diagram showing the ``head/tail'' plasmon. The head
(at a distance $x_{cm}$ from the source)
travels at a velocity $v_{cm}$ along the $x$-axis,
and the tail of unshocked material eventually develops a velocity
stratification with lower velocities closer to the outflow source.}
\label{fig1}
\end{figure}
Using the ``centre of mass'' formalism of Cant\'o et al. (2000),
we will assume that:
\begin{enumerate}
\item before reaching the working surface
the ejected material is free-streaming (as appropriate
for a hypersonic flow),
\item the working surface has a position that coincides with
the centre of mass of the material within it (calculated
as if the material were still free-streaming).
\end{enumerate}
This latter point is correct if the working surface can be
seen as an inelastic merger of flow parcels.
With these two points, the position of the head (i.e.,
the working surface) coincides with the centre of mass:
\begin{equation}
x_{cm}=\frac{\int_{-\tau_0}^\tau \rho_0 x_j u_0 d\tau'+\int_0^{x_{cm}}\rho_a x dx}
{\int_{-\tau_0}^\tau \rho_0u_0d\tau'+\int_0^{x_{cm}}\rho_adx}\,,
\label{xcm}
\end{equation}
where $u_0(\tau')$ and $\rho_0(\tau')$ are the time-dependent
ejection velocity and density (respectively), and $\rho_a(x)$ is
the environmental density. The outflow source is assumed to be
at $x=0$, and the cylindrical ejection is parallel to the
$x$-axis (see Figure 1).
The position $x_j$ of the fluid parcels (if they had not merged)
is given by the free-streaming relation:
\begin{equation}
x_j=(t-\tau')u_0(\tau')\,,
\label{xj}
\end{equation}
where $t$ is the ``evolutionary time'' (different from the ejection
time $\tau'$, satisfying the condition $t\geq \tau'$). The
upper limit $\tau$ of the integrals is given by the free-streaming
flow condition:
\begin{equation}
x_{cm}=(t-\tau)u_0(\tau)\,,
\label{xttau}
\end{equation}
for the ejected fluid parcels currently (i.e., at time $t$) entering the
working surface.
Now, combining equations (\ref{xcm}-\ref{xttau}), and considering
a uniform environment (with $\rho_a=const.$), we obtain:
$$
\frac{\rho_a x_{cm}^2}{2}+x_{cm}
\,\left[\int_{-\tau_0}^\tau \rho_0u_0d\tau'-
\frac{1}{u_0(\tau)} \int_{-\tau_0}^\tau \rho_0u_0^2 d\tau'\right]=
$$
\begin{equation}
\tau\int_{-\tau_0}^\tau\rho_0 u_0^2d\tau'-\int_{-\tau_0}^\tau \tau'\rho_0u_0^2d\tau'\,,
\label{xcm1}
\end{equation}
which, once the appropriate integrals over $\tau'$ have been carried
out, is a quadratic equation which gives us $x_{cm}(\tau)$. If we
want to know the position of the working surface as a function
of the evolutionary time $t$, we can calculate $t$ as a function
of $\tau$ and $x_{cm}(\tau)$ from equation (\ref{xttau}).
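Explicitly, rearranging equation (\ref{xttau}) gives the evolutionary time in terms of the ejection time:
$$t=\tau+\frac{x_{cm}(\tau)}{u_0(\tau)}\,.$$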
\subsection{Solution for a parabolic $u_0(\tau)$ pulse with constant
mass loss rate}
Let us now consider an ejection velocity pulse with
$u_0(\tau)=0$ for $|\tau|\geq\tau_0$ and:
\begin{equation}
u_0(\tau)=v_0\left[1-\left(\frac{\tau}{\tau_0}\right)^2\right]\,;\,\,\,
{\rm for}\,\,|\tau|<\tau_0\,,
\label{u0q}
\end{equation}
a parabola that goes to zero at $\tau=\pm \tau_0$ and has
a peak velocity $v_0$ at $\tau=0$. For the ejection density
$\rho_0(\tau)$, we assume that it is proportional
to the inverse of the ejection velocity, so that the
mass loss rate (per unit area)
\begin{equation}
{\dot m}=\rho_0(\tau)u_0(\tau)\,,
\label{mdot}
\end{equation}
is time-independent. However, an arbitrary ejection density
variability could be considered within our analytic framework.
With $u_0(\tau)$ and $\rho_0(\tau)$ given by equations
(\ref{u0q}-\ref{mdot}) we compute the integrals in equation (\ref{xcm1}),
obtaining:
\begin{equation}
\sigma\left(\frac{x_{cm}}{v_0\tau_0}\right)^2+
f\left(\frac{\tau}{\tau_0}\right)\,\frac{x_{cm}}{v_0\tau_0}=
g\left(\frac{\tau}{\tau_0}\right)\,,
\label{xcm3}
\end{equation}
with
\begin{equation}
\sigma\equiv \frac{\rho_av_0}{2{\dot m}}\,,
\label{acm1}
\end{equation}
\begin{equation}
f(\eta)\equiv \frac{(2\eta-1)(\eta+1)}{3(\eta-1)}\,;\,\,\,\,\,
g(\eta)=\frac{(3-\eta)(\eta+1)^3}{12}\,.
\label{acm3}
\end{equation}
\section{The ``free plasmon'', $\sigma=0$ case}
In the $\sigma\to 0$ limit
(see equation \ref{acm1}) of a very low density environment,
equation (\ref{xcm3}) has the solution:
\begin{equation}
\frac{x_{cm}}{v_0\tau_0}=\frac{g(\tau/\tau_0)}{f(\tau/\tau_0)}\,,
\label{xcm5}
\end{equation}
with $f$ and $g$ given by equation (\ref{acm3}).
Substituting equation (\ref{acm3}) in (\ref{xcm5}) we
obtain:
\begin{equation}
\frac{x_{cm}}{v_0\tau_0}=\frac{(3-\eta)(\eta-1)(\eta+1)^2}
{4(2\eta-1)}\,,
\label{xcm55}
\end{equation}
where $\eta=\tau/\tau_0$.
Using the free-streaming flow condition (equation \ref{xttau}),
and equations (\ref{u0q}) and (\ref{xcm55}) we obtain:
\begin{equation}
\frac{t}{\tau_0}=\frac{3(\eta-1)(1+3\eta)}{4(2\eta-1)}\,.
\label{ttau6}
\end{equation}
Clearly, both $x_{cm}$ and $t$ $\to \infty$ for $\tau\to \tau_0/2$
(see equations \ref{xcm55}-\ref{ttau6}).
The velocity $v_{cm}=dx_{cm}/dt$ can be obtained from equations
(\ref{xcm55}-\ref{ttau6}):
\begin{equation}
\frac{v_{cm}}{v_0}=\frac{1}{3}(2+\eta-\eta^2)\,.
\label{vcm}
\end{equation}
Therefore, for $t\to \infty$ ($\tau\to \tau_0/2$)
the plasmon head reaches an asymptotic velocity
\begin{equation}
v_a=u_0\left(\frac{\tau_0}{2}\right)=\frac{3}{4}v_0\,.
\label{va}
\end{equation}
This result implies that the material ejected in the part
of the pulse with $\tau>\tau_0/2$ (see equation \ref{u0q}) never
reaches the plasmon head.
In the ($t\to \infty$, $\tau\to \tau_0/2$) asymptotic regime,
the head of the plasmon has a mass (per unit area)
$m_{h,a}=3{\dot m}\tau_0/2$ and the tail has a mass
$m_{t,a}={\dot m}\tau_0/2$. Therefore, out of the total ejected mass
$m_{tot}=2{\dot m}\tau_0$, a fraction of 3/4 ends up in the head
and 1/4 in the tail.
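These asymptotic masses follow directly from the time-independent mass loss rate, since only the material ejected at $\tau'\leq\tau_0/2$ ever reaches the head:
$$m_{h,a}={\dot m}\left[\frac{\tau_0}{2}-(-\tau_0)\right]=\frac{3{\dot m}\tau_0}{2}\,,\qquad
m_{t,a}={\dot m}\left(\tau_0-\frac{\tau_0}{2}\right)=\frac{{\dot m}\tau_0}{2}\,.$$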
In Figure 2 we plot $x_{cm}/(v_0\tau_0)$ (see equation \ref{xcm5})
as a function of $t$ (which is obtained from $\tau,\,x_{cm}(\tau)$
through equation \ref{xttau}). We also plot the velocity
$v_{cm}$ (given by equation \ref{vcm}) as a function of the
evolutionary time $t$. From this figure it is clear that the plasmon
head first accelerates, and for $t>\tau_0$ starts to approach
the asymptotic velocity given by equation (\ref{va}).
\begin{figure}
\centering
\includegraphics[width=6cm]{fig2.eps}
\caption{Position $x_{cm}$ (top frame) and velocity $v_{cm}$ (bottom frame)
of the head of the plasmon as a function of time for models with
$\sigma=0$ (top curves), 0.1, 1.0 and 10 (bottom curves).}
\label{fig2}
\end{figure}
\section{The $\sigma> 0$ case}
For $\sigma>0$, equation (\ref{xcm3}) can be inverted to obtain:
\begin{equation}
\frac{x_{cm}}{v_0\tau_0}=\frac{1}{2\sigma}
\left[-f\left(\frac{\tau}{\tau_0}\right)+\sqrt{f^2\left(\frac{\tau}{\tau_0}\right)
+4\sigma g\left(\frac{\tau}{\tau_0}\right)}\right]\,.
\label{xcm6}
\end{equation}
The centre of mass positions and velocities as a function of $t$ obtained
for different $\sigma$ values are shown in Figure 2.
For the $\sigma=0.1$ case (see Figure 2), $x_{cm}$ and $v_{cm}$ initially
follow the $\sigma=0$ solution (see equation \ref{xcm5}), and start deviating
for $t>0$, when the plasmon head begins to brake in an appreciable way. The
$\sigma=1$ and 10 solutions show substantial braking for all $t$.
The $\sigma>0$ solutions show plasmon heads that first accelerate, then reach
a maximum velocity, and subsequently brake for increasing times $t$. For
$\sigma\ll 1$, the plasmon head first reaches a velocity similar to the
asymptotic velocity $v_a$ of the ``free plasmon'' (see equation \ref{va}) and
then slowly slows down for increasing times. For $\sigma>1$, the velocity of the
plasmon head does not reach values $\sim v_a$.
We should note that in the $\sigma>0$ solutions, all of the mass ejected in the
pulse eventually ends up in the plasmon head.
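As an illustration of how the solutions of equations (\ref{xcm5}) and (\ref{xcm6}) can be evaluated in practice, the following short script (our own sketch, written in the dimensionless units $v_0=\tau_0=1$; the velocity $v_{cm}$ is obtained by numerical differentiation) computes the parametric head trajectory:
\begin{verbatim}
# Sketch (ours): head position x_cm, evolutionary time t and velocity
# v_cm, parametrized by eta = tau/tau0, in units v0 = tau0 = 1.
import numpy as np

def f(eta):  return (2*eta - 1)*(eta + 1)/(3*(eta - 1))
def g(eta):  return (3 - eta)*(eta + 1)**3/12
def u0(eta): return 1 - eta**2          # parabolic ejection pulse

def head(eta, sigma):
    if sigma == 0:                      # free plasmon: x_cm = g/f
        x = g(eta)/f(eta)
    else:                               # root of sigma x^2 + f x = g
        x = (-f(eta) + np.sqrt(f(eta)**2 + 4*sigma*g(eta)))/(2*sigma)
    t = eta + x/u0(eta)                 # free-streaming condition
    return x, t

# for sigma = 0 the parameter must stay below 1/2 (x_cm, t -> infinity)
eta = np.linspace(-0.99, 0.45, 2000)
x, t = head(eta, sigma=0.1)
v = np.gradient(x, t)                   # v_cm = dx_cm/dt
\end{verbatim}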
\section{Position-velocity diagrams}
Evidently, the ``head/tail plasmon'' model is attractive for trying to explain
fast moving clumps which have a tail of decreasing velocity emission towards
the outflow source. When observed with spatially resolved spectroscopy
or with interferometric millimeter observations these clumps
show position-velocity (PV) diagrams
with a high velocity, compact emission at a given position, and a ramp
of emission with increasing radial velocities from the source out to the
clump.
In Figure 3 we show the positions
and velocities of the head at different times, and the velocity
of the material in the ``tail'' of the plasmon. This
velocity is directly obtained from the free-streaming relation:
\begin{equation}
u(x,t)=u_0(\tau)=\frac{x}{t-\tau}\,,
\label{uxt5}
\end{equation}
where $u_0(\tau)$ is given by equation (\ref{u0q}). This can be easily
done in a parametric way by varying $\tau$ (at a fixed evolutionary
time $t$), using the first equality to calculate the velocity $u(x,t)$
and then the second equality for obtaining the corresponding position
$x$ along the tail of the plasmon.
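A possible implementation of this parametric construction (our own sketch, again with $v_0=\tau_0=1$; the ejection time of the material entering the head at time $t$ follows from the solutions of sections 3 and 4, and is passed here as an illustrative number) is:
\begin{verbatim}
# Sketch (ours): velocity vs. position along the tail at a fixed
# evolutionary time t; the tail is the (still free-streaming)
# material ejected after tau_head.
import numpy as np

def u0(tau):
    return np.where(np.abs(tau) < 1.0, 1.0 - tau**2, 0.0)

def tail_pv(t, tau_head, n=500):
    tau = np.linspace(tau_head, 1.0, n)  # ejection times in the tail
    u = u0(tau)                          # u(x, t) = u0(tau)
    x = (t - tau)*u                      # x = (t - tau) u0(tau)
    return x, u

x, u = tail_pv(t=5.0, tau_head=0.45)     # illustrative values
\end{verbatim}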
Figure 3 shows
the PV diagrams obtained for different values of $t$ for four
models with $\sigma=0$, 0.1, 1 and 2. The
$\sigma=0$ model has a PV diagram that becomes more extended
along the outflow axis with time, with a plasmon head that
shows a decreasing acceleration for increasing times.
The models with higher $\sigma$ values have PV diagrams that have
lower peak velocities as a function of $t$.
Regardless of the value of $\sigma$, the ``head/tail'' plasmons
develop an almost linear, ``Hubble law'' velocity vs. distance
profile at evolutionary times $t\gg\tau_0$. This result
follows from equation (\ref{uxt5}), which in the
$t\gg\tau\sim \tau_0$ limit gives $u(x,t)\approx x/t$ (i.e.,
at a given time $t$ we have a ``Hubble law'' of slope $1/t$).
\begin{figure}
\centering
\includegraphics[width=6cm]{fig3.eps}
\caption{Velocity along the outflow axis vs. distance from the outflow
source at different evolution times. The plots are labeled with the
value of $\sigma$ of the model (from $\sigma=0$ at the top to $\sigma=2$
on the bottom). The $\sigma=0$ frame (top graph) shows
the velocity along the tail as a function of $x$ for times $t/\tau_0=0$
(shortest curve), 1, 3, 5 and 7 (spatially most extended curve).
The $\sigma=0.1$ frame shows the velocity vs. position at times
$t/\tau_0=0$, 2, 4, 6 and 8. The $\sigma=1$ and 2
frames (two bottom graphs) show the velocity vs. position at times
$t/\tau_0=0$, 5, 10, 15 and 20. The open circles located at the end of each
curve show the position and velocity of the head of the plasmon.}
\label{fig3}
\end{figure}
\section{Numerical simulation}
We have computed an axisymmetric gasdynamic simulation
of a ``head/tail plasmon'' with parameters for a high-velocity
clump in a PN (see, e.g., Alcolea et al. 2001)
using the {\sc Walicxe-2D} code (Esquivel et
al. 2009). We use a setup with an adaptive mesh
with 5 refinement levels giving a maximum resolution of
14.64~AU in a computational domain of
$15000 \times 3750$~AU. We used a reflective boundary condition
on the symmetry axis and free outflow for all of the other boundaries.
The ejection velocity pulse is imposed at $x=0$, with a radius $r_j=10^{16}$~cm,
a time half-width $\tau_0=50$~yr and a peak velocity $v_0=200$~km~s$^{-1}$
(see equation \ref{u0q}).
The total mass of the pulse is $M_p=10^{-4}$~M$_\odot$. For calculating
the ejection density, we impose a constant mass loss rate
per unit area ${\dot m}=M_p/(2\pi r_j^2\tau_0)=2.0\times
10^{-13}$~g~cm$^{-2}$~s$^{-1}$, and calculate the density as:
\begin{equation}
\rho_0(\tau)=\frac{\dot m}{\max\, [u_0(\tau),v_{min}]}\,,
\label{rho000}
\end{equation}
with $v_{min}=1$~km~s$^{-1}$ ($v_{min}$ is introduced in order to avoid
the divergence of the density for $u_0\to 0$).
Initially, the computational domain is filled
with a uniform environment of numerical density
$n_a=1963.3$ cm$^{-3}$, which, combined with the properties of the
pulse, gives $\sigma=0.1$ (see equation \ref{acm1}). Both the environment
and the ejected material have an initial temperature of $10^4$~K, and
have singly ionized H.
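For reference, the quoted mass loss rate per unit area and the ambient mass density implied by $\sigma=0.1$ can be verified with a few lines (our own arithmetic; the conversion between this mass density and the quoted number density depends on the assumed mean mass per particle, which we leave open):
\begin{verbatim}
# Consistency check (ours) of the pulse parameters quoted above.
import math

Msun, yr = 1.989e33, 3.156e7        # g, s
M_p  = 1e-4*Msun                    # pulse mass [g]
r_j  = 1e16                         # ejection radius [cm]
tau0 = 50*yr                        # pulse half-width [s]
v0   = 200e5                        # peak ejection velocity [cm/s]

mdot = M_p/(2*math.pi*r_j**2*tau0)  # mass loss rate per unit area
print(f"mdot  = {mdot:.1e} g cm^-2 s^-1")   # ~2.0e-13, as quoted

sigma = 0.1
rho_a = 2*sigma*mdot/v0             # from sigma = rho_a v0 / (2 mdot)
print(f"rho_a = {rho_a:.1e} g cm^-3")       # ~2.0e-21 g cm^-3
\end{verbatim}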
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{fig4.eps}
\caption{H$\alpha$ maps obtained from the numerical simulation
for evolutionary times $t/\tau_0$=4, 6 and 8 (top, middle and bottom panels, respectively),
assuming a $\phi=30^\circ$ angle between the outflow axis and the plane of the sky.
The maps are normalized to the peak emission of the top frame, and are shown with
the logarithmic colour scale given by the bar to the right of the top frame.}
\label{fig4}
\end{figure}
In the simulation, a minimum temperature of $10^4$~K is imposed
in all cells at all times (also assuming that H is always fully ionized),
and the parametrized cooling function of Biro \& Raga (1994) is used for
$T>10^4$~K. This setup is meant to approximate the behaviour of the gas within
a photoionized region. Throughout our simulation, the bow shock has a shock
velocity $\sim 100$~km~s$^{-1}$, which together with the pre-shock ambient
density ($n_a\approx 2000$~cm$^{-3}$, see above) gives a cooling distance
$d_c\sim 1$~AU to $10^4$~K (from the plane-parallel shock models of Hartigan et al. 1987),
which is unresolved in our simulation. The slower ``jet shock'' develops velocities
as low as $\sim 20$~km~s$^{-1}$, and does not have substantial cooling in this regime.
From this simulation, we have calculated predicted
H$\alpha$ maps and PV diagrams. These are obtained by computing the H$\alpha$
emission coefficient (using the interpolation of Aller 1994), and
integrating it through lines of sight.
Figure \ref{fig4} shows the H$\alpha$ emission maps obtained
for evolutionary times $t/\tau_0$=4, 6 and 8 (upper, middle and
bottom panels, respectively), assuming a $\phi=30^{\circ}$ angle
between the outflow axis and the plane of the sky. From this figure
we see that the H$\alpha$ emission has two components: the plasmon
head and the tail. This latter component is brightest close to the
outflow source. The bow shock at the head of the plasmon is rather broad,
which is a result of the fact that the Mach number of the flow is not so high
(going down to $\sim 10$ towards the end of the simulation).}
We also calculate the PV diagrams for evolutionary times
$t/\tau_0$=4, 6 and 8, and a $\phi=30^{\circ}$ angle between the
outflow axis and the plane of the sky (see Figure \ref{fig5}).
For the PV diagrams, we have assumed that we have a spectrograph slit
with a full width of 100~AU, centred on the outflow axis. The
resulting PV diagrams show a clear ``Hubble law'' ramp of increasing
radial velocities vs. distance from the source, ending in a broad
emission line region corresponding to the head of the plasmon.
In Figure \ref{fig5} we also plot
the (appropriately projected) velocity
vs. position obtained from the analytical model
(see section 5 and Figure \ref{fig3}). The ``Hubble law'' feature of
the tail agrees very well with the results obtained from the numerical
simulation. Also, the analytic position of the plasmon head falls in
the middle of the spatially quite extended emission predicted from the
numerical simulation (this spatial extent being partly the result of the
projection of the wide bow shock onto the plane of the sky).
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{fig5.eps}
\caption{Position-velocity (PV) diagrams obtained from the simulation
for evolutionary times $t/\tau_0$=4, 6 and 8 (top, middle and bottom panels, respectively),
assuming a $\phi=30^{\circ}$ angle between the outflow axis and the plane of the
sky. The PV diagrams
are normalized to the peak emission of the top frame, and are shown with
the logarithmic colour scale given by the bar to the right of the top frame.
The (appropriately projected) velocity vs. position obtained from the
analytical solution (for the corresponding evolutionary times) is shown with
the dashed white curves.}
\label{fig5}
\end{figure}
\section{Conclusions}
We present a model for a hypersonic ``single pulse jet'', produced by a collimated
outflow event with an ejection velocity history with a single peak, and
wings of decreasing velocity (at earlier and later times). An arbitrary
form for a simultaneous ejection density variability is also possible.
Such an ejection results in the formation of a ``head'' associated with
a working surface travelling through the surrounding environment, and a
``tail'' of slower material (formed by the decaying velocity tail of
the outflow event) which rapidly develops a linear, ``Hubble law''
kinematical signature. We call this flow configuration a ``head/tail plasmon''.
We study the simple case of a parabolic ejection velocity pulse (which
could be viewed as a second order Taylor series of the peak of an arbitrary
ejection pulse), with a time-independent mass loss rate (i.e.,
the ejection density is proportional to the inverse of the ejection velocity).
With a ``centre of mass formalism'', we obtain the motion of the head
of the ``head/tail plasmon'' (see section 2).
In the limit of a very low density environment (see section 3)
the head of the plasmon reaches a constant velocity, and the material in the
tail at all times retains a substantial fraction (asymptotically
approaching 1/4) of the total mass of the ejection event. For denser
environments (see section 4), the plasmon slows down, and the
head ends up incorporating most of the mass of the ejection pulse.
For all flow parameters, the predicted PV diagrams rapidly develop
a ``Hubble law'' kinematical signature (see section 5).
Finally, we compute an axisymmetric gasdynamic simulation with parameters
appropriate for a high velocity clump in a PN (see section 6).
We compute H$\alpha$ emission maps and PV diagrams showing the observational
characteristics of this flow.
The predicted PV diagrams obtained from the simulation agree very well with
the analytic model (see Figure 5).
This paper represents a first exploration of a different kind of jet or plasmon
flow. A detailed application
of this model to different objects will be necessary to show what improvements
are found with respect to previous models, such as the ones of Dennis et al.
(2008) for knots in PNe, or the ones of Rivera et al. (2019a, b)
for the Orion BN-KL fingers.
\section*{Acknowledgments}
We acknowledge support from the PAPIIT (UNAM) project
IG100218/BG100218. ACR acknowledges support from a DGAPA-UNAM posdoctoral
fellowship. We thank an anonymous referee for helpful comments.
\noindent Data availability: The lead author may be
contacted for access to the results of the simulations.
\section{Introduction}
Recent advances in miniaturization, robotics, sensor technology and communications have revolutionized Unmanned Aerial Vehicles (UAVs) and brought about their adoption in a wide range of applications. One such application is the use of UAVs carrying base station equipment, acting as aerial base stations that can dynamically re-position themselves to meet the coverage and capacity demands of existing networks \unskip~\cite{flyingdronebs,dronebsplacement,DBLP:journals/corr/abs-1809-01752,7974285,uav_wcomm, 7994915,dronebs2}. Such aerial base stations could supplement terrestrial infrastructure when it is overloaded or unavailable, as presented in the context of $5G$ networks in \unskip~\cite{5gtutorial, UAV5g}. While the majority of these proposals considered non-mobile UAVs hovering over a service area, some recent works \unskip~\cite{7974285, flyingdronebs, mabs} have argued for the use of flying (or cruising) aerial base stations wherein the UAVs continue to service ground nodes while in flight. The trajectory, i.e., the movement pattern of the aerial base stations, is tailored so as to maximize network performance in the presence of geospatial variation in user demand, or to improve spectral efficiency. A prototype demonstrating the use of flying aerial base stations was developed recently by Eurecom \unskip~\cite{eurecom}.
UAVs rely on an on-board battery for power, which limits their operational duration before recharging is required. Researchers have investigated power-efficient operations of UAVs to extend battery lifetime by reducing the energy consumed for communications (electronics) and mobility (mechanical), as summarized in \unskip~\cite{DBLP:journals/corr/abs-1809-01752}. Since extending the lifetime of the battery does not eliminate the need for recharging, a promising and parallel direction of research involves investigating ways for recharging UAVs to ensure service continuity. In particular, mechanisms for replenishing energy without disrupting the UAV's usual trajectory (in the case of flying UAVs) or deployed locations (in the case of non-mobile UAVs), where the UAV is not required to move to a different location to receive power, are essential for uninterrupted service provisioning. In this paper, we propose an architecture for recharging cruising UAVs using energy harvesting from received Radio Frequency (RF) signals transmitted by \textit{dedicated, non-mobile airborne} UAVs equipped with RF transmitters, referred to as transmitter UAVs ($tUAV$s). In particular, we study the optimum placement of the $tUAV$s to maximize the received energy by the receiver UAVs ($rUAV$s).
Researchers in \unskip~\cite{RF_UAV} have also explored RF energy harvesting for recharging UAVs. However, they rely on terrestrial energy sources for charging UAVs while we consider airborne chargers. As such, our approach can be used in a wide range of scenarios where deployment of terrestrial chargers may not always be possible, for example where UAVs are deployed to monitor ground sensors in a forest. The energy transfer efficiency is influenced by both distance and the presence of obstacles (line-of-sight vs no-line-of-sight). Our approach offers flexibility to address both these issues. We can position the airborne energy sources in a way that would minimize this distance and improve line-of-sight RF links, thus increasing energy transfer efficiency. Moreover, our work is the first to consider the optimal positioning of $tUAV$s that maximizes the total received energy by the $rUAV$s.
Our contributions in this paper are as follows: (i) we propose a UAV recharging architecture using wireless power transfer from carefully positioned, airborne, stationary energy sources that provide power to the UAVs without disrupting their trajectories, (ii) we provide a mathematical model to derive optimal placement of the energy sources to maximize the total received energy in the system, and (iii) we consider a specific scenario of two $rUAV$s moving along a linear trajectory servicing ground nodes stationed within a square region and use our model to determine the optimal locations for two $tUAV$s. From our solutions to the optimal placement problem, we observed that for this specific scenario, the optimal placement of an even number of energy sources will also result in fairness in terms of an equal amount of received energy by all $rUAV$s. However, we found that if we used an odd number of energy sources, either fairness could be achieved or the total amount of received energy could be maximized, but not both at the same time. Our numerical results revealed that placing the charging nodes at the suggested optimal locations resulted in significant power gain compared to non-optimal placements.
The rest of the paper is organized as follows. Section \unskip~\ref{sec:uca} presents our proposed UAV recharging architecture. We first present a general case of any number of $tUAV$s and $rUAV$s, followed by a specific case of two $tUAV$s and two $rUAV$s. We solve the specific case of energy source placement in Section \unskip~\ref{sec:ssc}. We provide implications of our solutions in Section \unskip~\ref{ram}, ending our paper with some numerical results and conclusion in Sections \unskip~\ref{num} and \unskip~\ref{conclu}.
\begin{figure}[!tbp]
\centering
\includegraphics[width=8cm,height=6cm,keepaspectratio]{Fig1_NEW.jpg}
\caption{The system model for flying base station recharging. The $rUAV$s represent the power-receiver $UAV$s, and the $tUAV$s represent the RF energy sources. Only the $(x,y)$ coordinates of the $tUAV$s are shown, since these are placed at the same height as those of the $rUAV$s.}
\label{figure-14f4abc00bef73fd1e0fb3b53d0d4368}
\end{figure}
\section{UAV Recharging Architecture: System Model}
\label{sec:uca}
Our UAV charging architecture is shown in Figure \unskip~\ref{figure-14f4abc00bef73fd1e0fb3b53d0d4368}, which is used for charging a number of cruising UAVs that fly back and forth with a linear trajectory over a square area of side length $l$. The trajectories of cruising UAVs can in general be of any form, e.g., geometric (circular, linear) or otherwise, as shown in \unskip~\cite{7974285, flyingdronebs}; in our work, we assume a linear trajectory. These cruising UAVs are the RF energy receiver UAVs, the $rUAV$s, and fly back and forth on a path parallel to the horizontal axis of the square area, with a constant speed $V$. The $rUAV$s harvest energy from the received RF signals while in service, from airborne, dedicated energy sources. We assume that these energy sources are specialized UAVs, equipped with wireless power transmitters, and refer to them as transmitter UAVs, the $tUAV$s. The $tUAV$s are placed at fixed locations (i.e., non-mobile) over this area with their $(x,y)$ coordinates given by $(x_1,y_1)$ \& $(x_2,y_2)$, etc., and their $z$ coordinate (the height) is the same as that of the $rUAV$s. Keeping the heights of the $tUAV$s and the $rUAV$s the same improves the amount of received RF power, an adjustment that is possible because we use airborne energy sources, as opposed to terrestrial energy sources with non-adjustable heights. The $tUAV$s are assumed to have a wire-line connection to the ground for a constant power supply \unskip~\cite{wireline_drone}. In order to avoid the chance of collision, the $tUAV$s must be placed outside the collision zone $R1$, anywhere in zone $R2$ as shown in Figure \unskip~\ref{figure-14f4abc00bef73fd1e0fb3b53d0d4368}. By careful positioning of the $tUAV$s in terms of their $(x, y)$ coordinates, this architecture aims to maximize the energy received by the $rUAV$s during their flight time to travel one side of the square, achieving service continuity without disrupting the $rUAV$s' trajectories. We provide a general model for this architecture next.
The time taken by an $rUAV$ to travel one side length of the square area is given by $T={l}/{V}$. The locations of $rUAV_1$ and $rUAV_2$, which are on the parallel edges of the square, at time $t$ are given by $(Vt,0)$ and $(Vt,l)$ respectively, for $t\in[0,T]$. The received power of far-field RF transmission attenuates as the reciprocal of the squared distance between the transmitter and the receiver. Therefore, the harvested RF power ($P_R$) at the receiver can be calculated using the Friis free space propagation model \unskip~\cite{friis-2} as:
\begin{equation}\label{eqfriis}
P_R = \frac{P_TG_TG_R\lambda^{2}}{(4\pi R)^{2}}
\end{equation}
where $P_T$ is the transmit power, $G_T$ and $G_R$ are the antenna gains of the transmitter and the receiver, $\lambda$ is the power transfer wavelength, and $R$ is the distance between the transmitter and the receiver. Without loss of generality, we can say that the received power varies inversely with the \textit{square of the distance} between the transmitter and the receiver, which our model is based on. In Section \unskip~\ref{num}, we use specific values of the other parameters of the Friis equation to estimate received power. The distance of $rUAV_1$ from $tUAV_1$ at time $t$ is $ \sqrt{(Vt-x_1)^2+y_1^2}$. So, the energy received by $rUAV_1$ from $tUAV_1$ over $[0,T]$ is
\begin{equation}
\propto \int_0^T \frac{dt}{(Vt-x_1)^2+y_1^2}.
\end{equation}
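To make the proportionality concrete, the following Python sketch (our illustration, not part of the original model; the location and speed values are placeholder choices) evaluates this integral numerically:
\begin{verbatim}
# Illustrative sketch: numerically evaluate the received-energy
# integral for rUAV_1 and one tUAV at (x1, y1). All parameter values
# below are placeholders chosen for demonstration.
import numpy as np

l, V = 80.0, 10.0        # side length (m) and rUAV speed (m/s)
T = l / V                # time to traverse one side
x1, y1 = 40.0, 5.0       # candidate tUAV location

t = np.linspace(0.0, T, 10001)
integrand = 1.0 / ((V * t - x1) ** 2 + y1 ** 2)
print(np.trapz(integrand, t))   # proportional to E_{rUAV_1, tUAV_1}
\end{verbatim}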
For a general $rUAV$ path $(x(t), y(t)), \phantom{-}0 \le t \le T$, i.e., the $rUAV$ located anywhere in the considered area, energy received by an $rUAV$ from a $tUAV$ located at $(x_1,y_1)$ is
\begin{equation}
\propto \int_0^T \frac{dt}{(x(t)-x_1)^2+(y(t)-y_1)^2}.
\end{equation}
Let $E_{rUAV_k,tUAV_j}$ be the energy received by $rUAV_k$ from $tUAV_j$ over time $0 \le t \le T$. Then the total energy received by $rUAV_k$ is:
\begin{equation}
E_k = \sum_{j} E_{rUAV_k,tUAV_j} \propto \sum_{j} \int_0^T \frac{dt}{(x_k(t)-x_j)^2+(y_k(t)-y_j)^2}.
\end{equation}
The total energy received by all $rUAV$s from all $tUAV$s is given by:
\begin{equation}
E_{total} = \sum_{k} E_{k} = \sum_{k}\sum_{j} E_{rUAV_k, tUAV_j}
\end{equation}
where $(x_k(t),y_k(t))$ is the flight path of $rUAV_k$ for $0 \le t \le T$ and $(x_j,y_j)$ is the $j^{th}$ transmitter UAV's ($tUAV_j$) location. The $E_{total}$ can also be calculated by summing up the given energy by all $tUAV$s to all $rUAV$s, as:
\begin{equation}
E_{total} = \sum_{j} \sum_{k} E_{rUAV_k, tUAV_j}
\end{equation}
where $\sum_{k} E_{rUAV_k, tUAV_j}$ is the energy provided by $tUAV_j$ to all $rUAV$s. In order to gain an insight into solving the energy source placement problem, we focus on a specific case of \textit{two transmitters} and \textit{two receivers} next.
\subsection{The Case of Two $tUAV$s and Two $rUAV$s}
In this section, we consider a scenario where two $tUAV$s ($tUAV_1$ and $tUAV_2$) are placed at locations $(a_1,b_1)$ \& $(a_2,b_2)$, at the same height level as the two $rUAV$s ($rUAV_1$ and $rUAV_2$). The $rUAV$s fly back and forth over straight-line paths along two parallel edges of the square, separated from each other by a distance $l$, the side length of the square. Note that the paths of the two $rUAV$s are given by $(x_1(t),y_1(t))=(Vt,0)$, and $(x_2(t),y_2(t))=(Vt,l)$ for $0 \le t \le T$. The total energy received by $rUAV_1$ is given by
\begin{equation}
\begin{aligned}
E_1 \propto \int_0^T \frac{dt}{(x_1(t)-a_1)^2+(y_1(t)-b_1)^2} +\\
\int_0^T \frac{dt}{(x_1(t)-a_2)^2+(y_1(t)-b_2)^2}.
\end{aligned}
\end{equation}
Replacing the path position values of $rUAV_1$, we get
\begin{equation}
\begin{aligned}
E_1 \propto \int_0^T \frac{dt}{(Vt-a_1)^2+b_1^2} +\\
\int_0^T \frac{dt}{(Vt-a_2)^2+b_2^2}.
\end{aligned}
\end{equation}
Similarly,
\begin{equation}
\begin{aligned}
E_2 \propto \int_0^T \frac{dt}{(Vt-a_1)^2+(l-b_1)^2} +\\
\int_0^T \frac{dt}{(Vt-a_2)^2+(l-b_2)^2}.
\end{aligned}
\end{equation}\\
So, our objective is to maximize $E_1+E_2$, or equivalently
\begin{equation*}
P:\max_{a_1,b_1,a_2,b_2} \quad E_1+E_2
\end{equation*}
\begin{equation*}
\text{s.t.} \quad 0 \le a_j \le l \; \quad j=1,2
\end{equation*}
\begin{equation}
\varepsilon \le b_j \le l -\varepsilon \; \quad j=1,2
\end{equation}
Here, $\varepsilon \in (0,l/2)$ is the width of the region $R1$ as in Figure \unskip~\ref{figure-14f4abc00bef73fd1e0fb3b53d0d4368}, representing the collision area width of each $rUAV$ within which no $tUAV$s are to be placed.
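Before solving Problem $(P)$ analytically, a brute-force numerical check is instructive. Since the objective decomposes per $tUAV$, it suffices to grid-search the contribution of a single $tUAV$ at $(a,b)$; the sketch below (our illustration, with arbitrary grid resolutions and placeholder parameter values) confirms where the maximum lies:
\begin{verbatim}
# Illustrative grid search for Problem (P), one tUAV at a time.
import numpy as np

l, V, eps = 80.0, 10.0, 5.0   # placeholder values
T = l / V
t = np.linspace(0.0, T, 2001)

def phi(a, b):                # energy-proportional integral
    return np.trapz(1.0 / ((V * t - a) ** 2 + b ** 2), t)

def F(a, b):                  # contribution of one tUAV to E_1 + E_2
    return phi(a, b) + phi(a, l - b)

grid_a = np.linspace(0.0, l, 81)
grid_b = np.linspace(eps, l - eps, 71)
vals = np.array([[F(a, b) for b in grid_b] for a in grid_a])
ia, ib = np.unravel_index(vals.argmax(), vals.shape)
print(grid_a[ia], grid_b[ib])  # expected: a = l/2, b = eps or l - eps
\end{verbatim}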
\begin{figure}[!tbp]
\centering
\includegraphics[width=8cm,height=6cm,keepaspectratio]{Fig2_NEW.jpg}
\caption{{Optimal placement of two $tUAV$s to recharge two $rUAV$s.}}
\label{figure-fig2}
\end{figure}
\section{Optimal Solution to Problem P}
\label{sec:ssc}
In this section, we solve the $tUAV$ placement problem for the specific case of two $tUAV$s and two $rUAV$s as per the description in the previous section, with the restriction of not placing any $tUAV$ in the region $R1$ of each $rUAV$. We show that the optimal placement of the $tUAV$s is one transmitter on the boundary of the restricted zone of each $rUAV$, at the midpoint of the horizontal extent of that boundary, as shown in Figure \unskip~\ref{figure-fig2}. Our main result is the following Theorem.
\begin{theorem}\label{thm1}
The physically unique solution to Problem $(P)$ is
\begin{center}
$(a_1, b_1) = (l/2,\varepsilon)$, $(a_2, b_2) = (l/2,l-\varepsilon)$.
\end{center}
\end{theorem}
In order to help prove the theorem, we state a series of useful Lemmas that we prove in the Appendix.
\begin{lemma}\label{lma1}
Let $c:[0,l] \rightarrow \mathbb{R}$ \textrm{be a (strictly) concave function.}\\
Let $g:[0,l]\rightarrow \mathbb{R}, \quad g(x)=c(x)+c(l-x) \quad \forall x \in [0,l].$\\
Then $g$ \textrm{is (strictly) maximized at} $x=\frac{l}{2}.$
\begin{proof}
See Appendix A.
\end{proof}
\end{lemma}
\begin{lemma}\label{lma2}
Let $f:[\varepsilon, l-\varepsilon] \rightarrow \mathbb{R}$ \textrm{be (strictly) convex, where } $\varepsilon < l/2$. \\
Let $h:[\varepsilon, l-\varepsilon] \rightarrow \mathbb{R}, \quad h(x)=f(x)+f(l-x) \quad \forall x \in [\varepsilon,l-\varepsilon].$\\
Then $h$ \textrm{is (strictly) maximized at the endpoints (i.e., at $x=\varepsilon$ \quad and $x=l-\varepsilon$)}.
\begin{proof}
See Appendix B.
\end{proof}
\end{lemma}
Below we provide the proof of Theorem \ref{thm1}.
\begin{proof}
We have to maximize total energy received by the $rUAV$s from $tUAV_1$ and $tUAV_2$, which is given by
\begin{equation*}
\begin{aligned}
\textrm{Total Energy = (Energy provided by } tUAV_1 ) + \\
(\textrm{Energy provided by } tUAV_2).
\end{aligned}
\end{equation*}
Energy provided by $tUAV_j$ is
\begin{equation*}
\begin{aligned}
= \int_0^T \frac{dt}{(Vt-a_j)^2+b_j^2} & +
\int_0^T \frac{dt}{(Vt-a_j)^2+(l-b_j)^2} \\
= \phi (a_j,b_j) + & \phi (a_j, l-b_j), \\
\end{aligned}
\end{equation*}
where $\phi(a,b) := \int_0^T \frac{dt}{(Vt-a)^2+b^2}$, $\quad 0 \le a \le l,\quad \varepsilon \le b \le l -\varepsilon$
and so,
\begin{equation*}
E_{total} \propto \phi(a_1,b_1) + \phi(a_1, l-b_1) + \phi(a_2,b_2) + \phi(a_2, l-b_2).
\end{equation*}
Since the total received energy is the sum of the energy contributed by each $tUAV$, and each contribution comes from the same function $\phi(a,b) + \phi(a, l-b)$ with independent values of $(a,b)$, it suffices to find how to maximize $\phi(a,b)+\phi(a,l-b)$ for $ 0 \le a \le l, \text{and } \varepsilon \le b \le l -\varepsilon$.
Let $F(a,b) = \phi(a,b) + \phi(a, l-b)$. Note that $F(a,b)$ is proportional to total energy received by both $rUAV$s in travelling one side length of the square, from one $tUAV$ located at $(a,b)$. We want to maximize $F$ over $(a,b)$.
We prove the following two properties of $F(a,b)$:
\begin{Properties}
\item $\argmax\limits_{a \in [0,l]} F(a,b)= \frac{l}{2} \quad \forall b \in [\varepsilon, l - \varepsilon]$
We have
\begin{align*}
\phi(a,b)&=\int_0^T \frac{dt}{(Vt-a)^2+b^2}\\
&= \frac{1}{Vb}\Big (\tan^{-1}\Big(\frac{a}{b}\Big) + \tan^{-1}\Big(\frac{l-a}{b} \Big) \Big ).
\end{align*}
Recall, $F(a,b) = \phi(a,b) + \phi(a, l-b)$. Hence to show $\argmax\limits_{a \in [0,l]} F(a,b)= \frac{l}{2} \quad \forall b \in [\varepsilon, l - \varepsilon]$, it suffices to show that
\begin{equation}\label{eqpr1}
\argmax\limits_{a \in [0,l]}\phi(a,b)= \frac{l}{2} \quad \forall b \in [\varepsilon, l - \varepsilon].
\end{equation}
Because $ \forall b \in [\varepsilon, l - \varepsilon]$ we have $\frac{1}{Vb}> 0$ and $\tan^{-1}\Big (\frac{a}{b} \Big)$ is strictly concave in $a$ for $a\in [0,l]$, Lemma \unskip~\ref{lma1} implies the desired result of Equation \unskip~\ref{eqpr1}.\\
\item $\argmax\limits_{b \in [\varepsilon, l-\varepsilon]} F(a,b) = \{\varepsilon, l-\varepsilon\} \quad \forall a \in [0, l]$
We first show for any constant $k>0$, $\frac{1}{Vb}\tan^{-1}\Big(\frac{k}{b}\Big)$ is strictly convex in $b$ for $b \in[\varepsilon, l-\varepsilon]$. Observe
\begin{equation*}
\frac{1}{Vb}\tan^{-1}\Big(\frac{k}{b}\Big) = f(g(b))
\end{equation*}
where $f(x)=\frac{x}{V}\tan^{-1}\Big(kx\Big)$ and $g(b)=\frac{1}{b}$ (for $b\in[\varepsilon,l-\varepsilon]$). If $f$ is a convex and strictly increasing function and $g$ is a strictly convex function taking positive values, then $f(g(b))$ is strictly convex; hence it suffices to show:
\begin{itemize}
\item $g$ is strictly convex.
It is clear that $g(b)$ is strictly convex.
\item $f$ is convex, and strictly increasing.
We see that $f$ is strictly increasing because $f(x)$ is a product of two strictly increasing positive functions (for $x > 0$).
To show $f$ is convex, we observe that its second derivative $f^{\prime\prime}(x)$ is $\frac{2k}{V\Big(1+k^2x^2 \Big)^2}> 0$ for $k>0$.
Since
\begin{align*}
\phi(a,b)& = \frac{1}{Vb}\tan^{-1}\Big(\frac{a}{b}\Big) + \frac{1}{Vb}\tan^{-1}\Big(\frac{l-a}{b} \Big)
\end{align*}
the above implies that $\phi(a,b)$ is the sum of two strictly convex functions in $b$ for fixed $a \in (0,l)$. For $a=0 \text{ or } l$, $\phi(a,b) = \frac{1}{Vb}\tan^{-1}\Big(\frac{l}{b}\Big)$, which is also a strictly convex function of $b$.
Therefore, for any $a \in [0,l]$, $\phi(a,b)$ is strictly convex in $b$. Since $F(a,b) = \phi(a,b) + \phi(a, l-b)$, Lemma \unskip~\ref{lma2} now implies Property 2.
\end{itemize}
\end{Properties}
These properties prove $F(a,b)$ is maximized precisely at $(l/2,\varepsilon)$ and $(l/2,l-\varepsilon)$. Recall $F(a,b)$ is proportional to total energy received by both $rUAV$s in travelling one side length of the square, from one $tUAV$ located at $(a,b)$. Total received energy is maximized if we place the (one) $tUAV$ at either of these two points. As such, if we have two $tUAV$s, we need to place one $tUAV$ at $(l/2,\varepsilon)$ and the other $tUAV$ at $(l/2,l-\varepsilon)$ for the total received energy to be maximized, as total energy is the sum of energy received by the $rUAV$s from both of the $tUAV$s.
Thus, Theorem \unskip~\ref{thm1} is proved.
\end{proof}
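As a sanity check (our addition, not part of the original proof), the closed form of $\phi(a,b)$ used above can be verified against direct numerical quadrature:
\begin{verbatim}
# Verify the closed form of phi(a, b) against numerical integration.
# The test points and tolerance are our own choices.
import numpy as np

l, V = 80.0, 10.0
T = l / V
t = np.linspace(0.0, T, 200001)

def phi_numeric(a, b):
    return np.trapz(1.0 / ((V * t - a) ** 2 + b ** 2), t)

def phi_closed(a, b):
    return (np.arctan(a / b) + np.arctan((l - a) / b)) / (V * b)

for a, b in [(40.0, 5.0), (10.0, 30.0), (0.0, 75.0)]:
    assert abs(phi_numeric(a, b) - phi_closed(a, b)) < 1e-6
\end{verbatim}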
\section{Ramifications of the Model}
\label{ram}
Figure \unskip~\ref{figure-fab} shows the values of $F(a,b)$, which is proportional to the energy received by the two $rUAV$s for various placements of one $tUAV$ within zone $R2$. We observe that placing the $tUAV$ anywhere but at one of the two mid-points of the boundaries between $R1$ and $R2$ results in lower values of $F(a,b)$. As such, placing the $tUAV$s at these optimal locations will result in the maximum total energy received by the two $rUAV$s. If we place an even number of $tUAV$s by distributing them equally between these two locations, we will achieve an equal amount of received energy at each $rUAV$ (fairness). However, if we have an odd number of $tUAV$s, there will be an imbalance in the energy received individually by the $rUAV$s: after distributing the $tUAV$s equally between the two locations, placing the leftover $tUAV$ at one of them maximizes the total received energy, but the $rUAV$ closer to this last $tUAV$ will receive more energy than the other (unfair). If we instead place the last $tUAV$ midway between the two $rUAV$s, they will receive equal amounts of energy, but the total received energy will not be maximized since the last $tUAV$ is at a non-optimal location. Thus, we make the following observations:
\begin{figure}
\centering
\includegraphics[width=8cm,height=5cm,keepaspectratio]{FAB_26Jan-eps-converted-to.pdf}
\caption{$F(a,b)$ within $R2$, for $l=80m, \varepsilon=5m$, $V=10m/s$. The two $rUAV$s are flying over the opposite horizontal axes of the square area.}
\label{figure-fab}
\end{figure}
\begin{Observation}
To recharge two $rUAVs$ with an even number of $tUAVs$, it is possible to place the $tUAVs$ in such a way that maximizes $E_{total}$, i.e., $E_1+E_2$, and also achieves fairness, i.e., $E_1=E_2$.
\end{Observation}
\begin{Observation}
To recharge two $rUAVs$ with an odd number of $tUAVs$, it is possible to achieve either maximized $E_{total}$, or fairness ($E_1=E_2$), but not both at the same time.
\end{Observation}
\begin{Observation}
The optimal placement locations are valid for recharging two $rUAVs$ by \textbf{any number of $tUAVs$}, i.e., not only by two $tUAVs$, since new $tUAVs$ contribute to the total energy in an additive manner.
\end{Observation}
\section{Numerical Results}
\label{num}
In order to gain an insight into the average power that the optimal placement of \textbf{one $tUAV$} can provide to the \textbf{two $rUAV$s} compared to non-optimal placements, we report the numerical results for a similar scenario as used in our model but with one $tUAV$. The parameter values are listed in Table \unskip~\ref{tab:para}. The calculation is based on Equation~\ref{eqfriis}; however, we replaced $\lambda$ with $\frac{c}{f}$, where $c$ is the speed of light and $f$ is the frequency.
For the $tUAV$ power transmission frequency, we have used $433$ MHz, which belongs to the non-licensed ISM band. This frequency is commonly generated by garage door openers; here, we use it in our numerical experiments for RF power transmission. Using this frequency in a remote location should not cause interference with other devices. Note that the commercially available RF transmitters that are used for charging low-power devices use higher frequencies; for example, \textit{Powercaster} transmitters \unskip~\cite{powercaster} use $915$ MHz. Since our requirement of charging UAVs demands higher power, the $tUAV$s need to transmit at lower frequencies, as lower frequencies result in higher received power. Moreover, due to the significant reduction of RF power at the receiver compared to the transmitted power, we also need the $tUAV$ to transmit at a higher power, which we have taken to be $1$ kW. This is justified by our assumption that the $tUAV$s have ground power supply connections, and thus can transmit at this level. The results discussed below assume full energy conversion efficiency.
\begin{figure}
\centering
\includegraphics[width=8cm,height=5cm,keepaspectratio]{avg_power_433_1KW-eps-converted-to.pdf}
\caption{Average power (dBm) received by two $rUAV$s from one $tUAV$ located at $(a,b)$, over the time interval $[0,T]$. The two $rUAV$s are flying over the opposite horizontal axes of the square area.}
\label{fig:avg_power_DBM_433_1K}
\end{figure}
\begin{table}
\begin{center}
\caption{Parameter Values}
\label{tab:para}
\begin{tabular}{c|c}
\hline
Parameter & Value \\
\hline
$f$ & $433$ MHz \\
$c$ & $299792458$ m/s\\
$P_t$ & $1$ kW \\
$G_r,G_t$ & $6$ dBi\\
$V$ & $10$ m/s\\
$l$ & $80$ m\\
$\varepsilon$ & $5$ m\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\includegraphics[width=8cm,height=5cm,keepaspectratio]{28JanUav1-eps-converted-to.pdf}
\caption{Average power (dBm) received by one $rUAV$ from one $tUAV$ located at $(a,b)$, over the time interval $[0,T]$. The $rUAV$ is flying over the lower horizontal axis of the square area.}
\label{fig:avg_power_1uav}
\end{figure}
Figure \unskip~\ref{fig:avg_power_DBM_433_1K} shows the average power in dBm, over the time taken by the $rUAV$s to traverse one side of the square, received by the two $rUAV$s from one $tUAV$ placed at different $(x,y)$ coordinates within the safe placement zone $R2$ of the square area. The optimal placement of the $tUAV$ as per our model, i.e., the midpoint of either boundary between $R1$ and $R2$ (the midpoint of the horizontal lines of the coloured zone), resulted in the maximum average received power of $25.5121$ dBm by the two $rUAV$s. If the $tUAV$ is placed at the middle of the area, i.e., at $(40,40)$, the average received power became $16.7425$ dBm. Placing the $tUAV$ at the middle of the vertical axes, i.e., at $(0, 40)$ or $(80,40)$, resulted in an average received power of $15.2233$ dBm. As we can see, the optimal placement of the $tUAV$ resulted in a power gain of $8.7696$ dBm and $10.2888$ dBm over the reported two \textit{non-optimal} placements.
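The reported optimum can be reproduced directly from the model and Table~\ref{tab:para}; the short sketch below (our illustration, assuming full conversion efficiency as stated above) recovers the $\approx 25.5$ dBm figure:
\begin{verbatim}
# Reproduce the optimal average received power using Table 1 values.
import numpy as np

f, c = 433e6, 299792458.0
lam = c / f
Pt, G = 1000.0, 10 ** (6 / 10)          # 1 kW transmit power, 6 dBi gains
l, V, eps = 80.0, 10.0, 5.0
T = l / V
K = Pt * G * G * lam ** 2 / (4 * np.pi) ** 2   # Friis constant

t = np.linspace(0.0, T, 20001)
a, b = l / 2, eps                        # optimal tUAV location
r1sq = (V * t - a) ** 2 + b ** 2         # squared distance to rUAV_1
r2sq = (V * t - a) ** 2 + (l - b) ** 2   # squared distance to rUAV_2
avg_power = np.trapz(K / r1sq + K / r2sq, t) / T
print(10 * np.log10(avg_power * 1e3))    # ~25.5 dBm
\end{verbatim}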
In order to observe the level of power received by one $rUAV$ from one power source, in Figure \unskip~\ref{fig:avg_power_1uav} we have reported the average power over the time taken to traverse one side of the square when the $tUAV$ is placed at different $(x,y)$ coordinates. In this case, the average power received by the $rUAV$ is found to be $25.4152$ dBm when the $tUAV$ is placed at the optimal location which is at $(40,5)$ in this case, and $12.2130$ dBm when it is located at the middle of the vertical axes. A gain of $13.2022$ dBm was achieved in this case by placing the $tUAV$ at the optimal location.
Clearly, the optimal placement of the $tUAV$ must be used in our considered scenario for the best outcome of the energy transmission system. However, the level of energy harvested from RF signals is still quite low, and multiple dedicated RF sources will be required to power the UAVs \unskip~\cite{rfharvesting}. Since fixed-wing drones consume much less power than rotary-wing drones, as they only need energy to move forward rather than to stay afloat in the air, our system can certainly be considered to extend the operation duration of such drones using multiple $tUAV$s, all placed at (or close to) the suggested optimal locations. Section \unskip~\ref{ram} provided some insights into the placement of multiple $tUAV$s.
\section{Conclusion}
\label{conclu}
This paper considered the problem of in-situ recharging aerial base stations without disrupting their regular trajectory. We proposed a solution that leverages wireless power transfer via carefully positioned airborne but stationary energy sources. We presented a mathematical model for solving the optimal placement of these energy sources so as to maximize the total received power at the UAVs while simultaneously achieving fairness. Our numerical results showed that placing the charging nodes at the suggested optimal locations resulted in significant power gain compared to non-optimal placements. In our future work, we will consider an optimization model that takes the number of $rUAV$s and $tUAV$s as well as their multidimensional trajectories as tuning parameters to find the optimum solution.
\section*{Appendix} \label{sec:app}
\appendices{A. Proof of Lemma \unskip~\ref{lma1}}\\
Since $c$ is a strictly concave function, $c(l-x)$ is also strictly concave. As the sum of two strictly concave functions, $g$ is strictly concave as well. For $g$ to be (strictly) maximized at $x=l/2$, we have to prove that $g(x) < g(l/2) \quad \forall x \in [0,l]$ with $x \ne l/2$. Let $x \in [0,l]$, and $x \ne l/2$.
We have
\begin{equation*}
\begin{aligned}
g(x) & = c(x) + c(l-x)\\
& = 2\Big(\frac{1}{2} c(x) + \frac{1}{2} c(l-x)\Big)\\
& < 2 c\Big(\frac{1}{2} x + \frac{1}{2} (l-x)\Big) \text{ by strict concavity,}\\
& \text{and since } x \ne l-x\\
& = 2 c\Big(\frac{l}{2}\Big) \\
& = g\Big(\frac{l}{2}\Big).
\end{aligned}
\end{equation*}
Thus, Lemma \unskip~\ref{lma1} is proved.
\appendices{B. Proof of Lemma \unskip~\ref{lma2}}\\
Since $f$ is a strictly convex function, $f(l-x)$ is also strictly convex. As the sum of two strictly convex functions, $h$ is strictly convex as well. We have to show that if $\varepsilon < x < l-\varepsilon$ then $h(x) < h(\varepsilon)$ (note $ h(\varepsilon) = h(l-\varepsilon)$).
Suppose $\varepsilon< x< l-\varepsilon$. Then there exists $\theta \in (0,1)$ such that
\begin{equation*}
x=\theta \varepsilon + (1-\theta)(l-\varepsilon).
\end{equation*}
So,
\begin{equation*}
\begin{aligned}
h(x) & = h\Big( \theta \varepsilon + (1-\theta)(l-\varepsilon) \Big)\\
& < \theta h(\varepsilon) + (1-\theta)h(l-\varepsilon) \text{ by strict convexity,}\\
& \text{ and since } \varepsilon \ne l-\varepsilon\\
& = h(\varepsilon).
\end{aligned}
\end{equation*}
Thus, Lemma \unskip~\ref{lma2} is proved.
\bibliographystyle{IEEEtran}
|
1,116,691,497,820 | arxiv | \section{Introduction}
Object existence determination\footnote{Certain literature \cite{zagoruyko2016multipath} may refer the problem using the terminology \textit{object detection}. More commonly object detection refers to both deciding the existence of certain patterns and subsequently locating them if so.}
(ED) focuses on deciding if certain visual patterns exist in an image.
As the basis of many computing vision tasks, ED's quality affects further processing such as locating certain patterns (apart from telling the existence), segmentation of certain patterns, object recognition, and object tracking in consecutive image frames.
However, while ED is conducted by humans rapidly and effortlessly \cite{das2016human,borji2014salient}, the performance of computer vision algorithms is surprisingly poor, especially when the image is of large size and low quality.
Hence, it is desirable to develop efficient and noise-proof systems to deal with object detection tasks with large and noisy images.
In fact, the way humans process images is not similar to recent prevailing approaches such as detecting objects via convolution networks (ConvNet) and residual networks \cite{he2016deep}.
Instead of taking all pixels from the image in parallel, humans perform sequential interactions with the image.
Humans may recursively deploy visual attention and perform glimpses on selective locations to acquire information.
At the end of the processing, information from all past locations and glimpses is gathered together to make the final decision.
Such behavior accomplishes ED tasks efficiently, especially for large images as it depends only on the number of saccades.
Meanwhile, as the approach learns to selectively skip the clutter\footnote{Clutter refers to the irrelevant features of the visual environment, as discussed in RAM \cite{mnih2014recurrent}.}, it tends to be less sensitive to noise compared with methods that take all pixels into the computation.
The process can be naturally interpreted as a reinforcement learning (RL) task where each image represents an environment.
At the beginning of the process, the agent conducts an action which is represented by a 2-dimensional Cartesian coordinate.
When the environment receives the action, it calculates the retina-like representation of the image at the corresponding location, and returns that representation to the agent as the agent's observation.
Repeatedly until the last step, the agent predicts the detection result based on the trajectory and receives the evaluation of its prediction as the reward signal.
It is important to note that the agent has never had access to the full image directly.
Instead, it carefully chooses its actions in order to get the desired partial observations of the internal states of the environment.
Recurrent attention models (RAM) \cite{mnih2014recurrent} are the first computational models to imitate the process with a reinforcement learning algorithm.
The success of RAM has led to numerous studies on attention-based computer vision solutions \cite{yeung2016end,gregor2015draw}.
However, RAM and their extensions \cite{ba2015learning,ba2014multiple} are designed to solve object recognition tasks such as handwritten digit classification.
Those models largely ignore the trajectory information, which causes massively delayed rewards.
Indeed, in RAM, the reward function is associated with only the last step of the process and is otherwise zero.
The actions before that, which deploy the attention for the model, do not receive direct feedback and are therefore not efficiently learned.
Especially in ED (and in general, object detection) tasks, delayed rewards fail to provide reinforcement signals to the choice of locations when the glimpse at certain locations may provide explicit information for the existence of the object.
We present recurrent existence determination models (RED), which inherit the advantage of RAM that the attention is only deployed on locations that are deemed informative.
Our approach involves a new observation setting which allows the agent to have access to explicit visual patches.
Unlike previous trials which blur the pixels that are far from the saccade location, we acquire the exact patches, which helps to detect the existence of specific patterns.
We employ gated recurrent units (GRU) \cite{chung2014empirical} to encode the historical information acquired by the agent and generates temporary predictions at each time step.
The temporary predictions over the time horizon are then aggregated via a novel $k$-maximum aggregation layer, which averages the $k$ greatest values to compute the final decision.
It allows the rewards to be backpropagated to the early and middle stages of the processing directly apart from through the recurrent connections of the GRU.
It provides immediate feedback which guides the agent to allocate its attention, and therefore addresses the issues caused by delayed rewards.
RED is evaluated empirically on both a synthetic dataset, \textit{stained MNIST}, and real-world datasets.
Stained MNIST is a set of handwritten digits derived from MNIST, where the resolution has been enlarged and dot stains may be added around the writings of each digit.
The dataset is designed to compare the performance of RED and existing algorithms on images with high-resolution settings.
The results show that attention-based models run substantially faster than traditional, ConvNet-based methods \cite{dieleman2015rotation,graham2014fractional}, while achieving better accuracy as well.
Experiments on a real-world dataset show a superior speed improvement and competitive accuracy on retinopathy screening, compared to existing approaches.
This also demonstrates that our algorithm is practical enough to be applied to real-world systems.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Policy Gradients}
\label{sec:prelim-rl}
In an episode of RL \cite{sutton2018reinforcement}, at each time step $t$, the agent takes an action $a_t$ from the set $\mathcal{A}_t$ of feasible actions.
Receiving the action from the agent, the environment updates its internal state, and returns an observation $x_t$ and a scalar reward $r_t$ to the agent accordingly.
In most of the problems, the observation does not fully describe the internal state of the environment, and the agent has to develop its policy using only the partial observations of the state.
This process continues until the time horizon $T$.
Let $R_t=\sum_{t^\prime=1}^{t^\prime=t}r_{t^\prime}$ denote the cumulative rewards up to time $t$, the policy is trained to maximize the expectation of $\mathbb{E}[R_T]$.
Let $\pi_\theta$ be the policy function, parameterized by $\theta$, the REINFORCE algorithm \cite{williams1992simple,mnih2016asynchronous} estimates the policy gradient using
\begin{equation}
g = \mathbb{E}_\pi[\nabla_\theta\log\pi(a_t|s_t)(R_t-b_t)],
\end{equation}
where $b_t$ is a baseline function for variance reduction.
It is common in RL to use $x_t$ as the state $s_t$.
However, considering that $x_t$ are small patches in our setting, the information in a single $x_t$ is insufficient.
Ideally, the decision of the action is based on the trajectory $\tau_t=(a_1,x_1,r_1,\dots,a_{t-1},x_{t-1},r_{t-1})$ which includes past actions, observations, and rewards.
To handle the growing dimensionality of $\tau_t$, the agent maintains an internal state\footnote{In our paper, $s_t$ is defined to be the state of the agent instead of the state of the environment.} $s_t$ which encodes the trajectory using a recurrent neural network (RNN), and updates it repeatedly until the end of the time horizon.
In this way, the action is decided by the policy function, based only on the internal state $s_t$ of the agent.
Note that the full state of the environment is $\tau_t$ and the image to be processed, and the agent observes $\tau_t$ only.
The training of RL repeats the above process from step $1$ to step $T$ for a certain number of episodes.
At the beginning of each episode, the agent resets its internal state while the environment resets its internal state as well.
The model parameters are maintained across multiple episodes and are updated gradually as the occurrence of the reward signals at the end of each episode.
Note that in RAM and RED, different from the general online learning framework, the agent does not receive reward signals in the middle stages of an episode.
Hence the reward signals are inevitably heavily delayed, and RED needs to address the temporal credit assignment problem \cite{sutton1984temporal}, which evaluates individual actions within a sequence of actions according to a single reinforcement signal.
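As a minimal illustration of the estimator above (our sketch; the policy, returns, and baselines here are placeholders rather than the RED model), the REINFORCE update can be written as a surrogate loss whose gradient matches $g$:
\begin{verbatim}
# REINFORCE-with-baseline surrogate loss (illustrative sketch).
import torch

def reinforce_loss(log_probs, returns, baselines):
    # log_probs[t] = log pi(a_t | s_t) as differentiable tensors;
    # returns[t] = R_t and baselines[t] = b_t as plain numbers.
    loss = 0.0
    for lp, R, b in zip(log_probs, returns, baselines):
        loss = loss - lp * (R - b)   # minimizing ascends E[R_T]
    return loss
\end{verbatim}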
\subsection{Glimpse and Retina-Like Representations} \label{sec:attention}
A retina-like representation is the visual signal humans receive when glimpsing at a point of an image.
The visual effect is that regions close to the focused location tend to retain their original, high-resolution form, while regions far from the focused location are blurred and passed to the human brain in their low-resolution form.
In RAM and RED, the environment calculates the retina-like representations and returns it as the observation.
Existing approaches to mimic such visual effects have been used in RAM and RAM's variants.
They can be categorized into two classes: soft attention \cite{gregor2015draw,xu2015show} and hard attention \cite{eslami2016attend,xu2015show,mnih2014recurrent}.
Soft attention \cite{hermann2015teaching} applies a filter centered at the focused location.
It imitates human vision by downsampling the image gradually as the distance from the focused point increases, resulting in a smooth representation.
The approach is fully differentiable and is hence amenable to be trained straightforwardly using neural networks together with gradient descent.
Despite those merits, soft attention is in general computationally expensive, as it involves the filtering operation over all pixels; this deviates from the idea of RED and RAM to examine only parts of the image, and makes the process relatively inefficient.
Hard attention, on the other hand, extracts pixels with predefined sample rates.
Fewer pixels are extracted as the region approaches further away from the focused location, making the process cost only constant time.
Hard attention fits the idea of RED well though it is non-differentiable as it indexes the image and extracts pixels.
To address the non-differentiability, we develop our training algorithm via policy gradient and use the rollouts of the attention mechanism to estimate the gradient.
Formally, let $x_t$ be a list of $c$ channels, where the $i$-th channel extracts the square region centered at $a_t$ with size $n_i\times n_i$ and downsamples the patch to $n_1\times n_1$.
The channels incorporate the location information with the patch information (known as the what and where pathways) by adding the linear transformation $\tanh(W_{xa}a_t)$ of $a_t$.
Note that the value of each entry of $W_{xa}$ will be restricted to be relatively small compared to the pixel values, to retain the original patch information.
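A hard-attention glimpse of this form can be sketched as follows (our illustration; the coordinate mapping, border padding, and strided downsampling are implementation assumptions, not specifications from the original works):
\begin{verbatim}
# Multi-scale hard-attention glimpse: c square patches centered at the
# saccade location, each downsampled to n_1 x n_1 by strided sampling.
import numpy as np

def glimpse(image, center, sizes=(18, 36, 54)):
    H, W = image.shape
    row = int((1 - center[1]) / 2 * (H - 1))   # y = -1 maps to bottom row
    col = int((center[0] + 1) / 2 * (W - 1))   # x = -1 maps to left column
    n1 = sizes[0]
    channels = []
    for n in sizes:
        padded = np.pad(image, n)              # zero-pad to handle borders
        patch = padded[row + n - n // 2: row + n - n // 2 + n,
                       col + n - n // 2: col + n - n // 2 + n]
        stride = n // n1                       # sizes are multiples of n1
        channels.append(patch[::stride, ::stride][:n1, :n1])
    return np.stack(channels)                  # shape: (c, n1, n1)
\end{verbatim}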
\subsection{Convolutional Gated Recurrent Units}
RAM and RED use an RNN to encode the trajectory and update the states of the agent.
While both long short-term memory (LSTM) and GRU are prevailing RNN implementations in sequential data processing \cite{chung2014empirical}, GRU is preferred to LSTM in RED.
The reason is that a GRU merges the memory cell state and the output state of an LSTM unit, so the input information passes through to the output state explicitly.
This explicit information helps in making the temporary detection decisions and enables our design of the $k$-maximum aggregation layer.
Meanwhile, owing to the merge, the agent updates its internal state more efficiently.
The speed improvement is critical especially for real-time applications such as surveillance anomaly detection when an instant detection decision is required.
We use convolutional GRU, a variant of GRU where the matrix product operations between the output state $s_t$, the input $x_t$, and the model parameters are replaced with convolution operations, and $s_t$ and $x_t$ are kept in their 2-dimensional matrix shapes \cite{xingjian2015convolutional}.
A graphical illustration of GRU is shown in Figure \ref{fig:gru}, where lines in orange represent convolution operations.
The gate mechanism in a convolutional GRU is formulated as
\begin{align}
\begin{split}
z_t & = \sigma(W_{zh}\ast s_{t-1}+W_{zx}\ast x_t) \\
v_t & = \sigma(W_{rh}\ast s_{t-1}+W_{rx}\ast x_t) \\
\tilde{s}_t & = \tanh(W_{sh}\ast (v_t\circ s_{t-1})+W_{sx}\ast x_t) \\
s_t & = (1-z_t)s_{t-1} + z_t\tilde{s}_t, \label{eqn:gru}
\end{split}
\end{align}
where $\ast$ denotes convolution, $\circ$ denotes Hadamard product, $\sigma(\cdot)$ denotes the sigmoid function, and $z_t$ and $v_t$ are the update gate and the reset gate, respectively.
$W_{zh}$, $W_{zx}$, $W_{rh}$, $W_{rx}$, $W_{sh}$, and $W_{sx}$ are trainable parameters.
Convolutional GRU retains the spatial information in the output state so that temporary detection decisions can be made well before the end of an episode.
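The gate mechanism of Equation~\eqref{eqn:gru} can be implemented compactly; the sketch below (our illustration in PyTorch, where the kernel size is an assumption and each gate's pair of convolutions is merged into a single convolution over concatenated inputs, which is equivalent since convolution is linear in its input channels) shows one such cell:
\begin{verbatim}
# Convolutional GRU cell following Equation (1).
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2   # "same" padding keeps the spatial shape
        self.conv_z = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)
        self.conv_v = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)
        self.conv_s = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, s):
        z = torch.sigmoid(self.conv_z(torch.cat([x, s], dim=1)))
        v = torch.sigmoid(self.conv_v(torch.cat([x, s], dim=1)))
        s_tilde = torch.tanh(self.conv_s(torch.cat([x, v * s], dim=1)))
        return (1 - z) * s + z * s_tilde
\end{verbatim}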
\begin{figure}[t]
\includegraphics[width=0.48\textwidth,height=0.26\textwidth]{fig/gru.png}
\caption{Illustration of convolutional gated recurrent units.}
\label{fig:gru}
\end{figure}
\section{Recurrent Existence Determination}
\begin{figure}[t]
\includegraphics[width=0.48\textwidth,height=0.26\textwidth]{fig/arch.png}
\caption{The attention mechanism and the reward mechanism in our proposed RED model.}
\label{fig:arch}
\end{figure}
In this section we discuss the three main components of RED, namely, the attention mechanism, the $k$-maximum aggregation layer, and the policy gradient estimator.
Taken together, an illustration of our model is shown in Figure~\ref{fig:arch}, where the arrows denote forward propagation.
\subsection{Attention Mechanism in RED}
We formulate the attention mechanism within each of the episodes, that is, within the processing of one image.
Let $\mathcal{I}$ denote the image, the agent has $x_0$, which is the low-resolution form of $\mathcal{I}$, as the initial observation.
The state $s_0$ of the agent is initialized as the zero vector.
Repeatedly, at each time step $t$, the agent calculates its action $a_t\in \mathbb{R}^2$, according to
\begin{eqnarray}
a_t & = & \tanh(W_{as}s_t) + \epsilon_t, \label{eqn:att}
\end{eqnarray}
where $W_{as}$ is a trainable parameter of the model and $\epsilon_t$ is a random noise to improve exploration.
The action $a_t$ refers to a Cartesian coordinate on the image, with $(-1,-1)$ corresponding to the bottom-left corner of $\mathcal{I}$ and $(1,1)$ corresponding to the top-right corner of $\mathcal{I}$.
Each entry of $\epsilon_t$ is sampled from a normal distribution with a fixed standard deviation of $\beta$, independently.
The environment returns the retina-like representation $x_t$ via the hard attention model described in Section~\ref{sec:attention}.
The agent employs a single convolutional GRU and uses the output states $s_t$ as the agent's state, defined in Equation~\eqref{eqn:gru}.
The state $s_t$ has the same shape $n_1\times n_1$ as each channel of the observation, which is ensured by the convolution operation in Equation~\eqref{eqn:gru}.
We take advantage of the GRU's reset gate $v_t$ in Equation~\eqref{eqn:gru}, which controls the choice between long-term dependencies and short-term observations.
The former is important for exploring future attention deployment within an episode, while the latter is important for exploiting currently available information to make temporary decisions.
By training over a large number of episodes, the agent learns to balance exploration and exploitation from the reinforcement signals by updating its gate parameters.
With Equation~\eqref{eqn:gru}, Equation~\eqref{eqn:att} and the hard attention mechanism, each rollout is computed in constant time with respect to the image size as $T$ and $n_i$ are fixed. As a result, a trained RED model is able to make predictions very efficiently.
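The stochastic action selection of Equation~\eqref{eqn:att} amounts to a tanh-squashed linear readout with Gaussian exploration noise; a minimal sketch (our illustration; the flattening of $s_t$ and the value of $\beta$ are assumptions) is:
\begin{verbatim}
# Action selection per Equation (3): mean in [-1,1]^2 plus Gaussian noise.
import torch

def select_action(W_as, s_t, beta=0.2):       # beta is a placeholder value
    mean = torch.tanh(W_as @ s_t.flatten())   # W_as: (2, n1*n1) matrix
    return mean + beta * torch.randn(2)
\end{verbatim}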
\subsection{Prediction Aggregation}
We present the framework to generate temporary predictions and subsequently aggregate the temporary predictions into the final prediction, i.e. the detection of the patterns.
At each time step $t$, the agent has access to the output state $s_t$ of the GRU which carries the information from the current patch $x_t$.
Based on $s_t$, the agent makes a temporary prediction $\hat{y}_t$ using a feed-forward network followed by a non-linear operation, $\hat{y}_t=\frac{1}{2}(1+\tanh(W_{ys}s_t))$,
where $\hat{y}_t$ is the estimated probability that the object exists in $\mathcal{I}$.
The temporary predictions are aggregated over time using our newly proposed $k$-maximum aggregation layer.
The layer calculates the weighted average of the top $k$ largest values among $\hat{y}_{t_0},\cdots ,\hat{y}_T$, where $t_0\geq 1$ is a fixed threshold of the model.
The output $\hat{\mathcal{Y}}$ of the $k$-maximum layer is formulated as
\begin{equation}
\hat{\mathcal{Y}}=\frac{1}{Z}\sum_{t\in K}(1-\gamma^t) \hat{y}_t, \label{eqn:kmax}
\end{equation}
where $K=k\text{-argmax}_{t_0\leq t \leq T} \{\hat{y}_t\}$ is the set of the indexes of the top $k$-largest temporary predicted probabilities, and $Z=\sum_{t\in K}(1-\gamma^t)$ is the normalizer to guarantee $0\leq \hat{\mathcal{Y}} \leq 1$.
In Equation~\eqref{eqn:kmax} we elaborate a time discount factor $1-\gamma^t$ which assigns a larger value toward the late stages of the process than the early stages of the process, where $\gamma$ is fixed through the process.
The factor $\gamma$ is a trade-off between RAM, where all previous steps are used to benefit the prediction at the end of the episode, and majority voting, where all observations contribute to the binary determination.
The advantage of using the $k$-maximum layer is to guide the model to balance between exploration and exploitation\footnote{It also helps to address the problem of vanishing gradient at the same time, though, it is out of the scope of this paper.}.
Consider that only steps $t$ with top $k$ largest $\hat{y}_t$ are taken into account in the final prediction, the model has a sufficient number of time steps to explore different locations on $\mathcal{I}$ and does not need to worry about affecting the final prediction.
In fact, exploring the context of the image is important to collect information and locate the detection objective in late stages.
The time discount factor further reinforces that by assigning larger weights toward late stages, which encourages the agent to explore at the early stages of the process and exploit at the late stages of the process.
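Equation~\eqref{eqn:kmax} translates directly into a few lines; the sketch below (our illustration; the tensor layout is an assumption) selects the top-$k$ temporary predictions and applies the time-discounted weights:
\begin{verbatim}
# k-maximum aggregation layer of Equation (4). y_hat is a 1-D tensor
# holding the temporary predictions y_hat_{t0}, ..., y_hat_T in order.
import torch

def k_max_aggregate(y_hat, t0, k, gamma):
    t = torch.arange(t0, t0 + len(y_hat), dtype=torch.float32)
    vals, idx = torch.topk(y_hat, k)     # top-k temporary predictions
    w = 1.0 - torch.pow(gamma, t[idx])   # time-discount weights
    return (w * vals).sum() / w.sum()    # normalized weighted average
\end{verbatim}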
Viewing our proposed prediction aggregation mechanism from an RL perspective, it addresses the credit assignment problem \cite{sutton1984temporal}.
Existing studies on applications via policy learning, e.g. \cite{mnih2016asynchronous,li2018policy,young2018metatrace}, commonly assign the feedback of an episode equally to all actions the agent has made.
The large variance of estimating the quality of a single action using the outcome of the entire episode is neutralized by training the agent for millions of episodes.
However, in our settings the state of the environment is diverse as each different image $\mathcal{I}$ corresponds to a unique initial state of the environment.
The variance cannot be reduced by simply training on a large dataset of images without a fixed observation function with respect to $\mathcal{I}$.
In this way, our proposed aggregation mechanism is necessary to help the algorithm to converge and it is the key component for RED to make detection decisions.
\subsection{Policy Gradient Estimation}
In this section we derive the estimator of the policy gradient. It is feasible to apply the policy gradient theorem \cite{sutton2000policy,sutton2018reinforcement,mnih2014recurrent}, but since we know the exact formulation of the reward function we can largely reduce the variance by incorporating this information.
To achieve this, we derive the estimator specifically for RED from scratch by taking the derivative of the expected cumulative regret, defined as the negative reward \cite{li2016contextual}.
Let $W$ denote the set of trainable parameters including $\theta$, $W_{as}$, $W_{xa}$, $W_{ys}$ and the trainable parameters in the GRU, in Equation~\eqref{eqn:gru}.
Also let $\mathcal{Y}\in \{0,1\}$ be the ground truth of the detection result, where $0$ and $1$ correspond to the existence and non-existence of the object, respectively.
Define a rollout $\hat{\tau_T}$ of the trajectory within an episode to be a sample drawn from the probability distribution $\mathbb{P}(\tau_T|\pi_\theta(\cdot))$.
During training, the agent generates its rollouts $\hat{\tau}_T$ and predictions $\hat{\mathcal{Y}}$ on an iterator of $(\mathcal{I}, \mathcal{Y})$ pairs, where each pair of the image and the ground truth corresponds to one episode of RL.
Define the regret $L_T$ to be the squared error between the predicted probability and the ground truth
\begin{equation}
L_T=(\hat{\mathcal{Y}}-\mathcal{Y})^2.
\end{equation}
The model updates $W$ after the conclusion of each episode, when it receives a reward signal $r_T=1-L_T$.
In this case, $L_T+R_T=L_T+r_T=1$.
We utilize similar arguments in the policy gradient theorem to address the non-differentiability.
Let $\boldsymbol{a}=(\hat{a}_1,\dots,\hat{a}_T)$ be the sequence of actions in $\hat{\tau}_T$, we have the expected regret
\begin{equation}
\mathbb{E}[L_T|W]=\sum\nolimits_{\boldsymbol{a}}\mathbb{P}(\boldsymbol{a}|W)(\hat{\mathcal{Y}}_{\boldsymbol{a}}-\mathcal{Y})^2,
\end{equation}
where the deterministic variable $\hat{\mathcal{Y}}_{\boldsymbol{a}}$ denotes the model's counterfactual prediction under the condition that $\boldsymbol{a}$ is sampled with probability one.
Since there is no randomness involved on the environment side, the expectation above is calculated over the actions only.
Taking the derivative with respect to $W$, the gradient is
\begin{align}
\begin{split}
\nabla_{W}\mathbb{E}[L_T|&W] = \mathbb{E}_{\boldsymbol{a}\sim \mathbb{P}(\boldsymbol{a}|W)}[(\hat{\mathcal{Y}}_{\boldsymbol{a}}-\mathcal{Y})^2 \\
& \nabla_{W}\log\mathbb{P}(\boldsymbol{a}|W) + 2(\hat{\mathcal{Y}}_{\boldsymbol{a}}-\mathcal{Y})\nabla_{W}\hat{\mathcal{Y}}_{\boldsymbol{a}}], \label{eqn:policy}
\end{split}
\end{align}
where the immediate partial derivative from the chain rule of Equation~\eqref{eqn:att} is
\begin{equation}
\nabla_{W}\log\mathbb{P}(a_t|W)=\frac{1}{\beta^2}\cdot(a_t-\mathbb{E}[a_t|W])s_{t-1}^T. \label{eqn:log-action}
\end{equation}
Further, deduct from the regret the baseline function
\begin{equation}
b_T=\mathbb{E}_{\boldsymbol{a}\sim \mathbb{P}(\boldsymbol{a}|W)}[(\hat{\mathcal{Y}}_{\boldsymbol{a}}-\mathcal{Y})^2],
\end{equation}
which calculates the expected regret from the rollouts, for variance reduction.
By doing this we account for only the difference between the actual reward and the baseline function.
Note that the baseline function introduces no bias into the expectation in Equation~\eqref{eqn:policy}, while it is used to reduce the variance when estimating the policy gradient using the Monte-Carlo samples $\boldsymbol{a}$.
At the end of each episode, update $W$ according to
\begin{align}
\begin{split}
W \leftarrow\text{ } & W - \alpha\mathbb{E}_{\boldsymbol{a}\sim \mathbb{P}(\boldsymbol{a}|W)}[((\hat{\mathcal{Y}}_{\boldsymbol{a}}-\mathcal{Y})^2-b_T) \\
& \nabla_{W}\log\mathbb{P}(\boldsymbol{a}|W) + 2(\hat{\mathcal{Y}}_{\boldsymbol{a}}-\mathcal{Y})\nabla_{W} \hat{\mathcal{Y}}_{\boldsymbol{a}}], \label{eqn:bias}
\end{split}
\end{align}
where $\alpha$ is the learning rate.
To estimate the expectation in Equation~\eqref{eqn:bias}, the agent generates a rollout $\hat{\tau}_T$ which samples $\boldsymbol{a}$ according to $\boldsymbol{a}\sim \mathbb{P}(\boldsymbol{a}|W)$.
The expectation is then estimated using the generated $\boldsymbol{a}$ value by Equation~\eqref{eqn:log-action} and REINFORCE's back-propagation \cite{wierstra2007solving}.
Note that the second part $2(\hat{\mathcal{Y}}_{\boldsymbol{a}}-\mathcal{Y})\nabla_{W}\hat{\mathcal{Y}}_{\boldsymbol{a}}$ of the gradient is useful, though it is sometimes ignored in previous studies \cite{mnih2014recurrent}.
It connects the regret to the early stages which allows the regret signal to be back-propagated directly to those steps and to guide the exploitation of the agent.
It can be regarded as a retrospective assignment of the credits after the rollout has been fully generated, equivalently making the reward $r_t$ in RED no longer $0$ when $t<T$ during the training phase, which addresses the issues caused by delayed rewards.
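In an automatic-differentiation framework, the update of Equation~\eqref{eqn:bias} corresponds to minimizing a per-episode surrogate loss; a sketch (our illustration; the variable names are ours) is:
\begin{verbatim}
# Per-episode surrogate loss whose gradient matches Equation (10).
import torch

def red_loss(Y_hat, Y, log_prob_sum, baseline):
    # Y_hat: predicted probability (tensor); Y: ground truth in {0, 1};
    # log_prob_sum: sum over t of log P(a_t | W); baseline: b_T estimate.
    regret = (Y_hat - Y) ** 2
    score_term = (regret.detach() - baseline) * log_prob_sum
    return score_term + regret   # second term gives 2(Y_hat - Y) grad Y_hat
\end{verbatim}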
\section{Experiments}
\subsection{Stained MNIST}
We first test and compare RED on our synthetic dataset, \textit{stained MNIST}, with a variety of baseline methods.
Stained MNIST contains a set of handwritten digits, which have very high resolution and much thinner writings than the original MNIST does.
Each digit may be associated with multiple \textit{stains} on the edge of its writing, which are dot-shaped regions with high tonal value.
The algorithms are required to predict if such stains exist in the images.
The task is very challenging as the image resolution is very high while the writings are thin and unclear.
Hence it is hard to locate the stains or recognize the stains from the writings.
Stained MNIST is constructed by modifying the original MNIST dataset as follows.
Each image from MNIST is first resized to $7168\times 7168$ by bilinear interpolation, and rescaled to 0 to 1 tonal value.
The enlarged images are then smoothed using a Gaussian filter with a $20\times 20$ kernel.
After that, it calculates the central differences at each pixel and finds the set $C$ of pixels with gradient $0.2$ or larger.
The tonal values of pixels that are within $500$ pixels of $C$ are set to $0$.
This operation makes the writings of the digits much thinner in the high-resolution images.
After removing those pixels, the gradient of each pixel is calculated again, and $10$ to $15$ stains with radius $12$ are randomly added at pixels with high gradient.
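A condensed sketch of this construction is given below (our paraphrase of the steps above; the Gaussian kernel parameterization, the stain-placement gradient threshold, and the border handling are simplifying assumptions, and the full-resolution arrays are memory-hungry):
\begin{verbatim}
# Condensed stained-MNIST construction sketch.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter, distance_transform_edt

def make_stained(digit, rng):                # digit: 28x28 array in [0,1]
    img = zoom(digit, 7168 / 28, order=1)    # bilinear upsampling
    img = gaussian_filter(img, sigma=20)     # stand-in for a 20x20 kernel
    mag = np.hypot(*np.gradient(img))        # central differences
    img[distance_transform_edt(mag < 0.2) <= 500] = 0.0  # thin the strokes
    mag = np.hypot(*np.gradient(img))
    cand = np.argwhere(mag > 0.05)           # "high gradient": our threshold
    cand = cand[(cand.min(axis=1) >= 12) & (cand.max(axis=1) < 7168 - 12)]
    Yg, Xg = np.ogrid[-12:13, -12:13]
    disk = Xg ** 2 + Yg ** 2 <= 144          # radius-12 stain
    n = min(int(rng.integers(10, 16)), len(cand))
    for y, x in cand[rng.choice(len(cand), n, replace=False)]:
        img[y - 12:y + 13, x - 12:x + 13][disk] = 1.0
    return img
\end{verbatim}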
The hyper-parameters of RED are set to be $c=3,n_1=18,n_2=36,n_3=54$ for attention mechanism and $\gamma=0.95, k=25, t_0=10$ for prediction aggregation, through a random search on a training subset.
The search over $\gamma\in [0.9, 0.98]$, $k\in [15,30]$, and $t_0\in [10,50]$ did not reveal significant differences in performance.
Accordingly, the patch size $x_t$ is set to $n_1\times n_1$ as the input of the GRU.
The horizon is fixed to $T=350$, where no significant improvement can be observed by further increasing it.
When estimating the baseline function $b_T$, 15 instances are sampled and averaged over.
When evaluating RED, we remove the stochastic components $\epsilon_t$ in computing the actions.
We compare RED with the baseline approaches in terms of both the accuracy and the average runtime to make a prediction; the baselines include RAM with the same set of parameters, a 3-layer ConvNet, a 4-layer ConvNet, and RED where the attention $\hat{a}_t$ is uniformly randomly selected from $\mathcal{A}_t=[-1,1]^2$.
The last baseline is used to show the necessity of the learned attention mechanism.
As shown on Table~\ref{tbl:stained}, RED significantly outperforms all baselines in terms of accuracy, and all attention-based models have better speed compared with ConvNet-based algorithms.
\begin{table}
\centering
\begin{tabular}{lrr}
\toprule
Approach & Runtime (s) & Accuracy (test) \\
\midrule
RED &$0.06$ & $84.43$\%\\
Random $\boldsymbol{a}$ &$0.06$ & $51.79$\%\\
RAM &$0.06$ & $62.35$\%\\
ConvNet-3 &$1.95$ & $81.49$\%\\
ConvNet-4 &$3.30$ & $82.92$\%\\
\bottomrule
\end{tabular}
\caption{Comparisons of RED with different baseline approaches on Stained MNIST.}
\label{tbl:stained}
\end{table}
\subsection{Diabetic Retinopathy Screening}
Diabetic Retinopathy (DR) \cite{fong2004retinopathy} is among the leading causes of blindness in the working-age population of the developed world.
Its consequence, vision loss, is effectively prevented by population-wise DR screening, and automatic and efficient DR screening is an interesting problem in medical image analysis.
The screening process is to detect abnormality from the fundus photographs, which are generally in high resolution and are noisy due to the photo-taking procedure.
The high resolution, low signal-to-noise ratio, and the need for efficient population-wise screening agree with the characterizations of our proposed RED model, which motivates us to test the model on this task.
We test and compare the performance using a dataset publicly available on Kaggle\footnote{https://www.kaggle.com/c/diabetic-retinopathy-detection}.
While the images are originally rated with five levels, we consider level $0$ and $1$ as negative results $\mathcal{Y}=0$ and level $2$, $3$ and $4$ as positive results $\mathcal{Y}=1$.
The results are shown in Table~\ref{tbl:dr}, where the same hyper-parameters are used as is in the stained MNIST experiment.
\begin{figure}[t]
\includegraphics[width=0.49\textwidth,height=0.68\textwidth]{fig/dr-rollout.png}
\caption{Distribution of the attentions in a rollout of RED.}
\label{fig:dr-rollout}
\end{figure}
The performance of our RED approach is compared with RAM and ConvNet with both four layers and five layers.
Also, we test ConvNet with fractional max-pooling layers \cite{graham2014fractional} and cyclic pooling layers \cite{dieleman2015rotation} which have solid performances on the Kaggle challenge.
We re-implement their approach with 4 and 5 layers (\textit{ConvNet-4+} and \textit{ConvNet-5+}) and the comparisons are shown in Table~\ref{tbl:dr}.
Our RED approach achieves extraordinary speed performance while demonstrating competitive accuracy.
Notably, compared with the ConvNet-based methods which usually take many seconds to process each image, RED provides a way to trade marginal accuracy for a significant speed improvement.
That could be critical especially for the DR screening tasks designed to be used on population-wise datasets while requiring timely results.
Apart from the speed improvement, it is worth noting that RED is also lightweight: the number of parameters needed is relatively low, as it processes only small patches at any time step.
The experiments on DR screening demonstrate that our RED method is practical enough to be applied to real-world systems.
\begin{table}
\centering
\begin{tabular}{lrr}
\toprule
Approach & Runtime (s) & Accuracy (test) \\
\midrule
RED &$0.04$ & $91.55$\%\\
Random $\boldsymbol{a}$ &$0.04$ & $53.44$\%\\
RAM &$0.04$ & $81.35$\%\\
ConvNet-4 &$2.32$ & $90.61$\%\\
ConvNet-4+ &$2.32$ & $91.97$\%\\
ConvNet-5 &$2.92$ & $91.84$\%\\
ConvNet-5+ &$2.92$ & $92.29$\%\\
\bottomrule
\end{tabular}
\caption{Comparisons of RED with different baseline approaches on DR screening.}
\label{tbl:dr}
\end{table}
\subsection{Intuitive Demonstration of the Trajectory}
To understand the policy that deploys the agent's attention, we present a graphical demonstration of the trajectory, which imitates the way humans process existence detection tasks.
As shown in Figure~\ref{fig:dr-rollout} top, the trained agent predicts if patterns related to DR exist in a fundus image.
To observe the trajectory, we put the limit $T\rightarrow \infty$ on the time horizon while keeping the stochastic components $\epsilon$ in Equation~\eqref{eqn:att}.
We then illustrate the distribution of the attentions, in the form of a heat map, in Figure~\ref{fig:dr-rollout} bottom.
We first observe that the attentions are mostly concentrated in the bottom-right part of the image, which coincides with the lesion patterns (yellow stains on the fundus image).
The small blue box marked in Figure~\ref{fig:dr-rollout} top contains 30 out of the first 250 saccades.
This shows the ability of the trained model to locate regions of interest and to deploy its limited attention resource selectively.
Notably, only 4 of them happen within the first 100 time steps, and the density of attention for $T\rightarrow \infty$ becomes even higher ($\geq 9$ heat value in Figure~\ref{fig:dr-rollout} bottom).
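The heat map in Figure~\ref{fig:dr-rollout} bottom is obtained by binning the saccade centres over the image plane; the sketch below is a minimal illustration of this accumulation, where the saccade array and the grid resolution are hypothetical inputs rather than our exact implementation.
\begin{verbatim}
# Sketch: accumulate an attention heat map from saccade centres.
# `saccades` is an (N, 2) array of (x, y) pixel coordinates -- an
# assumed interface for illustration purposes.
import numpy as np

def attention_heatmap(saccades, height, width, bins=64):
    heat, _, _ = np.histogram2d(
        saccades[:, 1], saccades[:, 0],      # rows (y), columns (x)
        bins=bins, range=[[0, height], [0, width]])
    return heat                              # counts per grid cell
\end{verbatim}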
On the other hand, we observe that the model tends to deploy its attention around the blood vessels, especially at the early stages of the process.
Such behavior helps the agent to gain information about the context of the image and locate the region of interest in the later stages of the process.
Also, it is worth noting that the agent does not get stuck in a small region even when we set the time horizon to be arbitrarily large.
Instead, the agent keeps exploring the image indefinitely.
The way the agent automatically balances exploitation and exploration is exactly what we expect an RL algorithm to learn.
\section{Conclusion and Future Works}
We present recurrent existence determination, a novel RL algorithm for existence detection.
RED imitates the attention mechanism that humans employ to process object detection both efficiently and precisely, and it yields similar characterizations as desired.
RED employs hard attention, which boosts the test-time speed, while the non-differentiability introduced by the attention mechanism is addressed via policy optimization.
We propose the $k$-maximum aggregation layer and other components of RED, which help to solve the delayed reward problem and to automatically balance exploration and exploitation.
Experimental analysis shows significant speed and accuracy improvement compared with previous approaches, on both synthetic and real-world datasets.
One plausible future direction is to further address the delayed reward problem by adding a value network as the critic in the actor-critic method \cite{sutton2018reinforcement,mnih2016asynchronous}.
The critic will give the agent immediate feedback for any action it takes, using the estimation of the action-state value function.
In this case, as the environment is partially observable, the actor-critic architecture needs to be asymmetric, where the critic has access to the full image.
The critic network is expected to be a proper replacement for the aggregation layer in this paper, with improved performance.
\newpage
\bibliographystyle{named}
\section{Introduction}
\label{sec:intro}
High-quality physical qubits with long coherence times that allow one to reliably store fragile quantum states form the backbone of currently developed quantum processors~\cite{Ladd2010,nielsen2010}. Over the last decades, the development of methods to characterise physical qubits and their coherence properties has been subject of intense study.
Here, widespread and popular figures of merit are the longitudinal and transverse relaxation time scales, known as $T_1$ and $T_2$. They were originally introduced in the field of nuclear magnetic resonance, describing a simple exponential decay dynamics of spin states \cite{nielsen2010,abragam1961}.
Such simple descriptions, however, become incomplete in the presence of, e.g., temporal noise correlations giving rise to non-Markovian dynamics~\cite{breuer-book,rivasRMP-2014}. Similarly, spatial noise correlations can play a role in larger quantum registers, where such correlations can be quantified and measured \cite{Postler2018,Rivas-Mueller-2015} and sometimes also harnessed for noise mitigation techniques, for instance by storing quantum information in decoherence-free subspaces \cite{zanardi1997,lidar1998,lidar2001a,lidar2001b,kielpinski1013,haffner2005}.
Currently, we are witnessing enormous efforts to build and reliably control increasingly larger quantum processors - often termed noisy intermediate-scale quantum (NISQ) devices \cite{Preskill2018}. These devices are also used to implement low-distance quantum error correcting codes \cite{Chiaverini2004,Schindler2011,Reed2012,Nigg2014,Waldherr2014,kelly2015,Linke2017,arXiv:1912.09410}, which allow one to encode and protect quantum information in so-called logical qubits formed of entangled ensembles of physical qubits \cite{Terhal-2015,lidar2013,nielsen2010}. An important short-term goal is to reduce the effective error rates \cite{Gambetta2017,bermudez-prx2017,debroy2019}, as a first step towards the long-term goal of protected large-scale fault-tolerant quantum computation \cite{kitaev2001,ShorThreshold,PreskillThreshold}.
However, characterising the performance of logical qubits is naturally more involved, because fully characterising the state of its constituents is not feasible for even intermediate-size quantum registers.
It is tempting to try to directly leverage the well-established figures of merit developed for physical qubits to logical qubits, guided by the intuition that the encoded information in logical qubits should show qualitatively similar dynamical behaviour as their physical constituents. In this work we illustrate that the analogy to a physical qubit does not hold generally, and that the characterisation of logical qubits as quantum memories \cite{Terhal-2015} comes with a number of unique challenges. In particular, spatial noise correlations can strongly affect QEC performance \cite{clemens2004,klesse2005,aharonov2006,preskill2013,novais2013,novais2006,shabani2008} and influence dynamical behaviour of logical qubits in a counter-intuitive way.
For example, we show that generalisations of, e.g. $T_1$ and $T_2$ times to logical qubits fail, even for encodings consisting of no more than 3 or 4 physical qubits. We theoretically discuss and experimentally observe rich decay dynamics of small-scale logical qubits, due to leakage of quantum information from the code space, or temporal behavior governed by multiple time scales in contrast to simple exponential decay. We foresee that awareness of these effects and the efficient characterisation tools used in this work will guide the development and optimisation of logical qubits.
%
\section{Experimental system and noise}
\label{sec:experiment}
The experimental setup consists of a trapped-ion quantum information processor with $^{40}$Ca$^+$ ions, which has been described in detail in reference~\cite{Schindler2013}. The qubits are encoded in the ground state 4S$_{1/2}(m_j=-1/2)=\ket{1}$ and the metastable excited state 3D$_{5/2}(m_j=-1/2)=\ket{0}$, and transitions between these states are driven with a narrow-linewidth laser~\cite{Schindler2013}. The system provides a universal set of gate operations consisting of M{\o}lmer-S{\o}rensen (MS) entangling gates and arbitrary local operations
~\cite{Schindler2013,Martinez2016}. Any local operation can be implemented by a combination of a resonant collective local operation $U_{x}(\theta) = \exp(-i \theta/2 S_x)$, with $S_{x} = \sum_i X_i$ being the sum over all single-qubit $X$ Pauli operators\footnote{We denote the Pauli operators with their capital letters $X,Y,Z$ to facilitate the notation.}, and single-qubit AC-Stark shifts, represented by rotations around the z-axis of the Bloch sphere $U_z^{(i)}(\theta) = \exp(-i \theta/2 Z_i)$.
The action of the entangling MS gate operation on the entire qubit register is described as $\mathrm{MS}(\theta)=\exp(-i \theta/4 S_x^2)$.
The dominating noise source for storing information in our experimental system is given by dephasing caused by laser frequency noise and fluctuations in the bias magnetic field~\cite{Schindler2013}.
In our system, the effect of fluctuations of the laser frequency as well as the magnitude of the magnetic field cannot be distinguished. We can thus describe the dephasing process using a single fluctuating variable $B(t)$, referred to in the following as effective magnetic field:
\begin{eqnarray}
H_G(t)=\frac{1}{2}B(t) Z.
\label{eq:single_dephasing_hamiltonian}
\end{eqnarray}
In the following, we assume the random fluctuation in the values of the effective magnetic field to obey a Gaussian distribution $P(B)$, which implies that
\begin{eqnarray}
&&\left\langle\exp\left[\pm\text{i}\int_{0}^tB(t^\prime)dt^\prime\right]\right\rangle\nonumber\\
&=&\exp\left[-\frac{1}{2}\left\langle\left(\int_{0}^tB(t^\prime)dt^\prime\right)^2\right\rangle\right].
\label{eq:because_gaussian}
\end{eqnarray}
We also assume a stationary autocorrelation function of the noise source, implying
\begin{eqnarray}
\langle B(t+\tau)B(t)\rangle=\langle B(\tau)B(0)\rangle,
\end{eqnarray}
and a further $\delta$-correlation of the noise, such that
\begin{eqnarray}
\langle B(\tau)B(0)\rangle=\langle [B(0)]^2\rangle\delta(\tau).
\end{eqnarray}
Therefore, in the case of local dephasing, this implies
\begin{eqnarray}
\langle B_k(t+\tau)B_l(t)\rangle&=&\langle [B_k(0)]^2\rangle\delta_{k,l}\delta(\tau).
\end{eqnarray}
Using these properties, one finds
\begin{eqnarray}
\left\langle\left[\int_{0}^tB(t^\prime)dt^\prime\right]^2\right\rangle
=\langle[B(0)]^2\rangle t=\gamma t,
\label{eq:define_gamma}
\end{eqnarray}
where we define $\gamma=\langle[B(0)]^2\rangle$.
We will, for completeness, now present a brief overview of the relevant results obtained when dephasing noise is applied to a single physical qubit.
Writing a generic pure single-qubit state in terms of the computational basis $\{\ket{0},\ket{1}\}$ as $\ket{\psi}=\cos\frac{\theta}{2}\ket{0}+\text{e}^{\text{i}\phi}\sin\frac{\theta}{2}\ket{1}$, with $\theta$ and $\phi$ being real parameters ($0\leq \theta\leq \pi$, $0\leq \phi\le 2\pi$), the dephasing noise acts on the state as $\ket{\psi^\prime}=\exp\left[-{\rm{i}}\int_0^t H_G(t^\prime)dt^\prime\right]\ket{\psi}$, leading to
\begin{eqnarray}
\ket{\psi^\prime}=\cos\frac{\theta}{2}\ket{0}
+\exp\left[\text{i}\left(\phi+\int_{0}^tB(t^\prime)dt^\prime\right)\right]\sin\frac{\theta}{2}\ket{1},\nonumber\\
\end{eqnarray}
discarding a global phase $\exp\left[-\frac{\text{i}}{2}\int_{0}^tB(t^\prime)dt^\prime\right]$.
Denoting the distribution of the random values of the magnetic field by $P(B)$, the density matrix of the qubit is given by $\rho^\prime=\int \ket{\psi^\prime}\bra{\psi^\prime} P(B)dB$. Assuming $P(B)$ to be a Gaussian distribution, and using Eq.~(\ref{eq:define_gamma}), the noisy density matrix can be simplified as
\begin{eqnarray}
\rho^\prime&=& \cos^2\frac{\theta}{2}\ket{0}\bra{0}+\sin^2\frac{\theta}{2}\ket{1}\bra{1}\nonumber\\
&&+\frac{1}{2}{\rm{e}}^{-\frac{1}{2}\gamma t}\sin\theta(\text{e}^{\text{-i}\phi}\ket{0}\bra{1}+\text{e}^{\text{i}\phi}\ket{1}\bra{0}).
\end{eqnarray}
For a physical qubit represented completely by its Bloch vector $\vec{r}=(r_x,r_y,r_z)$, where $r_x\equiv \langle X\rangle$, $r_y\equiv \langle Y\rangle$, and $r_z\equiv \langle Z\rangle$, it is crucial to understand how the components of the Bloch vector are modified under the application of the dephasing noise. The expectation values of the components of the Bloch vector in the state $\rho^\prime$ evolve under dephasing as
\begin{eqnarray}
\label{eq:x_physical}
\langle X\rangle&=&\text{Tr}(X\rho^\prime)=\text{e}^{-\frac{1}{2}\gamma t}\sin\theta\cos\phi, \\
\label{eq:y_physical}
\langle Y\rangle&=&\text{Tr}(Y\rho^\prime)=\text{e}^{-\frac{1}{2}\gamma t}\sin\theta\sin\phi, \\
\label{z_physical}
\langle Z\rangle&=&\text{Tr}(Z\rho^\prime)=\cos\theta.
\end{eqnarray}
This behavior is a special case of the most general qubit relaxation dynamics, which is characterized by the longitudinal and transverse relaxation time scales $T_1$ and $T_2$, as introduced in the early nuclear magnetic resonance experiments. These relaxation times are defined as
\begin{eqnarray}
\label{eq:longitudinal}
r_z(t)&=& r_z^{\text{eq}}-{\rm{e}}^{-\frac{t}{T_1}}\left[r_z^{\text{eq}}-r_z(0)\right],\\
\label{eq:transverse}
r_\perp(t)&=& r_\perp^{\text{eq}}-{\rm{e}}^{-\frac{t}{T_2}}\left[r_\perp^{\text{eq}}-r_\perp(0)\right],
\end{eqnarray}
where $r_\perp=\sqrt{r_x^2+r_y^2}$, and the superscript ``eq" signifies the equilibrium value of the corresponding signal when the system has fully relaxed. Here, $T_1$ represents the typical decay time of the eigenstates of the $Z$ Pauli matrix, and $T_2$ quantifies the lifetime of quantum coherence between them. Comparing Eqs.~(\ref{eq:longitudinal})-(\ref{eq:transverse}) with Eqs.~(\ref{eq:x_physical})-(\ref{eq:y_physical}), one obtains $T_1=\infty$, while $T_2=\frac{2}{\gamma}$ for dephasing noise.
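As an illustration (not part of the experimental analysis), the decay $\langle X\rangle=\mathrm{e}^{-\gamma t/2}$ can be checked by a direct Monte Carlo simulation of the $\delta$-correlated Gaussian noise, discretising the accumulated phase $\int_0^t B(t^\prime)dt^\prime$ as a Wiener process:
\begin{verbatim}
# Monte Carlo check of <X>(t) = exp(-gamma*t/2) for an initial |+>
# state: phi(t) = int_0^t B dt' is a Wiener process with variance
# gamma*t for delta-correlated Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
gamma, dt, steps, trials = 1.0, 2e-3, 500, 10000
dphi = rng.normal(0.0, np.sqrt(gamma * dt), (trials, steps))
phi = np.cumsum(dphi, axis=1)
t = dt * np.arange(1, steps + 1)
x_avg = np.cos(phi).mean(axis=0)       # <X> averaged over noise
assert np.allclose(x_avg, np.exp(-gamma * t / 2), atol=0.05)
\end{verbatim}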
In a multi-qubit system, the spatial correlation of the noise needs to be accounted for. We concentrate on two extreme cases of spatial noise correlations: (i) local dephasing noise, where each qubit has its own, independent noise source, and (ii) global, i.e.~collective dephasing, where one noise source affects all qubits identically.
Local dephasing would be caused by local fluctuating magnetic fields, where each of the physical qubits constituting the logical qubit experiences a different random magnetic field, and the noise Hamiltonian is given by
\begin{eqnarray}
H_L(t)=\frac{1}{2}\sum_k B_k(t)Z_k,
\label{eq:local_dephasing_hamiltonian}
\end{eqnarray}
where $B_k(t)$ is the time-dependent strength of the magnetic field local to the physical qubit $k$, and $Z_k$ is the $z$-component of the Pauli matrices corresponding to qubit $k$.
On the other hand, the global dephasing noise is due to a randomly fluctuating effective magnetic field that acts on all of the physical qubits, such that the noise Hamiltonian is given by
\begin{eqnarray}
H_G(t)=\frac{1}{2}B(t)\sum_k Z_k,
\label{eq:global_dephasing_hamiltonian}
\end{eqnarray}
where $B(t)$ is the time-dependent strength of the global fluctuating magnetic field.
In typical ion-trap experiments, global dephasing is dominating, as the typical length-scale of noise fields is much larger than the inter-ion distance~\cite{Schindler2011,Postler2018}. Global dephasing is also applicable to any system that uses a common local oscillator as phase reference.
In the following sections, we showcase the performance of our proposed parameters in quantifying the quality of a logical qubit using the example of dephasing noise. Naturally, a similar analysis can also be carried out with other types of noise, e.g. amplitude damping noise (see appendix~\ref{sec:diff_noise}).
\section{A logical qubit under dephasing}
\label{subsec:logical_qubit_dephasing}
A logical qubit is constructed from $N$ physical qubits, and its generic pure logical state is denoted as $\ket{\psi}_L=\cos\frac{\theta}{2}\ket{0}_L+\text{e}^{\text{i}\phi}\sin\frac{\theta}{2}\ket{1}_L$. The logical basis states $\ket{0}_L$ and $\ket{1}_L$ are, in general, $N$-qubit entangled states. A logical qubit is defined by the set of stabilizer generators $\{S_i\}$ and the set of logical operators $\{X_L,Y_L,Z_L\}$ as
\begin{eqnarray}
X_L\ket{0}_L&=&\ket{1}_L,\,\,X_L\ket{1}_L=\ket{0}_L,\\
Z_L\ket{0}_L&=&\ket{0}_L,\,\,Z_L\ket{1}_L=-\ket{1}_L.
\end{eqnarray}
Each of these logical operators is acting on multiple physical qubits. Without any loss in generality, one can express the logical state $\ket{0}_L$ as a superposition of computational basis states of the physical qubits as $\ket{0}_L = \sum_l b_l \ket{b}_l$.
The effect of dephasing noise on such a complex $N$-qubit state can be analyzed straightforwardly by grouping the physical basis states by their \textit{magnetization}. The magnetization of a basis state is defined as the difference between the number of spins in the ground state $\ket{0}$ with eigenvalue $+1$, denoted as $n$, and the remaining number of spins in the excited state $\ket{1}$ with eigenvalue $-1$, $N-n$. The magnetization is expressed as
\begin{eqnarray}
m = 2n - N \, .
\end{eqnarray}
Each magnetization value has the multiplicity $N_m = N!/(n!\,(N-n)!)$, with $n=(m+N)/2$. The magnetization has $N+1$ possible values given by $m \in \{-N,-N+2,\cdots,N-2,N \}$.
The logical basis state $\ket{0}_L$ can then be written by grouping the physical basis states by their magnetization:
\begin{eqnarray}
\ket{0}_L=\sum_{m} \sum_{l=1}^{N_m} b_l^m\ket{b}_l^m \, .
\end{eqnarray}
The state $\ket{1}_L$ can also be written in a similar way.
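As a concrete illustration (our own sketch, with basis states written as bit strings in which \texttt{0} denotes the ground state), the grouping by magnetization and the multiplicities $N_m$ can be enumerated directly:
\begin{verbatim}
# Group N-qubit basis states by magnetization m = 2n - N and check
# the multiplicity N_m = binom(N, n) with n = (m + N)/2.
from itertools import product
from math import comb

N = 3
groups = {}
for bits in ("".join(b) for b in product("01", repeat=N)):
    m = 2 * bits.count("0") - N
    groups.setdefault(m, []).append(bits)

for m, states in sorted(groups.items()):
    assert len(states) == comb(N, (m + N) // 2)
    print(m, states)   # e.g. m = -1: ['011', '101', '110']
\end{verbatim}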
Let us first consider the global dephasing noise represented by the noise Hamiltonian $H_G(t)$. The effect of the global dephasing noise on a generic logical state $\ket{\psi}_L$, given by $\ket{\psi^\prime}_L=\exp\left[-{\rm{i}}\int_0^t H_G(t^\prime)dt^\prime\right]\ket{\psi}_L$, is determined by the eigenvalue equation
\begin{eqnarray}
\left[\sum_kZ_k\right]\ket{b}_l^m=m|b\rangle_l^m.
\label{eq:global_dephasing_eigenvalue}
\end{eqnarray}
Therefore, in the density matrix
$\rho=\int\left(\ket{\psi^\prime}\bra{\psi^\prime}\right)_L P(B)dB$ of the logical qubit, the off-diagonal elements $\ket{b}_l^m\bra{b}_{l^\prime}^{m^\prime}$
have coefficients decaying with time as
\begin{eqnarray}
c_{\Delta m}&=&\exp\left[-\frac{1}{2}\left(\frac{\Delta m}{2}\right)^2\gamma t\right],
\label{eq:global_timescales}
\end{eqnarray}
where the difference in magnetization $\Delta m=m-m^\prime$ takes even integer values. Note that the time-decays of these coefficients originate solely from the difference $\Delta m=m-m^\prime$ in the \emph{magnetization} values corresponding to different basis states $\ket{b}_l^m$. For situations where $\Delta m=0$, no manifestation of the global noise in the form of a time-decay of the coefficients of the density matrix can be found. The subspace of the Hilbert space of the $N$-qubit system hosting the basis states for which $\Delta m=0$, therefore, forms a \emph{decoherence-free subspace} (DFS) which is not affected by global dephasing noise.
In contrast to Eq.~(\ref{eq:global_dephasing_eigenvalue}), the effect of \emph{local dephasing noise} governed by the Hamiltonian $H_L(t)$ on the logical qubit state $\ket{\psi}_L$ is determined by the eigenvalue equation
\begin{eqnarray}
\left[\sum_k B_k(t)Z_k\right]\ket{b}_l=\left[\sum_k\alpha_kB_k(t)\right]\ket{b}_l,
\end{eqnarray}
where the factors $\alpha_k=\pm 1$ are defined by $Z_k\ket{b}_l=\alpha_k\ket{b}_l$, i.e., by whether the $k$th qubit in $\ket{b}_l$ is in the $\ket{0}$ or the $\ket{1}$ state.
For uncorrelated dephasing of equal strength on the $N$ qubits this leads to a decay of the off-diagonal terms in the density matrix $\rho^\prime$ as
\begin{eqnarray}
c_{\Delta n}&=&\exp\left[-\frac{\Delta n}{2}\gamma t\right],
\label{eq:local_timescales}
\end{eqnarray}
where $\Delta n$ is the number of positions in the basis states $\ket{b}_l$ and $\ket{b}_{l^\prime}$ where the entries differ (Hamming distance). Note that in this case the dephasing dynamics is not governed by the (differences in) \textit{magnetization} $m$, and a DFS does not exist.
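The two decay laws in Eqs.~(\ref{eq:global_timescales}) and (\ref{eq:local_timescales}) are contrasted in the following small sketch (our own illustration, in units of $\gamma t$): the global exponent depends only on $\Delta m$, the local one only on the Hamming distance $\Delta n$.
\begin{verbatim}
# Decay exponents (in units of gamma*t) of an off-diagonal term
# |b><b'| under global and local dephasing, cf. the equations above.
def magnetization(b):
    return 2 * b.count("0") - len(b)

def global_rate(b, bp):        # exponent (1/2) * (dm/2)^2
    return 0.5 * ((magnetization(b) - magnetization(bp)) / 2) ** 2

def local_rate(b, bp):         # exponent (1/2) * Hamming distance
    return 0.5 * sum(x != y for x, y in zip(b, bp))

print(global_rate("001", "110"), local_rate("001", "110"))
# -> 0.5 (|dm| = 2) versus 1.5 (dn = 3)
\end{verbatim}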
\subsection{Assessing the quality of a logical qubit}
\label{subsec:observables}
We now discuss the relevant quantities to assess the quality and to characterise decay dynamics of a logical qubit.
A natural choice of such quantities would be the components of the \emph{logical Bloch vector} $\vec{R}=\left(R_x,R_y,R_z\right)$, where we identify $R_{x,y,z}$ as
\begin{align}
R_x=\langle X_L\rangle,\,R_y=\langle Y_L\rangle,\,R_z=\langle Z_L\rangle.
\label{eq:logical_op_exp_values}
\end{align}
Here, $\langle \mathcal{O}\rangle={\rm{Tr}}\left[\mathcal{O}\rho^\prime\right]$ is the expectation value of the operator $\mathcal{O}$ in the noisy state $\rho^\prime$ of the logical qubit.
A major issue for characterizing logical qubit dynamics is the fact that noise typically causes leakage from the code space. It is therefore useful to also quantify the code-space population, $p=\langle P_\text{c}\rangle$, where
\begin{eqnarray}
P_{\text{c}}=\frac{1}{2^{N-1}}\prod_{k=1}^{N-1}(I+S_k)
\label{eq:code_space_population_op}
\end{eqnarray}
denotes the projector onto the code-space of an $N$-qubit stabilizer QEC code encoding one logical qubit \cite{nielsen2010}. Here, $\{S_k\}$ is the set of stabilizer generators that define the code, and $I$ is the identity operator in the Hilbert space of the $N$ physical qubits.
Projecting on the code-space population corresponds to post-selecting on the no-error outcome if one realized a perfect syndrome measurement, i.e.~measured the set of generators of the code via ancilla qubits.
Note that the code-space population and all other quantities discussed below can be evaluated from measuring the $2^N$ stabilizer elements of the code, requiring fewer measurements than full state tomography. Furthermore, the number of measurements could be reduced further using techniques proposed in the context of efficient fidelity estimation of stabilizer states~\cite{PhysRevLett.106.230501}.
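As a small consistency sketch (not experimental code), the projector $P_{\text{c}}$ for the 3-qubit code of the next subsection, with stabilizers $S_1=Y_1X_2Y_3$ and $S_2=X_1Y_2Y_3$, can be constructed with dense matrices and checked to project onto a 2-dimensional code space:
\begin{verbatim}
# Build P_c = (I + S_1)(I + S_2)/4 for the 3-qubit code with
# S_1 = Y1 X2 Y3 and S_2 = X1 Y2 Y3, using Kronecker products.
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
kron = lambda *ops: reduce(np.kron, ops)

S1, S2 = kron(Y, X, Y), kron(X, Y, Y)
P_c = (np.eye(8) + S1) @ (np.eye(8) + S2) / 4

assert np.allclose(P_c @ P_c, P_c)          # idempotent
assert np.isclose(np.trace(P_c).real, 2.0)  # code space dimension 2
\end{verbatim}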
In order to incorporate the effect of leakage from the code space in the expectation values of the logical operators, we also consider the quantities $\{p_x,p_y,p_z\}$, where
\begin{eqnarray}
p_x=\langle X_L P_\text{c}\rangle,\,p_y=\langle Y_L P_\text{c}\rangle,\,p_z=\langle Z_L P_\text{c}\rangle.
\label{eq:operator_expt_values_in_code_space}
\end{eqnarray}
The relevant time-scales in the evolution of these quantities under global dephasing noise are given by Eq.~(\ref{eq:global_timescales}) for magnetization differences $\Delta m$. In Sec.~\ref{subsec:illustrations}, we derive the theoretical results for the time evolution of the expectation values of these quantities. We then also compare this to our experimental results.
We stress here that these quantities are defined independently of the specific noise model.
In the following subsections, we demonstrate the performance of the quantities using dephasing noise, which is the dominant noise in the experimental setup considered in this paper. However, these quantities can also be used to investigate the quality of the logical qubit under other types of noise. In appendix~\ref{sec:diff_noise} we present a comparison of simulated global dephasing versus simulated amplitude damping noise.
\subsection{Dephasing noise on small QEC codes}
\label{subsec:illustrations}
We now examine how the quantities discussed in Sec.~\ref{subsec:observables} evolve over time under global dephasing noise, for a single logical qubit in a variety of three- and four-qubit QEC codes.
\subsubsection{A three-qubit bit-flip code}
The first example we consider is that of a $3$-qubit QEC code, whose stabilizer operators are given by
\begin{eqnarray}
S_1=Y_1X_2Y_3, \,
S_2=X_1Y_2Y_3,
\end{eqnarray}
and the logical operators are
\begin{eqnarray}
X_L &=& -Y_1Y_2Z_3,\nonumber\\
Z_L &=& X_1X_2X_3, \nonumber\\
Y_L &=&\text{i}X_L Z_L = -Z_1 Z_2 Y_3.
\label{eq:logical_operator_3qubit}
\end{eqnarray}
The logical basis states $\{\ket{0}_L,\ket{1}_L\}$ are given by
\begin{eqnarray}
\ket{0}_L&=&\frac{1}{\sqrt{2}}\left(\ket{001}+\ket{110}\right),\nonumber\\
\ket{1}_L&=&\frac{1}{\sqrt{2}}\left(\ket{000}-\ket{111}\right).
\label{eq:logical_computational_basis_3qubit}
\end{eqnarray}
The motivation behind choosing this specific form of the logical basis is two-fold. Firstly, the effect of the global dephasing noise depends explicitly on the choice of the logical basis, as explained in Sec.~\ref{subsec:logical_qubit_dephasing}. Therefore, it is important to choose a set of logical basis states that clearly demonstrates the effect of the different magnetization values, which is achieved by the chosen basis. Secondly, the chosen logical basis states are easy to prepare, using only a single MS gate.
Evidently, the basis states $\ket{b}_l^m$ contributing to $\ket{\psi}_L$ have four specific values of $m$, given by $m=-3$ ($\ket{111}$), $-1$ ($\ket{110}$), $1$ ($\ket{001}$), and $3$ ($\ket{000}$). Therefore, following the discussion in Sec.~\ref{subsec:logical_qubit_dephasing}, the dynamics of the coefficients of the off-diagonal elements in $\rho^\prime$ are governed by the exponential decay factors given by Eq.~(\ref{eq:global_timescales}), namely $\exp\left[-\frac{1}{2}\gamma t\right]$ (corresponding to off-diagonal terms of the form $\ket{b}_l^m\bra{b}_{l^\prime}^{m\pm2}$), $\exp\left[-2\gamma t\right]$ (terms of the form $\ket{b}_l^m\bra{b}_{l^\prime}^{m\pm4}$), and $\exp\left[-\frac{9}{2}\gamma t\right]$ (terms of the form $\ket{b}_l^m\bra{b}_{l^\prime}^{m\pm6}$).
Explicit calculation of the expectation values of the quantities discussed in Eqs.~(\ref{eq:logical_op_exp_values})-(\ref{eq:operator_expt_values_in_code_space}) in Sec.~\ref{subsec:observables} under global dephasing noise leads to:
\begin{eqnarray}
\label{eq:global_logical_3_x}
R_x&=&\text{e}^{-2\gamma t}\sin\theta\cos\phi,\\
\label{eq:global_logical_3_y}
R_y&=&\text{e}^{-\frac{1}{2}\gamma t}\sin\theta\sin\phi,\\
\label{eq:global_logical_3_z}
R_z&=&\frac{1}{2}\text{e}^{-\frac{9}{2}\gamma t}\left[\cos\theta+2\text{e}^{4\gamma t}\cos^2\frac{\theta}{2}-1\right],\\
\label{eq:global_csp_3}
p&=&\frac{1}{2}\Big[\text{e}^{-\frac{1}{2}\gamma t}\cos^2\frac{\theta}{2}+\text{e}^{-\frac{9}{2}\gamma t}\sin^2\frac{\theta}{2}+1\Big],\\
\label{eq:global_logical_csp_3_x}
p_x&=&\text{e}^{-\frac{5}{4}\gamma t}\sin\theta\cos\phi\cosh\frac{3\gamma t}{4}, \\
\label{eq:global_logical_csp_3_y}
p_y&=&\text{e}^{-\frac{5}{4}\gamma t}\sin\theta\sin\phi\cosh\frac{3\gamma t}{4}, \\
\label{eq:global_logical_csp_3_z}
p_z&=&\frac{1}{2}\Big[\cos\theta-\text{e}^{-\frac{9}{2}\gamma t}\sin^2\frac{\theta}{2}+\text{e}^{-\frac{1}{2}\gamma t}\cos^2\frac{\theta}{2}\Big].
\end{eqnarray}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.8 \textwidth]{exp_overview.pdf}
\caption{Circuits to prepare a) the 3-qubit $\ket{\psi}_L$, b) the 4-qubit $|0\rangle_L$, and c) the 4-qubit $|+\rangle_L$ encoded states.}
\label{fig:exp_overview}
\end{figure*}
The encoding of the logical qubit is a 3-qubit repetition code and can be implemented by a single fully entangling MS gate with unitary $\mathrm{MS}(\pi/2)$, followed by a collective local operation $U_x(\pi/2)$~\cite{Schindler2011}.
The individual eigenstates of the logical Pauli operators can be prepared by applying single qubit operations $U_E(\theta)=\exp(-i\theta/2 Y_1)$ on the first physical qubit before applying the MS gate. The rotation angle of $U_E(\theta)$ is $\theta \in \{0, \pi, \pi/2 \}$ to generate the $\{-1, +1, +1 \}$ logical eigenstates of the logical $\{Z_L,Z_L,X_L\}$ operators, in the following denoted as $\{-Z_L,+Z_L,+X_L\}$. The encoding circuit is shown in Fig.~\ref{fig:exp_overview}. We thus prepare the logical qubit in the +1 eigenstate of the logical X operator and the $\pm$1 eigenstates of the logical Z operators.
In order to investigate the performance of the quantities proposed in Sec.~\ref{subsec:observables}, we let the encoded state evolve freely in time, which ideally corresponds to the implementation of identity operations of increasing length. To get an estimate of the density matrix describing the complete system after the evolution, we perform quantum state tomography with maximum likelihood reconstruction~\cite{paris2004quantum}. We use the obtained density matrices to deduce estimates of all presented expectation values, i.e.\ the code space stabilizers, the logical Bloch vector components, and the fidelities inside the code space.
Note that we are assessing the performance of the proposed quantities under collective dephasing noise, since this is the dominant noise source in our experimental setup. Importantly, the presented method is not limited to this type of noise and could readily be extended to other kinds of noise, e.g. amplitude damping. One could also investigate the action of operations other than the identity on logical qubits, by e.g. performing logical randomized benchmarking~\cite{combes2017logical}, which is beyond the scope of this work.
In Fig.~\ref{fig:3qubit} we present measured data of the dynamics after preparing the logical state in the $\{+1,+1,-1\}$ eigenstate of the logical $\{X_L,Z_L,Z_L\}$ operator. We estimate the coherence time of the physical qubits by performing least-squares fits of the dynamics of the individual expectation values according to Eqs.~(\ref{eq:global_logical_3_x}) - (\ref{eq:global_logical_csp_3_z}), where the experimental imperfections are modeled by multiplying the expectation value with a constant contrast factor. The mean value of all individual fit results yields an experimental coherence time $T_2=78(12)$\,ms and a contrast 0.89(3), where the error describes the standard deviation of the mean. A detailed discussion of the influence of slow drifts in the dephasing noise can be found in the appendix~\ref{sec:app_drifts}. All lines depicted in Fig.~\ref{fig:3qubit} represent the theoretical models with the mean coherence time and contrast estimated from experimental data. The relatively large standard deviation comes dominantly from laser frequency and magnetic field fluctuations in the experimental apparatus, and also from the fact that the method in its current form is not robust against state preparation and measurement (SPAM) errors.
Nevertheless, the measured data can be described very well by the theory model based on collective phase noise where the SPAM error is included as a constant contrast factor as shown in Fig.~\ref{fig:3qubit}.
Notably, in Fig.~\ref{fig:3qubit} a) the expectation value of the logical $Z_L$ operator initially vanishes but then grows with increasing storage time, as predicted by Eq.~(\ref{eq:global_logical_3_z}). This is counter-intuitive from the perspective of dephasing of physical qubits. Furthermore, this behaviour cannot be described by a quantum channel that originates from a Lindblad master equation with a time-independent rate acting only on the logical qubit.
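Quantitatively, for $\theta=\pi/2$ and $\phi=0$, Eq.~(\ref{eq:global_logical_3_z}) reduces to $R_z=\frac{1}{2}(\mathrm{e}^{-\gamma t/2}-\mathrm{e}^{-9\gamma t/2})$, which vanishes at $t=0$, grows to a maximum at $\gamma t=\ln(9)/4$, and then relaxes back to zero; a quick numerical sketch:
\begin{verbatim}
# R_z(t) for the +1 eigenstate of X_L (theta = pi/2, phi = 0):
# zero at t = 0, transient growth, then relaxation back to zero.
import numpy as np

gt = np.linspace(0.0, 3.0, 301)                   # gamma * t
R_z = 0.5 * (np.exp(-gt / 2) - np.exp(-9 * gt / 2))
i = R_z.argmax()
print(f"max R_z = {R_z[i]:.3f} at gamma*t = {gt[i]:.2f}")
# analytic maximum at gamma*t = ln(9)/4 ~ 0.55
\end{verbatim}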
The expectation values for the +1 and -1 eigenstates of the logical $Z_L$ operator, depicted in figure~\ref{fig:3qubit} b) and c), are expected to show drastically different dynamics according to Eq.~(\ref{eq:global_logical_csp_3_z}), which is reflected in the experimental data.
Animations of the logical qubit behavior on the Bloch sphere can be found in the online supplementary material~\cite{pal_amit_kumar_2020_4321279}.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.8 \textwidth]{3_qubits.pdf}
\caption{Expectation values of the logical Pauli operators and code space population for the 3 qubit code, initially in the a) +1 eigenstate of the logical X operator,
b) +1 eigenstate of the logical Z operator, c) -1 eigenstate of the logical Z operator. The wait time for experimental data is given in units of $T_2=78(12)$\,ms and the theoretical expectation values are multiplied by a constant value of $0.89(3)$.}
\label{fig:3qubit}
\end{figure*}
\subsubsection{Four-qubit Grassl code}
\label{sec:grassl}
Next, we consider the four-qubit QEC code used for correcting erasure noise, as proposed by Grassl \textit{et al.} \cite{PhysRevA.56.33}, defined by the stabilizers
\begin{eqnarray}
S_1&=&X_1X_2X_3X_4,\nonumber\\
S_2&=&Z_3Z_4,\nonumber\\
S_3&=&Z_1Z_2.
\end{eqnarray}
The computational basis corresponding to the logical qubit is given by $\{\ket{0}_L,\ket{1}_L\}$, with
\begin{eqnarray}
\ket{0}_L&=&\ket{\Phi^+}\ket{\Phi^+},\;\ket{1}_L=\ket{\Phi^-}\ket{\Phi^-},
\label{eq:logical_computational_basis_4qubit}
\end{eqnarray}
where $\ket{\Phi^\pm}=\frac{1}{\sqrt{2}}(\ket{00}\pm\ket{11})$, and the logical operators are
\begin{eqnarray}
X_L=Z_1Z_3,\,Z_L=X_1X_2,\,Y_L=-Y_1X_2Z_3.
\label{eq:logical_operator_4qubit}
\end{eqnarray}
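These operator relations can again be verified numerically; the following sketch (our own consistency check, not experimental code) confirms that the stabilizers mutually commute and commute with $X_L$ and $Z_L$, while $X_L$ and $Z_L$ anticommute:
\begin{verbatim}
# Consistency check of the Grassl-code operators as 16x16 matrices.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
kron = lambda *ops: reduce(np.kron, ops)

S1, S2, S3 = kron(X, X, X, X), kron(I2, I2, Z, Z), kron(Z, Z, I2, I2)
X_L, Z_L = kron(Z, I2, Z, I2), kron(X, X, I2, I2)

for A in (S1, S2, S3):
    for B in (S1, S2, S3, X_L, Z_L):
        assert np.allclose(A @ B, B @ A)          # all commute
assert np.allclose(X_L @ Z_L, -Z_L @ X_L)         # logical algebra
\end{verbatim}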
The forms of $\{\ket{0}_L,\ket{1}_L\}$ suggest that the different magnetization values corresponding to the basis states $\ket{b}_l^m$ contributing to $\ket{\psi}_L$ are $m=4,0,-4$. This implies that the coefficients of terms of the form $\ket{b}_l^m\bra{b}_{l^\prime}^{m\pm 4}$ in the density matrix decay as $\exp\left[-2\gamma t\right]$, while the coefficients of the terms of the form $\ket{b}_l^m\bra{b}_{l^\prime}^{m\pm 8}$ have a time dependence given by $\exp\left[-8\gamma t\right]$. These characteristic time-decays yield decays of the expectation values (see Sec.~\ref{subsec:observables}), as given by the following equations:
\begin{eqnarray}
\label{eq:global_logical_4_x}
R_x&=&\sin\theta\cos\phi,\\
\label{eq:global_logical_4_y}
R_y&=&{\rm{e}}^{-2\gamma t}\sin\theta\sin\phi,\\
\label{eq:global_logical_4_z}
R_z&=&{\rm{e}}^{-2\gamma t}\cos\theta,\\
\label{eq:global_csp_4}
p&=&\frac{1}{4}\left[3+\text{e}^{-8\gamma t}+\left(\text{e}^{-8\gamma t}-1\right)\sin\theta\cos\phi\right],\\
\label{eq:global_logical_csp_4_x}
p_x&=&\frac{1}{4}\left[\text{e}^{-8\gamma t}-1+\left(\text{e}^{-8\gamma t}+3\right)\sin\theta\cos\phi\right], \\
\label{eq:global_logical_csp_4_y}
p_y&=&\text{e}^{-2\gamma t}\sin\theta\sin\phi, \\
\label{eq:global_logical_csp_4_z}
p_z&=&\text{e}^{-2\gamma t}\cos\theta.
\end{eqnarray}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8 \textwidth]{4_qubits.pdf}
\caption{Expectation values of the logical Pauli operators and code space population for the 4 qubit code, initially in the a) +1 eigenstate of the logical X operator and b) the +1 eigenstate of the logical Z operator. The evolution time for experimental data is given in units of $T_2=25(5)$ms and the theoretical expectation values are multiplied by a constant value of $0.93(1)$.}
\label{fig:4qubit}
\end{figure*}
The procedure to generate the 4-qubit Grassl code consists of two half-entangling gates MS$(\pi/4)$ with additional local $Z$ rotations $U_z(\theta)=\exp(-i\theta/2S_z)$, with $S_z=\sum_iZ_i$.
For the preparation of the logical state $|0\rangle_L$, two spin echo pulses $U_z(\pi)=\exp(-i\pi/2S_z)$ on qubits 1 and 2 between the MS gates, in addition to two phase correction operations $U_z(-\frac{\pi}{2})=\exp(+i\pi/4S_z)$ on qubits 1 and 3 at the end of the sequence, are implemented.
The preparation of the logical state $|+\rangle_L$ does not require spin echo pulses, and hence the sequence consists of only one fully entangling gate MS$(\pi/2)$ and a single phase correction operation $U_z(\frac{\pi}{2})=\exp(-i\pi/4S_z)$ on qubit 1.
The experimental results for this four-qubit code for the +1 eigenstate of the logical X operator are shown in Figure~\ref{fig:4qubit}a). Here, it is notable that the logical X expectation value does not decay, while the population in the code space decays rapidly to the steady-state value of 0.5.
Figure~\ref{fig:4qubit}b) shows the behavior for the +1 eigenstate of the logical Z operator. Due to miscalibrated single-qubit operations, which we discuss in detail in appendix~\ref{sec:app_calibration}, the experimentally generated eigenstate has been rotated. The theoretical description in Fig.~\ref{fig:4qubit}b) is based on a qubit in the state $\ket{\Psi}_L = \cos(\delta) \ket{0}_L + \sin(\delta) \ket{1}_L$ with $\delta=0.16$ radians.
It is notable that the expectation value of the logical X operator increases with the waiting time if the code was initially close to the +1 eigenstate of the logical Z operator. This behavior is predicted by Eq.~(\ref{eq:global_logical_csp_4_x}). Animations of the logical Bloch vectors are shown in the online supplementary material~\cite{pal_amit_kumar_2020_4321279}. The estimated coherence time is $T_2=25(5)$\,ms. The difference compared to the estimated 3-qubit code coherence time can be explained by the fact that the measurements were taken four months apart, and several changes were made to the experimental apparatus in the meantime.
Note that one could also work with a variation of this code, with logical basis states given by
\begin{eqnarray}
\ket{0}_L&=&\ket{\Psi^+}\ket{\Psi^+},\nonumber\\
\ket{1}_L&=&\ket{\Psi^-}\ket{\Psi^-},
\label{eq:logical_computational_basis_4qubit_exp}
\end{eqnarray}
with $\ket{\Psi^\pm}=\frac{1}{\sqrt{2}}(\ket{01}\pm\ket{10})$. Note that this code is, up to local single-qubit rotations, equivalent to the investigated code defined by the basis states given in Eq.~(\ref{eq:logical_computational_basis_4qubit}); however, it is expected to provide immunity against global dephasing noise.
\section{Conclusions}
\label{sec:conclusion}
In this work, we illustrated that simple physical noise models can lead to non-trivial dynamics of logical qubits, which are not captured by usual relaxation time scales. As shown by the examples explored in this work, deviations from simple exponential decay dynamics of logical qubits are possible even in Markovian systems. However, the behavior of the encoded system can be described by the logical Pauli expectation values in conjunction with the code space population, given by the expectation value of the code-defining stabilizers.
Awareness of these effects is particularly relevant for quantum error correction protocols that protect quantum memories, where a key goal is to extend the information storage time. Here, a careful choice of logical operators, and local-unitary equivalent stabilizer operators, actually matters, and should also be taken into account when analyzing the expected performance of longer algorithms on fault-tolerant hardware.
Extensions of the present work could include the analysis of spatial correlations which are not maximal throughout the entire register, the effect of temporal correlations, and potential generalizations of spin-echo techniques from physical qubits to logical qubits. In this regard, physically Markovian dynamics implies monotonic decay of the physical Bloch volume element~\cite{rivasRMP-2014,Lorenzo-2013}. This property can be translated to the logical level by considering the logical Bloch volume element relative to the code population, namely the volume element induced by the mean values $R_x^c={\rm{Tr}}[\rho_c X_L]$, $R_y^c={\rm{Tr}}[\rho_c Y_L]$ and $R_z^c={\rm{Tr}}[\rho_c Z_L]$ for the conditional state $\rho_c=P_c \rho P_c/p$, which in our previous notation are nothing but $R_x^c=p_x/p$, $R_y^c=p_y/p$ and $R_z^c=p_z/p$. A nonmonotonic decay of this volume element certifies non-Markovian evolution at the logical level.
Furthermore, one could aim at the development of state preparation and measurement error insensitive versions of the characterization protocols used in this work. In the context of characterising logical qubits not only as quantum memories, but also logical gate operations for fault-tolerant quantum computing, first works are aiming at developing logical randomised benchmarking or gate set tomography protocols [REFS].
Finally, an interesting and open challenge concerns the derivation of effective, efficiently simulatable noise models for logical qubits. This is not only relevant for the quantum memory scenario, but also for reliable numerical predictions of the performance of logical gates or gadgets like lattice surgery, state distillation and injection techniques, which will be required for the operation of large, fault-tolerant quantum processors.
\vspace{1cm}
\textbf{Acknowledgements} We gratefully acknowledge funding by the U.S. Army Research Office (ARO) through grant no. W911NF-14-1-0103. We also acknowledge funding by the Austrian Science Fund (FWF), through the SFB BeyondC (FWF Project No. F71), by the Austrian Research Promotion Agency (FFG) contract 872766, by the EU H2020-FETFLAG-2018-03 under Grant Agreement no. 820495, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the U.S. ARO Grant No. W911NF-16-1-0070. All statements of fact, opinions or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government. We acknowledge support from the Institut für Quanteninformation GmbH (Innsbruck, Austria). We acknowledge financial support from the Spanish MINECO grants MINECO/FEDER Projects FIS 2017-91460-EXP,
PGC2018-099169-B-I00 FIS-2018 and from CAM/FEDER Project No. S2018/TCS- 4342 (QUITEMAD-CM). AKP acknowledges support from the National Science Center (Poland) Grant No. 2016/22/E/ST2/00559.
\vspace{5mm}
\noindent \textbf{Author contributions}.
AKP, AE, PS, and MM wrote the manuscript and all authors provided revisions. AKP, MM, PS, and TM developed the research based on discussions with RB, AR, and MAMD. AKP and MM developed the theory. AE and PS performed the experiments and evaluated the data. AE, PS, RB, and TM contributed to the experimental setup. All authors contributed to discussions of the results and the manuscript.\\
\bibliographystyle{plainnat}
\section*{Introduction}
Computing Gromov--Witten invariants of the quintic $3$-fold $X$ has attracted interests of both mathematicians and physicists due to its importance in mirror symmetry, which mainly studies Calabi--Yau $3$-folds.
One effective way to conquer this computation is to relate these invariants to GW invariants of $\mathbb{P}^4$, in which $X$ is embedded. We can then apply virtual localisation \cite{GP99} for the natural torus action on $\mathbb{P}^4$ to compute them.
We will call this principle relating GW invariants of $X$ and $\mathbb{P}^4$ the {\em quantum Lefschetz property}.
The name, quantum Lefschetz, originally comes from the formula between genus $0$ virtual cycles: Let $\iota: M(X)\hookrightarrow M(\mathbb{P}^4)$ be the embedding of the moduli spaces of stable maps to $X\hookrightarrow \mathbb{P}^4$, respectively. On $M(\mathbb{P}^4)$ a coherent sheaf $V:=\pi_*\mathsf{f}^*\cO_{\mathbb{P}^4}(5)$ is defined via the universal curve $\pi:C\to M(\mathbb{P}^4)$ and the universal map $\mathsf{f}:C\to \mathbb{P}^4$. In genus $0$, $M(\mathbb{P}^4)$ is smooth and $V$ is a vector bundle. Then the quantum Lefschetz formula \cite{KKP03} asserts that
\begin{align}\label{naiveQLP1}
\iota_*[M(X)]^{\mathrm{vir}} \ = \ e(V) \ \cap\ [M(\mathbb{P}^4)].
\end{align}
Unfortunately, it turns out that \eqref{naiveQLP1} does not hold for higher genus invariants \cite{Gi98}. So we need more sophisticated version of the quantum Lefschetz property for higher genus invariants.
\smallskip
Meanwhile, the explicit relationship between GW and {\em stable quasimap invariants} of $X$ is known through the wall-crossing formula \cite{CK20, CJR21-1, Zh22}. Since we may expect a relatively simpler version of the quantum Lefschetz property for higher genus quasimap invariants, the wall-crossing formula allows us to study this simpler quantum Lefschetz property in order to compute GW invariants. For instance, the original quantum Lefschetz formula \eqref{naiveQLP1} holds true for genus $1$ quasimap invariants, which dramatically helps the computation of genus $1$ GW invariants \cite{KL18}.
We notice that there have been several interesting quantum Lefschetz formulae for higher genus GW or quasimap invariants, or relationships between invariants of $X$ and other invariants, developed in the last few years \cite{Zi1, Zi08, CZ14, CL15, KL18, CLLL16, FL19, BCM20, CM18, CJRS18, CGLL21, LO18, CGL21, CJR21-2, LO20}. These have led to actual computations of higher genus invariants \cite{Zi2, Po13, KL18, GJR17, FL19, CGL18, GJR18}.
In this paper we introduce one more quantum Lefschetz formula, for genus $2$ quasimap invariants. Our formulae \eqref{qlp1}, \eqref{qlp} contain Zinger-type reduced virtual cycles, which have not yet been studied for genus $\geq 2$ in any of the references above. Since these cycles are expected to have some interesting properties -- such as integrability -- we hope our new formulae will suggest some ideas for studying higher genus invariants.
To construct Zinger-type reduced virtual cycles, we need to study the reduced components on which the cycles are (conjecturally) supported in the moduli spaces of stable maps or stable quasimaps to $\mathbb{P}^n$. This was first addressed in \cite{VZ08,HL10}, where genus $1$ stable maps were studied. Later, \cite{HLN18, BC} studied genus $2$ stable maps in different ways -- \cite{HLN18} is closer to the original idea of \cite{VZ08,HL10}, whereas \cite{BC} uses curves with Gorenstein singularities. Although \cite{BC} treats more general target spaces, we follow the idea of \cite{HLN18} to construct our reduced virtual cycles due to its advantage in computations.
\smallskip
We consider a slightly more general situation. Let $X = \{f_1 = \dots = f_m = 0\}$ be a complete intersection in projective space $\mathbb{P}^n$, where $f_i \in \Gamma(\mathbb{P}^n, \cO_{\mathbb{P}^n}(\ell_i))$. When $n=4$, $m=1$ and $\ell_1=5$, this recovers the quintic threefold $X$. We denote by $Q_{g,k,d}(X)\hookrightarrow Q_{g,k,d}(\mathbb{P}^n)$ the moduli spaces of stable quasimaps to $X\hookrightarrow \mathbb{P}^n$ of genus $g$ and degree $d$ with $k$ marked points. Using the universal curve and map
$$
\xymatrix@R=6mm{
C\ar[r]^-{\mathsf{f}} \ar[d]^-{\pi} & [\mathbb{C}^{n+1}/\mathbb{C}^*]\\
Q_{g,k,d}(\mathbb{P}^n), &
}
$$
we define $V_{g,k,d}:=\oplus_{i=1}^m\pi_*\mathsf{f}^*\cO(\ell_i)$, where $\cO(d):=[\mathbb{C}^{n+1}\times\mathbb{C}/\mathbb{C}^*]$ is a bundle defined by weight $d$ representation. Let $Q^{\mathrm{red}}_{g,k,d}(\mathbb{P}^n)$ be the closure of the open substack in $Q_{g,k,d}(\mathbb{P}^n)$ on which $R^1\pi_*\mathsf{f}^*\cO(1)$ vanishes
$$
Q^{\mathrm{red}}_{g,k,d}(\mathbb{P}^n)\ :=\ \mathrm{closure}\left(Q_{g,k,d}(\mathbb{P}^n)\smallsetminus\mathrm{supp}R^1\pi_*\mathsf{f}^*\cO(1)\right)\ \subset\ Q_{g,k,d}(\mathbb{P}^n).
$$
Then on the proper birational base change $\widetilde{Q}_{g,k,d}(\mathbb{P}^n)\to Q_{g,k,d}(\mathbb{P}^n)$ in Section \ref{desin}, the proper transform of $Q^{\mathrm{red}}_{g,k,d}(\mathbb{P}^n)$ is smooth, and $V_{g,k,d}$ restricted there is a vector bundle. We denote by $\mathbb{L}$ the tautological bundle associated to the marked point, a line bundle formed by the cotangent line at the marked point.
Then we prove the following quantum Lefschetz formula for a Calabi-Yau $3$-fold.
\begin{Thm*}\label{QLP1}
When $X$ is a Calabi-Yau $3$-fold and $d\geq 3$, we have an equivalence in the Chow group of $Q_{2,0,d}(X)$,
\begin{align}\label{qlp1}
[Q_{2,0,d}(X)]^{\mathrm{vir}} = \ & e^{\mathrm{ref}}(V_{2,0,d})\cap [Q_{2,0,d}^{\mathrm{red}}(\mathbb{P}^n)] \\ \nonumber
& - \frac{c_1(\mathbb{L})}{24}\cap [Q_{1,1,d}(X)]^{\mathrm{vir}} \\ \nonumber
& + \frac{1}{24^2}\l(\frac{c_1(\mathbb{L}_1)c_1(\mathbb{L}_2)}{2}-\frac{3(\mathrm{ev}^*_1 c_2(T_X) + \mathrm{ev}^*_2 c_2(T_X))}{2} \r) \cap [Q_{0,2,d}(X)]^{\mathrm{vir}} .
\end{align}
\end{Thm*}
Using the defining section $f=(f_i)_i\in \Gamma(\mathbb{P}^n,\oplus_i\cO(\ell_i))$ of $X\subset \mathbb{P}^n$, the first term on the RHS of \eqref{qlp1} is localised to $Q(X):=Q_{2,0,d}(X)$ via the refined Euler class $e^{\mathrm{ref}}(V_{2,0,d})$ \cite[Section 14.1]{Fu}\footnote{This is called the localised top Chern class there.}, defined by the section $\pi_*\mathsf{f}^*f\in \Gamma(V_{2,0,d})$ cutting out $Q(X)=(\pi_*\mathsf{f}^*f)^{-1}(0)$. The last two terms on the RHS are cycles on $Q(X)$ via the pushforwards of the embeddings,
\smallskip
\begin{enumerate}
\item
$ \iota_1 : \overline{M}_{1,1} \times Q_{1,1,d}(X) \hookrightarrow Q(X)$,
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.2]{./pic2.png}
\end{center}
\end{figure}
\\
\item
$ \iota_2 : \overline{M}_{1,1} \times Q_{0,2,d}(X) \times \overline{M}_{1,1} \xrightarrow{2:1} Q(X)$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.2]{./pic3.png}
\end{center}
\end{figure}
\\
\end{enumerate}
On these loci $\oplus_iR^1\pi_*\mathsf{f}^*\cO(\ell_i)$ does not vanish, which obstructs the original formula \eqref{naiveQLP1}. Note that the image of $\iota_2$ is contained in the image of $\iota_1$, but the rank of $\oplus_iR^1\pi_*\mathsf{f}^*\cO(\ell_i)$ jumps on the image of $\iota_2$.
\smallskip
In fact Theorem \ref{QLP1} for a Calabi-Yau $3$-fold is induced by the following quantum Lefschetz formula in Theorem \ref{QLP} for any complete intersection.
In this general case, we may have a nontrivial contribution from
\begin{enumerate}
\item[(3)]
$ \iota_3 : \overline{M}_{1,2} \times Q'_{0,2,d}(X) \hookrightarrow Q(X)$,
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.25]{./pic4.png}
\end{center}
\end{figure}
\\
where $Q'_{0,2,d}(X)\hookrightarrow Q_{0,2,d}(X)$ is the closed substack on which the two evaluation maps are the same $\mathrm{ev}_1=\mathrm{ev}_2$.
\end{enumerate}
as well. The locus (3) does not contribute for a Calabi-Yau $3$-fold. The three loci in (1), (2) and (3) are exactly the loci where $\oplus_iR^1\pi_*\mathsf{f}^*\cO(\ell_i)$ does not vanish.
Before stating the Theorem, we introduce some (Chow) cohomology classes to simplify the statement. Denoting by $\cH$ the Hodge bundle $\cH:=\pi_*\omega_C$, we define
$$
K\ :=\ \frac{c\,(\cH^\vee \boxtimes \mathrm{ev}^* T_X)}{c\,(\mathbb{L}^\vee \boxtimes \mathbb{L}^\vee)},\
A^t\ :=\ \frac{c\,(\cH^\vee \boxtimes \mathrm{ev}^* T_X)}{c\,(\mathbb{L}^\vee \boxtimes 1)^t},\ B\ :=\ \frac{1}{c\,(1 \boxtimes \mathbb{L}^\vee)}.
$$
And we denote by $K_i$, $A^t_i$, $B_i$ the classes corresponding to the $i$-th marked point, whereas by $[K]_i$, $[A^t]_i$, $[B]_i$ the degree $i$ parts. We also define a (Chow) homology class
\begin{align}\label{Q1'}
[Q'_{0,2,d}(X)]^{\mathrm{vir}} \ :=\ (\mathrm{ev}_1 \times \mathrm{ev}_2)^*\Delta_X \cap [Q_{0,2,d}(X)]^{\mathrm{vir}}
\end{align}
using the diagonal class $\Delta_X\in A^{\mathrm{dim} X}(X\times X)$. The bundle $V_{2,0,d}$ on $Q^{\mathrm{red}}_{2,0,d}(\mathbb{P}^n)$ is defined by $\oplus_i \pi_*\mathsf{f}^*\cO(\ell_i)$.
\begin{Thm*}\label{QLP}
For $d \geq 3$, we have an equivalence in the Chow group of $Q_{2,0,d}(X)$,
\begin{align}
[Q_{2,0,d}(X)]^{\mathrm{vir}} =& \; e^{\mathrm{ref}}(V_{2,0,d}) \cap [Q^{\mathrm{red}}_{2,0,d}(\mathbb{P}^n)] \nonumber \\
& + [K]_{\mathrm{dim} X-1} \cap \left( [\overline{M}_{1,1}] \times [Q_{1,1,d}(X)]^{\mathrm{vir}} \right) \nonumber \\ \label{qlp}
& + \left( \frac{[K_1 K_2]_{2\mathrm{dim} X-2}}{2}-[K_1]_{\mathrm{dim} X-1}[K_2]_{\mathrm{dim} X-1} \right) \cap \l( \, [\overline{M}_{1,1}] \times [Q_{0,2,d}(X)]^{\mathrm{vir}} \times [\overline{M}_{1,1}] \, \r) \\ \nonumber
& + \frac{1}{2} \sum_{a=0}^{\mathrm{dim} X-1} (-1)^a[A^{a+1}_1]_{\mathrm{dim} X-1-a}[B_1B_2]_{a-1} \cap \l( [\overline{M}_{1,2}] \times [Q'_{0,2,d}(X)]^{\mathrm{vir}} \r).
\end{align}
\end{Thm*}
In Remark \ref{333} we explain that $A^a_1=A^a_2$, so the last term is not as strange as it may appear.
\medskip
\subsection*{Acknowledgements}
We are grateful to Jingchen Niu for sharing his expertise on the desingularisations of the genus $2$ moduli spaces.
We also thank Luca Battistella, Navid Nabijou, Richard Thomas for helpful comments.
\subsection*{Notation}
For a morphism $f: X \to Y$ of spaces and a perfect complex $\mathbb{E}$ on $Y$, we often denote by $\mathbb{E}|_X$ the derived pullback $f^*\mathbb{E}$. We sometimes regard a locally free sheaf $E$ as its total space.
We denote by $\mathfrak{M}_{g,k,d}$, or simply by $\mathfrak{M}$, the Artin stack of prestable curves with non-negative integer on each component (playing a role of degree) whose sum is $d$. Similarly $\mathfrak{M}^{line}_{g,k,d}$, or simply $\mathfrak{M}^{line}$, denotes the Artin stack of curves with degree $d$ line bundles. The Artin stack of curves with degree $d$ divisors is denoted by $\mathfrak{M}^{div}_{g,k,d}$, or simply $\mathfrak{M}^{div}$.
We denote by $Q^{(i)}$ the image of $\iota_i$ in picture (i) above, for either the moduli spaces of stable quasimaps or the $p$-field spaces. For instance on $Q^{(3)}$, the evaluation maps (of the $g=0$ quasimap) are the same $\mathrm{ev}_1=\mathrm{ev}_2$. Furthermore, we use the superscript $(i)$ for relevant objects of the embedding $\iota_i$ unless it needs an explanation. For instance a bundle on $Q^{(i)}$ will be denoted with the superscript $(i)$.
For variables with two subindices $y_{ij}$, we say $y_i=0$ if $y_{ij}=0$ for all $j$. Also we say $y=0$ if $y_{ij}=0$ for all $i$ and $j$.
\setcounter{tocdepth}{1}
\tableofcontents
\section{Stable quasimaps, $p$-fields and the plan}\label{Sec1}
\subsection*{Stable quasimaps}
A {\em genus $g$, degree $d$ quasimap to $X$ with $k$ marked points} is a triple $(C,L,u)$ where $C$ is a genus $g$, projective, nodal, prestable curve with $k$ marked points, $L$ is a degree $d$ line bundle on $C$, and $u = (u_0,\dots,u_n)$ is a section of $L^{\oplus n+1}$ such that
\begin{align} \label{cond}
\text{$f_i(u)=0 \in \Gamma(C, L^{\otimes \ell_i})$ for all $i$.}
\end{align}
Here $L$ plays a role of $\mathsf{f}^*\cO(1)$. It is a {\em stable quasimap} if it comes with the stability conditions\footnote{In contrast, $(C,L,u)$ is a {\em stable map} defining Gromov--Witten invariants if it is equipped with the stability conditions $1$. $\omega_{C}^{\log} \otimes L^{\otimes 3}$ is ample on $C$, and $2$. the zero of $u$ is empty.}
\begin{itemize}
\item[-]
$\omega_{C}^{\log} \otimes L^{\varepsilon}$ is ample on $C$ for any $\varepsilon >0$, and
\item[-]
the zero of $u$ is a divisor which does not meet nodes nor marked points.
\end{itemize}
We denote by $Q_{g,k,d}(X)$, or simply by $Q(X)$, the moduli space of stable quasimaps.
By \cite{MOP11, CK10, CKM14}, it is proper and equipped with a natural perfect obstruction theory so that the virtual fundamental class
\begin{align}\label{QX}
\left[ Q(X) \right]^{\mathrm{vir}} \ \in\ A_{\mathrm{vdim}}(Q(X) )
\end{align}
is defined, where $\mathrm{vdim}$ denotes the virtual dimension
\[
\mathrm{vdim} \ =\ (\mathrm{dim} X - 3)(1-g) + k - d \cdot c_1(K_X)([line]).
\]
The {\em stable quasimap invariant of $X$} is defined to be an integration over this virtual class.
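For orientation, in the motivating case of the quintic threefold ($\mathrm{dim} X=3$, $K_X\cong\cO_X$) with $g=2$, $k=0$, a direct substitution gives
\[
\mathrm{vdim}\ =\ (3-3)(1-2)+0-0\ =\ 0,
\]
so the genus $2$ invariants are indeed numbers.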
The reason why the quantum Lefschetz property for quasimap invariants is simpler is that a quasimap does not have a rational component with fewer than two special points (called a rational tail) on its domain curve.
\subsection*{Stable quasimaps with $p$-fields} The coherent sheaf $R^1V:=\oplus_iR^1\pi_*\mathsf{f}^*\cO(\ell_i)$ on $Q(\mathbb{P}^n)$ may not vanish.
We denote by $Q_{p,g,k,d}(\mathbb{P}^n)$, or simply by $Q_p$, its dual space
$$
Q_p \ :=\ \mathrm{Spec} _{\cO_{Q(\mathbb{P}^n) } } \left( \mathrm{Sym} R^1V \right).
$$
So $Q_p$ parametrises $(C,L,u,p=(p_1, ..., p_m))$ where $(C,L,u)$ is a stable quasimap to $\mathbb{P}^n$ and
\begin{align*}
p_i \in \Gamma(C, \omega_C \otimes L^{-\ell_i}).
\end{align*}
Recall that imposing the condition \eqref{cond} defines the space $Q(X)$ from $Q(\mathbb{P}^n)$, whereas the above extra data determines $Q_p$ from $Q(\mathbb{P}^n)$.
We will call the section $p=(p_1,\dots,p_m)$ {\em $p$-fields}.
The space $Q_p$ may not be proper, but still comes with a natural perfect obstruction theory, so that the virtual fundamental class
$
[Q_p]^{\mathrm{vir}} \ \in\ A_{\mathrm{vdim}}(Q_p)
$
is defined.
Using the cosection
$
\mathbb{E}^{\vee}_{Q_p/\mathfrak{M}^{line}} = (R\pi_*\cL^{\oplus n+1} \oplus \bigoplus_i (R\pi_*\cL^{\otimes \ell_i})^{\vee} [-1])|_{Q_p} \longrightarrow \cO_{Q_p}[-1],
$
defined in \cite{CL12}, cosection localisation \cite{KL13} allows us to find a localised class $[Q_p]^{\mathrm{vir}} _{\mathrm{loc}}$ of $[Q_p]^{\mathrm{vir}} $ to a smaller space $j: Q(X) \hookrightarrow Q_p$,
\begin{align}\label{locQp}
[Q_p]^{\mathrm{vir}} _{\mathrm{loc}}\ \in\ A_{\mathrm{vdim}}(Q(X)), \ \ \ j_*[Q_p]^{\mathrm{vir}} _{\mathrm{loc}} \ =\ [Q_p]^{\mathrm{vir}} .
\end{align}
Then by \cite{KO18, CL20, CJW21, Pi20}, the localised class $[Q_p]^{\mathrm{vir}} _{\mathrm{loc}}$ is equal to the class $[Q(X)]^{\mathrm{vir}} $ for $X$ defined in \eqref{QX} up to a sign
\begin{align}\label{X=p}
[Q(X)]^{\mathrm{vir}} \ =\ (-1)^{d(\sum_i \ell_i) + m(1-g) }[Q_p]^{\mathrm{vir}} _{\mathrm{loc}}.
\end{align}
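For instance, for the quintic ($m=1$, $\ell_1=5$) in genus $g=2$, a direct substitution gives the sign $(-1)^{5d+(1-2)}=(-1)^{5d-1}=(-1)^{d+1}$.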
\subsection*{Plan of the proof of Theorem \ref{QLP}}
We use \eqref{X=p} to prove Theorem \ref{QLP}.
An advantage of working with $Q_p$ instead of $Q(X)$ is that we can find a nice enough local cut-out model of $Q_p$, whereas this is hard for $Q(X)$.
In Section \ref{Kura}, we describe this explicit cut-out model of $Q_p$ after the suitable base-change of $Q_p$ in Section \ref{desin}.
Using this, we compute the intrinsic normal cone of $Q_p$ in Section \ref{virdecomp} to obtain a decomposition of the virtual class
\begin{align}\label{Qdec}
[Q_{p,2,0,d}]^{\mathrm{vir}} _{\mathrm{loc}} \ =\ [Q_p^{\mathrm{red}}]^{\mathrm{vir}} \ +\ [Q_p^{(1)}]^{\mathrm{vir}} \ +\ [Q_p^{(2)}]^{\mathrm{vir}} \ +\ [Q_p^{(3)}]^{\mathrm{vir}} .
\end{align}
Note that the indices `red', `$(1)$', `$(2)$' and `$(3)$' reflect their geometric origins labelled above. So $Q_p^{(1)},Q_p^{(2)},Q_p^{(3)}$ are supported on the images of the node-identifying morphisms $\iota_i$, ignoring $p$-fields. In fact, we will see in Section \ref{virdecomp} that they are bundles over these images.
\medskip
Then in Section \ref{reduced}, we prove that $[Q_p^{\mathrm{red}}]^{\mathrm{vir}} $ follows the original quantum Lefschetz formula \eqref{naiveQLP1}
$$
[Q_{p}^{\mathrm{red}}]^{\mathrm{vir}} \ =\ (-1)^{d(\sum_i \ell_i) - m} \; e^{\mathrm{ref}}(V_{2,0,d})\ \cap\ [Q_{2,0,d}^{\mathrm{red}}(\mathbb{P}^n)].
$$
We then show that the $i$-th cycle $[Q_p^{(i)}]^{\mathrm{vir}} $ accounts for part of the RHS of \eqref{qlp}. For $i=1$, for instance, we obtain
\begin{align}\label{Q1vir}
[Q_p^{(1)}]^{\mathrm{vir}} \ =\ (-1)^m\left[ \frac{c(\cH^\vee \boxtimes \mathrm{ev}^*T_X)}{c(\mathbb{L}^\vee \boxtimes \mathbb{L}^\vee)} \right]_{n-m-1} \cap \left( [\overline{M}_{1,1}] \times [Q^{\mathrm{red}}_{p,1,1,d}]^{\mathrm{vir}} \right)
\end{align}
via the pushforward by $\iota_1$. A brief interpretation of this equality is that the difference of the obstruction bundles defining $[Q_p^{(1)}]^{\mathrm{vir}} $ and $[Q^{\mathrm{red}}_{p,1,1,d}]^{\mathrm{vir}} $ (in the $K$-group of $Q_p^{(1)}$, via the pullback) can be written in terms of the bundle structure of $Q_p^{(1)}$ over the image of $\iota_1$ as well as the pullback bundles $\cH^\vee \boxtimes \mathrm{ev}^*T_X$ and $\mathbb{L}^\vee \boxtimes \mathbb{L}^\vee$. To turn this interpretation into an actual proof, we massage the spaces and bundles -- deformations, blowups and twistings by divisors, etc. -- in Section \ref{LOT} so that we arrive at the tidy form \eqref{Q1vir}. Once this is done for all $i$, then by using \cite[Corollary 1.3]{LL22}
$$
e^{\mathrm{ref}}(V_{1,1,d}) \cap [Q_{1,1,d}^\mathrm{red}(\mathbb{P}^n)] \ =\ [Q_{1,1,d}(X)]^{\mathrm{vir}} - \frac{c_1(\mathbb{L}_2)}{12} [Q_{0,2,d}(X)]^{\mathrm{vir}}
$$
together with \eqref{X=p}, the decomposition \eqref{Qdec} proves Theorem \ref{QLP}.
\section{Local defining equations of the $p$-field space} \label{Kura}
For a morphism of vector bundles $d: A \to B$ over a smooth Artin stack $M$, we consider the kernel of $d$ as a space
$$
\mathrm{ker} d\ :=\ \mathrm{Spec} _{\cO_M}\left(\mathrm{Sym} (\mathrm{coker} d^*)\right)\ \subset \ A\ =\ \mathrm{Spec} _{\cO_M}\left(\mathrm{Sym} A^*\right).
$$
Denoting by $\tau_A$ the tautological section, $\mathrm{ker} d$ has a cut-out model
$$
\xymatrix@=18pt{
& B|_{A} \ar[d] \\
\mathrm{ker} d \ := \ (d\circ\tau_{A})^{-1}(0)\ \subset\hspace{-6mm} & A.\ar@/^{-2ex}/[u]_{d\circ \tau_{A}}}
$$
Hence the pullback complex $\{d: A \to B\}|_{\mathrm{ker} d}$ defines a dual relative perfect obstruction theory of $\mathrm{ker} d$ over $M$.
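To fix ideas, here is a minimal example of this construction. Take $M = \mathbb{A}^1 = \mathrm{Spec} \, \mathbb{C}[t]$, $A = B = \cO_M$ and $d = t\,\cdot\,$. Then $\mathrm{coker} \, d^* \cong \mathbb{C}[t]/(t)$ and
\[
\mathrm{ker} d \ =\ \mathrm{Spec} \, \mathbb{C}[t,a]/(ta) \ =\ \{a=0\} \cup \{t=0\} \ \subset\ A = \mathbb{A}^2,
\]
which is precisely the zero locus of the function $d \circ \tau_A = ta$ on the total space $A$. The local rings $B[x,p]/(c_1x_{1j}, \dots)$ appearing in Section \ref{virdecomp} are multi-variable versions of this picture.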
The purpose of this section is to write $Q_p$ as (an open substack of) $\mathrm{ker} d$ over $\mathfrak{M}^{div}$.
\subsection{Cut-out model of the $p$-field space}\label{Ku}
Unlike for $\mathfrak{M}^{line}$, there is no canonical forgetful morphism $Q_p \to \mathfrak{M}^{div}$ from the $p$-field space. But one is defined locally as follows. Since $u=(u_0, ..., u_n)$ is not identically zero for a point $(C,L,u,p) \in Q_p$, we can pick a combination ${\bf u}=\sum a_iu_i\in H^0(C,L)$ which is nonconstant on each component of $C$ on which $L$ has positive degree. Since this is an open condition, we obtain a morphism
$$
Q_p \ \longrightarrow\ \mathfrak{M}^{div}, \ \ \ (C,L,u,p) \ \longmapsto \ (C, {\bf u}^{-1}(0))
$$
on a local neighborhood.
Let $\cD$ be the universal divisor on the universal curve $\pi: \cC\to \mathfrak{M}^{div}$. Then locally, $Q_p$ is the open substack (defined by the stability condition) of the kernel of a representative
\begin{align} \label{localpot}
[A \stackrel{d}{\longrightarrow } B]\ \cong\ R\pi_*\cO_{\cC}(\cD)^{\oplus n} \oplus \bigoplus_i \left( R\pi_*\cO_{\cC}(\ell_i\cD)[1] \right)^{\vee}.
\end{align}
Hence it defines a local cut-out model relative to $\mathfrak{M}^{div}$ and a natural (local) perfect obstruction theory.
Since we work locally, we may assume $A$ and $B$ are trivial bundles. Then $d$ can be regarded as a $\mathbb{C}^{\mathrm{rank} B}$-valued function
\begin{align}\label{OOO}
d\ :\ \mathfrak{M}^{div}\times\mathbb{C}^{\mathrm{rank} A}\ \longrightarrow\ \mathbb{C}^{\mathrm{rank} B}
\end{align}
defining $Q_p$ as (an open substack of) its zero locus. In the rest of the section, we find a simple expression of $d$ by coordinate changes and blowups.
\subsection{Key Lemma}
From now on we focus on $(g,k)=(2,0)$ throughout the section. We work {\em \'etale locally} on $\mathfrak{M}^{div}$, sometimes without mentioning it. For instance, by an element of $\Gamma(\cO_{\mathfrak{M}^{div}})$ we mean an \'etale local function on $\mathfrak{M}^{div}$.
As we have explained in the Introduction, considering stable quasimaps has a big advantage in making the quantum Lefschetz formula less complicated than considering stable maps. But there is (essentially only) one technical thing to check, which is obvious for stable maps -- near a domain curve of a stable map $f: C \to \mathbb{P}^n$, $f^*\cO(1)$ is linearly equivalent to $\cO(\sum_{i=1}^d \cD_i)$ with disjoint, fiberwise degree $1$ divisors $\cD_i$. Unfortunately this is not immediately seen near a domain curve of a stable quasimap. Since this was the important starting point for finding local cut-out models for stable map moduli spaces in \cite{HL10, HLN18}, we need the following Lemma.
In fact, the Lemma is quite general -- it holds near any prestable curve of genus $2$, including a domain curve of a stable quasimap. Let $\cD$ be any effective divisor of degree $d \geq 3$ on the universal curve $\cC$ of $\mathfrak{M}$.
\begin{lemm}\label{divisor splitting}
Locally $\cD$ is linearly equivalent to a sum $\sum_{i=1}^d \cD_i$ of disjoint divisors of degree $1$ at each fiber.
\end{lemm}
The key idea of the proof is to construct a covering map $\cC\to \mathbb{P}^1$ by picking two linearly independent sections in $H^0(\cC,\cO(\cD))$, a space of dimension $d+1-g+\mathrm{dim} H^1(\cC,\cO(\cD))\geq 2$, having no common zeros. Then the inverse image of a generic point of $\mathbb{P}^1$ consists of $d$-many distinct points.
\begin{proof}
Pick any local divisor $\cB$ on $\cC$ lying on the minimal genus $2$ subcurve, having degree $1$ at each fiber and not meeting $\cD$.
Because $\cB \cap \cD = \emptyset$, the evaluation morphism $\pi_*(\cO_{\cC}(\cD)) \to \cO_{\cC}(\cD)|_{\cB} \cong \cO_{\mathfrak{M}}$ is surjective, where $\pi : \cC \to \mathfrak{M}$ denotes the projection morphism. This induces an exact sequence
\begin{align}\label{DBses}
0\ \to\ \pi_* \cO_{\cC}(\cD-\cB)\ \to\ \pi_*(\cO_{\cC}(\cD))\ \to\ \cO_{\mathfrak{M}}\ \to\ 0.
\end{align}
Meanwhile, as in \cite[Section 2.3]{HLN18}, we can choose other divisors $\cA_1$ and $\cA_2$ lying on the minimal genus $2$ subcurve such that
\begin{itemize}
\item
$\cA_1$, $\cA_2$, $\cB$ are disjoint to each other, and
\item
$\cA_1$, $\cA_2$ lie on different components if the genus $2$ component consists of two genus $1$ components.
\end{itemize}
Similarly, by \cite[Equation (2.5)]{HLN18}, we obtain a sequence
\begin{align}\label{DBAses}
0\ \to\ \pi_* \cO_{\cC}(\cD -\cB)\ \to\ \xymatrix@C=20mm{\pi_* \cO_{\cC}(\cD + \cA_1 + \cA_2-\cB)\ \ar[r]^-{\mathrm{ev}_{\cA_1} \oplus \; \mathrm{ev}_{\cA_2}}& \cO_{\mathfrak{M}}^{\oplus 2}.}
\end{align}
Note that $\pi_* \cO_{\cC}(\cD + \cA_1 + \cA_2-\cB)$ is a rank $d$ vector bundle and hence locally is $\cO_{\mathfrak{M}}^{\oplus d}$.
Since $d \geq 3$, we can pick a nonzero local section $s\in \Gamma\left(\pi_* \cO_{\cC}(\cD + \cA_1 + \cA_2-\cB)\right)$ mapping to $0$ by $\mathrm{ev}_{\cA_1} \oplus \mathrm{ev}_{\cA_2}$.
Then it factors through $\pi_* \cO_{\cC}(\cD -\cB)$, and hence, by \eqref{DBses}, it can be considered as a section
$$
s\ :\ \cO_{\cC} \ \longrightarrow\ \cO_{\cC}(\cD),
$$
zero on $\cB$. Since the canonical section $s_{\cD}$ of $\cD$ does not vanish on $\cB$, $s_{\cD}$ and $s$ are linearly independent on every fiber.
The common zero $\cD'$ of $s$ and $s_{\cD}$ then has fiberwise degree $d' \leq d-1$ (which may not be constant from fiber to fiber) because $s$ is zero on $\cB$ but $s_{\cD}$ is not. Then at a fiber, the sections $s \otimes s^{-1}_{\cD'}$, $s_{\cD} \otimes s^{-1}_{\cD'}$ of $\cO(\cD-\cD')$ define a degree $d-d'$ morphism $\phi: \cC \to \mathbb{P}^1$. Since it cannot be of degree $1$ (which would mean $\phi$ is an isomorphism), we actually have $d' \leq d-2$.
A generic fiber $\phi^{-1}([a;b])$ consists of distinct divisors $\cD_1$, ..., $\cD_{d-d'}$ away from $\cD'$, and hence we have
$$
\cO_{\cC}(\cD-\cD') \ \cong\ \cO_{\cC}\left(\sum_{i=1}^{d-d'} \cD_i\right).
$$
Note that since $\cD' + \sum \cD_i$ is defined by $bs-as_{\cD}$, this isomorphism holds not only at the fiber but locally on $\mathfrak{M}$.
If $d' \geq 3$, we repeat the same procedure with $\cD$ replaced by $\cD'$ until we get $d' \leq 2$. This proves the lemma unless $d'=2$. Now let us assume that $d'=\mathrm{deg} \cD' =2$. Running the same procedure for $\cD:=\cD'+\cD_1$, which has degree $3$, the procedure terminates since $d'\leq \mathrm{deg}\cD-2=3-2=1$. Hence the proof is complete.
\end{proof}
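To illustrate the induction in the proof: if $d=5$, the first run of the construction yields $d' \leq 3$. If $d'=3$, one more run reaches $d' \leq 1$; if $d'=2$, we instead run the construction for $\cD'+\cD_1$, of degree $3$, and reach degree $\leq 1$. A divisor of fiberwise degree $\leq 1$, together with the accumulated divisors $\cD_i$, gives the asserted splitting.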
Considering the universal divisor $\cD$ on the universal curve $\cC$ on $\mathfrak{M}^{div}$, we obtain the following immediate corollary from the exact sequences \eqref{DBses}, \eqref{DBAses} in the proof of Lemma \ref{divisor splitting}.
\begin{coro} \label{twoisos} In the derived category of a local neighborhood of $\mathfrak{M}^{div}$, we obtain an isomorphism induced by \eqref{DBses}
\begin{align*
R\pi_* \cO_{\cC}(\cD) \ & \cong\ R\pi_* \cO_{\cC}(\cD - \cB) \oplus [\cO_{\mathfrak{M}^{div}} \stackrel{0}{\longrightarrow } 0].
\end{align*}
And the sequence \eqref{DBAses} induces an isomorphism
\begin{align*
R\pi_* \cO_{\cC}(\cD - \cB) \ & \cong\ \left[ \xymatrix@C=20mm{\pi_* \cO_{\cC}(\cD + \cA_1 + \cA_2 - \cB) \ar[r]^-{\tiny{\mathrm{ev}_{\cA_1}\oplus \; \mathrm{ev}_{\cA_2} }} & \cO_{\mathfrak{M}^{div}}^{\oplus 2}} \right].
\end{align*}
\end{coro}
In addition, an idea similar to \cite[Lemma 2.4.1]{HLN18} gives one more isomorphism.
\begin{lemm} \label{oneiso}
The canonical monomorphisms induce an isomorphism
\begin{align*
& \oplus^d_{i=1} \pi_* \cO_{\cC}(\cD_i + \cA_1 + \cA_2 - \cB)
\ \cong\ \pi_* \cO_{\cC}(\cD_1 + \cdots + \cD_d + \cA_1 + \cA_2 - \cB).
\end{align*}
\end{lemm}
Combining Lemma \ref{divisor splitting}, Corollary \ref{twoisos} and Lemma \ref{oneiso}, we observe that $R\pi_* \cO_{\cC}(\cD)$ is quasi-isomorphic to
\begin{align}\label{ess}
\left[ \xymatrix@C=15mm{\oplus^d_{i=1} \pi_* \cO_{\cC}(\cD_i + \cA_1 + \cA_2 - \cB) \ar[r]^-{\tiny{\mathrm{ev}_{\cA_1}\oplus \; \mathrm{ev}_{\cA_2} }} & \cO_{\mathfrak{M}^{div}}^{\oplus 2}} \right] \oplus [\cO_{\mathfrak{M}^{div}} \stackrel{0}{\longrightarrow } 0].
\end{align}
\subsection{Diagonalisation of the local representative}\label{diagc}
Picking any local identification $\pi_* \cO_{\cC}(\cD_i + \cA_1 + \cA_2 - \cB) \cong \cO_{\mathfrak{M}^{div}}$, the morphism $\mathrm{ev}_{\cA_1}\oplus \mathrm{ev}_{\cA_2}$ in \eqref{ess} can be written as a $2 \times d$ matrix $(c_{ji})$, $c_{ji} \in \Gamma(\cO_{\mathfrak{M}^{div}})$.
The goal of this section is to transform the matrix $(c_{ji})$ to a nice diagonal form
$$
(c_{ji}) \sim \left(
\begin{array}{ccccc}
c_1 & 0 & 0 & \cdots & 0 \\
0 & c_2 & 0 & \cdots & 0
\end{array}
\right) =: c
$$
by using row and column operations. In fact this has already been carried out by Hu--Li--Niu \cite[Section 5]{HLN18}. They found a diagonal form $c$ on a neighborhood of a fixed point in $\mathfrak{M}^{div}$. The description of $c$ depends on the type of the boundary component in which the point lies. We list some cases which will appear for domain curves of stable quasimaps.
\smallskip
\noindent (1). Near a point in the image of $\overline{M}_{1,1}\times \mathfrak{M}^{div}_{1,1,d} \hookrightarrow \mathfrak{M}^{div}_{2,0,d}$ one can find a diagonal matrix $c$ to be
$$
c_1\ =\ 1, \ \ \ c_2\ =\ \zeta,
$$
where $\zeta$ is the node smoothing function in $\Gamma(\cO_{\mathfrak{M}^{div}})$. The proof comes directly from \cite[Section 5.3]{HLN18}.
\smallskip
\noindent (2). Near a point in the image of $\iota_2 : \overline{M}_{1,1}\times \mathfrak{M}^{div}_{0,2,d}\times \overline{M}_{1,1} \xrightarrow{2:1} \mathfrak{M}^{div}_{2,0,d}$ one can find it to be
$$
c_1\ =\ \zeta_1, \ \ \ c_2\ =\ \zeta_2,
$$
where $\zeta_1$ and $\zeta_2$ are the node smoothing functions in $\Gamma(\cO_{\mathfrak{M}^{div}})$. The proof is in \cite[Section 5.5]{HLN18}. Note that the diagonal form in (1) is recovered on the locus $\zeta_2\neq0$.
\smallskip
\noindent (3). Near a point in the image of $\overline{M}_{1,2}\times \mathfrak{M}^{div}_{0,2,d} \hookrightarrow \mathfrak{M}^{div}_{2,0,d}$ we need a blowup to obtain a diagonal transform of $(c_{ji})$. Before we discuss it in the following Section, we introduce some useful facts which we will use.
The entries $c_{ji}$ in the matrix $(c_{ji})$ are non-vanishing functions by \cite[Section 5.4]{HLN18}. Therefore the matrix $(c_{ji})$ can be transformed to
\begin{align}\label{mat3}
\left(
\begin{array}{cccccc}
1 & 0 & 0 & 0 & \cdots & 0 \\
0 & \mathrm{det}_{12} & \mathrm{det}_{13} & \mathrm{det}_{14} & \cdots & \mathrm{det}_{1d}
\end{array}
\right)
\end{align}
where $\mathrm{det}_{k\ell} := \mathrm{det} \left( \begin{array}{cc}
c_{1k} & c_{1\ell} \\
c_{2k} & c_{2\ell}
\end{array}
\right)$.
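For the reader's convenience, here is the elementary manipulation behind \eqref{mat3}: since $c_{11}$ is invertible, the column operations $C_i \mapsto C_i - (c_{1i}/c_{11})C_1$ for $i \geq 2$ clear the first row, the row operation $R_2 \mapsto R_2 - (c_{21}/c_{11})R_1$ clears the $(2,1)$-entry, and rescaling the first column gives
\[
(c_{ji}) \ \sim\ \left(
\begin{array}{ccccc}
1 & 0 & 0 & \cdots & 0 \\
0 & \mathrm{det}_{12}/c_{11} & \mathrm{det}_{13}/c_{11} & \cdots & \mathrm{det}_{1d}/c_{11}
\end{array}
\right),
\]
whose second row agrees with that of \eqref{mat3} up to the unit $c_{11}$.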
By \cite[Section 5.4]{HLN18} and \cite[Lemma 2.8.2]{HLN18}, we may assume that the first two determinants can be written as
\[
\mathrm{det}_{12} = \zeta_1 + a \cdot \zeta_2,\ \ \ \mathrm{det}_{13} = \zeta_2 + b \cdot \zeta_1,
\]
where $\zeta_1$ and $\zeta_2$ are the node smoothing functions.
\smallskip
\noindent (4). Near a generic domain curve from the reduced space, one can find it to be
$$
c_1\ =\ 1, \ \ \ c_2\ =\ 1.
$$
The proof is in \cite[Section 5.2]{HLN18}.
This diagonal form is recovered from (1) on the locus $\zeta\neq0$.
\subsection{Base change}\label{desin}
Consider the blowup spaces
$$
\widetilde{\mathfrak{M}} := \mathrm{Bl}_{\mathfrak{M}_{1,2,0} \times \mathfrak{M}_{0,2,d}} \mathfrak{M}_{2,0,d} \ \textrm{ and } \
\widetilde{\mathfrak{M}}^{div} \ :=\ \mathfrak{M}^{div}\times_{\mathfrak{M}} \widetilde{\mathfrak{M}}.
$$
On $\widetilde{\mathfrak{M}}^{div}$, the matrix \eqref{mat3} can be transformed to be a diagonal form.
Locally the boundary component $\mathfrak{M}_{1,2,0} \times \mathfrak{M}_{0,2,d}$ is $\{\zeta_1=\zeta_2=0\}$. Thus on a neighborhood of the exceptional divisor, we know either $\zeta_1|\zeta_2$ or $\zeta_2|\zeta_1$. Without loss of generality, we may assume that $\zeta_1|\zeta_2$. Then the matrix \eqref{mat3} can be transformed to
$$
\left(
\begin{array}{ccccc}
1 & 0 & 0 & \cdots & 0 \\
0 & \zeta_1 & 0 & \cdots & 0
\end{array}
\right).
$$
Hence on the blowup, the matrix $c$ for the case (3) in Section \ref{diagc} has the form $c_1=1$, $c_2=\zeta_1$. Furthermore, the diagonal form in (4) is recovered from this on the locus $\zeta_1\neq 0$.
The global forgetful morphism $Q_p \to \mathfrak{M}$ defines the base change $b:\widetilde{Q}_p := Q_p \times_{\mathfrak{M}} \widetilde{\mathfrak{M}}\to Q_p$. Then the pullbacks give rise to the cut-out model, perfect obstruction theory, and cosection so that the cosection localised virtual cycle
$$
[\widetilde{Q}_p]^{\mathrm{vir}} _{\mathrm{loc}} \ \in\ A_{\mathrm{vdim}}(\widetilde{Q}_X)
$$
is defined. By \cite[Theorem 5.0.1]{Co06}, we have
$
b_*[\widetilde{Q}_p]^{\mathrm{vir}} _{\mathrm{loc}} \ =\ [Q_p]^{\mathrm{vir}} _{\mathrm{loc}}.
$
\subsection{Local cut-out model of $\widetilde{Q}_p$}\label{KuQt}
Recall that we obtained an explicit representative \eqref{ess} of $R\pi_*\cO(\cD)$ with the diagonal matrices $c$ in Section \ref{diagc} and Section \ref{desin} as its differential morphism. We emphasise once again that $\cD$ need not be the universal divisor, cf. Lemma \ref{divisor splitting}. So we apply these diagonalisations to get a local cut-out model not only of $Q(\mathbb{P}^n)$, but also of the $p$-field space $\widetilde{Q}_p$, relative over $\widetilde{\mathfrak{M}}^{div}$ as discussed in Section \ref{Ku}. The induced local defining equation \eqref{OOO} is
\begin{align}\label{KuModel}
\xymatrix@C=0mm@R=7mm{
\mathbb{C}^{2n} \times \prod^m_{i=1} \left( \mathbb{C}^2 \oplus \mathbb{C}^{d\ell_i-1} \right) &\ni & (c_1(z)x_{1j}, c_2(z)x_{2j}) \times \prod_i \left( (c_1(z) p_{1i}, c_2(z)p_{2i}) , 0 \right) \\
\widetilde{\mathfrak{M}}^{div} \times \prod_{j=1}^{n} \left(\mathbb{C}^{2} \times \mathbb{C}^{d-1}\right) \times \mathbb{C}^{2m} \ar[u]_{ c\; \circ \tau} & \ni & \{z\} \times ((x_{1j},x_{2j}))_{1\leq j \leq n} \times \{v\} \times ((p_{1i},p_{2i}))_{1 \leq i \leq m}. \ar@{|->}[u]_{ c\; \circ \tau}
}
\end{align}
Some explanation is in order.
The morphism $\prod_{j=1}^n\left(\mathbb{C}^{2} \times \mathbb{C}^{d-1}\right) \to \mathbb{C}^{2n}$ above represents $R\pi_*\cO(\cD)^{\oplus n}$, and $\mathbb{C}^{2m}\to \prod^m_{i=1} \left( \mathbb{C}^2 \oplus \mathbb{C}^{d\ell_i-1} \right)$ represents $\left(\oplus_{i}R\pi_*\cO(\ell_i\cdot\cD)[1]\right)^\vee$; their direct sum defines the local perfect obstruction theory \eqref{localpot}.
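As a sanity check of \eqref{KuModel}, the ranks match the Euler characteristics on a genus $2$ fiber $C$: for each $j$,
\[
(d+1) - 2 \ =\ d-1 \ =\ \chi(\cO_C(\cD)),
\]
while for each $i$ we have $2 - (d\ell_i + 1) = -(d\ell_i - 1) = -\chi(\cO_C(\ell_i\cD))$, as it must be for the shifted dual complex $\left(R\pi_*\cO_{\cC}(\ell_i\cD)[1]\right)^\vee$.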
\section{Perfect obstruction theories, cones and virtual cycles}
\subsection{Perfect obstruction theories}\label{LtoG}
Although the cut-out model \eqref{KuModel} is useful for computations, it has two crucial drawbacks. One is that it is not global, and the other is that it does not give a cut-out model over $\widetilde{\mathfrak{M}}^{line}$, since $\widetilde{\mathfrak{M}}^{div}\to\widetilde{\mathfrak{M}}^{line}$ is not smooth. For later use it is important to know how computations with the cut-out model \eqref{KuModel} carry over to the perfect obstruction theory over $\widetilde{\mathfrak{M}}$ or $\widetilde{\mathfrak{M}}^{line}$. In this section, we explain this.
\smallskip
First we recall the perfect obstruction theories. The {\em local} relative perfect obstruction theory over $\widetilde{\mathfrak{M}}^{div}$ is
\begin{align*}
\mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}^{div}} = \left(R\pi_*\cO_{\cC}(\cD)^{\oplus n} \right)^\vee\oplus \bigoplus_i R\pi_*\cO_{\cC}(\ell_i\cD) [1],
\end{align*}
which is just the pullback of \eqref{localpot}. Over $\widetilde{\mathfrak{M}}^{line}$, $\widetilde{Q}_p$ is equipped with the {\em global} perfect obstruction theory
$$
\mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}^{line}} = \left(R\pi_*\cL^{\oplus n+1} \right)^\vee\oplus \bigoplus_i R\pi_*\cL^{\otimes \ell_i} [1],
$$
where $\cL$ is the universal line bundle over the universal curve $\pi : \cC \to \widetilde{Q}_p$. Over $\widetilde{\mathfrak{M}}$, the cone of the composition
$$
\mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}^{line}}[-1]\ \longrightarrow\ \mathbb{L}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}^{line}}[-1]\ \longrightarrow\ \mathbb{L}_{\widetilde{\mathfrak{M}}^{line}/\widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p}
$$
defines the {\em global} perfect obstruction theory $\mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}}$. Here $\mathbb{L}$ denotes the cotangent complex. Then we have the following diagram of triangles
$$
\xymatrix@C=4mm@R=4mm{
\mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}} \ar[r]\ar@{=}[d] & \mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}^{line}}\ar[r]\ar[d] & \mathbb{L}_{\widetilde{\mathfrak{M}}^{line}/\widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p}[1] \ar[d] \\
\mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}} \ar[r] & \mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}^{div}}\ar[r]\ar[d] & \mathbb{L}_{\widetilde{\mathfrak{M}}^{div}/\widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p}[1]\ar[d]\\
&\mathbb{L}_{\widetilde{\mathfrak{M}}^{div}/\widetilde{\mathfrak{M}}^{line}}|_{\widetilde{Q}_p}[1] \ar@{=}[r] & \mathbb{L}_{\widetilde{\mathfrak{M}}^{div}/\widetilde{\mathfrak{M}}^{line}}|_{\widetilde{Q}_p}[1].
}
$$
In particular the middle horizontal triangle tells us that the local cut-out model \eqref{KuModel} defines $\mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}}$ as well as $\mathbb{E}_{\widetilde{Q}_p/ \widetilde{\mathfrak{M}}^{div}}$ since $\widetilde{\mathfrak{M}}^{div}\to\widetilde{\mathfrak{M}}$ is smooth.\footnote{Beware that the local model \eqref{KuModel} does not define $\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}^{line}}$ immediately because $\widetilde{\mathfrak{M}}^{div}\to \widetilde{\mathfrak{M}}^{line}$ is not smooth.}
So one way to pass from local to global is to consider the forgetful morphism $\widetilde{\mathfrak{M}}^{div} \to \widetilde{\mathfrak{M}}$. Via the morphism of perfect obstruction theories
$$
\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}\ \longrightarrow\ \mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}^{div}},
$$
computations can be moved from one to the other, where the former is global whereas the latter is local. For instance, the smoothness shows that the two intrinsic normal cones
\begin{align*
\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}^{div}} \ \text{ and }\ \mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}
\end{align*}
are related: the former maps to the latter via the morphism of bundle stacks
\begin{align*}
h^1/h^0\left(\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}^{div}}^{\vee}\right)\ \longrightarrow\ h^1/h^0\left(\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}^{\vee}\right),
\end{align*}
which is actually an affine $T_{\widetilde{\mathfrak{M}}^{div}/\widetilde{\mathfrak{M}}}$-bundle. The precise proof is in \cite[Proposition 3]{KKP03}, but it is more or less obvious thanks to the smoothness. Then the local computation of the cone on the LHS using the cut-out model \eqref{KuModel} will give the computation of the cone on the RHS.
\medskip
For $\widetilde{\mathfrak{M}}^{line}$, the solution is to consider the forgetful morphism $\widetilde{\mathfrak{M}}^{line} \to \widetilde{\mathfrak{M}}$. Since it is smooth as well, the morphism of perfect obstruction theories
$$
\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}\ \longrightarrow\ \mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}^{line}},
$$
induces the relationship of the two intrinsic normal cones
\begin{align*
\mathfrak{C}_{ \widetilde{Q}_p / \widetilde{\mathfrak{M}}^{line}}\ \text{ and }\ \mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}},
\end{align*}
namely, the former maps to the latter via the morphism of bundle stacks
\begin{align*}
h^1/h^0\left(\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}^{line}}^{\vee}\right)\ \longrightarrow\ h^1/h^0\left(\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}^{\vee}\right)
\end{align*}
as before. It is also an affine $T_{\widetilde{\mathfrak{M}}^{line}/\widetilde{\mathfrak{M}}}$-bundle, and so is $\mathfrak{C}_{ \widetilde{Q}_p / \widetilde{\mathfrak{M}}^{line}}$ over $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}$.
\subsection{Virtual cycles} \label{virdecomp}
As briefly explained in Section \ref{Sec1}, the space $\widetilde{Q}_p$ is decomposed into four irreducible components
\begin{align}\label{qdecomp}
\widetilde{Q}_p \ =\ \widetilde{Q}_p^{\mathrm{red}} \ \cup\ \widetilde{Q}_p^{(1)} \ \cup\ \widetilde{Q}_p^{(2)} \ \cup\ \widetilde{Q}_p^{(3)},
\end{align}
cf. the pictures in the Introduction. From the local cut-out model \eqref{KuModel}, relative over $\widetilde{\mathfrak{M}}^{div}$, an \'etale local neighborhood of $\widetilde{Q}_p$ is the spectrum of the ring
$$
R\ :=\ B[x,p]\; /(c_1x_{1j}, c_2x_{2j},c_1p_{1i}, c_2p_{2i}),
$$
where $\mathrm{Spec} (B)$ is a smooth neighborhood in $\widetilde{\mathfrak{M}}^{div}$. From this we can read off the decomposition \eqref{qdecomp} as follows:
\smallskip
\noindent
(1). Near a point in $\widetilde{Q}_p$ whose domain curve is an element in the image of $\overline{M}_{1,1}\times \mathfrak{M}_{1,1} \hookrightarrow \mathfrak{M}_{2,0}$ but not in the case (2) below, we have seen in Section \ref{diagc} that one of the $c_j$ equals $1$; after swapping indices if necessary, we may assume $c_2=1$. Hence
$$
\widetilde{Q}_p^{(1)}= \{c_1=x_2=p_2=0\}, \ \ \widetilde{Q}_p^{\mathrm{red}} = \{x=p=0\}.
$$
\smallskip
\noindent
(2). Near a point whose domain curve is in the image of $\overline{M}_{1,1}\times \mathfrak{M}_{0,2}\times \overline{M}_{1,1} \xrightarrow{2:1} \mathfrak{M}_{2,0}$, we have
\begin{align*}
&\widetilde{Q}_p^{(2)} = \{c_1=c_2=0\}, \ \ \widetilde{Q}_p^{\mathrm{red}} = \{x=p=0\}, \\
\text{and a double }&\text{cover }\ \{c_1=x_2=p_2=0\}\cup \{c_2=x_1=p_1=0\} \ \longrightarrow\ \widetilde{Q}_p^{(1)}.
\end{align*}
Note that $\widetilde{Q}_p^{(2)}$ does not meet $\widetilde{Q}_p^{(3)}$.
\smallskip
\noindent
(3). Near a point whose domain curve is in the image of $\overline{M}_{1,2}\times \mathfrak{M}_{0,2} \hookrightarrow \mathfrak{M}_{2,0}$ but not in the case (2) above, we have $c_2=1$, as in (1). So
$$
\widetilde{Q}_p^{(3)}= \{c_1=x_2=p_2=0\}, \ \ \widetilde{Q}_p^{\mathrm{red}} = \{x=p=0\}.
$$
\smallskip
\noindent
(4). Near a point outside these loci, we have $c_1=c_2=1$. Thus $\mathrm{Spec} (R)$ defines $\widetilde{Q}_p^{\mathrm{red}}$.
Then the intrinsic normal cone $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}$ can be decomposed into
\begin{align}\label{others}
\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}} \ =\ \mathfrak{C}^{\mathrm{red}}\ \cup\ \mathfrak{C}^{(1)}\ \cup\ \mathfrak{C}^{(2)}\ \cup\ \mathfrak{C}^{(3)} \ \cup\ \text{others},
\end{align}
where each of the first four terms is defined to be the closure of the restriction of $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}$ to the corresponding open locus. For instance,
$$
\mathfrak{C}^{\mathrm{red}}\ \text{ is the closure of }\ \mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}} |_{\widetilde{Q}_p \setminus (\widetilde{Q}_p^{(1)} \cup \widetilde{Q}_p^{(2)} \cup \widetilde{Q}_p^{(3)})}\ \subset\ \mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}.
$$
They are actually the closures in $h^1/h^0\left(\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}^{\vee}\right)$ since $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}\subset h^1/h^0\left(\mathbb{E}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}^{\vee}\right)$ is a closed substack.
In fact, one can check from the cut-out model \eqref{KuModel} that `others' in \eqref{others} is empty so that we obtain a decomposition
\begin{align}\label{Cdec}
\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}} \ =\ \mathfrak{C}^{\mathrm{red}}\ \cup\ \mathfrak{C}^{(1)}\ \cup\ \mathfrak{C}^{(2)}\ \cup\ \mathfrak{C}^{(3)}.
\end{align}
Here is a brief explanation. Letting $A:=B[x,p]$, one can read off the decomposition of $C_{R/A}:=C_{\mathrm{Spec} R/\mathrm{Spec} A}$, a pullback of $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}^{div}}$, from its description as the spectrum of
\begin{align}\label{CRS}
\frac{R\; [X_{1j},X_{2j},P_{1i},P_{2i}]}{\left(
\begin{array}{c}
x_{1k}X_{1l}-x_{1l}X_{1k},\ x_{1k}P_{1l}-p_{1l}X_{1k},\ p_{1k}P_{1l}-p_{1l}P_{1k}, \\
x_{2k}X_{2l}-x_{2l}X_{2k},\ x_{2k}P_{2l}-p_{2l}X_{2k},\ p_{2k}P_{2l}-p_{2l}P_{2k}
\end{array}
\right).}
\end{align}
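The relations in \eqref{CRS} come from the evident syzygies among the generators of the ideal $I = (c_1x_{1j}, c_2x_{2j}, c_1p_{1i}, c_2p_{2i})$ defining $R$: for instance, the syzygy
\[
x_{1k} \cdot (c_1x_{1l}) - x_{1l} \cdot (c_1x_{1k}) \ =\ 0
\]
produces the relation $x_{1k}X_{1l}-x_{1l}X_{1k}$ between the degree-one generators of $\bigoplus_{n \geq 0} I^n/I^{n+1}$, and similarly for the other relations.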
\smallskip
\noindent
(1). Near a point over $\widetilde{Q}^{(1)}_p$ ($c_2=1$, $x_2=p_2=0$), $C_{R/A}$ is decomposed into
$$
C^{\;\!(1)}= \{c_1=x_2=p_2=0\}, \ \ C^{\;\!\mathrm{red}} = \{x=p=0\}.
$$
\smallskip
\noindent
(2). Near a point over $\widetilde{Q}^{(2)}_p$, $C_{R/A}$ is decomposed into
\begin{align*}
&C^{\;\!(2)} = \{c_1=c_2=0\}, \ \ C^{\;\!(1)} = \{c_1=x_2=p_2=0\}\cup \{c_2=x_1=p_1=0\}, \ \ C^{\;\!\mathrm{red}} = \{x=p=0\}.
\end{align*}
\smallskip
\noindent
(3). Near a point over $\widetilde{Q}^{(3)}_p$ ($c_2=1$, $x_2=p_2=0$), $C_{R/A}$ is decomposed into
$$
C^{\;\!(3)}= \{c_1=x_2=p_2=0\}, \ \ C^{\;\!\mathrm{red}} = \{x=p=0\}.
$$
\smallskip
\noindent
(4). Near a point over $\widetilde{Q}^{\mathrm{red}}_p$ away from the other components ($c_1=c_2=1$, hence $x=p=0$), $C_{R/A}$ is $C^{\;\!\mathrm{red}} = \{x=p=0\}$.
\smallskip
\noindent
So we have checked that there is no `others' in $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}^{div}}$. Combining this with the (local) equivalence of
$$
\mathfrak{C}_{ \widetilde{Q}_p / \widetilde{\mathfrak{M}} }\ \text{ and }\ \mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}^{div}} \ = \ [C_{R/A} /T_{A/B}|_R ]
$$
discussed in Section \ref{LtoG} gives the decomposition \eqref{Cdec}.
Note that the cut-out model \eqref{KuModel} tells us the morphism $d(c \circ \tau):T_A|_R \to C_{R/A}$ (defining the quotient via the composition $T_{A/B}\to T_A$) is
\begin{align} \label{normal}
\partial_{x_1}, \partial_{x_2}, \partial_{p_1}, \partial_{p_2} \ & \longmapsto \ c_1\partial_{X_1}, c_2\partial_{X_2}, c_1\partial_{P_1}, c_2\partial_{P_2}, \nonumber \\
\partial_{c_1}, \partial_{c_2} \ & \longmapsto \ \sum_jx_{1j}\partial_{X_{1j}} + \sum_ip_{1i}\partial_{P_{1i}}, \sum_jx_{2j}\partial_{X_{2j}} + \sum_ip_{2i}\partial_{P_{2i}}.
\end{align}
The cosection introduced in \cite{CL12}, defining the localised virtual cycle $[Q_p]^{\mathrm{vir}} _{{\mathrm{loc}} }$ mentioned in \eqref{locQp}, is in fact defined on the obstruction sheaf $h^1\left(\mathbb{E}^\vee_{Q_p/\mathfrak{M}^{line}}\right)$ {\em over} $\mathfrak{M}^{line}$. So it gives a morphism $\mathbb{E}^\vee_{Q_p/\mathfrak{M}^{line}}\to \cO_{Q_p}[-1]$ in the derived category. It is proven in \cite{CL12} that this actually factors through the absolute dual perfect obstruction theory $\mathbb{E}^\vee_{Q_p}\to \cO_{Q_p}[-1]$. So its pullback defines cosections on both obstruction sheaves $h^1\left(\mathbb{E}^\vee_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}\right)$ and $h^1\left(\mathbb{E}^\vee_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}^{div}}\right)$ over $\widetilde{\mathfrak{M}}$ and $\widetilde{\mathfrak{M}}^{div}$. The latter is
\begin{align}\label{COsect}
\mathbb{C}^{2n}\times\mathbb{C}^{2m} \ &\longrightarrow\ \mathbb{C}, \\
(X_{1j}, X_{2j}, P_{1i}, P_{2i}) \ & \longmapsto \ \sum_{i,k}\left(\sum_j p_{ki}\frac{\partial f_i}{\partial x_{j}}(x_k)\cdot X_{kj} -\mathrm{deg} f_i \cdot f_i(x_k)\cdot P_{ki}\right)\nonumber
\end{align}
in the cut-out model \eqref{KuModel}. Here we used the restriction $\mathbb{C}^{2n}\times\mathbb{C}^{2m}\subset \mathbb{C}^{2n} \times \prod^m_{i=1} \left( \mathbb{C}^2 \oplus \mathbb{C}^{d\ell_i-1} \right)$ of the obstruction bundle. It is easy to check that the composition with $c \circ \tau$ is zero. Since the cut-out model \eqref{KuModel} actually defines the absolute perfect obstruction theory, this gives another simple proof that the cosection descends to the obstruction sheaf of the absolute perfect obstruction theory.
\medskip
For the Definition below, we use the perfect obstruction theory {\em over} $\widetilde{\mathfrak{M}}$. In particular, we use the decomposition \eqref{Cdec}.
\begin{defi}\label{CYCLES}
The virtual cycle of the reduced part
$$
[\widetilde{Q}^{\mathrm{red}}_p]^{\mathrm{vir}} \ \in \ A_{\mathrm{vdim}} (\widetilde{Q}_X \cap \widetilde{Q}^{\mathrm{red}}_p)
$$
is defined by the image of $[\mathfrak{C}^{\mathrm{red}}]$ by the cosection localised Gysin map \cite{KL13}. The cycles $[\widetilde{Q}^{(1)}_p]^{\mathrm{vir}} $, $[\widetilde{Q}^{(2)}_p]^{\mathrm{vir}} $ and $[\widetilde{Q}^{(3)}_p]^{\mathrm{vir}} $ are similarly defined by using $[\mathfrak{C}^{(1)}]$, $[\mathfrak{C}^{(2)}]$ and $[\mathfrak{C}^{(3)}]$ respectively.
\end{defi}
Hence we obtain a decomposition of the virtual class
\begin{align*}
[\widetilde{Q}_{p}]^{\mathrm{vir}} _{\mathrm{loc}} \ =\ [\widetilde{Q}_p^{\mathrm{red}}]^{\mathrm{vir}} \ +\ [\widetilde{Q}_p^{(1)}]^{\mathrm{vir}} \ +\ [\widetilde{Q}_p^{(2)}]^{\mathrm{vir}} \ +\ [\widetilde{Q}_p^{(3)}]^{\mathrm{vir}}
\end{align*}
providing \eqref{Qdec} after pushing down.
\section{Quantum Lefschetz property for the reduced virtual cycle} \label{reduced}
As we have explained in Section \ref{LtoG}, the cone $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}^{line}}$ is an affine bundle over $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}$. Hence the decomposition \eqref{Cdec} of $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}$ provides a decomposition of $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}^{line}}$, by abuse of notation,
\begin{align}\label{Cdec2}
\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}^{line}} \ =\ \mathfrak{C}^{\mathrm{red}}\ \cup\ \mathfrak{C}^{(1)}\ \cup\ \mathfrak{C}^{(2)}\ \cup\ \mathfrak{C}^{(3)}.
\end{align}
The functorial property of localised Gysin homomorphisms \cite{KL13} tells us that the cycle $[\widetilde{Q}_p^\mathrm{red}]^{\mathrm{vir}} $ defined by using the subcone $\mathfrak{C}^{\mathrm{red}}$ of $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}$ is the same as the one defined by using $\mathfrak{C}^{\mathrm{red}}$ regarded as a subcone of $\mathfrak{C}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}^{line}}$ as in \eqref{Cdec2}. We compute the latter in this section.
The bundle $V:=V_{2,0,d}$ in Theorems \ref{QLP1} and \ref{QLP} is precisely defined to be
$$
V\ :=\ \oplus_i \pi_*\cL^{\otimes \ell_i}\ =\ h^0\left(\mathbb{E}_{\widetilde{Q}_p/\widetilde{Q}(\mathbb{P}^n)}[-1]|_{\widetilde{Q}_p^{\mathrm{red}}}\right),
$$
where $\cL$ is the universal line bundle on the universal curve $\pi:\cC\to \widetilde{Q}^{\mathrm{red}}_p$, and obviously $\widetilde{Q}(\mathbb{P}^n)$ is the base change $Q_{2,0,d}(\mathbb{P}^n)\times_{\mathfrak{M}}\widetilde{\mathfrak{M}}$.
Each point in $\widetilde{Q}_p^{\mathrm{red}}$ carries section data $u \in \Gamma(C, L^{\oplus n+1})$ via the morphism $\widetilde{Q}_p^{\mathrm{red}} \hookrightarrow \widetilde{Q}_p$. Then the morphism
$$
\mathbb{E}^{\vee}_{\widetilde{Q}(\mathbb{P}^n)/\widetilde{\mathfrak{M}}^{line}}=R\pi_*\cL^{\oplus n+1}\ \longrightarrow\ \mathbb{E}_{\widetilde{Q}_p/\widetilde{Q}(\mathbb{P}^n)}[-1]=\oplus_i R\pi_*\cL^{\otimes \ell_i}
$$
induced by $f_1, ..., f_m$ takes the universal section $u$ to a section $(f_i(u))_{1\leq i \leq m} \in \Gamma\left(V\right)$ in $h^0$. It defines the refined Euler class $e^{\mathrm{ref}}(V)$ for Theorems \ref{QLP1} and \ref{QLP}.
Note that $(-1)^{\mathrm{rank}V}=(-1)^{d(\sum_i \ell_i) - m}$ and that $\widetilde{Q}^{\mathrm{red}}_p = \widetilde{Q}^{\mathrm{red}}(\mathbb{P}^n)$, which is smooth.
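Indeed, the rank of $V$ is computed fiberwise by Riemann--Roch on a genus $2$ curve, using that $H^1$ vanishes on the reduced part so that $\pi_*\cL^{\otimes \ell_i}$ has rank $\chi(\cL^{\otimes \ell_i})$:
\[
\mathrm{rank} V \ =\ \sum_{i=1}^m \left( d\ell_i + 1 - 2 \right) \ =\ d\left(\sum_i \ell_i\right) - m.
\]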
\begin{prop}\label{prop4.1}The reduced virtual cycle satisfies the original quantum Lefschetz formula \eqref{naiveQLP1}
$$
[\widetilde{Q}^{\mathrm{red}}_p]^{\mathrm{vir}} \ =\ (-1)^{d(\sum_i \ell_i) - m}e^{\mathrm{ref}}(V) \ \cap\ [\widetilde{Q}^{\mathrm{red}}(\mathbb{P}^n)].
$$
\end{prop}
\begin{proof}
The cut-out morphism \eqref{KuModel} restricted to $\widetilde{Q}^{\mathrm{red}}_{p}\times \mathbb{C}^{2m}$ mapping to $\prod^m_{i=1} \left( \mathbb{C}^2 \oplus \mathbb{C}^{d\ell_i-1} \right)$ gives rise to a cut-out model defining the perfect obstruction theory $\mathbb{E}_{\widetilde{Q}_p/\widetilde{Q}(\mathbb{P}^n)}|_{\widetilde{Q}_p^{\mathrm{red}}}$. This morphism is precisely
\begin{align*}
g\ :\ \widetilde{Q}^{\mathrm{red}}_{p}\times \mathbb{C}^{2m}\ \longrightarrow\ \prod^m_{i=1} \left( \mathbb{C}^2 \oplus \mathbb{C}^{d\ell_i-1} \right), \ \ u\times (p_{1i}, p_{2i})\ \longmapsto\ (c_1p_{1i}, c_2p_{2i}),
\end{align*}
and the dual $\left(dg|_{\widetilde{Q}^{\mathrm{red}}_p}\right)^\vee$ defines $\mathbb{E}_{\widetilde{Q}_p/\widetilde{Q}(\mathbb{P}^n)}|_{\widetilde{Q}_p^{\mathrm{red}}}$. So $V$ is the dual torsion-free part of the cokernel of $dg|_{\widetilde{Q}^{\mathrm{red}}_p}$, which is the same as the cokernel of $dg'|_{\widetilde{Q}^{\mathrm{red}}_p}$, where
\begin{align*}
g'\ :\ \widetilde{Q}^{\mathrm{red}}_{p}\times \mathbb{C}^{2m}\ \longrightarrow\ \prod^m_{i=1} \left( \mathbb{C}^2 \oplus \mathbb{C}^{d\ell_i-1} \right), \ \ u\times (p_{1i}, p_{2i})\ \longmapsto\ (p_{1i}, p_{2i}).
\end{align*}
Clearly the cut-out model $g'$ defines $(-1)^{d(\sum_i \ell_i) - m}e^{\mathrm{ref}}(V) \cap [\widetilde{Q}^{\mathrm{red}}_{p}]$. Note that the dual of the cosection \eqref{COsect} is $(-\mathrm{deg} f_i\cdot f_i(u))_{1\leq i \leq m} \in \Gamma\left(V\right)$ which deforms to the defining section $(f_i(u))_{1\leq i \leq m} \in \Gamma\left(V\right)$ without changing the zero locus, defining $e^{\mathrm{ref}}(V)$. On the other hand, the normal cone defined by using the model $g'$ gives $\mathfrak{C}^{\mathrm{red}}$ in \eqref{Cdec2} through the computation using \eqref{CRS}. Hence it also defines $[\widetilde{Q}^{\mathrm{red}}_p]^{\mathrm{vir}} $.
\end{proof}
\section{Lower genus contributions from the rest cycles}\label{LOT}
\subsection{Cones in the obstruction bundle}\label{sect:conesinobs}
In this section we consider our space $\widetilde{Q}_p$ {\em over $\widetilde{\mathfrak{M}}$}, so we use the perfect obstruction theory $\mathbb{E}_{\widetilde{Q}_p /\widetilde{\mathfrak{M}}}$, the decomposition \eqref{Cdec} and Definition \ref{CYCLES} for virtual cycles. Letting
$$
A\ :=\ \widetilde{\mathfrak{M}}^{div} \times \prod_{j=1}^{n} \left(\mathbb{C}^{2} \times \mathbb{C}^{d-1}\right) \times \mathbb{C}^{2m}
$$
be the local smooth space in the cut-out model \eqref{KuModel} having forgetful map $A\to \widetilde{\mathfrak{M}}$, the dual perfect obstruction theory $\mathbb{E}^{\vee}_{\widetilde{Q}_p /\widetilde{\mathfrak{M}}}$ is locally isomorphic to
$$
\left. \left[\, T_{A/\widetilde{\mathfrak{M}}} \ \xrightarrow{d(c \circ \tau)} \ \cO_A^{\oplus 2n} \oplus \bigoplus_i \cO_A^{\oplus d\ell_i +1} \, \right] \right|_{\widetilde{Q}_p}.
$$
Using this local expression we check that $h^{-1} \left(\mathbb{E}_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p^{(i)}} \right)$ is locally free. We denote its dual by $E^{(i)}$\footnote{This is not $h^{1}\left(\mathbb{E}^\vee_{\widetilde{Q}_p / \widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p^{(i)}}\right)$, since it may not be locally free.}.
Picking any global locally free representative $[F_0 \stackrel{d}{\longrightarrow } F_1]$ of $\mathbb{E}^\vee_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}$, we obtain a diagram
$$
\xymatrix@R=5mm{
& F_1 |_{\widetilde{Q}_p^{(i)}} \ar[d] \ar[r] & E^{(i)} \\
\mathfrak{C}^{(i)}\ \ar@{^(->}[r] &\ [F_1/F_0]|_{\widetilde{Q}_p^{(i)}} .
}
$$
Using this, we define $C^{(i)}\hookrightarrow E^{(i)}$ to be the image of the pullback of $\mathfrak{C}^{(i)}\hookrightarrow [F_1/F_0]$. Then its image under the cosection localised Gysin map is $[\widetilde{Q}_p^{(i)}]^{\mathrm{vir}} $ by Definition \ref{CYCLES}. We denote Kiem--Li's cosection localised Gysin map by $e^{\mathrm{KL}}(E^{(i)})$\footnote{This notation is not unreasonable: the map enjoys the properties of Euler classes, being a bivariant class with rational coefficients \cite{KO18}.} so that
\begin{align}\label{a}
[\widetilde{Q}_p^{(i)}]^{\mathrm{vir}} \ =\ e^{\mathrm{KL}}(E^{(i)})\cap [C^{(i)}].
\end{align}
\medskip
\noindent Now, let us consider other intrinsic normal cones $\mathfrak{C}_{\widetilde{Q}_p^{(i)}/\widetilde{\mathfrak{M}}^{(i)}}$, where $\widetilde{\mathfrak{M}}^{(i)}\subset\widetilde{\mathfrak{M}}$ is the image of
\begin{enumerate}
\item $\widetilde{\mathfrak{M}}_{1,1,0} \times \widetilde{\mathfrak{M}}_{1,1,d}$,
\item $\widetilde{\mathfrak{M}}_{1,1,0} \times \widetilde{\mathfrak{M}}_{0,2,d} \times \widetilde{\mathfrak{M}}_{1,1,0}$,
\item $\widetilde{\mathfrak{M}}_{1,2,0} \times \widetilde{\mathfrak{M}}_{0,2,d}$,
\end{enumerate}
under the node-identifying morphism. These cones are the bundle stacks, zero sections of $h^1/h^0$ of the tangent complexes $\mathbb{T}_{\widetilde{Q}_p^{(i)}/\widetilde{\mathfrak{M}} }$ because $\widetilde{Q}_p^{(i)} \to \widetilde{\mathfrak{M}}^{(i)}$ is smooth. Meanwhile the morphism $\mathbb{T}_{\widetilde{Q}_p^{(i)}/\widetilde{\mathfrak{M}}} \to \mathbb{T}_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p^{(i)}}\to \mathbb{E}^\vee_{\widetilde{Q}_p/\widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p^{(i)}}$ induces a representable morphism of bundle stacks
$$
h^1/h^0\left(\mathbb{T}_{\widetilde{Q}_p^{(i)}/\widetilde{\mathfrak{M}}}\right)\ \longrightarrow\ [F_1/F_0]|_{\widetilde{Q}_p^{(i)}}.
$$
Then defining the cone
$$
C_{(i)}\ :=\ N_{\widetilde{\mathfrak{M}}^{(i)}/\widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p^{(i)}},
$$
its base change takes $C_{(i)}$ to
\begin{align}\label{eq:coneEmbed}
C_{(i)}=\ \mathfrak{C}_{\widetilde{Q}_p^{(i)}/\widetilde{\mathfrak{M}}^{(i)}}\times_{\mathfrak{C}_{\widetilde{Q}_p^{(i)}/\widetilde{\mathfrak{M}}^{(i)}}}N_{\widetilde{\mathfrak{M}}^{(i)}/\widetilde{\mathfrak{M}}}|_{\widetilde{Q}_p^{(i)}}\ \longrightarrow\ \mathfrak{C}_{\widetilde{Q}_p^{(i)}/\widetilde{\mathfrak{M}}^{(i)}}\times_{[F_1/F_0]} F_1\ \longrightarrow\ F_1|_{\widetilde{Q}_p^{(i)}}\ \longrightarrow\ E^{(i)}
\end{align}
through the above bundle stack morphism. Using \eqref{normal}, we see that the first arrow is locally
\begin{enumerate}
\item $\partial_{c_1} \in C_{(1)} \longmapsto d(c \circ \tau)(\partial_{c_1})$,
\item $\partial_{c_1}, \partial_{c_2} \in C_{(2)} \longmapsto d(c \circ \tau)(\partial_{c_1}), d(c \circ \tau)(\partial_{c_2})$,
\item $\partial_{c_1} \in C_{(3)} \longmapsto d(c \circ \tau)(\partial_{c_1})$.
\end{enumerate}
Since $d(c \circ \tau)(\partial_{c_1})$ annihilates the defining equations of $C^{(i)}$, for instance
$$
d(c \circ \tau)(\partial_{c_1}) \left(x_{1l}X_{1k}-x_{1k}X_{1l}\right)\ =\ 0,
$$
the morphism $C_{(i)} \to E^{(i)}$ factors through
$$
C_{(i)}\ \longrightarrow\ C^{(i)}\ \hookrightarrow\ E^{(i)}.
$$
In fact $C_{(i)}$ maps isomorphically to $C^{(i)}\subset E^{(i)}$ on the locus where $d(c \circ \tau)(\partial_{c_j})$ does not vanish. Note that it vanishes on either $x_1=p_1=0$ or $x_2=p_2=0$. Since $C_{(i)}$ is a bundle, we may expect an advantage in using $C_{(i)}$ instead of $C^{(i)}$ in \eqref{a}, if this is possible in a certain way. This is not an unreasonable hope since the two are almost isomorphic.
\begin{exam}\label{EXAM1}
The local structure ring \eqref{CRS} tells us that $C^{(1)}$ is (locally) the spectrum of
\begin{align*}
\frac{R/(x_2,p_2,c_1) \; [X_{1j},P_{1i}]}{\left(
\begin{array}{c}
x_{1k}X_{1l}-x_{1l}X_{1k},\ x_{1k}P_{1l}-p_{1l}X_{1k},\ p_{1k}P_{1l}-p_{1l}P_{1k}
\end{array}
\right).}
\end{align*}
Meanwhile by its definition $C_{(1)}$ is (locally) the spectrum of $R/(x_2,p_2,c_1)[Y]$, where the variable $Y$ is a coordinate of $\partial_{c_1}$. So the morphism $C_{(1)} \to C^{(1)}$ is
$$
X_{1j} \ \longmapsto\ x_{1j}Y, \ \ \ P_{1i} \ \longmapsto\ p_{1i}Y.
$$
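One checks directly that this is well defined: the relations in the numerator are sent to zero, e.g. $x_{1k}\cdot (x_{1l}Y) - x_{1l}\cdot (x_{1k}Y) = 0$ and $x_{1k}\cdot(p_{1l}Y) - p_{1l}\cdot(x_{1k}Y) = 0$. The morphism is an isomorphism away from $\{x_1 = p_1 = 0\}$, where the fibers of $C^{(1)}$ jump.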
\end{exam}
\subsection{Outline of the proof of Theorem \ref{QLP}}\label{OUTLINE}
Letting $p_1=p_2=0$, \eqref{KuModel} gives a local cut-out model of $\widetilde{Q}:=\widetilde{Q}(\mathbb{P}^n)$. The decomposition \eqref{qdecomp} of $\widetilde{Q}_p$ then gives rise to the corresponding one of $\widetilde{Q}$,
\begin{align}\label{QDEcomp}
\widetilde{Q}\ =\ \widetilde{Q}^{\mathrm{red}}\ \cup\ \widetilde{Q}^{(1)}\ \cup\ \widetilde{Q}^{(2)}\ \cup\ \widetilde{Q}^{(3)}.
\end{align}
Then the components $\widetilde{Q}^{(i)}$ are the images of the following node-identifying morphisms
\begin{enumerate}
\item $\tilde{\iota}_{1} : \overline{M}_{1,1} \times Q^{\mathrm{red}}_{1,1,d} \hookrightarrow \widetilde{Q}$,
\item $\tilde{\iota}_{2} : \overline{M}_{1,1} \times Q_{0,2,d} \times \overline{M}_{1,1} \stackrel{2:1}{\longrightarrow } \widetilde{Q}$,
\item $\tilde{\iota}_{3} : \overline{M}_{1,2} \times \mathbb{P} Q'_{0,2,d} \hookrightarrow \widetilde{Q}$.
\end{enumerate}
In (3), $\mathbb{P} Q'_{0,2,d}$ denotes the projectivisation of $\mathbb{L}_1^{\vee} \oplus \mathbb{L}_2^{\vee}$, sum of dual tautological line bundles over the locus $Q'_{0,2,d}\subset Q_{0,2,d}$ where $\mathrm{ev}_1=\mathrm{ev}_2$. The following Remark explains why $\widetilde{Q}^{(3)}$ is the image of $\tilde{\iota}_{3}$.
\begin{rema}\label{333}
In fact, $\widetilde{Q}^{(3)}$ should be (the image of) projectivisation of the pullback of
$$
N_{\mathfrak{M}_{1,2,0}\times\mathfrak{M}_{0,2,d}/\mathfrak{M}_{2,0,d}}\ \cong\ (\mathbb{L}^{\vee}_{1} \boxtimes \mathbb{L}^{\vee}_{1}) \oplus (\mathbb{L}^{\vee}_{2} \boxtimes \mathbb{L}^{\vee}_{2})
$$
on $\overline{M}_{1,2} \times Q'_{0,2,d}$ since $\widetilde{Q}$ is the base change of the blowup. It is equal to $\overline{M}_{1,2} \times \mathbb{P} Q'_{0,2,d}$ if $\mathbb{L}_1\cong\mathbb{L}_2$ on $\overline{M}_{1,2}$. In \cite[pp.1221--1222]{Zi08}, Zinger proved that the evaluation morphism of the Hodge bundle $\cH\to \mathbb{L}_j$ on $\overline{M}_{1,2}$ factors as an isomorphism
$$
\cH\ \xrightarrow{\ \sim\ }\ \mathbb{L}_j(-D)\ \hookrightarrow\ \mathbb{L}_j,
$$
where $D= \overline{M}_{1,1}\times\overline{M}_{0,3} \hookrightarrow \overline{M}_{1,2}$ is a boundary divisor of a collision of the two marked points. Thus we have $\mathbb{L}_1 \cong \cH(D) \cong \mathbb{L}_2.$
\end{rema}
\medskip
As we have mentioned in Section \ref{Sec1}, local computation with \eqref{KuModel} tells us that the $i$-th $p$-field space $\widetilde{Q}_p^{(i)}$ is a vector bundle over $\widetilde{Q}^{(i)}$,
$$
\widetilde{Q}_p^{(i)}\ \cong\ h^0 \l( \left. \left(\oplus_{j=1}^m R\pi_* \left(\cL^{-\ell_j}\otimes \omega_{\cC_{\widetilde{Q}}} \right)\right)\right|_{\widetilde{Q}^{(i)}} \r).
$$
To avoid confusion, we denote it by $P^{(i)}$ when we consider it as a bundle, but use $\widetilde{Q}_p^{(i)}$ for the space. So the pullback of $P^{(i)}$ to $\widetilde{Q}_p^{(i)}$ is the tautological bundle. On $\widetilde{Q}_p^{(i)}$, the obstruction bundle $E^{(i)}$ was defined in Section \ref{sect:conesinobs}. Unlike {\em over} $\widetilde{\mathfrak{M}}^{line}$, the decomposition $E^{(i)}=E_1^{(i)}\oplus E_2^{(i)}$, where
$$
E_1^{(i)} = h^{-1} \left.\l( \mathbb{E}_{\widetilde{Q} / \widetilde{\mathfrak{M}}}|_{\widetilde{Q}^{(i)}} \r)\right|^\vee_{\widetilde{Q}_p^{(i)}}, \ \ E_2^{(i)}= R^1\pi_* \l( \oplus_{i=1}^m \cL^{\otimes -\ell_i} \otimes \omega_{\cC} \r)\cong \pi_* \l( \oplus_{i=1}^m \cL^{\otimes \ell_i}\r)^\vee,
$$
of the obstruction bundle {\em over} $\widetilde{\mathfrak{M}}$ is not so obvious, but it is proven in \cite[Equation (3.15)]{LL22}.
From now on, for simplicity, we denote the domain of the morphism $\tilde{\iota}_i$ by $\bQ^{(i)}$, by $\bQ_p^{(i)}$ the fiber product $\bQ^{(i)}\times_{\widetilde{Q}^{(i)}}\widetilde{Q}^{(i)}_p$ and by $\bP^{(i)}$ the pullback of $P^{(i)}$. Explicitly,
\begin{enumerate}
\item $\bQ^{(1)} = \overline{M}_{1,1} \times Q_{1,1,d}^\mathrm{red} $,
\item $\bQ^{(2)} = \overline{M}_{1,1} \times Q_{0,2,d} \times \overline{M}_{1,1} $,
\item $\bQ^{(3)} = \overline{M}_{1,2} \times \mathbb{P} Q'_{0,2,d}$,
\end{enumerate}
and the bundle $\bP^{(i)}$ is
\begin{enumerate}
\item $\bP^{(1)} = \cH\, \boxtimes\, \oplus_{i=1}^m \mathrm{ev}^* \cO_{\mathbb{P}^n}(-\ell_i)$,
\item $\bP^{(2)} = \left(\cH \boxtimes \oplus_{i=1}^m \mathrm{ev}_1^* \cO_{\mathbb{P}^n}(-\ell_i) \boxtimes \cO_{\overline{M}_{1,1}} \right)\ \bigoplus \ \left(\cO_{\overline{M}_{1,1}} \boxtimes \oplus_{i=1}^m \mathrm{ev}_2^* \cO_{\mathbb{P}^n}(-\ell_i) \boxtimes \cH \right)$,
\item $\bP^{(3)} = \cH\, \boxtimes\, \oplus_{i=1}^m \mathrm{ev}_1^* \cO_{\mathbb{P}^n}(-\ell_i)$.
\end{enumerate}
Recall that in (3), $\mathrm{ev}_1=\mathrm{ev}_2$. We denote by
$$
\tilde{\iota}_{p,i}\ :\ \bQ_p^{(i)}\ \longrightarrow\ \widetilde{Q}_p^{(i)}
$$
the base change of the node-identifying morphism $\tilde{\iota}_i$, and let ${\mathbf E}^{(i)} := \tilde{\iota}_{p,i}^* E^{(i)}$. Then the decomposition ${\mathbf E}^{(i)}={\mathbf E}_1^{(i)}\oplus {\mathbf E}_2^{(i)}$ is
\begin{enumerate}
\item ${\mathbf E}_1^{(1)} = \cH^\vee \boxtimes \mathrm{ev}^*T_{\mathbb{P}^n} $, ${\mathbf E}_2^{(1)} = \cO_{\overline{M}_{1,1}}\boxtimes(\oplus_i \pi_*\cL^{\otimes \ell_i})^{\vee}$,
\item ${\mathbf E}_1^{(2)} = \left(\cH^\vee \boxtimes \mathrm{ev}_1^*T_{\mathbb{P}^n}\boxtimes \cO_{\overline{M}_{1,1}}\right) \oplus \left(\cO_{\overline{M}_{1,1}} \boxtimes \mathrm{ev}_2^*T_{\mathbb{P}^n}\boxtimes \cH^\vee\right)$,
\noindent ${\mathbf E}_2^{(2)} = \cO_{\overline{M}_{1,1}} \boxtimes \l(\oplus_i \pi_*\cL^{\otimes \ell_i} \r)^{\vee} \boxtimes \cO_{\overline{M}_{1,1}}$,
\item ${\mathbf E}_1^{(3)} = \cH^\vee \boxtimes \mathrm{ev}_1^*T_{\mathbb{P}^n} $, ${\mathbf E}_2^{(3)} = \cO_{\overline{M}_{1,2}}\boxtimes (\oplus_i \pi_*\cL^{\otimes \ell_i})^{\vee}$.
\end{enumerate}
Consider the pullback cosection $\sigma^{(i)} : {\mathbf E}^{(i)} \to \cO_{\bQ^{(i)}_p}$, and decompose it into
$$
\sigma_1^{(i)} : {\mathbf E}_1^{(i)} \to \cO_{\bQ^{(i)}_p}\ \text{ and }\ \sigma_2^{(i)} : {\mathbf E}_2^{(i)} \to \cO_{\bQ^{(i)}_p}
$$
accordingly. Using these cosections, we can define Kiem-Li's cosection localised Gysin maps $e^{\mathrm{KL}}({\mathbf E}^{(i)})$ and $e^{\mathrm{KL}}({\mathbf E}^{(i)}_j)$. Letting $\bold{C}^{(i)} := \tilde{\iota}_{p,i}^* C^{(i)}$, the multiplicative property of $e^{\mathrm{KL}}$ \cite[Theorem 3.2]{Oh18} tells us that \eqref{a} becomes
\begin{align}\label{eq:ideal}
[\widetilde{Q}_p^{(i)}]^{\mathrm{vir}} &= \frac{1}{\mathrm{deg}(\tilde{\iota}_{p,i})}(\tilde{\iota}_{p,i})_*\left(e^{\mathrm{KL}}({\mathbf E}^{(i)})\cap [\bC^{(i)}]\right) \\
& = \frac{1}{\mathrm{deg}(\tilde{\iota}_{p,i})}(\tilde{\iota}_{p,i})_*\left( e^{\mathrm{KL}}({\mathbf E}_1^{(i)}) \cap e^{\mathrm{KL}}({\mathbf E}_2^{(i)}) \cap [\bC^{(i)}] \right). \nonumber
\end{align}
Since the cosection $\sigma^{(i)}_2$ on ${\mathbf E}^{(i)}_2\cong\oplus_i \pi_*\cL^{\otimes \ell_i}$ is defined by the (dual of) defining equation $f$ as on $V$ in Section \ref{reduced}, the cycle $e^{\mathrm{KL}}({\mathbf E}_2^{(i)})\cap [\bC^{(i)}]$ is supported on the space ${\mathbf E}_1^{(i)} \times_{Q(\mathbb{P}^n)} Q(X)$. Then the local computation \eqref{COsect} shows that the restriction of the cosection $\sigma^{(i)}_1$ defining $e^{\mathrm{KL}}({\mathbf E}_1^{(i)})$ to this locus ${\mathbf E}_1^{(i)} \times_{Q(\mathbb{P}^n)} Q(X)$ is induced by the surjection
$$
df\ :\ T_{\mathbb{P}^n}|_X\ \twoheadrightarrow\ \oplus_i \cO_{\mathbb{P}^n}(\ell_i)|_X,
$$
whose kernel is $T_X$. On this locus, $df$ then defines a short exact sequence of bundles
\begin{align}\label{SEES}
0\ \longrightarrow\ \bK^{(i)}\ \longrightarrow\ {\mathbf E}^{(i)}_1\ \longrightarrow\ (\bP^{(i)})^\vee \ \longrightarrow\ 0,
\end{align}
where
\begin{enumerate}
\item $\bK^{(1)} := \cH^\vee \boxtimes \mathrm{ev}^*T_X$,
\item $\bK^{(2)} := \left(\cH^\vee \boxtimes \mathrm{ev}_1^*T_X\boxtimes \cO_{\overline{M}_{1,1}}\right) \oplus \left(\cO_{\overline{M}_{1,1}} \boxtimes \mathrm{ev}_2^*T_X\boxtimes \cH^\vee \right)$,
\item $\bK^{(3)} := \cH^\vee \boxtimes \mathrm{ev}_1^*T_X$.
\end{enumerate}
The tautological section of $\bP^{(i)}$ defines a cosection of $(\bP^{(i)})^\vee$. On $\bQ^{(i)}_p(X):=\bQ^{(i)}_p\times_{Q(\mathbb{P}^n)} Q(X)$, the cosection $\sigma^{(i)}_1$ on ${\mathbf E}^{(i)}_1$ factors through the pullback of this tautological cosection. Thus, again by the multiplicative property \cite[Theorem 3.2]{Oh18}, we have
\begin{align}\label{aa2}
e^{\mathrm{KL}}\l( {\mathbf E}_1^{(i)}\r) \cap \l( e^{\mathrm{KL}}({\mathbf E}_2^{(i)}) \cap [\bC^{(i)}] \r) = e^{\mathrm{FM}}\l(\bK^{(i)}\r) \cap e^{\mathrm{KL}}\l( (\bP^{(i)} )^{\vee} \r) \cap \l( e^{\mathrm{KL}}({\mathbf E}_2^{(i)}) \cap [\bC^{(i)}] \r),
\end{align}
where $e^{\mathrm{FM}}$ denotes the Fulton-MacPherson intersection homomorphism, or Gysin map.
In Sections \ref{DeformCone} and \ref{nomore}, we will explain the second and third equalities below, respectively. The remaining equalities and notation are explained after the equations:
\begin{align}\label{eq:ideal2}
[\widetilde{Q}_p^{(i)}]^{\mathrm{vir}} &= \frac{1}{\mathrm{deg}(\tilde{\iota}_{p,i})}(\tilde{\iota}_{p,i})_* \l( e^{\mathrm{FM}}\l(\bK^{(i)}\r) \cap e^{\mathrm{KL}}\l( (\bP^{(i)} )^{\vee} \r) \cap \l( e^{\mathrm{KL}}\l({\mathbf E}_2^{(i)}\r) \cap [\bC^{(i)}] \r) \r) \\ \nonumber
&= \frac{(-1)^{m\cdot i}}{\mathrm{deg}(\tilde{\iota}_{i})}(\tilde{\iota}_{i})_* \l( e^{\mathrm{FM}}\l(\bK^{(i)}\r) \cap e^{\mathrm{KL}}\l( {\mathbf E}_2^{(i)} \r) \cap \left[\bC^{(i)}|_{\bQ^{(i)}}\right] \r) \\ \nonumber
&= \frac{(-1)^{m\cdot i}}{\mathrm{deg}(\tilde{\iota}_{i})}(\tilde{\iota}_{i})_* \l( e^{\mathrm{FM}}\l(\bK^{(i)}\r) \cap e^{\mathrm{KL}}\l( {\mathbf E}_2^{(i)} \r) \cap \left[\bC_{(i)}|_{\bQ^{(i)}}\right] \r) \\ \nonumber
& = \frac{(-1)^{m\cdot i}}{\mathrm{deg}(\tilde{\iota}_{i})}(\tilde{\iota}_{i})_* \l( e^{\mathrm{FM}}\l(\bK^{(i)}\r) \cap \left[ \bC_{(i)}|_{\bQ^{(i)}(X)} \right]^{\mathrm{vir}} \r) \\ \nonumber
& = \frac{(-1)^{m\cdot i}}{\mathrm{deg}(\tilde{\iota}_{i})}(\tilde{\iota}_{i})_* \l(e\l( \frac{\bK^{(i)}|_{\bQ^{(i)}(X)} }{\bC_{(i)}|_{\bQ^{(i)}(X)} } \r) \cap [\bQ^{(i)}(X)]^{\mathrm{vir}} \r).
\end{align}
The first equality is from \eqref{eq:ideal} and \eqref{aa2}. In the fourth equality, the cone $\bC_{(i)}$, the pullback of $C_{(i)}$ in \eqref{eq:coneEmbed}, is a bundle over $\bQ^{(i)}_p$ which is smooth. So its pullback $\bC_{(i)}|_{\bQ^{(i)}}$ is a bundle over $\bQ^{(i)}$. Mimicking Proposition \ref{prop4.1}, we prove $e^{\mathrm{KL}}({\mathbf E}_2^{(i)}) \cap [\bC_{(i)}|_{\bQ^{(i)}}]$ is the pullback cycle of
\begin{enumerate}
\item $(-1)^{d(\sum_i \ell_i)}e^{\mathrm{ref}}(V_{1,1,d}) \cap\l([\overline{M}_{1,1}] \times [Q_{1,1,d}^\mathrm{red}(\mathbb{P}^n)]\r)$,
\item $(-1)^{d(\sum_i \ell_i)+m}e^{\mathrm{ref}}(V_{0,2,d}) \cap\l([\overline{M}_{1,1}] \times [Q_{0,2,d}(\mathbb{P}^n)] \times [\overline{M}_{1,1}]\r)$,
\item $(-1)^{d(\sum_i \ell_i)}e^{\mathrm{ref}}(V_{0,2,d}) \cap\l([\overline{M}_{1,2}] \times [\mathbb{P} Q'_{0,2,d}]\r)$\footnote{Here, $\mathrm{rank} V_{0,2,d}$ is $d(\sum_i \ell_i)$ although it is of genus $0$ because $\mathrm{ev}_1=\mathrm{ev}_2.$}.
\end{enumerate}
We denote the pullback cycle in $A_*(\bQ^{(i)})$ by $[\bQ^{(i)}(X)]^{\mathrm{vir}} $ and that in $A_*(C_{(i)}|_{\bQ^{(i)}})$ by $\left[ \bC_{(i)}|_{\bQ^{(i)}(X)} \right]^{\mathrm{vir}} $. The space $\bQ^{(i)}(X):=\bQ^{(i)}\times_{Q(\mathbb{P}^n)}Q(X)$ is the support. The last equality comes from the fact that $ \bC_{(i)}|_{\bQ_p^{(i)}(X)}$ is contained in $\bK^{(i)}|_{\bQ_p^{(i)}(X)}$ by the cone reduction criterion \cite[Lemma 4.4]{KL13}.
The second equality holds if the cone $\bC^{(i)}$ is isomorphic to the product $\bC^{(i)}|_{\bQ^{(i)}}\times_{\bQ^{(i)}}\bQ^{(i)}_p$, by the following property of the tautological bundles and sections:
$$
e^{\mathrm{KL}}\l( (\bP^{(i)})^{\vee} \r)\cap [\bQ_p^{(i)}] = (-1)^{\mathrm{rank} ( \bP^{(i)} ) } e^{\mathrm{ref}}(\bP^{(i)})\cap[\bQ_p^{(i)}] = (-1)^{m \cdot i} [\bQ^{(i)}].
$$
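Note that the sign $(-1)^{m \cdot i}$ here is consistent with the ranks listed above: $\mathrm{rank} \bP^{(1)} = \mathrm{rank} \bP^{(3)} = m$ and $\mathrm{rank} \bP^{(2)} = 2m$, so $(-1)^{\mathrm{rank} ( \bP^{(i)} )} = (-1)^{m\cdot i}$ in each of the three cases.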
We have to be careful when we use the commutativity
$$
e^{\mathrm{KL}}((\bP^{(i)})^{\vee}) \cap e^{\mathrm{KL}}({\mathbf E}_2^{(i)})\ =\ e^{\mathrm{KL}}({\mathbf E}_2^{(i)})\cap e^{\mathrm{KL}}((\bP^{(i)} )^{\vee})
$$
since the sequence \eqref{SEES} is not defined on the entire space $\bQ^{(i)}_p$. But we can use it if $\bC^{(i)}$ is a product. In fact $\bC^{(i)}$ is not itself a product, but we deform it to one. We carry this out in Section \ref{DeformCone}.
We know that the morphism $\bC_{(i)}\to\bC^{(i)}$ of \eqref{eq:coneEmbed} is almost an isomorphism. Taking twistings by divisors after blowups then gives an actual isomorphism, which induces the third equality. This work is carried out in Section \ref{nomore}.
After we get \eqref{eq:ideal2}, we prove Theorem \ref{QLP} in Section \ref{sect:Thm2pf}. When $X$ is a Calabi-Yau $3$-fold we prove Theorem \ref{QLP1} in Section \ref{sect:CY}.
\subsection{Deformation of the cone}\label{DeformCone}
Consider the normal cone
$$
C_{\bC^{(i)} \cap {\mathbf E}_2^{(i)} / \bC^{(i)}}\ \hookrightarrow\ {\mathbf E}^{(i)}
$$
which is a deformation of $\bC^{(i)}$ via deformation to the normal cone \cite[Chapter 5]{Fu}. A direct computation shows it is also contained in the kernel of the cosection \eqref{COsect}.
\begin{lemm} The cone $C_{\bC^{(i)} \cap {\mathbf E}_2^{(i)} / \bC^{(i)}}$ has a component which is a product
\begin{align}\label{eq:defcone}
\mathrm{Def}(\bC^{(i)}) := \left.
C_{\bC^{(i)} \cap {\mathbf E}_2^{(i)} / \bC^{(i)}} \right|_{\bQ_p^{(i)}}\ \cong\ \bC^{(i)}|_{\bQ^{(i)}}\times_{\bQ^{(i)}} \bQ^{(i)}_p.
\end{align}
The other components vanish after applying $e^{\mathrm{KL}}({\mathbf E}^{(i)})$.
\end{lemm}
\begin{proof}
We prove this by using the local coordinate rings in Section \ref{virdecomp} obtained from the cut-out model \eqref{KuModel}. Recall from \eqref{CRS} that locally $C^{(i)}$ is the spectrum of
\begin{align*}
\frac{R\; [X_{1j},X_{2j},P_{1i},P_{2i}]}{\left(
\begin{array}{c}
x_{1k}X_{1l}-x_{1l}X_{1k},\ x_{1k}P_{1l}-p_{1l}X_{1k},\ p_{1k}P_{1l}-p_{1l}P_{1k}, \\
x_{2k}X_{2l}-x_{2l}X_{2k},\ x_{2k}P_{2l}-p_{2l}X_{2k},\ p_{2k}P_{2l}-p_{2l}P_{2k}
\end{array}
\right),}
\end{align*}
where $R= B[x,p]\; /(c_1x_{1}, c_2x_{2},c_1p_{1}, c_2p_{2})$ is a local coordinate ring of $\widetilde{Q}_p$.
In a neighborhood of a point in $\widetilde{Q}_p^{(1)}$ or $\widetilde{Q}_p^{(3)}$, we have seen $c_1=1$ in Sections \ref{diagc} and \ref{desin}, hence $x_1=p_1=0$. Pulling back via the node-identifying morphism, $\bC^{(i)}$ is the component defined by $\{c_2=0\}$ and $\bC^{(i)} \cap {\mathbf E}_2^{(i)} \subset \bC^{(i)}$ is defined by $\{ X_2 = 0 \}=\{ X_{21} = \dots = X_{2n} = 0 \}$. Introducing a partner variable $X'_2$ for $X_2$, the cone $C_{\bC^{(i)} \cap {\mathbf E}_2^{(i)} / \bC^{(i)}}$ is the spectrum of
\begin{align*}
\frac{R/(c_2,x_1,p_1)\; [X_{1j},X'_{2j},P_{1i},P_{2i}]}{\left(
\begin{array}{c}
x_{2k}X'_{2l}-x_{2l}X'_{2k},\ x_{2k}P_{2l},\ p_{2k}P_{2l}-p_{2l}P_{2k}
\end{array}
\right).}
\end{align*}
Then it is the union of $\{x_2=0\}$ and $\{P_2=0\}$. We show that the component $\{x_2=0\}$ is killed by $e^{\mathrm{KL}}({\mathbf E}^{(i)})$. To do so, it is enough to show that it is killed by $e^{\mathrm{KL}}({\mathbf E}_1^{(i)})$, by \cite[Theorem 3.2]{Oh18}. We show this by a degree reason. The cycle $e^{\mathrm{KL}}({\mathbf E}_1^{(i)})\cap \{x_2=0\}$ is of degree
$$
\mathrm{dim} B[x,p] - \mathrm{rank} {\mathbf E}_1^{(i)}\ =\ \mathrm{dim} B[x,p]-n-1.
$$
On the other hand, $e^{\mathrm{KL}}({\mathbf E}_1^{(i)})\cap \{x_2=0\}$ is contained in the degeneracy locus of the cosection, a pairing with $p_2$. It is supported on the spectrum of $R/(c_2,x,p)\; [X_{1},P_{1},P_{2}]$, which has dimension less than or equal to $\mathrm{dim} B[x,p]-n-2.$ Thus $e^{\mathrm{KL}}({\mathbf E}_1^{(i)})\cap \{x_2=0\}=0$. The component $\{P_2=0\}$ defines the cone \eqref{eq:defcone}.
The cone $\bC^{(2)}$ is defined by $\{c_1=c_2=0\}$, and $\bC^{(2)}\cap{\mathbf E}^{(2)}$ is cut out by $\{X_1=X_2=0\}$ in addition. Then the corresponding normal cone has $4$ components
$$
\{x_1=x_2=0\}\ \cup\ \{x_1=P_2=0\}\ \cup\ \{P_1=x_2=0\}\ \cup\ \{P_1=P_2=0\}.
$$
Similarly we can show that the first three are killed by $e^{\mathrm{KL}}({\mathbf E}^{(2)})$ for degree reasons. Precisely, the first one is killed by $e^{\mathrm{KL}}({\mathbf E}_1^{(2)})$, but for the second and third ones we need to decompose ${\mathbf E}_1^{(2)}$ into two parts and use one part for each. The fourth one is the cone \eqref{eq:defcone}.
\end{proof}
\subsection{Local freeness of cones}\label{nomore}
In this section we relate the vector bundle $\bC_{(i)}|_{\bQ^{(i)}}$ and the cone $\bC^{(i)}|_{\bQ^{(i)}}$. We suppress the notation $|_{\bQ^{(i)}}$ throughout this section. Locally this restriction is $p_1=p_2=0$.
Consider the morphism $\bC_{(i)} \to {\mathbf E}^{(i)}$, pullback of \eqref{eq:coneEmbed}, locally described in \eqref{normal}, and its projection
\begin{align}\label{aa}
\bC_{(i)}\ \longrightarrow\ {\mathbf E}^{(i)}_1.
\end{align}
Locally we can check that $\bC^{(i)}$ is contained entirely in ${\mathbf E}^{(i)}_1$. Since $\bC_{(i)}\to \bC^{(i)}$ is an isomorphism over a dense open subset, the closure of the image of \eqref{aa} is $\bC^{(i)}$. For $i=1,3$, \eqref{aa} vanishes locally on $\{x_1=0\}$ as explained in Section \ref{sect:conesinobs}. Globally this vanishing locus is the pullback of the intersection $\widetilde{Q}^{(i)} \cap \widetilde{Q}^{\mathrm{red}}$ of components in \eqref{QDEcomp}. Consider the blowup $b^{(i)} : \widehat{\bQ}^{(i)} \to \bQ^{(i)}$ along this locus and denote the exceptional divisor by $\bD^{(i)}$. Then the embedding \eqref{aa} pulls back to
$
(b^{(i)*}\bC_{(i)})(\bD^{(i)})\ \cong\ b^{(i)*}\bC^{(i)}\ \hookrightarrow\ b^{(i)*}{\mathbf E}^{(i)}_1.
$
For $i=2$, one blowup along the vanishing locus $\{x_1=0\}\cup\{x_2=0\}$ is not enough since this locus is not smooth. So we first take a blowup along the intersection $\{x_1=0\}\cap\{x_2=0\}$ and then take another blowup along the proper transforms of $\{x_1=0\}$ and $\{x_2=0\}$, which are disjoint.\footnote{Another candidate could be a blowup along $\{x_1=0\}$ first and then another blowup along the transform of $\{x_2=0\}$. But we take the former due to its computational advantage.} We denote by $b^{(2)}: \widehat{\bQ}^{(2)} \to \bQ^{(2)}$ the composition of the blowup morphisms. Set $\bD^{(2)}_1$ to be the sum of the exceptional divisor of the first blowup and the one corresponding to $\{x_1=0\}$ in the second blowup. Similarly we set $\bD^{(2)}_2$ to be the sum of the exceptional divisor of the first blowup and the one corresponding to $\{x_2=0\}$ in the second blowup. Recall that $\bC_{(2)}$ is the pullback of the normal bundle $N_{\mathfrak{M}_{1,1} \times \mathfrak{M}_{0,2} \times \mathfrak{M}_{1,1} / \mathfrak{M}_{2,0} }$,
$$
\bC_{(2)}\ \cong\ (\mathbb{L}_1^{\vee} \otimes \mathbb{L}^{\vee}_{1} ) \oplus ( \mathbb{L}_2^{\vee} \otimes \mathbb{L}_{2}^{\vee} ).
$$
Then the embedding \eqref{aa} pulls back to
\begin{align*}
b^{(2)*}\l(\mathbb{L}_1^{\vee} \otimes \mathbb{L}_{1}^{\vee} \r)\l(\bD^{(2)}_1\r) \oplus b^{(2)*}\l( \mathbb{L}_2^{\vee} \otimes \mathbb{L}_{2}^{\vee}\r)\l(\bD^{(2)}_2\r)\ \cong\ b^{(2)*}\bC^{(2)} \ \hookrightarrow\ b^{(2)*}{\mathbf E}_1^{(2)}.
\end{align*}
\subsection{Proof of Theorem \ref{QLP}}\label{sect:Thm2pf}
By the previous subsection, the equalities in \eqref{eq:ideal2}, in particular the third one, actually hold on the blowup $\widehat{\bQ}^{(i)}$ with twistings by the exceptional divisors. Hence \eqref{eq:ideal2} becomes
$
[\widetilde{Q}_p^{(i)}]^{\mathrm{vir}} \ =\ \frac{(-1)^{m\cdot i}}{\mathrm{deg}(\widetilde{\iota}_{i})}(\widetilde{\iota}_{i})_* b^{(i)}_* \left( e\left(\frac{ \bK^{(i)}}{\bC_{(i)}(\bD^{(i)})}\right)\cap [ \widehat{\bQ}^{(i)}(X)]^{\mathrm{vir}} \right),
$
where $[\widehat{\bQ}^{(i)}(X)]^{\mathrm{vir}} $ is the cycle pushing down to $[\bQ^{(i)}(X)]^{\mathrm{vir}} $ via the blowup morphism $b^{(i)}$. For $i=2$, we use
$$
\bC_{(i)}(\bD^{(i)})\ :=\ b^{(2)*}\l(\mathbb{L}_1^{\vee} \otimes \mathbb{L}_{1}^{\vee} \r)\l(\bD^{(2)}_1\r) \oplus b^{(2)*}\l( \mathbb{L}_2^{\vee} \otimes \mathbb{L}_{2}^{\vee}\r)\l(\bD^{(2)}_2\r)
$$
for notational consistency. Here we can throw away $\bD^{(i)}$ in the denominators by using \cite[Lemma 4.1]{LO20}:
\begin{align}\label{eq:chernexpress1}
[\widetilde{Q}_p^{(i)}]^{\mathrm{vir}} \ =\ \frac{(-1)^{m\cdot i}}{\mathrm{deg}(\widetilde{\iota}_{i})}(\widetilde{\iota}_{i})_* \left( \left[ \frac{ c( \bK^{(i)}) }{c(\bC_{(i)})} \right]_{\star} \cap [\bQ^{(i)}(X)]^{\mathrm{vir}} \right),
\end{align}
where $\star=\mathrm{dim} X-1$ for $i=1,3$ and $\star=2\mathrm{dim} X-2$ for $i=2$.
\medskip
We compute \eqref{eq:chernexpress1} explicitly to get Theorem \ref{QLP}.
\subsubsection{$i=1$ case} Recall from Section \ref{OUTLINE} that
\begin{itemize}
\item $\bK^{(1)} = \cH^\vee \boxtimes \mathrm{ev}^*T_X$,
\item $\bC_{(1)} \cong \mathbb{L}^\vee \boxtimes \mathbb{L}^\vee$,
\item $[\bQ^{(1)}(X)]^{\mathrm{vir}} = (-1)^{d(\sum_i \ell_i)}e^{\mathrm{ref}}(V_{1,1,d}) \cap\l([\overline{M}_{1,1}] \times [Q_{1,1,d}^\mathrm{red}(\mathbb{P}^n)]\r)$.
\end{itemize}
Combining these with \cite[Theorem 1.1]{LL22}
\begin{align*}
e^{\mathrm{ref}}(V_{1,1,d}) \cap [Q_{1,1,d}^\mathrm{red}(\mathbb{P}^n)] \ =\ [Q_{1,1,d}(X)]^{\mathrm{vir}} - [K]_{\mathrm{dim} X-1}\cap
\l( \, [\overline{M}_{1,1}] \times [Q_{0,2,d}(X)]^{\mathrm{vir}} \, \r),
\end{align*}
\eqref{eq:chernexpress1} for $i=1$ becomes
\begin{align}\label{FINAL1}
[Q^{(1)}_p]^{\mathrm{vir}} = & (-1)^{d(\sum_i \ell_i) + m}\, [K]_{\mathrm{dim} X-1} \cap \left( [\overline{M}_{1,1}] \times [Q_{1,1,d}(X)]^{\mathrm{vir}} \right) \\ \nonumber
& - (-1)^{d(\sum_i \ell_i) + m} \,[K_1]_{\mathrm{dim} X-1}[K_2]_{\mathrm{dim} X-1} \cap \l( \, [\overline{M}_{1,1}] \times [Q_{0,2,d}(X)]^{\mathrm{vir}} \times [\overline{M}_{1,1}] \, \r),
\end{align}
where, as introduced in the Introduction, $K$ denotes the cohomology class $K= \frac{c\,(\cH^\vee \boxtimes \;\mathrm{ev}^* T_X)}{c\,(\mathbb{L}^\vee \boxtimes\; \mathbb{L}^\vee)}$.
\subsubsection{$i=2$ case} Recall from Section \ref{OUTLINE} that
\begin{itemize}
\item $\bK^{(2)} = \left(\cH^\vee \boxtimes \mathrm{ev}_1^*T_X\boxtimes \cO_{\overline{M}_{1,1}}\right) \oplus \left(\cO_{\overline{M}_{1,1}} \boxtimes \mathrm{ev}_2^*T_X\boxtimes \cH^\vee \right)$,
\item $\bC_{(2)} \cong (\mathbb{L}_1^{\vee} \otimes \mathbb{L}^{\vee}_{1} ) \oplus ( \mathbb{L}_2^{\vee} \otimes \mathbb{L}_{2}^{\vee} )$,
\item $[\bQ^{(2)}(X)]^{\mathrm{vir}} = (-1)^{d(\sum_i \ell_i)+m}e^{\mathrm{ref}}(V_{0,2,d}) \cap\l([\overline{M}_{1,1}] \times [Q_{0,2,d}(\mathbb{P}^n)] \times [\overline{M}_{1,1}]\r)$.
\end{itemize}
So for $i=2$, \eqref{eq:chernexpress1} becomes
\begin{align}\label{FINAL2}
[Q^{(2)}_p]^{\mathrm{vir}} = \frac{(-1)^{d(\sum_i \ell_i) + m}}{2} \,[K_1K_2]_{2\mathrm{dim} X-2} \cap \l( \, [\overline{M}_{1,1}] \times [Q_{0,2,d}(X)]^{\mathrm{vir}} \times [\overline{M}_{1,1}] \, \r).
\end{align}
\subsubsection{$i=3$ case} Recall from Section \ref{OUTLINE} that
\begin{itemize}
\item $\bK^{(3)} = \cH^\vee \boxtimes \mathrm{ev}_1^*T_X$,
\item $\bC_{(3)} \cong \mathbb{L}^\vee \boxtimes \cO_{\mathbb{P} Q'_{0,2,d}}(-1)$,
\item $[\bQ^{(3)}(X)]^{\mathrm{vir}} = (-1)^{d(\sum_i \ell_i)}e^{\mathrm{ref}}(V_{0,2,d}) \cap\l([\overline{M}_{1,2}] \times [\mathbb{P} Q'_{0,2,d}]\r)$
\end{itemize}
where $\cO_{\mathbb{P} Q'_{0,2,d}}(-1)$ is the tautological line bundle of $\mathbb{P} Q'_{0,2,d} = \mathbb{P}(\mathbb{L}^{\vee}_{1}\oplus \mathbb{L}^{\vee}_{2})$.
To compute \eqref{eq:chernexpress1} we first expand $c\,(\bK^{(3)})/c\,(\bC_{(3)})$
\begin{align*}
\frac{c\,(\bK^{(3)})}{1+ c_1(\mathbb{L}^\vee)+c_1( \cO_{\mathbb{P} Q'_{0,2,d}}(-1))}\ &=\ c\,(\bK^{(3)})\cdot \sum_{a\geq 0}
\frac{(-1)^a\cdot c_1(\cO_{\mathbb{P} Q'_{0,2,d}}(-1))^a}{(1+ c_1(\mathbb{L}^\vee))^{a+1}}\\
&=\ \sum_{a\geq 0}(-1)^a\cdot A^{a+1}\cdot c_1(\cO_{\mathbb{P} Q'_{0,2,d}}(-1))^a,
\end{align*}
where $A^t=\frac{c\,(\cH^\vee \boxtimes\; \mathrm{ev}_1^*T_X)}{c\,(\mathbb{L}^\vee\boxtimes\; 1)^{t}}$ as introduced in the Introduction. Its $(\mathrm{dim} X-1)$-part $[c\,(\bK^{(3)})/c\,(\bC_{(3)})]_{\mathrm{dim} X-1}$ is
\begin{align}\label{AA}
\sum_{a= 0}^{\mathrm{dim} X-1}(-1)^a\cdot [A^{a+1}]_{\mathrm{dim} X-1-a}\cdot c_1(\cO_{\mathbb{P} Q'_{0,2,d}}(-1))^a.
\end{align}
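For the reader's convenience, the first equality in the expansion above is just the elementary geometric-series identity
\begin{align*}
\frac{1}{1+x+y}\ =\ \frac{1}{(1+x)\left(1+\frac{y}{1+x}\right)}\ =\ \sum_{a\geq 0}\frac{(-1)^a\, y^a}{(1+x)^{a+1}},
\end{align*}
applied with the nilpotent classes $x=c_1(\mathbb{L}^\vee\boxtimes 1)$ and $y=c_1(\cO_{\mathbb{P} Q'_{0,2,d}}(-1))$.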
By definition of Segre classes \cite[Chapter 3.1]{Fu}, we have
$$
p_* \l(c_1(\cO_{\mathbb{P} Q'_{0,2,d}}(-1))^a \cap [\mathbb{P} Q'_{0,2,d}]\r)\ =\ s_{a-1}\l( \mathbb{L}^{\vee}_{1}\oplus \mathbb{L}^{\vee}_{2} \r) \cap [Q'_{0,2,d}(\mathbb{P}^n)]\ =\ [B_1B_2]_{a-1}\cap [Q'_{0,2,d}(\mathbb{P}^n)]
$$
where $p: \mathbb{P} Q'_{0,2,d}\to Q'_{0,2,d}(\mathbb{P}^n)$ is the projection morphism and $B=\frac{1}{c\,(\mathbb{L}^\vee)}$. So by the projection formula, capping \eqref{AA} with $[\overline{M}_{1,2}] \times [\mathbb{P} Q'_{0,2,d}]$ and pushing down to $\overline{M}_{1,2} \times Q'_{0,2,d}(\mathbb{P}^n)$ gives
\begin{align}\label{AAA1}
&p_*\l(\left[\frac{c\,(\bK^{(3)})}{c\,(\bC_{(3)})}\right]_{\mathrm{dim} X-1}\cap([\overline{M}_{1,2}] \times [\mathbb{P} Q'_{0,2,d}])\r)\\ \nonumber
&=\sum_{a= 0}^{\mathrm{dim} X-1}(-1)^a\cdot [A^{a+1}]_{\mathrm{dim} X-1-a} [B_1B_2]_{a-1}\cap \l([\overline{M}_{1,2}] \times [Q'_{0,2,d}(\mathbb{P}^n)]\r).
\end{align}
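Here, since each $\mathbb{L}_i^\vee$ is a line bundle, the classes $B_i$ are simply geometric series and the coefficient $[B_1B_2]_{a-1}$ can be written out explicitly:
\begin{align*}
B_i\ =\ \frac{1}{1+c_1(\mathbb{L}_i^\vee)}\ =\ \sum_{k\geq 0}(-1)^k c_1(\mathbb{L}_i^\vee)^k,
\qquad
[B_1B_2]_{a-1}\ =\ (-1)^{a-1}\sum_{j+k=a-1} c_1(\mathbb{L}_1^\vee)^j\, c_1(\mathbb{L}_2^\vee)^k.
\end{align*}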
Next we compute the cycle $e^{\mathrm{ref}}(V_{0,2,d}) \cap\l([\overline{M}_{1,2}] \times [Q'_{0,2,d}(\mathbb{P}^n)]\r)$ in $\overline{M}_{1,2}\times Q_{0,2,d}(\mathbb{P}^n)$. Denoting by $j: Q'_{0,2,d}(\mathbb{P}^n) \hookrightarrow Q_{0,2,d}(\mathbb{P}^n)$ the embedding and by $\overline{V}_{0,2,d}$ the bundle $\oplus_{i=1}^{m} \pi_* \cL^{\ell_i}$ on $Q_{0,2,d}(\mathbb{P}^n)$, the evaluation morphism gives rise to a sequence
$$
0\ \longrightarrow\ V_{0,2,d} \ \longrightarrow\ j^*\overline{V}_{0,2,d} \ \xrightarrow{\ \mathrm{ev}_1-\,\mathrm{ev}_2\ }\ \mathrm{ev}_1^*\oplus_{i=1}^{m} \cO(\ell_i) \ \longrightarrow\ 0.
$$
Denoting by $\Delta_{\mathbb{P}^n}\in H^n(\mathbb{P}^n \times \mathbb{P}^n)$ the diagonal class, we have
\begin{align}\label{AAA2}
e^{\mathrm{ref}}(V_{0,2,d}) \cap\l([\overline{M}_{1,2}] \times [Q'_{0,2,d}(\mathbb{P}^n)]\r)\ &=\ \frac{e^{\mathrm{ref}}(\overline{V}_{0,2,d})}{e(\oplus_{i=1}^{m} \cO(\ell_i))}\cap \l([\overline{M}_{1,2}] \times\l((\mathrm{ev}_1\times\mathrm{ev}_2)^*\Delta_{\mathbb{P}^n}\cap [Q_{0,2,d}(\mathbb{P}^n)]\r)\r) \nonumber \\
&=\ [\overline{M}_{1,2}] \times\l(\frac{(\mathrm{ev}_1\times\mathrm{ev}_2)^*\Delta_{\mathbb{P}^n}}{e(\oplus_{i=1}^{m} \cO(\ell_i))} \cap [Q_{0,2,d}(X)]^{\mathrm{vir}} \r) \\
&=\ [\overline{M}_{1,2}] \times [Q'_{0,2,d}(X)]^{\mathrm{vir}} \nonumber
\end{align}
where $[Q'_{0,2,d}(X)]^{\mathrm{vir}} $ is the cycle defined in \eqref{Q1'}. Note that $\Delta_{\mathbb{P}^n}|_X=e(T_{\mathbb{P}^n}|_X)$ and $\Delta_X|_X=e(T_X)$. Hence by \eqref{AAA1} and \eqref{AAA2}, \eqref{eq:chernexpress1} becomes
\begin{align}\label{FINAL3}
[Q^{(3)}_p]^{\mathrm{vir}} = (-1)^{d(\sum_i \ell_i) + m} \sum_{a= 0}^{\mathrm{dim} X-1}(-1)^a\cdot [A^{a+1}]_{\mathrm{dim} X-1-a} [B_1B_2]_{a-1}\cap \l([\overline{M}_{1,2}] \times [Q'_{0,2,d}(X)]^{\mathrm{vir}} \r).
\end{align}
So \eqref{FINAL1}, \eqref{FINAL2}, \eqref{FINAL3} and \eqref{X=p} prove Theorem \ref{QLP}.
\subsection{Calabi-Yau $3$-folds} \label{sect:CY}
Suppose that $X$ is a Calabi-Yau $3$-fold. Set
$$
\alpha := c_1(\cH^\vee\boxtimes 1), \ \ \beta := c_2(1\boxtimes \mathrm{ev}^*T_X) ,\ \ \psi := c_1(1\boxtimes \mathbb{L}).
$$
\subsubsection{$i=1$ case} Then, since $\alpha = c_1(\mathbb{L}^\vee \boxtimes 1)$, we have
\begin{align*}
[K]_2\ =\ \left[\frac{c\,(\cH^\vee \boxtimes \mathrm{ev}^*T_{X})}{c\,(\mathbb{L}^\vee \boxtimes \mathbb{L}^{\vee} )}\right]_{2} \ = \ \left[ \frac{(1 + 3\alpha + \beta) }{ (1 + \alpha - \psi) } \right]_2.
\end{align*}
Its nontrivial contribution to the integration over $[\overline{M}_{1,1}] \times (e^{\mathrm{ref}}(V_{1,1,d}) \cap [Q_{1,1,d}^\mathrm{red}(\mathbb{P}^n)])$ is only $-\alpha\psi$. Hence
$$
[Q^{(1)}_p]^{\mathrm{vir}} \ =\ -\frac{(-1)^{d(\sum_i \ell_i) + m}}{24}\; c_1(\mathbb{L})\cap (e^{\mathrm{ref}}(V_{1,1,d}) \cap [Q_{1,1,d}^\mathrm{red}(\mathbb{P}^n)])
$$
Using \cite[Corollary 1.3]{LL22}
\begin{align*}
e^{\mathrm{ref}}(V_{1,1,d}) \cap [Q_{1,1,d}^\mathrm{red}(\mathbb{P}^n)] = [Q_{1,1,d}(X)]^{\mathrm{vir}} - \frac{c\,(\mathbb{L})}{12}[Q_{0,2,d}(X)]^{\mathrm{vir}} ,
\end{align*}
we obtain
\begin{align}\label{AAAA1}
[Q^{(1)}_p]^{\mathrm{vir}} \ =\ -\frac{(-1)^{d(\sum_i \ell_i) + m}}{24}\; c_1(\mathbb{L})\cap [Q_{1,1,d}(X)]^{\mathrm{vir}} + 2\frac{(-1)^{d(\sum_i \ell_i) + m}}{24^2}c_1(\mathbb{L}_1)c_1(\mathbb{L}_2)\cap [Q_{0,2,d}(X)]^{\mathrm{vir}} .
\end{align}
\subsubsection{$i=2$ case} Similarly we have
$$
\left[K_1K_2\right]_{4} = \left[ \frac{(1 + 3\alpha_1 + \beta_1)}{(1 + \alpha_1 - \psi_1)} \cdot \frac{(1 + 3\alpha_2 + \beta_2)}{(1 + \alpha_2 - \psi_2)} \right]_4.
$$
The nontrivial contribution is $\alpha_1\alpha_2 (-3 \psi_1\psi_2 - 3\beta_1 - 3\beta_2)$. Hence we obtain
\begin{align}\label{AAAA2}
[Q^{(2)}_p]^{\mathrm{vir}} \ &=\ \frac{(-1)^{d(\sum_i \ell_i) + m} }{2} \alpha_1\alpha_2 (-3 \psi_1\psi_2 - 3\beta_1 - 3\beta_2) \cap \left( [\overline{M}_{1,1}] \times [Q_{0,2,d}(X)]^{\mathrm{vir}} \times [\overline{M}_{1,1}] \right) \\ \nonumber
&=\ -(-1)^{d(\sum_i \ell_i) + m} \frac{3}{2\cdot 24^2} (c_1(\mathbb{L}_1)c_1(\mathbb{L}_2)+c_2(\mathrm{ev}_1^*T_X)+c_2(\mathrm{ev}_2^*T_X)) \cap [Q_{0,2,d}(X)]^{\mathrm{vir}}
\end{align}
\subsubsection{$i=3$ case}
Since $(\mathrm{ev}_1 \times \mathrm{ev}_2)^*(\Delta_X) \in H^3(Q_{0,2,d}(X))$ and the degree of $[Q_{0,2,d}(X)]^{\mathrm{vir}} $ is $2$, $[Q'_{0,2,d}(X)]^{\mathrm{vir}} =0$. Thus $[Q_p^{(3)}]^{\mathrm{vir}} = 0$.
\medskip
By \eqref{AAAA1}, \eqref{AAAA2} and \eqref{X=p}, Theorem \ref{QLP1} is proved.
|
1,116,691,497,823 | arxiv | \section{Introduction}
Visual SLAM (V-SLAM) is one of the enabling technologies for autonomous systems such as self-driving cars, unmanned aerial vehicles and space robots.
While most V-SLAM solutions rely on point features due to their simplicity,
line features commonly seen in man-made environments
are less sensitive to lighting variation and position ambiguity, and have only been exploited in recent work~\cite{rother2003linear,marzorati2007integration,klein2008improving,koletschka2014mevo,zhang2015building,lu2015robust,gomez2016robust}.
In principle, the combination of point and line features would provide more geometric constraints about the structure of the environment than either one,
which motivates us to design robust V-SLAM with point and line features.
Recently, optimization-based approaches have become favorable for the V-SLAM due to its superior accuracy per computational unit as compared with filtering-based approaches~\cite{strasdat2010real}.
In particular, graph-based SLAM is one of the most popular formulations which constructs a factor graph whose nodes correspond to the states to estimate and edges represent measurement constraints between the nodes.
When incorporating the line features into the traditional point feature-based graph SLAM framework, two challenges arise:
The first one is that the spatial line is often over-parameterized for the convenience of transformation~\cite{lu2015robust,klein2008improving,marzorati2007integration},
which incurs extra computational overhead in the graph optimization.
{Note that while a spatial line has only {\em four} degrees of freedom, typically it is represented by its two spatial endpoints or the $ Pl\ddot{u}cker $ coordinates with {\em six} degrees of freedom.}
Secondly,
it is known that the Jacobian plays an important role when using an iterative approach to solve the graph optimization problem.
In part because of the over-parametrization, most approaches~\cite{lu2015robust,zhang2015building} using line features employ numerically computed Jacobians, which introduces approximation errors.
In contrast, we analytically compute the Jacobians during the graph optimization in order to improve accuracy as well as efficiency.
In particular, this paper introduces a robust and efficient graph-based visual SLAM system using both point and line features with a unified cost function,
combining the re-projection errors of points and lines.
In our back-end, the spatial line is parametrized by the orthonormal representation, which is the minimal and decoupled representation.
Based on this minimal parametrization, we further derive the analytical Jacobian of the line re-projection error.
Specifically, the main contributions of this paper are the following:
\begin{itemize}
\item An improved extraction and matching method for line features is introduced to robustify data association.
\item In the back-end of the proposed visual SLAM, we employ the orthonormal (minimal) representation to parameterize lines and analytically compute the corresponding Jacobians.
\item We design and implement a complete visual SLAM system using both point and line features, which includes stereo matching, frame tracking, local mapping, bundle adjustment of both line feature and point feature, as well as point-line based loop detection. Extensive experimental results are presented to validate its performance.
\end{itemize}
\begin{figure*}[!t]
\centering
\subfigure [Point and line features detected in one image]{\includegraphics[scale=.2]{./figure/i1}}~~
\subfigure [The point and line map]{\includegraphics[scale=.3]{./figure/i2}}
\caption{The proposed visual SLAM with point and line features on the it3f dataset \cite{zhang2015building}.
Note that in (b), the green lines indicate the trajectory of camera motion. The blue frames represent keyframes, the current frame in green, and the local map for the tracking at that moment in red. }
\label{ifig1}
\end{figure*}
\section{Related Work}
Some methods have been proposed to parameterize lines in three-dimensional (3D) space efficiently. Sola \cite{sola2012impact} summarizes several methods to represent a line, including $ Pl\ddot{u}cker $ coordinates, anchored $ Pl\ddot{u}cker $ lines, the homogeneous-points line, etc. To minimize the number of parameters, Bartoli \cite{bartoli2005structure} proposed the orthonormal representation, which uses the minimal four parameters to represent spatial lines in SfM.
The combination of point and line features has been utilized in the SLAM community recently. Marzorati et al.~\cite{marzorati2007integration} proposed a SLAM with points and lines, which uses a special trifocal camera setup to detect and reconstruct 3D lines. Rother~\cite{rother2003linear} reconstructed points and lines at the cost of requiring a reference plane to be visible in all views. Koletschka et al.~\cite{koletschka2014mevo} proposed a stereo odometry based on points and lines, which computes the sub-pixel disparity of the endpoints of the line and deals with partial occlusions. Lu~\cite{lu2015robust} fuses point and line features to form an RGB-D visual odometry algorithm, which extracts 3D points and lines from RGB-D data; his work also proved that fusing these two types of features produces smaller uncertainty in motion estimation than using either feature type alone. Ruben~\cite{gomez2016robust} proposed a probabilistic approach to stereo visual odometry based on the combination of points and line segments, which weighs the associated errors of points and line segments according to their covariance matrices.
\section{Detection and Representation of Line Features}
\subsection{Extraction and Description of Line Features}
Line Segment Detector (LSD) \cite{von2010lsd} is a popular feature detector for line segments.
It is designed to work on noisy images in various scenes without parameter tuning and is able to provide subpixel accuracy.
However, LSD suffers from the problem of dividing a line into multiple segments in some scenarios, as shown in Fig.~\ref{rfig1},
which causes failures in matching and tracking line features.
\begin{figure}[!h]
\centering
\includegraphics[width=3.2in,height=0.8in ]{./figure/3lsd}
\caption{Performance of LSD. Left: Original image. Right: Line features detected in the image by LSD.}
\label{rfig1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=2in,height=0.4in ]{./figure/3dis}
\caption{Distances between two line segments.}
\label{rfig2}
\end{figure}
Therefore,
in this paper, we seek to improve the LSD algorithm by minimizing the influence of dividing a line into multiple segments.
In particular, we merge line segments that should lie on the same straight line but have been divided into several parts. For each line segment extracted by LSD, the start point and end point can be distinguished, because the direction is encoded by which side of the line segment is darker.
In our improvement, we merge the segments according to their differences in both direction and distance. As shown in Fig. \ref{rfig2}, $ l $ represents the minimum distance between the endpoints of the two segments, and $ d $ indicates the distance from the midpoint of one segment to the other line segment. If $ d $ and $ l $ are smaller than the given thresholds and the direction difference is also small, the two segments are considered candidates to be merged. This improved line detector yields more robust and accurate data association, as demonstrated in our experiments.
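To make the criterion concrete, the merging test can be sketched as follows (a minimal Python/numpy sketch; the function name and the three thresholds are our own illustrative choices, not fixed by the system):
\begin{verbatim}
import numpy as np

def should_merge(seg1, seg2, max_gap, max_dist, max_angle):
    # seg = (start, end), each endpoint a 2D numpy array
    d1, d2 = seg1[1] - seg1[0], seg2[1] - seg2[0]
    cos_a = (d1 @ d2) / (np.linalg.norm(d1) * np.linalg.norm(d2))
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))  # direction difference
    # l: minimum distance between endpoints of the two segments
    l = min(np.linalg.norm(p - q) for p in seg1 for q in seg2)
    # d: distance from the midpoint of seg2 to the line supporting seg1
    n = np.array([-d1[1], d1[0]]) / np.linalg.norm(d1)
    d = abs((0.5 * (seg2[0] + seg2[1]) - seg1[0]) @ n)
    return angle < max_angle and l < max_gap and d < max_dist
\end{verbatim}
In the actual detector, the LBD descriptor distance provides an additional check, as noted below.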
Fig. \ref{rfig3} shows the results of the two different detectors. Note that each merged line segment found by our improved detector is represented by the LBD line descriptor \cite{Zhang2013}, which is a 256-bit vector like the ORB point descriptor \cite{rublee2011orb}.
The distance between two descriptors serves as another criterion for fusing two lines.
\begin{figure}[!b]
\centering
\includegraphics[width=3.2in,height=0.8in ]{./figure/3lsdres}
\caption{ Comparative results of different detectors. Left: The original LSD detector. Right: The proposed improved detector.}
\label{rfig3}
\end{figure}
\subsection{Line Feature Matching}\label{seclinematch}
Based on the LBD line segment descriptor, we introduce the geometric properties \cite{woo20092d} of lines to perform effective line matching. In our approach, two successfully matched line features $ \bm{l}_1 $, $ \bm{l}_2 $ need to satisfy the following conditions (a code transcription is sketched after the list):
\begin{enumerate}
\item the angular difference of two matched lines is smaller than a given threshold $ \Phi $;
\item the length of $ \bm{l}_1 $, $ \|\bm{l}_1\| $ is similar to the length of $ \bm{l}_2 $, $ \|\bm{l}_2\| $: $\frac{{min\left( {{\|\bm{l}_1\|},{\|\bm{l}_2\|}} \right)}}{{max\left( {{\|\bm{l}_1\|},{\|\bm{l}_2\|}} \right)}} > \tau $;
\item the overlapping length of the two lines is greater than a certain value: $\frac{{{\bm{l}_{overlap}}}}{{\min ({\|\bm{l}_1\|},{\|\bm{l}_2\|})}} > \beta $;
\item the distance between the two LBD descriptors is less than a certain value.
\end{enumerate}
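As an illustration, the four conditions translate directly into a predicate (a minimal Python sketch; the function name and argument layout are ours, and the thresholds $\Phi$, $\tau$, $\beta$ and the descriptor bound are tuning parameters whose values we leave unspecified):
\begin{verbatim}
def lines_match(angle_diff, len1, len2, overlap, desc_dist,
                phi, tau, beta, desc_max):
    # angle_diff: angular difference of the two lines
    # len1, len2: segment lengths; overlap: overlapping length
    # desc_dist: distance between the two LBD descriptors
    return (angle_diff < phi                              # condition 1
            and min(len1, len2) / max(len1, len2) > tau   # condition 2
            and overlap / min(len1, len2) > beta          # condition 3
            and desc_dist < desc_max)                     # condition 4
\end{verbatim}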
\subsection{Geometric Representation}
As a 3D line can be initialized from two spatial points, we assume that their homogeneous coordinates are
$\bm{X}_1 = {({x_1},{y_1},{z_1},{r_1})^T}$, $\bm{X}_2 = {({x_2},{y_2},{z_2},{r_2})^T}$ respectively, while the inhomogeneous coordinates are represented as ${\bm{\tilde{X}}_1} $, ${\bm{\tilde{X}}_2} $.
Then $ Pl\ddot{u}cker $ coordinates can be constructed as follows:
\begin{equation}
\bm{\mathcal{L}} = \left[ {\begin{array}{*{20}{c}}
{{{\bm{\tilde{X}}}_1} \times {{\bm{\tilde{X}}}_2}}\\
{{r_2}{{\bm{\tilde{X}}}_1} - {r_1}{{\bm{\tilde{X}}}_2}}
\end{array}} \right] = \left[ {\begin{array}{*{20}{c}}
\bm{n}\\
\bm{v}
\end{array}} \right] \in {\mathbb{P}^5} \subset {\mathbb{R}^6}
\label{3eq1}
\end{equation}
which is a 6-dimensional vector consisting of $ \bm{n} $ and $ \bm{v} $. $ \bm{v} $ is the direction vector of the line and $ \bm{n} $ is the normal vector of the plane determined by the line and the origin \cite{sola2012impact}.
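For concreteness, \eqref{3eq1} amounts to only a few lines of code (a numpy sketch; the function name is ours):
\begin{verbatim}
import numpy as np

def plucker_from_points(X1, X2):
    # X1, X2: homogeneous 4-vectors (x, y, z, r) of two points on the line
    n = np.cross(X1[:3], X2[:3])         # normal of plane through line and origin
    v = X2[3] * X1[:3] - X1[3] * X2[:3]  # direction: r2 * X1~ - r1 * X2~
    return np.concatenate([n, v])        # L = (n, v)
\end{verbatim}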
Since a 3D line has only four degrees of freedom, the $ Pl\ddot{u}cker $ coordinates are over-parameterized. In the back-end graph optimization, the extra degrees of freedom increase the computational cost and cause numerical instability of the system.
Thus, Bartoli \cite{bartoli2005structure} proposed the orthonormal representation with minimum four parameters. We can obtain the orthonormal representation $ (\bm{U},\bm{W})\in{SO(3)}\times{SO(2)} $ from $ Pl\ddot{u}cker $ coordinates:
\begin{align}
{\bm{\mathcal L}} &= \left[ {{\bm{n}}|{\bm{v}}} \right] = \left[ {\begin{array}{*{20}{c}}
{\frac{{\bm{n}}}{{\left\| {\bm{n}} \right\|}}}&{\frac{{\bm{v}}}{{\left\| {\bm{v}} \right\|}}}&{\frac{{{\bm{n}} \times {\bm{v}}}}{{\left\| {{\bm{n}} \times {\bm{v}}} \right\|}}}
\end{array}} \right]\left[ {\begin{array}{*{20}{c}}
{\left\| {\bm{n}} \right\|}&0\\
0&{\left\| {\bm{v}} \right\|}\\
0&0
\end{array}} \right]\\
&= {\bm{U}}\left[ {\begin{array}{*{20}{c}}
{{w_1}}&0\\
0&{{w_2}}\\
0&0
\end{array}} \right].
\end{align}
The orthonormal representation of line $ (\bm{U},\bm{W})$ consists of:
\begin{align}
{\bm{U}} &= {\bm{U}(\bm{\theta})} = \left[ {\begin{array}{*{20}{c}}
{{u_{11}}}&{{u_{12}}}&{{u_{13}}}\\
{{u_{21}}}&{{u_{22}}}&{{u_{23}}}\\
{{u_{31}}}&{{u_{32}}}&{{u_{33}}}
\end{array}} \right]\\
{\bm{W}} &= {\bm{W}(\theta)} = \left[ {\begin{array}{*{20}{c}}
{{w_1}}&{ - {w_2}}\\
{{w_2}}&{{w_1}}
\end{array}} \right].
\end{align}
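A minimal numpy sketch of this conversion is the following (assuming a valid Plücker line, i.e. $\bm{n}\cdot\bm{v}=0$ with $\bm{n},\bm{v}\neq\bm{0}$; note that $(w_1,w_2)$ is taken proportional to $(\left\| \bm{n} \right\|,\left\| \bm{v} \right\|)$ and normalized so that $\bm{W}\in SO(2)$):
\begin{verbatim}
import numpy as np

def plucker_to_orthonormal(L):
    n, v = L[:3], L[3:]
    c = np.cross(n, v)
    U = np.stack([n / np.linalg.norm(n),
                  v / np.linalg.norm(v),
                  c / np.linalg.norm(c)], axis=1)  # U in SO(3) since n . v = 0
    s = np.hypot(np.linalg.norm(n), np.linalg.norm(v))
    w1, w2 = np.linalg.norm(n) / s, np.linalg.norm(v) / s
    W = np.array([[w1, -w2], [w2, w1]])            # W in SO(2)
    return U, W
\end{verbatim}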
The orthonormal representation can be updated with a minimal four-dimensional vector
$\bm{{\delta}}_{\theta}={[\bm{\theta}^T,\theta ]}^T\in{\mathbb{R}^4} $:
$ \bm{U} $ is updated with the vector $ \bm{\theta} \in{\mathbb{R}^3} $, and $ \bm{W} $ with $ \theta $.
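In code, one update step can be sketched as follows (whether the increment multiplies on the left or on the right is a convention; here we use right-multiplication following \cite{bartoli2005structure}):
\begin{verbatim}
import numpy as np
from scipy.spatial.transform import Rotation

def update_orthonormal(U, W, delta):
    # delta = (theta_1, theta_2, theta_3, theta) in R^4
    U_new = U @ Rotation.from_rotvec(delta[:3]).as_matrix()
    c, s = np.cos(delta[3]), np.sin(delta[3])
    W_new = W @ np.array([[c, -s], [s, c]])
    return U_new, W_new
\end{verbatim}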
Each sub-parameter of $ \bm{\delta}_{\theta}$ has a specific geometric interpretation. $ \bm{W} $, updated by $ \theta $, encapsulates the perpendicular distance $ {\rm{d}} $ from the origin to the spatial line. As shown in Fig. \ref{3fig5}, with $ \bm{W} $ fixed (represented in gray), the three-dimensional vector $ \bm{\theta} $ is related to the rotations of the line around the three axes, drawn in orange, green, and blue.
\begin{figure}[!h]
\centering
\includegraphics[width=.6\columnwidth]{./figure/plv1}
\caption{ Geometric interpretation of four parameters $ \bm{\delta}_{\theta}$ in updating orthonormal representation. }
\label{3fig5}
\end{figure}
Note that in the proposed visual SLAM system,
we only use the orthonormal representation in the back-end optimization, as it is the minimal and decoupled representation.
However, in the other steps, the $ Pl\ddot{u}cker $ coordinates are used due to their convenience in camera projection, endpoint trimming, and line initialization~\cite{bartoli2005structure,zhang2015building}.
\section{Graph Optimization with Point and Line Measurements}
In what follows, we present in detail how the line measurements are incorporated into our graph-based visual SLAM system,
while the point measurements are treated in a standard way, for example, as in ORB-SLAM~\cite{murORB2}.
\subsection{Measurement Models of Point and Line Features}\label{sec41}
We use the transformation matrix $ \bm{T}_{cw}\in{SE(3)} $ to denote the transformation from the world frame to the camera frame, which consists of a rotation matrix $ \bm{R}_{cw}\in{SO(3)} $ and a translation vector $ \bm{t}_{cw}\in{\mathbb{R}^3} $, as shown in~\eqref{eq4}. First, we convert the 3D line $ \bm{\mathcal{L}}_w $ from the world frame to the camera frame~\cite{bartoli20013d} as shown in \eqref{eq5}, denoted as $ \bm{\mathcal{L}}_c $, in the representation of the $ Pl\ddot{u}cker $ coordinates. Then the 3D line $ \bm{\mathcal{L}}_c $ is projected onto the image in~\eqref{eq6}, described as ${\bm{l}'}$ on the image plane, according to the known intrinsic parameters of the camera. It should be noted that only the normal component $ \bm{n}_c $ of the $ Pl\ddot{u}cker $ coordinates $ \bm{\mathcal{L}}_c $ provides meaningful information in the projection. Then the re-projection error of a 3D line is represented by the distances from the two homogeneous endpoints $ \bm{x}_s $, $ \bm{x}_e $ of the matched line segment $ \bm{z} $ to the back-projected line ${\bm{l}'}$ on the image plane, as shown in~\eqref{eq7}.
\begin{equation}
{\bm{T}_{cw}} = \left[ {\begin{array}{*{20}{c}}
{{{\bm{R}}_{cw}}}&{{\bm{t}_{cw}}}\\
\bm{0}&1
\end{array}} \right]
\label{eq4}
\end{equation}
\begin{equation}
{\bm{\mathcal L}_c} = \left[ {\begin{array}{*{20}{c}}
{{\bm{n}_c}}\\
{{\bm{v}_c}}
\end{array}} \right]={\bm{\mathcal{H}}_{cw}}{\bm{\mathcal L}_w} = \left[ {\begin{array}{*{20}{c}}
{{{\bm{R}}_{cw}}}&{{{\left[ {{{\bm{t}}_{cw}}} \right]}_ \times }{{\bm{R}}_{cw}}}\\
\bm{0}&{{{\bm{R}}_{cw}}}
\end{array}} \right]{\bm{\mathcal L}_w},
\label{eq5}
\end{equation}
where ${\left[ . \right]_ \times }$ denotes the skew-symmetric matrix of a vector, and $ {\bm{\mathcal{H}}_{cw}} $ represents transformation matrix of the line.
\begin{equation}
{\bm{l}'}={\bm{\mathcal{K}}}{\bm{n}_c} = \left[ {\begin{array}{*{20}{c}}
{{f_v}}&0&0\\
0&{{f_u}}&0\\
{ - {f_v}{c_u}}&{{f_u}{c_v}}&{{f_u}{f_v}}
\end{array}} \right]{{\bm{n}}_c}=\left[ {\begin{array}{*{20}{c}}
{{l_1}}\\
{{l_2}}\\
{{l_3}}
\end{array}} \right],
\label{eq6}
\end{equation}
where $ \bm{\mathcal{K}} $ denotes the projection matrix of the line~\cite{sola2012impact}.
\begin{equation}
{e_l} = {\rm{d}}\left( {{\bm{z}},{\bm{l}'}} \right) = {\left[ {\frac{{{\bm{x}}_s^{\rm{T}}{\bm{l}'}}}{{\sqrt {l_1^2 + l_2^2} }},\frac{{{\bm{x}}_e^{\rm{T}}{\bm{l}'}}}{{\sqrt {l_1^2 + l_2^2} }}} \right]^{\rm{T}}},
\label{eq7}
\end{equation}
where $ {{\rm{d}}}(.) $ denotes the distance function.
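Equations \eqref{eq5}--\eqref{eq7} amount to the following computation (a numpy sketch; \texttt{K\_line} stands for the line projection matrix $\bm{\mathcal{K}}$ of \eqref{eq6} built from the camera intrinsics):
\begin{verbatim}
import numpy as np

def transform_line(R_cw, t_cw, L_w):
    # Plucker transform: n_c = R n_w + [t]x R v_w,  v_c = R v_w
    n_w, v_w = L_w[:3], L_w[3:]
    v_c = R_cw @ v_w
    n_c = R_cw @ n_w + np.cross(t_cw, v_c)
    return np.concatenate([n_c, v_c])

def line_reproj_error(K_line, L_c, x_s, x_e):
    # projection l' = K n_c; error = endpoint-to-line distances
    l = K_line @ L_c[:3]
    norm = np.hypot(l[0], l[1])
    return np.array([x_s @ l, x_e @ l]) / norm
\end{verbatim}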
The camera pose $ {\bm{T}}_{kw} $, the 3D point position $ {\bm{X}_{w,i}} $, and the position of the 3D line $ {\bm{\mathcal{L}}_{w,j}} $ are denoted as vertices in the graph model. Two types of edges, the pose-point edge in~\eqref{geq8} and the pose-line edge in~\eqref{geq9}, are constructed according to the front-end data association. The re-projection errors encapsulated in the edges are:
\begin{equation}
{Ep_{k,i}}{{ = }}{{\bm{x}}_{k,i}} - \bm{{\rm{K}}}{\bm{T}_{kw}}{\bm{X}_{w,i}}
\label{geq8}
\end{equation}
\begin{equation}
{El_{k,j}} = {\rm{d}}\left( {{{\bm{z}}_{k,j}},\bm{\mathcal{K}}\rm{n}_c[{\bm{\mathcal{H}}_{cw}}{\bm{\mathcal{L}}_{w,j}}]} \right),
\label{geq9}
\end{equation}
where $ {{\bm{x}}_{k,i}} $ stands for the coordinates of point in the image, $\rm{n}_c[.]$ denotes the normal components of the $ Pl\ddot{u}cker $ coordinates.
For simplicity, we omit the conversion from homogeneous coordinates to inhomogeneous ones in the above equations. Assuming that the observations obey a Gaussian distribution, the final cost function $ C $ can be obtained as in \eqref{leq10}, where $ \bm{\Sigma p}^{ - 1} $, $ \bm{\Sigma l}^{ - 1} $ are the inverse covariance matrices of points and lines, and $ {\rho _p} $, $ {\rho _l} $ are robust Huber cost functions. The back-end optimization minimizes the cost function $ C $.
\begin{equation}
\label{leq10}
C = \mathop \sum \limits_{k,i} {\rho _p}\left( {Ep_{k,i}^{\rm{T}}{\bm{\Sigma p}}_{k,i}^{ - 1}{Ep_{k,i}}} \right) + \mathop \sum \limits_{k,j} {\rho _l}\left( {El_{k,j}^{\rm{T}}{\bm{\Sigma l}}_{k,j}^{ - 1}{El_{k,j}}} \right)
\end{equation}
\subsection{Jacobian of Line Re-projection Error}
It is known that the Jacobian is important when using an iterative approach to solve the graph optimization problem.
To the best of our knowledge, this is the first paper to derive the analytical Jacobians of the line re-projection error with respect to the line parameters, including the Jacobian with respect to the small pose changes ${{\bm{\delta }}_\xi }$ and with respect to the four-dimensional vector ${{\bm{\delta }}_\theta }$ which updates the orthonormal representation. The Jacobian of the line re-projection error ${el} = {{{\rm{d}}}}({\bm{z},{\bm{l}}'})$ with respect to the back-projected line ${\bm{l}'} = {[{l_1},{l_2},{l_3}]^T}$ is given by:
\begin{equation}
\frac{{\partial {el}}}{{\partial {\bm{l}}'}}
= \frac{1}{{{l_n}}}{\left[ {\begin{array}{*{20}{c}}
{{u_1} - \frac{{{l_1}{e_1}}}{{l_n^2}}}&{{v_1} - \frac{{{l_2}{e_1}}}{{l_n^2}}}&1\\
{{u_2} - \frac{{{l_1}{e_2}}}{{l_n^2}}}&{{v_2} - \frac{{{l_2}{e_2}}}{{l_n^2}}}&1
\end{array}} \right]_{2 \times 3}},
\end{equation}
where ${e_1} = {\bm{x}}_s^{\rm{T}}{\bm{l}}'$, ${e_2} = {\bm{x}}_e^{\rm{T}}{\bm{l}}'$, ${l_n} = \sqrt {(l_1^2 + l_2^2)} $. ${{\bm{x}}_s} = {\left[ {{u_1},{v_1},1} \right]^{\rm{T}}}$ and ${{\bm{x}}_e} = {\left[ {{u_2},{v_2},1} \right]^{\rm{T}}}$ are the two endpoints of matched line segment in the image.
Recalling the projection of the 3D line, ${\bm{l}}' = \bm{\mathcal{K}}{{\bm{n}}_c}$, we have:
\begin{equation}
\frac{{\partial {\bm{l}}'}}{{\partial {\bm{\mathcal{L}}_c}}} = \frac{{\partial {\cal K}{{\bm{n}}_c}}}{{\partial {\bm{\mathcal{L}}_c}}} = {\left[ {\begin{array}{*{20}{c}}
\bm{\mathcal{K}}&{\bm{0}}
\end{array}} \right]_{3 \times 6}}
\end{equation}
Given the orthonormal representation of the line in the world frame, ${\bm{\mathcal{L}}}_w$, which consists of $\bm{U}$ and $\bm{W}$, we write the Jacobians directly:
\begin{equation}
\frac{{\partial {{\bm{\mathcal{L}}}_c}}}{{\partial {{\bm{\mathcal{L}}}_w}}} = \frac{{\partial {{\bm{\mathcal{H}}}_{cw}}{{\bm{\mathcal{L}}}_w}}}{{\partial {{\bm{\mathcal{L}}}_w}}} = {{\bm{\mathcal{H}}}_{cw}}
\end{equation}
\begin{equation}
\frac{{\partial {{\bm{\mathcal{L}}}_w}}}{{\partial {{\bm{\delta }}_\theta }}} = {\left[ {\begin{array}{*{20}{c}}
{ - {{\left[ {{w_1}{{\bm{u}}_1}} \right]}_ \times }}&{ - {w_2}{{\bm{u}}_1}}\\
{ - {{\left[ {{w_2}{{\bm{u}}_2}} \right]}_ \times }}&{ {w_1}{{\bm{u}}_2}}
\end{array}} \right]_{6 \times 4}},
\end{equation}
where $ \bm{u}_i $ is the $ i_{th} $ column of $ \bm{U} $.
It is difficult to compute $\frac{{\partial {{\bm{\mathcal{L}}}_w}}}{{\partial {{\bm{\delta }}_\xi }}}$ directly, so we divide the pose change ${{\bm{\delta }}_\xi }$ into two parts, the translation part ${{\bm{\delta }}_\rho }$ and the rotation part ${{\bm{\delta }}_\phi }$.
${{\bm{\delta }}_\phi }$ is set to zero when computing the Jacobian with respect to ${{\bm{\delta }}_\rho }$. With a transformation matrix ${\bm{T}^*}$ containing the translation ${{\bm{\delta }}_\rho }$, the new line ${\bm{\mathcal{L}}}_c^*$ is:
\begin{equation}
{\bm{T}^*} = \exp \left( {{{\bm{\delta }}_\xi} ^\wedge} \right){T_{{{cw}}}} \approx \left[ {\begin{array}{*{20}{c}}
{\bm{I}}&{{{\bm{\delta }}_\rho }}\\
{{{\bm 0}^{{T}}}}&1
\end{array}} \right]{T_{{{cw}}}}
\end{equation}
\begin{equation}
{{\bm{R}}^*} = {{\bm{R}}_{cw}}\text{, }{{\bm{t}}^*} = {{\bm{\delta }}_\rho } + {{\bm{t}}_{cw}}
\end{equation}
\begin{equation}
{\bm{\mathcal{H}}}_{cw}^* = \left[ {\begin{array}{*{20}{c}}
{{{\bm{R}}_{cw}}}&{{{\left[ {{{\bm{\delta }}_\rho } + {{\bm{t}}_{cw}}} \right]}_ \times }{{\bm{R}}_{cw}}}\\
{\bm 0}&{{{\bm{R}}_{cw}}}
\end{array}} \right]
\end{equation}
\begin{equation}
{\bm{\mathcal{L}}}_c^* = {\bm{\mathcal{H}}}_{cw}^*{{\bm{\mathcal{L}}}_w} = \left[ {\begin{array}{*{20}{c}}
{{{\bm{R}}_{cw}}{{\bm{n}}_w} + {{\left[ {{{\bm{\delta }}_\rho } + {{\bm{t}}_{cw}}} \right]}_ \times }{{\bm{R}}_{cw}}{{\bm{v}}_w}}\\
{{\bm{R}}_{cw}{{\bm{v}}_w}}
\end{array}} \right],
\end{equation}
where $ \exp \left( {{{\bm{\delta }}_\xi} ^\wedge} \right) $ denotes the exponential map from the Lie algebra to the Lie group (hence $ {{{\bm{\delta }}_\xi} ^\wedge} $ is a Lie algebra element). Then it is easy to deduce the partial derivative with respect to $ \bm{\delta}_{\rho} $:
\begin{equation}
\frac{{\partial {\bm{\mathcal{L}}}_c^*}}{{\partial {{\bm{\delta }}_\rho }}} = \left[ {\begin{array}{*{20}{c}}
{\frac{{{{\left[ {{{\bm{\delta }}_\rho } + {{\bm{t}}_{cw}}} \right]}_ \times }{{\bm{R}}_{cw}}{{\bm{v}}_w}}}{{\partial {{\bm{\delta }}_\rho }}}}\\
{\bm 0}
\end{array}} \right] = {\left[ {\begin{array}{*{20}{c}}
{ - {{\left[ {{{\bm{R}}_{cw}}{{\bm{v}}_w}} \right]}_ \times }}\\
{\bm 0}
\end{array}} \right]_{6 \times 3}}
\end{equation}
The process of deducing $\frac{{\partial {\bm{\mathcal{L}}}_c^*}}{{\partial {{\bm{\delta }}_\phi }}}$ is similar to that of $\frac{{\partial {\bm{\mathcal{L}}}_c^*}}{{\partial {{\bm{\delta }}_\rho }}}$, except that ${{\bm{\delta }}_\rho } = {\bm 0}$. We only show the final result in Eq.~\eqref{4eq20}, dropping the coordinate frame subscripts for readability. Readers can refer to the Appendix for more details.
\begin{equation}
\frac{{\partial {\bm{\mathcal{L}}}_c^*}}{{\partial {{\bm{\delta }}_\phi }}} = {\left[ {\begin{array}{*{20}{c}}
{ - {{\left[ {{\bm{Rn}}} \right]}_ \times } - {{\left[ {{{\left[ {\bm{t}} \right]}_ \times }{\bm{Rv}}} \right]}_ \times }\;}\\
{ - {{\left[ {{\bm{Rv}}} \right]}_ \times }}
\end{array}} \right]_{6 \times 3}}
\label{4eq20}
\end{equation}
Stacking the Jacobians of ${{\bm{\delta }}_\rho }$ and ${{\bm{\delta }}_\phi }$ , we can obtain the final Jacobian of ${{\bm{\delta }}_\xi }$:
\begin{equation}
\frac{{\partial {\bm{\mathcal{L}}}_c^*}}{{\partial {{\bm{\delta }}_\xi }}} = {\left[ {\begin{array}{*{20}{c}}
{ - {{\left[ {{{\bm{R}}}{{\bm{n}}}} \right]}_ \times } - {{\left[ {{{\left[ {{{\bm{t}}}} \right]}_ \times }{{\bm{R}}}{{\bm{v}}}} \right]}_ \times }}&{ - {{\left[ {{{\bm{R}}}{{\bm{v}}}} \right]}_ \times }}\\
{ - {{\left[ {{{\bm{R}}}{{\bm{v}}}} \right]}_ \times }}&{\bm 0}
\end{array}} \right]_{6 \times 6}}
\end{equation}
Finally, the Jacobians of the re-projection error with respect to the pose and the line parameters can be found using the chain rule:
\begin{equation}
J{l_\xi } = \frac{{\partial {e_l}}}{{\partial {{\bm{\delta }}_\xi }}} = \frac{{\partial {e_l}}}{{\partial {\bm{l}}'}}\frac{{\partial {\bm{l}}'}}{{\partial {{\bm{\mathcal{L}}}_c}}}\frac{{\partial {{\bm{\mathcal{L}}}_c}}}{{\partial {{\bm{\delta }}_\xi }}}
\end{equation}
\begin{equation}
J{l_\theta } = \frac{{\partial {e_l}}}{{\partial {{\bm{\delta }}_\theta }}} = \frac{{\partial {e_l}}}{{\partial {\bm{l'}}}}\frac{{\partial {\bm{l'}}}}{{\partial {{\bm{\mathcal{L}}}_c}}}\frac{{\partial {{\bm{\mathcal{L}}}_c}}}{{\partial {{\bm{\mathcal{L}}}_w}}}\frac{{\partial {{\bm{\mathcal{L}}}_w}}}{{\partial {{\bm{\delta }}_\theta }}}
\end{equation}
Once these analytical Jacobians are available, we can employ iterative algorithms such as Gauss-Newton to solve the graph optimization problem.
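A simple way to sanity-check such analytical Jacobians in practice is to compare them against central finite differences (a generic numpy sketch):
\begin{verbatim}
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    # column j approximates d f / d x_j by central differences
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x + dx))
                   - np.atleast_1d(f(x - dx))) / (2.0 * eps)
    return J
\end{verbatim}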
\section{Experimental Results}
\subsection{System Implementation}
The proposed visual SLAM system is designed and implemented based on ORB-SLAM2~\cite{murORB2} and has three main parallel threads (see Fig. \ref{pfig1}): Tracking, Local Mapping and Loop Closing.
The global BA thread is started only after finishing loop closing.
In the following, we briefly describe each component while focusing on the difference from~\cite{murORB2}.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{./figure/overview}
\caption{The architecture of the proposed graph-based visual SLAM system using both point and line features.}
\label{pfig1}
\end{figure}
\subsubsection{Tracking}
Our system takes a rectified stereo image sequence as input. For every input frame, four threads are launched to extract point features (keypoints) and line features (keylines) from the left and right images in parallel. ORB features are used for point feature detection and description. Line features are detected by LSD and described by the LBD descriptor. Then two threads are launched for stereo matching, and all the features are classified as stereo or monocular features according to whether the feature in the left image can find its stereo match in the right image, as shown in Fig. \ref{pfig2}. The stereo matching of 3D lines is performed as described in Section~\ref{seclinematch}. For each monocular feature, we search for a match among the unmatched features in other keyframes. Once a match is found, we triangulate the feature in the same way as for stereo features.
\begin{figure}[!h]
\centering
\includegraphics[width=.85\columnwidth]{./figure/overview2m}
\caption{The workflow of pre-processing images.}
\label{pfig2}
\end{figure}
Motion estimation is performed by two types of tracking, namely tracking the last frame and tracking the local map. The former gives an initial pose estimate using the correspondences with the adjacent frame, while the latter refines the pose with many more constraints between the current frame and the local map. After the data association, the pose is estimated by motion-only BA using the Levenberg-Marquardt algorithm~\cite{more1978levenberg}.
We use a constant velocity motion model to predict the camera pose as a prior when tracking the last frame. Once the prior is known, the map points and map lines in the last frame or the local map can be projected to the current frame to build more associations. Then we perform a guided search to bound the complexity and enhance the accuracy of matching. Since a 3D line may be only partially observed, the projected 2D line cannot be handled in the same way as a projected 2D point. Fig. \ref{pfig3} shows a simple example: the dashed lines cannot be observed by the camera while the solid lines can. In order to ensure the visibility of the projected 2D line segments in the image plane, we propose a culling-based method described as follows:
\begin{enumerate}
\item Transform the 3D line $ \bm{\mathcal{L}}_w $ from world frame to current frame according to the prior $ {{\bm T}_{kw}}' $. Compute the two endpoints $ \bm{X}_{sk} $ and $ \bm{X}_{ek} $.
\item Discard the line if both $ \bm{X}_{sk} $ and $ \bm{X}_{ek} $ are behind the camera. If one of them is behind the camera, compute the intersection of the plane and the 3D line by ${{\bm{X}}_{ik}} = {{\bm{X}}_{sk}} + \lambda \left( {{{\bm{X}}_{sk}} - {{\bm{X}}_{ek}}} \right)\text{ , where $ \lambda $ is a value between 0 and 1}$, as depicted in Fig. \ref{pfig3}.
\item Project the two 3D endpoints in front of the camera onto the image plane. Since the projected line may lie across or even outside the image bounds, all the projected lines are processed by the Liang-Barsky algorithm \cite{liang1984new}, an efficient line clipping algorithm that retains the orientation of the original line (a sketch is given after this list).
\end{enumerate}
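For reference, the clipping in step 3 can be implemented as follows (a Python sketch of the standard Liang-Barsky algorithm; it returns \texttt{None} when the segment lies entirely outside the image):
\begin{verbatim}
def liang_barsky(p0, p1, xmin, ymin, xmax, ymax):
    # clip segment p0-p1 to the rectangle, preserving its orientation
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, p0[0] - xmin), (dx, xmax - p0[0]),
                 (-dy, p0[1] - ymin), (dy, ymax - p0[1])):
        if p == 0:
            if q < 0:
                return None          # parallel to this edge and outside
        else:
            t = q / p
            if p < 0:
                t0 = max(t0, t)      # entering intersection
            else:
                t1 = min(t1, t)      # leaving intersection
    if t0 > t1:
        return None                  # no visible portion
    return ((p0[0] + t0 * dx, p0[1] + t0 * dy),
            (p0[0] + t1 * dx, p0[1] + t1 * dy))
\end{verbatim}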
Then line matching can be done efficiently thanks to the restricted search space and the binary descriptor. The last step is to decide whether the current frame is a new keyframe. We adopt the same policy as ORB-SLAM2~\cite{murORB2} and add more conditions related to line features.
\begin{figure}[!h]
\centering
\includegraphics[width=.5\columnwidth]{./figure/xjp3}
\caption{ Partial observation of a 3D line. (The dashed lines cannot be observed by the camera while the solid lines can. The red points denote the intersections of the plane and the 3D lines.)}
\label{pfig3}
\end{figure}
\subsubsection{Local Mapping}
Once a new keyframe is added, the connections between the current keyframe and the other frames are updated using the co-visibility information. Local mapping triangulates more map points and lines, removes outlier landmarks, and deletes redundant keyframes. All the camera poses and landmarks in the local map are adjusted by performing local BA. During the back-end optimization, a 3D line is parameterized as an infinite spatial line, hence its endpoints have no effect on the final optimization results. However, the endpoints play an important role in matching and visualization, so our system needs to maintain the two endpoints of each 3D line after optimization. This is done by back-projecting the 2D line in the current keyframe and trimming the corresponding 3D line, similar to SLSLAM~\cite{zhang2015building}.
\subsubsection{Loop Closing and Global BA}
The loop closing thread is used to reduce the drift accumulated during exploration by loop detection and loop correction. Loop detection tries to find candidate keyframes based on the bag-of-words technique. The visual vocabulary should be trained offline with both point and line features. Here we cluster the ORB features and the LBD features to build their respective vocabularies with DBoW~\cite{galvez2012bags}. Every input keyframe is converted to a bag-of-words vector and stored in the online database. The similarity score between two bag-of-words vectors $ \bm{v}_a $ and $ \bm{v}_b $ can be computed as follows:
\begin{equation}
{\rm{s}} = {{\lambda \rm{s}_p}}{\left( {{{\bm{v}}_{{a}}},{{\bm{v}}_{{b}}}} \right)} + \left( {1 - {{\lambda }}} \right){\rm{s}_l}{\left( {{{\bm{v}}_{{a}}},{{\bm{v}}_{{b}}}} \right)},
\end{equation}
where $ \lambda $ is an empirical weight coefficient related to the scene, and ${\rm{s}_p}{\left( {{{\bm{v}}_{{a}}},{{\bm{v}}_{{b}}}} \right)}$ and ${\rm{s}_l}{\left( {{{\bm{v}}_{{a}}},{{\bm{v}}_{{b}}}} \right)}$ are the similarity scores of the point features and the line features. Then we can find the correspondences between the new keyframe and the candidate keyframes, and we refine the correspondences with a time consistency test~\cite{mur2014fast}. We then try to compute an $ SE(3) $ transformation matrix by EPnP \cite{lepetit2009epnp} with the corresponding points in a RANSAC scheme \cite{fischler1981random}. If this fails, we alternatively compute an $ SE(3) $ transformation by the method proposed in \cite{pradeep2012egomotion} using the matched lines across two stereo views. Finally, a pose graph optimization is launched to correct the loop. Once finished, a global BA is performed in a separate thread to achieve the optimal solution.
\subsection{Results}
Various experiments have been conducted in both synthetic and real-world scenes, and the accuracy and time efficiency of our approach are analyzed. In these experiments, the algorithm runs on a computer with an Intel Core i7-2600 @ 3.40GHz and 16GB memory under a 64-bit Linux operating system.
\subsubsection{Synthetic data}
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{./figure/e1}
\caption{ Synthetic scene with 25 lines and variable number of points. }
\label{efig1}
\end{figure}
The synthetic scene provides exact data association, and this experiment is designed to verify the correctness and the advantage of the introduced line features in the back-end optimization. The derived Jacobians of the 3D line re-projection error are used in the optimization. The synthetic scene in Fig. \ref{efig1} contains a house with a total of 25 lines and a variable number of points. This construction is similar to the scene in \cite{sola2012impact}. A virtual stereo camera with a baseline of 0.5m moves around the house, collecting images of $ 640\times480 $ pixels. Gaussian white noise with a variance of 1 pixel is added to the points and the endpoints of lines in the captured images. Loop detection is disabled to display the pose error clearly. The $ RMSE $ (Root Mean Square Error) of the $ RPE $ (Relative Pose Error) is the metric used to evaluate the performance of our method. Fig. \ref{efig2} shows a trajectory estimated by our proposed system. The average result of 25 Monte Carlo runs is shown in Table \ref{etab1}. $ RPEtrans1 $ and $ RPErot1 $ are the translation and rotation errors obtained in the scene with plenty of point features, while $ RPEtrans2 $ and $ RPErot2 $ result from the scene containing few points. In the scene with comparable numbers of points and lines, odometry based only on point features performs better than odometry using only lines. The reason may be that the re-projection error of an infinitely long spatial line is only related to the normal vector of its $ Pl\ddot{u}cker $ coordinates, as shown in Section~\ref{sec41}, so the matched point features produce more constraints than the same number of lines. The table also shows that the method based on point features has a larger error than the one based on lines in the scene with few points. Our method based on the fusion of points and lines outperforms both.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{./figure/modie2}
\caption{Top and oblique views of estimated camera trajectory. }
\label{efig2}
\end{figure}
\begin{table}[!h]
\caption{RPE of the Methods Based on Different Features} \vspace{-2em}
\label{etab1}
\begin{center}
\begin{tabular}{l|c|c|c}\hline
&Point Feature &Line Feature& Point-Line Feature \\
\hline
$ RPEtrans1(m) $ & 0.08702& 0.09827& \textbf{0.07852}\\
\hline
$ RPErot1(rad) $ & 0.00430& 0.00486& \textbf{0.00381}\\
\hline
$ RPEtrans2(m) $ & 0.19254& 0.09621& \textbf{0.08637}\\
\hline
$ RPErot2(rad) $ & 0.00798& 0.00481& \textbf{0.00408}\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Real data}
The real-world experiments are carried out on both the it3f dataset \cite{zhang2015building} and the KITTI dataset \cite{geiger2013vision}. For a more comprehensive assessment of the approach presented in this article, several open-source approaches are compared in this section, including ORB-SLAM2 \cite{murORB2}, SLSLAM \cite{zhang2015building}, PLSVO \cite{gomez2016robust} and the PL-SLAM presented in this paper. ORB-SLAM2 is a complete point-feature-based SLAM system that contains map reuse, loop detection and relocalization. SLSLAM is based on straight line features, constructing scenes composed of only straight lines, and is a relatively excellent line-based SLAM system. PLSVO is only an odometry, using two endpoints to represent a spatial line and performing brute-force matching in the front-end.
\begin{figure}[!h]
\centering
\includegraphics[width=.9\columnwidth]{./figure/e3}
\caption{ Sample images used in it3f dataset~\cite{zhang2015building}. }
\label{efig3}
\end{figure}
\begin{figure
\centering
\includegraphics[width=0.45\textwidth]{./figure/loop1}
\caption{ Results before and after of the loop closure. Left: Results before the loop closure. Right: Results after the loop closure and loop correction. }
\label{efig5}
\end{figure}
Fig. \ref{efig3} shows sample images from the it3f dataset, and Fig. \ref{ifig1} shows the results generated from this dataset. Fig. \ref{efig5} shows the trajectory and map of the camera before and after a loop closure followed by a bundle adjustment. PLSVO performs poorly on this dataset, so we only compare ORB-SLAM2 and SLSLAM with our proposed PL-SLAM. The it3f dataset does not provide ground truth, so the degrees of drift before the loop closure are compared. For a fair comparison, both ORB-SLAM2 and PL-SLAM disable the loop detection thread and use the same parameters for point feature extraction. For each image, we extract 1000 point features at 8 scale levels with a scale factor of 1.2.
\begin{table}[h]
\caption{Errors Before Loop Closure} \vspace{-1em}
\label{etab2}
\begin{center}
\begin{tabular}{c|c}\hline
Method& Errors Before Loop Closure\\
\hline
PL-SLAM & $ {[-0.3549,1.4056,-0.0273]}^T $\\
ORB-SLAM2 & $ {[-0.3748,1.9065,0.17442]}^T $\\
SLSLAM & $ {[-0.3141,-0.5455,-0.06449]}^T $\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table*}[!h]
\caption{Results of ORB-SLAM, PLSVO and PL-SLAM on KITTI Dataset} \vspace{-1em}
\label{etab3}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}\cline{2-10}
&\multicolumn{3}{c|}{PLSVO}&\multicolumn{3}{c|}{ORB-SLAM2}&\multicolumn{3}{c}{PL-SLAM}\\\cline{2-10}
&Trans(m)&Rot(rad)&ATE(m)&Trans(m)&Rot(rad)&ATE(m)&Trans(m)&Rot(rad)&ATE(m)\\\hline
sequence 03& 0.2247& 0.0046& 13.2415& 0.1598& 0.0023& 2.6638& 0.1601& 0.0024& \textbf{2.6203}\\
sequence 04& 0.2045& 0.0019& 2.3020& 0.1180& 0.0015& 0.7415& 0.1183& 0.0017& \textbf{0.3663}\\
sequence 10& 0.1809& 0.0053& 9.0208& 0.1143& 0.0022& 6.3974& 0.1166& 0.0021& \textbf{5.9207}\\\hline
\end{tabular}
\end{center}
\end{table*}
Fig. \ref{efig6} shows the top and side views of the reconstruction results by the three systems without loop closures. The point with zero coordinates is the starting point and the other is the finishing point. Table \ref{etab2} shows the drift before the loop closure (translation in $ X(m), Y(m), Z(m) $). It can be observed from the table that PL-SLAM performs better than ORB-SLAM2, which demonstrates the strength of including straight-line constraints. SLSLAM has the best performance, with only -0.5455 meters of error in the vertical direction. One reason that can account for this is that the it3f dataset contains low-textured scenarios, reflective white walls, windows, floors, etc. At the same time, due to the influence of the ceiling lights, point features are prone to mismatches which bring large errors. In the optimization process of our approach, for the sake of versatility we do not assign different weights to the point and line error terms in \eqref{leq10}. When the point-feature component performs unstably with low accuracy, the proposed system based on the combination of point and line features is affected as well, which coincides with the synthetic scene experiment.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{./figure/it3fres1}
\caption{ Comparison results on it3f dataset~\cite{zhang2015building} without loop closure. The top and bottom row show the top and side views of the results. }
\label{efig6}
\end{figure}
In terms of time efficiency, the execution time does not increase much because the features are extracted in parallel threads. For images with dimensions of $ 640\times480 $, the feature extraction and stereo matching in ORB-SLAM2 require 32.15ms, while our system requires 42.401ms on the it3f dataset with the additional consideration of line features. Our tracking thread achieves a performance of 15.1 frames/s, which satisfies the real-time requirement.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{./figure/e7sub}
\caption{ Results on KITTI dataset. Left: The map composed of point and line features.
Right: One frame with extracted point and line features. }
\label{efig7}
\end{figure}
We also evaluate our system on the KITTI odometry benchmark. Sequences 03, 04, and 10, which contain scenarios with lines, are selected. Fig. \ref{efig7} shows the results of our system on the KITTI dataset. In this experiment, we only compare our PL-SLAM with ORB-SLAM2 and PLSVO\footnote{As the source code of the front-end module in SLSLAM is unavailable, we do not include it in the experiments on KITTI dataset.}. Loop detection modules are all disabled for a fair comparison.
In this experiment, $ RPE $ and $ ATE $ (Absolute Trajectory Error) are used as evaluation criteria. Table \ref{etab3} shows the results of the experiment, where $ Trans $ and $ Rot $ represent the $ RPE $ of the translations and rotations respectively. The smallest $ ATE $ in each sequence is marked in bold in the table.
It is shown that our system has acceptable performance on these sequences and achieves an improvement compared to the original ORB-SLAM2. PLSVO performs poorly because of the brute-force matching in data association and the accumulated errors.
\section{Conclusions}
To improve the accuracy and robustness of visual SLAM, we present a graph-based approach using point and line features. The spatial lines are expressed by the orthonormal representation in the optimization process, which is the most compact and decoupled form, and the analytical Jacobians of the re-projection error with respect to the line parameters are derived to achieve good performance.
Experiments in synthetic and real-world scenes show that fusing these two types of features produces more robust estimation. Our visual SLAM is also able to work in real time. In the future, we will investigate how to introduce inertial sensors into our system with point and line features.
\section*{Appendix}
This appendix explains the Jacobian with respect to ${{\bm{\delta }}_\phi }$ in detail. ${{\bm{\delta }}_\rho }$ is set to zero when computing the Jacobian with respect to ${{\bm{\delta }}_\phi }$. With a transformation ${{\bm{T}}^*}$ containing the rotation ${{\bm{\delta }}_\phi }$, the new 3D line is denoted as ${\bm{\mathcal{L}}}_c^*$:
\begin{equation}
{{\bm{T}}^*} = \exp \left( {{{\bm{\delta }}_\xi} ^\wedge} \right){{\bm{T}}} \approx \left( {{\bm{I}} + \left[ {\begin{array}{*{20}{c}}
{{{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }}&{\bm 0}\\
{{{\bm 0}^{\bm{T}}}}&{0}
\end{array}} \right]} \right){{\bm{T}}}
\end{equation}
\begin{equation}
{{\bm{R}}^*} = \left( {{\bm{I}} + {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }} \right){{\bm{R}}}\text{, }{{\bm{t}}^*} = \left( {{\bm{I}} + {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }} \right){{\bm{t}}}
\end{equation}
\begin{equation}
{\bm{\mathcal{H}}}_{cw}^* = \left[ {\begin{array}{*{20}{c}}
{\left( {{\bm{I}} + {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }} \right){\bm{R}}}&{\left( {{\bm{I}} + {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }} \right){{\left[ {\bm{t}} \right]}_ \times }{\bm{R}}}\\
{\bm 0}&{\left( {{\bm{I}} + {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }} \right){\bm{R}}}
\end{array}} \right]
\end{equation}
\begin{small}
\begin{equation}
{\bm{\mathcal{L}}}_c^* = {\bm{\mathcal{H}}}_{cw}^*{{\bm{\mathcal{L}}}_w} = \left[ {\begin{array}{*{20}{c}}
{\left( {{\bm{I}} + {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }} \right){\bm{Rn}} + \left( {{\bm{I}} + {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }} \right){{\left[ {\bm{t}} \right]}_ \times }{\bm{Rv}}}\\
{\left( {{\bm{I}} + {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }} \right){\bm{Rv}}}
\end{array}} \right],
\end{equation}
\end{small}
where ${\left[ . \right]_ \times }$ denotes the skew-symmetric matrix of a vector. In the process of deducing ${\bm{\mathcal{H}}}_{cw}^*$, the property of rotation matrices $\left( {{\bm{Ra}}} \right) \times \left( {{\bm{Rb}}} \right) = {\bm{R}}\left( {{\bm{a}} \times {\bm{b}}} \right)\text{, }{\bm{R}} \in SO\left( 3 \right)$ is used.
Then $\frac{{\partial {\bm{\mathcal{L}}}_c^*}}{{\partial {{\bm{\delta }}_\phi }}}$ can be written directly:
\begin{align}
\frac{{\partial {\bm{\mathcal{L}}}_c^*}}{{\partial {{\bm{\delta }}_\phi }}} &= \left[ {\begin{array}{*{20}{c}}
{\frac{{\partial {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }{\bm{Rn}}}}{{\partial {{\bm{\delta }}_\phi }}} + \frac{{\partial {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }{{\left[ {\bm{t}} \right]}_ \times }{\bm{Rv}}}}{{\partial {{\bm{\delta }}_\phi }}}}\\
{\frac{{\partial {{\left[ {{{\bm{\delta }}_\phi }} \right]}_ \times }{\bm{Rv}}}}{{\partial {{\bm{\delta }}_\phi }}}}
\end{array}} \right]\\
&= \left[ {\begin{array}{*{20}{c}}
{ - \frac{{\partial {{\left[ {{\bm{Rn}}} \right]}_ \times }{{\bm{\delta }}_\phi }}}{{\partial {{\bm{\delta }}_\phi }}} - \frac{{\partial {{\left[ {{{\left[ {\bm{t}} \right]}_ \times }{\bm{Rv}}} \right]}_ \times }{{\bm{\delta }}_\phi }}}{{\partial {{\bm{\delta }}_\phi }}}}\\
{ - \frac{{\partial {{\left[ {{\bm{Rv}}} \right]}_ \times }{{\bm{\delta }}_\phi }}}{{\partial {{\bm{\delta }}_\phi }}}}
\end{array}} \right]\\
& = {\left[ {\begin{array}{*{20}{c}}
{ - {{\left[ {{\bm{Rn}}} \right]}_ \times } - {{\left[ {{{\left[ {\bm{t}} \right]}_ \times }{\bm{Rv}}} \right]}_ \times }\;}\\
{ - {{\left[ {{\bm{Rv}}} \right]}_ \times }}
\end{array}} \right]_{6 \times 3}}
\end{align}
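The analytic Jacobian above is straightforward to validate numerically. The following NumPy sketch (an illustrative aside, not part of the system implementation; the variable names are ours) compares the closed-form expression against central finite differences, which agree to machine precision since ${\bm{\mathcal{L}}}_c^*$ is linear in ${{\bm{\delta }}_\phi }$. It also checks the rotation property quoted above.
\begin{verbatim}
import numpy as np

def skew(a):
    return np.array([[0., -a[2], a[1]],
                     [a[2], 0., -a[0]],
                     [-a[1], a[0], 0.]])

def rodrigues(phi):                      # axis-angle -> rotation matrix
    theta = np.linalg.norm(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta)*K + (1. - np.cos(theta))*(K @ K)

rng = np.random.default_rng(0)
R = rodrigues(rng.standard_normal(3))
t, n, v = (rng.standard_normal(3) for _ in range(3))

# rotation property used in the derivation: (Ra) x (Rb) = R(a x b)
a, b = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(np.cross(R @ a, R @ b), R @ np.cross(a, b))

def line_c(delta_phi):                   # first-order perturbed L_c^*
    Rp = np.eye(3) + skew(delta_phi)
    return np.concatenate([Rp @ (R @ n) + Rp @ skew(t) @ (R @ v),
                           Rp @ (R @ v)])

# analytic Jacobian derived in this appendix
J = np.vstack([-skew(R @ n) - skew(skew(t) @ (R @ v)),
               -skew(R @ v)])

# central differences (exact here, since line_c is linear in delta_phi)
eps, J_num = 1e-6, np.zeros((6, 3))
for i in range(3):
    d = np.zeros(3); d[i] = eps
    J_num[:, i] = (line_c(d) - line_c(-d)) / (2.*eps)
print(np.abs(J - J_num).max())           # ~ 1e-10
\end{verbatim}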
\addtolength{\topmargin}{0.249cm}
\bibliographystyle{ieeetr}
Recent years have witnessed considerable success in applying ideas from twistor theory to the calculation of observables in four-dimensional massless supersymmetric field theory\footnote{See \cite{Witten:2003nn,Adamo:2011pv,ArkaniHamed:2012nw,Atiyah:2017erd} and references therein for a selective overview.}. In many ways twistors are the ideal variables to think about certain problems in supersymmetric field theory and it is natural to try to extend these methods to higher dimensions where new applications may be found. There is something special about four dimensional twistor theory and there are a number of ways to generalise Penrose's notion of a twistor \cite{Penrose:1967wn} to higher dimensions but none that enjoys all of the features explicit in four dimensions. Nonetheless any formalism that retains even some of the magic of four-dimensional twistor theory is likely to be worth pursuing.
In four dimensions solutions to the massless equations of motion of helicity $h$ are given by $(0,1)$-forms; elements of\footnote{Or equivalently using {\v C}ech cohomology.} $H_{\bar{\partial}}^{0,1}({\cal O}(-2h-2);\P\mathbb{T})$, where twistor space $\P\mathbb{T}$ in this context is an open subset of $\mathbb{C}\P^3$. A natural choice and one that is in many ways closest to the original spirit of the twistor programme is to define twistor space for $d$ dimensions as the space of projective pure spinors of the complexified conformal group $SO(d+2;\mathbb{C})$. Of particular interest beyond four dimensions is the case of $d=6$, where twistor space is a quadric\footnote{The condition $Z^2=0$ comes from the purity requirement.} inside $\mathbb{C}\P^7$ and progress has been made in studying free self-dual conformal theories there \cite{Mason:2011nw,Mason:2012va,Saemann:2011nb}. Physical states are given by\footnote{Corresponding to direct and indirect Penrose transforms respectively.} $H^2$ and $H^3$ cohomology classes (i.e. Dolbeault $(0,2)$ or $(0,3)$ -forms modulo exact ones) and it is not clear how to introduce interactions or how to describe non self-dual theories in such a framework. Matters only get worse in ten dimensions where elements of $H^5$ and $H^{10}$ cohomology classes describe physical states and the purity condition becomes even more cumbersome to deal with. Moreover, the natural connection with the space of null geodesics is lost.
Alternatively one can seek to generalise the study of the space of null geodesics to higher dimensions. In this context, the space is usually referred to as ambitwistor space \cite{LeBrun,Isenberg:1978kk}. It has been noted on a number of occasions \cite{Evans:1987tm, Berkovits:1990yc, Cederwall:1992bi} that the division algebras provide an interesting unified guide for how to think about ambitwistor space in dimensions three, four, six and ten, based on the generalisation of $SL(2;\mathbb{C})$ to $SL(2;\Bbb{K}_{d-2})$, where $\Bbb{K}_{1}=\mathbb{R}$, $\Bbb{K}_{2}=\mathbb{C}$, $\Bbb{K}_{4}=\Bbb{H}$, and $\Bbb{K}_{8}=\Bbb{O}$. This is compelling due to the existence of the isomorphisms $SL(2;\Bbb{K}_{d-2})\simeq SO(d-1,1)$, the Lorentz group in $d$ dimensions.
In four dimensions, we have $SL(2;\mathbb{C})$, where a basis is given by the $\sigma^{\mu}$, with $\sigma^0=1$ and $\sigma^i$ the Pauli matrices. A null momentum may be written in terms of a pair of complex-valued spinors $\lambda_{\alpha}$ and $\widetilde{\lambda}_{\dot{\alpha}}$ as $P^{\mu}= \lambda^{\alpha}\sigma^{\mu}_{\alpha\dot{\alpha}}\widetilde{\lambda}^{\dot{\alpha}}$, or equivalently $P_{\alpha\dot{\alpha}}=\lambda_{\alpha}\widetilde{\lambda}_{\dot{\alpha}}$. Under the $SO(2)$ transformations $(\lambda_{\alpha},\widetilde{\lambda}_{\dot{\alpha}})\rightarrow (\lambda_{\alpha} e^{i\theta},e^{-i\theta}\widetilde{\lambda}_{\dot{\alpha}})$, the momentum is preserved. Introducing twistors $Z^I=(\omega_{\dot{\alpha}},\lambda^{\alpha})\in\mathbb{C}\P^3$ and dual twistors $W_I=(\widetilde{\lambda}^{\dot{\alpha}},\widetilde{\omega}_{\alpha})\in\widetilde{\mathbb{C}\P}^3$, where $\omega_{\dot{\alpha}}$ and $\widetilde{\omega}_{\alpha}$ are given by the incidence relations $\omega_{\dot{\alpha}}=X_{\dot{\alpha}\alpha}\lambda^{\alpha}$ and $\widetilde{\omega}_{\alpha}=X_{\dot{\alpha}\alpha}\widetilde{\lambda}^{\dot{\alpha}}$, the generator of these $SO(2)$ transformations may be written as
$$
U=\lambda^{\alpha}\tilde{\omega}_{\alpha}-\omega_{\dot{\alpha}}\widetilde{\lambda}^{\dot{\alpha}}\equiv Z\cdot W,
$$
where we take $\tilde{\omega}_{\alpha}$ and $\omega_{\dot{\alpha}}$ to be conjugate to $\lambda^{\alpha}$ and $\widetilde{\lambda}^{\dot{\alpha}}$ respectively. The condition $U=0$ is simply the standard statement that ambitwistor space is the quadric $Z\cdot W=0$ inside $\mathbb{C}\P^3\times\widetilde{\mathbb{C}\P}^3$. A natural action for an ambitwistor string embedding into projective ambitwistor space $\P\Bbb{A}$ was given in \cite{Geyer:2014fka}
$$
S=\int_\Sigma \frac{1}{2}\left(W\hspace{-.06cm}\cdot\hspace{-.06cm} \bar{\partial}Z-Z\hspace{-.06cm}\cdot\hspace{-.06cm} \bar{\partial}W\right)+e\, U,
$$
where $e$ is a Lagrange multiplier imposing $U=W\cdot Z=0$. The basic form of this construction generalises to dimensions six and ten, where there is a corresponding division algebra isomorphism.
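As a quick illustration (not part of the original construction), the following NumPy sketch checks that the incidence relations automatically place $(Z,W)$ on the quadric $Z\cdot W=0$ for randomly chosen $X$, $\lambda$ and $\widetilde{\lambda}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam  = rng.standard_normal(2) + 1j*rng.standard_normal(2)  # lambda^alpha
lamt = rng.standard_normal(2) + 1j*rng.standard_normal(2)  # lambda-tilde^alphadot
X = rng.standard_normal((2, 2)) + 1j*rng.standard_normal((2, 2))  # X_{alphadot alpha}

omega  = X @ lam      # omega_alphadot       = X_{alphadot alpha} lambda^alpha
omegat = X.T @ lamt   # omega-tilde_alpha    = X_{alphadot alpha} lambda-tilde^alphadot

U = lam @ omegat - omega @ lamt    # Z.W
print(abs(U))   # ~ 0: the incidence relations land on the quadric Z.W = 0
\end{verbatim}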
In six dimensions the isomorphism $SL(2;\Bbb{H})\simeq SO(5,1)$ suggests a natural link to the Quaternions. As in four dimensions, ambitwistor space is defined by a constraint surface inside some larger space. The $SO(2)$ symmetry in four dimensions becomes an $SU(2)$ symmetry in six dimensions. These symmetry groups have group manifolds $S^1$ and $S^3$ and, extending to ten dimensions, a connection with $S^7$ was found, this being a somewhat special case: among the spheres, $S^7$ is the sole parallelizable example that is not a Lie group. The relationships to the Hopf fibrations $S^1\xrightarrow{S^0} S^1$, $S^3\xrightarrow{S^1} S^2$, $S^7\xrightarrow{S^3} S^4$, and $S^{15}\xrightarrow{S^7} S^8$ (the base spaces being the projective lines $\Bbb{R}\P^1$, $\Bbb{C}\P^1$, $\Bbb{H}\P^1$ and $\Bbb{O}\P^1$ respectively) were explored in \cite{Cederwall:1993nx}.
Our interests here are in the ten-dimensional case. Here the isomorphism $SL(2;\Bbb{O})\simeq SO(9,1)$ suggests a natural link to the Octonions. $S^7$ is parallelizable but is not the manifold of a Lie group and, as noted in \cite{Cederwall:1992bi,Berkovits:1990yc}, the algebra of the ten-dimensional constraints is not a Lie algebra but is a \emph{soft} algebra in which the structure `constants' depend explicitly on the $\lambda^a(z)$ fields and so are really structure \emph{functions}. This fact will add a complexity not present in the four- and six-dimensional cases. A key motivation for studying the ten-dimensional case is the recent work on understanding the origin of the CHY formulation of scattering amplitudes \cite{Cachazo:2013iea,Cachazo:2013hca} given by the ambitwistor string theory of \cite{Mason:2013sva} (see also \cite{Berkovits:2013xba}). The discussion here provides a unified description that includes the four-dimensional ambitwistor string of \cite{Geyer:2014fka} and the ten-dimensional one of \cite{Mason:2013sva}. It also may provide insight into why other ambitwistor strings have been less successful as critical string theories.
In the following section we discuss the geometry of the ambitwistor space and the construction of sigma models. Of particular interest are the constraints that must be imposed. In section three we construct the BRST charge; however, the failure of the gauge symmetry to close off-shell means that the naive BRST charge is not nilpotent when acting on the full space of fields. In section four we discuss gauge-fixing and employ the machinery of the BV formalism to enlarge the space of fields to allow for a nilpotent BRST transformation. The classical Master Action is presented and is found to require quadratic terms in the antifields. Section five briefly discusses outstanding issues and directions for future work.
\section{A Spinorial Perspective on Ambitwistor String Theory}
We start with the ambitwistor string of \cite{Mason:2013sva} with action
\begin{equation}\label{XP}
S=\int_{\Sigma}P_{\mu}\bar{\partial}X^{\mu}+\mu T+\frac{h}{2}P^2.
\end{equation}
The target space is the space of null geodesics. The stress tensor is $T(z)=P_{\mu}\partial X^{\mu}$ and the Beltrami differentials $\mu(z)$ and $h(z)$ are Lagrange multipliers which impose the conditions $T(z)=0$ and $P^2(z)=0$ respectively. One may think of this action as a holomorphic version of the massless particle worldline action and generalisations with manifest worldsheet or spacetime supersymmetry also exist \cite{Mason:2013sva,Berkovits:2013xba}. There is a gauge symmetry corresponding to the constraints and we gauge fix $\mu(z)$ and $h(z)$ in the usual way and introduce holomorphic ghost systems, $(b,c)$ and $(\tilde{b},\tilde{c})$, both of conformal weight $2$. The theory is conformal in 26 (complex) dimensions but the physical interpretation of this theory is unclear\footnote{Although significant progress in our understanding has been made recently by \cite{Berkovits:2018jvm}.}; however, the supersymmetric generalisations appear to reproduce perturbative type II supergravity in ten-dimensions.
It is worth taking the time to stress that, unlike the superparticle, the ambitwistor string theory is a CFT and enjoys all of the privileges of the state-operator correspondence. As such, the physical states of the theory describe operator deformations of the target space. And unlike the worldline theory, which describes a particle moving on a potentially curved yet static background, the ambitwistor string is expected to encode the full dynamics of that background. Thus, despite superficial similarities in the formalism, these ambitwistor string theories are qualitatively distinct from their worldline counterparts.
Taking the action (\ref{XP}) as a starting point, we wish to recast it in a more manifestly spinorial form along the lines of that achieved in \cite{Berkovits:1990yc,Cederwall:1992bi} for the superparticle. On the constraint surface $P^2(z)=0$, the field $P_{\mu}(z)$ may be written as
\begin{equation}\label{P}
P^{\mu}(z)=\lambda^a(z)\,\Gamma^{\mu}_{ab}\,\lambda^b(z),
\end{equation}
where $a=1,2,3,...,(2d-4)$ and it is important to note that the weight-$1/2$ field $\lambda^a(z)$ is a generic spinor. It is \emph{not} pure. The $\Gamma_{ab}^{\mu}=\Gamma_{ba}^{\mu}$ satisfy the Clifford algebra relation $\{\Gamma^{\mu},\Gamma^{\nu}\}=2\eta^{\mu\nu}$ and are related to a representation of the ten-dimensional gamma matrices $\gamma_{\mu A}{}^{B'}$ by
$$
\gamma_{\mu A}{}^{B'}=\left(\begin{array}{cc}
0 & \Gamma^{ab}_{\mu}\\
\Gamma_{ab}^{\mu} & 0
\end{array}\right).
$$
The identity
\begin{equation}\label{identity}
\Gamma_{ab}^{\mu}\Gamma_{\mu cd}+\Gamma_{ad}^{\mu}\Gamma_{\mu bc}+\Gamma_{ac}^{\mu}\Gamma_{\mu db}=0,
\end{equation}
which holds for $d=3,4,6,10$, ensures that $\Gamma_{ab}^{\mu}P_{\mu}\lambda^b=0$. Hence the weaker condition $P^2(z)=0$ follows identically. The identity (\ref{identity}) is the crucial fact that makes the connection with the division algebra possible \cite{Evans:1987tm}. We introduce weight $1/2$ fields $\omega_a(z)$ via the incidence relation
\begin{equation}\label{incidence}
\omega_a(z)=X_{\mu}(z)\,\Gamma^{\mu}_{ab}\,\lambda^b(z).
\end{equation}
We define an ambitwistor coordinate as ${\cal Z}=(\omega_a, \lambda^a)$ and, using the incidence relations (\ref{incidence}), the action (\ref{XP}) may be written as
\begin{equation}\label{S2}
S= \int_{\Sigma}{\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}+\mu T.
\end{equation}
where the $P^2(z)=0$ constraint is solved automatically by (\ref{P}) and we have adopted the notation ${\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}:= \frac{1}{2}(\omega_a\bar{\partial}\lambda^a-\lambda^a\bar{\partial}\omega_a)$. The stress tensor is $T={\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\partial{\cal Z}$. As written, the target space of this sigma model is not the space of null lines but is much larger. A constraint must be introduced to reduce the target space of the naive embedding ${\cal Z}:\Sigma\rightarrow\Bbb{C}^{2(2d-4)}$ to the ($2d-3$ dimensional) physical space appropriate for a null line.
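As an aside, the identity (\ref{identity}) and its consequences are easy to verify numerically. The sketch below is illustrative only; it assumes one concrete choice of representation, with the $16\times16$ symmetric blocks $\Gamma^{\mu}_{ab}$ built out of octonion left-multiplication matrices. It checks the quartic identity and then that $P^{\mu}=\lambda\Gamma^{\mu}\lambda$ is null and annihilates $\lambda$ for a generic, non-pure spinor:
\begin{verbatim}
import numpy as np

def qmul(p, q):                      # Hamilton product of quaternions
    w1, x1, y1, z1 = p; w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def omul(x, y):                      # octonions via Cayley-Dickson doubling
    a, b, c, d = x[:4], x[4:], y[:4], y[4:]
    conj = np.array([1., -1., -1., -1.])
    return np.concatenate([qmul(a, c) - qmul(conj*d, b),
                           qmul(d, a) + qmul(b, conj*c)])

# 8x8 matrices of left multiplication by the imaginary units e_1,...,e_7
E = np.eye(8)
L = [np.column_stack([omul(E[i], E[j]) for j in range(8)]) for i in range(1, 8)]

# 16x16 symmetric chiral blocks Gamma^mu_{ab}, mu = 0,...,9
I8, O8 = np.eye(8), np.zeros((8, 8))
G = [np.eye(16)]                                            # Gamma^0
G += [np.block([[O8, Li], [Li.T, O8]]) for Li in L]         # Gamma^1..7
G += [np.block([[O8, I8], [I8, O8]]),                       # Gamma^8
      np.block([[I8, O8], [O8, -I8]])]                      # Gamma^9
G = np.array(G)
eta = np.diag([-1.] + [1.]*9)

# the quartic identity: Gamma^mu_{ab} Gamma_{mu cd} + cyclic in (b,c,d) = 0
T = np.einsum('mn,mab,ncd->abcd', eta, G, G)
print(np.abs(T + T.transpose(0, 3, 1, 2) + T.transpose(0, 2, 3, 1)).max())

# consequences for a generic (non-pure!) spinor lambda
lam = np.random.default_rng(0).standard_normal(16)
P = np.einsum('mab,a,b->m', G, lam, lam)                    # P^mu
print(P @ eta @ P)                                          # P^2 = 0
print(np.abs(np.einsum('mn,n,mab,b->a', eta, P, G, lam)).max())  # (Gamma.P)lam = 0
\end{verbatim}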
\subsection{The Constraint}
In this section we review the construction of the constraint in ten dimensions (see \cite{Cederwall:1992bi,Berkovits:1990yr} for details and the Appendix for a brief discussion\footnote{More on the Octonions may be found in \cite{Baez:2001dm}.}). The key idea of this section is the constraint (\ref{G2}) which generates gauge transformations that preserve $P_{\mu}(z)$ and the eager reader may safely skip the details of this section on a first reading. We shall take $d=10$ and focus on the bosonic sector of the theory. The supersymmetric extension is discussed briefly in the Appendix and we anticipate the inclusion of the additional sectors required by supersymmetry will be straightforward.
It is useful to package the sixteen $\lambda^a(z)$ into a two component spinor of $SL(2;\Bbb{O})$, which we write as
$\lambda_I(z)=\left( \lambda^+(z) , \lambda^-(z)\right)$. Following \cite{Berkovits:1990yr}, we set $\lambda^a(z)=(\lambda^+_A(z),\lambda^-_A(z))$, where $A=1,2,...8$ and so
$$
\lambda^+(z)=\sum_{A=1}^8\lambda^+_A(z)\,e_A, \qquad \lambda^-(z)=\sum_{A=1}^8\lambda^-_A(z)\,e_A.
$$
Here $e_8=1$ and the $e_i=-\bar{e}_i$, where $i=1,2,...,7$, is a representation of the Octonions. Similarly, the gamma matrices may be written as $SL(2;\Bbb{O})$ matrices $\Gamma^{\mu}_{IJ}$. This division of the sixteen components of $\lambda^a$ into the $8+8$ components of $\lambda^{\pm}$ clearly breaks manifest Lorentz covariance but we shall see that it is ultimately possible to work in terms of covariant objects only at the cost of introducing a redundancy in the description of the constraints.
The $\Gamma^{\mu}_{IJ}$ satisfy the Clifford algebra relation and care must be taken to keep track of the order of operations as the Octonions are not associative. It will be helpful to introduce $\Gamma^A=(\Gamma^i,\Gamma^8)$, where $i=1,2,..7$, so that $\Gamma^{\mu}=(\Gamma^+,\Gamma^A,\Gamma^-)$ where $2\Gamma^{\pm}=\Gamma^0\pm\Gamma^9$. We can write the momentum in $SL(2;\Bbb{O})$ notation as $P_{IJ}=\Gamma_{IJ}^{\mu}P_{\mu}=\lambda_I\bar{\lambda}_J$ where
$$
P_{IJ}=\left(\begin{array}{cc} \sqrt{2}P^+ & P^Ae_A \\ P^A\bar{e}_A & \sqrt{2}P^- \end{array}\right),
$$
where $I=1,2$ and $\sqrt{2}P^+=|\lambda^-|^2$, $P^Ae_A=\lambda^+\bar{\lambda}^-$, and $\sqrt{2}P^- =|\lambda^+|^2$. It is clear that $P^2=\det(P_{IJ})=0$.
We are interested in transformations of the $\lambda^a(z)$ that preserve the momentum $P_{\mu}(z)$ (\ref{P}). The transformations $\delta\lambda^+=\{\varepsilon^iU^+_i,\lambda^+\}=\lambda^+O^+(\varepsilon)$ and $\delta\bar{\lambda}^+=\{\varepsilon^i\bar{U}^+_i,\bar{\lambda}^+\}=\bar{O}^+(\varepsilon)\bar{\lambda}^+$, where $O^+_i$ is some linear transformation on the spinor and $\varepsilon^i$ is a parameter, leave the component $\sqrt{2}P^-=|\lambda^+|^2$ invariant if $U^+_i=-\bar{U}^+_i$. A similar argument follows for the $|\lambda^-|^2$ component, giving a generator $U_i=U_i^++U_i^-$. To leave $P^Ae_A=\lambda^+\bar{\lambda}^-$ invariant we require that $U_i^+$ and $U_i^-$ be related by $(\lambda^+O^+)\bar{\lambda}^-= \lambda^+(O^-\bar{\lambda}^-)$. For $d=3,4,6$ the algebra is associative so $U^+_i$ and $U^-_i$ take the same form and we are able to write the generator in terms of $SL(2;\Bbb{K}_{d-2})$ spinors. In ten dimensions, the algebra is not associative and it is not possible to write the generator in an $SL(2;\Bbb{O})$ covariant form \cite{Cederwall:1992bi}.
A natural choice for the linear action on the spinor is $\lambda^+O^+(\varepsilon)=\lambda^+\varepsilon^je_j$ (Octonion multiplication on the right). The condition $(\lambda^+O^+)\bar{\lambda^-}= \lambda^+(O^-\bar{\lambda}^-)$ determines the corresponding transformation of $\lambda^-$, a modified Octonion multiplication from the left. Introducing conjugate variables $\omega_I$, one can find explicit expressions for the generators $U_i^{\pm}$. For $d=3,4,6$ the algebra is associative and in these cases we can take\footnote{$\omega$ and $\lambda$ are conjugate so this is a rotation of the components of $\lambda$ where $\omega\sim \partial/\partial\lambda$.} the generator to have the form $J=U+\bar{U}$. Specifically,
$$
J=\frac{1}{2}\Big( \lambda^{\dagger I}\omega_I-\omega^{\dagger}_I\lambda^I\Big)
$$
for $d=4$ which generates $U(1)$ and
$$
J_i=\frac{1}{2}\Big( \lambda^Ie_i\bar{\omega}_I-\omega_Ie_i\bar{\lambda}^I\Big), \qquad i=1,2,3
$$
for $d=6$. Here $e_i$ is one of the fundamental Quaternion units $\{i,j,k\}$ and the $J_i$ generate $SU(2)$, generalising the $U(1)$ in four dimensions. In general, one can show that the $P_{\mu}(z)$ are preserved by transformations generated by $J^i=U^i+\bar{U}^i$ where \cite{Berkovits:1990yr}
$$
U^i=\lambda^+e_i\bar{\omega}_++\frac{(\lambda^-\bar{\lambda}^+)}{|\lambda^+|^2}(\lambda^+e_i)\bar{\omega}_-.
$$
The commutator of two $J^i$ gives rise to structure functions \cite{Berkovits:1990yr} $h_{ij}{}^k(\lambda)+\bar{h}_{ij}{}^k(\lambda)$
$$
h_{ij}{}^k(\lambda)=-i\frac{\bar{\lambda}^+}{2|\lambda^+|^2}\Big( (\lambda^+e_j)e_i-(\lambda^+e_i)e_j \Big)e_k.
$$
The Quaternions are associative and so, in six dimensions, the structure functions become $-i[e_i,e_j]e_k=i\varepsilon_{ijk}$, the structure constants of $SU(2)$. In ten dimensions, unlike the other cases, the failure of associativity means that the algebra is necessarily field dependent and is not a conventional Lie algebra, but is a \emph{soft} algebra \cite{Sohnius:1982rs}. We shall see this field-dependence of the algebra appear explicitly in models later on where it will lead to a gauge algebra that only closes on-shell. Thus, the requirement that we ultimately introduce anti-fields to quantise the theory can be traced back to the failure of the Octonions to be associative.
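Both the failure of associativity and the norm composition property that survives it are easy to exhibit concretely. The following self-contained NumPy sketch (illustrative only) realises the Octonions by Cayley--Dickson doubling of the Quaternions, and checks that the quaternionic associator vanishes while the octonionic one does not; it also verifies $|xy|=|x||y|$, which is the property underlying $\det(P_{IJ})=0$ above:
\begin{verbatim}
import numpy as np

def qmul(p, q):                      # Hamilton product of quaternions
    w1, x1, y1, z1 = p; w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def omul(x, y):                      # octonions via Cayley-Dickson doubling
    a, b, c, d = x[:4], x[4:], y[:4], y[4:]
    conj = np.array([1., -1., -1., -1.])
    return np.concatenate([qmul(a, c) - qmul(conj*d, b),
                           qmul(d, a) + qmul(b, conj*c)])

rng = np.random.default_rng(1)
q1, q2, q3 = (rng.standard_normal(4) for _ in range(3))
x, y, z = (rng.standard_normal(8) for _ in range(3))

# quaternions associate ...
print(np.abs(qmul(qmul(q1, q2), q3) - qmul(q1, qmul(q2, q3))).max())   # ~ 0
# ... octonions do not: the associator is generically nonzero
print(np.abs(omul(omul(x, y), z) - omul(x, omul(y, z))).max())         # O(1)
# the norm nonetheless composes, |xy| = |x||y|, so det(P_IJ) = 0
print(np.linalg.norm(omul(x, y)) - np.linalg.norm(x)*np.linalg.norm(y))  # ~ 0
\end{verbatim}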
The symmetry generator $J^i$ in ten dimensions breaks manifest Lorentz invariance. Writing $G^I=(G_+,G_-)$ where $G_{\pm}=\lambda^i_{\pm}J^i$, we have a generator $G^I$ that transforms as an $SL(2;\Bbb{O})$ spinor \cite{Cederwall:1992bi}. For example, in four dimensions we may take $G^J= \lambda^I\lambda^J\bar{\omega}_I-\omega_IP^{JI}$, where we have written $\lambda^J\bar{\lambda}^I=P^{JI}$. Unlike the $J^i$, this form of the constraint is the same in $d=3,4,6$ and $10$. In ten dimensions the constraint may be written as the $SO(9,1)$ spinor
\begin{equation}\label{G2}
G^a=(\lambda^c\Gamma^{\mu}_{cd}\lambda^d)\Gamma_{\mu}^{ab}\omega_b-2\lambda^a\lambda^b\omega_b,
\end{equation}
where an overall normalisation has been chosen for convenience. The price to be paid in working with a manifestly Lorentz covariant formalism is that not all of the sixteen $G^a$ can be linearly independent and so the symmetry generated by the $G^a$ is reducible.
The commutator of two generators is
\begin{equation}\label{algebra1}
[G^a,G^b]=f^{ab}_c(\lambda)G^c,
\end{equation}
where $f^{ab}_c(\lambda)$ are functions of $\lambda^a(z)$. Note that this field dependence arises in any dimension: the extra factor of $\lambda^a$ included in going from the generators $U^i$ to $G^a$ means that, whether or not the $h_{ij}{}^k$ are constant in the dimension under consideration, dimensional analysis alone requires the $f^{ab}_c$ to be functions of $\lambda^a(z)$.
\subsection{The Sigma Model and its Symmetries}
Incorporating the constraint (\ref{G2}) into the action (\ref{S2}) leads to the constrained action
\begin{equation}\label{S3}
S= \int_{\Sigma} {\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}+\mu T+e_aG^a,
\end{equation}
where we have included a Lagrange multiplier $e_a$ to impose the constraint $G^a=0$. The $2d-4$ functions $G^a$ are not linearly independent but are related by the $d$ expressions
$$
G^aZ_a^{\mu}=0,
$$
where $Z^{\mu}_a(z)\equiv\Gamma^{\mu}_{ab}\lambda^b(z)$. The transformation of $\lambda^a(z)$ generated by $G^a(z)$ preserves $P_{\mu}(z)$. It is not hard to see that this requires $(\delta \lambda^a)\Gamma_{ab}^{\mu}\lambda^b=0$ and so $\delta\lambda^a$, and hence $G^a$, is in the kernel of $Z_a^{\mu}$. This is a direct result of (\ref{identity}). In turn $Z^{\mu}_a$, treated as a map from the space of spinors $\lambda$ to Minkowski space, has a non-empty kernel as a result of the identity $Z^{\mu}_aP_{\mu}=0$. There are thus only $d-3$ independent constraints. The space of null lines is $2d-3$ dimensional. Subject to the constraint $G^a=0$, the ambitwistors ${\cal Z}^I$ have $2(2d-4)-(d-3)=3d-5$ components. The symmetry $\delta{\cal Z}^I=C{\cal Z}^I$ removes one degree of freedom and the remaining $d-2$ non-physical degrees of freedom are removed by the gauge symmetry generated by $G^a$, which we now review.
As is clear from the construction outlined in the previous section, $G^a$ does not act symmetrically on $\lambda^a(z)$ and $\omega_a(z)$ and so it is useful to work in terms of these spinor fields rather than the twistor ${\cal Z}(z)$. These fields transform as \cite{Berkovits:1990yc}
$$
\delta\lambda^a=\Big(-(\lambda^c\Gamma^{\mu}_{cd}\lambda^d)\Gamma_{\mu}^{ab}+2\lambda^a\lambda^b\Big)\varepsilon_b, \qquad \delta\omega_a=\Big(2\Gamma^{\mu}_{ac}\lambda^c(\Gamma^{de}_{\mu}\omega_e)-2\delta^d_a(\lambda^e\omega_e)-2\lambda^d\omega_a\Big)\varepsilon_d,
$$
where $\varepsilon_a$ depends on the worldsheet coordinates. It will be useful to introduce
\begin{equation}\label{xi}
\xi^{ab}\equiv-(\lambda^c\Gamma^{\mu}_{cd}\lambda^d)\Gamma_{\mu}^{ab}+2\lambda^a\lambda^b, \qquad
\tilde{\xi}_a{}^b\equiv 2\Gamma^{\mu}_{ac}\lambda^c(\Gamma^{be}_{\mu}\omega_e)-2\delta^b_a(\lambda^e\omega_e)-2\lambda^b\omega_a
\end{equation}
so that $G^a=\xi^{ab}\omega_b$, $\delta\lambda^a=\xi^{ab}\varepsilon_b$ and $\delta\omega_a=\tilde{\xi}_a{}^b\varepsilon_b$. In addition there are conformal transformations generated by the stress tensor $T(z)$. It is useful to write the generators of infinitesimal conformal and gauge transformations as
$$
\mathbf{G}(\varepsilon)=\oint\, \mathrm{d} z\;\varepsilon_a(z)G^a(z), \qquad \mathbf{T}(v)=\oint\, \mathrm{d} z\;v(z)T(z),
$$
respectively, where $v(z)$ is a weight $(-1,0)$ field (a worldsheet vector), $\varepsilon_a(z)$ is a weight $(-1/2,0)$ field and $T(z)$ is the stress tensor. The contour is implicitly assumed to encircle the point on $\Sigma$ where the operator is inserted. The algebra of the $G^a$'s is $[\mathbf{G}(\varepsilon),\mathbf{G}(\breve{\varepsilon})]=\mathbf{G}(\tilde{\varepsilon})$ where
$\tilde{\varepsilon}_{c}(z)=4\delta_c^{[a}\lambda^{b]}(z)\varepsilon_{a}(z)\breve{\varepsilon}_{b}(z)$ and may be written as (\ref{algebra1}) from which we see that the constraints are first class. We note that the algebra is not a Lie algebra, but has structure functions
\begin{equation}\label{f}
f_a^{bc}(\lambda)=-4\delta_a^{[b}\lambda^{c]}(z).
\end{equation}
Such algebras have been studied in many contexts \cite{Sohnius:1982rs} and are sometimes referred to as \emph{soft} gauge algebras\footnote{As opposed to a `hard' gauge algebra that would feature structure constants.}. This $\lambda^a$-dependence will play a crucial role in what follows and, as shown in \cite{Cederwall:1992bi,Berkovits:1990yr}, the field dependence in $f^{ab}_c(\lambda)$ can be traced back to the failure of associativity of the Octonions. We adopt the notation
$$
[\nu_i,\nu_j]=\nu_i\partial \nu_j-\nu_j\partial \nu_i, \qquad [\nu_i,\varepsilon_{j}]_a=\frac{1}{2}\varepsilon_{ja}\partial\nu_i-\nu_i\partial \varepsilon_{ja}, \qquad [\varepsilon_{i},\varepsilon_{j}]_a=f_a^{bc}(\lambda)\varepsilon_{ib}\varepsilon_{jc}.
$$
The full gauge algebra of the theory may then be written as
$$
[\mathbf{T}(v_i),\mathbf{T}(v_j)]=-\mathbf{T}([v_i,v_j]), \qquad [\mathbf{T}(v_i),\mathbf{G}(\varepsilon_j)]=-\mathbf{G}([v_i,\varepsilon_j]),
$$
\begin{equation}\label{algebra}
[\mathbf{G}(\varepsilon_i),\mathbf{G}(\varepsilon_j)]=-\mathbf{G}([\varepsilon_i,\varepsilon_j]).
\end{equation}
\subsection{Transformation of $e_a$}
The Lagrange multiplier $e_a(z)$ is a weight $(-1/2,1)$ bosonic field and invariance of the action (\ref{S3}) requires it to transform as a connection
$$
\delta e_a=-\bar{\partial}\varepsilon_a-4\delta_a^{[b}\lambda^{c]}\varepsilon_ce_b,
$$
and the algebra of these transformations closes on shell
\begin{equation}\label{newalg}
[\delta_{\varepsilon_1},\delta_{\varepsilon_2}]e_a=\delta_{\varepsilon_3}e_a+4\delta_a^{[b}\delta^{c]}_d\frac{\delta S}{\delta \omega_d}\varepsilon_{1b}\varepsilon_{2c},
\end{equation}
where $\varepsilon_3=-4\delta_a^{[b}\lambda^{c]}\varepsilon_{2b}\varepsilon_{1c}$ and
$$
\frac{\delta S}{\delta \omega_a}=\bar{\partial}\lambda^a+\xi^{ab}e_b,
$$
gives the $\lambda^a$ equation of motion. The algebra may be written as
$$
[G^a,G^b]=f_{c}^{ab}G^c+4q\delta_c^{[a}\delta^{b]}_d\frac{\delta S}{\delta \omega_d},
$$
where $q=1$ for $e_a(z)$ and $q=0$ for the other fields. The fact that the algebra (\ref{newalg}) is open on $e_a$ (i.e. only closes up to equations of motion) will have repercussions for how we define path integrals in the theory.
This is not the most general transformation of $e_a$ that is a symmetry of the action, we may also add a shift term proportional\footnote{The sign is chosen for later convenience.} to $Z^{\mu}_a$, since $Z^{\mu}_aG^a=0$ identically by virtue of the identity (\ref{identity}), and so we have
\begin{equation}\label{e2}
\delta e_a=-\bar{\partial}\varepsilon_a-4\delta_a^{[b}\lambda^{c]}\varepsilon_ce_b-Z^{\mu}_a\varepsilon_{\mu}.
\end{equation}
This expresses the reducibility of the gauge symmetry and the identity $Z^{\mu}_aP_{\mu}=0$ means that only $d-1$ of the $\varepsilon_{\mu}$ are independent.
This reducibility is easy to understand at the level of the field transformations. Consider the gauge transformations with parameter $\varepsilon_a=Z^{\mu}_a\varepsilon_{\mu}$. It is not hard to see that $\delta\lambda^a=\xi^{ab}Z^{\mu}_b\varepsilon_{\mu}=0$, for any $\varepsilon_{\mu}(z)$. The variation of the action under a gauge transformation with parameter $Z^{\mu}_a\varepsilon_{\mu}$ is
$$
\delta S= \int \delta\omega_a\left(\bar{\partial}\lambda^a+\xi^{ab}e_b\right)+\delta e_aG^a.
$$
On-shell, where $\bar{\partial}\lambda^a+\xi^{ab}e_b=0$, this implies that $\delta e_a=0$ and so in the path integral, when we quotient out by the action of the gauge group, we want to exclude such transformations from consideration. It is also clear that the shift $\varepsilon_{\mu}\rightarrow \varepsilon_{\mu}+P_{\mu}\varepsilon$ has no effect on the above argument and so we want to think of the $\varepsilon_{\mu}$ as only being defined up to the addition of $P_{\mu}\varepsilon$. The algebra closes on-shell and so the quantisation of this theory will require the full BV treatment, an issue we will deal with in section four.
\section{The BRST Charge}
We introduce a BRST charge $Q$ which acts on the space of fields (except $e_a(z)$). This construction will be algebraic and will have nothing to say about the action of the theory in question. If we neglect the $e_a(z)$ field, the algebra closes off-shell on the fields and we can follow the standard procedure to construct a suitable BRST charge. The fact that the algebra only closes on-shell on the $e_a(z)$ field will be dealt with in section four.
\subsection{A sketch of gauge-fixing}
To streamline the presentation we will ignore the conformal transformations in what follows, effectively setting $b(z)$ and $c(z)$ to zero. It is straightforward to accommodate the conformal ghost sector later.
\vspace{.3cm}
\noindent\emph{Ghosts}
\vspace{.2cm}
\noindent We start with the Lagrangian ${\cal L}_0+e_aG^a$, where ${\cal L}_0= {\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}$. This has symmetry
$$
\delta e_a=\bar{\partial}\varepsilon_a+f_a^{bc} \varepsilon_b e_c, \qquad \delta \lambda^a=\xi^{ab} \varepsilon_b, \qquad \delta\omega_a=\tilde{\xi}_a{}^b\varepsilon_b,
$$
where $\xi^{ab}$ and $\tilde{\xi}_a{}^b$ are given by (\ref{xi}). We introduce ghosts $(\eta_a,\rho^a)$ and the BRST charge
\begin{equation}\label{firstQ}
Q=\oint\, \mathrm{d} z\;\eta_a\Big(G^a+\frac{1}{2}f^{ab}_c\eta_b\rho^c\Big).
\end{equation}
$\eta_a(z)$ and $\rho^a(z)$ are of conformal weight $-1/2$ and $3/2$ respectively and satisfy the anti-commutation relations $\{\rho^a(z),\eta_b(w)\}=\delta^a_b\delta(z-w)$. This BRST charge tells us how to augment the constraint $G^a$ so that it also acts on the ghosts and we introduce
$$
H^a\equiv\{Q,\rho^a\}=G^a+f^{ab}_c\eta_b\rho^c.
$$
Before we move on to consider the reducibility of these generators, we briefly consider the ghost-modified generator $H^a$. It was already observed, in a slightly different context in \cite{Cederwall:1992bi}, that the algebra of the $H^a$ does not close without the introduction of additional generators.
$$
[H^a,H^b]=f^{ab}_cH^c+f^{abc}_dJ_c{}^d, \qquad [H^a,J_b{}^c]=f^{ad}_bJ_d{}^c-f^{ac}_dJ_b{}^d,
$$
$$
[J_a{}^b,J_c{}^d]=\delta_c^bJ_a{}^d-\delta_a^dJ_c{}^b
$$
where $f^{ab}_c$ is given by (\ref{f}) and $f^{abc}_d=4\delta^{[a}_d\xi^{b]c}$ with $\xi^{ab}$ given by (\ref{xi}). We have introduced the generators $J_a{}^b=\eta_a\rho^b$ which simply exchange the ghost fields amongst themselves. The enlargement of the soft algebra by the currents $J_a{}^b(z)$ may play an interesting role but we do not consider it further here.
There is an obstruction to setting the fields $e_a(z)$ to zero globally. The best we can do is to fix
\begin{equation}\label{gf1}
F_a:=e_a(z)-\sum_r s_a^r(z) \tilde{\mu}_r=0,
\end{equation}
where $\tilde{\mu}_r$ is a basis for the moduli space of these fields and $s_a^r(z)$ are worldsheet fields, which we take to transform under the BRST transformation as $\delta s_a^r(z)=m_a^r(z)$, where $\delta m_a^r(z)=0$. We then introduce the gauge-fixing fermion
$$
\psi=\int_{\Sigma}\, \mathrm{d}^2 z\;\rho^a(z) \left(e_a(z)- \sum_{r}s_a^r(z) \tilde{\mu}_r \right).
$$
The (non-minimal) gauge-fixed action is then
$$
S=\delta\psi+\int_{\Sigma}{\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}+b\bar{\partial}c+F_a\pi^a,
$$
where we have introduced an auxiliary field $\pi^a$ in the gauge-fixing term and $\delta\psi$ denotes the BRST variation of $\psi$. It is useful to define an inner product $\langle\;,\;\rangle$ given by integration over $\Sigma$, for example
$$
\langle H^a,\tilde{\mu}_r\rangle=\int_{\Sigma}\, \mathrm{d}^2z\,H^a(z)\, \tilde{\mu}_r(z), \qquad \langle b,\mu_m\rangle=\int_{\Sigma}\, \mathrm{d}^2z\,b(z)\, \mu_m(z),
$$
where $\mu_m$ ($m=1,2,...,n+3g-3$) are Beltrami differentials defined by $\mu_m=\partial\mu/\partial\tau^m$ and $\tau^m$ ($m=1,2,...,n+3g-3$) are local holomorphic coordinates on the moduli space of an $n$-punctured genus $g$ Riemann surface. Using standard techniques and including the contribution from the conformal ghosts we find that the correlation functions of observables are naively given by
\begin{equation}\label{action1}
\langle {\cal V}_1(z_1)...{\cal V}_n(z_n)\rangle=\int_{\Gamma_n}\left\langle\prod_m\langle b,\mu_m\rangle\;\prod_{a,r}\langle\rho^a,\tilde{\mu}_r\rangle\;\delta\Big(\langle H^a,\tilde{\mu}_r\rangle\Big)\; {\cal V}_1(z_1)...{\cal V}_n(z_n)\right\rangle,
\end{equation}
where the correlation function under the integral is computed using a path integral with the action
\begin{equation}\label{S4}
S=\int_{\Sigma}{\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}+\rho^a\bar{\partial}\eta_a+b\bar{\partial}c.
\end{equation}
and ${\cal V}_i(z_i)$ are physical operators inserted at the point $z_i$, corresponding to some observable (i.e. in the cohomology of $Q$). The expression (\ref{action1}) cannot be the full story as we have not taken into account the reducibility of the gauge symmetry and the BRST operator (\ref{firstQ}) is not quite right. The cycle of integration $\Gamma_n$ is over an appropriate space and will be discussed briefly in section 4.5. A complaint could be levelled at the above expression in that the action is not BRST invariant. This is due to the fact that $\delta^2 e_a$ is only weakly zero, i.e. it is zero up to terms proportional to the equations of motion and so $\delta^2\psi\neq 0.$ This issue is easily dealt with within the framework of BV quantisation \cite{Batalin:1984jr,Batalin:1981jr} and we shall return to this issue in section 4.
\vspace{.3cm}
\noindent\emph{Ghosts for ghosts}
\vspace{.2cm}
\noindent The action (\ref{S4}) has the additional fermionic symmetry generated by $H^{\mu}=Z^{\mu}_a\rho^a$ and given by
\begin{equation}\label{trans}
\delta'\lambda^a=0, \qquad \delta'\omega_a=\Gamma^{\mu}_{ab}\rho^b \varepsilon_{\mu}, \qquad \delta'\eta_a=Z^{\mu}_a\varepsilon_{\mu}.
\end{equation}
We add to the action the term $e_{\mu}H^{\mu}$, mirroring $e_aG^a$, and find the above transformations are a symmetry of the action (\ref{S4}) if $e_{\mu}(z)$ transforms as
$$
\delta' e_{\mu}=\bar{\partial}\varepsilon_{\mu},
$$
which is consistent with $[H^{\mu},H^{\nu}]=0$. Note that $\varepsilon_{\mu}(z)$ is a grassmann parameter. The idea that $H^{\mu}=0$ should be fixed by a Lagrange multiplier is reminiscent of the condition that the `$b$-ghosts' annihilate the physical state $b|\Psi\rangle=0$. In this case it is tempting to require that $\rho^a|\Psi\rangle=0$; however, taken at face value this is too many conditions on $|\Psi\rangle$ as not all of the $\rho^a$ are independent of each other. Instead the relevant condition is $\rho^a|\Psi\rangle=0$, supplemented with $Z^{\mu}_a\rho^a=0$. The condition on $Z^{\mu}_a\rho^a$ removes $d$ of the $\rho^a$, leaving $d-4$ independent degrees of freedom so that $\rho^a|\Psi\rangle=0$ imposes $d-4$ conditions on $|\Psi\rangle$.
We gauge fix the symmetry generated by (\ref{trans}) as above by introducing ghosts $(\rho^{\mu},\eta_{\mu})$. $\rho^{\mu}$ and $\eta_{\mu}$ have Bose statistics and are of weight $2$ and $-1$ respectively. They satisfy the commutation relation $[\rho^{\mu}(z),\eta_{\nu}(w)]=\delta^{\mu}_{\nu}\delta(z-w)$. We introduce the BRST charge for the transformations (\ref{trans})
$$
Q'=\oint\, \mathrm{d} z\;\eta_{\mu}(z)H^{\mu}(z),
$$
and a gauge-fixing fermion
$$
\psi'=\int_{\Sigma}\, \mathrm{d}^2 z\;\rho^{\mu}(z) \left(e_{\mu}(z)-\sum_{\dot{r}}s_{\mu}^{\dot{r}}(z)\tilde{\mu}_{\dot{r}}\right),
$$
where the $\tilde{\mu}_{\dot{r}}$ are a basis for the moduli space of the $e_{\mu}(z)$ fields. $\delta\psi'$ will provide a kinetic term for the new ghosts $\rho^{\mu}\bar{\partial}\eta_{\mu}$ in the gauge-fixed action.
\vspace{.3cm}
\noindent\emph{Ghosts for ghosts for ghosts}
\vspace{.2cm}
\noindent If we take as a starting point the Lagrangian ${\cal L}_0+\rho^a\bar{\partial}\eta_a+\rho^{\mu}\bar{\partial}\eta_{\mu}$ we find the action has the residual (bosonic) symmetry
$$
\delta''\lambda^a=0, \qquad \delta''\omega_a=2 \varepsilon \rho^{\mu}\Gamma_{\mu ab}\lambda^b, \qquad \delta''\eta_{\mu}=\varepsilon Z_{\mu}.
$$
In the same way as above, we introduce a constraint on the ghosts $\rho^{\mu}$, only $d-1$ of which are independent
$$
H=\rho^{\mu}(\lambda^a\Gamma_{\mu ab}\lambda^b)=\rho^{\mu}Z_{\mu}=0.
$$
This ensures that the condition $\rho^{\mu}|\Psi\rangle=0$ only places $d-1$ constraints on physical states. Thus, the conditions $H^a=0$, $H^{\mu}=0$, and $H=0$ ensure that $\rho^a|\Psi\rangle=0$ places exactly $d-3$ constraints on the state. We introduce a fermionic ghost system $(\rho,\eta)$ where $\rho$ and $\eta$ have conformal weight $3$ and $-2$ respectively and obey the anti-commutation relation $\{\rho(z),\eta(w)\}=\delta(z-w)$. The field transformations are generated by the BRST charge
$$
Q''=\oint\, \mathrm{d} z\;\eta(z) H(z),
$$
and $\{Q'',\rho(z)\}=H(z)$. The Lagrangian
$$
{\cal L}={\cal L}_0 +\rho^a\bar{\partial}\eta_a+\rho^{\mu}\bar{\partial}\eta_{\mu}+eH,
$$
is invariant under the symmetry generated by $H$ if the Lagrange multiplier $e(z)$ transforms as $\delta'' e=\bar{\partial}\varepsilon$. Introducing the gauge-fixing fermion
$$
\psi''=\int_{\Sigma}\rho(z) \left(e(z)-\sum_{\ddot{r}}s^{\ddot{r}}(z)\tilde{\mu}_{\ddot{r}}\right),
$$
the gauge-fixed action now also includes the kinetic term $\rho\,\bar{\partial}\eta$ for these new ghosts.
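To summarise the counting in this tower: the $2d-4$ constraints $G^a$ are subject to the $d$ relations $Z^{\mu}_aG^a=0$, which are in turn subject to the single relation $Z^{\mu}_aP_{\mu}=0$. A trivial sketch of the count (illustrative only):
\begin{verbatim}
def independent_constraints(d):
    # reducibility chain: 2d-4 constraints G^a, subject to the d relations
    # Z^mu_a G^a = 0, which are in turn subject to the single relation
    # Z^mu_a P_mu = 0
    return (2*d - 4) - d + 1

assert independent_constraints(10) == 7   # d - 3, as required for null lines
\end{verbatim}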
\subsection{The BRST Charge in detail}
The problem with the preceding discussion is that it neglects the possible effect of terms in the BRST charge that may involve interaction terms between the different ghost sectors. It is not hard to see that the total charge
$$
\oint\, \mathrm{d} z \;c\,{\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\partial{\cal Z}+\eta_a\left(G^a+\frac{1}{2}f^{ab}_c\eta_b\rho^c\right)+\rho^aZ_a^{\mu}\eta_{\mu}+\rho^{\mu} P_{\mu}\eta=\oint\, \mathrm{d} z\;c\,{\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\partial{\cal Z}+\eta_aH^a+\eta_{\mu}H^{\mu}+\eta H,
$$
does not square to zero. The crude sketch above gives a feel for the role of the additional ghost sectors but neglects important details. We now turn to a more careful discussion of gauge-fixing and the construction of the BRST charge. We will take the BRST charge to have the form
$$
Q=\oint\, \mathrm{d} z\; c\left(T+\frac{1}{2}T_{gh}\right)+\eta_a\left(G^a+\frac{1}{2}f^{ab}_c\eta_b\rho^c\right)+\rho^aZ_a^{\mu}\eta_{\mu}+\rho^{\mu} Z_{\mu}\eta+...,
$$
where the $+...$ terms denote terms required to ensure that $Q^2=\frac{1}{2}\{Q,Q\}=0$, which we seek to determine. Here $\{\cdot,\cdot\}$ is a Poisson bracket\footnote{Our focus will be on the construction of the classical gauge-fixed action.} given by
$$
\{A,B\}= \sum_I\int_{\Sigma}\, \mathrm{d}^2 z\left(\frac{\delta A}{\delta \phi^I(z)} \frac{\delta B}{\delta \chi_I(z)}- \frac{\delta A}{\delta \chi_I(z)} \frac{\delta B}{\delta \phi^I(z)}\right),
$$
where $\phi^I$ denotes the fields $(\lambda^a,c,\eta_a,\eta_{\mu},\eta)$ and $\chi_I$ are the conjugate fields $(\omega_a,b,\rho^a,\rho^{\mu},\rho)$ which may be thought of as the functional derivatives $(\delta/\delta \lambda^a,\delta/\delta c,\delta/\delta\eta_a,\delta/\delta \eta_{\mu},\delta/\delta\eta)$. The stress tensor $T(z)$ includes contributions from all ghost sectors, with the exception of the conformal $(b,c)$ system, which has stress tensor $T_{gh}(z)$. A suitable ansatz for the BRST charge is
\begin{equation}\label{Q1}
Q=\oint\, \mathrm{d} z\; c\left(T+\frac{1}{2}T_{gh}\right)+\sum_{p=0}^3Q_p,
\end{equation}
where $T$ is now the stress tensor for all sectors except for the $(b,c)$ conformal ghosts
\begin{equation}\label{T}
T=-\frac{1}{2}\Big(\omega_a\partial \lambda^a-\lambda^a\partial\omega_a \Big)+ T_{3/2}+T_2+T_3,
\end{equation}
where
$$
T_{3/2}=\frac{1}{2}\eta_a\partial \rho^a-\frac{3}{2}\rho^a\partial\eta_a, \qquad T_2=-\eta_{\mu}\partial \rho^{\mu}-2\rho^{\mu}\partial\eta_{\mu}, \qquad T_3=2\eta\partial \rho-3\rho\partial\eta.
$$
The conformal ghost stress tensor takes the usual form $T_{\text{gh}}=c\partial b-2b\partial c$. The reducibility issue is largely orthogonal to the role played by the conformal ghosts and, to streamline the presentation, we shall suppress all mention of the conformal ghosts until the end. The constraint $G^a$ is linear in $\omega_a$ and so we only need consider an ansatz for $Q$ that is linear in the $\rho$-ghosts \cite{Ferraro:1992ek}. The remaining contributions to the BRST charge are
\begin{eqnarray}
Q_0&=&\oint\, \mathrm{d} z\; \eta_aG^a,\nonumber\\
Q_1&=&\oint\, \mathrm{d} z\Big( \rho^aZ_a^{\mu}\eta_{\mu}+\frac{1}{2}f^{ab}_c\eta_a\eta_b\rho^c\Big),\nonumber\\
Q_2&=&\oint\, \mathrm{d} z\Big( \rho^{\mu}Z_{\mu}\eta-{\cal C}^{a\mu}_{\nu}\eta_a\eta_{\mu}\rho^{\nu}+\frac{1}{6}{\cal M}_{\mu}^{abc}\eta_a\eta_b\eta_c\rho^{\mu}\Big),\nonumber\\
Q_3&=&\oint\, \mathrm{d} z\Big( \frac{1}{2}{\cal N}^{\mu\nu}\eta_{\mu}\eta_{\nu}\rho-{\cal C}^a\eta_a\eta\rho+\frac{1}{2}{\cal M}^{ab\mu}\eta_a\eta_b\eta_{\mu}\rho\Big).
\end{eqnarray}
The structure functions were found previously $f^{ab}_c=4\delta^{[a}_c\lambda^{b]}$ and the reducibility factors are $Z^{\mu}_a=\Gamma_{ab}^{\mu}\lambda^b$ and $Z_{\mu}=\lambda^a\Gamma_{ab\mu}\lambda^b$. The condition $Q^2=0$ places restrictions on the functions ${\cal C}^{a\mu}_{\nu}$, ${\cal C}^a$, ${\cal N}^{\mu\nu}$, ${\cal M}_{\mu}^{abc}$, and ${\cal M}^{ab\mu}$. At the classical level, these restrictions take the form of differential equations in $\lambda^a$
\begin{eqnarray}\label{de}
\xi^{ad}f^{bc}_d&=&\xi^{bd}\partial_d\xi^{ac}-\xi^{cd}\partial_d\xi^{ab},\nonumber\\
\xi^{ac}\partial_cZ^{\mu}_b+Z^{\mu}_cf^{ca}_b&=&{\cal C}^{a\mu}_{\nu}Z^{\nu}_b,\nonumber\\
\xi^{[a|d}\partial_df^{|bc]}_e&=&f_d^{[ab}f^{c]d}_e-\frac{2}{3}{\cal M}^{abc}_{\mu}Z^{\mu}_{e},\nonumber\\
{\cal C}^{a\mu}_{\nu}Z^{\lambda}_{a}+{\cal C}^{a\lambda}_{\nu}Z^{\mu}_{a}&=&{\cal N}^{\mu\lambda}Z_{\nu},\nonumber\\
\xi^{[a|c}\partial_c{\cal C}^{|b]\mu}_{\nu}&=&-\frac{1}{2}f^{ab}_c{\cal C}^{c\mu}_{\nu}-{\cal C}^{[a|\mu}_{\lambda}{\cal C}^{|b]\lambda}_{\nu}+{\cal M}^{ab\mu}Z_{\mu}+{\cal M}^{abc}_{\mu}Z^{\mu}_c,\nonumber\\
{\cal C}^aZ^{\mu}_a&=&{\cal N}^{\mu\nu}Z_{\nu},\nonumber\\
\xi^{[a|c}\partial_c{\cal C}^{|a]}&=&-\frac{1}{2}f_{c}^{ab}{\cal C}^c+{\cal C}^{[a}{\cal C}^{b]}+{\cal M}^{ab\mu}Z_{\mu}.
\end{eqnarray}
where $\partial_a=\partial/\partial\lambda^a$. The requirement that anomalies vanish places additional conditions on these functions and we shall discuss the vanishing of the conformal anomaly briefly below. The first equation in the list gives the structure functions (\ref{f}). We solve the other differential equations starting with the input data
$$
\xi^{ab}\equiv(\lambda^c\Gamma_{cd}^{\mu}\lambda^d)\Gamma_{\mu}^{ab}-2\lambda^a\lambda^b, \qquad Z^{\mu}_a\equiv\Gamma_{ab}^{\mu}\lambda^b, \qquad Z_{\mu}\equiv\eta_{\mu\nu}(\lambda^a\Gamma_{ab}^{\nu}\lambda^b).
$$
The equations (\ref{de}) may be solved to give
$$
f_a^{bc}=-4\lambda^{[b}\delta^{c]}_a, \qquad {\cal C}^{a\mu}_{\nu}=2\Gamma^{\mu ab}\Gamma_{\nu bc}\lambda^c, \qquad {\cal C}^a=4\lambda^a, \qquad {\cal N}^{\mu\nu}=4\eta^{\mu\nu},
$$
\begin{equation}\label{constants}
{\cal M}_{\mu}^{abc}=0, \qquad {\cal M}^{ab\mu}=0.
\end{equation}
The classical BRST variation of all fields, with the exception of $e_a$, is then given by $\delta_Q\phi^i(z)=\{Q,\phi^i(z)\}$.
It will be useful to define the extended generators
\begin{equation}\label{G}
{\cal G}^a(z)=-\{Q,\rho^a(z)\}, \qquad {\cal G}^{\mu}(z)=-\{Q,\rho^{\mu}(z)\}, \qquad {\cal G}(z)=-\{Q,\rho(z)\}.
\end{equation}
Explicitly
$$
{\cal G}^a=G^a+f^{ab}_c\eta_b\rho^c-{\cal C}^{a\mu}_{\nu}\eta_{\mu}\rho^{\nu}-{\cal C}^a\eta\rho-\frac{1}{2}c\partial \rho^a+\frac{3}{2}\partial (c\rho^a),
$$
$$
{\cal G}^{\mu}=\rho^aZ^{\mu}_a-{\cal C}^{a\mu}_{\nu}\eta_a\rho^{\nu}+{\cal N}^{\mu\nu}\eta_{\nu}\rho-c\partial\rho^{\mu}+2\partial(c\rho^{\mu}),
$$
$$
{\cal G}=\rho^{\mu}Z_{\mu}+{\cal C}^a\eta_a\rho+5c\partial\rho-3\rho\partial c.
$$
${\cal G}^a$ clearly generalises $G^a$ to include the ghost sector. ${\cal G}^{\mu}$ and ${\cal G}$ may be thought of as nonlinear generalisations of the expressions $\rho^aZ^{\mu}_a$ and $\rho^{\mu}Z_{\mu}$ respectively.
Finally we note that the central charge of the theory, including the complete reducible ghost sector and the conformal ghosts may be straightforwardly calculated. We find that it vanishes if $d=26$. This is as expected from the traditional form of the theory presented in \cite{Mason:2013sva}. Details of this calculation may be found in the Appendix.
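The counting behind this statement is elementary: a first-order pair of weights $(\lambda,1-\lambda)$ contributes $\mp2(6\lambda^2-6\lambda+1)$ to the central charge, with the upper sign for anticommuting and the lower sign for commuting statistics. A minimal sketch of the count, using the field content and statistics assigned above (illustrative only; the detailed calculation is in the Appendix):
\begin{verbatim}
def c_first_order(lam, anticommuting=True):
    # central charge of a weight-(lam, 1-lam) first-order system
    sign = -1 if anticommuting else +1
    return sign * 2 * (6*lam**2 - 6*lam + 1)

def c_total(d):
    n = 2*d - 4                                       # spinor components
    return (n * c_first_order(0.5, anticommuting=False)   # (omega, lambda)
            + c_first_order(2)                            # (b, c)
            + n * c_first_order(1.5)                      # (rho^a, eta_a)
            + d * c_first_order(2, anticommuting=False)   # (rho^mu, eta_mu)
            + c_first_order(3))                           # (rho, eta)

print(c_total(26))      # 0: the anomaly vanishes precisely in d = 26
\end{verbatim}
The total works out to $2d-52$, vanishing at $d=26$ as claimed.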
\section{The Master Action and Gauge Fixing}
We have noted at various points that the gauge algebra (BRST symmetry) only closes (is nilpotent) up to equations of motion. Given that the only physical field that exhibits this problem is $e_a(z)$, which we gauge fix, one could ask whether this really matters in practice. For most considerations it is unlikely to cause significant problems; however, a well-developed procedure to deal with such cases does exist \cite{Batalin:1984jr,Batalin:1981jr} and so, for completeness, we discuss this here.
\subsection{$e_a$ and its descendants}
The discussion so far has largely neglected $e_a(z)$. In particular, the BRST charge does not explicitly encode the gauge transformation of this field. We can generalise the BRST transformation of $e_a(z)$ to include the shift symmetry of (\ref{e2})
\begin{equation}\label{d2}
\delta e_a=-\bar{\partial}\eta_a+f_a^{bc} \eta_b e_c-e_{\mu}Z^{\mu}_a,
\end{equation}
where $e_{\mu}$ is a Grassmann odd field. We see that $\delta e_a=e_{\mu}Z^{\mu}_a+...$ acts as a shift symmetry which removes $d-1$ components\footnote{Since $Z^{\mu}_a(e_{\mu}+P_{\mu}\alpha)=Z^{\mu}_ae_{\mu}$ for any function $\alpha$.} of $e_a$. The BRST charge (\ref{Q1}) determines how the ghosts transform
$$
\delta \eta_a=Z^{\mu}_a\eta_{\mu}+\frac{1}{2}f_a^{bc}\eta_b\eta_c, \quad \delta\eta_{\mu}=Z_{\mu}\eta-{\cal C}^{a\nu}_{\mu}\eta_a\eta_{\nu}, \qquad \delta\eta=\frac{1}{2}{\cal N}^{\mu\nu}\eta_{\mu}\eta_{\nu}-{\cal C}^a\eta_a\eta,
$$
with the coefficients given by (\ref{constants}). With a little work one can show that
$$
\delta^2e_a=-\Big(\bar{\partial}\lambda^b+\xi^{bc}e_c\Big)\Big( 2\eta_b\eta_a-\Gamma_{ab}^{\mu}\eta_{\mu} \Big)+Z^{\mu}_a\Big(-\delta e_{\mu}-\bar{\partial}\eta_{\mu}+{\cal C}^{b\nu}_{\mu}e_b\eta_{\nu}-{\cal C}^{b\nu}_{\mu}\eta_be_{\nu}\Big),
$$
where the identity
$$
\xi^{ac}\partial_cZ^{\mu}_b+Z^{\mu}_cf^{ca}_b={\cal C}^{a\mu}_{\nu}Z^{\nu}_b,
$$
from (\ref{de}) has been used to simplify the expression. If we generalise the BRST symmetry to include the transformations
\begin{equation}\label{d1}
\delta e_{\mu}\equiv -\bar{\partial}\eta_{\mu}+{\cal C}^{b\nu}_{\mu}e_b\eta_{\nu}-{\cal C}^{b\nu}_{\mu}\eta_be_{\nu}+eP_{\mu},
\end{equation}
then $\delta^2e_a$ vanishes up to terms proportional to $\delta S/\delta\omega_a$. We are at liberty to include a shift symmetry $\delta e_{\mu}=...+eP_{\mu}$ to account for the reducibility at level one. Taking (\ref{d1}) as the BRST transformation of $e_{\mu}$, we now consider the second variation of $e_{\mu}$ and with a little work find
$$
\delta^2e_{\mu}=-\Big(\bar{\partial}\lambda^b+\xi^{bc}e_c\Big)\Big( 2 \Gamma^{\nu ad}\Gamma_{\mu db}\eta_a\eta_{\nu}-2Z_{\mu a}\eta \Big)+Z_{\mu}\Big( \delta e + \bar{\partial}\eta-{\cal C}^a\eta e_a-{\cal C}^a\eta_a e+{\cal N}^{\mu\nu}e_{\mu}\eta_{\nu}\Big),
$$
where the identity
$$
\xi^{[a|c}\partial_c{\cal C}^{|b]\mu}_{\nu}+\frac{1}{2}f^{ab}_c{\cal C}^{c\mu}_{\nu}+{\cal C}^{[a|\mu}_{\lambda}{\cal C}^{|b]\lambda}_{\nu}=0,
$$
in (\ref{de}) has been used. The requirement that $\delta^2e_{\mu}$ vanishes up to terms proportional to $\delta S/\delta\omega_a$, fixes the BRST transformation of $e(z)$ and we generalise the BRST symmetry to include
\begin{equation}\label{d3}
\delta e\equiv -\bar{\partial}\eta+{\cal C}^a\eta e_a-{\cal C}^a\eta_a e+{\cal N}^{\mu\nu}e_{\mu}\eta_{\nu}.
\end{equation}
With a bit of work one may show that, using the identities
$$
{\cal C}^{a\mu}_{\nu}Z^{\lambda}_{a}+{\cal C}^{a\lambda}_{\nu}Z^{\mu}_{a}={\cal N}^{\mu\lambda}Z_{\nu}, \qquad {\cal C}^aZ^{\mu}_a={\cal N}^{\mu\nu}Z_{\nu},
$$
that $\delta^2 e$ also vanishes up to terms proportional to $\delta S/\delta\omega_a$. In particular, we have
$$
\delta^2 e=-4(\bar{\partial}\lambda^a+\xi^{ab}e_b)\eta_a\eta.
$$
The fact that $\delta^2$ only vanishes up to equations of motion will require the widening of the space of fields to include anti-fields if we are to construct a BRST-invariant action. We shall define (\ref{d2}), (\ref{d1}) and (\ref{d3}) to be the BRST transformations of the fields $e_a(z)$, $e_{\mu}(z)$ and $e(z)$ respectively. We augment the BRST charge $Q$ to generate these transformations also. We will call this augmented BRST charge $\widehat{Q}$ and we shall see that the action of $\widehat{Q}$ on the space of fields is naturally incorporated into the BV framework.
\subsection{Deformations and Moduli}
The basic framework we have been exploring is that of a closed $n$-punctured Riemann surface $\Sigma$ with a bundle whose soft gauge algebra, generated by the $G^a(z)$, is associated with a natural $S^7$ action. There will be obstructions to setting the gauge fields to zero everywhere and upon gauge-fixing, the functional integrals over the fields $e_a(z)$, $e_{\mu}(z)$, $e(z)$ and $\mu(z)$ reduce to finite dimensional integrals over a moduli space which we shall denote by ${\cal E}$. The algebra (\ref{algebra}) suggests that we may think of ${\cal E}$ as a bundle over the moduli space of closed Riemann surfaces ${\cal M}$.
To integrate over this space we consider deformations of the underlying Riemann surface and the gauge bundle along the lines of \cite{DHoker:1988pdl}. Deformations in the moduli of $\Sigma$ are generated by the stress tensor $T(z)$ and, at genus zero, a basis for such deformations is given by translating the location of $n-3$ of the $n$ punctures. If we introduce a coordinate system $z_i$ in a small disc $\mathscr{D}_i$ around the $i$'th puncture, the moduli deformation is encoded in a worldsheet vector field $v(z_i)$ which gives $z_i\rightarrow z_i+v_m(z_i)\delta\tau^m$ with $\tau^m$ a holomorphic coordinate on ${\cal M}$. A basis of $n-3$ vectors will be denoted by $\vec{v}_m(z_i)=\Big(v_1(z_i),...,v_{n-3}(z_i)\Big)$ and it is natural to choose a basis such that we associate each component with a different puncture. In a small annulus around a puncture the vector field $v(z)$ is related to the Beltrami differential $\mu_m(z)=\partial_m\mu$ by $\bar{\partial}v_m=\mu_m$ where $\partial_m=\partial/\partial\tau^m$. And so we can encode the deformation as the charge
$$
\mathbf{T}(\vec{v}_m)=\int_{\Sigma}\, \mathrm{d}^2z\,T(z)\mu_m(z)=\sum_{i=1}^n\oint_{\mathscr{D}_i}\, \mathrm{d} z_i\,T(z_i)v_m(z_i),
$$
where we have used $\partial\Sigma=-\cup_{i=1}^n\partial\mathscr{D}_i$. And similarly for the $b$-field
$$
\mathbf{b}(\vec{v}_m)\equiv \sum_{i=1}^n \oint_{\mathscr{D}_i}\, \mathrm{d} z_i\;b(z_i)v_m(z_i).
$$
The $\vec{v}_m$ give rise to a basis for $T{\cal M}$ (see for example \cite{Zwiebach:1992ie} for details). We need a similar basis for the tangent to the fibres when integrating over the $e_a$ (and its descendants). The space of $e_a$ is the space of weight $(1,-1/2)$ worldsheet fields with bosonic statistics and so the moduli space for $e_a(z)$ is the space of such fields, modulo the gauge transformations (\ref{e2}). This space is familiar from the integration over worldsheet gravitini in the conventional superstring, except in this case we have a parity reversed field and so for each of the $2d-4$ $e_a(z)$ we have a copy of this $n-2$ dimensional space. Let $\{\tilde{\mu}_r\}$ be a basis for the $n-2$ dimensional tangent space to this moduli space. The gauge-fixed $e_a(z)$ may be written as
$$
e_a(z,\tau)=\sum_r s_a^r(z)\,\tilde{\mu}_r(z,\tau),
$$
just as in the standard gravitino case, except the $s_a^r(z)$ are worldsheet fields with the opposite statistics. As above, it is useful to introduce
\begin{equation}\label{G1}
\langle{\cal G}^a,\tilde{\mu}_r\rangle=\int_{\Sigma}\, \mathrm{d}^2z\,{\cal G}^a(z)\,\tilde{\mu}_r(z).
\end{equation}
Similar objects $\langle{\cal G}^{\mu},\tilde{\mu}_{\dot{r}}\rangle$ and $\langle{\cal G},\tilde{\mu}_{\ddot{r}}\rangle$ are defined using an appropriate basis $\{\tilde{\mu}_{\dot{r}}\}$ and $\{\tilde{\mu}_{\ddot{r}}\}$ for the moduli spaces of $e_{\mu}(z)$ and $e(z)$ respectively. Let $D$, $\dot{D}$ and $\ddot{D}$ be the dimensions of the moduli spaces of the fields $e_a(z)$, $e_{\mu}(z)$ and $e(z)$ respectively (i.e. the dimension of the space of such fields, modulo infinitesimal gauge transformations corresponding to the BRST transformations found in the section above). Then $r=1,2,...,D$ and $\ddot{r}=1,2,...,\ddot{D}$ index bosonic directions and $\dot{r}=1,2,...,\dot{D}$ indexes grassmann directions. The ghost systems $(\rho^a,\eta_a)$, $(\rho^{\mu},\eta_{\mu})$ and $(\rho,\eta)$ are of the conventional $b\bar{\partial}c$ type of weight $3/2$, $2$ and $3$ respectively and, neglecting the moduli associated with punctures, we have that $D=(2d-4)(2g-2)$, $\dot{D}=d(3g-3)$ and $\ddot{D}=5g-5$ where $g$ is the genus of $\Sigma$. (The general variation of $e_a(z,\tau)$ is given by $\Delta e_a=-\bar{\partial}\varepsilon_a+f_a^{bc}(\lambda)\varepsilon_ce_b-Z^{\mu}_a\varepsilon_{\mu}+\delta\tau^r\partial_r e_a$, where $\tau^r$ are moduli (coordinates on the space of $e_a$ modulo infinitesimal gauge transformations). We assume the parameters $\varepsilon_a$ and $\varepsilon_{\mu}$ are independent of the moduli $\tau^r$ and so the shift symmetry $e_a\rightarrow e_a-Z^{\mu}_a\varepsilon_{\mu}$ can be used to remove $d-3$ components of $e_a$ for a given $\tau^r$ but the $e_a$ still vary with $\tau^r$, thus the relevant moduli space after gauge-fixing is still $D$-dimensional.) The fibres of ${\cal E}$ are then $(D+\ddot{D}|\dot{D})$-dimensional. Correlation functions involving the $\langle \rho^a,\tilde{\mu}_{r}\rangle$, $\delta(\langle\rho^{\mu},\tilde{\mu}_{\dot{r}}\rangle)$ and $\langle\rho,\tilde{\mu}_{\ddot{r}}\rangle$ are expected to give top (holomorphic) forms on these fibres. More generally, we expect correlation functions involving the $\mathbf{b}(\vec{v}_m)$ in addition to these ghost insertions to have an interpretation as forms on ${\cal E}$. The bundle ${\cal E}$ appears to have a rich structure and more work needs to be done to clarify the details.
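The dimensions quoted above follow from the standard zero-mode count: neglecting punctures and for $g\geq2$, a weight-$\lambda$ antighost has $(2\lambda-1)(g-1)$ holomorphic zero modes per component. A small sketch (illustrative only):
\begin{verbatim}
def zero_modes(weight, genus, components=1):
    # holomorphic zero modes of a weight-`weight` antighost, genus >= 2,
    # punctures neglected: (2*weight - 1)*(genus - 1) per component
    return components * int((2*weight - 1) * (genus - 1))

d, g = 10, 2
print(zero_modes(3/2, g, components=2*d - 4))   # D      = (2d-4)(2g-2)
print(zero_modes(2,   g, components=d))         # D-dot  = d(3g-3)
print(zero_modes(3,   g))                       # D-ddot = 5g-5
\end{verbatim}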
\subsection{Gauge Fixing}
Our starting point is the non-minimal action
$$
S=\int_{\Sigma} {\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}+e_aG^a +\delta\Psi+\pi^a{\cal F}_a+\pi^{\mu}{\cal F}_{\mu}+\pi {\cal F},
$$
where BRST variations are generated by $\widehat{Q}$, the full BRST charge, which includes the transformations (\ref{d2}), (\ref{d1}) and (\ref{d3}). We take the gauge-fixing fermion to be
\begin{equation}\label{gff}
\Psi=\int_{\Sigma}\, \mathrm{d}^2z\Big(\rho^a {\cal F}_a+\rho^{\mu}{\cal F}_{\mu}-\rho {\cal F}\Big),
\end{equation}
where
$$
{\cal F}_a=e_a- \sum_{r=1}^{n-2}s_a^r(z) \tilde{\mu}_r , \qquad {\cal F}_{\mu}=e_{\mu}- \sum_{\dot{r}}s_{\mu}^{\dot{r}}(z) \tilde{\mu}_{\dot{r}}, \qquad {\cal F}=e- \sum_{\ddot{r}}s^{\ddot{r}}(z) \tilde{\mu}_{\ddot{r}}.
$$
The $s_a^r(z)$ are worldsheet fields, which transform as $\delta s_a^r(z)=m_a^r(z)$, where $\delta m_a^r(z)=0$. We introduce similar fields such that $\delta s_{\mu}^{\dot{r}}=m_{\mu}^{\dot{r}}$ and $\delta s^{\ddot{r}}=m^{\ddot{r}}$. The Lagrange multipliers $\pi^a$, $\pi^{\mu}$ and $\pi$ set ${\cal F}_a=0$, ${\cal F}_{\mu}=0$ and ${\cal F}=0$ respectively. Thus the only contribution to the variation of the gauge-fixing fermion $\Psi$ comes from $\delta\Psi\approx-\langle\rho^a,\delta{\cal F}_a\rangle+\langle\rho^{\mu},\delta{\cal F}_{\mu}\rangle+\langle\rho,\delta {\cal F}\rangle$ where it is understood that $\approx$ denotes equality subject to the equations of motion for the $\pi$'s. Using the variations of the Lagrange multipliers $e_a(z)$, $e_{\mu}(z)$ and $e(z)$ derived in the last section, it is straightforward to show that
\begin{eqnarray}\label{dPsi}
\delta\Psi\approx\int_{\Sigma}\, \mathrm{d}^2z \Bigg(&&\rho^a\bar{\partial} \eta_a-\rho^{\mu}\bar{\partial}\eta_{\mu}+\rho\bar{\partial}\eta +\Big({\cal G}^a-G^a\Big)e_a
+{\cal G}^{\mu}e_{\mu}+{\cal G}e \nonumber\\
&&-\sum_ rm_a^r\tilde{\mu}_r\rho^a+\sum_{\dot{r}}m_{\mu}^{\dot{r}}\tilde{\mu}_{\dot{r}}\rho^{\mu}+\sum_{\ddot{r}}m^{\ddot{r}}\tilde{\mu}_{\ddot{r}}\rho\Bigg).
\end{eqnarray}
Integrating out the $m_a^r$, $m_{\mu}^{\dot{r}}$ and $m^{\ddot{r}}$ results in the insertion of the operators
\begin{equation}\label{B}
\prod_{r,a} \langle\rho^a,\tilde{\mu}_r\rangle, \qquad \prod_{\dot{r},\mu}\delta\Big(\langle\rho^{\mu},\tilde{\mu}_{\dot{r}}\rangle\Big), \qquad \prod_{\ddot{r}}\langle\rho,\tilde{\mu}_{\ddot{r}}\rangle,
\end{equation}
respectively into the path integral. Similarly, integrating out the $s_a^r(z)$, $s_{\mu}^{\dot{r}}(z)$ and $s^{\ddot{r}}(z)$ results in the insertion of the operators
\begin{equation}\label{GG}
\prod_{a,r}\delta\Big(\langle{\cal G}^a,\tilde{\mu}_{r}\rangle\Big), \qquad \prod_{\mu,\dot{r}}\langle{\cal G}^{\mu},\tilde{\mu}_{\dot{r}}\rangle, \qquad \prod_{\ddot{r}}\delta\Big(\langle{\cal G},\tilde{\mu}_{\ddot{r}}\rangle\Big).
\end{equation}
Putting the stress tensor ghost contribution in, the gauge-fixed Lagrangian is then
\begin{equation}\label{gfS}
S= \int_{\Sigma} {\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z} + b\bar{\partial}c+\rho^a\bar{\partial}\eta_a-\rho^{\mu}\bar{\partial}\eta_{\mu}+\rho\bar{\partial}\eta,
\end{equation}
with the ghost insertions discussed above as well as the usual holomorphic $\mathbf{b}(\vec{v}_m)$ insertions. As noted above, the worldsheet vectors $\vec{v}_m(z_i)=(v_1,...,v_{n-3})$ form a basis for the $n-3$ moduli deformations based at the $i$'th puncture $z_i\rightarrow z_i+v(z_i)$. At genus zero the deformations can be chosen to simply translate the puncture. The $SL(2)$ invariance may be used to fix three punctures so that $\vec{v}_m(z_i)=\delta_{m i}$ for $i=1,2,...,n-3$ and vanishes for $i=n-2,n-1,n$ (the three fixed punctures).
Introducing some local operators ${\cal V}_i(z_i)$ in the cohomology of $\widehat{Q}$, a correlation function of tree-level observables is given by\footnote{Due to its statistics, the object
$$
Y=\prod_{\mu,\dot{r}}\langle{\cal G}^{\mu},\tilde{\mu}_{\dot{r}}\rangle\;\delta\Big(\langle\rho^{\mu},\tilde{\mu}_{\dot{r}}\rangle\Big),
$$
plays a role akin to that of a picture changing operator in the supersymmetric theory.}
\begin{equation}\label{integral}
A_n=\int_{\Gamma_n} \left\langle\;\prod_m\mathbf{b}(\vec{v}_m)\;\prod\mathbf{B}(\tilde{\mu})\;\prod\delta \Big(\mathbf{G}(\tilde{\mu})\Big){\cal V}_1...{\cal V}_n\right\rangle,
\end{equation}
where $\prod\mathbf{B}(\tilde{\mu})$ denotes the product of the ghost insertions (\ref{B}) and $\prod\delta \left(\mathbf{G}(\tilde{\mu})\right)$ denotes the product of the terms in (\ref{GG}). The action used to compute the correlation function under the integral is (\ref{gfS}). $\Gamma_n\subset {\cal E}$ is a cycle that we discuss briefly towards the end of section 4.5, although we freely admit that, at this stage, we have no concrete method of determining it. It may be possible to adapt the methods of \cite{Ohmori:2015sha} to evaluate the result of this integral in terms of the stationary points of a suitably defined Morse function.
\subsection{Open algebras and the Master Action}
We finally turn to the fact that the soft algebra only closes on-shell. Starting with the action
$$
S_{(0)}= \int_{\Sigma} {\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}+e_aG^a,
$$
we add a gauge fixing term
$$
S_{(1)}=\delta\Psi,
$$
and possibly non-minimal terms involving the $\pi$-fields as above. If the action of the BRST charge is nilpotent then $\delta^2\Psi=0$ and the combined action $S=S_{(0)}+S_{(1)}$ is BRST-invariant. In our case the gauge-fixed action above is not BRST invariant and the origin of this is the fact that the presence of $e_a(z)$ and its descendants in $\Psi$ means that $\delta^2\Psi$ only vanishes up to terms proportional to $\delta S/\delta\omega_a$. The BV construction \cite{Batalin:1984jr, Batalin:1981jr} tells us how to construct an off-shell BRST-invariant action by expanding the space of fields to include anti-fields. The key idea is that, for each field $\phi^i$, one introduces an anti-field\footnote{The $^*$ notation signifies the object to be an antifield. No notion of complex or Hermitian conjugation is intended. We include ghosts in our definition of `field'.} $\phi^*_i$ whose BRST variation is precisely the equation of motion for the corresponding field
$$
\delta \phi^*_i=\frac{\delta {\cal S}}{\delta \phi^i}.
$$
In this way terms proportional to the equations of motion are rendered trivial in the extended BRST cohomology. We clearly need to extend our definitions of the BRST charge $Q$ and classical action to accommodate the anti-fields.
We sketch the basic idea here but details on the formalism may be found in \cite{Henneaux:1989jq} and \cite{Gomis:1994he} contains a number of worked examples. The main new ingredient is the bracket $(\;,\;)$ defined by
$$
({\cal A},{\cal B})=\sum_i\int_{\Sigma}\left(\frac{\delta^r{\cal A}}{\delta \phi^i(z)}\frac{\delta^l {\cal B}}{\delta \phi^*_i(z)}-\frac{\delta^r {\cal A}}{\delta \phi^*_i(z)}\frac{\delta^l {\cal B}}{\delta \phi^i(z)}\right),
$$
where ${\cal A}$ and ${\cal B}$ are functionals of the fields and anti-fields and the $l$ $(r)$ superscript denotes the functional derivatives acting from the left (right), so that, for example, $(\phi^i(z),\phi^*_j(w))=\delta^i_j\delta(z-w)$. The classical Master Action ${\cal S}$ is required to satisfy $({\cal S},{\cal S})=0$ and may be found iteratively as a series
$$
{\cal S}=S_{(0)}+S_{(1)}+S_{(2)}+...,
$$
where the `boundary conditions' $S_{(0)}$ and $S_{(1)}$ are, broadly speaking, as above. The expansion may be thought of as a polynomial expansion in antifields where $S_{(0)}$ is the original action containing fields only and $S_{(1)}$ is linear in antifields. The gauge-fixing fermion, which is a functional of the fields, tells us the relationship between the fields and antifields
\begin{equation}\label{anti}
\phi_i^*=\frac{\delta\Psi}{\delta\phi^i},
\end{equation}
and we may write $S_{(1)}=\sum_i\phi_i^*\delta \phi^i$. Indeed, if the algebra closes off-shell then the master action will be at most linear in the antifields
$$
{\cal S}=S_{(0)}+\sum_i\frac{\delta\Psi}{\delta\phi^i}\delta \phi^i=S_{(0)}+\delta\Psi,
$$
and we can eliminate the antifields entirely. We then extend the minimal gauge-fixed action by introducing terms $b_i^*\pi^i$ such that $\delta b^i=\pi^i$. The BRST transformations, including (\ref{d2}), (\ref{d1}) and (\ref{d3}), are given by
$$
\delta_{\widehat{Q}} \phi^i=({\cal S},\phi^i)=\frac{\delta {\cal S}}{\delta \phi^*_i}.
$$
More generally, the extended notion of BRST transformation is also given by $\delta \phi^i=({\cal S},\phi^i)$ but now the action may have quadratic or higher dependence on the anti-fields.
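To make the iterative structure explicit, one may grade the classical master equation by antifield number; the standard bookkeeping (see \cite{Henneaux:1989jq}) gives, at the lowest orders,
$$
(S_{(0)},S_{(1)})=0, \qquad (S_{(1)},S_{(1)})+2(S_{(0)},S_{(2)})=0, \qquad ...
$$
The first condition expresses the gauge invariance of $S_{(0)}$, whilst the second shows that $(S_{(1)},S_{(1)})$, which encodes the failure of $\delta^2$ to vanish, must be proportional to the equations of motion, so that it can be compensated by a term $S_{(2)}$ quadratic in the antifields. This is precisely the situation encountered below.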
The ambitwistor string includes a field $e_a(z)$ whose algebra only closes on-shell and we anticipate a Master Action that is non-linear in the antifields. To illustrate the point, let us see what happens if we try to repeat the previous construction ignoring the fact that the $e_a$ algebra is open. We find that the action $S=S_{(0)}+\delta\Psi$ is not BRST invariant. How can this be? The algebra is only nilpotent on the support of the $\lambda^a(z)$ equation of motion. This is a consequence of the fact that the $f_a^{bc}(\lambda)$ are functions of $\lambda^a(z)$ and not constants (and so, ultimately, of the failure of the octonions to be associative). This problem can be overcome by including non-linear antifield terms in ${\cal S}$. Following the general formalism laid out in \cite{Henneaux:1989jq} it is not too hard to see that what we need is the additional term
$$
S_{(2)}=e^{*a}\omega^{*b}\Big(2\eta_b\eta_a-\Gamma_{ab}^{\mu}\eta_{\mu}\Big)+2e^{*\mu}\omega^{*a}\Big( \Gamma^{\nu ad}\Gamma_{\mu db}\eta_a\eta_{\nu}-\Gamma_{\mu ab}\lambda^b\eta\Big)-4\omega^{*a}e^*\eta_a\eta.
$$
The modified action is then ${\cal S}=S_{(0)}+S_{(1)}+S_{(2)}$. Note that the presence of $e^{a*}(z)$, $e^{*\mu}(z)$, $e^*(z)$ and $\omega^{*a}(z)$ fields will alter the BRST transformation of the $e_a(z)$, $e_{\mu}(z)$, $e(z)$ and $\omega_a(z)$ fields; however, since our chosen gauge-fixing fermion (\ref{gff}) is independent of $\omega_a(z)$, we see that
$$
\omega^{*a}=\frac{\delta \Psi}{\delta\omega_a}=0,
$$
and so these modifications, though essential in ensuring the BRST-invariance of the theory, do not affect our previous considerations.
As an illustration, let us ignore the reducibility of the symmetries of the theory and focus on the failure of the BRST charge to be nilpotent on $e_a(z)$ (this amounts to setting the $\eta_{\mu}(z)$ and $\eta(z)$ ghosts to zero). The variation of ${\cal S}$ with respect to $\omega^{*a}(z)$ gives the $\lambda^a$ equation of motion (now incorporated into the cohomology as an exact cycle)
$$
\delta \omega^{*a}=({\cal S},\omega^{*a})=\frac{\delta {\cal S}}{\delta\omega_a}=\bar{\partial}\lambda^a+\xi^{ab}e_b.
$$
The $e_a$ transformation is now
$$
\delta e_a=({\cal S},e_a)=\frac{\delta {\cal S}}{\delta e^*_a}=-\bar{\partial}\eta_a+f_{a}^{bc}\eta_be_c+2\omega^{*b}\eta_a\eta_b.
$$
A quick calculation shows, taking into account the variation of $\omega^{*a}$, that $\delta^2e_a$ now vanishes.
\subsection{Computing Observables}
We want to move towards computing observables and in this final section we make some speculative remarks in this direction. An important issue is how to make sense of the somewhat formal path integral expression we have derived in previous sections. We will focus on the bosonic case but we expect the supersymmetric generalisation to be straightforward. We will be particularly interested in thinking about the expression (\ref{integral}) as an integral of a top holomorphic form $\Omega=\left\langle {\cal V}_1...{\cal V}_n\right\rangle$ over a bundle ${\cal E}$ with base ${\cal M}$. The correlation function under the integral should be computed using the Master Action subject to the constraint (\ref{anti}). Starting with the gauge-fixed action
\begin{equation}\label{X}
S=\int_{\Sigma} {\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z} +S_{\text{gh}},
\end{equation}
where $S_{\text{gh}}=b\bar{\partial}c+\rho^a\bar{\partial}\eta_a-\rho^{\mu}\bar{\partial}\eta_{\mu}+\rho\bar{\partial}\eta$, we consider how this changes as we move around the space ${\cal E}$. It is important to note that $\hat{\delta}$ denotes a change in the moduli of the gauge fields, whereas the discussion in previous sections has largely focussed on infinitesimal gauge transformations at the same point in moduli space. Under a small change in the worldsheet metric $\hat{\delta}\mu=\delta\tau^m(\partial \mu/\partial\tau^m)$, the Lagrangian changes as $\hat{\delta}{\cal L}=\hat{\delta}\mu\, T$ where $T$ is the full stress tensor (\ref{T}). Similarly, a change in the Lagrange multiplier $e_a$ gives rise to $\hat{\delta}{\cal L}=\hat{\delta}e_a\;{\cal G}^a$, where $\hat{\delta}e_a=\delta\tau^r(\partial e_a/\partial\tau^r)$, and so the general variation of $e_a(z,\tau)$ is given by $\Delta e_a=-\bar{\partial}\varepsilon_a-4\delta_a^{[b}\lambda^{c]}\varepsilon_ce_b-Z^{\mu}_a\varepsilon_{\mu}+\delta\tau^r\partial_r e_a$. Similar expressions may be found for changes in $e_{\mu}$ and $e$ and the corresponding response on the gauge-fixed action may be easily deduced from the expression for $\delta\Psi$ given in (\ref{dPsi}), yielding
$$
\hat{\delta} S=\int_{\Sigma}\hat{\delta}\mu\,T+\hat{\delta}e_a\,{\cal G}^a+\hat{\delta}e_{\mu}\,{\cal G}^{\mu}+\hat{\delta}e\,{\cal G}.
$$
Using the transformations (\ref{G}), an invariant action is obtained by adding the term \cite{Witten:2012bh}
$$
S_{\text{ext}}=\int_{\Sigma}\,\hat{\delta}\mu \,b+ \hat{\delta} e_a\,\rho^a+ \hat{\delta} e_{\mu}\,\rho^{\mu}+ \hat{\delta}e\,\rho,
$$
to (\ref{X}). This term produces $b$ and $\rho$ ghost zero mode contributions and is another way of seeing how the $\mathbf{b}(\vec{v})$ and $\mathbf{B}(\tilde{\mu})$ insertions (\ref{B}) in (\ref{integral}) arise. Introducing ${\cal W}:=b\,\mu+\rho^a\, e_a+\rho^{\mu}\,e_{\mu}+\rho\, e$, the extended action may be written in terms of the extended BRST operator $\widehat{Q}$ which generates these transformations
$$
S=\int_{\Sigma} {\cal Z}\hspace{-.06cm}\cdot\hspace{-.06cm}\bar{\partial}{\cal Z}+\{\widehat{Q},{\cal W}\}+S_{\text{gh}}.
$$
Given a set of observables ${\cal V}_i(z_i)$, we expect the correlation function of $n$ such observables given by a path integral to follow the general approach outlined in \cite{Ohmori:2015sha}. The path integral localises on critical points of the Morse function $\Re({\cal I})$ where
$$
{\cal I}=-\langle\mu,T\rangle-\langle e_a,{\cal G}^a\rangle-\langle e_{\mu},{\cal G}^{\mu}\rangle+\langle e,{\cal G}\rangle,
$$
giving the tree-level correlation function as a sum over such critical points. The role of the antifields would need to be carefully examined. The key outstanding task is to identify the cohomology of the BRST operator and to write down concrete examples of vertex operators so that the above prescription may be investigated fully.
\section{Discussion}
There is a sense in which the twistor variables used here provide a more natural description of the ambitwistor string, one that may make the subtleties relating constructions in different dimensions more explicit. We have seen that, even in the bosonic case, working in the twistorial variables ${\cal Z}(z)$ instead of the more familiar pair $(X(z),P(z))$ leads to a significant increase in the complexity of the theory due to the more involved constraints. Whilst one might argue that this complexity is indicative of a richness that the twistor variables bring to the surface, it seems likely that explicit computations will be less efficient in this formalism. However, it is possible that certain types of problems that are difficult or intractable in the conventional approach may be fruitfully tackled in this language.
In particular, there has been progress recently in constructing ambitwistor strings in non-trivial Neveu-Schwarz backgrounds \cite{Adamo:2017nia,Adamo:2017sze} and it would also be interesting to generalise the construction here to curved backgrounds. An interesting feature of the target space supersymmetric theory is that, in contrast to the conventional superstring, the reducibility of the constraints of the ambitwistor string considered here appears manageable and one might hope to make progress in the computation of scattering amplitudes with Ramond-Ramond backgrounds. The example of $AdS_5\times S^5$ would be particularly interesting. Even though such a calculation would simply be a re-derivation of known supergravity results one might hope that the method, arising as it does from a worldsheet theory, might shed some light on the corresponding problem in superstring theory.
Before more ambitious applications can be investigated there are a number of outstanding issues to be clarified. The most pressing is to determine the spectrum of the physical states of the supersymmetric theory and to construct explicit vertex operators. The classical starting point is the ambitwistor string of \cite{Mason:2013sva} and so our expectation is that the theory described here also describes Type II supergravity, or at least those results accessible from perturbation theory. Related to this, one would also like a careful definition of the operators used and an analysis of possible gauge anomalies. The structure of the ghost vacuum also deserves a more thorough analysis. It would also be interesting to see what the analogue of the scattering equations is in this formalism. We expect the role played by the scattering equations in the conventional formalism will be filled by the constraints $G^a(z)$ but, without concrete expressions for vertex operators in this language, it is difficult to make a concrete proposal. We leave these and other questions to future work.
\begin{center}
\textbf{Acknowledgements}
\end{center}
The authors would like to thank Nathan Berkovits, Lionel Mason and Paul Townsend for helpful discussions. This work has been partially supported by STFC consolidated grant ST/P000681/1.
\section{Introduction}
\label{sec:introduction}}
\IEEEPARstart{A}{s} the aggressive scaling of complementary metal-oxide-semiconductor (CMOS) technology nodes reaches physical device limits, traditional CMOS-based architectures face significant challenges ranging from saturated performance gains to increased power density and variability, coupled with concerns for reliability.
The continual miniaturization of CMOS technology has not only complicated chip design but also necessitated advanced and expensive fabrication facilities.
Alternative technologies that can augment CMOS in enabling higher memory and logic efficiency are being pursued extensively.
These include several emerging devices such as silicon nanowire field-effect transistors (SiNW-FETs)~\cite{de2012polarity}, memristors~\cite{chua1971memristor},
negative capacitance FETs (NCFETs)~\cite{salahuddin2008use}, spin devices~\cite{manipatruni2019scalable}, etc.
Spintronic devices, in particular, have emerged as one of the top contenders for the post-CMOS era~\cite{manipatruni2019scalable}.
\begin{table}[ht]
\centering
\footnotesize
\caption{IP Protection Techniques Versus
Untrusted Entities
in IC Supply Chain
(\ding{51}: Protection Offered, \ding{55}: No Protection Offered)}
\label{tab:protection_comparison_2}
\setlength{\tabcolsep}{1mm}
\renewcommand{\arraystretch}{1.6}
\begin{tabular}{*{4}{c}}
\hline
\textbf{Technique} &
\textbf{FEOL/BEOL} &
\textbf{Test Facility} &
\textbf{End-User}
\\ \hline
Logic Locking &
\ding{51}/\ding{51} & \ding{51}~\,\,(\cite{yasin16_test}) &
\ding{51}
\\ \hline
Layout Camouflaging & \ding{55}/\ding{55}~\,\,(\ding{51}/\ding{55}~\cite{patnaik2020obfuscating}) &
\ding{55}~\,\, &
\ding{51}
\\ \hline
Split Manufacturing &
\ding{51}/\ding{55}~\,\, &
\ding{55} &
\ding{55}~\,\,(\ding{51}~\cite{patnaik2018best,patnaik2019modern})
\\ \hline
\textbf{Dynamic Camouflaging} &
\ding{51}/\ding{51} &
\ding{51} &
\ding{51}
\\ \hline
\end{tabular}
\end{table}
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{figures/Threat_model.pdf}
\caption{Threat model for
\textit{dynamic camouflaging}-based IP protection.
Green and red blocks represent the \textit{trusted} and \textit{untrusted} entities, respectively.
The protection schemes---which are all flavors of dynamic camouflaging---employed for each of the untrusted entities are mentioned below the respective red blocks, and indicated by green shields.
Gate replacement, which can either be random or guided by a designer's chosen heuristic, involves the selective replacement of gates in the original netlist with polymorphic gates (magneto-electric spin-orbit (MESO) gates in this work).
After fabrication and testing, the design is sent back to the design house (or some trusted facility) for functional reconfiguration before being deployed in the open market.
ATPG stands for automatic test pattern generation.
}
\label{fig:threat_model}
\end{figure*}
The expeditious globalization of the electronics industry has resulted in the outsourcing of the integrated circuit (IC) supply chain.
Such a distributed supply chain, which is often spread across geographically different locations, enables various attacks, ranging from piracy of intellectual property (IP) to illegal and unauthorized overproduction of ICs, and targeted insertion of malicious circuits known as hardware Trojans.
IP piracy, in particular, is quite multi-faceted and an attacker has different avenues to mount such an attack, ranging from an \textit{untrustworthy foundry}, an \textit{untrustworthy test facility}, to \textit{malicious end-users} (Fig.~\ref{fig:threat_model}).
Estimates suggest losses to the tune of billions of dollars annually due to the infringement of IP cores.
While malicious employees residing in an untrusted foundry or an end-user could pirate the design by reverse engineering (RE) and/or mounting Boolean satisfiability (SAT)-based attacks~\cite{subramanyan15,massad15} to decipher the chip IP, an adversary in the test facility can misuse test patterns to compromise the security of a chip~\cite{yasin16_test,yasin17_TIFS}.
Various design-for-trust schemes have been proposed in the literature (including few which have been demonstrated on silicon) over the past decade to counter IP piracy.
Table~\ref{tab:protection_comparison_2} summarizes the protection offered by some of these techniques in the face of untrusted entities; they are discussed briefly next.
\subsection{An Overview of IP Protection Schemes}
\textbf{Logic locking} (LL) protects the underlying design IP by inserting dedicated locks, which are controlled by a secret key.
A locked circuit contains additional inputs, which are referred to as \textit{key inputs}, and are driven by an on-chip \textit{tamper-proof memory} (TPM).
Most common locking mechanisms are realized by inserting additional logic (e.g., XOR/XNOR gates, AND/OR gates or look-up tables (LUTs)).
The locked IC is activated by a trusted facility or the design house after fabrication and testing (but before deploying in the open market), namely by loading the secret key onto the chip's dedicated TPM.
Examples include random logic locking (RLL), fault analysis-based locking (FLL)~\cite{rajendran15},
Anti-SAT~\cite{xie2016mitigating}, and stripped functionality logic locking (SFLL)~\cite{yasin17_CCS}.
Note that the overall security of LL hinges on the \textit{secure realization of TPMs}, which remain under active research and development~\cite{anceau17}.
\textbf{Layout camouflaging} (LC) obfuscates the layout implementation---and thereby attempts to obfuscate the
functionality---by using specialized camouflaged cells which aim to be indistinguishable across several functions.
This can be achieved by (i)~using dummy contacts~\cite{rajendran13_camouflage},
(ii)~leveraging threshold voltage-dependent cells~\cite{erbagci16},
(iii)~incorporating AND-tree camouflaging~\cite{li16_camouflaging}, and (iv) obfuscating the interconnects~\cite{patnaik2020obfuscating}.
An important consideration for LC is that almost all prior works need to \textit{trust the foundry} for implementing their obfuscation mechanisms.
\textbf{Split manufacturing} (SM) entails the physical separation of the entire chip stack into front-end-of-line (FEOL) and back-end-of-line (BEOL) layers, across geographically distinct foundries.
Typically, the FEOL consists of transistors (device layer) and lower metal layers (M1--M3) which are fabricated by an advanced, off-shore \textit{untrustworthy} foundry, while the remaining metal layers are manufactured on top of the incomplete chip at a \textit{trustworthy}, in-house, low-end facility~\cite{rajendran2013split}.
This physical separation of the design IP avoids dissemination of the complete layout information to one untrustworthy foundry.
A multitude of techniques has been proposed in the recent literature to safeguard FEOL layouts for SM,
e.g.,~\cite{rajendran2013split,patnaik18_SM_ASPDAC,patnaik18_SM_DAC}.
However, it is essential to note that SM can safeguard the design IP from untrusted foundries only, \textit{but not against untrusted end-users}.
To summarize, although IP protection techniques have been proposed to safeguard the supply chain
against malicious entities, each of these solutions has some caveats.
Logic locking has the potential to protect the IC supply chain end-to-end but, in its current state, the resilience depends on a TPM to store the secret key.
\subsection{Role of Emerging Devices in Securing Hardware}
Emerging devices are prime candidates for augmenting hardware security~\cite{bi16_JETC,alasad2017leveraging,patnaik18_GSHE_DATE,patnaik2019spin}.
The controllable ambipolarity in
SiNW-FETs has been exploited to implement camouflaged
layouts in~\cite{bi16_JETC}.
Recent research in the field of emerging device-based security has explored the domain of spintronics~\cite{ghosh2016spintronics}.
Spin devices like the charge-spin logic
and magneto-electric spin-orbit logic (MESO)~\cite{manipatruni2019scalable} possess innate \textit{run-time polymorphism} and
\textit{post-fabrication reconfigurability} capabilities, which are typically not afforded by CMOS and other emerging devices.
The additive nature of the input spin currents coupled with a magnetic tunnel junction (MTJ)-based differential voltage readout enables these spin devices to implement majority logic directly and exhibit polymorphic characteristics.
Recent works~\cite{patnaik18_GSHE_DATE,patnaik2019spin} on using emerging devices for LC have leveraged polymorphic logic for static camouflaging.
However, the \textit{true potential} of polymorphic devices lies in \textit{dynamic camouflaging},
which is as yet unexplored---therefore, exploring dynamic camouflaging is the focus of this paper.
\subsection{Dynamic Camouflaging}
Dynamic camouflaging involves obfuscating and switching the device-level functionality \textit{post-fabrication}, as well as \textit{during run-time}, thereby hindering various attacks throughout the IC supply chain.
\ul{We study \textit{dynamic camouflaging} using polymorphic spin devices
and establish \textit{security} and \textit{computational accuracy} as two entangled design variables, especially for error-tolerant applications such as image processing and
machine learning.}
We focus on such scenarios as we believe they are meaningful, but we note that polymorphic gates can in
principle result in any arbitrary dynamic behavior. However, such behavior can be impractical,
as it would come along with an excessive loss of computational accuracy.
In other words, we study dynamic camouflaging based on run-time reconfiguration among functionally equivalent or approximately equivalent circuit structures
with the help of polymorphic gates, while maintaining the practicality of such circuits.
For such applications, dynamic camouflaging can thwart both exact~\cite{subramanyan15,massad15} and approximate SAT (\textit{AppSAT})
attacks~\cite{shamsi17}, as we show in this work.
In general, we extensively discuss securing the supply chain
end-to-end using spin-based devices,
and circumventing the risks associated with
untrusted foundries, test facilities, and end-users (Fig.~\ref{fig:threat_model} and Table~\ref{tab:protection_comparison_2}).
\subsection{Contributions}
The contributions of this work are summarized
as follows:
\begin{enumerate}
\item We introduce the concept of \textit{dynamic camouflaging} leveraging the inherent functional polymorphism of spin devices.
Toward this end, we demonstrate the promising security properties pertaining to the MESO device as a representative spin device. We choose the MESO device owing to its superior performance metrics and CMOS compatibility.
\item We propose a secure end-to-end solution to
counter IP piracy across the distributed IC supply chain, encompassing an untrusted foundry, untrusted test facility, and an untrusted end-user.
This is the \textit{first} work in the context of LC to safeguard the supply chain end-to-end.
Extensive simulations demonstrate the superior resilience of our proposed scheme against state-of-the-art attacks.
\item From the purview of an untrusted foundry, we show that advanced ``inside foundry'' attacks do not compromise our security claims, as we rely on the concept of \textit{post-fabrication reconfigurability}.
\item The idea of post-fabrication
reconfigurability is also leveraged to demonstrate resilience against attackers in an
untrusted testing facility.
By employing \textit{post-test configuration}, we
protect the design IP against test-data mining attacks like \textit{HackTest}~\cite{yasin17_TIFS}.
We carry out detailed simulations on various test cases for static and dynamic camouflaging.
\item We extend the benefits of dynamic camouflaging, through \textit{dynamic morphing}, to protect also against untrusted
end-users, especially for error-tolerant applications such as image processing.
We show the implications of using approximate SAT-based attacks (\textit{AppSAT})~\cite{shamsi17} for the same.
\item Finally, we project the superior cost in terms of synthesis-level power, performance, and area (PPA) for full-chip camouflaging in contrast
with other selected, spin-based camouflaging schemes.
\end{enumerate}
\section{Background and Motivation}
\label{sec:background}
Here, we discuss the recent advancements in LC
along with demonstrated attacks, which have been tailored toward static camouflaging.
Further, we report on some early studies directed toward the notion of dynamic camouflaging.
\subsection{Static Layout Camouflaging \& SAT-Based attacks}
\label{sec:static_camouflaging}
\begin{figure*}[ht]
\centering
\includegraphics[width=2\columnwidth]{figures/MESO_gates.pdf}
\caption{(a-h) Implementation of INV, BUF, AND, OR, NAND, NOR, XOR, XNOR with a single MESO device, using different input configurations.
Signals $A$ and $B$ are logic inputs, and $X$ is a control input required for some functionalities.
Note that INV, BUF, XOR, and XNOR gates have dummy wires/contacts at their input terminals, to make them optically indistinguishable from other implementations.}
\label{fig:primitive}
\end{figure*}
Early research in the field of LC was aimed primarily toward (i)~selection of gates to be
camouflaged, and (ii)~the design of camouflaged cells.
Most of the existing LC schemes have a high layout cost (in terms of PPA)
and are therefore limited for practical implementation.
The ambiguous XOR-NAND-NOR camouflaged cell proposed
in the seminal work of~\cite{rajendran13_camouflage} has a power overhead of 5.5$\times$,
a timing overhead of 1.6$\times$, and an area overhead
of 4$\times$ when compared to a
conventional 2-input NAND gate.
Promising work such as the threshold-dependent, full-chip LC proposed in~\cite{erbagci16} induces overheads of 14\%, 82\%, and 150\% on PPA, respectively.
Therefore, existing LC schemes can be applied
only selectively due to their significant impact
on PPA budgets.
Such a constrained application of these
techniques (e.g., camouflaging a fixed set of gates) leads to a compromise in security,
which is discussed next.
\ul{It should also be noted that most existing camouflaging schemes
necessitate the use of a \textit{trusted} foundry and the camouflaging effected by them is \textit{static}.}
In 2015, Subramanyan \textit{et al.}~\cite{subramanyan15} and Massad \textit{et al.}~\cite{massad15} independently
challenged the security guarantees offered by LL and LC, respectively.
The attack---commonly referred to as SAT-based attack in the literature---leverages Boolean satisfiability
to compute so-called \textit{discriminating input patterns} (DIPs).
By definition, a DIP generates different outputs for the same input pattern across two (or more) different keys, which indicates that at least one of the keys is incorrect.
The attack then proceeds in a step-wise fashion
where different DIPs are evaluated until all wrong keys have been eliminated.
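To fix ideas, the following minimal sketch illustrates the DIP-based pruning loop on a toy locked circuit with a two-bit key; brute-force enumeration stands in for the SAT solver used by the actual attacks~\cite{subramanyan15,massad15}, and the toy circuit is our own construction:
\begin{verbatim}
# Minimal sketch of oracle-guided key pruning (the essence of the
# SAT-based attack); real attacks encode this search as a SAT
# instance instead of enumerating a toy key/input space.
from itertools import product

def locked(x, key):
    a, b, c = x
    g1 = (a & b) if key[0] else (a | b)        # camouflaged gate 1
    g2 = (g1 ^ c) if key[1] else 1 - (g1 ^ c)  # camouflaged gate 2
    return g2

SECRET = (1, 0)                       # known only to the oracle
oracle = lambda x: locked(x, SECRET)  # the activated working chip

keys = list(product((0, 1), repeat=2))
inputs = list(product((0, 1), repeat=3))

while True:
    # A DIP is an input on which two surviving keys disagree.
    dip = next((x for x in inputs
                if len({locked(x, k) for k in keys}) > 1), None)
    if dip is None:
        break                         # surviving keys are equivalent
    y = oracle(dip)                   # query the oracle on the DIP
    keys = [k for k in keys if locked(dip, k) == y]

print("surviving keys:", keys)        # contains the correct key
\end{verbatim}
Each oracle query eliminates at least one incorrect key, and the loop terminates once all surviving keys are functionally equivalent on every input.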
Inspired by the promise raised by the SAT-based attack, research groups focused on SAT-resilient camouflaging
schemes~\cite{li16_camouflaging} which force the attack to explore an exponential number of DIPs.
Such SAT-resiliency is achieved by inserting so-called point functions, e.g., AND-trees and OR-trees, which ultimately leads to very low output corruptibility.
High-corruptibility schemes like
FLL~\cite{rajendran15} are integrated with such SAT-resilient schemes to improve output corruptibility, thereby providing a two-layer defense.
Shamsi \textit{et al.}~\cite{shamsi17} formulated \textit{AppSAT}, which reduces such compound schemes to their low-corruptibility constituent by ``peeling off'' the high-corruptibility portion.
\subsection{Toward Dynamic Camouflaging}
\label{sec:toward_dynamic_camo}
Dynamic camouflaging builds on the foundations of polymorphic computing which is
a subset of reconfigurable computing.
Reconfigurable computing using programmable devices (such as field-programmable gate arrays, FPGAs)
typically fixes the logic functionality of the chip
\textit{before} run-time.
In polymorphic computing, however, the devices are
reconfigured in time and space \textit{during} run-time.
Therefore, dynamic camouflaging involves dynamically obfuscating the circuit at the device/circuit level.
\ul{Individual gates are configured correctly \textit{only after fabrication and testing}, and these gates can further switch between
different functionalities at run-time by application of certain control inputs---we refer to this approach as \textit{dynamic morphing}.}
Contrary to static camouflaging schemes like~\cite{rajendran13_camouflage,erbagci16,li16_camouflaging,patnaik2020obfuscating},
dynamic camouflaging requires polymorphic logic gates.
Prior work using programmable CMOS for IP protection
leverages reconfigurable logic barriers~\cite{baumgarten2010preventing}
and reconfigurable key gates~\cite{liu2014embedded}.
These techniques \textit{do not} use
functional polymorphism, but rather fix the logic functionality using select lines and/or key-bits. Although it is possible to
implement functional polymorphism using CMOS-based reconfigurable units, such as LUTs within FPGAs, the overheads incurred by such schemes can be
high, as discussed further in Section~\ref{sec:PPA_cost_analysis}.
The notion of dynamic functional obfuscation was put forward by Koteshwara \textit{et al.}~\cite{koteshwara17}, where sequentially triggered counters are leveraged to provide security guarantees. This scheme requires additional circuitry to alter the key, which is potentially prone to removal attacks.
Another study leverages \textit{hot-carrier injection}
to program threshold voltage-based CMOS gates post-fabrication~\cite{akkaya2018secure}.
The authors also showed a proof-of-concept implementation by fabricating obfuscated adders
in a 65-nm bulk CMOS process.
However, they \textit{do not support run-time reconfiguration} and suffer from large PPA
overheads.
For example, a camouflaged NAND gate incurs power overhead of 9.2$\times$, delay overhead of 6.6$\times$, and area overhead of 7.3$\times$,
all with respect to a regular 2-input NAND gate.
Run-time polymorphism and, hence,
dynamic camouflaging is challenging
to implement for CMOS at the device level,
owing to fundamental limits.
Our scheme enables a radically different solution, wherein we use the unique properties of
spin devices to achieve truly polymorphic chips.\footnote{While we choose the MESO device as a representative example for our work,
the concepts presented in this work can be readily extended to any emerging device which exhibits qualities like functional polymorphism and post-fabrication functionality reconfiguration.}
This is especially useful for error-tolerant applications such as image processing.
We argue that dynamic camouflaging is also
particularly promising for
approximate computing applications, which trade-off computational accuracy for better energy-efficiency
(Sec.~\ref{sec:case_study}).
\section{Dynamic Camouflaging: Working Principle}
\label{sec:working_principle}
\subsection{The Magneto-Electric Spin-Orbit (MESO) Device: Construction and Operation}
The spin device considered in this study is the MESO device, whose operation is based on the phenomena of magneto-electric (ME) switching~\cite{lottermoser2004magnetic}
and inverse spin-orbit effects~\cite{dyakonov1971current}.
The schematic of the MESO device implementing different Boolean functions is shown in Fig.~\ref{fig:primitive}.
The inputs/outputs are electric currents, and the logical information is encoded in the direction of the current flow.
A detailed description can be found in~\cite{manipatruni2019scalable}.
During the writing phase, an input electric
current flowing in the $\pm \hat{y}$ direction through the non-magnetic interconnect sets up
an electric field in the $\pm \hat{z}$ direction within the ME capacitor (red in Fig.~\ref{fig:primitive}).
The resulting ME field switches the magnetization state of the ferromagnet (purple) along the $\pm \hat{x}$ direction.
Information is written into the MESO device by transducing
the input electric current
into the magnetization state of the device.
Typical room-temperature multiferroics used for the ME capacitor include BiFeO$_3$ and LuFeO$_3$.
The charge accumulation across an ME capacitor in response to an applied electric field is given as $Q_{\text{ME}}=A_{\text{ME}}(\epsilon_{\text{0}}\epsilon_{\text{mf}}E+P_{\text{mf}})$, where $A_{\text{ME}}$ is the cross-sectional area of the capacitor, $\epsilon_{\text{0}} = 8.85\times 10^{-12}$ F/m is the permittivity of free space, $\epsilon_{\text{mf}}$ is the relative dielectric permittivity of the ME, and $P_{\text{mf}}$ is the saturated ferroelectric polarization.
For the BiFeO$_3$ capacitor considered in~\cite{manipatruni2019scalable}, $A_{\text{ME}}=10^{-16}$ m$^2$, while $\epsilon_{\text{mf}}=54$.
The electric field to be applied to the ME capacitor to switch it all-electrically is $E=E_{\text{mf}}B_{\text{c}}/B_{\text{mf}}$, where $E_{\text{mf}}= 1.8\times10^6$ V/m refers to the electric switching field, $B_{\text{mf}}=0.03$ T is the exchange bias at switching field,
and $B_{\text{c}}=0.1$ T is the ME switching field.
After the writing process is complete, which takes $\sim 200$ ps~\cite{manipatruni2019scalable},
the supply voltages $V^+$ and $V^-$ are turned on to initiate the reading phase.
In the reading phase, a spin-polarized current is
injected into the spin-orbit coupling (SOC) layer,
which converts the spin current into electric
current at the output node ($I_{\text{out}}$),
due to the inverse spin-Hall and inverse Rashba-Edelstein
effects~\cite{shen2014microscopic}.
These topological effects result in the shifting of the Fermi surface of the high-SOC material
in k-space.
This shift causes a charge imbalance and hence a charge current in the Fermi surface,
in a direction orthogonal to the injected spin density.
The magnitude of the charge current transduced by
the SOC layer as a result of the applied spin
density is given by
\begin{equation}
j_{\text{c}} = \frac{\alpha_{\text{R}}\tau_{\text{s}}}{\hbar} j_{\text{s}}=\lambda_{\text{IREE}} \>j_{\text{s}},
\end{equation}
where $\alpha_{\text{R}}$ is the Rashba coefficient, $\tau_{\text{s}}$ is the spin relaxation time and $\lambda_{\text{IREE}}$ ($\sim1.4\times10^{-8}$m) is the inverse Rashba-Edelstein length of the SOC material~\cite{manipatruni2019scalable}.
The direction of the output current is determined
by the polarity of the supply voltages $V^+$/$V^-$
($\pm$100 mV) applied on the nanomagnet, and the final
magnetization state of the ferromagnet.
For instance, when the ferromagnetic moment is along $+\hat{x}$ and the flow of the injected spin current is along $-\hat{z}$, with spin polarization along $+\hat{x}$, the direction of the charge current generated is along $+\hat{y}$ (Fig.~\ref{fig:primitive}a).
However, when the ferromagnet is reversed to the $-\hat{x}$ direction, with the injected
spin current direction unchanged
but the spin polarization now along $-\hat{x}$,
the output charge current reverses to $-\hat{y}$.
The same reversal in the direction of output
current can also be achieved by keeping the ferromagnetic moment constant and flipping
the voltage polarities $V^{+}/V^{-}$.
The total intrinsic switching time of the MESO device is a combination of the time taken to charge the multiferroic
capacitor, $\tau_{\text{ME}}$, and the ferroelectric polarization/magnetization reversal time, $\tau_{\text{mag}}$.
These are given as $\tau_{\text{ME}}= 2Q_{\text{ME}}/ I_{\text{ISOC}}$ and
$\tau_{\text{mag}}= \pi/ (\gamma B_{\text{c}}),$
where
$ I_{\text{ISOC}}$ is the current produced by the spin-orbit
effect and $\gamma$ is the gyromagnetic ratio of the electron.
Evaluating these switching times according to the parameters provided in the supplementary material of~\cite{manipatruni2019scalable} yields an
intrinsic switching time of $\sim$230 ps. The total switching time of the MESO device is then obtained as $\sim$258 ps, by adding the
interconnect delay of $2.9$ ps (quoted from the supplementary material of~\cite{manipatruni2019scalable}) and the extrinsic peripheral delay of
$\sim$25 ps which corresponds to
multiplexers (MUXes) simulated using \textit{Cadence Virtuoso} for the 15-nm CMOS node, considering the NCSU FreePDK15 FinFET library, for a supply voltage of 0.8 V.
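The following back-of-the-envelope sketch reproduces these delay estimates; note that the values of $P_{\text{mf}}$ and $I_{\text{ISOC}}$ below are illustrative assumptions (a typical BiFeO$_3$ polarization and a spin-orbit charge current of a few microamperes, chosen to be consistent with the quoted $\sim$230 ps), since they are not listed above:
\begin{verbatim}
# Back-of-the-envelope evaluation of the MESO switching times.
# P_mf and I_ISOC are illustrative assumptions (not quoted in the
# text); all other values are taken from above.
import math

gamma = 1.76e11          # electron gyromagnetic ratio (rad/(s.T))
B_c   = 0.1              # ME switching field (T)
A_ME  = 1e-16            # capacitor cross-section (m^2)
eps0, eps_mf = 8.85e-12, 54
E = 1.8e6 * 0.1 / 0.03   # E = E_mf * B_c / B_mf = 6e6 V/m
P_mf   = 0.9             # assumed BiFeO3 polarization (C/m^2)
I_ISOC = 3.5e-6          # assumed spin-orbit current (A)

Q_ME    = A_ME * (eps0 * eps_mf * E + P_mf)  # ~9.0e-17 C
tau_ME  = 2 * Q_ME / I_ISOC                  # ~52 ps
tau_mag = math.pi / (gamma * B_c)            # ~179 ps

total = tau_ME + tau_mag + 2.9e-12 + 25e-12  # + wires + MUXes
print(f"intrinsic ~{(tau_ME + tau_mag)*1e12:.0f} ps, "
      f"total ~{total*1e12:.0f} ps")         # ~230 ps, ~258 ps
\end{verbatim}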
For a further, in-depth analysis about the switching and transduction processes in the MESO device,
interested readers are kindly referred to~\cite{manipatruni2019scalable}.
Finally, we note that the MESO device has sufficient gain, namely $\sim10$~\cite{manipatruni2019scalable}, to drive multiple fan-out stages.
\subsection{Polymorphic Gates}
\label{sec:polymorphic_gates}
By switching the polarity of the supply voltages, we can implement a buffer (BUF) or an inverter (INV) using the same device (Fig.~\ref{fig:primitive}(a,b)).
Further, we can implement complex gates such as majority logic, by leveraging the additive nature of the input signals.
As shown in Fig.~\ref{fig:primitive} (c,d), $A$ and $B$ are the signal inputs and $X$ is the tie-breaking control input.
The polarity of $X$ decides the functionality of the MESO gate.
Here, for $X=-I$, it realizes an AND gate and for $X=+I$, it realizes an OR gate.
To implement NAND and NOR gates, the polarities of the supply voltages are flipped
(Fig.~\ref{fig:primitive} (e,f)).
For XOR and XNOR gates, the tie-breaking input $X$ is eliminated, and one signal is provided at the input terminal.
The other input signal is encoded in the voltage domain and applied directly at the $V^+$/$V^-$ terminals
(Fig.~\ref{fig:primitive} (g,h)).
Illustrative waveforms showing the device operation and functional reconfiguration between AND/OR and NAND/NOR, on flipping the control signal $X$, are shown in Fig.~\ref{fig:MESO_Timing}.
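A minimal behavioral sketch of this polymorphism, abstracting logic values as current directions ($\pm$1) and ignoring all device-level timing, is given below; it mirrors the input configurations of Fig.~\ref{fig:primitive}:
\begin{verbatim}
# Behavioral sketch of MESO polymorphism; logic values are the
# current directions (+1/-1). This abstracts Fig. (a-h) and is
# not a device-level model.
def meso(inputs, supply=+1):
    """Majority over the additive input currents; supply=-1
    models flipped V+/V- terminals (inverted readout)."""
    s = sum(inputs)
    assert s != 0, "tie: a control input X is required"
    return supply * (1 if s > 0 else -1)

AND  = lambda a, b: meso([a, b, -1])             # X = -I
OR   = lambda a, b: meso([a, b, +1])             # X = +I
NAND = lambda a, b: meso([a, b, -1], supply=-1)
NOR  = lambda a, b: meso([a, b, +1], supply=-1)
BUF  = lambda a: meso([a])
INV  = lambda a: meso([a], supply=-1)
XOR  = lambda a, b: meso([a], supply=-b)         # b: voltage domain
XNOR = lambda a, b: meso([a], supply=b)

for a in (-1, +1):
    for b in (-1, +1):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b))
\end{verbatim}
Toggling only the \texttt{supply} argument (i.e., the voltage polarities) or only the control current $X$ morphs the gate between its paired functionalities, exactly as in the waveforms above.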
The MESO device with additional peripheral
circuitry is shown in Fig.~\ref{fig:MESO_peripherals}.
The \textit{control bits} deciding the input and control signals can either be derived from a control block
(Fig.~\ref{fig:MESO_adder_subtractor}),
or even from a true random number generator (TRNG),
if random reconfiguration is
applicable, e.g., for error-tolerant applications
such as image (video) processing, machine learning, etc.
Configuring the MESO device via different supply voltages and
electric currents allow us to dynamically implement
all basic Boolean gates within a single structure.
This essential feature is used for dynamic camouflaging in this work.
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\columnwidth]{figures/MESO_peripherals.pdf}
\caption{A generic MESO gate with peripheral
MUXes, which dictate the input and control
signals through control bits $\text{C}_1$--$\text{C}_8$.
This generic structure implements any of the Boolean functionalities in Fig.~\ref{fig:primitive} (a-h) once the appropriate control bits are provided.}
\label{fig:MESO_peripherals}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\columnwidth]{figures/MESO_timing.pdf}
\caption{Timing waveforms for MESO-based AND / OR / NAND / NOR
gates from \textit{behavioral Verilog} models,
which represent the estimated overall delays of the MESO device along with their intended functionality.
The MESO primitive's function morphs based on the value of the
voltage terminals and the control signal $X$. Toggling $X$ allows one to morph between OR $\leftrightarrow$ AND and NOR $\leftrightarrow$ NAND.
Morphing between OR and NOR involves setting the
top/bottom voltage terminals as $V^-$/$V^+$ or $V^+$/$V^-$; the converse is true for AND $\leftrightarrow$ NAND morphing.
Note that the morphing time is included in the total switching time of
$\sim$258 ps, in the form of the peripheral MUX delay.}
\label{fig:MESO_Timing}
\end{figure}
The design house can either provide a fully-camouflaged layout composed of only MESO devices, or a camouflaged layout where selected CMOS gates are replaced by MESO gates.
The MESO device is compatible with CMOS processes
in the BEOL, enabling heterogeneous integration.\footnote{In general, hybrid \textit{spin-CMOS} designs have been explored in a prior work, e.g.,~\cite{yogendra2015domain}.}
The proportion of the design camouflaged by a designer depends on the scope of application and impact of camouflaging on PPA overheads.
The replacement of logic gates can also be performed
in a manner conducive to protecting the critical infrastructure (i.e., design secrets, proprietary IP).
Please note that the MESO-based primitive can also be leveraged for \textit{static camouflaging}.
In such a scenario, the peripheral circuitry (Fig.~\ref{fig:MESO_peripherals}) dictating the functionality of the MESO device shall be fed with fixed
control bits and control signals.
Static camouflaging using spin devices has been explored in prior works; interested readers are referred to~\cite{alasad2017leveraging,patnaik18_GSHE_DATE,patnaik2019spin} for further details.
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\columnwidth]{figures/MESO_Add_Sub_layout.pdf}
\caption{MESO adder/subtractor highlighting the capabilities for
\textit{functional reconfiguration.}
The XOR and AND gates are implemented as static MESO gates and the INV / BUF is a polymorphic
MESO gate whose function is derived from control bits ($\text{C}_1$ and $\text{C}_2$) fed by a simple control block.
\textit{A} and \textit{B} are the inputs, \textit{S} is sum, \textit{D} is difference, \textit{Ca} is carry,
and \textit{Bo} is borrow.
Note that dummy contacts are omitted here for the sake of simplicity.}
\label{fig:MESO_adder_subtractor}
\end{figure}
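As a minimal behavioral sketch of the morphing in Fig.~\ref{fig:MESO_adder_subtractor} (plain 0/1 logic; the \texttt{subtract} flag is a hypothetical stand-in for the control bits $\text{C}_1$/$\text{C}_2$ issued by the control block):
\begin{verbatim}
# Illustrative model of the half adder/subtractor morphing: the
# polymorphic INV/BUF stage decides carry vs. borrow.
def add_sub(a, b, subtract):
    s = a ^ b                          # sum (S) or difference (D)
    t = (1 - a) if subtract else a     # morphed INV/BUF MESO stage
    return s, t & b                    # carry (Ca) or borrow (Bo)

assert add_sub(1, 1, 0) == (0, 1)      # 1+1 -> sum 0, carry 1
assert add_sub(0, 1, 1) == (1, 1)      # 0-1 -> diff 1, borrow 1
\end{verbatim}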
\section{Security Analysis: Untrusted Foundry}
\label{sec:security_analysis_foundry}
An attacker in the foundry can readily infer the IP implemented in CMOS, whereas the MESO gates appear as white-box devices, albeit \textit{without any fixed functionality.}
The MESO implementations of Boolean gates are optically indistinguishable in terms of
their physical layout (Fig.~\ref{fig:MESO_adder_subtractor}), which renders optical inspection-guided RE difficult.
\ul{Since our approach here relies on \textit{post-fabrication reconfigurability}, it is intuitive
that our scheme is resilient to ``inside foundry'' attacks}.
As shown in Fig.~\ref{fig:Camo_example}, the \textit{post-fabrication reconfigurability} of MESO gates hinders the attacker's effort to infer the exact functionality.
A random gate-guessing attack on the circuit shown in Fig.~\ref{fig:Camo_example} has a solution space of $6^2=36$ possible netlists
(each of the two camouflaged gates can implement any of the six outlined functions), with only one amongst them being correct.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\columnwidth]{figures/Camo_example.pdf}
\caption{An example circuit where two gates (U22 and U28) are camouflaged. The
camouflaged gates can assume any one of the outlined six 2-input functions.
The correct functionality of these camouflaged gates is shown in blue.
As the functionality of MESO gates can be reconfigured \textit{post-fabrication}, an attacker's effort of inferring the exact functionality is hindered.
With a random gate-guessing attack, an attacker has 36 possible netlists to consider, with only one amongst them being correct.}
\label{fig:Camo_example}
\end{figure}
\subsection{Threat Model}
\label{sec:foundry_threat_model}
The threat model which we adopt for security analysis for an untrusted foundry is outlined as follows:
\begin{itemize}
\item A malevolent employee in the foundry has access to the physical design, including
material and layout parameters of the MESO gates
and the chip interconnects.
While an adversary in a foundry can readily obtain
the dimensions and material composition of the nanomagnet in each MESO gate and, hence,
understand its magnetic properties including saturation magnetization, energy barrier, and critical ME field for switching,
these design details do not leak any information about the intended
functionality implemented by the gate.
\item He/she is aware of the underlying gate selection algorithm, number, and type of
camouflaged gates, but is oblivious to the actual functionality implemented by the camouflaged gate.
\item For security analysis, we assume that the working chip is not yet available in the open market.
Thus, he/she has to apply ``inside foundry'' attacks which are explained briefly next.
\end{itemize}
\begin{figure*}[tb]
\centering
\subfloat[]{\includegraphics[width=.24\textwidth]{figures/orig_ckt.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.24\textwidth]{figures/version1.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.24\textwidth]{figures/version2.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.24\textwidth]{figures/version3.pdf}}
\caption{An illustration of an incorrect gate assignment leading to logic redundancy.
In (a), gates U31 and U33 are camouflaged using a simple NAND/NOR primitive, giving rise to four possible options.
Correct assignment of camouflaged gates is shown in blue.
Incorrect assignment of gates leads to circuit configurations (b), (c), and (d), respectively.
Note the reduction of gates in (c) and (d) compared to (a), while the gate count is identical in
(a) and (b), albeit (b) functions differently from (a).}
\label{fig:redundancy_example}
\end{figure*}
\subsection{Attack Model}
\label{sec:foundry_attack_model}
Recently, researchers have proposed attacks~\cite{massad17_CoRR,li2019piercing}, which can be carried out within the confines of an untrusted foundry.
These attacks do not leverage an \textit{activated working chip} as an oracle, which is in contrast
with algorithmic SAT-based attacks~\cite{subramanyan15,massad15,shamsi17}.
Though these attacks have been primarily tailored toward LL, we believe these would readily apply on LC schemes as well, given that any LL problem can be modeled as an LC scheme and vice-versa.
Moreover, for the attacks proposed in~\cite{massad17_CoRR,li2019piercing}, the basic premise is that an incorrect assignment of key-bits
introduces significant
logic redundancy compared to the correct assignment.
The attack in~\cite{li2019piercing} determines the likely value of key-bits individually by
comparing the levels of logic redundancy for each logic value.
\textbf{Example:} We illustrate how an incorrect assignment of key-bits (gates) leads to logic redundancy using a simple example.
Consider the circuit shown in Fig.~\ref{fig:redundancy_example}(a), where logic gates U31 and U33 are camouflaged using a NAND/NOR camouflaging primitive, which leads to four possible combinations for [U31, U33].
The circuits are shown in Fig.~\ref{fig:redundancy_example}(b--d),
and they correspond to scenarios
[U31 = NAND, U33 = NAND],
[U31 = NAND, U33 = NOR], and
[U31 = NOR, U33 = NOR], respectively.
After re-synthesis, an incorrect combination of gates deciphered by an attacker leads to circuits with fewer gates (Fig.~\ref{fig:redundancy_example}(c) and Fig.~\ref{fig:redundancy_example}(d)) when compared to the original circuit.
We also note that an attacker might end up with cases like that of Fig.~\ref{fig:redundancy_example}(b), where the total number of gates is the same as in the original circuit; however, the two circuits differ in functionality.
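The redundancy signal exploited by these attacks can be reproduced on a toy reconvergent circuit, deliberately different from Fig.~\ref{fig:redundancy_example} and chosen only for illustration, where a second camouflaged gate re-converges on input $a$ so that some incorrect assignments collapse to constants:
\begin{verbatim}
# Sketch of redundancy-guided key guessing on a toy circuit
# g2(g1(a, b), a): incorrect NAND/NOR assignments may collapse
# to constants, i.e., exhibit maximal logic redundancy.
from sympy import symbols
from sympy.logic.boolalg import Nand, Nor, simplify_logic

a, b = symbols("a b")
GATES = {"NAND": Nand, "NOR": Nor}

for n1, g1 in GATES.items():
    for n2, g2 in GATES.items():
        print(f"[g1={n1}, g2={n2}] ->",
              simplify_logic(g2(g1(a, b), a)))
# Two of the four assignments reduce to True/False; an attacker
# would rank such degenerate assignments as unlikely to be correct.
\end{verbatim}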
Having no access to these attacks~\cite{massad17_CoRR,li2019piercing},
we refrain from a direct, independent comparison.
However, for the sake of completeness of the security analysis, we perform quantitative experiments, based
on the essence of findings quoted in the respective works of~\cite{massad17_CoRR,li2019piercing}.
For example, the \textit{desynthesis} attack~\cite{massad17_CoRR} can
correctly infer 23 (up to 29) and
47 (up to 59) key-bits for 32 and 64 key-gates, respectively, while
the authors of~\cite{li2019piercing} report success rates within a 25\%--75\% percentile distribution.
For a fair comparison, we consider similar ranges of correctly inferred gates.\footnote{Note that this is a powerful assumption for the attacker's capabilities.
This is because the respective attacks~\cite{massad17_CoRR,li2019piercing} tackle LL schemes,
where modeling a locked gate
requires only one key-bit, whereas for multi-function camouflaging schemes like ours, multiple key-bits are required for modeling one camouflaged gate (Fig.~\ref{fig:Camo_MUX_model}).}
\subsection{Experimental Setup}
\label{sec:foundry_setup}
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\columnwidth]{figures/Camo_mux_model.pdf}
\caption{Modeling of camouflaged circuits using MUXes.
Each camouflaged gate is replaced with a corresponding MUX which dictates the functionality based on the
value assigned to the select inputs (A1--A6).
In this example, the
camouflaged cell can implement any one of the
eight functions viz. OR, NOR, AND, NAND, INV, BUF, XOR, and XNOR.
This modeling has been used throughout the paper.}
\label{fig:Camo_MUX_model}
\end{figure}
We model the MESO primitive as shown in Fig.~\ref{fig:Camo_MUX_model}.
The logical inputs $a$ and $b$ are fed in parallel into all eight possible Boolean functions,
and outputs of those gates are connected to an
8-to-1 MUX with three select lines/key-bits.
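A compact sketch of this model is given below; the ordering of the eight functions on the select lines is our own assumption, as any fixed ordering serves the purpose:
\begin{verbatim}
# Sketch of the MUX-based model of a camouflaged MESO gate:
# three key-bits select one of the eight candidate functions.
FUNCS = [
    lambda a, b: a | b,  lambda a, b: 1 - (a | b),  # OR,  NOR
    lambda a, b: a & b,  lambda a, b: 1 - (a & b),  # AND, NAND
    lambda a, b: 1 - a,  lambda a, b: a,            # INV, BUF
    lambda a, b: a ^ b,  lambda a, b: 1 - (a ^ b),  # XOR, XNOR
]

def camo_gate(a, b, k2, k1, k0):
    return FUNCS[(k2 << 2) | (k1 << 1) | k0](a, b)

assert camo_gate(1, 0, 0, 1, 1) == 1   # key 011 -> NAND(1,0) = 1
\end{verbatim}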
For a fair evaluation, we camouflage the same set of gates
for the ISCAS-85 benchmarks c2670, c5315, and c7552.
Gates are chosen \textit{randomly} at the beginning and then memorized. Ten such sets are created for each benchmark.
To emulate the attack results from~\cite{massad17_CoRR,li2019piercing}, we employ the following procedure.
We implement a script which randomly picks the correct assignment amongst the camouflaged gates
such that we obtain three sets, each corresponding to 50\%, 70\%, and 90\% correctly inferred gates.
This procedure is repeated ten times, each for ten different iterations of camouflaged gates, giving us 100 unique trials.
Our camouflaging scheme has been implemented using \textit{Python} scripts operating on \textit{Verilog} files.
Hamming distance (HD) is computed leveraging \textit{Synopsys VCS}
with 100,000 input patterns, and functional correctness is ascertained by \textit{Synopsys Formality}.
\subsection{Results}
\label{sec:foundry_exp_results}
\begin{figure*}[tb]
\centering
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c2670_HD_untrusted_foundry_MESO_32.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c2670_HD_untrusted_foundry_MESO_64.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c2670_HD_untrusted_foundry_MESO_128.pdf}}
\\
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c5315_HD_untrusted_foundry_MESO_32.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c5315_HD_untrusted_foundry_MESO_64.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c5315_HD_untrusted_foundry_MESO_128.pdf}}
\\
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c7552_HD_untrusted_foundry_MESO_32.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c7552_HD_untrusted_foundry_MESO_64.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/c7552_HD_untrusted_foundry_MESO_128.pdf}}
\\
\caption{Hamming distance (HD) plotted against the percentage of correctly inferred gates
(50\%, 70\%, and 90\% of total camouflaged gates) of different sets of camouflaged gates, on selected
ISCAS-85 benchmarks c2670, c5315 and c7552.
Mean HD is proportional to the number of correctly inferred gates amongst the total number of camouflaged gates.
Each box comprises data for 100 trials of random selection of gates to camouflage.}
\label{fig:untrusted_foundry_exps}
\end{figure*}
Once we ascertain the percentage of correctly inferred gates for different levels of attack accuracy (50\%, 70\%, and 90\%),
we calculate the HD between the reconstructed and the golden netlist.
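Conceptually, this amounts to the following Monte-Carlo estimate (a sketch only; the actual evaluation uses \textit{Synopsys VCS} as stated above):
\begin{verbatim}
# Sketch of the HD estimation between a reconstructed netlist and
# the golden one, via simulation over random input patterns.
import random

def hamming_distance(golden, recon, n_in, n_out, trials=100000):
    diff = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n_in)]
        yg, yr = golden(x), recon(x)   # netlist evaluation functions
        diff += sum(g != r for g, r in zip(yg, yr))
    return 100.0 * diff / (trials * n_out)   # HD in percent
\end{verbatim}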
The results are shown as box-plots in Fig.~\ref{fig:untrusted_foundry_exps} for the ISCAS-85 benchmarks c2670, c5315, and c7552.
It is intuitive to note that, as the percentage of
correctly inferred gates is increased,
there is a steady reduction in the HD, which also indicates that the reconstructed netlist becomes functionally similar to the original circuit.
For the ISCAS-85 benchmark c7552, assuming an attack accuracy of 90\%, the mean HD increases from about 2\% when 32 gates are camouflaged (29 are
inferred correctly) to 5\% when 128
gates are camouflaged (115 are inferred correctly).
Note that such HD numbers could already
suffice for an attacker to recover an
approximate version of the original functionality.
However, for attacks which can only recover 50--70\% of the total camouflaged gates,
the HD for the reconstructed circuit is between 6\% and 25\%,
depending on the size and type of the benchmark,
the number of gates being camouflaged, and the number of gates correctly inferred.
These findings also imply that camouflaging large parts of a design might suffice to thwart ``inside foundry'' attacks, which
we confirm by a simple experiment as discussed next.
For a larger ITC-99 benchmark like b22\_C, we camouflage 50\% of the total logic gates (7,228 gates) present in the overall design.
Assuming that an attacker can identify 90\% of these gates correctly, this still leaves 722 gates wrongly inferred, which yields an HD of 43\% (across ten random trials).
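For illustration, a minimal \textit{Python} sketch of Monte-Carlo HD estimation between a golden and a reconstructed netlist is given below; it conceptually mirrors the 100,000-pattern simulation, with the two netlist evaluation functions being placeholder assumptions rather than actual \textit{VCS} simulations.
\begin{verbatim}
import random

def estimate_hd(golden, reconstructed, n_inputs,
                n_patterns=100_000, seed=0):
    rng = random.Random(seed)
    mismatched = total = 0
    for _ in range(n_patterns):
        x = [rng.randint(0, 1) for _ in range(n_inputs)]
        g, r = golden(x), reconstructed(x)  # lists of output bits
        mismatched += sum(gb != rb for gb, rb in zip(g, r))
        total += len(g)
    return 100.0 * mismatched / total       # HD in percent

# Toy usage: reconstruction mistakes one AND gate for OR.
golden        = lambda x: [x[0] & x[1], x[1] ^ x[2]]
reconstructed = lambda x: [x[0] | x[1], x[1] ^ x[2]]
print(f"HD ~ {estimate_hd(golden, reconstructed, 3):.1f}%")
\end{verbatim}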
\ul{Overall, the property of \textit{post-fabrication reconfigurability} for the MESO gates
allows us to change the functionality, enabling superior security through dynamic camouflaging.}
\section{Security Analysis: Untrusted Test Facility}
\label{sec:untrusted_test}
Attackers in the test facility, who have access to test patterns and the corresponding output responses (generated and supplied by the trusted design house), can jeopardize the security guarantees offered by LL and LC.
Modern Automatic Test Pattern Generation (ATPG) algorithms have been designed to maximize the fault coverage (FC) with minimal test pattern count, which directly translates to a lower test cost.
Such an approach, however,
divulges critical information pertaining to the internal circuit specifics~\cite{yasin17_TIFS}.
In the context of VLSI testing principles,
detection of a stuck-at-fault
involves two principal
components, namely (i)~fault activation and
(ii)~fault propagation.
In \textit{fault activation}, a faulty node is assigned a value opposite to the fault induced on that particular node.
Consider the example shown in Fig.~\ref{fig:example_Test_Camo}; here, the output
of logic gate U4 is \textit{s-a-1} (stuck-at-1).
In order to detect this fault, fault activation is achieved by setting this node
to logic '0'.
Next, \textit{fault propagation} entails
propagating the effect of the fault along a sensitization path to one of the primary outputs.
To achieve fault propagation (here to O2), the output of U3 must be '1'.
An input pattern which can detect a fault at a
given node by achieving the effects mentioned above is defined as a \textit{test pattern}.
In Fig.~\ref{fig:example_Test_Camo}, the input pattern \textit{11001} and the corresponding output response \textit{11} are supplied to the test facility, among others.
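A minimal \textit{Python} sketch of this detection check follows; the two-gate toy circuit is an illustrative assumption and not the netlist of Fig.~\ref{fig:example_Test_Camo}.
\begin{verbatim}
# A pattern is a test pattern iff fault-free and faulty outputs
# differ, i.e., the fault is activated *and* propagated.
def detects(circuit, faulty_circuit, pattern):
    return circuit(pattern) != faulty_circuit(pattern)

# Toy example: y = (a AND b) OR c, with the AND output stuck-at-1.
good   = lambda p: [(p[0] & p[1]) | p[2]]
faulty = lambda p: [1 | p[2]]  # AND node forced to logic '1'

print(detects(good, faulty, [0, 1, 0]))  # True: activated (node=0),
                                         # propagated (c=0)
print(detects(good, faulty, [1, 1, 0]))  # False: node already '1'
\end{verbatim}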
\begin{figure}[ht]
\centering
\includegraphics[scale=0.85]{figures/Example_Test_Camo.pdf}
\caption{An input pattern which helps in the detection of a stuck-at-1 fault at the output of U4. The circuit output `1/0' at output O2 indicates that the responses for the fault-free and faulty circuit are 1 and 0,
respectively.
The input pattern 11001, along with the expected output response 11, is
provided to the test facility for testing manufactured ICs.
The test data hints that U3 cannot be NOR.
Note that here we assume
that the camouflaged gate can function only as NAND/NOR.}
\label{fig:example_Test_Camo}
\end{figure}
\subsection{Threat Model}
\label{sec:testing_threat_model}
Apart from outsourcing of chip fabrication, many design companies also outsource the testing phase to off-shore companies such as Amkor, ASE, SPIL, etc.~\cite{yasin17_TIFS}.
The implications of
an untrusted test facility in the supply chain have been explored
in the context of
LL~\cite{yasin16_test} and static LC~\cite{yasin17_TIFS}.
However, there has been no thorough analysis yet on the efficacy of test data-based attacks~\cite{yasin17_TIFS} for different
static camouflaging schemes as well as
dynamic camouflaging.
In our threat model, the attacker
resides in the test facility and has access to
the following assets:
\begin{itemize}
\item Gate-level camouflaged netlist, e.g., obtained by RE.
\item Knowledge of the test infrastructure, which includes identification of
scan chains, compressor, decompressor, et cetera, on the target chip.
\item Test patterns and their corresponding output responses, which have been provided by the design house.
He/she also has access to ATPG tools used to generate the test patterns.
\end{itemize}
\subsection{Attack Model}
\label{sec:testing_attack_model}
Yasin \textit{et al.}~\cite{yasin17_TIFS} proposed \textit{HackTest}, which revealed the true identity of camouflaged gates within minutes
by exploiting test data.
The attack leverages the fact that the generation of test patterns is typically tuned to obtain the highest possible FC.
Hence, given the test stimuli and responses,
an attacker can search over the key space (using optimization techniques) to infer the correct assignment of camouflaged gates which maximizes the FC.
Arguably, such an attack is more powerful than SAT-based attacks~\cite{subramanyan15,massad15} which require access to a working chip.
The process of ATPG is highly dependent on the internal specifics of the underlying circuit, which include the type and count of gates, the inter-connectivity amongst these gates, etc.
Years of research have yielded powerful algorithms which lower the test pattern count
while achieving a high FC.
However, these algorithms do not factor in security (yet), and thereby, test patterns
become a rich source of information for an opportunistic attacker.
Next, we briefly explain the notion of \textit{HackTest} with a simple example; interested readers are kindly referred to~\cite{yasin17_TIFS} for further details.
\textbf{Example:} Upon performing ATPG for the circuit shown in Fig.~\ref{fig:Camo_example}, for the correct assignment of two camouflaged
gates (U22 = OR and U28 = AND), eight test patterns are generated by \textit{Synopsys Tetramax}, providing a fault and test
coverage of 100\%.
Camouflaging two gates with two functions each gives rise to four possible circuit configurations;
Table~\ref{tab:fault_covergae_init} reports the
FC for these configurations.
Armed with input patterns and
corresponding output responses, both tailored for the correct assignment, an attacker calculates
the FC for all possible circuit configurations.
As shown in Table~\ref{tab:fault_covergae_init},
maximal FC is observed only for the correct assignment of camouflaged gates.
This is because, for \textit{static camouflaging}, test patterns have to be generated for the correct assignment of camouflaged gates.
An attacker can easily use FC to guide his/her attack to identify the correct functionality of camouflaged gates.
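For illustration, a minimal \textit{Python} sketch of this FC-guided search is given below. It uses exhaustive enumeration for clarity, whereas \textit{HackTest} itself employs optimization techniques; the fault simulator is a placeholder assumption, here backed directly by Table~\ref{tab:fault_covergae_init}.
\begin{verbatim}
from itertools import product

def hacktest(camo_gates, functions, patterns, responses, fault_cov):
    best_fc, best_key = -1.0, None
    for key in product(functions, repeat=len(camo_gates)):
        assignment = dict(zip(camo_gates, key))
        fc = fault_cov(assignment, patterns, responses)
        if fc > best_fc:
            best_fc, best_key = fc, assignment
    return best_key, best_fc

# The table itself acts as the "fault simulator" for this 2-gate toy.
table = {("AND", "AND"): 63.33, ("AND", "OR"): 38.33,
         ("OR", "AND"): 100.0, ("OR", "OR"): 78.33}
fc = lambda a, _p, _r: table[(a["U22"], a["U28"])]
print(hacktest(["U22", "U28"], ["AND", "OR"], None, None, fc))
# -> ({'U22': 'OR', 'U28': 'AND'}, 100.0)
\end{verbatim}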
\begin{table}[ht]
\centering
\footnotesize
\caption{Fault coverage achieved for different assignments to the camouflaged netlist in Fig.~\ref{fig:Camo_example}.
Here we assume that U22 and U28 each implement either AND or OR.
The correct assignment
is OR and AND, respectively;
note that other assignments
result in significantly lower fault coverage.}
\label{tab:fault_covergae_init}
\begin{tabular}{ccc}
\hline
\textbf{U22} &
\textbf{U28} &
\textbf{Fault Coverage (\%)} \\
\hline
AND & AND & 63.33 \\ \hline
AND & OR & 38.33 \\ \hline
OR & AND & 100 \\ \hline
OR & OR & 78.33 \\ \hline
\end{tabular}
\end{table}
\subsection{Experimental Setup}
\label{sec:testing_setup}
We launch \textit{HackTest} on selected benchmarks of the ISCAS-85 and ITC-99 suites.
Statistics of benchmarks like the
number of logic gates (\# Gates),
number of faults (\# Faults),
number of test patterns generated by \textit{Synopsys Tetramax} (\# Test patterns), and
corresponding FC are shown in Table~\ref{tab:stats_BM_testing}.
We implement the MESO-based camouflaging primitive along with some selected prior art~\cite{rajendran13_camouflage,bi16_JETC}.
As \textit{HackTest} requires a \textit{BENCH} file format, we employ custom scripts to convert \textit{Verilog} files to the required format.
For the small-scale ISCAS-85 benchmarks,
we prepare ten random sets each for camouflaging 32, 64, and 128 gates, respectively.
For the large-scale ITC-99 benchmarks, we camouflage 350 gates.
For the sake of uniformity in comparison, the selection of camouflaged gates is random but fixed, i.e., they are common and maintained across all benchmarks for any given camouflaging scheme.
The attack is implemented using custom \textit{Python} scripts executing within
\textit{Synopsys Tetramax}.
All attack experiments are carried out on an
Intel Xeon E5-4660 @ 2.2 GHz running \textit{CentOS 6.9}; the time-out (t-o) is set to 24 hours.
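A minimal \textit{Python} sketch of such random-but-fixed selection is given below; seeding the generator per benchmark and trial is one possible way (an assumption for illustration) to keep the camouflaged-gate sets identical across all schemes.
\begin{verbatim}
import random

def select_camo_gates(all_gates, n_camo, benchmark, trial):
    # deterministic per (benchmark, trial): "random but fixed"
    rng = random.Random(f"{benchmark}-{trial}")
    return sorted(rng.sample(all_gates, n_camo))

gates = [f"U{i}" for i in range(1197)]   # e.g., c7552 has 1,197 gates
sets_128 = [select_camo_gates(gates, 128, "c7552", t)
            for t in range(10)]
\end{verbatim}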
\begin{table}[ht]
\centering
\footnotesize
\caption{Statistics of ISCAS-85 and
ITC-99 benchmarks used in this work.
All benchmarks achieve 100\% test coverage and exact or near-100\% fault coverage.}
\label{tab:stats_BM_testing}
\setlength{\tabcolsep}{1mm}
\begin{tabular}{ccccc}
\hline
\textbf{Benchmark} &
\textbf{\# Gates} &
\textbf{\# Faults} &
\textbf{\# Test Patterns} &
\textbf{Fault Coverage (\%)} \\
\hline
c880 & 273 & 1,764 & 63 & 100 \\ \hline
c1908 & 230 & 1,462 & 80 & 100 \\ \hline
c2670 & 433 & 2,936 & 134 & 100 \\ \hline
c3540 & 814 & 5,472 & 177 & 100 \\ \hline
c5315 & 1,232 & 7,708 & 124 & 100 \\ \hline
c7552 & 1,197 & 7,474 & 167 & 100 \\ \hline
b14\_C & 4,125 & 24,668 & 470 & 99.99 \\ \hline
b15\_C & 6,978 & 42,310 & 812 & 99.96 \\ \hline
b20\_C & 9,226 & 54,894 & 897 & 99.89 \\ \hline
b22\_C & 14,457 & 85,852 & 1,356 & 99.95 \\ \hline
\end{tabular}
\end{table}
\subsection{Results}
\label{sec:testing_exps_results}
Next, we detail our observations on employing \textit{HackTest} for various test cases.
We begin by examining the impact of \textit{HackTest}
on various static camouflaging schemes.
Thereafter, we enumerate our findings concerning the resiliency of \textit{dynamic camouflaging}, which is the main focus of this work.
\subsubsection{HackTest on Static Camouflaging}
\begin{figure*}[tb]
\centering
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/key_resolved_NAND_NOR_64.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/key_resolved_NAND_NOR_XOR_64.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/key_resolved_MESO_64.pdf}}
\\
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/key_resolved_NAND_NOR_128.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/key_resolved_NAND_NOR_XOR_128.pdf}}
\hfill
\subfloat[]{\includegraphics[width=.32\textwidth]{figures/key_resolved_MESO_128.pdf}}
\caption{Percentage of key-bits resolved by \textit{HackTest}
for static LC of different sets of camouflaged gates, on selected ISCAS-85 benchmarks.
The top three box-plots denote 64 camouflaged gates, while bottom three box-plots
denote 128 camouflaged gates for three camouflaging schemes: NAND/NOR, NAND/NOR/XOR, and MESO primitive (implementing 8
functions, Fig.~\ref{fig:primitive}).
Each box comprises data for 10 trials of random selection of gates to camouflage.}
\label{fig:HackTest_basic}
\end{figure*}
Yasin \textit{et al.}~\cite{yasin17_TIFS} demonstrated the efficacy of \textit{HackTest} on benchmarks with 32/64 gates camouflaged using NAND/NOR camouflaged cells.
For the sake of completeness, we implement this scheme along with a few others.
The success rate for \textit{HackTest}~\cite{yasin17_TIFS} is reported as the percentage of key-bits inferred by the attack.
From the box-plots with 64 gates camouflaged (Fig.~\ref{fig:HackTest_basic} (a), (b), and (c)), it is evident that the attack's
complexity increases with the number of functions
implemented by a single camouflaged gate.
For the NAND-NOR camouflaging scheme,
\textit{HackTest} performs extremely well; all ten random iterations of ISCAS-85 benchmark c7552 can be decamouflaged correctly.
We observe a high accuracy rate for other benchmarks as well, except for \textit{c1908}.
Though the overall accuracy remains high for the NAND-NOR-XOR camouflaging scheme~\cite{rajendran13_camouflage}, it falls short of that for the NAND-NOR camouflaging scheme.
For the MESO-based static camouflaging
scheme, which supports eight functions,
a stark reduction in the attack's efficiency is observed.
From the box-plots with 128 gates camouflaged
(Fig.~\ref{fig:HackTest_basic} (d), (e), and (f)), it is evident that the attack's complexity further increases with the number of camouflaged gates (w.r.t. Fig.~\ref{fig:HackTest_basic} (a), (b), and (c)).
The overall success rate is lower for all the
camouflaging schemes, hinting that
the attack's accuracy depends on the total number of camouflaged gates and, in turn, on the number of correctly/incorrectly inferred gates.
\begin{table}[tb]
\centering
\footnotesize
\caption{Impact of \textit{HackTest} on Hamming distance (HD) and output error rate (OER) for MESO-based static camouflaging on selected ISCAS-85 benchmarks with 128 camouflaged gates.
HD and OER are averaged across 10 random trials of camouflaged gates.}
\label{tab:static_camo_TEST_HD_OER}
\setlength{\tabcolsep}{1mm}
\begin{tabular}{ccccc}
\hline
\textbf{Benchmark} &
\textbf{\# Camo. Gates} &
\textbf{\# Correctly Inferred} &
\textbf{HD (\%)} &
\textbf{OER (\%)} \\
\hline
c880 & 128 & 23 & 47 & 100 \\ \hline
c1908 & 128 & 29 & 46 & 100 \\ \hline
c2670 & 128 & 51 & 28 & 100 \\ \hline
c3540 & 128 & 39 & 47 & 100 \\ \hline
c5315 & 128 & 65 & 23 & 100 \\ \hline
c7552 & 128 & 64 & 24 & 100 \\ \hline
\textbf{Average} & \textbf{128} & \textbf{45} & \textbf{35.8} & \textbf{100} \\ \hline
\end{tabular}
\end{table}
We also analyze the effect of incorrect key-bits on security metrics
HD and Output Error rate (OER) for MESO-based static camouflaging primitive; results are shown in Table~\ref{tab:static_camo_TEST_HD_OER}.
The HD for benchmarks c880, c1908,
and c3540 approaches the ideal value of 50\%,
while the values are around 25\% for benchmarks c2670, c5315, and c7552.
The OER, however, is 100\% for all the designs.
The lower HD for benchmarks c5315 and c7552 can be attributed to the fact that
the wrongly inferred gates form a very small portion of the overall design.
For example, we camouflage 128 gates out of 1,197 gates for c7552 which forms about 10.69\% of the overall design.
\textit{HackTest} resolves 64 gates correctly, bringing the percentage of wrongly inferred gates
to 5.35\%.
Similarly, camouflaging 128 gates out of 273 for c880 forms 46.89\% of the design.
\textit{HackTest} resolves only 23 gates correctly, increasing the proportion of
wrongly inferred gates to 38.46\%, higher than for c7552.
To summarize, we observe that the efficiency of \textit{HackTest} depends on
(i)~the size and type of the benchmark,
(ii)~the number and type of camouflaged gates, and
(iii)~the number of functions implemented by a
camouflaged gate.
\subsubsection{HackTest on Dynamic Camouflaging}
\begin{table}[tb]
\centering
\scriptsize
\caption{
Impact of increasing the number of possible functions implemented by the MESO-based primitive on \textit{HackTest}'s accuracy for selected ITC-99 benchmarks.
Test patterns are generated by \textit{Tetramax ATPG} for fault coverage and test coverage
of 99\% and 100\%, respectively.
Results are averaged across 10 random trials of camouflaged gates.}
\label{tab:dynamic_camo_testing_diff_functions}
\setlength{\tabcolsep}{0.5mm}
\begin{tabular}{cccccc}
\hline
\multirow{2}{*}{\textbf{Benchmark}} &
\multirow{2}{*}{\textbf{\# Camo. Gates}} &
\multicolumn{4}{c}{\textbf{Attack Accuracy (\%)}}
\\ \cline{3-6}
& &
\textbf{3 functions} &
\textbf{4 functions} &
\textbf{8 functions} &
\textbf{16 functions}
\\ \hline
b14\_C & 350 & 20.37 & 15.13 & 14.69 & 11.49
\\ \hline
b15\_C & 350 & 11.47 & 10.4 & 8.58 & 7.23
\\ \hline
b20\_C & 350 & 17.03 & 14.03 & 11.11 & 8.51
\\ \hline
b22\_C & 350 & 27.03 & 21.52 & 15.33 & 12.48
\\ \hline
\textbf{Average} & 350 & 18.98 & 15.27 & 12.43 & 9.93 \\ \hline
\end{tabular}
\end{table}
\begin{table}[tb]
\centering
\scriptsize
\setlength{\tabcolsep}{0.4mm}
\renewcommand{\arraystretch}{1.07}
\caption{Impact of \textit{HackTest} for MESO-based static (S. Camo) and dynamic camouflaging (D. Camo) schemes on HD and OER for selected ITC-99 benchmarks.
The number of wrongly inferred gates for dynamic camouflaging is higher on average when compared to static camouflaging, which translates to an
improved HD.
HD and OER are calculated by averaging across
10 random trials.}
\begin{tabular}{ccccccc}
\hline
\multirow{2}{*}
{\textbf{Benchmark}} &
\multicolumn{2}{c}{\textbf{\# Wrongly Inferred Gates}} &
\multicolumn{2}{c}{\textbf{HD (\%)}} & \multicolumn{2}{c}{\textbf{OER (\%)}} \\
\cline{2-7} &
\textbf{S. Camo.} &
\textbf{D. Camo.} &
\textbf{S. Camo.} &
\textbf{D. Camo.} &
\textbf{S. Camo.} &
\textbf{D. Camo.}
\\ \hline
b14\_C & 249 & 298 & 36.04 & 42.09 & 100 & 100
\\ \hline
b15\_C & 296 & 320 & 32.07 & 34.23 & 100 & 100
\\ \hline
b20\_C & 279 & 310 & 32.15 & 35.03 & 100 & 100
\\ \hline
b22\_C & 212 & 296 & 20.09 & 29.57 & 100 & 100
\\ \hline
\textbf{Average} & 259 & 306 & 30.09 & 35.23 & 100 & 100
\\ \hline
\end{tabular}
\label{tab:comparison_Static_Dynamic_TEST_ITC}
\end{table}
As elucidated before, none of the static camouflaging approaches~\cite{rajendran13_camouflage,li16_camouflaging}
allow for post-fabrication reconfiguration
and, hence, test patterns are generated
for the correct assignment of camouflaged gates.
\ul{MESO-based dynamic camouflaging
circumvents this threat by allowing for \textit{post-test configuration}.
That is, the fabricated IC can be initially configured with an incorrect I/O mapping and functionality.}
The ``falsely configured'' IC and related test data are then
sent to the test facility.\footnote{
Testing for structural defects does not require the chip to be functional; chips can be configured to any function and tested
with no loss in test quality~\cite{yasin16_test,yasin17_TIFS}.}
Accordingly, an attacker will end up with an \textit{incorrect} IP when
mounting \textit{HackTest} on the IC.\footnote{This resonates with the idea of \textit{post-test activation}~\cite{yasin16_test}, which is the adopted strategy for safeguarding against
untrusted test facilities in logic locking.}
After testing is finished,
the MESO gates are reconfigured (by the design
house or some trusted entity) to reflect the
true, intended functionality.
Table~\ref{tab:dynamic_camo_testing_diff_functions} details the effect of increasing the number of possible functions implemented by the MESO-based primitive on \textit{HackTest}'s accuracy.
We observe that the attack accuracy reduces (for the same set of camouflaged gates) when the number of functions implemented by the MESO-based primitive is increased.
This can be reasoned from the fact that, with an increase in the number of possible functions,
the attack has a larger solution space to tackle.
Finally, we also examine the security promises for both static and dynamic camouflaging for the MESO-based primitive; results are shown in Table~\ref{tab:comparison_Static_Dynamic_TEST_ITC}.
It can be seen that the number of wrongly inferred gates is higher for dynamic camouflaging.
This increase
also translates to a higher HD (by about 5.14\% on average).
The OER, however, remains at 100\% for both the schemes.
Figure~\ref{fig:line_graph_FC_HD} shows \textit{HackTest}'s success rate as a function of HD
for selected ITC-99 benchmarks.
This plot reiterates that the degree of functional reconfiguration (measured as HD) has a strong impact on the accuracy of \textit{HackTest}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{figures/FC_HD_success.pdf}
\caption{Success rate of \textit{HackTest} as a function of HD for selected ITC-99 benchmarks. It is evident that the degree of
functional reconfiguration, also expressed by HD, can be leveraged by a designer to influence the overall success rate for \textit{HackTest}.}
\label{fig:line_graph_FC_HD}
\end{figure}
\section{Security Analysis: Untrusted End-User}
\label{sec:untrusted_user}
\subsection{Threat Model}
\label{sec:end_user_threat_model}
The threat model which we employ for the security analysis of an untrusted end-user closely follows the ones described in the literature~\cite{subramanyan15,massad15,li16_camouflaging}.
\begin{itemize}
\item The attacker has access to advanced, specialized equipment for reverse engineering an IC,
which includes setups to depackage an IC, delayer it, image individual layers, and
apply image-processing tools.
\item Further, he/she can readily distinguish between a camouflaged cell and a regular, standard cell.
If hybrid spin-CMOS circuits are used, it is straightforward to identify the CMOS gates, whereas the complexity is increased manifold, if all the gates are implemented using MESO devices.
\item The attacker is aware of the total number of camouflaged gates, and the number and type of functions implemented by each camouflaged cell.
\item He/she procures multiple chip copies from the open market, uses one of them as an oracle (to observe the input-output mapping),
and extracts the gate-level netlist of the chip by reverse engineering the others.
This paves the way for algorithmic SAT-based attacks~\cite{subramanyan15,massad15,shamsi17}.
\item Consistent with the most prior art, we assume that an attacker cannot invasively probe the output of a camouflaged cell.\footnote{We
acknowledge that there is an attack proposed by Keshavarz \textit{et al.}~\cite{KeshavarzHOST2018}, where a SAT-based formulation is augmented with probing
and fault-injection capabilities to reverse engineer a relatively small \textit{S-Box}.
Still, it remains to be seen how this attack would fare when large-scale camouflaging is effected.
Having no access to this attack at the time of writing, we refrain from any empirical analysis.}
It is straightforward to note that once an adversary is allowed probing capabilities, i.e., to probe the output of a camouflaged cell
(or read out contents from a TPM for locking),
then the security guarantees offered by these schemes are substantially weakened, if not even nullified.
\end{itemize}
An attacker may also try to observe various side-channels like power, timing, photonic, acoustic, etc.
However, note that we do not consider the effects of side-channel emissions from the MESO
switch in this work; this remains part of our future work, once efficient circuit- and/or layout-level models are available for MESO gates.
\subsection{Attack Model and Setup}
\label{sec:end_user_attack_setup}
In 2015, Subramanyan \textit{et al.}~\cite{subramanyan15} and Massad \textit{et al.}~\cite{massad15} independently
demonstrated SAT-based attacks to circumvent security guarantees offered by LL and LC, respectively.
Interested readers are referred to the respective papers for further details.
We leverage the publicly available attack~\cite{subramanyan15} to perform the security analysis for an untrusted end-user.
\begin{figure}[b]
\centering
\includegraphics[scale=0.095]{figures/dynamic_morphing.pdf}
\caption{Dynamic morphing of gate X4 in a representative circuit.
Circuit implementing $f_1$ is the original template and $f_2$, $f_3$ are the morphed versions.}
\label{fig:circuits1}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.36]{figures/SAT_Table.pdf}
\caption{SAT-based attack~\cite{subramanyan15} on the polymorphic circuit of Fig.~\ref{fig:circuits1}.
For $k_0$ and $k_1$, the INV and BUF operations are performed on the output of $X2$, respectively.}
\label{fig:dynamic_camo_SAT}
\end{figure}
\subsection{Results and Discussion}
\label{sec:dynamic_camo_end_user_results}
Next, we explain how dynamic camouflaging can
thwart attacks arising from the perspective of malicious end-users.\footnote{
The MESO-based primitive can
also be leveraged in the context of static camouflaging to protect against malicious end-users.
It has been shown empirically in prior studies~\cite{massad15,patnaik2020obfuscating,patnaik18_GSHE_DATE,patnaik2019spin} that large-scale camouflaging with cells implementing more functions
poses significant computational complexity for such attackers.}
We illustrate the related concept of \textit{run-time
polymorphism for dynamic morphing}
through a conceptual example; consider the circuit
in Fig.~\ref{fig:circuits1}.
Here, $X4$ is the only camouflaged, polymorphic
gate modeled with three key-bits.
Assuming a key distribution such that
INV, BUF, AND, OR, NAND, NOR, XOR, and XNOR gates
correspond to key-bits $\{000, 001, \ldots, 111\}$, respectively, the dynamic key of the circuit cycles from $100$ to $101$, and then to $111$, as per the outlined
functional reconfiguration in Fig.~\ref{fig:circuits1}.
The application of the SAT-based attack~\cite{subramanyan15,massad15} for the simple scenario in
Fig.~\ref{fig:circuits1} is
explained next (Fig.~\ref{fig:dynamic_camo_SAT}).
Consider that the oracle (i.e., an actual working chip obtained from the market) implements $f_1$ during
the first iteration of the SAT solver,
where the input applied is $101$.
Note that the oracle is to be configured for test mode, to provide access to the circuit internals through scan chains, as required when modeling the
whole circuit for the SAT-based attack.
In principle, the oracle may behave differently in the test mode and in the operational (functional) mode.
Naturally, the SAT solver is oblivious to the function being active internally in the oracle during \textit{any} iteration.
Also note that, once inputs are applied to the oracle, the SAT solver has to wait until the oracle provides the corresponding outputs.
Now, that first SAT iteration prunes key combinations
$k_0, k_2, k_5$, and $k_7$.
While this is happening, assume that the gate $X4$
has morphed into NOR, and the oracle
is now implementing
function $f_2$. In the second SAT iteration, the input pattern $100$, therefore, eliminates keys $k_3, k_4$, and $k_6$.
Thereafter, the SAT solver concludes that the correct key and identity of gate $X4$ are $001$ and BUF, respectively.
In essence, dynamic camouflaging can deceive
and mislead the SAT solver to converge to an \textit{incorrect key}, leading to an
\textit{incorrect} gate assignment.
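To make this deception concrete, the following minimal \textit{Python} sketch models a single polymorphic gate under the key mapping above; INV/BUF are taken to act on the first input only, and the queries and morphing schedule are illustrative assumptions rather than those of Fig.~\ref{fig:circuits1}.
\begin{verbatim}
FUNCS = {  # key-bits <-> INV, BUF, AND, OR, NAND, NOR, XOR, XNOR
    "000": lambda a, b: 1 - a,        "001": lambda a, b: a,
    "010": lambda a, b: a & b,        "011": lambda a, b: a | b,
    "100": lambda a, b: 1 - (a & b),  "101": lambda a, b: 1 - (a | b),
    "110": lambda a, b: a ^ b,        "111": lambda a, b: 1 - (a ^ b),
}

cands = list(FUNCS)  # all eight keys are initially plausible
# (query inputs, key the oracle actually executes at that instant)
schedule = [((0, 1), "100"),   # oracle acts as NAND
            ((1, 1), "101"),   # ... has morphed into NOR
            ((1, 0), "111")]   # ... has morphed into XNOR
for (a, b), active in schedule:
    out = FUNCS[active](a, b)
    cands = [k for k in cands if FUNCS[k](a, b) == out]

print(cands)  # ['000'] -> the attack "resolves" INV, a function
              # the gate never implemented
\end{verbatim}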
For an exploratory study,
we extend the framework from~\cite{subramanyan15} to realize SAT-based attacks on polymorphic versions of ITC-99 benchmarks.
Even for 100,000 randomized trials,
the related attacks fail due to inconsistent I/O mappings, as these induce \textit{unsatisfiable} (\textit{UNSAT}) scenarios for the attack framework.
Taking this simple example from Fig.~\ref{fig:circuits1} further,
for error-tolerant applications like image/video processing, the circuit may
indeed be reconfigured randomly, e.g.,
by deriving the control bits of the MESO gates from a TRNG---we present results for \textit{AppSAT}~\cite{shamsi17} on such error-tolerant applications in Section~\ref{sec:MESO_CeNN_results_discussion}.
Besides SAT-based attacks, when concerned about \textit{physical attacks} conducted by an end-user, one has to ensure that the interconnect fabric
which routes the control bits and control signals to the MESO gates is resilient against probing; e.g., shielding may be used toward that end~\cite{ngo17}.
\textit{Removal attacks} targeting the TRNG
shall result in floating controls for the MESO gates, leading to noisy outputs, and loss of functionality (as well as hindrance of SAT-based attacks discussed above).
Other advanced attacks, e.g., directed at distorting the entropy of the TRNG to change its bias~\cite{bayon2012contactless}, are considered out-of-scope for this work.
\section{Case Study: MESO CeNN-Based Approximate Image-Processing IP}
\label{sec:case_study}
In this section, we demonstrate how dynamic camouflaging can help in protecting
approximate, error-tolerant circuits.
We design a cellular neural network (\textit{CeNN}) using MESO gates.
\textit{CeNNs} are a massively parallel neural network-based computing paradigm, which consist of
an n-dimensional array of locally interconnected cells that communicate within a neighbourhood.
They are typically used in a variety of applications including image filtering and reconstruction,
edge detection, solving partial differential equations and optimization problems. The cells of a CeNN are multiple-input single-output processors, characterized by an internal state variable. These processing cells act as neurons that integrate the input currents, and the interconnects between the cells act as synapses that perform weighting of the inputs.
The dynamical state equation for a CeNN neuronal cell, put forth by Chua and Yang~\cite{chua1988cellular}, is as follows:
\begin{equation}
\begin{aligned}
C\frac{dx_{ij}}{dt} &=-\frac{1}{R}x_{ij}+\sum_{kl}A(i,j;k,l)f(x_{kl})\\
&+\sum_{kl}B(i,j;k,l)U_{kl}+I_{ij}
\end{aligned}
\end{equation}
where $x_{ij}$ is the internal state of the neuron at position $\{i,j\}$, $\{k,l\}$ indexes the cells in its neighbourhood, $A$ and $B$ are the synaptic weights connecting two neighbouring cells, $I$ is a constant bias current, $R$ and $C$ are the resistance and
capacitance of the cell, and $U_{kl}$ and $f(x_{kl})$ are the input and state-dependent output of the cells, respectively.
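For illustration, a minimal forward-Euler \textit{Python} sketch of this state equation is given below, using the standard Chua--Yang output $f(x)=0.5(|x+1|-|x-1|)$; the grid size, templates, and parameter values are placeholder assumptions, whereas the MESO CeNN itself is simulated with an LLG framework (Fig.~\ref{fig:CeNN}).
\begin{verbatim}
import numpy as np

def f(x):  # standard piecewise-linear CeNN output
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def conv3x3(Z, T):  # sum over the 3x3 neighbourhood, zero-padded
    P = np.pad(Z, 1)
    out = np.zeros_like(Z)
    for di in range(3):
        for dj in range(3):
            out += T[di, dj] * P[di:di + Z.shape[0],
                                 dj:dj + Z.shape[1]]
    return out

def cenn_step(x, U, A, B, I, R=1.0, C=1.0, dt=0.01):
    dx = (-x / R + conv3x3(f(x), A) + conv3x3(U, B) + I) / C
    return x + dt * dx

rng = np.random.default_rng(0)
U = rng.choice([-1.0, 1.0], size=(8, 8))  # bipolar input image
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float)
x = np.zeros_like(U)
for _ in range(500):                      # settle to steady state
    x = cenn_step(x, U, A, B, I=0.0)
\end{verbatim}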
As demonstrated in~\cite{pan2016proposal},
spintronic devices are able to directly implement a CeNN for image-processing applications, without the need for analog VLSI elements. We chose CeNN as a representative example to highlight the application of dynamic camouflaging in real scenarios owing to the fact that it can function as a relatively simple and low-cost image-processing circuit, with a single layer of input cells. However, the concept of dynamic camouflaging can be extended to any approximate IP, without loss of generality.
\subsection{Construction}
We adopt the same methodology for the
construction of the magnetic synapses and neuron of the CeNN as in~\cite{pan2016proposal}.
However, we design the neuron cells using the MESO device instead of the all-spin logic device used
in~\cite{pan2016proposal}.
The parameters for the MESO device used for the CeNN cells are obtained from~\cite{manipatruni2019scalable}.
Fig.~\ref{fig:CeNN} (a) highlights the connectivity of cells in the MESO CeNN and Fig.~\ref{fig:CeNN} (b) shows the construction of the MESO CeNN.
The transient switching of a MESO CeNN cell along with the CeNN templates $\{$A, B, I$\}$ used for simple image reconstruction (from~\cite{pan2016proposal}) are portrayed in Fig.~\ref{fig:CeNN} (c).
The central MESO device is connected to eight other MESO devices in a $3\times3$ grid.
The weighting operation can be realized by (i)~using a layer of CMOS transistors with different driving strengths, in between the input and output layers, as demonstrated in~\cite{pan2016proposal}, or by (ii)~inserting multi-terminal magnetic domain wall (DW) weighting devices~\cite{he2017energy} in the interconnects between the input and output layers.
Both these weighting mechanisms are able to implement several levels of weights for precise image-processing applications.
We note that the former approach also requires additional transduction circuitry for converting the current signals from the input layers into voltage signals that can be fed to the CMOS driving transistors.
In the case of the DW weighting devices, dedicated programming terminals are used to set the position of the DW and control the conductance (weight) of the device, which then scales the current passing through its input terminals.
Readers are referred to the respective papers for further details.
The weighting units are omitted from Fig.~\ref{fig:CeNN} (b) for simplicity.
\begin{figure}[tb]
\centering
\includegraphics[scale=0.17]{figures/MESO_CNN.pdf}
\caption{(a) Inter-connectivity of cells in the MESO-based Cellular Neural Network (CeNN).
(b) Construction of the MESO CeNN for image reconstruction.
Each cell in the network is implemented by
a MESO device.
(c) Magnetization vs. time shows the
switching of the central MESO CeNN cell, when inputs from its nearest neighbor cells are applied.
The switching delay is $\sim200$ ps.
The MESO CeNN is simulated using a Landau-Lifshitz-Gilbert dynamics framework on \textit{CUDA-C}~\cite{kani2017modeling}, with 1,000 nanomagnet simulations per cell.
Inset shows templates $\{A, B, I\}$ used to configure the CeNN.}
\label{fig:CeNN}
\end{figure}
\begin{figure*}[tb]
\centering
\includegraphics[width=1\textwidth]
{figures/AppSAT_flowchart.pdf}
\caption{Flowchart illustrating the application of \textit{AppSAT} on a dynamically camouflaged approximate image-processing IP.
\textit{AppSAT} recovers partial keys from the
different circuit templates, and the equivalent stitched key deviates significantly from the original IP.}
\label{fig:AppSAT_flowchart}
\end{figure*}
\subsection{Experimental Setup}
We investigate the implications of attacking an
approximate image-processing IP with \textit{AppSAT}~\cite{shamsi17}.
Approximate circuits are vulnerable to such attacks since the attacker can recover a functionally-similar IP.
Since the original circuit is approximate to begin with, an attacker might be satisfied by
obtaining an IP which has, say, 95$\%$ fidelity compared to the original design.
For example, consider the case of an approximate image reconstruction hardware module.
The application of \textit{AppSAT} on this module may give an attacker a functionally-similar circuit and, if needed, he/she could then augment this reverse engineered module with software ML models to obtain a precision equivalent to the original image reconstruction hardware.
In this section, we show that the \textit{AppSAT}-recovered IP of an approximate circuit can deviate significantly from the original IP, enough to render such an attack futile.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{figures/TETC_TRNG.pdf}
\caption{Generation of control/selector signal for randomized functional transformation of the circuit between a set of pre-determined templates.}
\label{fig:random_reconf}
\end{figure}
To safeguard the MESO-based CeNN against
\textit{AppSAT}, we use \textit{run-time polymorphism}
for dynamic morphing.
This means that in the MESO CeNN circuit of Fig.~\ref{fig:CeNN} (b), certain MESO gates will be polymorphic, enabling a circuit-level polymorphic
reconfiguration between the original circuit and, say, three templates f$_1$, f$_2$, and f$_3$.
Hence, the image-processing IP will work at a \textit{sub-optimal accuracy} which is inversely proportional
to the HD between the different morphing circuit templates and the original function.\footnote{Here we use HD as a representative metric for image quality.
However, HD can be translated to other image-processing-relevant metrics, and the conclusions of this study do not depend on the choice of this metric.}
By tuning this HD through system-level design, i.e., selecting gates that need to be polymorphic, one can control how similar the IP recovered by
\textit{AppSAT} will be when compared to the original IP.
Reconfiguration between these circuit templates is controlled by using a TRNG to drive a selector circuit,
which selects one set of key-bits, as shown in a simple example in Fig.~\ref{fig:random_reconf}.
Note that, in this scenario, the TRNG does not control the distinct key-bits of the MESO gate individually;
rather, the control signal is derived from the TRNG such that it randomly cycles between pre-determined sets,
which will transform the gate/circuit into one out of several pre-determined templates.
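A minimal \textit{Python} sketch of this selector logic follows; the template key sets are illustrative assumptions, and the \texttt{secrets} module merely stands in for a hardware TRNG.
\begin{verbatim}
import secrets

TEMPLATES = {  # illustrative key sets for four camouflaged gates
    "original": ["100", "011", "110", "001"],
    "f1":       ["101", "011", "110", "001"],
    "f2":       ["100", "010", "110", "001"],
    "f3":       ["100", "011", "111", "001"],
}

def next_template():
    sel = secrets.randbits(2)  # two TRNG bits pick 1 of 4 templates
    return list(TEMPLATES.values())[sel]

key_bits = next_template()  # reconfigure the MESO gates with this set
\end{verbatim}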
\ul{When \textit{AppSAT} is mounted on
such a dynamically morphing circuit,
the constant functional reconfiguration results in the attack recovering parts of the key from different circuit templates at different instances of time.
Therefore, the IP corresponding to the overall stitched key recovered by \textit{AppSAT} from all the circuit versions may exhibit a significant HD with respect to the original IP.} Please note that the HD between the polymorphic templates and the original IP, which dictates the accuracy at the system level, is \textit{different}
from the HD between the \textit{AppSAT}-recovered IP
and the original IP.
The application of \textit{AppSAT} on the approximate IP, along with the key recovery and HD calculation, is represented as a flowchart in Fig.~\ref{fig:AppSAT_flowchart}.
Since CAD tools and synthesizable \textit{Verilog} libraries for emerging spin devices like MESO are
under development,
we present proof-of-concept simulations on large-scale ITC-99 benchmarks.
In experiments on the b14\_C benchmark, $\sim11\%$, $\sim9\%$, and $\sim14\%$ HDs between the original IP and the polymorphic templates f$_1$, f$_2$, and f$_3$, respectively, translate to $\sim28\%$ HD between the \textit{AppSAT}-recovered IP and the original IP.
In Table~\ref{tab:HD_comparison}, templates 1-3 are approximate versions of each benchmark, with their respective HD from the original design.
We execute \textit{AppSAT} (setup details same as in~\cite{shamsi17}) considering that the benchmark morphs between its original form and
three approximate templates.
\textit{AppSAT} provides an \textit{approximate key} after time-out, unlike the SAT-based attack~\cite{subramanyan15}.
With this approximate key, the HD is computed between the \textit{AppSAT}-recovered
IP and the original IP; the results are quoted in the last column of Table~\ref{tab:HD_comparison}.
\begin{table}[ht]
\centering
\footnotesize
\setlength{\tabcolsep}{1mm}
\renewcommand{\arraystretch}{1.3}
\caption{Comparison of HD (in \%)
between various polymorphic templates and original function, and HD inferred between \textit{AppSAT} recovered IP and original IP
for selected ITC-99 benchmarks.
HD is calculated using \textit{Synopsys VCS} for 100,000 patterns.}
\label{tab:HD_comparison}
\begin{tabular}{*{5}{c}}
\hline
\textbf{Benchmark}
& \multicolumn{3}{c}{\textbf{HD (in \%) from the original design}}
& \multicolumn{1}{c}{\textbf{HD inferred }}
\\
\cline{2-4}
& \textbf{template-1}
& \textbf{template-2}
& \textbf{template-3}
& \textbf{after \textit{AppSAT}}
\\
\hline
b14\_C
& 11.22 & 9.26 & 13.78
& 28.81
\\ \hline
b15\_C
& 12.35 & 9.62 & 12.88
& 32.15
\\ \hline
b17\_C
& 11.14 & 10.62 & 15.24
& 36.22
\\ \hline
b20\_C
& 12.51 & 14.37 & 17.86
& 34.34
\\ \hline
\end{tabular}
\end{table}
\subsection{Results and Discussion}
\label{sec:MESO_CeNN_results_discussion}
The image reconstructed by the CeNN IP as recovered by
\textit{AppSAT} (at various representative values of HD between
the \textit{AppSAT}-recovered IP and original IP) is shown in Fig.~\ref{fig:Image_Processing}.
Although the average HD numbers for the proof-of-concept simulations and attacks above are between 28--36\% (see last column of
Table~\ref{tab:HD_comparison}), here we assume an even more powerful attack.
That is, we gauge the resilience offered by dynamic morphing when trying to reconstruct images for representative HD values of 10--25\%.
As can be seen in Fig.~\ref{fig:Image_Processing} (e),
at a sufficiently large HD of $25\%$, the \textit{AppSAT}-recovered CeNN IP fails to faithfully reconstruct the original image.
For the \textit{AppSAT}-recovered IPs incurring even larger HDs in Table~\ref{tab:HD_comparison},
the reconstructed image will naturally be even more noisy.
\begin{figure}[ht]
\centering
\subfloat[Original]{\includegraphics[scale=0.3]{figures/Original.pdf}\label{Image1}}
~
\subfloat[$10\%$ HD]{\includegraphics[scale=0.3]{figures/ER10.pdf}\label{Image2}}
~
\subfloat[$15\%$ HD]{\includegraphics[scale=0.3]{figures/ER15.pdf}\label{Image3}}
~
\subfloat[$20\%$ HD]{\includegraphics[scale=0.3]{figures/ER20.pdf}\label{Image4}}
~
\subfloat[$25\%$ HD]{\includegraphics[scale=0.3]{figures/ER25.pdf}\label{Image5}}
\caption{(a) Original input image for the MESO-based CeNN image reconstruction.
(b-e) Images reconstructed with the approximate IP of the CeNN recovered from \textit{AppSAT}, for HDs of $10\%$, $15\%$, $20\%$, and $25\%$, respectively, between the \textit{AppSAT}-recovered IP and the original IP.
It is essential to note that this HD is different from the accuracy of the approximate circuit reported in Table~\ref{tab:HD_comparison}.}
\label{fig:Image_Processing}
\end{figure}
Further, text reconstructed using the approximate CeNN IP, recovered by \textit{AppSAT} (for an HD of 20$\%$ between \textit{AppSAT}-recovered IP and the original IP), is incorrectly inferred by optical character recognition (OCR) engines like \textit{Tesseract}~\cite{smith2007overview} (Fig.~\ref{fig:CeNN_word}).\footnote{\textit{Tesseract} is the industry standard OCR used by \textit{Google} on its mobile devices and for text detection in Gmail.
It has been trained using Google's character dataset containing millions of images, and can identify more than 100 languages.}
To substantiate the inability of an attacker to
gain a satisfactory approximate IP of the image reconstruction hardware, we use the Long Short-Term Memory (LSTM) recurrent neural network module of the \textit{Tesseract 4.0} OCR engine on all letters of the alphabet at various HDs between the \textit{AppSAT}-recovered IP and the original IP. As shown in Fig.~\ref{fig:CeNN_barplot}, the neural network-based OCR is unable to faithfully detect the reconstructed text at higher HDs close to 25$\%$.
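A minimal sketch of this OCR check is given below, assuming the \textit{pytesseract} wrapper and a \textit{Tesseract 4.x} installation with the LSTM engine; the image file names are placeholders for images reconstructed by the \textit{AppSAT}-recovered CeNN IP.
\begin{verbatim}
from PIL import Image
import pytesseract

for hd in (10, 15, 20, 25):
    img = Image.open(f"reconstructed_hd{hd}.png")
    # --oem 1: LSTM engine only; --psm 10: single character
    text = pytesseract.image_to_string(img, config="--oem 1 --psm 10")
    print(hd, repr(text.strip()))
\end{verbatim}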
\begin{figure}[ht]
\centering
\includegraphics[scale=0.25]{figures/CeNN_word.pdf}
\caption{Incorrectly inferred text, reconstructed using an approximate IP of the CeNN recovered by \textit{AppSAT}, for HD of $20\%$ between \textit{AppSAT}-recovered IP and the original IP. }
\label{fig:CeNN_word}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.4]{figures/CeNN_barplot.pdf}
\caption{Proportion of letters correctly
identified by the neural net-based \textit{Tesseract 4.0} OCR engine, where the images were reconstructed using a MESO-based CeNN, for various HDs between the \textit{AppSAT}-recovered IP and the original IP.}
\label{fig:CeNN_barplot}
\end{figure}
Thus, we point out that there is a clear trade-off between the run-time accuracy of
the deployed IP
and the resilience to \textit{AppSAT} attacks,
in terms of how closely \textit{AppSAT}
is able to resolve the original IP.
However, \ul{for approximate applications like image processing,
which can tolerate a certain degree of error,
our scheme of dynamic morphing can thwart attempts to recover even an approximate version of the IP.}
Advanced IP protection mechanisms based on point-functions~\cite{xie2016mitigating,li16_camouflaging} and stripping of functionality~\cite{yasin17_CCS}
may not be suitable for protecting error-tolerant applications
such as the image-processing system considered in this section.
This is because the above-mentioned techniques trade off output corruptibility for SAT-attack resilience; the lower the output corruptibility, the stronger the resilience.
Hence, attacks working on the notion of recovering an approximate version of the protected IP (e.g., \textit{AppSAT}) are able to successfully recover
a \textit{satisfactorily functionally similar} IP
if protected using~\cite{li16_camouflaging,yasin17_CCS}.
Finally, we note that dynamic morphing cannot protect systems which demand highly accurate and error-free computations, e.g., cryptographic applications.
How reconfiguration at run-time might help in providing an additional layer of security for these systems remains an open problem.
In general, dynamic camouflaging is suitable for applications that can tolerate a certain degree of error, including machine
learning, image processing, neuromorphic circuits etc., which also require protection against reverse engineering due to the sensitive nature of
their IP.
\section{Synthesis-Level Cost Analysis}
\label{sec:PPA_cost_analysis}
In this section, we benchmark the
synthesis-level cost for the MESO-based
camouflaging primitive along with other spin-based devices.
We use the ITC-99 suite for benchmarking, rather than the CeNN image-processing IP demonstrated in Section~\ref{sec:case_study}, to showcase
the general prospects of full-chip camouflaging using spin-based devices.
We note that full-chip dynamic camouflaging may be uncalled for in practical applications as in the case study above; again, this analysis here
is for benchmarking of different devices.
\textbf{Setup:}
We compare the all-spin logic (ASL) primitive~\cite{alasad2017leveraging}, the giant spin-Hall effect (GSHE) primitive~\cite{patnaik2019spin}, and the MESO
primitive of this work in Table~\ref{tab:devices_PPA_comparison}.
The baseline designs are implemented in CMOS
and have been synthesized using \textit{Synopsys Design Compiler} with 2-input gates, in addition to inverters and buffers.
For each synthesized netlist, we replace all cells by their corresponding emerging-device model.
Given that libraries and physical-design files for spin-based devices are not available yet, and also given that leading CAD vendors like
\textit{Synopsys} and \textit{Cadence} do not support system-level simulations of such spin-based devices yet, this setup is a practical approach.
For the MESO
primitive, we also include the peripheral MUXes shown in Fig.~\ref{fig:MESO_peripherals}, and we characterize them using \textit{Cadence
Virtuoso} for the 15-nm CMOS node using the \textit{NCSU FreePDK15} FinFET library, for a supply voltage of 0.8V.
As an example for this benchmarking, the ITC-99 benchmark b17\_C
comprises 24,228 2-input gate and inverter/buffer instances.
Using the GSHE primitive, along with its peripherals, each of these instances
would consume a power of $0.2673$ $\mu$W~\cite{patnaik2019spin}.
For the MESO primitive, again with peripherals, each gate would consume $0.0615$ $\mu$W.
With simple arithmetic calculations we conclude that the GSHE-based logic would consume 6.5 mW while the MESO-based logic would consume 1.5 mW for
b17\_C. For area calculations, the same approach is taken.
For timing calculations, we keep track of the gates in the critical path, and the delay numbers are summed up.
For example, we observe 50 gates in the critical path for b17\_C. For the GSHE-based logic, each of these gates would incur a delay of
1.83 ns~\cite{patnaik2019spin}, resulting in a total delay of 91.5 ns. For MESO-based logic, with peripherals,
each instance incurs a delay of 0.2579 ns, which totals 12.895 ns.
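These estimates follow a simple per-gate scaling, sketched below in \textit{Python} with the per-gate numbers quoted above; the helper functions are for illustration only.
\begin{verbatim}
def chip_power_mW(n_instances, gate_power_uW):
    return n_instances * gate_power_uW / 1e3

def chip_delay_ns(critical_path_gates, gate_delay_ns):
    return critical_path_gates * gate_delay_ns

# b17_C: 24,228 instances, 50 gates on the critical path
print(chip_power_mW(24228, 0.2673),   # GSHE: ~6.5 mW
      chip_delay_ns(50, 1.83))        # GSHE: 91.5 ns
print(chip_power_mW(24228, 0.0615),   # MESO: ~1.5 mW
      chip_delay_ns(50, 0.2579))      # MESO: ~12.9 ns
\end{verbatim}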
\begin{table}[ht]
\centering
\scriptsize
\setlength{\tabcolsep}{1mm}
\caption{Comparison of Selected Emerging Device Primitives}
\label{tab:devices_PPA_comparison}
\input{figures/tab-comparison-GSHE}
\\[1mm]
The delay for Obfuscated MESO is the sum of the switching time for the intrinsic device (230 ps), the switching time for the interconnect (2.9 ps), and the delay induced by the peripheral MUXes.
The corresponding switching energies are 9.3 aJ for the device and 0.18 aJ for the interconnect;
these values are extracted from Table~4 of the supplementary material of~\cite{manipatruni2019scalable}.
The peripheral MUXes (shown in Fig.~\ref{fig:MESO_peripherals}) have been simulated using \textit{Cadence Virtuoso} for the
15-nm CMOS node using the NCSU FreePDK15 FinFET library, for a supply voltage of 0.8V.
The area for an intrinsic MESO device, without peripherals, is
$0.014$ $\mu \text{m}^2$~\cite{manipatruni2019scalable}.
\end{table}
\textbf{Results:}
We provide the comparison between selected emerging device primitives in Table~\ref{tab:devices_PPA_comparison}.
The results for full-chip camouflaging are presented in Table~\ref{tab:ppacomparison}.
We note that ASL-based~\cite{alasad2017leveraging} and GSHE-based~\cite{patnaik2019spin} full-chip camouflaging incur
excessive power and timing overheads.
On the other hand, MESO-based camouflaging
offers substantial reductions relative to
these spin devices and can be expected to perform even better than CMOS-based camouflaging schemes.\footnote{In general, for smaller camouflaging scales and
hybrid designs (i.e., emerging spin devices along with CMOS),
area and power gains would scale down accordingly, whereas performance will remain similar, given that
the emerging devices dominate the switching times.}
This is because the polymorphic MESO device consumes significantly lower switching energy, on the order of $\sim\!\!10$ attojoules,
due to its energy-efficient electric-field-driven reversal.
\begin{table}[ht]
\centering
\scriptsize
\setlength{\tabcolsep}{0.9mm}
\renewcommand{\arraystretch}{1.2}
\caption{Comparison between Area, Power, and Performance for ASL-based~\cite{alasad2017leveraging}, GSHE-based~\cite{patnaik2019spin},
and MESO-based full-chip camouflaging on selected ITC-99 benchmarks.
Absolute values are provided.
Area is in $\mu$m$^{2}$, Power in mW, and Delay in ns. N/A indicates not available.}
\label{tab:ppacomparison}
\begin{tabular}{*{10}{c}}
\hline
\textbf{Benchmark}
& \multicolumn{3}{c}
{\textbf{ASL-based~\cite{alasad2017leveraging}}} & \multicolumn{3}{c}
{\textbf{GSHE-based~\cite{patnaik2019spin}}}
& \multicolumn{3}{c}{\textbf{MESO-based}}\\
\cline{2-10}
& \textbf{Area} & \textbf{Power} & \textbf{Perf.}
& \textbf{Area} & \textbf{Power} & \textbf{Perf.}
& \textbf{Area} & \textbf{Power} & \textbf{Perf.} \\
\hline
b15\_C
& N/A & 2,702 & 54
& 223.6 & 2.1 & 71.4
& 183.1
& 0.5
& 10.1
\\ \hline
b17\_C
& N/A & 8,494 & 71
& 702.6 & 6.5 & 91.5
& 575.4
& 1.5
& 12.9
\\ \hline
\textit{b18\_C}
& N/A & 21,783 & 137
& 1,800.2 & 16.6 & 115.3
& 1,474.3
& 3.8
& 16.3
\\ \hline
\textit{b19\_C}
& N/A & 42,027 & 165
& 3,473.7 & 32.1 & 177.5
& 2,844.9
& 7.4
& 25
\\ \hline
\end{tabular}
\end{table}
We also compare synthesis-level PPA cost with a prior CMOS- and LUT-based scheme~\cite{baumgarten2010preventing} in Table~\ref{tab:ppacomparison_NEW}.
As mentioned in Section~\ref{sec:toward_dynamic_camo},
functional polymorphism can also be implemented using CMOS-based reconfigurable units, such as FPGA LUTs.
We source the implemented scheme of~\cite{baumgarten2010preventing} from the set of benchmarks provided in~\cite{subramanyan15}.
On average, the scheme~\cite{baumgarten2010preventing} incurs area and power overheads of 193\% and 206\%, respectively, over original designs.
The MESO-based reconfiguration scheme does not incur such area and power costs, but rather offers significant savings; only for delay/performance
does the MESO-based scheme incur a higher cost than the CMOS-based scheme.
Therefore, the use of MESO devices can offer significant advantages for dynamic camouflaging, especially for circuits which are not reliant on
high performance.
\begin{table}[tb]
\centering
\scriptsize
\setlength{\tabcolsep}{0.43mm}
\renewcommand{\arraystretch}{1.2}
\caption{Comparison between Area, Power, and Performance for LUT-based obfuscation~\cite{baumgarten2010preventing} and MESO-based primitive for dynamic reconfiguration on selected ISCAS-85 benchmarks.
Absolute values are provided.
Area is in $\mu$m$^{2}$, Power in mW, and Delay in ns.
}
\label{tab:ppacomparison_NEW}
\begin{tabular}{*{10}{c}}
\hline
\textbf{Benchmark}
& \multicolumn{3}{c}
{\textbf{Original (CMOS)}}
& \multicolumn{3}{c}
{\textbf{LUT-based (CMOS)~\cite{baumgarten2010preventing}}}
& \multicolumn{3}{c}{\textbf{MESO-based}}\\
\cline{2-10}
& \textbf{Area} & \textbf{Power} & \textbf{Perf.}
& \textbf{Area} & \textbf{Power} & \textbf{Perf.}
& \textbf{Area} & \textbf{Power} & \textbf{Perf.} \\
\hline
c432
& 164.65 & 0.03 & 2.79
& 543.71 & 0.11 & 2.96
& 4.89
& 0.01
& 5.67
\\ \hline
c880
& 239.93 & 0.03 & 3.31
& 780.98 & 0.12 & 3.48
& 6.34
& 0.02
& 7.48
\\ \hline
c1908
& 250.57 & 0.05 & 3.72
& 674.31 & 0.14 & 3.89
& 5.79
& 0.02
& 4.9
\\ \hline
c2670
& 396.87 & 0.06 & 3.16
& 1,193.0 & 0.18 & 3.24
& 10.31
& 0.03
& 7.22
\\ \hline
c3540
& 780.18 & 0.14 & 3.85
& 2,308.08 & 0.42 & 3.91
& 22.52
& 0.06
& 7.74
\\ \hline
c5315
& 1,029.95 & 0.17 & 3.63
& 2,764.01 & 0.43 & 3.73
& 28.76
& 0.07
& 6.45
\\ \hline
c7552
& 1,138.48 & 0.23 & 3.93
& 2,936.11 & 0.55 & 3.88
& 28.81
& 0.08
& 7.99
\\ \hline
\textbf{Average Cost} & -- & -- & -- &
\textbf{193\%} & \textbf{206\%} & \textbf{3\%} & \textbf{-97\%} & \textbf{-56\%} & \textbf{96\%} \\
\hline
\end{tabular}
\end{table}
\section{Conclusion and Future Work}
\label{sec:conclusion}
Functional polymorphism has been largely unexplored in the context of securing hardware.
We present \textit{dynamic camouflaging} as a novel design-for-trust technique, based on the foundations of run-time polymorphism and post-fabrication reconfigurability exhibited by emerging spin-based devices.
Dynamic camouflaging serves well to secure the supply chain
end-to-end, including the foundry, the test facility,
and the end-user.
We show that dynamic camouflaging is well suited
for securing error-tolerant IPs, such as image processors.
Finally, MESO-based full-chip camouflaging can offer
savings in PPA when compared to both ASL-based and GSHE-based camouflaging approaches.
As part of future work,
we aim to explore viable techniques for securing
non-error-tolerant systems, such as cryptographic applications and/or mission-critical systems,
via dynamic camouflaging.
Besides, we will explore the design and implementation of system-level control circuitry for dynamic camouflaging.
\section*{Acknowledgments}
The work of Satwik Patnaik was supported by the Global Ph.D. Fellowship at NYU/NYU AD.
Besides, this work was carried out in part on the HPC facility at NYU AD.
This work was supported in part by the
Semiconductor Research Corporation (SRC) and
the National Science Foundation (NSF)
through ECCS 1740136.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\subsection{Background and Motivation}
The tremendous growth in new wireless systems and services has caused a shortage in the useful radio spectrum~\cite{2012president}.
For instance, the Federal Communications Commission (FCC)
and the National Telecommunications and Information Administration (NTIA) proposed
to reallocate 150 megahertz of bandwidth in the 3.5 GHz band from surveillance and air-defense radars to shared
radar and communication systems~\cite{federal2012enabling,locke2010assessment,liu2021integrated,cui2021integrating}. Agile spectrum use and spectrum sharing among different radio systems provide promising technologies for alleviating the lack of useful spectrum.
By exploiting radar spatial degrees of freedom (DoFs), the joint transmit design of a multiple-input multiple-output matrix completion (MIMO-MC) radar and a point-to-point multiple-input multiple-output (MIMO) communication system~\cite{li2016joint,li2017joint} has received considerable attention. However, the influence of cross-system interference in joint radar-communication scenarios is still not thoroughly studied.
In terms of spectrum sharing in the 3.5 GHz spectrum and hardware similarity, a multicarrier radar waveform is considered to be one of the best choices for spectrum sharing between radar and the communication system. Indeed, a multicarrier waveform has already been widely accepted as a physical layer modulation solution both in communication systems~\cite{3gpp.36.331, IEEE2001,IEEE201} and in the radar field~\cite{levanon2000multifrequency,levanon2002multicarrier,sen2011multiobjective,sen2011adaptive}. Furthermore, a communication signal that is scattered or reflected by the target may also be utilized for radar sensing purposes~\cite{bica2016mutual,shi2017power}.
Currently, the interference signal can be projected into the nullspace of the channel matrix to avoid interference at the receiver. The majority of recent projection-based radar-communication coexistence techniques, e.g.,~\cite{sodagari2012projection,mahal2017spectral,khawar2014beampattern}, project either radar or communication signals into the other system's nullspace. In these works~\cite{sodagari2012projection,mahal2017spectral,khawar2014beampattern}, a multiuser communication system is simplified to a communication subsystem with combined signal spaces of all communication users, ignoring their mutual interference. Then, the radar subsystem searches for a nullspace or alternative nullspace of the communication subsystem
into which the cross-system interference is projected.
However, the feasibility of the precoder-only design depends on the availability of the channel state matrix and on high-quality feedback from the receiver to the transmitter. Furthermore, such precoder-only projection-based designs~\cite{sodagari2012projection,mahal2017spectral,khawar2014beampattern} only allow for avoiding interference at one subsystem but not at both.
In the case that radar and communication systems are jointly designed and co-located, the sharing of channel information and interference awareness can be easily arranged. Consequently, the transmitted waveforms and receiver processing for radar and communication systems can be jointly optimized. For example, precoders and decoders in radar and communication systems may be jointly designed to construct signal spaces and orthogonal interference spaces and to obtain more effective and flexible interference management.
\subsection{Contributions of This Work}
In this paper, we propose interference alignment (IA)~\cite{cadambe2008interference} based precoder and decoder co-design to manage interference for spectrum sharing between multicarrier radar~\cite{bica2016generalized} and communication systems. DoF exploitation and mutual interference between radar and communication systems are studied for the coexistence of radar and multiple communication users. The main contributions of this paper are as follows:
\begin{itemize}
\item The multicarrier model of~\cite{bica2016generalized} is extended to a more general setting in which multicarrier radar and communication systems coexist. In comparison to~\cite{bica2016generalized}, the resulting generalized multicarrier radar-communication signal model is applicable to a multicarrier radar-communication coexistence scenario and, thereby, shows the differences between multicarrier radar and communication waveforms.
\item {A joint precoder--decoder design is proposed using the max-SINR criterion and IA theory for a multicarrier-multiuser radar-communication coexistence scenario. {For receivers} {subject to} {cross-system interference,} the signal space and interference space are spanned by columns of the decoder. Consequently, mutual interference between radar and multiuser communication systems can be almost completely eliminated by the proposed joint~design.}
\item For $K$ communication users and one radar user in an interference channel, under the assumption that the IA constraint is feasible, the proposed joint precoder--decoder design is able to achieve $N_{sc}(K+1)/2$ total DoFs, which is the achievable DoF upper bound for the $(K+1)$-user interference channel with $N_{sc}$ subcarriers~\cite{wu2011degrees}. In other words, if radar waveforms and communication codebooks are appropriately designed, the proposed IA-based design achieves the optimal total information throughput for the entire radar-communication coexistence system. The radar subsystem can obtain better detection performance, diversity gain, and more interference-free DoFs compared to a subspace-based precoder-only design.
\end{itemize}
\subsection{Brief Overview of Related Work}
Various contributions have been presented in the radar and communication spectrum sharing literature. In general, these works can be classified into three main classes~\cite{chiriyath2017radar,bhattarai2016overview,li2022assisting,9540344}: codesign, cooperation and coexistence.
\begin{itemize}
\item \textbf{{Co-design}}:
When hardware sharing between radar and communication systems is possible, the two systems can be jointly designed to maximize their performances~\cite{cui2022optimal,liu2021cram,liu2021integrated}. One example is an Orthogonal Frequency Division Multiplexing (OFDM) dual-functional radar-communication system: information transmission and target localization tasks can be
independently and simultaneously accomplished by co-designed OFDM waveforms \cite{sit2011ofdm,sit2011extension}. Another example can be found in~\cite{blunt2010embedding, euziere2014dual, hassanien2015dual,hassanien2016signaling}, where communication information
is embedded into the sidelobe of the radar waveform to develop a co-designed dual-function system.
\item \textbf{Cooperation}: Limited information can be shared between radar and communication subsystems to improve resource efficiency rather than isolate the systems, e.g., to re-use shared spectrum through efficient interference management. Bliss et al. presented cooperative joint radar-communications inner bounds in~\cite{bliss2014cooperative,chiriyath2016inner,chiriyath2016jo} and extended this concept to MIMO systems~\cite{rong2017multiple}. Radar waveforms can be embedded as a pilot signal for communication systems in a doubly selective channel for detection and channel state estimation~\cite{harper2017performance}. From our perspective, the present work belongs to the cooperative class due to its limited information exchange, although we
aim to suppress the interference via a precoder--decoder co-design.
\item \textbf{Coexistence}: When radar and communication systems coexist and share spectrum in a non-cooperative mode, interference management becomes a key issue. Practically, if the interfering energy is weak or the signal structure is unknown, an interfering signal may be treated as noise, e.g., interference from a Wi-Fi transmitter to a radar receiver. Furthermore, physical separation was introduced in~\cite{lackpour2011overview,hessar2016spectrum} to reduce the interfering energy below the noise level. Most of the prior radar-communication spectrum sharing approaches address interference management by exploiting the orthogonality property~\cite{wang2008application,saruthirathanaworakun2012opportunistic,bica2015opportunistic,sodagari2012projection,mahal2017spectral,khawar2014beampattern} or designing radar and communication signals while guaranteeing acceptable performance~\cite{li2016joint,li2016optimum,li2017joint,zheng2017joint}. A robust precoder that minimizes power was proposed for coexistence between MIMO radar and downlink multiuser MIMO communication systems in~\cite{liu2017interference}. The works in~\cite{chiriyath2016inner,chiriyath2017radar} explore information-theoretic bounds for a single joint radar-communication user. The theoretical foundation of joint radar-communication research is established in these~studies.
\end{itemize}
However, the capacity of a multiuser radar-communication interference channel~\cite{carleial1975case,sato1981capacity,etkin2008gaussian} is still an open problem. An inspiring study~\cite{cadambe2008interference} showed that the IA scheme can achieve the total DoF upper bound of $\frac{K}{2}$ for a $K$-user interference channel. The key idea of IA is a linear precoding and decoding technique that designs the signal space and an orthogonal interference space for each user. Essentially, IA trades off DoFs between the signal space and the interference space in a multiuser interference channel.
The definition of DoF is clear in the communication literature but is rarely studied in the radar community. In general, radar extracts information from targets~\cite{bekkerman2006target}. Maximizing the DoF upper bound corresponds to maximizing the achievable number of interference-free measurements of the targets. When the independent random variables follow an exponential distribution and the Neyman--Pearson detection strategy is applied, maximizing the frequency DoFs corresponds to maximizing the diversity gain, because the diversity gain for each independent variable in the exponential family is 1~\cite{he2011diversity,mishra2022signal}. \textls[-25]{Although there is a large body of results in the radar-communication coexistence category, the problem of interference management has not been well investigated in multiuser communication scenarios considering mutual interference and DoF exploitation simultaneously.}
\textbf{Notation}: In this paper, matrices are denoted by capital letters, vectors are denoted by boldface, $Re\{ \cdot \}$ means the real part of a complex signal, $\circ$ is the Hadamard product, $\mathbf{A}_{[i]} $ means matrix $\mathbf{A}$ for the $i$th transmitter and receiver, $\mathbf{A}_{[ij]} $ means matrix $\mathbf{A}$ with respect to the $i$th radio transmitter and $j$th radio receiver, $\mathbf{A}_{[R]} $ means matrix $\mathbf{A}$ with respect to the radar only, $\mathbf{A}_{N \times M}$ denotes an $N \times M$-dimensional matrix, $\lfloor \cdot \rfloor$ is the floor function, $\mathbf{A}^{H} $ is the Hermitian transpose of matrix $\mathbf{A}$, and $\overline{\mathbf{A}}$ is matrix $\mathbf{A}$ on the reciprocal interference channel. Moreover, the math symbols are summarized in Abbreviations.
\section{Generalized Multicarrier Radar-Communication Coexistence Model}
In this section, we extend a generalized multicarrier radar signal model~\cite{bica2016generalized} to more general settings, where radar and communication systems coexist and share the same radio spectrum. We start from a single-input single-output (SISO) point-to-point generalized multicarrier signal model that captures multicarrier radar and multicarrier communication signals.
Then, the signal model is applied to $K+1$ radar and communication users, where one radar user coexists with $K$ communication users.
\subsection{Transmitter}
{\textls[-15]{The discrete-time signal model, {as detailed in Appendix A}, can be rewritten in a compact matrix~form as,}}
\begin{equation}\label{eqt}
\mathbf{Y_T} = (\bm \Omega \circ \mathbf{B}) \mathbf{P} \mathbf{C} \mathbf{S}
\end{equation}
where $\circ$ is the Hadamard product. $ \mathbf{Y_T}$ is an $ N_{sc} \times M$-dimensional sample matrix at the transmitter that occupies $M$ time slots and $N_{sc}$ subcarriers, with sample indices $k \in \{0,1,2,\dots,MN_{sc}-1\}$ of the transmit waveform, denoted as a resource block for communication or a pulse for radar. The matrix $\bm \Omega$ is an $N_{sc} \times N_{sc}$-dimensional selection matrix, in which the elements of $\bm \Omega$ are either 1 or 0 to activate or deactivate subcarriers for either the communication or the radar subsystem. Matrix $\mathbf{B}\in \mathbb{C}^{N_{sc} \times N_{sc} }$ is considered to be a modulation matrix. The matrices $\mathbf{P} \in \mathbb{C}^{N_{sc} \times N}$ and $\mathbf{C}\in \mathbb{C}^{N\times N_p} $ respectively denote the linear frequency precoding matrix that contains subcarrier weights and a coding
matrix, where $N_p$ denotes the length of the uncoded data sequence. Matrix $\mathbf{S}\in \mathbb{C}^{N_p\times M}$ contains payload data $\mathbf{S}= [ \mathbf{s}(1)\dots \mathbf{s}(M)]$, where $\mathbf{s}(m)$ is an $N_p$-dimensional data vector for the $m$th time slot.
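As a concrete illustration of the dimensions involved in \eqref{eqt}, the following sketch (a minimal NumPy example with arbitrary toy dimensions; an illustration, not an implementation from this work) assembles the transmit matrix $\mathbf{Y_T}$ for one resource block:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Nsc, N, Np, M = 8, 6, 4, 10   # subcarriers, precoder columns, data length, slots

n = np.arange(Nsc)
B = np.exp(2j * np.pi * np.outer(n, n) / Nsc)   # OFDM case: IDFT-type matrix
mask = rng.integers(0, 2, Nsc)                  # 1 = active subcarrier, 0 = off
Omega = np.outer(mask, np.ones(Nsc))            # selection matrix (rows on/off)
P = (rng.standard_normal((Nsc, N))
     + 1j * rng.standard_normal((Nsc, N))) / np.sqrt(2)
C = np.eye(N)[:, :Np]                           # trivial coding matrix, rate Np/N
S = (rng.standard_normal((Np, M))
     + 1j * rng.standard_normal((Np, M))) / np.sqrt(2)

Y_T = (Omega * B) @ P @ C @ S                   # '*' is the Hadamard product
print(Y_T.shape)                                # (8, 10) = (Nsc, M)
\end{verbatim}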
A block diagram of a multicarrier system for a co-designed radar and communication system is shown in Fig.~\ref{figure0}. This figure illustrates how the generalized multicarrier radar-communication signal model can be applied to generate signals at the transmitter and process them at the receiver. The input data are processed by the channel coder, precoder, IDFT, subcarrier selection, parallel-to-serial conversion and digital-to-analog converter to generate a multicarrier radar-communication signal. After decoding, the channel information is estimated in the communication subsystem, or target information is estimated in the radar subsystem. In practice, the selection matrix selects carriers to construct a multicarrier waveform. Various waveforms can be generated by
enabling or disabling its elements, such as a stepped approximation of an up-chirp pulse, a pseudo-random frequency-hopping pulse, or a multicarrier communication signal with comb-type pilot subcarriers. For example,
a widely used radar pulse or communication signal can be formed by deactivating resource block elements of $\bm \Omega$ \cite{bica2016generalized}.
\begin{figure}[H]
\centering
\includegraphics[width=3in]{ds.pdf}
\caption{{Multicarrier radio system for both multicarrier communication and multicarrier radar systems or a colocated radar-communication system. In the case of a radar or communication user, the dashed boxes of communication or radar in the block diagram are ignored. P/S (S/P) denotes parallel-to-serial or serial-to-parallel conversion, and DAC/ADC represents a digital-to-analog converter or analog-to-digital converter.}}
\label{figure0}
\end{figure}
In an OFDM system, matrix $\mathbf{B}$ is a Vandermonde matrix, and its elements are associated with subcarriers as follows \cite{bica2016generalized}:
\begin{equation} \label{eqB}
\mathbf{B} =
\begin{pmatrix}
1 & 1 &\cdots & 1 \\
1 & \beta &\cdots & \beta^{N_{sc}-1} \\
1 & \beta^{2} &\cdots & \beta^{2(N_{sc}-1)} \\
\cdots & \cdots & \cdots & \cdots \\
1 & \beta^{N_{sc}-1} &\cdots & \beta^{(N_{sc}-1)(N_{sc}-1)}
\end{pmatrix},
\end{equation}
with each element given by $\beta^{(k \text{ mod } N_{sc})n}$, where $n$ is the carrier index running over the columns and $(k \text{ mod } N_{sc})$ is the sampling index running over the rows. Moreover, $\beta = e^{j2\pi \Delta f(\frac{T_c}{N_{sc}})}$
denotes the baseband subcarrier without a carrier/sampling index, in which $T_c$ is the symbol duration of the multicarrier system, and $\Delta f$ is the subcarrier spacing. The same active/inactive
subcarrier pattern is {applied} for the time duration of each radar subpulse/communication symbol. This pattern justifies the ($k$ mod $N_{sc}$) time duration above. In the special case of an OFDM signal,
matrix $\mathbf{B}$ is an inverse discrete Fourier transform (IDFT) matrix, $ \mathbf{B}^{H}\mathbf{B} =\mathbf{I}_{N_{sc}\times N_{sc}}$, where $T_c = \frac{1} {\Delta f}$.
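The following short check constructs $\mathbf{B}$ of \eqref{eqB} for the OFDM case (an illustrative sketch; the $1/\sqrt{N_{sc}}$ rescaling is an assumption that makes the IDFT matrix unitary, matching $\mathbf{B}^{H}\mathbf{B}=\mathbf{I}$ up to that normalization convention):
\begin{verbatim}
import numpy as np

Nsc = 8
n = np.arange(Nsc)
beta = np.exp(2j * np.pi / Nsc)        # Delta_f * T_c = 1 in the OFDM case
B = beta ** np.outer(n, n)             # Vandermonde matrix of Eq. (2)

# The unnormalized IDFT matrix satisfies B^H B = Nsc * I; rescaling by
# 1/sqrt(Nsc) yields the unitary convention B^H B = I used in the text.
assert np.allclose(B.conj().T @ B, Nsc * np.eye(Nsc))
B_unit = B / np.sqrt(Nsc)
assert np.allclose(B_unit.conj().T @ B_unit, np.eye(Nsc))
\end{verbatim}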
In a multicarrier communication
system, the precoder functions as a multi-mode beamformer that allocates signal
power according to channel quality, similar to the MIMO spatial precoder in~\cite{vu2007mimo}. Furthermore, the power leakage can be minimized by an orthogonal precoding design~\cite{ma2011optimal}. For multicarrier radar,
$\mathbf{P}$ is applied as a multimode beamformer, projecting the radar signal into a designed subspace. Moreover, the data channel coding rate is $\frac{N_p}{N}$ when coded data are transmitted in parallel, and $N-N_p$ is the coding redundancy~\cite{declercq2014channel}. The code rate upper bound of coding matrix $\mathbf{C}$ is given in~\cite{polyanskiy2010channel}. Since channel coding increases the transmission reliability by introducing redundancy, the coded sequence is at least as long as the uncoded one, i.e., $\frac{N_p}{N} \leq 1$. Matrix $\mathbf{C}$ facilitates pulse compression
and consequently improves the radar range resolution. In the communication subsystem, it is instead a channel coder that improves the reliability of the transmission \cite{bica2016generalized}. In this paper, we employ the IA criterion to cancel the interference for both systems.
As the transmitted waveform is always known at the radar receiver, without loss of generality, we assume that the radar data matrix is an all-ones matrix. Then, \mbox{$\mathbf{S} \mathbf{S}^{H} = M\mathbf{1}_{N_p\times N_p}$} for the radar signal. The transmitted payload data in the communication system are unknown to the receiver. They may be modeled with a complex Gaussian distribution, while the power is considered to be $\mathbb{E}(\mathbf{S}\mathbf{S}^{H})= \sigma_S^2 \mathbf{I}_{N_p\times N_p}$.
\subsection{Receiver}
Both the multicarrier radar and communication receiver in the {co-designed} system will obtain a radar response from the target reflection or an observation from an active transmitter in a communication system. For a radar receiver, assume that a single moving target
is located at direction $\theta$. By ignoring interference, a general form of the received signals before decoding may be written as follows:
\begin{equation}\label{eq2}
\mathbf{Y_{R}}
= \mathbf{H}( \bm \theta ) (\bm \Omega\circ \mathbf{I}_{N_{sc} \times N_{sc}}) \mathbf{P}
\mathbf{C}\mathbf{S}+\mathbf{W},
\end{equation}
where matrix $\mathbf{Y_R}$ is an $N_{sc} \times M$-dimensional received signal matrix. $\mathbf{W}$ denotes the \mbox{$N_{sc} \times M$}-dimensional complex white Gaussian noise that obeys the complex Gaussian distribution $\mathcal{CN}(\bm 0, \delta_W \mathbf{I}_{N_{sc} \times N_{sc}})$. A detailed derivation of the model is presented in Appendix \ref{app2}.
The channel frequency response matrix denoted by $\mathbf{H}(\bm \theta) \in \mathbb{C}^{N_{sc} \times N_{sc}}$ can be estimated for each $L$-symbol duration under the block fading channel assumption. When an OFDM signal is employed, the channel for each subcarrier is assumed to be frequency flat, and the channel state matrix is the diagonal matrix $\mathbf{H}(\bm \theta)= \text{diag} \{ h_1(\theta), h_2(\theta), \dots ,h_{N_{sc}}(\theta)\}$, where $h_1(\theta)$ is the channel frequency response at the first subcarrier.
Considering the radar task at hand, the channel state matrix describes the combined effects of
target response, scattering, channel fading, Doppler shift, and power decay with distance. Among these parameters, the direction of arrival (DoA), {delay}, Doppler shift, and target response are of particular interest for the radar system. Note that if multi-antenna transceivers are used, spatial processing, such as DoA estimation may be performed. If a single-antenna system is employed, the DoAs can be estimated using a mechanically rotating directional antenna, as in many classical radar systems~\cite{skolnik1970radar}. The communication receiver treats Doppler shifts, multipath effects, and power decay as channel distortion. To ensure reliability of data transmission, channel coding and frequency precoding are typically employed. One needs to design a channel coding matrix $\mathbf{C}$ and a precoding matrix $\mathbf{P}$ to address the dynamic {nature of} ${\textbf{H}(\bm{\theta})} $.
In particular, for a co-located radar-communication node as shown in Figure~\ref{figure2}, the communication subsystem could easily share a channel state matrix with a radar subsystem.
Moreover, in cooperative settings, if the target also carries a communication system, the performance of both channel and target parameter estimation may benefit from a joint estimation procedure. A practical example of such a system is a vehicular radar combined with a vehicle-to-vehicle communication network.
\begin{figure}[H]
\centering
\includegraphics[width=3.5in]{ic.pdf}
\caption{A practical vehicle networking example, in which the colocated radar-communication system coexists with a multiuser communication network}
\label{figure2}
\end{figure}
\subsection{Multiuser Radar-Communication Spectrum Sharing Scenario}
In this subsection, we consider a simplified $K+1$-user radar-communication spectrum sharing scenario, which consists of $K$ SISO communication users and one SISO multicarrier mono-static radar user. They share $N_{sc}$ subcarriers in the same frequency band.
Consequently, the total number of possible transmitter and receiver (TX-RX) pairs (including signal channels and interference channels) is $(K+1)^2$. Each user is considered to be an intended TX-RX pair (matched link) transmitting a useful signal; hence, the number of useful signal channels is $K+1$. Interference occurs in the unintended TX-RX pairs (mismatched links), which may be from the radar TX to a communication RX, from a communication TX to the radar RX, or from a communication TX to another, unintended communication RX. In a nutshell, the interference channels correspond to the $K(K+1)$ unintended links.
A mono-static radar is co-located with one of these communication nodes. The co-designed radar-communication system is treated as separate cooperating radar and communication subsystems, as illustrated in Fig.~\ref{figure2}. In this network configuration, we denote by $\mathcal{A}$ the set of communication users and by $\mathcal{B}$ the set of radar users, where
$\vert \mathcal{A} \vert = K, \vert \mathcal{B} \vert = 1 $. Here, $\vert \mathcal{A} \vert$ denotes the cardinality of set $\mathcal{A}$.
The co-located multicarrier mono-static radar illuminates one target in the direction $\theta$. Note that the radar is equipped with a highly directional antenna or a phased array and is capable of forming narrow beams, such that it can only interfere with part of the communication nodes.
Recall that the signals are experiencing a block fading channel. Before decoding, the received multicarrier signal at the $i$th receiver in the $K+1$-user radar-communication coexistence scenario may be written as follows:
\vspace{12pt}
\begin{equation}\label{eq3}
\begin{split}
\mathbf{Y}_{\mathbf{R}[i]} & = \underbrace{\mathbf{H}_{[R]}(\theta) (\bm \Omega_{[R]}
\circ \mathbf{I}_{N_{sc} \times N_{sc}} )\mathbf{P}_{[R]}\mathbf{C}_{[R]} \mathbf{S}_{[R]}}_{\text{Radar Signal}}\\
&+\underbrace{\sum_{j \in \mathcal{A}} \mathbf{H}_{[ij]} (\bm \Omega_{[j]}\circ \mathbf{I}_{N_{sc} \times N_{sc}})
\mathbf{P}_{[j]}\mathbf{C}_{[j]} \mathbf{S}_{[j]}}_{\text{Communication Signal}} \\
& +\mathbf{W}_{[i]}\\
\end{split}
\end{equation}
where $ \mathbf{Y}_{\mathbf{R}[i]}$ is the $N_{sc} \times M$-dimensional received signal for the $i$th user. Here, the index $i$ may refer to any of the $K$ communication receivers or to the radar receiver, $i \in \mathcal{A} \cup \mathcal{B}$. $\bm \Omega_{[R]}$ denotes the selection matrix associated with adaptive waveform design, e.g., a pseudo-random pulse or multi-pulse radar. Due to the block fading channel assumption, the channel coherence interval is $L$ pulses/resource blocks. The channel coding matrix $\mathbf{C}$ remains unchanged because it is designed based on $\mathbf{H}$ in both the communication and radar subsystems. Consequently, $\bm \Omega_{[R]}$, $\bm \Omega_{[j]}$, $\mathbf{P}$ and $\mathbf{S}_{[j]}$ may vary over the $L$ pulses/resource blocks.
To describe {this scenario} in detail, some key conditions are stated as follows:
\begin{itemize}
\item Channel State Information (CSI): The channels $\mathbf{H}_{[ii]}$ and $\mathbf{H}_{[ij]}$ and the target response matrix ${\textbf{H}(\bm{\theta})}_{[ii]}$ are considered to be perfectly estimated and fed back to the communication and radar transmitters, respectively. CSI estimation is commonly employed in communication systems. For radar, one feasible method is to treat a known radar waveform as a shared pilot in both radar and communication systems. The pilot-aided approach can estimate all the channel information between radar and communication users~\cite{harper2017performance}. Another approach is to embed the same pilot signal in both the radar coherence interval and communication frames~\cite{li2017joint}.
Moreover, channel reciprocity may be exploited for interference channels.
\item Synchronization: Both radar and communication systems are assumed to be synchronized. If they are colocated, then they may share the same clock, in which case, synchronization is not an issue. The other subsystems need to be synchronized in a similar manner to any multiuser communication system; the clock synchronization may be easier in communication systems but still feasible for radar. Existing radar clock synchronization technology may be employed, such as using a Global Navigation Satellite System (GNSS)~\cite{yulin2006synchronization,wang2009gps}, using a pilot signal~\cite{wang2007approach} or using an OFDM frame~\cite{schmidl1997robust} to achieve time and frequency synchronization~\cite{sit2011ofdm}.
\item Shared Information: As discussed above, the following information is shared among all users: the selection matrix $\bm \Omega^{(l)}$ and the communication power $\sigma_S^2 \mathbf{I}_{N_p\times N_p}$. This shared information is employed to calculate the transmitted signal power and consequently to solve the optimization problem formulated in the next section. Alternatively, the transmitted signal power can simply be limited by a power constraint.
\item Doppler and schedule: The Doppler shift is assumed to be constant during a coherent interval of $L$ pulses. The feedback of the channel state matrix, the transmission of clock synchronization and the shared information call for a protocol for exchanging information between the communication and radar subsystems. Providing channel feedback is common in most modern communication systems and part of the standards. A radar system can also take advantage of feedback and estimate channels. One feasible approach is to transmit this information between radar coherent intervals~\cite{li2017joint}.
\end{itemize}
\section{Max-SINR Joint Precoder--Decoder Design}
IA is an emerging DoF-based interference management technique in wireless communications that aligns the interference caused by other users in an interference signal subspace that is orthogonal to
the user-desired signal subspace~\cite{gomadam2011distributed}. This technique can be applied in the time, frequency or spatial domains. Furthermore, in a high-SNR regime, this technique achieves strong interference elimination and offers a $\frac{K}{2}$ achievable total DoF upper bound in the $K$-interference-communication-user scenario without frequency/time/spatial symbol extension~\cite{cadambe2008interference,wu2011degrees}. Symbol extension here means adding diversity, for example, by expanding bandwidth or adding antennas.
In contrast, the traditional orthogonal spectrum allocation, which allocates a nonoverlapping signal to each user, can only obtain $\frac{1}{K}$ of the interference-free DoFs.
In this section, we propose a joint precoder--decoder design using the max-SINR criterion and IA approach to solve the radar-communication {spectrum sharing} problem.
Communication receivers are interfered with by radar via a direct path. As the radar signal reflected from the target may also have a high power, we can treat the scattered radar signal as noise for the communication receiver and consequently consider a medium-SNR scenario. IA in a high-SNR regime typically focuses on
minimizing the leakage interference while ignoring noise. However, in this radar-communication coexistence problem in a medium-SNR regime, it is necessary to take the noise into account. Hence, we employ the max-SINR criterion for our design.
\subsection{Ideal IA Constraints}
{The ideal IA constraints in a multicarrier radar-communication coexistence scenario can be written as {follows}~\cite{gomadam2011distributed}}
\begin{subequations}\label{IA1}
\begin{gather}\label{IA11}
\mathbf {Q}_{[i]}^H \mathbf{H}_{[ij]} \mathbf{P}_{[j]} = \mathbf{0}_{{d_{[i]} \times d_{[j]}}} , \\
\text{rank}( \mathbf{Q}_{[i]}^H \mathbf{H}_{[ii]} \mathbf{P}_{[i]} ) = d_{[i]} , \label{IA12}\\
\forall i \neq j ; \forall i,j \in \mathcal{A} \cup \mathcal{B} ,\label{IA14}
\end{gather}
\end{subequations}
where $ \mathbf{0}_{{d_{[i]} \times d_{[j]}}}$ is a $d_{[i]} \times d_{[j]}$ zero matrix and $\mathbf{Q}_{[i]}$ is an $N_{sc} \times N_{[i]}$-dimensional decoder matrix of the $i$th communication node; recall that $N_{[i]}=d_{[i]}$, where
$d_{[i]}$ denotes the number of user-desired DoFs for the $i$th
user. The channel matrix of the interference channel between the $i$th radio node and the $j$th radio node is $\mathbf{H}_{[ij]}$, which is known to both the radar and communication subsystems, and the matrix of the signal channel at the $i$th user is $\mathbf{H}_{[ii]}$. When $i \in \mathcal{B}$, $\mathbf{H}_{[ii]}$ is a target response matrix $\mathbf{H}_{[ii]}(\theta)$, which is of interest to the radar subsystem.
Constraint \eqref{IA11} means that the precoder is designed to project the interference from the $j$th radio transmitter into the $i$th receiver's nullspace. The nullspace is designed by decoder $\mathbf{Q}_{[i]}$ to align interference from all $j$ transmitters, $ \forall i \neq j ; \forall j \in \mathcal{A} \cup \mathcal{B}$. Equation \eqref{IA12} implies that the designed signal space must provide the number of desired DoFs. The ideal IA constraints design the precoders $\mathbf{P}_{[j]}$ to project interference such that it is aligned in the nullspace of $\mathbf{Q}_{[i]}$ and design $\mathbf{Q}_{[i]}$ to guarantee that the signal space and interference space exist. Note that \eqref{IA12} must be enforced explicitly here because the channel state matrix of the OFDM channel is diagonal, whereas in MIMO configurations constraint \eqref{IA12} is almost surely satisfied automatically and can therefore be ignored.
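A direct numerical check of the two conditions in \eqref{IA1} can be sketched as follows; the dictionary-based interface of the hypothetical helper \texttt{ia\_residuals} (not from the paper) maps user indices to channels, precoders and decoders:
\begin{verbatim}
import numpy as np

def ia_residuals(H, P, Q):
    """Check the IA conditions of Eq. (5) for an interference channel.

    H : dict mapping (i, j) -> channel matrix from transmitter j to receiver i
    P : dict mapping j -> precoder (Nsc x d_j)
    Q : dict mapping i -> decoder  (Nsc x d_i)
    Returns the worst-case interference leakage and the signal-space ranks.
    """
    users = sorted(Q.keys())
    leak = max(np.linalg.norm(Q[i].conj().T @ H[i, j] @ P[j])
               for i in users for j in users if i != j)
    ranks = {i: np.linalg.matrix_rank(Q[i].conj().T @ H[i, i] @ P[i])
             for i in users}
    return leak, ranks
\end{verbatim}
A perfectly aligned solution would show a leakage close to zero while every rank equals the desired $d_{[i]}$.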
\subsection{Reciprocity and Feasibility of IA}
{In a time division duplex (TDD) radio link}, the roles of the receiving and transmitting antennas are functionally interchanged, while the instantaneous transfer characteristics of the radio channel remain unchanged.
This channel reciprocity can be exploited in IA design. The reciprocity of IA~\cite{gomadam2011distributed} comes from the identity of the IA constraints between the original interference channel and the reciprocal interference channel, where the original precoder and decoder are considered to be the reciprocal decoder and precoder, respectively. In other words, the IA constraints are still feasible after the signal direction is reversed. Using reciprocity, the IA constraints of \eqref{IA1} can be written as follows:
\begin{equation} \label{IAintuitiveReceprocal}
\overline{\mathbf {Q}}_{[j]}^H \overline{\mathbf{H}}_{[ji]} \overline{\mathbf{P}}_{[i]} = \mathbf{0}, \quad
\mathbf{\overline{\widetilde{H}}}_{[ii]} = \overline{\mathbf {Q}}_{[i]}^H \overline{\mathbf{H}}_{[ii]} \overline{\mathbf{P}}_{[i]}, \quad
\forall i \neq j ;\ \forall i,j \in \mathcal{A} \cup \mathcal{B},
\end{equation}
\textls[-35]{where $\overline{\mathbf{P}}$, $\overline{\mathbf {Q}}$ and $\overline{\mathbf{H}}_{[ii]}$ denote the precoder, decoder and channel matrix on the reciprocal channel, respectively, with $\overline{\mathbf {Q}}_{[i]}=\mathbf {P}_{[i]}$ and $\overline{\mathbf {P}}_{[i]}=\mathbf {Q}_{[i]}$,} $\forall i \in \mathcal{A} \cup \mathcal{B}$. Furthermore, $\mathbf{{0}}_{{d_{[i]} \times d_{[j]}},[ij]}$ denotes the interference nullspace from the $i$th receiver to the $j$th transmitter on the reciprocal channel. $\mathbf{\overline{\widetilde{H}}}_{[ii]}$ denotes the channel matrix of the $i$th user itself on the reciprocal channel. The reciprocity of IA does not change the user-desired DoFs of each user while projecting the undesired signal into the nullspace. However, it plays an important role in the distributed iterative algorithm proposed in this paper.
As with the communication signal, the DoFs of the radar subsystem need to be properly chosen to realize IA. The radar subsystem also desires more available DoFs, which facilitate more flexible waveform design and a higher diversity gain. In this paper, we assume that the number of desired DoFs is predetermined. The DoFs of the radar need to satisfy the feasibility condition of IA, which is written as follows:
\begin{equation}\label{IA13}
\begin{cases}
&d_{[R]}+d_{[i]} \leq N_{sc} ,\forall i\in \mathcal{A} \cup \mathcal{B} \\
&2d_{[R]}(N_{sc}-d_{[R]})-\sum_{i\in \mathcal{A}_{sub}}d_{[R]}d_{[i]} \geq 0,\forall \mathcal{A}_{sub}\subset \mathcal{A} \cup \mathcal{B}. \\
\end{cases}
\end{equation}
If IA is feasible, then the dimension of the projection between the strategy space and the channel matrix must be non-negative~\cite{bresler2011feasibility}. This condition can easily be formulated based on Theorem 2 in~\cite{bresler2011feasibility}. Note that \eqref{IA13} is a necessary condition. IA requires finding feasible signal strategies while the channel matrix is fixed. The feasibility analysis, however, proceeds the other way around: the strategy is fixed, and one studies for which channel matrices that strategy is feasible. The space of strategies can be represented by a product of Grassmannians (see~\cite{bresler2011feasibility}). Furthermore, if $d_{[i]}=d_{[R]}=d$, i.e., the desired DoFs of the communication and radar nodes are identical and equal to $d$, the IA is feasible if and only if
\begin{equation}\label{IAfe}
d \leq \frac{2N_{sc}}{K+1}.
\end{equation}
This condition degenerates into a necessary condition when $d_{[i]}=d_{[R]}=d=1, K\geq 3$, where each user calls for a single data stream (Theorem 1,~\cite{du2014feasibility})
\begin{equation}
K \leq 2N_{sc}-2.
\end{equation}
This bound is slightly tighter than the degenerate form of \eqref{IAfe}, i.e., $K \leq 2N_{sc}-1$, because in the multicarrier scenario every IA solution leaves at least one channel unused by any transmitter. The general feasibility condition of a proper IA network with multiple streams for each user remains an open problem.
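The necessary condition \eqref{IA13} is straightforward to test numerically. The following sketch (a hypothetical helper enumerating the subsets appearing in \eqref{IA13}; it verifies only this necessary condition, not full IA feasibility) checks the DoF allocation used later in the simulation section:
\begin{verbatim}
from itertools import combinations

def radar_ia_feasible(d_R, d_comm, Nsc):
    """Necessary IA feasibility test of Eq. (7) (a sketch, not exhaustive).

    d_R    : DoFs desired by the radar user
    d_comm : list of DoFs desired by the K communication users
    Nsc    : number of shared subcarriers
    """
    all_d = [d_R] + list(d_comm)
    if any(d_R + di > Nsc for di in all_d):
        return False
    for r in range(len(all_d) + 1):
        for sub in combinations(all_d, r):
            if 2 * d_R * (Nsc - d_R) - sum(d_R * di for di in sub) < 0:
                return False
    return True

# Three communication users with one DoF each, radar with three DoFs,
# on Nsc = 8 subcarriers (the setting used in the simulation section).
print(radar_ia_feasible(3, [1, 1, 1], Nsc=8))   # True
\end{verbatim}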
\subsection{Distributed {Max-SINR} Precoder--Decoder Design}
Based on IA theory, \eqref{IA11}--\eqref{IA14} could then be formulated as an {interference minimization problem {as follows:}}
\begin{equation}
\begin{aligned}
\underset{\mathbf{P}_{[i]},\mathbf{Q}_{[i]}}{\min} &Tr(\mathbf{Q}_{[i]}^{H} \mathbf{H}_{[ij]}
\mathbf{P}_{[j]}) \label{1a} \\
s.t. \quad
&\text{rank}( \mathbf{Q}_{[i]}^{H} \mathbf{H}_{[ii]} \mathbf{P}_{[i]}) = d_{[i]}.
\end{aligned}
\end{equation}
However, the interference-minimization criterion is not optimal when noise and the radar channel matrix are taken into account.
Instead, \eqref{IA11}--\eqref{IA14} can be formulated as an SINR optimization problem as follows:
\begin{subequations}\label{IAoptimal}
\begin{alignat}{1}
\underset{\mathbf{P}_{[i]},\mathbf{Q}_{[i]}}{\max} &Tr(\frac{\mathbf{Q}_{[i]}^{H} \mathbf{H}_{[ii]}
\mathbf{A}_{[i]}\mathbf{A}_{[i]}^{H} \mathbf{H}_{[ii]}^H
\mathbf{Q}_{[i]} }{
\mathbf{Q}_{[i]}^{H} (\sum_{j=1}^{Z_{[i]}} \mathbf{H}_{[ij]} \mathbf{A}_{[j]}\mathbf{A}_{[j]}^{H} \mathbf{H}_{[ij]}^H+ \delta_W \mathbf{I}
) \mathbf{Q}_{[i]} })\label{IAo1}\\
s.t. \quad
&\text{rank}( \mathbf{Q}_{[i]}^{H} \mathbf{H}_{[ii]} \mathbf{P}_{[i]}) = d_{[i]}, \label{IAo2}
\end{alignat}
\end{subequations}
{where $Z_{[i]}$ is the number of interfering sources for the $i$th receiver.} Projecting interference into the designed nullspace with fixed SNR implies maximizing the SINR as \eqref{IAo1}. Let us recall \eqref{eqt};
the signal power emitted from the $i$th transmitter may be written as
\begin{equation}\label{calculateA}
\mathbf{A}_{[j]} \mathbf{A}_{[j]}^{H}=
\begin{cases}
M (\bm \Omega_{[R]}\circ \mathbf{I}_{N_{sc} \times N_{sc} })
\mathbf{P}_{[R]} Tr(\mathbf{C}_{[R]} \mathbf{1} \mathbf{C}_{[R]}^{H}) \mathbf{P}_{[R]}^{H} (\bm \Omega_{[R]} \circ \mathbf{I}_{N_{sc} \times N_{sc} })^H
\text{, if } j \in \mathcal{B} \\
\sigma_{S}^2(\bm \Omega_{[j]}\circ \mathbf{I}_{N_{sc} \times N_{sc} })
\mathbf{P}_{[j]} Tr(\mathbf{C}_{[j]} \mathbf{C}_{[j]}^{H}) \mathbf{P}_{[j]}^{H} (\bm \Omega_{[j]} \circ \mathbf{I}_{N_{sc} \times N_{sc} })^H
\text{, if } j \in \mathcal{A}.
\end{cases}
\end{equation}
where $\mathbf{1}$ is an $N_p \times N_p$-dimensional all-ones matrix. Recall that the set $\mathcal{A}$ contains the communication users. Some of these communication receivers are interfered with by radar, denoted as the subset $\mathcal{A}_r$, and the complementary set of communication users not interfered with by radar is denoted as $\mathcal{A}_c$. If the $i$th communication receiver is interfered with by radar, $i \in \mathcal{A}_r$, it experiences interference from $Z_{[i]} = K-1$ communication transmitters
and one radar transmitter.
If this receiver is not interfered by radar, $i \in \mathcal{A}_c$, its interference
is caused by $Z_{[i]} = K-1$ communication transmitters only.
If this receiver is a radar receiver, then it is subject to interference from all $Z_{[i]} =K$ communication transmitters.}
Here, we use the commutative law of the Hadamard product and the generalized radar-communication coexistence signal model in Section \uppercase\expandafter{\romannumeral2}-C.
The trace term in \eqref{calculateA} corresponds to the signal power before the precoding and modulation operation.
The objective function is maximized in an iterative manner. In each iteration, we maximize the objective function \eqref{IAo1} to find $\mathbf{Q}_{[i]}$ at each receiver. Then, at each transmitter, we will find the original $\mathbf{P}_{[i]}$ according to \eqref{IAo1}. At this time, \eqref{IAo1} can be further simplified to accelerate the calculations.
The interference plus noise covariance matrix for the $i$th receiver may be written {as~follows}:
\begin{equation}\label{calculateB}
\mathbf{D}_{[i]} =\sum_{j=1}^{Z_{[i]}}\mathbf{H}_{[ij]} \mathbf{A}_{[j]} \mathbf{A}_{[j]}^{H} \mathbf{H}_{[ij]}^H+ \delta_W \mathbf{I}.
\end{equation}
Recall that all terms except $\mathbf{Q}_{[i]}$ are fixed when solving for $\mathbf{Q}_{[i]}$. When maximizing \eqref{IAoptimal}, $\mathbf{Q}_{[i]}$ is normalized to bound the magnitude of the decoder entries (eigenvectors) and to simplify the calculations. It may be given by the following:\vspace{12pt}
\begin{equation}\label{calculateQ}
\mathbf{Q}_{[i]} = \frac{\mathcal{V}_{d_{[i]}}( \mathbf{D}_{[i]}^{-1}\mathbf{H}_{[ii]} \mathbf{P}_{[i]}\mathbf{P}_{[i]}^{H}\mathbf{H}_{[ii]}^{H})}{\|\mathcal{V}_{d_{[i]}}( \mathbf{D}_{[i]}^{-1}\mathbf{H}_{[ii]} \mathbf{P}_{[i]}\mathbf{P}_{[i]}^{H}\mathbf{H}_{[ii]}^{H})\| },
\end{equation}
where $\mathcal{V}_{d_{[i]}}(\mathbf{A})$ denotes the eigenvectors corresponding to the $d_{[i]}$ smallest eigenvalues of $\mathbf{A}$.
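The covariance \eqref{calculateB} and the decoder update \eqref{calculateQ} can be sketched as follows (an illustrative NumPy fragment; the dictionary interface for the channels $\mathbf{H}$ and the power factors $\mathbf{A}_{[j]}$ is an assumption, and the selection of the $d_{[i]}$ smallest eigenvalues follows \eqref{calculateQ} as written):
\begin{verbatim}
import numpy as np

def interference_covariance(i, H, A, noise_var, interferers):
    """Interference-plus-noise covariance D_[i] at receiver i.

    H maps (receiver, transmitter) pairs to channel matrices; A maps a
    transmitter j to a matrix with A[j] @ A[j]^H equal to its signal
    power term. Both interfaces are illustrative, not from the paper.
    """
    Nsc = H[i, i].shape[0]
    D = noise_var * np.eye(Nsc, dtype=complex)
    for j in interferers:
        D = D + H[i, j] @ A[j] @ A[j].conj().T @ H[i, j].conj().T
    return D

def decoder(i, H, P, D, d_i):
    """Decoder update: eigenvectors of D^{-1} H P P^H H^H associated,
    per the paper's convention, with the d_i smallest eigenvalues,
    followed by normalization."""
    M = np.linalg.solve(D, H[i, i] @ P[i] @ P[i].conj().T @ H[i, i].conj().T)
    w, V = np.linalg.eig(M)
    Q = V[:, np.argsort(np.abs(w))[:d_i]]
    return Q / np.linalg.norm(Q)
\end{verbatim}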
The objective function in \eqref{IAoptimal} is not convex. However, we can find a solution by exploiting channel reciprocity and employing a distributed iterative algorithm. This algorithm is guaranteed to converge, but it may not necessarily find the global optimum~\cite{gomadam2011distributed}. By taking all radio nodes into account, we can write an algorithm for solving the optimization problem in \eqref{IAoptimal}.
See Algorithm~\ref{alg:A} for detailed steps. The reciprocal interference channel is still IA feasible by choosing the original precoder as a decoder of the reciprocal interference channel and the original decoder as a precoder, respectively. The algorithm finds $\mathbf{Q}_{[i]}$ and $\mathbf{P}_{[i]}$ iteratively in two stages.
\begin{algorithm}
\caption{Max-SINR design algorithm.}
\label{alg:A}
\begin{algorithmic}[1]
\STATE {Estimate radar channel $\mathbf{H}_{[R]}$ and radar's interference channel $\mathbf{H}_{[iR]}$ }
\STATE {Initialize $\mathbf{P}_{[i]}$,$\mathbf{Q}_{[i]}$ with independent row vectors, $\forall i\in \mathcal{A} \cup \mathcal{B}$}
\REPEAT
\REPEAT
\STATE Identify location and type of $i$th node, $\forall i\in \mathcal{A} \cup \mathcal{B} $
\STATE Choose $Z_{[i]}$ according to its location and type
\STATE Calculate transmitted signal power according to equation \eqref{calculateA}
\STATE Calculate the interference plus noise covariance matrix $\mathbf{D}_{[i]}$ according to equation \eqref{calculateB}
\STATE Find $N_{sc} \times d_{[i]} $ matrix $\mathbf{Q}_{[i]}$ on each receiver according to \eqref{calculateQ}
\UNTIL{All $\mathbf{Q}_{[i]}$ are found}
\REPEAT
\STATE Use channel reciprocity
\STATE Identify location and type of $i$th node, $\forall i\in\mathcal{A} \cup \mathcal{B} $
\STATE Choose $Z_{[i]}$ according to its location and type
\STATE Calculate power of reciprocal transmitted signal according to equation \eqref{calculateA}
\STATE Calculate the interference plus noise covariance matrix $\mathbf{D}_{[i]}$ of the reciprocal signal according to equation \eqref{calculateB}
\STATE Calculate $N_{sc} \times d_{[i]} $ matrix $\mathbf{P}_{[i]}$ on each reciprocal receiver according to \eqref{calculateQ}
\UNTIL{All $\mathbf{P}_{[i]}$ are calculated}
\STATE Check rank of $\mathbf{Q}_{[i]}^{H} \mathbf{H}_{[ii]} \mathbf{P}_{[i]}$ to verify \eqref{IAo2}.
\UNTIL{\eqref{IAo2} is satisfied and \eqref{IAoptimal} has converged, or the maximum allowed iteration count is reached.}
\end{algorithmic}
\end{algorithm}
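A compact sketch of the alternating loop of Algorithm~\ref{alg:A}, built on the helper functions sketched above, may look as follows (illustrative only; for brevity the transmit power terms are reused on the reciprocal channel instead of being recomputed via \eqref{calculateA}, and the rank and convergence checks are omitted):
\begin{verbatim}
import numpy as np

def max_sinr_alternating(H, A, d, noise_var, n_iter=50):
    """Sketch of the alternating loop of Algorithm 1 (illustrative only).

    H[i, j] is the channel from transmitter j to receiver i and d[i]
    the DoFs desired by user i; `interference_covariance` and `decoder`
    are the helpers sketched above.
    """
    users = sorted(d)
    rng = np.random.default_rng(0)
    Nsc = H[users[0], users[0]].shape[0]
    cplx = lambda m, n: (rng.standard_normal((m, n))
                         + 1j * rng.standard_normal((m, n)))
    P = {i: cplx(Nsc, d[i]) for i in users}      # independent random start
    Q = {i: cplx(Nsc, d[i]) for i in users}
    for _ in range(n_iter):
        for i in users:                          # decoder update (forward)
            D = interference_covariance(i, H, A, noise_var,
                                        [j for j in users if j != i])
            Q[i] = decoder(i, H, P, D, d[i])
        # Reverse direction: Q plays the role of the precoder on the
        # reciprocal channel, and P is updated as its decoder.
        Hrev = {(i, j): H[j, i].conj().T for i in users for j in users}
        for i in users:                          # precoder update (reverse)
            D = interference_covariance(i, Hrev, A, noise_var,
                                        [j for j in users if j != i])
            P[i] = decoder(i, Hrev, Q, D, d[i])
    return P, Q
\end{verbatim}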
The proposed distributed algorithm finds the precoders and decoders in an alternating manner. We start by
finding solutions for $\mathbf{Q}_{[i]}$ at each receiver from \eqref{IAoptimal}, with fixed $\mathbf{P}_{[i]}$, $\mathbf{P}_{[j]}$, $\mathbf{Q}_{[j]}$, $ \forall i \neq j ; \forall i,j \in \mathcal{A} \cup \mathcal{B}$. Precoders $\mathbf{P}_{[i]}$ are found using fixed decoders $\mathbf{Q}_{[i]}$, $\mathbf{Q}_{[j]}$, $\mathbf{P}_{[j]}$, $ \forall i \neq j ; \forall i \in \mathcal{A} \cup \mathcal{B}$, where $j$ is selected based on its node type and location.
In the next stage, we reverse the signal direction: the original transmitter is treated as a receiver, and the original receiver is treated as a transmitter. Consequently, the decoders of the reciprocal channel are found, which are indeed the precoders $\mathbf{P}_{[i]}$ of the original channel in \eqref{IAoptimal}, by fixing the other terms $\mathbf{Q}_{[i]}$, $\mathbf{Q}_{[j]}$ and $\mathbf{P}_{[j]}$. For each precoder and decoder, the rank of the signal space is checked after a solution is obtained. This iteration continues until convergence, which is evaluated by comparing \eqref{IAo1} to a threshold value, or until the iteration count reaches its maximum value.
The constraint \eqref{IAo2} may also be relaxed by setting the matrix dimension $N$ to $d_{[i]}$ for the $i$th user. This condition guarantees that the trivial, useless all-zero solution is avoided.
Consequently, we design an $N_{sc} \times d_{[i]}$ precoder and a $ d_{[i]} \times N_{sc} $ decoder for the $i$th node.
{Note that the precoder and the decoder for both radar and communication subsystems in a colocated node could be more conveniently found by the proposed algorithms. In cases where radar and communication subsystems are colocated and consequently suffer from the same interference, the interference channel matrix estimated by the communication system could easily be re-used for radar precoder and decoder designs. } Furthermore, if noise statistics are the same in both radar and communication subsystems when sharing the same hardware architecture, the interference plus noise covariance matrix is also identical. {It could be shared between the subsystems to reduce calculation time, by using the same memory or optical fibers.}
\section{Simulation Examples}
In this section, we present simulation results to demonstrate the performance of the proposed max-SINR
joint precoder--decoder design algorithm. The simulation is performed using MATLAB 2019b on a desktop with an i7-10700k processor. We consider a four-user interference channel in which three communication users coexist with one radar user that is colocated with one of the communication users. Each node employs a multicarrier signal model with $N_{sc}=8$ subcarriers. We assume that the communication users desire one DoF each and that the
radar desires three DoFs, which is the highest practically achievable number of DoFs for the radar subsystem under~\eqref{IA13}. We also assume that independent zero-mean Gaussian noise with variance $\sigma_{W}^2=1$ is present at each receiver. The power of the payload signal is $\sigma_S^2 =1$ for each communication user. Orthogonal channel coding is employed such that the coding matrix satisfies $\mathbf{C}_{[i]}\mathbf{C}_{[i]}^H = \mathbf{I}$. The far-field point target is at azimuth angle $\theta = 0^\circ$. The total number of samples is 500. The initial estimates of the precoder and decoder are obtained from independent row vectors. One of the three communication users is not interfered with by radar.
Increasing the transmit power at each transmitter also increases the interference observed by the other receivers. The channel matrices are generated randomly according to the block fading channel assumption, where each entry of the channel matrix follows the standard complex Gaussian distribution. Moreover, we assume that the entries of the target response matrix follow the Swerling 2 model with Gaussian-distributed complex amplitudes.
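The channel generation just described can be sketched as follows (an illustrative fragment; the diagonal structure follows the frequency-flat OFDM assumption above, and the function name is hypothetical):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Nsc = 8

def rayleigh_diag(rng, Nsc):
    """Diagonal frequency-flat block-fading channel with CN(0,1) entries."""
    g = (rng.standard_normal(Nsc) + 1j * rng.standard_normal(Nsc)) / np.sqrt(2)
    return np.diag(g)

H_comm = rayleigh_diag(rng, Nsc)     # communication / interference channel draw
H_target = rayleigh_diag(rng, Nsc)   # Swerling 2-like target response, redrawn
                                     # independently from pulse to pulse
\end{verbatim}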
We evaluate the system performance by
the sum of all the SINRs in the coexistence scenario. The performances of the communication and radar subsystems clearly depend heavily on the SINR values at the receivers. Radar performance in the detection task is studied using a ROC curve, which evaluates the performance of a radar detector by plotting the probability of detection (Pd) versus the probability of false alarm (Pfa) for given conditions.
We compare the performance of the proposed method with the switched small singular value space projection (SSSVSP) in~\cite{mahal2017spectral}. SSSVSP is an extension of nullspace-based precoder design in which the nullspace has been expanded to include the subspace spanned by singular vectors. These singular vectors correspond to small singular values that are selected based on a threshold value. However, it still faces the drawback that it cannot handle mutual interference between radar and communication systems. In the following simulations, we compare the proposed algorithm with SSSVSP when interference from radar is projected into communication users' switched small singular value space.
In Figure~\ref{figure3}, we compare the total SINR values from all radio receivers, including communication and radar. For fairness in the comparison of DoFs, the number of transmit and receive antennas in SSSVSP is selected to be eight. Note that the multicarrier interference channel matrix is in general a diagonal matrix.
For the proposed method, the SINR increases as a function of SNR, whereas the SSSVSP design and the design without any precoder or decoder experience more interference, and consequently their SINR values decrease.
This result may be explained by the projection of the radar signal into the nullspace of all communication receivers, which ignores both the interference experienced at the radar receiver and the interference among the communication nodes. Moreover, if the signal powers increase at the transmitters, the interference power at the other receivers increases as well. The SINR of the proposed design increases as a function of SNR because the interference is almost completely eliminated.
We also observe a good total SINR improvement in a medium-SNR regime.
\begin{figure}[H]
\centering
\includegraphics[width=4.2in]{figure1.eps}
\caption{{The total} SINR, which is the sum of SINR values at all receivers, for the proposed design, SSSVSP, and the original signal for coexistence between the radar and communication~network.}
\label{figure3}
\end{figure}
The detection performance of the proposed design and the SSSVSP method are compared using Neyman--Pearson detectors under different false alarm constraints; their ROC curves are plotted in Fig. \ref{figure4}. Different SNR levels are considered. In this simulation, one radar and one communication user coexist, and nonfluctuating coherent as well as Swerling I--IV target models are investigated. All radio transmitters are assumed to be active such that every receiver suffers from interference from the other transmitters. The probability of detection curves are drawn using 500 radar pulses.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{figure2-1.eps}
\caption{{The radar's} {ROC curve comparison between the proposed max-SINR joint precoder--decoder design and SSSVSP design in 0, 10, 20 and 30 dB SNR. Comparison results under various target models are illustrated in above figures. The proposed method provides superior performance in all considered scenarios.}}
\label{figure4}
\end{figure}
Figure~\ref{figure4} shows that the proposed max-SINR joint precoder--decoder design outperforms the SSSVSP design when the radar coexists with one communication user in the medium- and high-SNR regions, namely SNR = 20, 30, and 40 dB. When the SNR is relatively low, i.e., below SNR = 10 dB, the interference plus noise matrix is dominated by the noise power. In this case, the SSSVSP design employed at the communication side projects the communication interference into the alternative signal space, so that the radar performance is slightly better than that of our proposed method. Moreover, one can also observe, by comparing the SSSVSP curves for SNR = 20 dB and SNR = 40 dB, that the detection performance does not increase with the SNR. Therefore, there will always be some interference leakage with the SSSVSP strategy.
In Figure~\ref{figure5}, the difference in radar detection performance between the proposed max-SINR joint precoder--decoder design and SSSVSP is shown. A single pulse, $k=1$, i.e., without pulse compression, and multiple pulses, $k=500$, i.e., with pulse compression, are considered. In this simulation, $N_{sc}$ is set to 16 and the target model is Swerling I. A Neyman--Pearson detector is applied with false alarm constraints $P_{fa} = 10^{-2},P_{fa} = 10^{-4}$, and $P_{fa} = 10^{-6}$. It can be observed that the SSSVSP design reaches its detection performance upper bound in a relatively low SNR region without pulse compression. By increasing the number of pulses, the radar detection performance improves because the radar benefits from coherent signal processing.
In Figure~\ref{figure7}, the SINR performance is studied as a function of the number of users. In this simulation, the DoF of each user is 1. The results show that total system SINR increases as the number of users increases. An additional user may add a useful signal to the system. However, it will cause interference for the other users. The proposed design can successfully remove interference for the entire system. The higher SNR at each receiver could also increase the system SINR. The proposed design avoids performance loss when the number of users increases.
\begin{figure}[H]
\centering
\includegraphics[width=4.5in]{figure3.eps}
\caption{Detection performance difference comparison between proposed max-SINR joint precoder--decoder design and SSSVSP at $P_{fa} = 10^{-2},P_{fa} = 10^{-4}$, and $P_{fa} = 10^{-6}$. A single pulse $k=1$ and multi-pulse $k=500$ are considered in this comparison.}
\label{figure5}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=5in]{Figure7-1.png}
\caption{{System} {total sum SINR performance} as a function of the number of users for the proposed design. The user-desired DoF is 1.}
\label{figure7}
\end{figure}
\section{Conclusions}
This paper considered the problem of interference management and alignment in radar and communication spectrum sharing scenarios. A generalized multicarrier signal model is employed. The model provides an easy way to represent, generate and analyze a multicarrier signal. A max-SINR precoder--decoder joint design based on IA theory is proposed. The design is formulated as a constrained optimization problem. By employing IA theory, the benefit of achieving the total DoF upper bound can be obtained. A distributed and alternating algorithm for finding the precoder and decoder is derived as a solution; it takes advantage of channel reciprocity in a TDD system. The proposed design achieves better interference suppression between communication and radar nodes in comparison to existing precoder designs.
The simulation results demonstrated that our algorithm significantly improves the total SINR for
all radar and communication nodes and provides a significantly higher target detection probability in the radar subsystem under a given false alarm constraint using a Neyman--Pearson detector, while making the communication system almost interference-free. The sum SINR increases as a function of the SNR and of the number of users while avoiding performance degradation.
\authorcontributions{Conceptualization, Y.C. and V.K.; Formal analysis, Y.C.; Funding acquisition, X.J.; Methodology, Y.C. and V.K.; Project administration, Y.C. and X.J.; Resources, Y.C.; Software, Y.C.; Writing---review \& editing, Y.C. and V.K. All authors have read and agreed to the published version of the manuscript.}
\funding{This research was funded by the Chinese Scholarship Council.}
\institutionalreview{Not applicable.}
\informedconsent{Not applicable.}
\dataavailability{Not applicable.}
\acknowledgments{The authors would like to thank Dr. Tuomas Aittomaki for his valuable suggestions for this paper.}
\conflictsofinterest{The authors declare no conflict of interest.}
\newpage
\abbreviations{Abbreviations}{
The following symbols are used in this manuscript:\\
\noindent
\begin{tabular}{@{}lp{11cm}<{\raggedright}}
$\alpha\;\;\;$ &scalar, complex path loss \\
$\mathbf{A}\;\;\;$ &transmitted signal matrix under block fading assumption\\
$\mathcal{A}\;\;\;$ &communication user set\\
$\mathcal{A}_r\;\;\;$ &communication user set that is interfered by radar\\
$\mathcal{A}_c\;\;\;$ &communication user set that is not interfered by radar\\
$\beta \;\;\;$ & baseband subcarrier\\
$\beta_R \;\;\;$& radar baseband subcarrier\\
$\beta_C \;\;\;$& communication baseband subcarrier\\
$\mathbf{B}\;\;\;$& {frequency modulation matrix}\\
$\mathbf{\ddot{B}}\;\;\;$& demodulation matrix\\
$\mathcal{B}\;\;\;$& radar user set\\
$\mathbf{C}\;\;\;$& channel coding matrix\\
$d \;\;\;$ &scalar, number of degrees of freedom\\
$\mathbf{D}\;\;\;$& interference plus noise covariance matrix\\
$h[f_1] \;\;\;$& scalar, channel impulse response received for first subcarrier\\
$\mathbf{H}\;\;\;$ &channel matrix\\
$K \;\;\;$ & total number of communication users\\
$L \;\;\;$& total number of blocks/pulses\\
$M \;\;\;$ & total number of subpulses\\
$N \;\;\;$ & column dimension of precoder matrix, which is user-desired frequency DoFs.\\
$N_{sc}\;\;\;$ & number of transmitted subcarriers\\
$N_{p}\;\;\;$ & column dimension of data matrix $\mathbf{S}$ \\
$\mathbf{P}\;\;\;$& $N_{sc} \times N$-dimensional precoding matrix\\
$\mathbf{Q}\;\;\;$& $N_{sc} \times N$-dimensional decoding matrix\\
$\mathbf{s}(N)\;\;\;$ & data vector for $N$th time slot \\
$\mathbf{S}\;\;\;$& $N_{p} \times M$-dimensional data matrix\\
$T_p \;\;\;$& entire pulse duration\\
$\mathbf{W}\;\;\;$& $N_{sc} \times M$-dimensional {Gaussian noise matrix}\\
$\mathbf{Y_T}\;\;\;$& $N_{sc} \times M$-dimensional transmitted signal matrix \\
$\mathbf{Y_R}\;\;\;$& $N_{sc} \times M$-dimensional received signal matrix \\
$\tau_{T,i} \;\;\;$& scalar, delay between transmitted antenna and target\\
$\tau_{R,i} \;\;\;$& scalar, delay between received antenna and target\\
$\theta\;\;\;$& target's direction of arrival\\
$\bm \Omega \;\;\;$& selection matrix\\
$\mathbf{1} \;\;\;$& {all-ones matrix}\\
\end{tabular}}
\label{sec:intro}
The Cell Network Model of Fracture (CNMF \cite{VILLALOBOS10}) is a
two dimensional statistical model of fracture \cite{alava-2006-55}
inspired by the stress field caused by the drying of the bamboo
\emph{Guadua angustifolia} \cite{Montoya06,Takeuchi2008}. At the
parenchymatous tissue level, bamboos shrink during drying, causing the
detachment of neighboring cells and the appearance of fractures. The
CNMF models this tissue as an hexagonal array of cell elements (each
of them made of six beams), fixed by angular springs and joined by
brittle springs called the junctures (Figure \ref{bamboo.pdf}).
Shrinking forces, due to drying, acting along the elements distort the
structure and cause breaking avalanches of the junctures among cells.
\figura{8cm}{bamboo.pdf}{Cell Network Model of Fracture
(CNMF). \emph{Upper left}, The plane frame element spanning between
the nodes $i$ and $j$, oriented by an angle $\theta$. Each node has
two translational and one rotational degrees of freedom. \emph{Lower
left}. Two contiguous Cells. \emph{Right} Structure of the
CNMF. The hexagons represent the cells and the junctures are
arranged into triangles. Hashing denote fixed boundary
conditions. From \cite{VILLALOBOS10}. Not to scale.}{}
Interestingly enough, when a homogeneous distribution of breaking
thresholds for the junctures and fixed Young moduli are used, the
histogram of avalanche sizes of the CNMF shows power law behavior with
an exponent of $-2.93(8)$ \cite{VILLALOBOS10}. This is, within error
bars, equal to the avalanche size distribution of the random fuse
model, which shows a power law with an exponent of $-3$
\cite{PhysRevE.74.016122,Hansen94}. The analytical solution of both
models has been elusive.
One dimensional fiber bundle systems have provided models for the
thorough study of critical phenomena. Universality, the effect of
damage and the critical exponents have been found analytically (see
\cite{Kun2005,PradhanHansenEtAlFailure09} and references therein).
The logical extension of those models to 2D, the random fuse model,
has made it possible to investigate the fracture properties of
biological materials, as in the case of brittle nacre
(\cite{PhysRevE.72.041919}), describing the toughness of the material
through its microscopical architecture. Beam models similar to the
CNMF have also been used to model fracture in concrete
\cite{PhysRevE.75.066109}.
In the present paper, to characterize the path to global failure, we
study both the distribution of humidity decrements between
consecutive avalanches (the analogue of waiting times for this model),
as well as the fraction of intact fibers as a function of the humidity
decrement. This last quantity behaves as an order parameter for the
system (see \refsec{model}). Moreover, the universality of the CNMF
is explored numerically by characterizing the histogram of avalanche
sizes for two cases: homogeneous distributions of the juncture
breaking thresholds with different widths and Weibull distributions of
different shapes, \refsec{disorder}. Furthermore, in section
\refsec{damage} a damage function is introduced as follows: Each time
a stress threshold is reached, the stiffness is reduced by a constant
factor, until the fiber completely breaks. The main results and
comments are summarized in Sec \refsec{concludingremarks}.
\section{Model}
\label{sec:model}
The CNMF is a 2D statistical model of fracture that resembles the
parenchymatous tissue of the bamboo {\it Guadua angustifolia}. It is
composed of two kinds of structures: cell walls and junctures among
cells. Six cell walls arrange themselves to form hexagons, thanks to
angular springs associated with the rotational degrees of freedom. The
cells are arranged like a honeycomb. The junctures are arranged in sets
of three at the common corners of the cells, modeling the silica
deposits that glue cells together (Figure \ref{bamboo.pdf}). Each
kind has a given fixed Young modulus for all its elements. Junctures
are allowed to break, as a result of the brittle behavior of the
silica, while cell walls are not.
As boundary conditions, all the border nodes are set fixed. The
deformation of the system comes from shrinking forces acting on every
cell wall and proportional to a global humidity loss parameter $\Delta
h$. By means of a Finite Element Method, the resulting forces and
deformations of all the elements are calculated.
The evolution of the system has three stages. \emph{Linear elastic
  shrinking}: local shrinking forces due to humidity losses are
applied to the cell walls. The differences between the local strain at
the junctures and their individual thresholds are calculated. This stage
continues until at least one fiber would suffer a strain surpassing
its threshold. \emph{Drying-induced breaking}: by means of a zero-finding
algorithm, the exact humidity loss causing the first breaking
is found. The broken element is removed from the structure, changing
the stiffness matrix accordingly. \emph{Nonlinear avalanche}: the
breaking of an element calls for a force redistribution over the whole
structure. This redistribution may cause an avalanche of breaks. When
the avalanche ends, the procedure restarts from the first stage.
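To make the driving scheme concrete, the following minimal Python sketch mimics the three stages for a global-load-sharing toy bundle; the load-sharing rule and all variable names are simplifying assumptions of ours, standing in for the full finite-element computation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N = 10000                           # number of junctures
thr = rng.uniform(0.1, 0.6, N)      # homogeneous breaking thresholds
alive = np.ones(N, dtype=bool)

def strain(h, alive):
    # toy load sharing: the shrinking load ~ h is shared by intact junctures
    return h * N / max(alive.sum(), 1)

h = 0.0
avalanches, decrements = [], []
while alive.any():
    # stages 1-2: humidity causing the next breaking (closed form here,
    # in place of the zero-finding step of the full model)
    h_next = thr[alive].min() * alive.sum() / N
    decrements.append(h_next - h)
    h = h_next
    # stage 3: the redistribution may trigger an avalanche of breaks
    size = 0
    while alive.any() and strain(h, alive) >= thr[alive].min() - 1e-12:
        weakest = np.flatnonzero(alive)[np.argmin(thr[alive])]
        alive[weakest] = False
        size += 1
    avalanches.append(size)
\end{verbatim}
The arrays \texttt{decrements} and \texttt{avalanches} collected this way are the toy analogues of the two observables studied below.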
\subsection{Distributions of humidity decrement between consecutive avalanches}
The distribution of humidity decrements between successive avalanches
provides a description of the path to the global failure of the
system. It is the analog of the waiting times between avalanches of
other models of fracture. Fig.~\refig{largsmall.pdf} shows the
normalized histograms of humidity decrements for several values of the
maximum humidity change allowed $\Delta h$. When the breaking process
is driven until the end, the histograms can be fitted to an
exponential, with a fitted humidity constant of $13.5(2)$, which is
an indication of lack of correlation between successive avalanches.
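A fit of this kind takes only a few lines of Python; in the sketch below synthetic exponential data stand in for the measured decrements, so the recovered constant is by construction close to $13.5$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
w = rng.exponential(scale=13.5, size=4000)  # stand-in for measured decrements
counts, edges = np.histogram(w, bins=40, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
ok = counts > 0
slope, _ = np.polyfit(mid[ok], np.log(counts[ok]), 1)
print("fitted humidity constant:", -1.0 / slope)
\end{verbatim}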
\figura{8.5cm}{largsmall.pdf}{Histogram of humidity decrements between consecutive
  avalanches. The parameter that defines a curve
  is the maximum humidity change ($\Delta h$, proportional to cell
  wall strain, shown). The classes (horizontal axis) are the humidity
  decrements, normalized by the mean waiting humidity.}{}{}
\subsection{Fraction of remaining junctures}
Let us consider the one-dimensional global load sharing fiber bundle model
\cite{PradhanHansenEtAlFailure09}. Let $U(\sigma)$ be the fraction of
remaining fibers at a given stress $\sigma$ and $\sigma_c$ the critical stress causing
the global breaking of the system. Then $U(\sigma) - U(\sigma_c)$ behaves as an order
parameter. For the CNMF with homogeneous
breaking thresholds at the juncture elements we obtain an exponential
relaxation in the number of remaining junctures (\refig{varWidthLogY.pdf}).
\figura{8.5cm}{varWidthLogY.pdf}{Number of intact fibers as a function
  of the humidity change for several homogeneous distributions of the
  breaking threshold with different widths (semi-log). \emph{Inset}, linear
  axes. }{}{}
This exponential behavior is independent of the system size. In
\refig{OrderParScaling.pdf}, the fraction of intact
fibers as a function of humidity is shown for different sizes of the
system. The horizontal axis was rescaled by means of a linear fit on
the semi-log data. All histograms show an exponential decay.
\figura{8.5cm}{OrderParScaling.pdf}{Scaled fraction of
  intact fibers as a function of the humidity change (in units of max
  humidity difference) for different system sizes. }{}{}
\section{Universality}
\label{sec:disorder}
Fiber bundle models are universal in the sense that the breaking of the elements follows a power-law distribution of avalanche sizes irrespective of several system characteristics. For the CNMF we studied the distribution of avalanche sizes when the thresholds are generated either from flat distributions of several widths or from Weibull distributions with several characteristic parameters.
Flat distributions of the breaking thresholds, all centered at $0.35 EA$ but with different widths (spanning two orders of magnitude), show the same power-law distribution of avalanche sizes, with slopes around $-3$ (Fig.~\refig{ChangeSize.pdf}). The data show that narrower distributions exhibit larger fluctuations around this power law than wider ones.
\figura{8.5cm}{ChangeSize.pdf}{Histogram of avalanche sizes for
  several widths of the homogeneous threshold distribution centered at
  $0.35 EA$.}{}{}
The probability density function for the Weibull distribution
\cite{Weibull51} is given by: \beq{Weibull}
f(x;\lambda,k) = \left\{ \begin{array}{cc} \frac{k}{\lambda}
  \left(\frac{x}{\lambda}\right)^{k-1}e^{-(x/\lambda)^k} & x\geq0\\ 0
  & x<0
 \end{array}\right. ,
\end{equation}
\noindent where $k$ and $\lambda$ are free parameters. It is commonly
used to describe the breaking thresholds of fibers by fixing
$\lambda$$=$$1$ and changing $k$, which is the main parameter
controlling the distribution shape. When we use this distribution for
the breaking thresholds at the junctures, the histogram of avalanche
sizes also shows a power-law behavior, with an exponent close to $-2.9$
for small values of $k$. For larger values of $k$ the exponential cutoff is
more pronounced (\refig{Weibull1-10.pdf}).
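Thresholds of this form are straightforward to generate and validate numerically; the short sketch below (ours, using the NumPy Weibull sampler) draws a sample and checks it against the cumulative distribution $1-e^{-(x/\lambda)^k}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k, lam = 2.0, 1.0
thr = lam * rng.weibull(k, size=10000)  # breaking thresholds f(x; lambda, k)

x = np.sort(thr)
cdf_emp = np.arange(1, x.size + 1) / x.size
cdf_th = 1.0 - np.exp(-(x / lam) ** k)
print("max CDF deviation:", np.abs(cdf_emp - cdf_th).max())
\end{verbatim}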
\figura{9.cm}{Weibull1-10.pdf}{Histogram of avalanche sizes for
several Weibull distributions of the breaking thresholds, with
values of the shape parameters $k$ between $1$ and $10$. The
  straight line corresponds to a slope of $-2.9$.
}{}{}
The humidity decrements between consecutive avalanches (that is, the
waiting times) are exponentially distributed also when the
thresholds follow a Weibull distribution (\refig{WTWeibullC.pdf}). The
characteristic time for $k$$=$$5$ is $16.1(8)$, close to the one
we gathered for flat distributions of the breaking thresholds.
\figura{8.5cm}{WTWeibullC.pdf}{Histogram of humidity increments
  between successive avalanches for several Weibull distributions of
  the breaking thresholds, with $k$ between $1$ and $10$. The blue
  line corresponds to a fit of the series of shape $k=10$, with slope
  $-2.1$. The green line to that for $k=3$, with slope $-2.4$.
}{}{}
\section{Damage}
\label{sec:damage}
In order to introduce a degradation of the juncture elements we
reduce the Young modulus of a juncture element by a damage factor
$0<a<1$ each time the juncture fails. When the element has suffered a
maximum number of failures $k_{max}$, it is assumed to be broken and
is removed from the structure.
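In code, the bookkeeping of this damage rule amounts to a few lines; the Python sketch below (variable names are ours) tracks the failure count of each juncture and reduces its stiffness until removal:
\begin{verbatim}
import numpy as np

N = 1000                      # number of junctures
a, k_max = 0.1, 4             # damage factor and maximal number of failures
stiff = np.ones(N)            # Young modulus of each juncture (E0 = 1)
fails = np.zeros(N, dtype=int)

def damage(j):
    """Apply one failure event to juncture j."""
    fails[j] += 1
    if fails[j] >= k_max:
        stiff[j] = 0.0        # completely broken: remove from the structure
    else:
        stiff[j] *= a         # reduce the stiffness by the damage factor
\end{verbatim}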
\figura{8.5cm}{CHMxD1.pdf}{Histogram of avalanche sizes for damage
  parameter $a$$=$$0.1$ and several maximal numbers of failures, $k_{max}$. The bolder line (slope $-2.98(7)$) fits for avalanche sizes between 1 and 7, while the thinner line (slope $-2.1(1)$) fits for avalanche sizes between 7 and 60.}{}{}
When a small degradation is introduced ($a$$=$$0.1$), the power law distribution of avalanche sizes seems to exhibit a crossover from an exponent $-3$ to an exponent $-2$ at sizes around 8 (Fig.~\refig{CHMxD1.pdf}). This may indicate that the remaining elasticity helps to sustain the structure. However, more statistics and larger system sizes are required to clarify this point. Even smaller values of $a$ (not shown) cause the maximum humidity loss (and therefore the forces on the elements) to be much larger, creating numerical instabilities that end in poor statistics.
\section{Conclusions}
\label{sec:concludingremarks}
The numerical evidence of this work indicates that the avalanche sizes
for the Cell Network Model of Fracture distribute as a power law with exponent $-3.0$ for any broad distribution of the breaking thresholds, whether they are flat or Weibull distributed.
The distribution of waiting times shows an exponential decay for all
system sizes evaluated and all tested disorder
distributions of breaking thresholds. Even the characteristic times are similar for all of them. In our opinion, this is related to the fact that the
most common failure mode for the system is the softening of the sample by
displacements that would violate the boundary conditions.
\textbf{Acknowledgments:} We thank
\href{http://www.colciencias.gov.co}{\emph{COLCIENCIAS}}
(``Convocatoria Doctorados Nacionales 2008''),
\href{http://www.ceiba.org.co}{\emph{Centro de Estudios
Interdisciplinarios B{\'a}sicos y Aplicados en Complejidad- CeiBA
- Complejidad}} and
\href{http://www.unal.edu.co}{\emph{Universidad Nacional de Colombia}}
for financial support. We also thank Professor Jorge A. Montoya and
Professor Caori P. Takeuchi for enlightening discussions in the field
of Guadua drying.
\bibliographystyle{elsarticle-num}
\section{Introduction}
Negative curves on algebraic surfaces are an object of classical interest.
One of the most prominent achievements of the Italian School of
algebraic geometry was Castelnuovo's
Contractibility Criterion.
\begin{definition}[Negative curve]
We say that a reduced and irreducible curve $C$ on a smooth projective surface is \emph{negative},
if its self-intersection number $C^2$ is less than zero.
\end{definition}
\begin{example}[Exceptional divisor, $(-1)$-curves]
Let $X$ be a smooth projective surface and let $P\in X$ be a closed point.
Let $f:\Bl_PX\to X$ be the blow up of $X$ at the point $P$. Then the exceptional
divisor $E$ of $f$ (i.e., the set of points in $\Bl_PX$ mapped by $f$ to $P$) is a negative
curve. More precisely, $E$ is rational and $E^2=-1$. By a slight abuse of language
we will call such curves simply $(-1)$--curves.
\end{example}
Castelnuovo's result asserts that the converse is also true, see \cite[Theorem V.5.7]{Hartshorne} or
\cite[Theorem III.4.1]{BPV}.
\begin{theorem}[Castelnuovo's Contractibility Criterion]
Let $Y$ be a smooth projective surface defined over an algebraically closed field.
If $C$ is a rational curve with $C^2=-1$, then there exists a
smooth projective surface $X$ and a projective morphism
$f:Y\to X$ contracting $C$ to a smooth point on $X$. In other words, $Y$ is isomorphic
to $\Bl_PX$ for some point $P\in X$.
\end{theorem}
The above result plays a pivotal role in the Enriques-Kodaira classification of surfaces.
Of course, there are other situations in which negative curves on algebraic surfaces appear.
\begin{example}\label{ex: C x C}
Let $C$ be a smooth curve of genus $g(C)\geq 2$. Then the diagonal $\Delta\subset C\times C$
is a negative curve as its self-intersection is $\Delta^2=2-2g$.
\end{example}
It is quite curious that it is in general not known if for a general curve $C$, there are other
negative curves on the surface $C\times C$, see \cite{Kou93}. It is in fact even more interesting
that there is a direct relation between this problem and the famous Nagata Conjecture. This was
observed by Ciliberto and Kouvidakis \cite{CilKou99}.
There is also a connection between negative curves and the Nagata
Conjecture on general blow ups of $\P^2$.
We recall the following conjecture about $(-1)$-curves which in fact
implies the Nagata Conjecture; see \cite[Lemma 2.4]{CHMR13}.
\begin{conjecture}[Weak SHGH Conjecture] \label{(-1)-curves}
Let $f: X\to \P^2$ be the blow up of the projective plane $\P^2$ in
general points $P_1,\ldots,P_s$. If $s\geq 10$, then the only negative curves
on $X$ are the $(-1)$-curves.
\end{conjecture}
On the other hand, it is well known that already a blow up of $\P^2$ in $9$ general points
carries \emph{infinitely} many $(-1)$--curves.
One of the central and widely open problems concerning negative curves on algebraic surfaces
asks whether on a fixed surface negativity is bounded. More precisely, we have
the following conjecture (BNC in short). See \cite{BNC} for an extended introduction to this problem.
\begin{conjecture}[Bounded Negativity Conjecture]\label{bnc}
Let $X$ be a smooth projective surface. Then there exists a number $\tau$ such that
$$C^2\geq \tau$$
for any reduced and irreducible curve $C\subset X$.
\end{conjecture}
If the Conjecture holds on a surface $X$, then we denote by $b(X)$ the largest
number $\tau$ such that the Conjecture holds. It is known (see
\cite[Proposition 5.1]{BNC}) that if
the negativity of reduced and irreducible curves is bounded below,
then the negativity of all reduced curves is also bounded below.
Conjecture \ref{bnc} is known to fail in positive characteristic;
see \cite{Har10,BNC}.
In fact Example \ref{ex: C x C} combined with the action of the Frobenius morphism provides
a counterexample. In characteristic zero, Conjecture \ref{bnc} is
open in general. It is easy to prove BNC in some cases; see Remark
\ref{anti-canonical} for an easy argument when the anti-canonical
divisor of $X$ is nef. However, in many other cases the
conjecture is open. In particular the following question is open and
answering it may lead to a better understanding of Conjecture \ref{bnc}.
\begin{question}\label{que: birational}
Let $X,Y$ be smooth projective surfaces and suppose that $X$ and $Y$
are birational and Conjecture \ref{bnc} holds for $X$. Then does
Conjecture \ref{bnc} hold for $Y$ also?
\end{question}
This is not known even in the simplest case, when one of surfaces is $\P^2$
(where Conjecture \ref{bnc} obviously holds) and the other is a blow up of $\P^2$.
If we blow up general points, then this is governed by Conjecture \ref{(-1)-curves}.
The question is of interest also for special configurations of points in $\P^2$
and we focus our research here on such configurations.
More concretely, we consider some examples of such special rational
surfaces and list all negative curves on them.
In particular, we study
blow ups of $\P^2$ at certain points which lie on elliptic curves.
Our main results classify negative curves on such surfaces; see Theorems
\ref{thm: very general on cubic}, \ref{thm: 3 torsion} and \ref{thm: Fermat}.
As a consequence, we show that Conjecture \ref{bnc} holds for such surfaces.
This recovers some existing results of Harbourne and Miranda \cite{HarMir90}, \cite{Har97TAMS}.
Additionally we compute values of the number $b(X)$ on such surfaces.
\section{Very general points on an irreducible cubic}\label{sec:cubic}
To put our results in Section \ref{sec: Fermat} into perspective,
we recall results on negative curves on blow ups of $\P^2$ at
$s$ very general points on a plane curve of degree $3$.
The geometry of such surfaces was studied by Harbourne in \cite{Har85}.
\begin{theorem}[Points on a cubic curve]\label{thm: very general on cubic}
Let $D$ be an irreducible and reduced plane cubic and let
$P_1,\ldots,P_s$
be smooth points on $D$. Let $f: X \longrightarrow \mathbb{P}^2$ be the blow up at
$P_1,\ldots, P_s$. If $C \subset X$ is any reduced and irreducible curve such that
$C^2 < 0$, then
\begin{itemize}
\item[a)] $C$ is the proper transform of $D$, or
\item[b)] $C$ is a $(-1)$-curve, or
\item[c)] $C$ is a $(-2)$-curve.
\end{itemize}
Moreover, if the points $P_1,\ldots,P_s$ are very general, then only cases a) and b) are possible.
\end{theorem}
\begin{proof}
The first part of the Theorem follows from \cite[Remark III.13]{Har97TAMS} and also from our Remark \ref{anti-canonical}.
The "moreover" part follows from the following abstract argument.
A negative curve on $X$ is either a component of $-K_X$, or a $(-1)$-curve or a $(-2)$-curve.
But a $(-2)$-curve is in $\ker(\Pic(X)\to\Pic^0(-K_X))$, which is $0$ for very general points,
so there are no $(-2)$-curves.
\end{proof}
\begin{corollary}
Let $X$ be a surface as in Theorem \ref{thm: very general on cubic}
with $s>0$ very general points. Then Conjecture \ref{bnc} holds for $X$ and we have
$$b(X)=\min\left\{-1,\; 9-s \right\}.$$
\end{corollary}
\section{Special points on a smooth cubic}\label{sec: Fermat}
In this section, we consider blow ups of $\P^2$ at 3-torsion points of
an elliptic curve as well as the points of intersection of the Fermat
arrangement of lines.
In order to consider these two cases, we deal first with the following numerical lemma which seems quite interesting
in its own right.
\begin{lemma}\label{lem: nice}
Let $m_1,\dots,m_9$ be nonnegative real numbers satisfying the following 12 inequa\-li\-ties:
\begin{gather}\label{assumptions}
m_1+m_2+m_3 \leq 1,\\\label{1 1}
m_4+m_5+m_6 \leq 1,\\
m_7+m_8+m_9 \leq 1,\\
m_1+m_4+m_7 \leq 1,\\
m_2+m_5+m_8 \leq 1,\\
m_3+m_6+m_9 \leq 1,\\
m_1+m_5+m_9 \leq 1,\\
m_2+m_6+m_7 \leq 1,\\\label{1}
m_3+m_4+m_8 \leq 1,\\\label{2}
m_1+m_6+m_8 \leq 1,\\\label{3}
m_2+m_4+m_9 \leq 1,\\\label{4}
m_3+m_5+m_7 \leq 1.
\end{gather}
Then $m_1^2+ \dots + m_9^2 \leq 1$.
\end{lemma}
\begin{proof}
Assume that the biggest number among $m_1,\dots,m_9$ is $m_1=1-m$ for some $0\leq m\leq 1$.
Consider the following four pairs of numbers
$$p_1=(m_2,m_3),\; p_2=(m_4,m_7),\; p_3=(m_9,m_5),\; p_4=(m_6,m_8).$$
These are pairs such that together with $m_1$ they occur in one of the $12$ inequalities.
In each pair one of the numbers is greater than or equal to the other.
Let us call this bigger number a \emph{giant}.
A simple check shows that there are always three pairs, such that their giants
are subject to one of the $12$ inequalities in the Lemma.
Without loss of generality, let $p_1$, $p_2$, $p_3$ be such pairs.
Also without loss of generality, let
$m_2$, $m_4$ and $m_9$ be the giants.
Thus $m_2+m_4+m_9 \leq 1$. Assume that also $m_6$ is a giant.
Inequality $m_2+m_3 \leq m$ implies that
$$m_2^2 + m_3^2 = (m_2+m_3)^2-2m_2m_3 \leq m(m_2+m_3)-2m_2m_3.$$
Observe also that
$$(m_2+m_3)^2-4m_2m_3 \leq m(m_2-m_3).$$
Analogous inequalities hold for pairs $p_2, p_3$ and $p_4$.
Therefore
\begin{gather*}
m_2^2+m_3^2+m_4^2+m_7^2+m_5^2+m_9^2 \leq \\
\leq m(m_2+m_4+m_9+m_3+m_7+m_5) - 2m_2m_3-2m_4m_7-2m_5m_9 \leq \\
\leq m + \big[ m(m_3+m_7+m_5) - 2m_2m_3-2m_4m_7-2m_5m_9 \big].
\end{gather*}
But we have also
\begin{gather*}
m_2^2+m_3^2+m_4^2+m_7^2+m_5^2+m_9^2 =\\
= (m_2+m_3)^2+(m_4+m_7)^2+(m_5+m_9)^2 - 2m_2m_3-2m_4m_7-2m_5m_9 = \\
= (m_2+m_3)^2-4m_2m_3+(m_4+m_7)^2-4m_4m_7+\\ +(m_5+m_9)^2-4m_5m_9 + 2m_2m_3+2m_4m_7+2m_5m_9 \leq \\
\leq m(m_2-m_3) + m(m_4-m_7) + m(m_9-m_5) + 2m_2m_3+2m_4m_7+2m_5m_9 \leq \\
\leq m - \big[ m(m_3+m_7+m_5) - 2m_2m_3-2m_4m_7-2m_5m_9 \big],
\end{gather*}
which obviously gives
$$m_2^2+m_3^2+m_4^2+m_7^2+m_5^2+m_9^2 \leq m.$$
Since
$$m_6^2+m_8^2 \leq m_6^2+m_6m_8 \leq m_6(m_6+m_8) \leq (1-m)m,$$
we get that the sum of all nine squares is bounded by
\begin{equation*}
(1-m)^2 + m + (1-m)m = 1. \qedhere
\end{equation*}
\end{proof}
If we think of numbers $m_1,\ldots,m_9$ as arranged in a $3\times 3$ matrix
$$\left(\begin{array}{ccccc}
m_1 && m_2 && m_3\\
m_4 && m_5 && m_6\\
m_7 && m_8 && m_9
\end{array}\right),$$
then the inequalities in Lemma \ref{lem: nice} are obtained by
considering the horizontal and vertical triples,
as well as the triples determined by the condition that there is exactly one element $m_i$ in every
column and every row of the matrix (so determined by permutation matrices).
Bounding sums of only such triples allows us to bound the sum of squares of all
entries in the matrix. It is natural to wonder if this phenomenon extends to higher
dimensional matrices. One possible extension is formulated as the next question.
\begin{problem}
Let $M=\left(m_{ij}\right)_{i,j=1\ldots k}$ be a matrix whose
entries are non-negative real numbers. Assume that all the horizontal, vertical
and permutational $k$-tuples of entries in the matrix $M$ are bounded by $1$.
Is it true then that the sum of squares of all entries of $M$ is also bounded by $1$?
\end{problem}
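As a quick numerical sanity check (certainly no substitute for a proof), one can rescale random nonnegative matrices onto the constraint boundary and record the largest sum of squares encountered. In our experiments with the Python sketch below the bound $1$ is never exceeded for small $k$:
\begin{verbatim}
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
k = 4
perms = list(permutations(range(k)))
idx = np.arange(k)

best = 0.0
for _ in range(10000):
    M = rng.random((k, k)) ** rng.uniform(1, 6)  # bias toward uneven entries
    bound = max(M.sum(axis=1).max(), M.sum(axis=0).max(),
                max(M[idx, p].sum() for p in perms))
    M /= bound                # make the tightest constraint an equality
    best = max(best, (M * M).sum())
print("largest sum of squares found:", best)
\end{verbatim}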
\subsection{Torsion points}\label{ssec: torsion points}
We now consider a blow up of $\P^2$ at $9$ points which are torsion points of order $3$
on an elliptic curve embedded as a smooth cubic.
\begin{theorem}[$3$--torsion points on an elliptic curve]\label{thm: 3 torsion}
Let $D$ be a smooth plane cubic and let $P_1,\ldots,P_9$ be the flexes of $D$.
Let $f:X\to\P^2$ be the blow up of $\P^2$ at $P_1,\ldots,P_9$.
If $C$ is a negative curve on $X$, then
\begin{itemize}
\item[a)] $C$ is the proper transform of a line passing through two (hence three)
of the points $P_1,\ldots,P_9,$ and $C^2=-2$ or
\item[b)] $C$ is an exceptional divisor of $f$ and $C^2=-1$.
\end{itemize}
\end{theorem}
\proof
It is well known that there is a group law on $D$ such that the flexes are $3$--torsion points.
Since any line passing through
two of the torsion points automatically meets $D$ in a third torsion point, there are altogether
$12$ such lines. The torsion points form a subgroup of $D$ which is isomorphic to $\mathbb{Z}_3\times \mathbb{Z}_3$.
We can pick this isomorphism so that
$$P_1=(0,0),\; P_2=(1,0),\; P_3=(2,0),$$
$$P_4=(0,1),\; P_5=(1,1),\; P_6=(2,1),$$
$$P_7=(0,2),\; P_8=(1,2),\; P_9=(2,2).$$
This implies that the following triples of points are collinear (note that these are exactly the triples of indices
in the inequalities from \eqref{1 1} to \eqref{2}):
$$(P_1,P_2,P_3),\; (P_4,P_5,P_6),\; (P_7,P_8,P_9),\; (P_1, P_4, P_7),$$
$$(P_2,P_5,P_8),\; (P_3,P_6,P_9),\; (P_1,P_5,P_9),\; (P_2, P_6, P_7),$$
$$(P_3,P_4,P_8),\; (P_1,P_6,P_8),\; (P_2,P_4,P_9),\; (P_3, P_5, P_7).$$
Let $C$ be a reduced and irreducible curve on $X$ different from the exceptional divisors of $f$
and the proper transforms of lines through the torsion points. Then $C$ is of the form
$$C=dH-k_1E_1-\ldots-k_9E_9,$$
where $E_1,\ldots,E_9$ are the exceptional divisors of $f$ and
$k_1,\ldots,k_9 \ge 0$ and $d> 0$
is the degree of the image $f(C)$ in $\P^2$.
For $i=1,\ldots, 9$, let $m_i=\frac{k_i}{d}$. Since
$C$ is different from proper transforms of the $12$ lines distinguished above,
taking the intersection product of $C$ with the 12 lines, and dividing by $d$, we obtain exactly the $12$ inequalities in Lemma \ref{lem: nice}.
The conclusion of Lemma \ref{lem: nice} implies then that
$$C^2=d^2\Bigl(1-\sum_{i=1}^9m_i^2\Bigr)\geq 0,$$
which finishes our argument.
\endproof
\begin{corollary}\label{cor:bnc on 3-torsion}
For the surface $X$ in Theorem \ref{thm: 3 torsion} Conjecture \ref{bnc} holds with
$$b(X)=-2.$$
\end{corollary}
\begin{remark}\label{rem: fibrations}
Theorem \ref{thm: 3 torsion} fits in a more general setting of elliptic fibrations. Negative curves
on surfaces $X$ with $h^0(X, -mK_X)\geq 2$ for some $m\geq 2$ have been studied by Harbourne and Miranda in \cite{HarMir90}.
\end{remark}
The observation in Remark \ref{rem: fibrations} allows us to explain the results of Theorem \ref{thm: 3 torsion}
from another point of view. Let $\wtilde{D}$ be the proper transform of $D$. Then it is a member
of the Hesse pencil, see \cite{ArtDol09}; in particular, the linear system $|\wtilde{D}|$ defines
a morphism from $X$ to $\P^1$. The components of reducible fibers are $(-2)$ curves. There are 12 of them and they are proper transforms of lines passing through triples of blown-up points.
The exceptional divisors over these points are the $(-1)$ curves. These are sections of the fibration determined by $\wtilde{D}$.
Clearly Corollary \ref{cor:bnc on 3-torsion} follows also from the adjunction and the fact that $-K_X$ is effective, see Remark \ref{anti-canonical}.
Of course, there is no reason to restrict to $3$--torsion points.
\begin{remark}
With the same approach one can show that for $m\geq 4$ the Bounded Negativity Conjecture holds on the blow up of $\P^2$
at all the $m$--torsion points of an elliptic curve
embedded as a smooth cubic and we have
$$b(X)=9-m^2.$$
\end{remark}
\subsection{Fermat configuration of points}
The $9$ points and $12$ lines considered in subsection \ref{ssec: torsion points} form the famous
Hesse arrangement of lines; see \cite{Hir83}.
Any such arrangement is projectively equivalent to that obtained from the flex points of the Fermat cubic
$x^3+y^3+z^3=0$ and the lines determined by their pairs. Explicitly in coordinates we have then
$$P_1=(1:\varepsilon:0),\; P_2=(1:\varepsilon^2:0),\; P_3=(1:1:0),$$
$$P_4=(1:0:\varepsilon),\; P_5=(1:0:\varepsilon^2),\; P_6=(1:0:1),$$
$$P_7=(0:1:\varepsilon),\; P_8=(0:1:\varepsilon^2),\; P_9=(0:1:1),$$
for the points and
$$x=0,\; y=0,\; z=0,\; x+y+z=0, x+y+\varepsilon z=0,\; x+y+\varepsilon^2 z=0$$
$$x+\varepsilon y+z=0,\; x+\varepsilon^2 y+z=0,\; x+\varepsilon y+\varepsilon z=0,\; x+\varepsilon y+\varepsilon^2 z=0,\; x+\varepsilon^2 y+\varepsilon z=0, x+\varepsilon^2 y+\varepsilon^2 z=0,$$
for the lines, where $\varepsilon$ is a primitive root of unity of order $3$.
Passing to the dual plane, we obtain an arrangement of $9$ lines
defined by the linear factors of the Fermat polynomial
$$(x^3-y^3)(y^3-z^3)(z^3-x^3)=0.$$
These lines intersect in triples in $12$ points, which are dual to the lines of the Hesse arrangement.
The resulting dual Hesse configuration has the type $(9_4, 12_3)$ and it belongs to a much
bigger family of Fermat arrangements; see \cite{Szp19c}. Figure \ref{fig: dual Hesse}
is an attempt to visualize this arrangement (which cannot be drawn in the real plane
due to the famous Sylvester-Gallai Theorem; for instance, see \cite{Mel41}).
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.50\textwidth]{dualhesse.png}
\end{center}
\vskip-35pt
\caption{Fermat configuration of points}
\label{fig: dual Hesse}
\end{figure}
It is convenient to order the $9$ intersection points in the affine part in the following way:
$$
\begin{array}{lll}
Q_1=(\varepsilon: \varepsilon: 1),& Q_2=(1: \varepsilon: 1),& Q_3=(\varepsilon^2: \varepsilon: 1),\\
Q_4=(\varepsilon: 1: 1),& Q_5=(1: 1: 1),& Q_6=(\varepsilon^2: 1: 1),\\
Q_7=(\varepsilon: \varepsilon^2: 1),& Q_8=(1: \varepsilon^2: 1),& Q_9=(\varepsilon^2: \varepsilon^2: 1).
\end{array}
$$
With this notation established, we have the following result.
\begin{theorem}[Fermat points]\label{thm: Fermat}
Let $f:X\to\P^2$ be the blow up of $\P^2$ at $Q_1,\ldots,Q_9$.
If $C$ is a negative curve on $X$, then
\begin{itemize}
\item[a)] $C$ is the proper transform of a line passing through two or three
of the points $Q_1,\ldots,Q_9,$ or
\item[b)] $C$ is a $(-1)$-curve.
\end{itemize}
\end{theorem}
\proof
The proof of Theorem \ref{thm: 3 torsion} works with very few adjustments.
Let us assume, to begin with, that $C$ is a negative curve on $X$,
distinct from the curves
listed in the theorem. Then
$$C=dH-k_1E_1-\ldots-k_9E_9,$$
for some $d>0$ and $k_1,\ldots,k_9\geq 0$. We can also assume that $d$ is the smallest
number for which such a negative curve exists.
As before, we set
$$m_i=\frac{k_i}{d}\;\mbox{ for }\; i=1,\ldots,9.$$
Then the inequalities \eqref{assumptions} to
\eqref{1} follow from the fact that $C$ intersects the $9$ lines in
the arrangement non-negatively.
If one of the remaining inequalities \eqref{2}, \eqref{3} or \eqref{4} fails,
then we perform a standard Cremona transformation based on the points involved
in the failing inequality. For example, if \eqref{2} fails, we make Cremona based
on points $Q_1, Q_6$ and $Q_8$. Note that these points are \emph{not} collinear
in the set-up of our Theorem. Since $C$ is assumed not to be a line through any
two of these points, its image $C'$ under Cremona is a curve of strictly lower degree,
negative on the blow up of $\P^2$ at the $9$ points. The points $Q_1,\ldots,Q_9$
remain unchanged by the Cremona because, as already remarked, all dual Hesse
arrangements are projectively equivalent. Then $C'$ is again a negative curve
on $X$ of degree strictly lower than $d$, which contradicts our
choice of $C$ such that $C \cdot H$ is minimal.
Hence, we can assume that the inequalities \eqref{2}, \eqref{3} and \eqref{4}
are also satisfied. Then we conclude exactly as in the proof of Theorem \ref{thm: 3 torsion}.
\endproof
\begin{remark}\label{non-extremal} The surface $X$ considered in Theorem \ref{thm: Fermat}
is a \textit{non-extremal Jacobian rational elliptic surface} and
contains infinitely many $(-1)$-curves. See \cite{HarMir90} for more
details.
\end{remark}
In fact, we are in a position to identify all these $(-1)$-curves.
Let $L(X,Y)$ denote the line determined by two distinct points $X$ and $Y$.
Let
$$\call=\left\{U_1=L(Q_1,Q_6), U_2=L(Q_1,Q_8), U_3=L(Q_6,Q_8),
V_1=L(Q_2,Q_4),\right.$$$$\left. V_2=L(Q_2,Q_9), V_3=L(Q_4,Q_9),
W_1=L(Q_3,Q_5), W_2=L(Q_3,Q_7), W_3=L(Q_5,Q_7)\right\}$$
be the set of lines determined by pairs of points $Q_i, Q_j$ with
$1\leq i<j\leq 9$ which contain only $2$ points $Q_k$.
These lines can be grouped into three ``triangles'', as indicated
by the letters $U$, $V$ and $W$ used to label the relevant triples.
Vertices of these triangles determine standard Cremona transformations,
which we denote by $\phi_1$ for the $U$-triangle, i.e., points
$Q_1, Q_6, Q_8$ and $\phi_2$ and $\phi_3$ for the $V$ and $W$-triangles respectively.
\begin{corollary}\label{prop: all -1 curves}
Let $C\subset X$ be a $(-1)$-curve. Then either $C\in\call$ or there exists a positive integer $r\geq 1$ and
a sequence of Cremona transformations $\phi=\phi_{i_r}\circ\ldots\circ\phi_{i_1}$ with $i_1,\ldots,i_r\in\left\{1,2,3\right\}$
such that $C$ is the image under $\phi$ of one of the lines in $\call$.
\end{corollary}
\proof
The statement follows directly from the proof of Theorem \ref{thm: Fermat}.
There is an interesting regularity in applying Cremona transformations, which we would like to present additionally.
This is best done by way of an example. Recall that there is the following general rule concerning
changes of degree and multiplicities when applying a Cremona transformation. Let $\phi$ be the standard Cremona
transformation based on a triangle $F, G, H$. Let $C$ be a curve of degree $d$, different from the three lines
$L(F,G)$, $L(F,H)$ and $L(G,H)$ passing through the points $F, G, H$ with multiplicities $m_F, m_G, m_H$.
Let $k=d-m_F-m_G-m_H$.
Then the image curve $C'=\phi(C)$ has degree $d+k$ and multiplicities $m_F+k$, $m_G+k$, $m_H+k$ in the base
points of the reverse Cremona transformation.
In Table \ref{tab: Cremona} we present how the line $L(Q_1,Q_6)$ transforms under the sequence of
Cremona transformations indicated in the first column. If it is possible to perform one of two Cremona transformations, we indicate
the chosen one by writing it in boldface. Of course, it is always possible to choose the Cremona transformation
performed in the last step, but as this leads to nothing new, we ignore this option.
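The rows of Table \ref{tab: Cremona} follow mechanically from this rule, as the short Python sketch below illustrates (ours; points are indexed from $0$, so $Q_i$ corresponds to index $i-1$):
\begin{verbatim}
# base triangles of phi_1, phi_2, phi_3 (0-based indices of the points Q_i)
TRIANGLES = {1: (0, 5, 7), 2: (1, 3, 8), 3: (2, 4, 6)}

def cremona(d, m, t):
    i, j, l = TRIANGLES[t]
    k = d - m[i] - m[j] - m[l]
    m = list(m)
    for p in (i, j, l):
        m[p] += k             # the three multiplicities increase by k
    return d + k, m           # the degree increases by the same k

d, m = 1, [1, 0, 0, 0, 0, 1, 0, 0, 0]   # the line L(Q1, Q6)
for t in (2, 3, 1, 2, 3):
    d, m = cremona(d, m, t)
    print(d, m)               # reproduces the rows of Table 1
\end{verbatim}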
\renewcommand*{\arraystretch}{1.5}
\begin{table}\label{tab: Cremona}
$$
\begin{array}{|c||c||ccc||ccc||ccc|}
\hline
Cremona & \deg & Q_1 & Q_6 & Q_8 & Q_2 & Q_4 & Q_9 & Q_3 & Q_5 & Q_7\\
\hline
\hline
& 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\hline
\boldsymbol{\phi_2} & 2 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0\\
\hline
\phi_3 & 4 & 1 & 1 & 0 & 1 & 1 & 1 & 2 & 2 & 2\\
\hline
\boldsymbol{\phi_1} & 6 & 3 & 3 & 2 & 1 & 1 & 1 & 2 & 2 & 2\\
\hline
\phi_2 & 9 & 3 & 3 & 2 & 4 & 4 & 4 & 2 & 2 & 2\\
\hline
\boldsymbol{\phi_3} & 12 & 3 & 3 & 2 & 4 & 4 & 4 & 5 & 5 & 5\\
\hline
\end{array}$$
\caption{A series of Cremona transformations}
\end{table}
The diagram in Figure \ref{fig: diagram} indicates possible bifurcations at the places where one of two Cremona transformations
can be performed. For simplicity, we put only the degrees of the resulting $(-1)$-curves in the diagram.
\begin{figure}
\begin{tikzpicture}[->,auto,node distance=1.5cm,main node/.style={font=\Large\bfseries}]
%
\node[main node] (C) {1};
\node (CL) [below left = of C] {2};
\node (CR) [below right = of C] {2};
\node (CLC) [below = of CL] {4};
\node (CRC) [below = of CR] {4};
\node (CLCL) [below left = of CLC] {6};
\node (CRCR) [below right = of CRC] {6};
\node (CC) [below right = of CLC] {5};
\node (CLCLC) [below = of CLCL] {9};
\node (CCC) [below = of CC] {8};
\node (CRCRC) [below = of CRCR] {9};
\node (CRCRCR) [below right = of CRCRC] {12};
\node (CCCL) [below left = of CCC] {10};
\node (CCCR) [below right = of CCC] {10};
\node (CLCLCL) [below left = of CLCLC] {12};
\node (CLCLCLC) [below = of CLCLCL] {16};
\node (CCCLC) [below = of CCCL] {14};
\node (CCCRC) [below = of CCCR] {14};
\node (CRCRCRC) [below = of CRCRCR] {16};
\path (C) edge node[above] {$\phi_2\;$} (CL)
(C) edge node[above] {$\;\phi_3$} (CR)
(CL) edge node[left] {$\phi_3$} (CLC)
(CR) edge node {$\phi_2$} (CRC)
(CLC) edge node[above] {$\phi_1\;$} (CLCL)
(CLC) edge node {$\phi_2$} (CC)
(CRC) edge node[above] {$\phi_3\;$} (CC)
(CRC) edge node {$\phi_1$} (CRCR)
(CLCL) edge node[left] {$\phi_2\;$} (CLCLC)
(CC) edge node[left] {$\phi_1\;$} (CCC)
(CRCR) edge node {$\phi_3\;$} (CRCRC)
(CLCLC) edge node[above] {$\phi_3\;$} (CLCLCL)
(CLCLC) edge node {$\phi_1\;$} (CCCL)
(CCC) edge node[above] {$\phi_2\;$} (CCCL)
(CCC) edge node {$\phi_3\;$} (CCCR)
(CRCRC) edge node[above] {$\phi_1\;$} (CCCR)
(CRCRC) edge node {$\phi_2\;$} (CRCRCR)
(CLCLCL) edge node[left] {$\phi_1\;$} (CLCLCLC)
(CCCL) edge node[left] {$\phi_3\;$} (CCCLC)
(CCCR) edge node {$\phi_2\;$} (CCCRC)
(CRCRCR) edge node {$\phi_1\;$} (CRCRCRC);
\end{tikzpicture}
\caption{Bifurcations of Cremona transformations}
\label{fig: diagram}
\end{figure}
\endproof
The diagram in Figure \ref{fig: diagram} seems quite interesting in its own right.
There is a vertical symmetry, and it leads to a scheme of numbers indicated in Table \ref{tab: diagram},
which is somewhat reminiscent of Pascal's triangle.
\renewcommand*{\arraystretch}{1.5}
\begin{table}
$$
\begin{array}{ccccccc}
&&& 1 &&&\\
&& 2 && 2 &&\\
&& 4 && 4 &&\\
& 6 && 5 && 6 &\\
& 9 && 8 && 9 &\\
12 && 10 && 10 && 12\\
16 && 14 && 14 && 16
\end{array}
$$
\caption{Cremona hexal}
\label{tab: diagram}
\end{table}
\begin{problem}
Investigate numerical properties of the Cremona hexal. For example, find a direct formula
for the entry in line $i$ and column $j$.
\end{problem}
\begin{remark}\label{anti-canonical}
If we are interested only in the bounded negativity
property on $X$, then there is a simple proof. Indeed,
if $C \subset X$ is a reduced and irreducible curve, the genus formula
gives
$$1+\frac{C\cdot (C+K_X)}{2}\geq 0.$$
Now, since the anti-canonical divisor on the blow up of $\P^2$
in the $9$ Fermat points is effective, we conclude that $C$ is a
component of $-K_X$ or
$$C^2\geq -2-CK_X\geq -2.$$
\end{remark}
Having classified all the negative curves on the blow up of $\P^2$ at the
9 Fermat points, it is natural to wonder about the negative curves on
blow ups of $\P^2$ arising from the other Fermat configurations.
Note that the argument given in Remark \ref{anti-canonical} is no
longer valid, since $-K_X$ is no longer nef or effective. So it is
interesting to ask whether BNC holds for such surfaces. We pose the following problem.
\begin{problem}\label{pro: Fermat}
For a positive integer $m$, let $Z(m)$ be the set of all points of the form
$$(1:\varepsilon^{\alpha}:\varepsilon^{\beta}),$$
where $\varepsilon$ is a primitive root of unity of order $m$ and $1 \le
\alpha,\beta \le m$.
Let $f_m:X(m)\to\P^2$ be the blow up of $\P^2$ at all the points of $Z(m)$.
Is the negativity bounded on $X(m)$? If so, what is the value of $b(X(m))$?
\end{problem}
We end this note by the following remark which discusses bounded
negativity for blow ups of $\mathbb{P}^2$ at 10 points.
\begin{remark}
Let $X$ denote a blow up of $\mathbb{P}^2$ at 10 points. As mentioned
before, if the blown up points are general, then Conjecture
\ref{(-1)-curves} predicts that the only negative curves on $X$ are
$(-1)$-curves. This is an open question. On the other hand, let us consider a couple of examples
of special points.
Let $X$ be obtained by blowing up the 10 nodes of an irreducible and
reduced rational nodal sextic. Such surfaces are called \textit{Coble
surfaces} (these are smooth rational surfaces $X$ such that
$|-K_X| = \emptyset$, but $|-2K_X| \ne \emptyset$). Then it is known
that BNC holds for $X$. In fact, we have $C^2 \ge -4$ for every
irreducible and reduced curve $C\subset X$; see \cite[Section
3.2]{CD}.
Now let $X$ be the blow up of 10 double points of intersection of 5
general lines in $\mathbb{P}^2$. Then $-K_X$ is a big divisor and by \cite[Theorem
1]{TVV}, $X$ is a \textit{Mori dream space}. For such surfaces, the
submonoid of the Picard group generated by the effective classes
is finitely generated. Hence BNC holds for $X$
(\cite[Proposition I.2.5]{Har10}).
\end{remark}
\textbf{Acknowledgements:}
A part of this work was done when KH visited the
Pedagogical University of Krakow in October 2018. He is grateful to
the university and the department of mathematics for making it a wonderful visit.
This research stay of KH was partially supported by the Simons Foundation
and by the Mathematisches Forschungsinstitut Oberwolfach and he is
grateful to them. The authors thank the referee for making several
useful comments which improved this note. Finally, the authors warmly
thank Brian Harbourne for suggesting corrections and ameliorations
which substantially improved the paper.
\section{Gauge Structure: Fundamental, Emergent, Productive}
The principle of local gauge symmetry is central to our present theories of fundamental interactions. It has been a guide to discovery, not only through its constructive role in explaining the existence and properties of force-mediating particles -- photons, $W$ and $Z$ bosons, color gluons, and the graviton
-- as avatars of curvature, but also through the powerful constraints it imposes on possible interactions among fundamental fields. Yet I think it is fair to say that gauge symmetry appears in relativistic field theory as a rather mysterious and even elusive gift. For example, we are led to the equations of quantum chromodynamics by imposing local $SU(3)$ color gauge symmetry, but the observables of the theory are supposed to be gauge singlets, as are the asymptotic states (confinement). The underlying gauge symmetry thereby vanishes from view, leaving behind a smile drawn in evanescent jets.
In recent years gauge symmetry has been identified in several other physical contexts, wherein it is not an irreducible axiom but rather an emergent consequence of other principles. These include the circle of ideas around Berry's phase in basic quantum mechanics \cite{berry, wz, sw1}, now developed to support many sophisticated applications including topological band theory \cite{bernevig}; the statistical gauge fields of anyon theory \cite{lm, fw, asw}, recently exhibited directly in the quantum Hall effect \cite {anyon_collider, manfra} and being prepared for use in quantum information processing \cite{pan, microsoft}; and in the low-energy theory of quantum spin ice \cite{spin_ice}, among others. This naturally raises the question whether gauge symmetries presently regarded as fundamental might emerge from more primitive structures, too. In any case, such examples bring the aforementioned mystery: Why gauge symmetry? down to earth.
Identification of gauge structure in a new problem is productive, because it brings in powerful conceptual tools -- {\it e. g}., covariant derivative, curvature, holonomy, topology -- that have been sharpened by decades of development and application in physics. Here, the synergy of beauty and truth is tangible.
In this note I will discuss another example of emergent gauge symmetry, in the dynamics of deformable bodies \cite{sw2, montgomery, littlejohn, cabrera}. We would like to calculate how the internal motion of a body affects its external position and orientation. But if the shape of a body changes, its change in position is not unambiguously defined, since different parts move by different amounts. To overcome this problem, we associate with each shape a corresponding ``reference shape'' that has a definite position and orientation. The position and orientation of the body relative to its reference shape is then well-defined, and we can calculate it unambiguously. Of course, the introduction of reference shapes introduces an element of convention into this procedure, because we might have chosen them differently. As we will see, this element of convention is precisely a gauge structure. Moreover the associated gauge field is highly non-trivial, and it appears prominently in the dynamical equation.
\section{Dynamical Equation for Deformable Bodies}
\subsection{Referenced Angular Momentum}
We consider an assembly of point masses $m^j$ whose configuration $x^j$ can change with time.
For each configuration we have a reference shape, as described immediately above. We define the time-dependent rotation $R$ that relates the configuration of the assembly to the corresponding reference shape configuration according to
\begin{equation}\label{shape_to_space}
x^j ~=~ Rs^j
\end{equation}
Then we have a dynamical equation that relates the change of angular momentum $L^\alpha$ to the external torque $\tau^\alpha$ according to
\begin{equation}\label{torque_equation}
\tau^\alpha ~=~ \frac{dL^\alpha}{dt} ~=~ \epsilon_{\alpha\beta\gamma} \, \sum\limits_j \, m^j \, R_{\beta\rho} s^j_\rho \ \frac{d^2}{dt^2} \, (R_{\gamma\sigma}s^j_\sigma)
\end{equation}
Our goal is to work Eqn.\,(\ref{torque_equation}) into a form that contains fewer variables and brings out its gauge symmetry. Thus, in a broad sense, we will construct an effective theory of rotational motion that displays an emergent gauge symmetry. We will first do this in a way that is valid in any number of dimensions (see {\it e. g.}, \cite{higher_D}) and emphasizes conceptual structure. Then we will apply some algebraic tricks special to three dimensions, to make contact with more conventional presentations. In the Appendix, as an exercise and sanity check, we get to the same dynamical equation by brute force algebra.
\subsection{Inertia Tensor and Angular Motion}
The primary object of interest, written now in a form valid in any number of dimensions, is the angular momentum
\begin{equation}
L_{\alpha \beta} ~\equiv~ \sum\limits_j \ m^j \, (\, x^j_\alpha \, {\dot x}^j_\beta - x^j_\beta \, {\dot x}^j_\alpha )
\end{equation}
Bringing in the reference shapes using Eqn.\,(\ref{shape_to_space}), we are led to define two shape-based contributions to $L$ according to
\begin{eqnarray}\label{shape_angular_momentum}
R^{-1}_{\ \alpha \rho} R^{-1}_{\ \beta \sigma} \ L_{\rho \sigma} ~&=&~ \sum\limits_j \, m^j \, (s^j_{\alpha} \, (R^{-1} {\dot R} )_{\beta \gamma} \, s^j_\gamma \, - \, s^j_{\beta} \, (R^{-1} {\dot R} )_{\alpha \gamma} \, s^j_\gamma ) \nonumber \\
~&+&~ \sum\limits_j \ m^j \, (s^j_\alpha \, {\dot s}^j_\beta \, - \, s^j_\beta \, {\dot s}^j_\alpha ) \nonumber \\
~&\equiv&~ M^S_{\alpha \beta} \, + \, L^S_{\alpha \beta}
\end{eqnarray}
Here and throughout we use the superscript $^S$ to indicate objects defined using shape variables. These provide the deformable-body generalization of body fixed co-ordinates.
Now let us analyze and simplify $M^S$. Bringing in the angular velocity
\begin{equation}
R^{-1} \dot R ~\equiv~ \omega
\end{equation}
we have
\begin{equation}\label{momentum_velocity}
M^S_{\alpha \beta} ~=~ \sum\limits_j \, m^j \, ( s^j_{\alpha} \, \omega_{\beta \gamma} \, s^j_\gamma \, - \, s^j_{\beta} \, \omega_{\alpha \gamma} \, s^j_\gamma ) ~=~ I^S_{\alpha \beta; \gamma \eta} \, \omega_{\gamma \eta}
\end{equation}
where
\begin{equation}
I^S_{\alpha \beta; \gamma \eta} ~\equiv~ \frac{1}{2} \bigl( {\tilde I}^S_{\alpha \gamma} \delta_{\beta \eta} - {\tilde I}^S_{\alpha \eta} \delta_{\beta \gamma} - {\tilde I}^S_{\beta \gamma} \delta_{\alpha \eta} + {\tilde I}^S_{\beta \eta} \delta_{\alpha \gamma} \bigr) \end{equation}
with
\begin{equation}
{\tilde I}^S_{\alpha \beta} ~\equiv~ \sum\limits_j \, m^j \, s^j_\alpha \, s^j_\beta
\end{equation}
defines the inertia tensor.
We can assign simple labels $A$ to the antisymmetric pairings $(\alpha, \beta)$. For $D=3$ these labels take three values, for $D=4$ six values, and so forth. In terms of those variables $I_{A;B}$ is a symmetric matrix, $\omega_B$ is a vector, and Eqn.\,(\ref{momentum_velocity}) becomes
\begin{equation}
M^S_A ~=~ I^S_{A;B} \omega_B
\end{equation}
or simply
\begin{equation}
M^S~=~ I^S\, \omega
\end{equation}
For our purposes we also need to know that $I^S$, thus defined, is generally invertible. To see that, consider the expectation value of a ``vector'' $\zeta$. We find
\begin{equation}\label{positivity}
\zeta_{\alpha \beta} \, I^S_{\alpha \beta; \gamma \eta} \, \zeta_{\gamma \eta} ~=~ 2 \sum\limits_j \, m^j (\zeta_{\rho \sigma} s^j_\sigma) (\zeta_{\rho \tau} s^j_\tau)
\end{equation}
This is a sum of non-negative terms, each of which only vanishes when $s^j$ lies in the subspace $\zeta_{\rho \sigma} s^j_\sigma = 0$. For assemblies of $s^j$ in general position the expectation value is positive, and in that case $I_{A;B}$ is a symmetric positive matrix. This implies that it is not only invertible, but even diagonalizable with positive entries. (Note however that the diagonalization process involves rotations in the ``index space''. For $D > 3$ these might not correspond to rotations in the ambient $D$-dimensional space.) Expressions related to Eqn.\,(\ref{positivity}) will occur in our discussion of conservation laws, below, where they represent quantities that are generically positive on physical grounds.
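These statements are easy to probe numerically. The following Python sketch (an illustration of ours) assembles $I^S$ in pair-index form for random masses and positions in $D=4$, and confirms that it is a symmetric, positive matrix:
\begin{verbatim}
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
D, n = 4, 7
m = rng.uniform(0.5, 2.0, n)              # masses
s = rng.standard_normal((n, D))           # reference-shape positions

Itil = np.einsum('j,ja,jb->ab', m, s, s)  # inertia tensor \tilde I^S
pairs = list(combinations(range(D), 2))   # labels A = (alpha, beta)
dlt = np.eye(D)

def I_pair(A, B):
    a, b = A
    g, e = B
    return 0.5 * (Itil[a, g] * dlt[b, e] - Itil[a, e] * dlt[b, g]
                  - Itil[b, g] * dlt[a, e] + Itil[b, e] * dlt[a, g])

IS = np.array([[I_pair(A, B) for B in pairs] for A in pairs])
print(np.allclose(IS, IS.T), np.linalg.eigvalsh(IS).min() > 0)
\end{verbatim}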
\subsection{Gauge Symmetry and Gauge Field}
$L^S$ has the form of an angular momentum in shape space. To bring out its physical significance, however, we need to transcend the element of convention (gauge symmetry!) that enters into its construction. We have the freedom to modify our choice of reference shapes according to
\begin{eqnarray}
s ~&\rightarrow&~ U(s)^{-1}\, s \\
R ~&\rightarrow&~ R\, U(s)
\end{eqnarray}
This choice does not affect the angular momentum $L$ (with no superscript), but it does modify everything else in Eqn.\,(\ref{shape_angular_momentum}). $R$ and $I^S$ change simply homogeneously, but $\omega, M^S$ and $L^S$ are trickier. They bring in inhomogeneous terms that must cancel between $M^S$ and $L^S$.
Indeed, we have
\begin{equation}
\omega ~\rightarrow~ U^{-1} \omega \, U \, - \, U^{-1} \, \dot U
\end{equation}
The required cancellation therefore implies that
\begin{equation}
\Omega dt ~\equiv~ I^{S \, -1} \, L^S \, dt
\end{equation}
transforms as an $so(D)$ gauge potential field on shape space \cite{guich}. (Of course, its transformation law can also be verified directly.)
To bring out the fact that $U$ is a function on shape space, we can introduce co-ordinates $\lambda^\kappa$ on that space, and write
\begin{equation}
s(t) ~\equiv~ s \bigl(\lambda^\kappa(t) \bigr)
\end{equation}
\begin{equation}
\dot U ~=~ \frac{\partial U\bigl( s(\lambda) \bigr) }{\partial \lambda^\kappa} \, \frac{ d \lambda^\kappa}{dt}
\end{equation}
so that
\begin{equation}
\omega dt ~\rightarrow~ U^{-1} \, \omega \, dt \, U \, - \, U^{-1} \frac{\partial U\bigl( s(\lambda) \bigr) }{\partial \lambda^\kappa} \, d \lambda^\kappa
\end{equation}
This gauge potential naturally appears as a one-form. To make contact with conventional physics notation, we would write out components
\begin{equation}
\Omega \, dt~\equiv~ \Omega_\kappa \, d\lambda^\kappa
\end{equation}
This gauge structure brings in the $m^j$ as parameters, but it is dimensionless and purely a function of the shape space geometry. It identifies the effective contribution to the angular velocity due to shape change, as opposed to collective motion. The separation between those two contributions brings in an element of convention, i.e. a choice of gauge, but the total is free of convention, i.e. gauge covariant.
Knowing that $\Omega\, dt$ is a gauge potential, we can use the techniques of gauge theory to bring out its gauge-invariant content. We can calculate field strengths (curvature), Wilson loops (holonomy), and so forth. In the present context, this allows us to
identify unambiguous, quantitative dynamical consequences of shape deformation.
Using $\Omega$ we can write
\begin{equation}
R^{-1}_{\alpha \mu} R^{-1}_{\beta \nu} L_{\mu \nu} ~=~ I^S_{\alpha \beta; \mu \nu} ( \omega_{\mu \nu} + \Omega_{\mu \nu} )
\end{equation}
or more compactly
\begin{equation}
L ~=~ {\cal R} \, I^S (\omega + \Omega)
\end{equation}
\subsection{Dynamical Equation}
It is straightforward to pass from $L$ to the dynamical equation
\begin{eqnarray}\label{dynamical_equation}
\frac{dL_{\alpha\beta}}{dt} ~&=&~ \frac{d}{dt} R_{\alpha\rho} R_{\beta\sigma} I^S_{\rho \sigma; \mu \nu} (\omega_{\mu \nu} + \Omega_{\mu \nu} ) \nonumber \\
~&=&~ R_{\alpha\rho} R_{\beta\sigma} \bigl( \frac{d}{dt} I^S_{\rho \sigma; \mu \nu} (\omega_{\mu \nu} + \Omega_{\mu \nu} ) \, \nonumber \\
~&+&~ \, \omega_{\rho \kappa} I^S_{\kappa \sigma; \mu \nu} (\omega_{\mu \nu} + \Omega_{\mu \nu} ) \, + \, \omega_{\sigma \kappa} I^S_{\rho \kappa; \mu \nu} (\omega_{\mu \nu} + \Omega_{\mu \nu} ) \bigr) \nonumber \\
~&\equiv&~ R_{\alpha\rho} R_{\beta\sigma} \frac{D}{Dt} I^S_{\rho \sigma; \mu \nu} (\omega_{\mu \nu} + \Omega_{\mu \nu} )
\end{eqnarray}
The main virtue of bringing $R$ factors outside the time derivative on the right-hand side, as in the third line above, is that when the left-hand side vanishes we can simply strip them away.
We can write Eqn.\,(\ref{dynamical_equation}) more compactly as
\begin{equation}
{\cal R}^{-1} \frac{dL}{dt} ~=~ \frac{D}{Dt} I^S (\omega + \Omega)
\end{equation}
For some applications it might also be useful to go to a rotating frame, where
\begin{equation}
x^j_{\rm rot.} (t) ~=~ V(t) x^j (t)
\end{equation}
motivates use of the variable
\begin{equation}
R_{\rm rot.} (t) ~\equiv~ V(t) \, R(t) \, V^{-1} (t)
\end{equation}
so that
\begin{equation}
V \, R \, x^j ~=~ R_{\rm rot.} \, x^j_{\rm rot.}
\end{equation}
It is not difficult to translate the dynamical equations into this sort of frame, eliminating $R(t)$ in favor of $V(t)$ and $R_{\rm rot.}$.
\subsection{Three Dimensional Notation}
In three dimensions a special ``vector'' notation for the $A$ indices is possible (and standard). It relies on use of the invariant $\epsilon_{a\mu \nu}$ symbol to eliminate antisymmetric pairs of indices in favor of single indices.
Here I will simply record the definitions and the main results. Let me remark that premature resort to this notation can tend to obscure the conceptual structure exposed above.
With
\begin{equation}
I^S_{ab} ~\equiv~ \frac{1}{2} \epsilon_{a\alpha\beta} \epsilon_{b\rho\sigma} I^S_{\alpha\beta; \rho\sigma} ~=~ \sum\limits_j m^j \, (s^{j \, 2} \delta_{ab} - s^j_a s^j_b)
\end{equation}
\begin{eqnarray}
\omega_a ~&\equiv&~ \frac{1}{2} \epsilon_{a\alpha\beta} \omega_{\alpha\beta} \\
\Omega_a ~&\equiv&~ \frac{1}{2} \epsilon_{a\alpha\beta} \Omega_{\alpha\beta}
\end{eqnarray}
we have
\begin{equation}
R^{-1}_{ab} L^b ~=~ I^S_{ac} (\omega_c + \Omega_c)
\end{equation}
and in an evident notation (suppressing indices)
\begin{eqnarray}
R^{-1} L ~&=&~ I^S (\omega + \Omega) \\
R^{-1} \frac{dL}{dt} ~&=&~ \frac{D}{Dt} I^S (\omega + \Omega) \label{3D_dynamical}
\end{eqnarray}
with
\begin{equation}
\frac{D}{Dt} ~=~ \frac{d}{dt} \, + \, \omega \times
\end{equation}
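As a consistency check on this notation, the $\epsilon$-contraction of the pair-index inertia tensor can be verified numerically against the explicit formula above; a small sketch of ours:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m = rng.uniform(0.5, 2.0, 5)
s = rng.standard_normal((5, 3))

eps = np.zeros((3, 3, 3))                 # Levi-Civita symbol
for (a, b, c), sgn in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                       ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[a, b, c] = sgn

Itil = np.einsum('j,ja,jb->ab', m, s, s)
d = np.eye(3)
Ifull = 0.5 * (np.einsum('ag,be->abge', Itil, d)
               - np.einsum('ae,bg->abge', Itil, d)
               - np.einsum('bg,ae->abge', Itil, d)
               + np.einsum('be,ag->abge', Itil, d))
Ivec = 0.5 * np.einsum('aij,bkl,ijkl->ab', eps, eps, Ifull)
Istd = (m * (s ** 2).sum(axis=1)).sum() * d - Itil
print(np.allclose(Ivec, Istd))            # True
\end{verbatim}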
\subsection{Specializations}
Falling cats or divers can face the challenge of re-orienting their bodies in space despite starting from rest (zero angular momentum) and with no access to external torques. They can address that challenge by changing their shape. By doing so, they induce ${L}^S \neq 0$ and in consequence $\Omega \neq 0$. From
\begin{equation}
R^{-1} { L} ~=~ I^S (\omega + \Omega) ~=~ 0
\end{equation}
we deduce
\begin{equation}\label{cat_equation}
\omega dt ~=~ - \Omega dt ~=~ - {I^S}{}^{-1} L^S dt
\end{equation}
Here we see that motion in physical space reflects the gauge potential in shape space quite directly \cite{sw2}. Indeed, the (ordered) time integral of angular velocity, which gives the referenced rotation in physical space, is equal to the integral of the geometric potential, expressed as a one-form, in shape space.
Several examples illustrating this physical phenomenon have been analyzed in detail \cite{al_student, alba, wisdom, batterman, antti, sw2, montgomery, littlejohn, cabrera}. These examples establish the non-triviality of the shape space gauge structure, even for very simple families of shapes. An interesting feature is the occurrence of topologically non-trivial gauge structures, including vortices and magnetic monopoles. These can occur when the allowed shape space is topologically non-trivial. Indeed, when they occur they indicate that the shape space cannot be augmented to become trivial without encountering configurations where $I^S$ is singular (that is, non-invertible). It should be possible to analyze large classes of these singularities systematically.
It could be interesting, also, to analyze cases where the shape space gauge structure plays a dominant but not exclusive role in the dynamics, with $L$ and $\frac{dL}{dt}$ introduced as small parameters or noise.
At the other extreme, for a rigid body we can fix the standard shape once and for all. In this case $L^S = 0$, and Eqn.~(\ref{3D_dynamical}) reduces to the classic Euler equations for rigid body motion.
In some circumstances a specific choice of gauge suggests itself. When we have a stable ordering of the eigenvalues $I_1, I_2, I_3$ of the inertia tensor, we can choose the eigen-directions to be oriented along the $\hat x, \hat y, \hat z$ directions for the corresponding reference shapes. When the eigenvalues cross, of course, we will need to patch together complementary gauge choices. Alternatively, if we are especially interested in small deformations of a specific shape, we can choose to make the gauge potential and its ``radial'' components along a choice of spokes emanating from that point vanish, in the style of Gaussian normal co-ordinates. (See also below.)
\subsection{Angular Momentum and Energy}
For vanishing external torques, the first line of the dynamical equation Eqn.\,(\ref{dynamical_equation}) allows us to infer, from $L_{\alpha \beta} \frac{dL_{\alpha \beta}}{dt} = 0$, the conservation law
\begin{equation}
(M^S_{\alpha\beta} + L^S_{\alpha\beta})(M^S_{\alpha\beta} + L^S_{\alpha\beta}) ~=~ (\omega + \Omega)_A I^S_{A;B} I^S_{B;C} (\omega + \Omega)_C ~=~ {\rm constant}
\end{equation}
This is the covariant completion of the familiar conservation of $(I \omega)^2$ for untorqued rigid bodies.
The absence of external torques does not imply energy conservation, however. Indeed, in the envisaged applications to cats, divers, autonomous systems and micro-machines shape changes might be accomplished by exerting or dissipating work. It is interesting, nevertheless, to consider how the kinetic energy can be expressed.
First let us appreciate the difficulty. The kinetic energy associated with motion according to Eqn.\,(\ref{shape_to_space}) is
\begin{equation}
E_{\rm kin} ~=~ \frac{1}{2} \sum\limits_j \, m^j {\dot x}^j \cdot {\dot x}^j ~=~ \frac{1}{2} \sum\limits_j \, m^j \bigl( \omega_{\alpha \beta} s^j_\beta \omega_{\alpha \gamma} s^j_\gamma \, + \, \omega_{\alpha \beta} (s^j_\alpha {\dot s}^j_\beta - s^j_\beta {\dot s}^j_\alpha) \, + \, {\dot s}^j \cdot {\dot s}^j \bigr)
\end{equation}
Here the first two terms on the right-hand side match the corresponding terms of the expansion of the covariant completion of the standard expression for rigid bodies
\begin{equation}
\frac{1}{2} I^S_{A;B} (\omega_A + \Omega_A)(\omega_B + \Omega_B) ~=~ \frac{1}{2} I^S_{A;B} \omega_A \omega_B \, + \, \omega_A L^S_A \, + \, \frac{1}{2} I^S_{A;B} \Omega_A \Omega_B
\end{equation}
but the third terms are quite different.
We can choose to fix our gauge so that $\Omega = 0$ at a given shape. {\it In that gauge\/} we have a simple separation of the kinetic energy {\it at that shape\/} into spatial motion and shape space contributions:
\begin{eqnarray}
E_{\rm kin} ~&=&~ \frac{1}{2} I^S_{A;B} \omega_A \omega_B \, + \, E^S_{\rm kin} \nonumber \\
E^S_{\rm kin} ~&\equiv&~ \frac{1}{2} \sum\limits_j \, m^j {\dot s}^j \cdot {\dot s}^j
\end{eqnarray}
This separation cannot be achieved globally, however. Gauge curvature obstructs a clean separation between rotational and deformational energy.
Since the discrepant terms are of second order in the rate of shape change, in the limit of slow shape changes we can neglect them. The term that is linear in $\dot s$ gives a residual correction in the adiabatic limit. Indeed, if we scale the rate of deformation by writing $t = T\tau$ inside the argument of $s(t)$, while holding its functional form fixed, and integrate between fixed shapes $s(a), s(b)$, then we have schematically
\begin{equation}
\int\limits^{s(b)}_{s(a)} \, dt {\dot s} (t) ~\sim~ \int\limits_{s(a)}^{s(b)} ds
\end{equation}
but
\begin{equation}
\int\limits_{s(a)}^{s(b)} \, dt \dot s (t)^2 ~\sim~ \int\limits_{s(a)}^{s(b)} \, T d{\tau} \frac{1}{T^2} (\frac{ds}{d\tau})^2 \, \propto \frac{1}{T} \, \rightarrow \, 0 \ {\rm as} \ T \rightarrow \infty
\end{equation}
\section{Extensions}
\subsection{Blobs, Media, and Swarms}
We have formulated our equations with reference to systems of particles, but of course the method carries over to continuous mass distributions as a limit. Here it is significant that our effective description needs only the inertia tensor and the shape-space angular momentum (and, for energy, the shape-space energy) as input, and is otherwise independent of the details of deformation.
The framework of reference shapes and gauge symmetry, associated above with rotations of a body in space, can be carried over to problems involving the displacement of a deformable body in a medium. The appearance of gauge structure in connection with separation of infinitesimal angular velocity (and simple velocity) into spatial and deformation pieces is quite general, though its dynamical salience will vary from case to case. It plays a central role in the description of self-propulsion at low Reynolds number \cite{sw3} and in biological locomotion \cite{goldman}.
When one has many independently deformable bodies, each will have its own ``internal'' gauge potential. The dynamics of a swarm will bring in interactions that couple the gauge structures, and in some circumstances might energetically favor small space-time gradations of shape. Then we could have an emergent Yang-Mills description of the collective motion.
\subsection{Molecules and Nuclei}
Many features in the spectra of molecules and of nuclei can be interpreted in terms of motions that combine rotation and deformation \cite{spectroscopy}. When the deformations can be considered as small vibrations about a definite shape, we can choose the gauge potential at that shape to vanish. This is accomplished by introduction of ``Eckart co-ordinates'' \cite{eckart}, which separates the motions approximately. Effects of curvature cannot be made to vanish altogether, however. Informed use of gauge theory ideas should allow more systematic and accurate treatment of this and related problems.
\bigskip
{\it Acknowledgement}: I am happy to thank Antti Niemi and Al Shapere for stimulating discussions. This work is supported by the U.S. Department of Energy under grant Contract Number DE-SC0012567, by the European
Research Council under grant 742104, and by the Swedish Research Council under Contract No. 335-2014-7424.
\section{Introduction}
We denote by $\mathbb{Z}$, $\mathbb{Z}_+$, $\mathbb{N}$ and
$\mathbb{C}$ the sets of all integers, nonnegative integers,
positive integers and complex numbers, respectively. All algebras and vector spaces are assumed to be over $\ensuremath{\mathbb C}\xspace$.
For any vector space $V$, we denote by $V^*$ the dual space of $V$,
and for any subset $S$ of some abelian group, we denote $S^\star=S\setminus\{0\}$.
Let $d\ge1$ be a fixed integer throughout this study.
Representation theory of Lie algebras is a rich topic attracting extensive attention from many mathematicians. Classification of simple modules is an important step in the study of a module category
over an algebra.
Let $\mathfrak{g}$ be a Lie algebra with a Cartan subalgebra $\mathfrak{h}$. A $\mathfrak{g}$-module $M$ is called a weight module if the action of
$\mathfrak{h}$ on $M$ is diagonalizable. The other extreme is that a $\mathfrak{g}$-module $M$ is called $U(\mathfrak{h})$-torsion-free if for any nonzero $v\in M$ there is no nonzero $g\in U(\mathfrak{h})$ such that
$gv=0$. In this paper we will show that simple $W_d$-modules that are finitely generated over $U(\mathfrak{h})$ are $U(\mathfrak{h})$-torsion-free.
The classification of simple weight modules for several classes of Lie algebras has been achieved through the efforts of many mathematicians over the years. Here we only mention a few such achievements related to the study of the present paper.
Finite dimensional simple modules for finite-dimensional semisimple
Lie algebras were classified by Cartan in 1913, see \cite{Ca}. The classification of simple Harish-Chandra modules over the Virasoro algebra was completed by Mathieu in 1992, see \cite{M1}.
The classification of simple weight
modules over finite dimensional semisimple Lie algebras with finite-dimensional weight spaces was obtained in 2000,
see \cite{M2}. Besides $\mathfrak{sl}_2$ (and some of its deformations), all simple weight modules were constructed only for the aging algebra \cite{LMZ},
the Schr\"odinger algebra \cite{BL2}, and
the Euclidean algebra \cite{BL1}.
The study of $U(\mathfrak{h})$-torsion-free $\mathfrak{g}$-modules began only a few years ago with the rank-one case, see \cite{N1,N2,TZ1}, where simple $\mathfrak{g}$-modules that are free $U(\mathfrak{h})$-modules of rank one were classified for finite dimensional simple
Lie algebras and for the Witt algebras $W_d$ and $W^+_d$. This category is not an abelian category. We now turn to investigate the category of $\mathfrak{g}$-modules which are finitely generated when
restricted to $U(\mathfrak{h})$. We consider the Lie algebra $W_d$ of vector fields on a $d$-dimensional torus, that is, the derivation Lie algebra of the Laurent
polynomial algebra $A_d=\ensuremath{\mathbb C}\xspace[x_1^{\pm1},x_2^{\pm1},\cdots,
x_d^{\pm1}]$. The algebra $W_d$ is a natural higher rank
generalization of the Virasoro algebra, which has many applications
to different branches of mathematics and physics (see \cite{M2,
L1,L2,L3,L4,L5}) and at the same time a much more complicated
representation theory.
Over the last two decades, the weight representation theory of Witt algebras was extensively
studied by many algebraists and physicists; see for example \cite{B, E1, E2, BMZ, GLZ,L3, L4, L5,LZ,LLZ,
MZ2,Z}. In 1986, Shen defined a class of modules $F^\alpha_b(V)$
over the Witt algebra $W_d$ for $\a\in\ensuremath{\mathbb C}\xspace^d$, $b\in\ensuremath{\mathbb C}\xspace$,
and a simple module $ V$ over the special linear Lie algebra
$\mathfrak{sl}_d$, see \cite{Sh}, which were also given by Larsson in 1992,
see \cite{L3}. In 1996, Eswara Rao determined the necessary and
sufficient conditions for these modules to be irreducible when $V$
is finite dimensional, see \cite{E1, GZ}. Very recently, Billig and Futorny
\cite{BF} proved that simple
$W_d$-modules with finite-dimensional weight spaces are either modules of the highest weight type or simple quotients of the
modules $F^\alpha_b(V)$.
In the present paper, for a $\mathfrak{gl}_d$-module $V$ and an admissible $\widetilde{W}_d$-module $P$, we define a $W_d$-module $\mathcal{F}(P, V)$ which
generalizes the construction of $F^\alpha_b(V)$. Since there exists a natural
algebra homomorphism from $U(\widetilde{W}_d)$ to the Weyl algebra $\mathcal{K}_d$, each $\mathcal{K}_d$-module can be viewed as an admissible $\widetilde{W}_d$-module.
Let $\ensuremath{\mathbb C}\xspace^d$ be the natural $d$-dimensional representation of $\mathfrak{gl}_d$ and let $V(\delta_k,k)$ be its
$k$-th exterior power, $k = 0,\cdots,d$. In the paper \cite{LLZ}, it was shown that when $V$ is a weight module, $\mathcal{F}(P, V)$ is a simple module over
$W_d$ if and only if $V\not\cong V(\delta_k, k)$ for any $k\in \{0, 1,\cdots, d\}$.
For any $k\in \{0, 1,\cdots, d\}$,
there are $W_d$-module homomorphisms
\begin{equation*}\begin{array}{lrcl}
\pi_{k-1}:& \mathcal{F}(P,V(\delta_{k-1},k-1)) & \rightarrow & \mathcal{F}(P, V(\delta_{k},k)),\\
& p\otimes v & \mapsto & \sum_{j=1}^{d} D(e_j,0)p\otimes e_j\wedge v,
\end{array}\end{equation*}
for all $p\in P$ and $v\in V(\delta_{k-1}, k-1)$ where $\mathcal{F}(P,V(\delta_{-1},-1)) =0$.
Let $\tilde \mathfrak{L}_d(P,k)=\text{Ker} \pi_{k}$ and $ \mathfrak{L}_d(P,k)=\text{Im} \, \pi_{k-1}$.
Then from Theorem 3.5 in \cite{LLZ}, the $W_d$-modules $ \mathfrak{L}_d(P,k)$ are simple for $k=1,2,\ldots,d$. Moreover,
$$0\subseteq \mathfrak{L}_d(P,k)\subseteq \tilde \mathfrak{L}_d(P,k)\subseteq \mathcal{F}(P, V(\delta_k,k)).$$
In the present paper, we show that if $M$ is a simple $W_d$-module that is finitely generated over $U(\mathfrak{h})$,
then $M$ is a simple quotient of a $W_d$-module $\mathcal{F}( P, V)$ for a finite dimensional simple $\mathfrak{gl}_d$-module $V$ and a simple $\mathcal{K}_d$-module $P$ that is finitely generated over $U(\mathfrak{h})$.
The paper is organized as follows. In Section 2, we recall the definition of the Witt algebra $W_d$, the extended Witt algebra $\widetilde{W}_d$, the Weyl algebra $\mathcal{K}_d$, and give the construction of the $W_d$-module $\mathcal{F}( P, V)$ and some of its properties (Proposition 2.2 and Theorem 2.3). In Section 3,
we generalize the weighting functor $\mathfrak{W}$ introduced in \cite{N2} for finite dimensional simple Lie algebras to the Witt module category in a slightly different way and discuss some of its properties (Proposition \ref{p3.5}).
In Section 4, we give the classification of simple admissible $\widetilde{W}_d$-modules that are finitely generated over $U(\mathfrak{h})$ (Theorem \ref{main1}), and the classification of simple admissible $\widetilde{W}_d$-modules that are free $U(\mathfrak{h})$-modules of finite rank (Corollary \ref{cor4.8}). In Section 5, using the technique of covering module established in \cite{BF},
we complete the classification of simple $W_d$-modules that are finitely generated over $U(\mathfrak{h})$ (Theorem \ref{thm5.5}). We also show that
$\mathfrak{L}_d(P,i)$ are not free $U(\mathfrak{h})$-modules for $ i=2,3,\ldots,d $. Using this fact, we obtain the classification of simple $W_d$-modules that are free $U(\mathfrak{h})$-modules of finite rank (Corollary \ref{cor5.5}).
In Section 6, we give a description of simple $\mathcal{K}_d$-modules that are finitely generated over $U(\mathfrak{h})$ (Lemma 6.1). We also give an example showing that, for any given positive rank, there exists a simple $W_d$-module that is a free $U(\mathfrak{h})$-module of that rank.
Other main techniques we use in this paper are the Quillen-Suslin Theorem and other results from commutative algebra \cite{L}.
\section{$\ensuremath{W}\xspace_d$-modules from $\mathfrak{sl}_d$-modules}
As usual, ${\mathbb Z}^{d}$ (and other similar notations)
denotes the direct sum of $d$ copies of the additive group ${\mathbb Z}$, and we consider it as the additive group of all column vectors with integer entries. For
any $a=(a_1,\cdots, a_d)^T \in \ensuremath{\mathbb{Z}}\xspace_+^d$ and $n=(n_1,\cdots,n_d)^T
\in\ensuremath{\mathbb C}\xspace^d$, we denote $n^{a}=n_1^{a_1}n_2^{a_2}\cdots n_d^{a_d}$,
where $T$ denotes taking the transpose of a matrix.
Let $\mathfrak{gl}_d$ be the Lie algebra of all $d \times d$ complex matrices, and $\mathfrak{sl}_d$ be the
subalgebra of $\mathfrak{gl}_d$ consisting of all traceless matrices. For $1
\leq i, j \leq d$, we use $E_{ij}$ to denote the matrix with $1$
at the $(i, j)$ entry and zeros elsewhere.
We know that
$$\mathfrak{gl}_d=\sum_{1\leq i, j\leq d}\ensuremath{\mathbb C}\xspace E_{i,j}.$$
Let $\mathfrak{h}=\text{span}\{h_{i}\,|\,1\le i\le d-1\}$ where
$h_i=E_{ii}-E_{i+1,i+1}$.
Let $\Lambda^+=\{\lambda\in\mathfrak{h}^*\,|\,\lambda(h_i)\in\ensuremath{\mathbb{Z}}\xspace_+ \text{ for } i=1,2,...,d-1\}$ be the set of dominant weights with respect to $\mathfrak{h}$. For any
$\psi\in \Lambda^+$, let $V(\psi)$ be the simple $\mathfrak{sl}_d$-module with
highest weight $\psi$. We make $V(\psi)$ into a $\mathfrak{gl}_d$-module by
defining the action of the identity matrix $I$ as some scalar
$b\in\mathbb{C}$. We denote the resulting $\mathfrak{gl}_d$-module as $V(\psi,b)$.
We fix the vector space $\mathbb{C}^d$ of $d\times 1$ matrices.
Denote its standard basis by $\{e_1,e_2,...,e_d\}$. Let
$(\,\cdot\,|\, \cdot\, )$ be the standard symmetric bilinear form on $\mathbb{C}^d$
such that $(u | v)=u^Tv\in\mathbb{C}$.
Define the fundamental weights $\delta_i\in\mathfrak{h}^*$ by
$\delta_i(h_j)=\delta_{i,j}$ for all $i,j=1,2,..., d-1$. It is
well known that the module $V(\delta_1, 1)$ can be realized as the
natural representation of $\mathfrak{gl}_d$ on $\mathbb{C}^d$ (the matrix
product), which we can write as $E_{ji}e_l=\delta_{li}e_j$. In
particular,
\begin{equation}(ru^T)v=(u|v)r,\,\,\forall\,\, u,v,r\in \mathbb{C}^d.\end{equation}
The exterior
product $\bigwedge^k(\mathbb{C}^d)=\mathbb{C}^d\wedge\cdots\wedge
\mathbb{C}^d\ \ (k\ \mbox{times})$ is a $\mathfrak{gl}_d$-module
with the action given by $$X(v_1\wedge\cdots\wedge
v_k)=\sum\limits_{i=1}^k v_1\wedge\cdots v_{i-1}\wedge
Xv_i\cdots\wedge v_k, \,\,\forall \,\, v_i\in \mathbb{C}^d, X\in \mathfrak{gl}_d,$$ and the following
$\mathfrak{gl}_d$-module isomorphism is well known:
\begin{equation}\label{dk}{\bigwedge}^k(\mathbb{C}^d)\cong V(\delta_k,
k),\,\forall\,\, 1\leq k\leq d,\end{equation}
where $V(\delta_d,
d)$ is a $1$-dimensional module.
For convenience of later use, we let $V(\delta_0,
0)$ be the 1-dimensional trivial $\mathfrak{gl}_d$-module. We set $\bigwedge^0(\mathbb{C}^d)=\ensuremath{\mathbb C}\xspace$ and
$v\wedge a=av$ for any $v\in\ensuremath{\mathbb C}\xspace^d, a\in\ensuremath{\mathbb C}\xspace$.
\subsection{ Witt algebra $W_d$}
We denote by $ W_d$ the derivation Lie algebra of the
Laurent polynomial algebra $A_d=\ensuremath{\mathbb C}\xspace[x_1^{\pm1}, \cdots,x_d^{\pm1}]$.
Set $\partial_i=x_i\frac{\partial}{\partial x_i}$ for $i=1,2,\dots,d$ and
$x^r=x_1^{r_1}x_2^{r_2}\cdots x_d^{r_d}$ for $r=(r_1,r_2,\cdots, r_d)^T\in\mathbb{Z}^d$.
For $u=(u_1,\cdots, u_d)^T \in \mathbb {C}^d$ and $r\in \mathbb{Z}^d$, we denote
$$D(u,r)=x^r\sum_{i=1}^du_i\partial_i\in\ensuremath{W}\xspace_d.$$ Then we have the Lie bracket
$$[D(u,r),D(v,s)]=D(w,r+s),\ \forall\ u,v\in \mathbb {C}^d, r,s\in \mathbb {Z}^d,$$
where $w=(u | s)v-(v | r)u$. Note that for any $u,v,\xi,\eta\in
\mathbb{C}^d$, both $uv^T$ and $\xi\eta^T$ are $d\times d$ matrices, and
\begin{equation*}(uv^T)(\xi\eta^T)=(v|\xi)u\eta^T.\end{equation*}
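As a quick sanity check of this bracket, the following sympy sketch (our own test for $d=2$ with sample data; it is not needed for anything that follows) realizes $D(u,r)$ as a differential operator on Laurent polynomials and verifies the relation:
\begin{verbatim}
import sympy as sp

# Check [D(u,r), D(v,s)] = D(w, r+s) with w = (u|s)v - (v|r)u, where
# D(u,r) = x^r * sum_i u_i x_i d/dx_i acts on Laurent polynomials (d = 2).
x1, x2 = sp.symbols('x1 x2', nonzero=True)
x = (x1, x2)

def D(u, r, f):
    return x1**r[0] * x2**r[1] * sum(ui * xi * sp.diff(f, xi)
                                     for ui, xi in zip(u, x))

u, v = (2, -3), (1, 4)
r, s = (1, -2), (-1, 3)
w = [(u[0]*s[0] + u[1]*s[1]) * vi - (v[0]*r[0] + v[1]*r[1]) * ui
     for ui, vi in zip(u, v)]
f = x1**2 / x2 + 5 * x2**3                      # a sample element of A_2
lhs = D(u, r, D(v, s, f)) - D(v, s, D(u, r, f))
rhs = D(w, (r[0] + s[0], r[1] + s[1]), f)
assert sp.simplify(lhs - rhs) == 0
\end{verbatim}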
We know that $\mathfrak{h}=\text{span}\{\partial_1, \partial_2, ... , \partial_d\}$ is the Cartan
subalgebra of $\ensuremath{W}\xspace_d$.
It is obvious that $A_d$ admits a natural $W_d$-module structure:
$$D(u,r)x^s=(u|s)x^{r+s}, \,\,\forall\,\,\,u\in \mathbb {C}^d, r,s\in \mathbb {Z}^d.$$
Using this module structure, we can form the semi-direct product Lie algebra $\widetilde{W}_d=W_d\ltimes A_d$,
called the extended Witt algebra. The intertwining Lie bracket between $W_d$ and $A_d$ is given by the
module action, that is,
$$[D(u,r), x^m]=(u|m)x^{r+m}, \,\,\forall\,\,\,u\in \mathbb {C}^d, r,m\in \mathbb {Z}^d.$$
We will denote by $\mathcal{M}_{W_d}$ the category of $W_d$-modules, and by $\mathcal{M}_{\widetilde{W}_d}$ the category of $\widetilde{W}_d$-modules.
Note that $W_d$ also has a natural module structure over the commutative associative algebra $A_d$:
$$ x^sD(u,r)=D(u,r+s), \ \forall \ u\in \mathbb {C}^d, r,s\in \mathbb {Z}^d.$$
\begin{definition} A $\widetilde{W}_d$-module $P$ is called admissible if the action of the subalgebra $A_d$ is an associative algebra action, i.e.,
$$x^{0}v=v, \ x^{n+r}v= x^r x^nv,\ \forall\ r,n\in \ensuremath{\mathbb{Z}}\xspace^d, v\in P.$$
\end{definition}
An admissible $\widetilde{W}_d$-module was called a $(W_d, A_d)$-module in \cite{D}.
\subsection{$\ensuremath{W}\xspace_d$-modules}
Let $V$ be a $\mathfrak{gl}_d$-module and $P$ be an admissible $\widetilde{W}_d$-module.
Let $\mathcal{F}(P, V)=P\otimes V$. We define the actions of $\ensuremath{W}\xspace_d$ and $A_d$ on $\mathcal{F}(P, V)$ as follows:
\begin{equation}\label{2.1}
D(u,r)(z\otimes y)= D(u,r)z\otimes y+ x^rz \otimes(ru^T) y,
\end{equation}
\begin{equation}\label{2.4}
x^r(z\otimes y)=x^rz\otimes y ,
\end{equation}
where $u\in\mathbb{C}^d$, $r\in\mathbb{Z}^d, z\in P$ and $y\in V$.
We can rewrite (\ref{2.1}) as
\begin{equation}\label{2.5}(x^r\partial_j)(z\otimes y)= (x^r\partial_j)z\otimes y+\sum_{i=1}^dr_i x^r z \otimes(E_{ij} y).\end{equation}
\begin{proposition}\label{p} Let $V, V_1, V_2$ be $\mathfrak{gl}_d$-modules and $P$ be an admissible $\widetilde{W}_d$-module. Then
\begin{itemize}
\item[(a).] $\mathcal{F}(P, V)$ is an admissible $\widetilde{W}_d$-module;
\item[(b).] $\mathcal{F}( \mathcal{F}(P, V_1), V_2)\cong \mathcal{F}(P, V_1\otimes V_2)$.
\end{itemize}
\end{proposition}
\begin{proof} (a). We need to verify that
$$\aligned
D(u,r)D(v, s)(z\otimes y)- D(v, s)D(u,r)(z\otimes y)& = D(w, r+s)(z\otimes y);\\
D(u, r) x^m(z\otimes y)-x^m D(u,r)(z\otimes y) & =(u\mid m)x^{m+r}(z\otimes y),
\endaligned$$
for all $u,v\in\mathbb{C}^d$, $z\in P, y\in V$, $ r,s, m\in\mathbb{Z}^d$, where $w=(u\,|\,s)v-(v\,|\,r)u$.
This is straightforward and we omit the details.
(b). We define the linear map
\begin{equation*}\begin{array}{crcl}
\varphi: & \mathcal{F}( \mathcal{F}(P, V_1), V_2) & \to & \mathcal{F}(P, V_1\otimes V_2),\\
& (z\otimes y_1)\otimes y_2 & \mapsto & z\otimes (y_1\otimes y_2),
\end{array}\end{equation*}
for $z\in P$, $y_1\in V_1$, and $y_2\in V_2$. It is also straightforward to check that
$$\aligned
\varphi\big( D(u,r)((z\otimes y_1)\otimes y_2)\big) & =D(u,r) \varphi\big((z\otimes y_1)\otimes y_2\big);\\
\varphi\big( x^r((z\otimes y_1)\otimes y_2)\big) & = x^r\varphi((z\otimes y_1)\otimes y_2).
\endaligned$$
So $\mathcal{F}( \mathcal{F}(P, V_1), V_2)\cong \mathcal{F}(P, V_1\otimes V_2)$ as admissible $\widetilde{W}_d$-modules.
\end{proof}
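For concreteness, the omitted computation in part (a) can also be tested numerically. The following sketch (our own test data, with $P=A_d$ carrying the natural action recalled above and $V=\ensuremath{\mathbb C}\xspace^d$ the natural $\mathfrak{gl}_d$-module) verifies the first identity in the proof of Proposition \ref{p}(a) on a vector $x^m\otimes y$:
\begin{verbatim}
import numpy as np

# Numerical sketch of the first identity in the proof above, for P = A_d
# with D(u,r) x^m = (u|m) x^{m+r} and V = C^d (our own test data, d = 3).
# Then D(u,r)(x^m (x) y) = (u|m) x^{m+r} (x) y + x^{m+r} (x) (r u^T) y.
d = 3
rng = np.random.default_rng(1)
u, v, y = rng.standard_normal((3, d))
r, s, m = rng.integers(-2, 3, size=(3, d))

def act(u, r, m, y):
    """Apply D(u, r) to x^m (x) y; the image lives at exponent m + r."""
    return m + r, (u @ m) * y + np.outer(r, u) @ y

_, t1 = act(v, s, m, y)
_, lhs1 = act(u, r, m + s, t1)     # D(u,r) D(v,s) (x^m (x) y)
_, t2 = act(u, r, m, y)
_, lhs2 = act(v, s, m + r, t2)     # D(v,s) D(u,r) (x^m (x) y)
w = (u @ s) * v - (v @ r) * u
_, rhs = act(w, r + s, m, y)       # D(w, r+s) (x^m (x) y)
assert np.allclose(lhs1 - lhs2, rhs)
\end{verbatim}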
Recall that the classical Weyl algebra ${\mathcal{K}}_d$ is the unital simple associative algebra $\ensuremath{\mathbb C}\xspace[x_1^{\pm 1},\cdots,x_d^{\pm 1},\partial_{1},\cdots,\partial_{d}]$.
It is worth remarking that any $\mathcal{K}_d$-module can be viewed as an admissible $\widetilde{W}_d$-module
since $W_d\ltimes A_d$ can be regarded as a Lie subalgebra of $\mathcal{K}_d$ spanned by all
$x^r\partial_i$ and $x^r$ with $r\in\ensuremath{\mathbb{Z}}\xspace^d$ and $i=1,\cdots,d$.
Similarly as before, we denote by $\mathcal{M}_{\mathcal{K}_d}$ the category of $\mathcal{K}_d$-modules.
In \cite{LLZ}, we have studied another module structure on $P\otimes V$ defined as follows:
\begin{equation}\label{Action2}
(x^{r-e_j}\partial_{j})\circ (z\otimes y)
=((x^{r-e_j}\partial_{j})z)\otimes y+ \sum_{i=1}^dr_i(x^{r-e_i}z)\otimes E_{ij}(y),
\end{equation}
and $x^r (z\otimes y)=(x^{r}z)\otimes y$
for all $r=(r_1,\cdots,r_d)^T\in \ensuremath{\mathbb{Z}}\xspace^d, z\in P$ and $y\in V.$
Denote this module by $F(P, V)$.
Note that in \cite{LLZ}, the $W_d$-module $F(P,V)$ is defined only when $P$ is a $\mathcal{K}_d$-module;
however, the definition is still valid for an admissible $\widetilde{W}_d$-module.
It is easy to see that $\mathcal{F}(P, V)$ is a weight $\ensuremath{W}\xspace_d$-module if $P$ is a weight $\ensuremath{W}\xspace_d$-module, while $F(P, V)$ is a weight $\ensuremath{W}\xspace_d$-module if both $P$ and $V$ are weight modules. Next we will prove that $\mathcal{F}(P, V)$ and $F(P, V)$ are essentially the same when $V$ is a weight $\mathfrak{gl}_d$-module.
For any $\lambda=(\lambda_1,\cdots,\lambda_d)\in \ensuremath{\mathbb C}\xspace^d$, we have the automorphism $\tilde{\lambda}$ of ${ {\mathcal{K}}}_d$ defined by
$$\tilde{\lambda}(x^{\a})=x^{\a}, \quad \tilde{\lambda}(\partial_j)=\partial_j- {\lambda}_j,\ \forall\ j=1,2,\ldots,d.$$
Then for any module $P$ over the associative algebra ${{\mathcal{K}}}_d$, we have the new module $P^{\tilde{\lambda}}=P$ with the action
$$y \circ_{\lambda} p={\tilde{\lambda}}(y) p,\ \forall\ y\in {{\mathcal{K}}}_d,\ p\in P^{\tilde{\lambda}}.$$
If $V$ is a simple weight $\mathfrak{gl}_d$-module, there is a $\lambda\in\ensuremath{\mathbb C}\xspace^d$ such that
\begin{equation}\label{weightset}
V=\bigoplus_{\mu\in \ensuremath{\mathbb{Z}}\xspace^d}V_{\lambda+\mu},
\end{equation}
where $V_{\lambda+\mu}=\{ v\in V\mid E_{ii}v=(\lambda_i+\mu_i) v, \ \forall\ i=1, 2, \cdots, d\}$.
We will need the following theorem in the next sections.
\begin{theorem} Let $V$ be a simple weight module over $\mathfrak{gl}_d$ with decomposition
(\ref{weightset}) for some $\lambda\in\ensuremath{\mathbb C}\xspace^d$, and let $P$ be a module over the associative algebra ${ {\mathcal{K}}}_d$.
Then the linear map
$$\aligned
\Phi: \mathcal{F}(P, V) & \rightarrow& F(P^{\tilde{\lambda}}, V)\hskip5pt&\\
p \otimes v_\mu& \mapsto& x^{-\mu}p \otimes v_\mu,&\quad\forall\ v_\mu\in V_{\lambda+\mu},\ p\in P
\endaligned$$
is a $W_d$-module isomorphism.
\end{theorem}
\begin{proof} Clearly $\Phi$ is a bijection. Moreover,
for any $r\in\ensuremath{\mathbb{Z}}\xspace^d, j=1,\cdots,d$, $p\in P$ and $v_\mu\in V_{\lambda+\mu}$, we can check that
$$\aligned
&(x^{r-e_j}\partial_j)\circ\Phi(p \otimes v_\mu)=(x^{r-e_j} \partial_j)\circ (x^{-\mu} p\otimes v_\mu)\\
= & x^{r-e_j} (\partial_j-\lambda_j) x^{-\mu} p\otimes v_\mu+\sum_{i=1}^d r_ix^{r-\mu-e_i}p\otimes E_{ij}v_\mu\\
= & x^{r-e_j-\mu}(\partial_j-\lambda_j-\mu_j)p\otimes v_\mu+\sum_{i=1}^d r_ix^{r-\mu-e_i}p\otimes E_{ij}v_\mu\\
= & x^{r-e_j-\mu}\partial_jp\otimes v_\mu-x^{r-e_j-\mu}p\otimes E_{jj}v_\mu+\sum_{i=1}^dr_ix^{r-\mu-e_i}p\otimes E_{ij}v_\mu\\
= & x^{r-e_j-\mu}\partial_jp\otimes v_\mu+\sum_{i=1}^dx^{r-\mu-e_i}p\otimes (r_i-\delta_{ij})E_{ij}v_\mu\\
= & \Phi \big((x^{r-e_j} \partial_j)(p \otimes v_\mu)\big),
\endaligned$$
where in the fourth and sixth equalities, we have used the facts
$E_{jj}v_\mu=(\lambda_j+\mu_j) v_\mu$ and $E_{ij}v_\mu\in V_{\lambda+\mu+e_i-e_j}$ respectively.
So $ \Phi$ is a $W_d$-module isomorphism.
\end{proof}
For each $\a\in \ensuremath{\mathbb C}\xspace^d$ and $a\in\ensuremath{\mathbb C}\xspace$, there is an admissible $\widetilde{W}_d$-module structure on $A_d$ as follows:
$$D(u,r) x^n=(u\mid n+\a-ar) x^{n+r}, \ x^r x^n=x^{n+r}.$$ We denote this module over $\widetilde{W}_d$ (or $ {W}_d$) by
$A_d(\a, a)$. Let $V$ be a simple $\mathfrak{gl}_d$-module on which the identity matrix acts as a
complex scalar $b$.
By (\ref{2.1}), the action of $\ensuremath{W}\xspace_d$ on $\mathcal{F}(A_d(\a, a), V)=A_d(\a,a) \otimes V$ is defined by
$$D(u,r) ( x^n\otimes v)=(u\mid \a+n-ar) x^{n+r}\otimes v+ x^{n+r}\otimes(ru^T)v,$$
where $u\in\ensuremath{\mathbb C}\xspace^d,r,n\in\ensuremath{\mathbb{Z}}\xspace^d, v\in V$. In this case, $\mathcal{F}(A_d(\a,a), V)$ is a weight module over $\ensuremath{W}\xspace_d$ which was studied by many authors, see \cite{BF, E1, E2,GZ, L1, L2, L3, L4, L5, Sh}.
It was shown that $\mathcal{F}(A_d(\a, 0), V)$ is a reducible module over $W_d$
if and only if $V$ is isomorphic to the simple finite dimensional module
whose highest weight is a fundamental weight $\delta_k$ and $b=k$,
where $k\in\ensuremath{\mathbb{Z}}\xspace$ with $1\leq k\leq d-1$, or $\dim V=1$, $\a\in\ensuremath{\mathbb{Z}}\xspace^d$ and $b\in\{0, d\}$, see \cite{LZ}.
Recently Billig and Futorny \cite{BF} showed that any irreducible cuspidal $\ensuremath{W}\xspace_d$-module is isomorphic
to some irreducible subquotient
of $\mathcal{F}( A_d(\a, 0), V)$ for some finite dimensional simple $\mathfrak{gl}_d$-module $V$.
Since there are a lot of known simple $\mathfrak{gl}_d$-modules $V$ and simple admissible $\widetilde{W}_d$-modules $P$, (see \cite{N1, N2, TZ2} and the references therein), we can actually obtain a lot of new $\ensuremath{W}\xspace_d$-modules from the above construction of $\mathcal{F}(P, V)$. In the next sections we will study one such class of $\ensuremath{W}\xspace_d$-modules.
\section{The weighting functor $\mathfrak{W}$}
We will apply the weighting functor $\mathfrak{W}$ introduced in \cite{N2} to the module categories $\mathcal{M}_{W_d}$, $\mathcal{M}_{\widetilde{W}_d}$ and $\mathcal{M}_{\mathcal{K}_d}$, which will be widely used in the next sections.
For $\a\in\ensuremath{\mathbb C}\xspace^d$, let $I_\a$ be the maximal ideal of $U(\mathfrak{h})$ generated by $$\partial_1-\a_1,\dots, \partial_d-\a_d.$$
For a $\ensuremath{W}\xspace_d$-module $P$ and $\a\in\ensuremath{\mathbb C}\xspace^d$, set $P_{\a}:= P/I_{\a}P$.
Denote $$\mathfrak{W}^{(\a)}(P):=\bigoplus_{n\in\ensuremath{\mathbb{Z}}\xspace^d}( P_{n+\a}\otimes x^n).$$
Since the module structures of $\mathfrak{W}^{(\a)}(P)$ for distinct $\a$ are similar,
we study $\mathfrak{W}^{(0)}(P)$ in the rest of this section and denote it by $\mathfrak{W}(P)$ for short.
The general construction will be used in Section 5.
By Proposition 8 in \cite{N2}, we have the following construction.
\begin{proposition} The vector space $\mathfrak{W}(P)$ becomes a ${\ensuremath{W}\xspace}_d$-module under the following action:
\begin{equation}\label{3.3}
D(u,r)\cdot((v+I_{n}P)\otimes x^n):= (D(u,r)v+I_{n+r}P)\otimes x^{n+r}.
\end{equation}
Moreover, if $P$ is an admissible $\widetilde{W}_d$-module, then $\mathfrak{W}(P)$ becomes an admissible $\widetilde{W}_d$-module via
\begin{equation}\label{3.4}
x^r\cdot((v+I_{n}P)\otimes x^{n}):= (x^rv+I_{n+r}P)\otimes x^{n+r},
\end{equation}
where $ u\in\ensuremath{\mathbb C}\xspace^d,n, r\in\ensuremath{\mathbb{Z}}\xspace^d$.
\end{proposition}
In many cases, the $\ensuremath{W}\xspace_d$-module $\mathfrak{W}(P)$ is $0$. For example, if $P$ is a simple weight $W_d$-module with a weight not in $\ensuremath{\mathbb{Z}}\xspace^d$, one can easily see that $\mathfrak{W}(P)=0$. We also note that $\mathfrak{W}(P)=P$ if
$P$ is a simple weight $W_d$-module with a weight in $\ensuremath{\mathbb{Z}}\xspace^d$.
\begin{remark} The $\ensuremath{W}\xspace_d$-module $\mathfrak{W}(P)$ is always a weight module since
$$D(u,0)\cdot((v+I_{n}P)\otimes x^n)=(u\mid n)(v+I_{n}P)\otimes x^{n},$$
for all $v\in P$, and
$P_{n}\otimes x^n$ is a weight space for each $n\in\ensuremath{\mathbb{Z}}\xspace^d$.
\end{remark}
The action of the weighting functor $\mathfrak{W}$ on a $W_d$-module homomorphism $f:P_1\to P_2$ is as follows
$$ \aligned
\mathfrak{W}(f):\quad \mathfrak{W}(P_1) & \to \mathfrak{W}(P_2),\\
(v+I_nP_1)\otimes x^n & \mapsto (f(v)+I_nP_2)\otimes x^n,\ \forall\ v\in P_1, n\in\ensuremath{\mathbb{Z}}\xspace^d.
\endaligned$$
The following properties are easy to verify.
\begin{lemma}\label{morphism}
Let $P_1, P_2\in \mathcal{M}_{W_d}$ and $f:P_1\to P_2$ be a $W_d$-module homomorphism.
\begin{itemize}
\item[(a).] If $f$ is onto, so is $\mathfrak{W}(f)$;
\item[(b).] If $f$ is an isomorphism, so is $\mathfrak{W}(f)$.
\end{itemize}
\end{lemma}
\begin{remark} Note however, we do not have similar result for monomorphisms,
that is, injectivity of $f$ does not necessarily imply the injectivity of $\mathfrak{W}(f)$.
\end{remark}
\begin{proposition}\label{p3.5}
For any admissible $\widetilde{W}_d$-module $P$ and any $\mathfrak{gl}_d$-module $V$,
we have that $\mathfrak{W}(\mathcal{F}(P, V))\cong \mathcal{F}( \mathfrak{W}(P), V)$ as admissible $\widetilde{W}_d$-modules.
\end{proposition}
\begin{proof} For $n\in\ensuremath{\mathbb C}\xspace^d$, we have
$$\aligned
\mathcal{F}(P, V)_n= & \mathcal{F}(P, V)/I_n\mathcal{F}(P, V)=(P\otimes V)/((I_n P)\otimes V)\\
= & (P/I_n P)\otimes V=P_n\otimes V.
\endaligned$$
So $\mathfrak{W}(\mathcal{F}(P, V))=\bigoplus_{n\in\ensuremath{\mathbb{Z}}\xspace^d}(P_n\otimes V)\otimes x^n$.
Consider the linear map $\varphi: \mathfrak{W}(\mathcal{F}(P, V))\to \mathcal{F}( \mathfrak{W}(P), V)$ defined by
$$\varphi((y_n\otimes v)\otimes x^n)=(y_n\otimes x^n)\otimes v$$
for any $n\in\ensuremath{\mathbb{Z}}\xspace^d, y_n\in P_n, v\in V$. One can easily verify that $\varphi$ is an admissible $\widetilde{W}_d$-module isomorphism.
\end{proof}
Let us first recall the non-weight $\ensuremath{W}\xspace_d$-modules
$\Omega(\lambda, a)$ from \cite{TZ1} for any $a\in\ensuremath{\mathbb C}\xspace$ and $\lambda=(\lambda_1,\lambda_2,\dots,\lambda_d)^T\in (\ensuremath{\mathbb C}\xspace^\star)^d$.
As a vector space, $\Omega(\lambda,a) =\ensuremath{\mathbb C}\xspace[t_1,\dots,t_d]$,
the polynomial associative algebra over $\ensuremath{\mathbb C}\xspace$ in the commuting indeterminates $t_1,\dots,t_d$.
For simplicity, if $f(t_1,t_2,...,t_d)\in \ensuremath{\mathbb C}\xspace[t_1,\dots,t_d]$,
$r=(r_1,\cdots, r_d)^T\in \ensuremath{\mathbb C}\xspace^d$ and $u\in\ensuremath{\mathbb C}\xspace^d$, denote
$$\aligned
f(t-r):&= f(t_1-r_1,\cdots,t_d-r_d),\\
(u\mid t+r):&=\sum_{i=1}^d u_i(t_i+r_i).
\endaligned$$
The action of $\ensuremath{W}\xspace_d$ and $A_d$ on $\Omega(\lambda,a)$ is defined as follows:
\begin{equation}\label{2.3}\begin{split}
D(u,r) \cdot f(t)& =\lambda^r(u\mid t-ar)f(t-r)\\
x^r\cdot f(t)& =\lambda^rf(t-r),
\end{split}\end{equation}where $ u\in\ensuremath{\mathbb C}\xspace^d,r\in\ensuremath{\mathbb{Z}}\xspace^d$ and $f(t)\in \ensuremath{\mathbb C}\xspace[t_1,\dots,t_d]$. It is easy to see that $\Omega(\lambda,a )$ is an admissible $\widetilde{W}_d$-module. It is actually a $\mathcal{K}_d$-module if $a=1$.
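The following sympy sketch (our own verification for $d=2$, with a symbolic parameter $a$ and sample data; it is not used elsewhere) confirms that the action (\ref{2.3}) respects the defining bracket of $W_d$:
\begin{verbatim}
import sympy as sp

# Check that D(u,r) f = lambda^r (u | t - a r) f(t - r) satisfies
# [D(u,r), D(v,s)] = D(w, r+s), w = (u|s)v - (v|r)u, on Omega(lambda, a).
t = sp.symbols('t1 t2')
lam = sp.symbols('l1 l2', nonzero=True)
a = sp.Symbol('a')

def pair(x, y):                       # the bilinear form (x | y)
    return sum(xi * yi for xi, yi in zip(x, y))

def D(u, r, f):
    shift = {ti: ti - ri for ti, ri in zip(t, r)}
    return (lam[0]**r[0] * lam[1]**r[1]
            * pair(u, [ti - a * ri for ti, ri in zip(t, r)])
            * f.subs(shift, simultaneous=True))

u, v, r, s = (2, -1), (3, 5), (1, -2), (0, 4)
w = [pair(u, s) * vi - pair(v, r) * ui for ui, vi in zip(u, v)]
f = t[0]**2 * t[1] + 7
lhs = D(u, r, D(v, s, f)) - D(v, s, D(u, r, f))
rhs = D(w, [ri + si for ri, si in zip(r, s)], f)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
\end{verbatim}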
\begin{example} Consider the weight $\widetilde{W}_d$-module
$\mathfrak{W}(\Omega(\lambda,a ))$ for any $a\in\ensuremath{\mathbb C}\xspace$ and $\lambda=(\lambda_1,\dots,\lambda_d)\in (\ensuremath{\mathbb C}\xspace^\star)^d$. Since
$$\aligned D(u, r)(1+I_n\Omega(\lambda,a ))=&\lambda^r(u\mid t-ar)+I_{n+r}\Omega(\lambda,a )
\\
=&\lambda^r(u\mid n+r-ar)+I_{n+r}\Omega(\lambda,a ),\endaligned$$
where $u\in\ensuremath{\mathbb C}\xspace^d, r\in\ensuremath{\mathbb{Z}}\xspace^d$, we see that
$\mathfrak{W}(\Omega(\lambda,a ))$ is isomorphic to the module $A_d(0,a-1 )$.
\end{example}
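The rescaling underlying this identification can be checked numerically: with the basis $y_n=\lambda^{n}\big((1+I_n\Omega(\lambda,a))\otimes x^n\big)$, the factor $\lambda^r$ disappears and the coefficient becomes exactly that of $A_d(0,a-1)$. A small sketch of our own:
\begin{verbatim}
import numpy as np

# Numerical sketch of the example (our own check, d = 2): after rescaling
# the basis by y_n = lambda^n (1 + I_n Omega) (x) x^n, the coefficient
# lambda^r (u | n + (1-a) r) becomes the A_d(0, a-1) action.
rng = np.random.default_rng(3)
d = 2
lam = rng.uniform(0.5, 2.0, size=d)
a = rng.standard_normal()
u = rng.standard_normal(d)
n, r = rng.integers(-3, 4, size=(2, d))
coeff = np.prod(lam**r) * (u @ (n + r - a * r))   # action on 1 + I_n Omega
rescaled = coeff * np.prod(lam**n) / np.prod(lam**(n + r))
target = u @ (n - (a - 1) * r)                    # A_d(0, a-1) action
assert np.isclose(rescaled, target)
\end{verbatim}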
\section{Simple admissible $\widetilde{W}_d$-modules that are finitely generated $U(\mathfrak{h})$-modules}
In this section we determine the category $\widetilde{\mathcal{H}}$ consisting of admissible $\widetilde{W}_d$-modules that are finitely generated $U(\mathfrak{h})$-modules.
Let $V$ be a finite dimensional $\mathfrak{gl}_d$-module, and $P$ a $\mathcal{K}_d$-module that is a finitely generated $U(\mathfrak{h})$-module. Then we have the admissible $\widetilde{W}_d$-module $\mathcal{F}(P, V)$ that is a finitely generated $U(\mathfrak{h})$-module.
We will show in this section that simple modules in
$\widetilde{\mathcal{H}}$ are exactly the $\widetilde{W}_d$-modules $\mathcal{F}(P, V)$ for simple $\mathcal{K}_d$-modules $P$ that are finitely generated $U(\mathfrak{h})$-modules, and finite dimensional simple $\mathfrak{gl}_d$-modules $V$.
Fix any nonzero $M\in\widetilde{\mathcal{H}}$.
Denote $M'=\{v\in M|{\rm ann}_{U(\mathfrak{h})}(v)\ne 0\}$. It is easy to see that $M'$ is a $\widetilde{W}_d$-submodule of $M$.
\begin{lemma}\label{torsion}
The admissible $\widetilde{W}_d$-module $M$ is torsion-free over $U(\mathfrak{h})$.
\end{lemma}
\begin{proof} Since $U(\mathfrak{h})=\ensuremath{\mathbb C}\xspace[\partial_1,\cdots,\partial_d]$ is Noetherian, $M'$ is a finitely generated $U(\mathfrak{h})$-module. We see that $0\ne I= {\rm ann}_{U(\mathfrak{h})}(M')$ is an ideal of $U(\mathfrak{h})$. Clearly, for any $\alpha\in\ensuremath{\mathbb{Z}}\xspace^d$, we have $x^{\a}M'\subset M'$, and the linear maps
$$x^\a:M'\to M', \,\,\, x^{-\alpha}: M'\to M'$$
are inverses of each other.
We deduce that $M'=x^{\a}M'$ for all $\a\in \ensuremath{\mathbb{Z}}\xspace^d$. We see that, $f(\partial)=f(\partial_1,\ldots,\partial_d)\in I$ if and only if $f(\partial-\a)\in I$, for all $\a\in \ensuremath{\mathbb{Z}}\xspace^d$, which implies that $I=U(\mathfrak{h})$, i.e. $M'=0$. Hence $M$ is $U(\mathfrak{h})$-torsion free.
\end{proof}
Now we see that any nonzero module in $\widetilde{\mathcal{H}}$ is
finitely generated and torsion free over $U(\mathfrak{h})$.
Before continuing, we give a simple property of such modules.
\begin{lemma}\label{cap I_iP}
Let $P$ be a finitely generated torsion-free module over $U(\mathfrak{h})$, and let $\{I_i, i\in S\}$
be a family of ideals of $U(\mathfrak{h})$. If $\bigcap_{i\in S} I_i=0$, then $\bigcap_{i\in S} I_iP=0$.
\end{lemma}
\begin{proof}
Since $U(\mathfrak{h})$ is an integral domain, by a well-known result in Commutative Algebra,
$P$ can be imbedded in a free $U(\mathfrak{h})$-module $N$ of finite rank. The lemma follows easily from the fact $\bigcap_{i\in S} I_iN=0$.
\end{proof}
We go on to determine the structures of simple modules in $\widetilde{\mathcal{H}}$.
First we recall the Lie algebra $\mathcal{T}$ from \cite{E2}.
Let $U$ be the universal enveloping algebra of $\widetilde{W}_d$. Let $\mathcal{I}$ be the two-sided ideal of $U$ generated by
$$ x^0-1, x^r\cdot x^s-x^{r+s},\ \forall\ r,s\in \ensuremath{\mathbb{Z}}\xspace^d.$$
Let $\overline{U}=U/\mathcal{I}.$
Note that an admissible $\widetilde{W}_d$-module is just a $U$-module annihilated by $\mathcal{I}$,
or, just a $\overline{U}$-module.
In particular, $M$ is a $\overline{U}$-module.
Denote $T(u,r)= x^{-r}D(u,r)-D(u,0)+\mathcal{I}\in\overline{U}$. We will identify elements of $\widetilde{W}_d$ with their images in $\overline{U}$. It is easy to verify that
$$\aligned
&[T(v, s),T(u, r)] = (u|s)T(v, s) - (v| r)T(u,r)+T((v|r)u-(u|s)v, r+s),\\
&[D(v, 0),T(u, r)]=[x^s,T(u, r)]=0,\ \forall\ u,v\in\ensuremath{\mathbb C}\xspace^d, r,s\in\ensuremath{\mathbb{Z}}\xspace^d.
\endaligned$$
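These relations can be tested on the module $\Omega(\lambda,a)$ of Section 3, where a short computation shows that every $T(u,r)$ acts by the scalar $(1-a)(u\mid r)$; the sketch below (our own, with sample data for $d=2$) is therefore only a consistency check of the first displayed bracket:
\begin{verbatim}
import sympy as sp

# Consistency check of the first displayed bracket on Omega(lambda, a),
# with T(u, r) = x^{-r} D(u, r) - D(u, 0) (our own sketch, d = 2).
t = sp.symbols('t1 t2')
lam = sp.symbols('l1 l2', nonzero=True)
a = sp.Symbol('a')

def pair(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

def shift(f, r):
    return f.subs({ti: ti - ri for ti, ri in zip(t, r)},
                  simultaneous=True)

def D(u, r, f):
    return (lam[0]**r[0] * lam[1]**r[1]
            * pair(u, [ti - a * ri for ti, ri in zip(t, r)])
            * shift(f, r))

def xpow(r, f):                        # multiplication by x^r
    return lam[0]**r[0] * lam[1]**r[1] * shift(f, r)

def T(u, r, f):
    return xpow([-r[0], -r[1]], D(u, r, f)) - D(u, (0, 0), f)

u, v, r, s = (1, 2), (3, -1), (2, 0), (-1, 1)
f = t[0]**3 + t[1]
lhs = T(v, s, T(u, r, f)) - T(u, r, T(v, s, f))
w = [pair(v, r) * ui - pair(u, s) * vi for ui, vi in zip(u, v)]
rhs = (pair(u, s) * T(v, s, f) - pair(v, r) * T(u, r, f)
       + T(w, [r[0] + s[0], r[1] + s[1]], f))
assert sp.simplify(sp.expand(lhs - rhs)) == 0
\end{verbatim}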
We use $\mathcal{T}$ to denote the Lie subalgebra of $\overline{U}$ generated by
operators $T(u, r)$, $u\in\ensuremath{\mathbb C}\xspace^d, r\in\ensuremath{\mathbb{Z}}\xspace^d$. Let $J$ denote the subspace of $\mathcal{T}$ spanned by
$$T(u; r, m)=T(u,r+m)-T(u,r)-T(u,m),\ \forall\ u\in\ensuremath{\mathbb C}\xspace^d, r,m\in\ensuremath{\mathbb{Z}}\xspace^d.$$
It is straightforward to check (or, cf. \cite{E2}) that $J$ is an ideal of $\mathcal{T}$ and the linear map
\begin{equation}\label{iso-gl_d}
\mathcal{T}/J\to \mathfrak{gl}_d(\ensuremath{\mathbb C}\xspace); \ T(e_i, e_j)\mapsto E_{j,i},\ \forall\ i,j\in\{1,2,\cdots,d\}
\end{equation}
is a Lie algebra isomorphism. So $z_d=\sum_{i=1}^d T(e_i,e_i)\in Z(\mathcal{T}/J)$.
We can check that $[z_d, \widetilde{W}_d]=0$ and hence $z_d$ is an endomorphism of the $\widetilde W_d$-module $M$.
We denote the fraction field of $U(\mathfrak{h})$ by $H$. Since $M$ is a finitely generated torsion-free $U(\mathfrak{h})$-module, we have the nonzero finite-dimensional vector space $M_H:=H\otimes_{U(\mathfrak{h})}M$ over $H$.
Define the $U(\mathfrak{h})$-rank of $M$ as rank$(M)=\dim_H M_H$.
If $\mathfrak{g}$ is a Lie algebra over $\ensuremath{\mathbb C}\xspace$, then the tensor product
$\mathfrak{g}_H:=H\otimes_{\ensuremath{\mathbb C}\xspace}\mathfrak{g}$ can be considered as a Lie algebra over $H$ by defining $\kappa(\kappa_1\otimes g)=\kappa\kappa_1\otimes g$ for $\kappa,\kappa_1\in H, g\in \mathfrak{g}$.
From now on in this section, we will assume that $M\in\widetilde{\mathcal{H}}$ is a simple $\widetilde{W}_d$-module.
Thus $z_d$ acts as a scalar on $M$.
\begin{lemma}\label{T/J}
We have $JM=0$, and further $M$ is a module over $\mathcal{T}/J$.
\end{lemma}
\begin{proof} By definition, $M$ is a $\mathcal{T}$-module. Recall that $\mathfrak{W}(M)$ is a cuspidal admissible $\widetilde{W}_d$-module. Using Theorem 2.9 in \cite{E2} we see that $J^k \mathfrak{W}(M)=0$ for some positive integer $k$, i.e., $J^k M\subseteq I_nM$ for all $n\in \ensuremath{\mathbb{Z}}\xspace^d$. Since $M$ is a torsion free $U(\mathfrak{h})$-module of finite rank, we have $\bigcap_{n\in \ensuremath{\mathbb{Z}}\xspace^d} I_nM=0$ by Lemma \ref{cap I_iP}. Thus $J^kM=0$. Hence $JM\ne M$.
In the algebra $\overline{U}$, we have
$$\aligned
& [D(v, s),T(u, r)] \equiv x^s\big((v|r)T(u, s) - (u|s)T(v,r)\big)\quad \mod x^sJ,\\
& [D(v, s),T(u; m, r)] \equiv - x^s(u|s)T(v;m,r)\quad \mod x^sJ,
\endaligned$$
for all $u,v\in\ensuremath{\mathbb C}\xspace^d$ and $ m, r,s\in\ensuremath{\mathbb{Z}}\xspace^d$.
So $JM$ is clearly a $\widetilde W_d$-submodule of $M$. Since $M$ is a simple $\widetilde W_d$-module, we have $JM=0$. Thus $M$ is a module over $\mathcal{T}/J$.
\end{proof}
\begin{lemma} \label{V} As a module over $\mathcal{T}/J\cong\mathfrak{gl}_d$, $M$ has a finite dimensional simple submodule.
\end{lemma}
\begin{proof}
We can consider $\mathcal{T}$ as a subalgebra of $\text{End}_{U(\mathfrak{h})}(M)$.
Over the field $H$, $M_H$ is a finite dimensional module over the Lie algebra $\mathcal{T}_H$.
By Lemma \ref{T/J}, we see that $JM=0$ and $J_HM_H=0$.
Hence $M_H$ can be viewed as a finite dimensional $(\mathcal{T}_H/J_H)$-module.
Noticing $\mathcal{T}/J\cong\mathfrak{gl}_d(\ensuremath{\mathbb C}\xspace)$, we can easily see that $\mathcal{T}_H/J_H\cong\mathfrak{gl}_d(H)$ which is a finite dimensional split reductive Lie algebra over $H$.
Recall that $z_d$, which corresponds to the identity matrix in $\mathfrak{gl}_d(H)$, acts as a scalar on $M$.
By Theorem 8 on Page 79 in \cite{J}, we know that the $\mathfrak{gl}_d(H)$-module $M_H$ is completely reducible. Let $V_1$ be a simple $\mathfrak{gl}_d(H)$-submodule of $M_H$.
By Theorem 3 on Page 215 in \cite{J}, we know that $V_1$ is a highest weight module of a dominant weight. Let $v$ be a highest weight vector of $V_1$. We may assume that $v\in M$. Let $V$ be the $(\mathcal{T}/J)$-submodule of $M$ generated by $v$.
Then $V$ is a finite dimensional irreducible submodule of $M$ over $\mathcal{T}/J\cong\mathfrak{gl}_d(\ensuremath{\mathbb C}\xspace)$.
\end{proof}
\begin{lemma} We have the associative algebra isomorphism
\begin{equation}\label{iso-iota}
\iota:\mathcal{K}_d\otimes U(\mathcal{T})\rightarrow \overline{U}, \ \
\iota(x^r \partial^{\a} \otimes y)=x^r \cdot \prod_{j=1}^d D(e_j,0)^{\a_j}\cdot y+\mathcal{I},
\end{equation}
where $r\in\ensuremath{\mathbb{Z}}\xspace^d, \a\in\ensuremath{\mathbb{Z}}\xspace_+^d, y\in U(\mathcal{T})$.
\end{lemma}
\begin{proof} Note that $U(\mathcal{T})$ is an associative subalgebra of $\overline{U}$. Since the restrictions of $\iota$ to $\mathcal{K}_d$ and $U(\mathcal{T})$ are well-defined homomorphisms of associative algebras, $\iota$ is a well-defined homomorphism of associative algebras. From the definition of $T(u, r)\in \mathcal{T}$ for $u\in\ensuremath{\mathbb C}\xspace^d, r\in\ensuremath{\mathbb{Z}}\xspace^d$, we have
$D(u,r) =x^{r}(T(u,r)+D(u,0))$, i.e.,
$$\iota(x^{r}\otimes T(u,r)+x^rD(u,0)\otimes 1)=D(u, r),$$
so $\iota$ is an epimorphism.
By the PBW Theorem we know that $\overline{U}$ has a basis consisting of monomials in the variables
$$D(e_i, r): r\in\ensuremath{\mathbb{Z}}\xspace^d\setminus\{0\}, i\in\{1,2,\cdots, d\}$$
over $\mathcal{K}_d$. Therefore $\overline{U}$ has a basis consisting of monomials in the variables
$$T(e_i, r): r\in\ensuremath{\mathbb{Z}}\xspace^d\setminus\{0\}, i\in\{1,2,\cdots, d\}$$
over $\mathcal{K}_d$. So $\iota$ is injective and hence an isomorphism.
\end{proof}
Now any (simple) $\overline{U}$-module can also be considered as a (simple) module over $\mathcal{K}_d\otimes U(\mathcal{T})$ via the isomorphism $\iota$.
The following result is well-known.
\begin{lemma}\label{V'}Let $A, B$ be unital associative algebras and $B$ have a countable basis. \begin{itemize}\item[(a).] If $M$ is a simple module over $A\otimes B$ that contains a simple $B=\ensuremath{\mathbb C}\xspace\otimes B$ submodule $V$, then $M\cong W\otimes V$ for a simple $A$-module $W$. \item[(b).] If $W$ and $V$ are simple modules over $A$ and $B$ respectively, then $W\otimes V$ is a simple module over $A\otimes B$.\end{itemize}\end{lemma}
Now we can determine all simple modules in $\widetilde{\mathcal{H}}$.
\begin{lemma}\label{Simple} Let $V$ be a simple $\mathfrak{gl}_d$-module, and $P$ a simple $\mathcal{K}_d$-module. Then the admissible $\widetilde{W}_d$-module $\mathcal{F}(P, V)$ is simple.
\end{lemma}
\begin{proof} Regard $\mathcal{F}(P, V)$ as a $\mathcal{K}_d\otimes U(\mathcal{T})$-module via $\iota$. It is clear from Lemma \ref{V'} that $\mathcal{F}(P,V)$ is a simple $\mathcal{K}_d\otimes U(\mathcal{T})$-module.
Hence it is also a simple admissible $\widetilde{W}_d$-module.
\end{proof}
\begin{theorem} \label{main1}Let $M$ be a simple admissible $\widetilde{W}_d$-module that is finitely generated when restricted to $U(\mathfrak{h})$.
Then $M\cong \mathcal{F}(P, V)$ for a simple $\mathcal{K}_d$-module $P$ that is a finitely generated (torsion-free) $U(\mathfrak{h})$-module, and a finite dimensional simple $\mathfrak{gl}_d$-module $V$.
\end{theorem}
\begin{proof} By Lemmas \ref{V} and \ref{V'}, there is a simple $\mathcal{K}_d$-module $P$ and a finite dimensional simple $\mathfrak{gl}_d$-module $V$ so that $M\cong P\otimes V$ as $\overline{U}$-modules,
via the Lie algebra isomorphism in \eqref{iso-gl_d} and associative algebra isomorphism in \eqref{iso-iota}.
More precisely, we can deduce the action of $\widetilde{W}_d$ on $P\otimes V$:
$$\aligned
D(u,r)(y\otimes v)=&x^{r}(T(u,r)+D(u,0))(y\otimes v)\\
=&(x^{r}D(u,0)y)\otimes v+(x^ry)\otimes (ru^T)v,
\endaligned$$
for all $u\in\ensuremath{\mathbb C}\xspace^d, r\in\ensuremath{\mathbb{Z}}\xspace^d, y\in P, v\in V$, coinciding with
the definition of the $\widetilde{W}_d$-module $\mathcal{F}(P, V)$.
Hence $M\cong \mathcal{F}(P,V)$.
At last, note that
$$f(\partial)(y\otimes v)=f(\partial)y\otimes v, \quad \forall\ f(\partial)\in U(\mathfrak{h}), y\in P, v\in V.$$
Since $M$ is a finitely generated torsion-free $U(\mathfrak{h})$-module, so is $P$.
\end{proof}
\begin{corollary}\label{cor4.8} Let $M$ be a simple admissible $\widetilde{W}_d$-module that is a free $U(\mathfrak{h})$-module of finite rank. Then $M\cong \mathcal{F}(P, V)$ for a simple $\mathcal{K}_d$-module $P$ that is a free $U(\mathfrak{h})$-module of finite rank, and a finite dimensional simple $\mathfrak{gl}_d$-module $V$.\end{corollary}
\begin{proof}From the proof of the above theorem, we only need to show that $P$ is also a free $U(\mathfrak{h})$-module. In fact we have $M\cong P^m$ as $U(\mathfrak{h})$-modules, where $m=\dim V$. Then $P$ is a finitely generated projective module over $U(\mathfrak{h})$, which by the Quillen-Suslin Theorem is free.\end{proof}
\begin{corollary}Any admissible $\widetilde{W}_d$-module that is a finitely generated $U(\mathfrak{h})$-module has a finite composition length as $\widetilde{W}_d$-module.\end{corollary}
\begin{proof} Note that any admissible $\widetilde{W}_d$-module $M$ that is a finitely generated $U(\mathfrak{h})$-module is $U(\mathfrak{h})$-torsion-free. Since $U(\mathfrak{h})$ is a Noetherian integral domain, every $U(\mathfrak{h})$-submodule of $M$ is finitely generated as a $U(\mathfrak{h})$-module. Hence the length of any composition series of $M$ cannot exceed the rank of $M$ as a $U(\mathfrak{h})$-module. Thus $M$ has a finite composition length as a $\widetilde{W}_d$-module.\end{proof}
\section{Simple $\ensuremath{W}\xspace_d$-modules that are finitely generated $U(\mathfrak{h})$-modules}
In this section we study the category ${\mathcal{H}}$ consisting of $\ensuremath{W}\xspace_d$-modules that are finitely generated $U(\mathfrak{h})$-modules. Let $\mathcal{W}$ be the category consisting of weight $\ensuremath{W}\xspace_d$-modules with finite dimensional weight spaces.
Proposition \ref{p3.5} tells us that there is a close link between these two categories.
We will generalize the concept of the covering module established in \cite{BF} to the category ${\mathcal{H}}$.
In this section, we always fix a nontrivial $W_d$-module $M\in\mathcal{H}$.
\begin{lemma}\label{torsion-1}
Any nontrivial simple $W_d$-module $M\in\mathcal{H}$ is torsion-free over $U(\mathfrak{h})$.
\end{lemma}
\begin{proof} Recall the $W_d$-submodule $M'=\{v\in M|{\rm ann}_{U(\mathfrak{h})}(v)\ne 0\}$. Suppose that $M$ is not torsion-free, i.e., $M'\neq0$. Then $M= M'$ and $J={\rm ann}_{U(\mathfrak{h})}(M)\ne 0$ since $M$ is finitely generated over $U(\mathfrak{h})$.
Recall that $I_\a$ is the maximal ideal of $U(\mathfrak{h})$ generated by $\partial_i-\a_i, i=1,\cdots,d$.
We claim that $I_\a M=M$ for all $\a\ne 0$. Otherwise, say $I_{\a_0}M\neq M$ for some $\a_0\ne 0$.
Consider the Harish-Chandra $W_d$-module defined at the beginning of Section 3, i.e.,
$\mathfrak{W}^{(\a_0)}(M)=\bigoplus_{n\in \ensuremath{\mathbb{Z}}\xspace^d} (M/I_{n+\a_0}M)\otimes x^n$.
From the irreducible Harish-Chandra module theory (cf. \cite{BF}), we know that
$M/I_{n+\a_0}M\ne 0$ for all $n+\a_0\neq0$. Hence $J\subseteq I_{n+\a_0}$ for all $n+\a_0\neq0$,
for otherwise $I_{n+\a_0}M=(I_{n+\a_0}+J)M=U(\mathfrak{h})M=M$.
We have $J\subseteq \bigcap_{n\in\ensuremath{\mathbb{Z}}\xspace^d\setminus\{-\a_0\}} I_{n+\a_0}=0$, a contradiction.
Since $M$ is a finitely generated $U(\mathfrak{h})$-module, it has a maximal $U(\mathfrak{h})$-submodule $K$.
Then we have $M/K\cong U(\mathfrak{h})/I_{\a_1}$ for some $\a_1\in\ensuremath{\mathbb C}\xspace^d$ and $I_{\a_1}M\subseteq K\ne M$,
forcing $\a_1=0$. So we have proved that $I_0M \neq M$. Note that $D(u, r)I_\alpha=I_{\alpha+r}D(u, r)$. Then $\bigcap_{\a\in \ensuremath{\mathbb C}\xspace^d}I_{\a}M=I_0M$ becomes a proper nonzero $W_d$-submodule of $M$, a contradiction. Thus $M$ is torsion-free over $U(\mathfrak{h})$.
\end{proof}
Now consider $W_d$ as the adjoint $W_d$-module. We can make the tensor product $\ensuremath{W}\xspace_d$-module $W_d\otimes M$ into an admissible $\widetilde{W}_d$-module by defining
$$x^s(D(u, r)\otimes y)=D(u, r+s)\otimes y,\ \forall\ u\in\ensuremath{\mathbb C}\xspace^d, r, s\in\ensuremath{\mathbb{Z}}\xspace^d.$$
For $u\in \ensuremath{\mathbb C}\xspace^d, r\in\ensuremath{\mathbb{Z}}\xspace^d, y\in M$, we define $D(u, r)\boxtimes y\in {\text{Hom}}_{\ensuremath{\mathbb C}\xspace}(A_d,M)$ as
$$(D(u, r)\boxtimes y)(x^s)=D(u, r+s) y,\ \forall\ s\in\ensuremath{\mathbb{Z}}\xspace^d.$$
Denote
$$W_d\boxtimes M=\text{span}\{w\boxtimes y\ |\ w\in W_d, y\in M\}\subset {\text{Hom}}_{\ensuremath{\mathbb C}\xspace}(A_d,M).$$
We define the canonical linear map
$$\psi: W_d\otimes M\to W_d\boxtimes M; \quad w\otimes y\mapsto w\boxtimes y.$$
It is easy to see that the kernel of $\psi$ is a $\widetilde{W}_d$-submodule of the tensor module $W_d\otimes M$.
Thus we can make $W_d\boxtimes M$ into an admissible $\widetilde{W}_d$-module via $\psi$, which is isomorphic to $(W_d\otimes M)/\ker\psi$.
As in \cite{BF}, we call this admissible $\widetilde{W}_d$-module {\it the cover of} $M$.
The action of $\widetilde{W}_d$ on $W_d\boxtimes M$ can be written explicitly as follows:
$$\aligned
& D(u,r)(D(v,s)\boxtimes y)=[D(u,r),D(v,s)]\boxtimes y+D(v,s)\boxtimes D(u,r)y,\\
& x^r(D(v,s)\boxtimes y)=D(v,r+s)\boxtimes y,\ \forall\ u,v\in\ensuremath{\mathbb C}\xspace^d, r,s\in\ensuremath{\mathbb{Z}}\xspace^d, y\in M.
\endaligned$$
The following result is easy to verify.
\begin{lemma}\label{cover}
The linear map
$$\aligned
\pi: \,\,&W_d\boxtimes M & \to\ \ & M ,\\
&\hskip5pt w\boxtimes y & \mapsto\ \ & wy,\quad \forall\ w\in W_d, y\in M
\endaligned$$
is a $W_d$-module epimorphism.
\end{lemma}
The following result gives an interesting commutativity property of the weighting functor
$\mathfrak{W}$ and the functor $W_d\boxtimes{-}$.
\begin{lemma}\label{iso}
As admissible $\widetilde{W}_d$-modules, $W_d\boxtimes {\mathfrak{W}(M)}\cong \mathfrak{W}(W_d\boxtimes {M})$.
\end{lemma}
\begin{proof} Let us first compute $I_n(W_d\boxtimes {M})$.
For any $u, v\in\ensuremath{\mathbb C}\xspace^d, n, s\in\ensuremath{\mathbb{Z}}\xspace^d$ and $y\in M$, we have
$$\aligned
&(D(u, 0)-(u|n))(D(v,s)\boxtimes y)
= D(v,s)\boxtimes (D(u, 0)-(u|n-s))y;\endaligned$$
and $$\aligned
I_n(W_d\boxtimes {M})
=& \sum_{s\in\ensuremath{\mathbb{Z}}\xspace^d}W_d(s)\boxtimes I_{n-s}(M),
\endaligned$$
where $W_d(s)=\{D(v,s)\mid v\in\ensuremath{\mathbb C}\xspace^d\}$ is the root space of $W_d$.
Define the linear map $\gamma: W_d\boxtimes {\mathfrak{W}(M)}\rightarrow \mathfrak{W}(W_d\boxtimes {M})$ by
$$\aligned
& \gamma(D(u,r)\boxtimes(y+I_nM))\\
=& D(u,r)\boxtimes y +\sum_{s\in\ensuremath{\mathbb{Z}}\xspace^d}W_d(s)\boxtimes I_{n+r-s}(M).
\endaligned$$
The linearity and compatibility of $\gamma$ with the action of $\widetilde{W}_d$ can be verified straightforwardly. The details are left to the readers. We only prove that $\gamma$ is well-defined and bijective.
Indeed, given $n\in\ensuremath{\mathbb{Z}}\xspace^d$ and finitely many $u_i\in\ensuremath{\mathbb C}\xspace^d, r_i\in\ensuremath{\mathbb{Z}}\xspace^d, y_i\in M$,
we have $\sum\limits_{i}D(u_i,r_i)\boxtimes(y_i+I_{n-r_i}M)=0$ in $W_d\boxtimes {\mathfrak{W}(M)}$
if and only if
$$\sum_{i}D(u_i,r_i+\a) y_i\in I_{n+\a}M,\ \forall\ \a\in\ensuremath{\mathbb{Z}}\xspace^d,$$
if and only if
$$\sum_{i}D(u_i,r_i+\a) y_i\in I_{n+\a}\sum_{s\in\ensuremath{\mathbb{Z}}\xspace^d}W_d(s+\a)(M),\ \forall\ \a\in\ensuremath{\mathbb{Z}}\xspace^d,$$
if and only if
$$\sum_{i}D(u_i,r_i+\a) y_i\in \sum_{s\in\ensuremath{\mathbb{Z}}\xspace^d}W_d(s+\a) I_{n-s}(M),\ \forall\ \a\in\ensuremath{\mathbb{Z}}\xspace^d,$$
if and only if
$$\sum_{i}D(u_i,r_i)\boxtimes y_i+\sum_{s\in\ensuremath{\mathbb{Z}}\xspace^d}W_d(s)\boxtimes I_{n-s}(M)=0,$$
in $\mathfrak{W}(W_d\boxtimes {M})$. The lemma follows.
\end{proof}
\begin{lemma}\label{fin gen} Let $M\in\mathcal{H}$ be finitely generated and torsion-free over $U(\mathfrak{h})$.
Then the admissible $\widetilde{W}_d$-module $W_d\boxtimes {M}$ is a finitely generated $U(\mathfrak{h})$-module when restricted to $U(\mathfrak{h})$.
\end{lemma}
\begin{proof} Since $M$ is a finitely generated torsion-free $U(\mathfrak{h})$-module, we know that $\mathfrak{W}(M)$ is a Harish-Chandra $W_d$-module. From Theorem 4.11 in \cite{BF} we know that $\mathfrak{W}(M)$ is a quotient of some cuspidal admissible $\widetilde{W}_d$-module. Therefore there exists $m\in \ensuremath{\mathbb{N}}\xspace$ such that
$\Omega_{i,j,\a,\beta,\gamma}^{(m)}\mathfrak{W}(M)=0$
for all $\a,\beta,\gamma\in \ensuremath{\mathbb{Z}}\xspace^d,\ i,j=1,2,\ldots,d,$
where
$$\Omega_{i,j,\a,\beta,\gamma}^{(m)}=\sum_{s=0}^m (-1)^s{m\choose s}D(e_i,\a-s\gamma)D(e_j,\beta+s\gamma).$$
By the module action of $W_d$ on $\mathfrak{W}(M)$, we deduce
$$\Omega_{i,j,\a,\beta,\gamma}^{(m)}M\subseteq \bigcap_{\xi\in \ensuremath{\mathbb{Z}}\xspace^d} I_{\xi}M=0,\ \forall\ \a,\beta,\gamma\in \ensuremath{\mathbb{Z}}\xspace^d, i,j=1,2\ldots,d.$$
Let $\|\a\|= |\a_1| + |\a_2|+\cdots+ |\a_d|$ for $\a=(\a_1,\ldots,\a_d)\in \ensuremath{\mathbb{Z}}\xspace^d$.
Note that $M=W_dM$. It is enough to prove by induction on $\|\a\|$ that
$$D(e_i,\a)\boxtimes ( D(e_j,\beta) v)\in \sum_{\|\gamma\|\le md} D(e_i,\gamma)\boxtimes M,$$
for all $v\in M, \a,\beta\in\ensuremath{\mathbb{Z}}\xspace^d, i,j\in\{1,\cdots,d\}.$
This is obvious for $\a\in\ensuremath{\mathbb{Z}}\xspace^d$ with $\|\a\|\le md$. Now we assume that $\|\a\|>md$. Without loss of generality, we may assume that $\a_1>m$.
For any $\eta\in\ensuremath{\mathbb{Z}}\xspace^d$, we have
$$\aligned
&\sum_{s=0}^m (-1)^s {m\choose s} \Big(D(e_i,\a-se_1)\boxtimes \big(D(e_j,\beta+se_1)v\big)\Big)(x^\eta)\\
=&\sum_{s=0}^m (-1)^s {m\choose s}D(e_i,\eta+\a-se_1)\big(D(e_j,\beta+se_1)v\big)\\
=&\Omega_{i,j,\eta+\a,\beta,e_1}^{(m)}(v)=0,
\endaligned$$
which implies
$$\sum_{s=0}^m (-1)^s {m\choose s} D(e_i,\a-se_1)\boxtimes (D(e_j,\beta+se_1)v)=0$$
in $W_d\boxtimes {M}$, that is,
$$ \aligned
& D(e_i,\a)\boxtimes (D(e_j,\beta)v)\\ =&-\sum_{s=1}^m(-1)^{s} {m\choose s} D(e_i,\a-se_1)\boxtimes\big(D(e_j,\beta+se_1)v\big),
\endaligned$$
which belongs to $ \sum_{\|\gamma\|\le md}D(e_i,\gamma)\boxtimes M$
by induction hypothesis.
\end{proof}
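To get a feel for the operators $\Omega^{(m)}_{i,j,\a,\beta,\gamma}$ used above, note that on the module $A_d$ (with the natural action of Section 2) the relevant coefficient is an $m$-th finite difference of a polynomial of degree one in $s$, and therefore vanishes already for $m\ge2$. A small numerical sketch of our own:
\begin{verbatim}
from math import comb
import numpy as np

# On A_d we have D(e_i, a) x^n = n_i x^{a+n}, so the coefficient of
# x^{alpha+beta+n} in Omega^{(m)}_{i,j,alpha,beta,gamma} x^n is the m-th
# finite difference of the degree-1 polynomial
# s -> n_j (n_i + beta_i + s gamma_i), which vanishes for m >= 2.
d, m = 3, 2
rng = np.random.default_rng(2)
alpha, beta, gamma, n = rng.integers(-3, 4, size=(4, d))
i, j = 0, 2
total = sum((-1)**s * comb(m, s) * n[j] * (n[i] + beta[i] + s * gamma[i])
            for s in range(m + 1))
assert total == 0
\end{verbatim}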
\begin{theorem}\label{thm5.5} Let $M$ be a simple $W_d$-module that is finitely generated when restricted to $U(\mathfrak{h})$.
Then $M$ is a simple quotient of the $W_d$-module $\mathcal{F}( P, V)$ for a simple $\mathcal{K}_d$-module $P$ that is a finitely generated (torsion-free) $U(\mathfrak{h})$-module, and a finite dimensional simple $\mathfrak{gl}_d$-module $V$.
\end{theorem}
\begin{proof} From Lemmas \ref{cover} and \ref{fin gen}, there is an admissible $\widetilde{W}_d$-module $P_1$ that is a finitely generated torsion-free $U(\mathfrak{h})$-module and
a $\ensuremath{W}\xspace_d$-module epimorphism $\phi: P_1\rightarrow M$. We may choose $P_1$ so that its rank $s$ over $U(\mathfrak{h})$ is minimal.
We claim that $P_1$ is a simple $\widetilde{W}_d$-module. Otherwise, $P_1$ has a nonzero maximal $\widetilde{W}_d$-submodule $P_2$.
Then from Lemma \ref{torsion}, $P_2$ and $P_1/P_2$ are torsion-free, both of rank less than $s$. However, since $M$ is simple as a $W_d$-module, we must have either $\phi(P_2)=M$ or $\phi(P_2)=0$. Then either $P_2$ or $P_1/P_2$ has a simple $W_d$-quotient isomorphic to $M$. This contradicts the choice of $P_1$.
From Theorem \ref{main1}, we know that $P_1\cong \mathcal{F}( P, V)$ for a simple $\mathcal{K}_d$-module $P$ that is a finitely generated torsion-free $U(\mathfrak{h})$-module, and a finite dimensional irreducible $\mathfrak{gl}_d$-module $V$. \end{proof}
Next we will determine all possible simple quotient modules of $\mathcal{F}(P, V)$ in Theorem 5.5.
Let $V$ be a finite dimensional simple $\mathfrak{gl}_d$-module, and $P$ a simple $\mathcal{K}_d$-module that is a finitely generated $U(\mathfrak{h})$-module. If $V$ is not isomorphic to any of the $\mathfrak{gl}_d$-modules $V(\delta_k, k)$, $k= 1,2, \cdots, d$, then from Corollary 3.6 in \cite{LLZ} we know that the $W_d$-modules $\mathcal{F}(P, V)=P\otimes V$ are simple. Now we will determine all simple quotients of $\mathcal{F}(P, V(\delta_k, k))$ over $W_d$ for any $k= 1,2, \cdots, d$.
Let us first establish some general results on finitely generated torsion-free $U(\mathfrak{h})$-modules.
For convenience, we denote $R=U(\mathfrak{h})$, and for any prime ideal $p$ of $R$,
let $R_p$ be the localization of $R$ at $p$ and
$P_{p}$ be the localization of the $R$-module $P$ at $p$.
In particular, for any maximal ideal $I_\a$ of $R$ and any $R$-module $M$,
the quotient $M/I_\a M$ is a vector space over $R/I_\a\cong \ensuremath{\mathbb C}\xspace$.
Moreover, we have the following canonical isomorphisms
$$M/I_{\a}M\cong (M/I_{\a}M)_{I_{\a}}\cong M_{I_{\a}}/I_{\a}M_{I_{\a}}$$
of $R$-modules.
Recall that $H=R_{(0)}$ is the quotient field of $R$, where $(0)$ is the zero ideal of $R$.
\begin{lemma}\label{L-free}Let $M$ be a finitely generated torsion-free $U(\mathfrak{h})$-module. Then $M$ is a free $U(\mathfrak{h})$-module if and only if $\dim M/I_{\a}M=\operatorname{rank} M$ for all $\a\in \ensuremath{\mathbb C}\xspace^d$.\end{lemma}
\begin{proof} We need only to prove the ``if'' part of the lemma since the ``only if'' part is clear. Let $r=\operatorname{rank} M$, which is actually $\dim_HM_H$. Since $M_{I_{\a}}/I_{\a}M_{I_{\a}}\cong M/I_{\a}M$ is of dimension $r$, from Nakayama's lemma, we know that as an $R_{I_{\a}}$-module, $M_{I_{\a}}$ is generated by $r$ elements.
Say $M_{I_{\a}}=R_{I_\a}w_1(\a)+\cdots+R_{I_\a}w_r(\a)$, where $w_1(\a),\cdots,w_r(\a)\in M_{I_{\a}}$ are dependent on $\a$.
Noticing that $R_{I_\a}\subset R_{(0)}=H$, we have $M_{(0)}=Hw_1(\a)+\cdots+Hw_r(\a)$.
Since $\operatorname{rank}(M)=r$, we see that $w_1(\a),\ldots,w_r(\a)$ are $H$-linearly independent,
hence $R_{I_\a}$-linearly independent, and form an $R_{I_\a}$-basis of $M_{I_{\a}}$.
Recall that any finitely generated module over a Noetherian algebra is a finitely presented module. From Corollary 3.4 on Page 19 in \cite{L}, we know that $M$ is a finitely generated projective module over $R$. Hence, by the Quillen-Suslin Theorem, $M$ is $R$-free.\end{proof}
\begin{lemma}\label{lemma-dim}Let $M$ be a $W_d$-module that is a finitely generated torsion-free $U(\mathfrak{h})$-module of rank $r$. Then $\dim M/I_{\a}M=r$ for all $\a\in \ensuremath{\mathbb C}\xspace^d\backslash \{0\}$. \end{lemma}
\begin{proof}
Let $v_1,v_2,\ldots, v_r\in M$ be an $H$-basis of $M_{(0)}$. Then there exists some $f\in R$ such that \begin{equation}\label{eq2}M\subseteq \frac{1}{f}(Rv_1+Rv_2+\cdots+Rv_r).\end{equation}
For any $\a\in \ensuremath{\mathbb C}\xspace^d\backslash\{0\}$, from the fact that $\bigcap_{n\in \ensuremath{\mathbb{Z}}\xspace^d\backslash\{-\a\}} I_{\a+n}=0$, we see that there exists some $n\in \ensuremath{\mathbb{Z}}\xspace^d\backslash\{-\a\} $ such that $f\not\in I_{\a+n}$. From (\ref{eq2}), we see that
$M_{I_{\a+n}}=fM_{I_{\a+n}}=R_{I_{\a+n}}v_1+R_{I_{\a+n}}v_2+\cdots+R_{I_{\a+n}}v_r$, and further
$\{v_1,\ldots,v_r\}$ becomes an $R_{I_{\a+n}}$-basis of $M_{I_{\a+n}}$.
Then
$$\aligned
& M/I_{\a+n}M\cong
M_{I_{\a+n}}/I_{\a+n}M_{I_{\a+n}}\\
= & (R_{I_{\a+n}}v_1\oplus\cdots \oplus R_{I_{\a+n}}v_r)/I_{\a+n}(R_{I_{\a+n}}v_1\oplus \cdots\oplus R_{I_{\a+n}}v_r)\\
\cong & \bigoplus_{i=1}^r R_{I_{\a+n}}v_i/I_{\a+n}(R_{I_{\a+n}}v_i)
\cong (R/I_{\a+n})^r\endaligned$$ as $R$-modules. In particular, we have $\dim M/I_{\a+n}M=r$.
Considering the cuspidal module $\mathfrak{W}^{(\a)}(M)=\bigoplus_{m\in \ensuremath{\mathbb{Z}}\xspace^d} (M/I_{\a+m}M)\otimes x^m$ over $W_d$, we have $\dim M/I_{\alpha}M=\dim M/I_{\a+n}M=r$.
\end{proof}
From this lemma, we have
\begin{corollary}\label{free-3}\begin{itemize}\item[(1).] Any $\mathcal{K}_d$-module that is a finitely generated $U(\mathfrak{h})$-module is a free $U(\mathfrak{h})$-module.
\item[(2).] Any admissible $\widetilde W_d$-module that is a finitely generated $U(\mathfrak{h})$-module is a free $U(\mathfrak{h})$-module.
\item[(3).] A nontrivial simple $W_d$-module $M$ that is a finitely generated $U(\mathfrak{h})$-module is a free $U(\mathfrak{h})$-module if and only if $\dim M/I_0M=\operatorname{rank} M$. \end{itemize}\end{corollary}
\begin{proof} (1) follows from (2), since any $\mathcal{K}_d$-module is automatically an admissible $\widetilde{W}_d$-module, as we remarked in Section 2. And (3) follows directly from Lemma \ref{L-free} and Lemma \ref{lemma-dim}.
For (2), we take an admissible $\widetilde{W}_d$-module $M$; then $x^nM=M$ and $x^nI_0M=I_{n}M$,
which implies $M/I_0M\cong M/I_nM$ for any $n\in\ensuremath{\mathbb{Z}}\xspace^d$, and hence $\dim M/I_0M=\operatorname{rank} M$. Again the result follows from Lemma \ref{L-free} and Lemma \ref{lemma-dim}.
\end{proof}
Let $P$ be an admissible $\widetilde{W}_d$-module.
Then we have the $W_d$-module homomorphisms for $k=1,2,\cdots, d$,
\begin{equation*}\begin{array}{lrcl}
\pi_{k-1}:& \mathcal{F}(P,V(\delta_{k-1},k-1)) & \rightarrow & \mathcal{F}(P, V(\delta_{k},k)),\\
& y\otimes v & \mapsto & \sum\limits_{j=1}^{d} \partial_jy\otimes e_j\wedge v,
\end{array}\end{equation*}
for all $y\in P$ and $v\in V(\delta_{k-1}, k-1)$. Note that the definition of $\pi_{k-1}$ has a different form from that in \cite{LLZ}, but they are essentially the same by Theorem 2.3.
Denote $ \mathfrak{L}_d(P,k)=\text{Im}\ \pi_{k-1}$ and $\tilde \mathfrak{L}_d(P,k)=\text{Ker}\ \pi_{k}$,
where $\pi_d=0$ and $\tilde{\mathfrak{L}}_d(P,d)=\mathcal{F}(P, V(\delta_{d},d))$.
Thanks to the isomorphism between $F(P,V)$ and $\mathcal{F}(P,V)$ in Theorem 2.3 we can collect some results
on these modules from \cite{LLZ}. For $k=1,\cdots,d$, if $P$ is a simple $\mathcal{K}_d$-module that is finitely generated over $U(\mathfrak{h})$, then
\begin{itemize}\item[(1).] $\mathcal{F}(P, V(\delta_{0},0))$ is simple,
\item[(2).] $\mathcal{F}(P, V(\delta_{k},k))$ is not simple,
\item[(3).] $\mathfrak{L}_d(P,k)$ is simple,
\item[(4).] $\mathfrak{L}_d(P,k)\subseteq \tilde \mathfrak{L}_d(P,k)$,
\item[(5).] $\tilde{\mathfrak{L}}_d(P,k)/\mathfrak{L}_d(P,k)$ is trivial,
\item[(6).] $\mathcal{F}(P, V(\delta_{d},d))/\mathfrak{L}_d(P,d)$ is trivial.\end{itemize}
Here the only thing we might need to explain is that, since $P$ is a free $U(\mathfrak{h})$-module (Corollary \ref{free-3}(1)),
we have $\sum_{i=1}^d \partial_iP\ne P$ and $\mathcal{F}(P, V(\delta_d,d))$ is not simple by
Corollary 3.6 in \cite{LLZ}.
Recall from Lemma \ref{p3.5} that $$\mathfrak{W}(\mathcal{F}(P,V(\delta_{k},k)))\cong \mathcal{F}(\mathfrak{W}(P),V(\delta_{k},k)).$$
We have the $W_d$-module homomorphism
\begin{equation*}\aligned
\mathfrak{W}(\pi_{k-1}):\ \mathcal{F}(\mathfrak{W}(P),V(\delta_{k-1},k-1)) &\rightarrow \mathcal{F}(\mathfrak{W}(P), V(\delta_{k},k))\\
(y+I_nP)\otimes x^n\otimes v &\mapsto \sum\limits_{j=1}^{d} (\partial_jy+I_nP)\otimes x^n\otimes e_j\wedge v,
\endaligned\end{equation*}
for all $y\in P, v\in V(\delta_{k-1}, k-1)$ and $n\in \ensuremath{\mathbb{Z}}\xspace^d$.
\begin{lemma}\label{lemma-main3} Let notations be as above, except that $P$ is a simple
$\mathcal{K}_d$-module which is a finitely generated $U(\mathfrak{h})$-module of rank $r$. For all $i=1,2,\ldots,d$,
\begin{itemize}
\item[(1).] $\operatorname{rank} \mathfrak{L}_d(P,i)=\operatorname{rank} \tilde \mathfrak{L}_d(P,i)=r{{d-1}\choose i-1}$;
\item[(2).] $\dim\big(\mathfrak{L}_d(P,i)/I_0\mathfrak{L}_d(P,i)\big)=r{{d}\choose i-1}$;
\item[(3).] $\mathfrak{L}_d(P,1)\cong \mathcal{F}(P,V(0,0))$ is a free $U(\mathfrak{h})$-module;
\item[(4).] $\mathfrak{L}_d(P,i)$ is not a free $U(\mathfrak{h})$-module for $i=2,\ldots,d$.
\end{itemize}
\end{lemma}
\begin{proof}(1). Recall that $\mathcal{F}(P, V(\delta_i,i))$ is a free $U(\mathfrak{h})$-module. Hence $\mathfrak{L}_d(P,i)$ and $\tilde{\mathfrak{L}}_d(P,i)$ are both torsion free over $U(\mathfrak{h})$ and of the same rank, since $\mathfrak{h}\tilde{\mathfrak{L}}_d(P,i)\subset {\mathfrak{L}}_d(P,i)$ (the quotient $\tilde{\mathfrak{L}}_d(P,i)/\mathfrak{L}_d(P,i)$ being trivial). Moreover from $$\mathcal{F}(P, V(\delta_i,i))/\tilde{\mathfrak{L}}_d(P,i)\cong\mathfrak{L}_d(P,i+1),\ \forall\ i=1,\ldots,d-1,$$ we have \begin{equation}\label{eq3}\operatorname{rank} \mathfrak{L}_d(P,i)+\operatorname{rank} \mathfrak{L}_d(P, i+1)=\operatorname{rank} \mathcal{F}(P, V(\delta_i,i))=r{{d}\choose i}.\end{equation} Recall that $\mathfrak{L}_d(P,1)\cong \mathcal{F}(P,V(0,0))$. Then $\operatorname{rank} \mathfrak{L}_d(P,1)=r$. Now from (\ref{eq3}) we deduce $\operatorname{rank} \mathfrak{L}_d(P,i)=r{{d-1}\choose i-1}$ for all $ i=1,2,\ldots,d$ by induction on $i$.
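Indeed, the induction step is just Pascal's rule: if $\operatorname{rank} \mathfrak{L}_d(P,i)=r{{d-1}\choose i-1}$, then (\ref{eq3}) gives
$$\operatorname{rank} \mathfrak{L}_d(P,i+1)=r{{d}\choose i}-r{{d-1}\choose i-1}=r{{d-1}\choose i}.$$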
(2). Let $\{w_1,\cdots,w_r\}$ be a $U(\mathfrak{h})$-basis of $P$. Note that
$$\mathfrak{L}_d(P,i)\hskip -3pt =\hskip -3pt
\text{span}\left\{\sum_{j=1}^d \partial_j w\otimes e_j\wedge v\,\Big|\,w\in P, v\in \bigwedge^{i-1} \ensuremath{\mathbb C}\xspace^d\right\}.$$
$$I_{0}\mathfrak{L}_d(P,i)\hskip -3pt =\hskip -3pt
\text{span}\left\{\sum_{j=1}^d \partial_j w\otimes e_j\wedge v\,\Big|\,w\in I_{0}P, v\in \bigwedge^{i-1} \ensuremath{\mathbb C}\xspace^d\right\}.$$
By considering the total degree on $\partial_1,\ldots, \partial_d$, it is easy to deduce the following vector space decomposition $\mathfrak{L}_d(P,i)=X\oplus I_0\mathfrak{L}_d(P,i)$, where
\begin{equation}
X\hskip -3pt =\hskip -3pt \text{span}\hskip -3pt \left\{\hskip -3pt \sum_{j=1}^d \partial_j w_k\otimes e_j\wedge v\,\Big|\,v\in \bigwedge^{i-1} \ensuremath{\mathbb C}\xspace^d\hskip -3pt, k=1,2,\ldots, r\hskip -3pt \right\}.
\end{equation}
Take any basis $v_1,\cdots,v_s$ of $\bigwedge^{i-1} \ensuremath{\mathbb C}\xspace^d$, where $s={{d}\choose i-1}$.
It is easy to check that the elements $\sum_{j=1}^d \partial_j w_k\otimes e_j\wedge v_t$, $t=1,\cdots,s$, $k=1,\cdots,r$,
form a basis of $X$. Hence $\dim \mathfrak{L}_d(P,i)/I_0\mathfrak{L}_d(P,i)=\dim X=r{{d}\choose i-1}$.
(3) is clear. (4) follows from (1) and (2) by Corollary \ref{free-3}(3).
\end{proof}
\begin{lemma}\label{trivial} Let $P$ be a simple $\mathcal{K}_d$-module which is a finitely generated $U(\mathfrak{h})$-module.
Then as a $W_d$-module,
\begin{itemize}
\item[(a).] $\mathcal{F}(P, V(\delta_k,k))$ has finite composition length for $k=1,\cdots,d$;
\item[(b).] $\mathcal{F}(P, V(\delta_d,d))$ has a unique minimal submodule $\mathfrak{L}_d(P, d)$ and the quotient
$\mathcal{F}(P, V(\delta_d,d))/\mathfrak{L}_d(P, d)$ is trivial;
\item[(c).] $\mathcal{F}(P, V(\delta_k,k))$ has a unique minimal submodule $\mathfrak{L}_d(P,k)$, a unique maximal
submodule $\tilde{\mathfrak{L}}_d(P,k)$, the quotient $\tilde{\mathfrak{L}}_d(P,k)/\mathfrak{L}_d(P,k)$ is trivial,
and $\mathcal{F}(P, V(\delta_k,k))/\tilde{\mathfrak{L}}_d(P,k)\cong\mathfrak{L}_d(P,k+1)$ for $k=1,\cdots\hskip-1pt,\hskip-1pt d-1$.
\end{itemize}
\end{lemma}
\begin{proof}
(a). First note that $\mathcal{F}(P,V(\delta_k, k))$ has finitely many trivial simple $W_d$-subquotients, since it is finitely generated as a $U(\mathfrak{h})$-module. Then the result follows from the fact that any nontrivial simple $W_d$-subquotient of $\mathcal{F}(P,V(\delta_k,k))$ is $U(\mathfrak{h})$-torsion free.
Then we note that $\mathcal{F}(P, V(\delta_k,k))$ has no trivial $W_d$-submodules,
since any submodule of a $U(\mathfrak{h})$-torsion free module $\mathcal{F}(P, V(\delta_k,k))$ must be $U(\mathfrak{h})$-torsion free.
(b) follows from the above argument for $k=d$ and the facts that $\mathfrak{L}_d(P,d)$ is simple and $\mathcal{F}(P,V(\delta_d,d))/\mathfrak{L}_d(P,d)$ is trivial.
(c) Now take $1\leq k\leq d-1$. First suppose that $N$ is a nontrivial simple submodule of $\mathcal{F}(P, V(\delta_k,k))$. To the contrary, we assume that $N\neq\mathfrak{L}_d(P,k)$.
Then we have the submodule $N\oplus \mathfrak{L}_d(P,k)$ of $\mathcal{F}(P, V(\delta_k,k))$. Moreover, $N\cap\tilde{\mathfrak{L}}_d(P,k)=0$: otherwise $N\subseteq\tilde{\mathfrak{L}}_d(P,k)$ and, since $N\cap\mathfrak{L}_d(P,k)=0$, the nontrivial module $N$ would embed into the trivial quotient $\tilde{\mathfrak{L}}_d(P,k)/\mathfrak{L}_d(P,k)$. Hence $\pi_k$ maps $N$ isomorphically onto its image, the simple module $\mathfrak{L}_d(P,k+1)$, so
$N\cong \mathfrak{L}_d(P,k+1)$. By the definition of $\tilde{\mathfrak{L}}_d(P,k)$ we deduce $\mathcal{F}(P, V(\delta_k,k))= N\oplus \tilde{\mathfrak{L}}_d(P,k)$ as $W_d$-modules. Note that $\mathfrak{W}(P)\cong A_d^r$ as $\mathcal{K}_d$-modules. So
\begin{equation}\label{long}\aligned &\mathfrak{W}(N)\oplus\mathfrak{W}(\tilde \mathfrak{L}_d(P,k))=\mathfrak{W}(\mathcal{F}(P, V(\delta_k,k)))\\
\cong &\mathcal{F}(\mathfrak{W}(P), V(\delta_k,k))\cong \mathcal{F}(A_d, V(\delta_k,k))^r \endaligned\end{equation}
as admissible ${W}_d$-modules.
By definitions, we have the $W_d$-module epimorphism
\begin{equation}\label{epi5.1}
\phi:\quad \mathfrak{W}(N)\cong\mathfrak{W}(\mathfrak{L}_d(P,k+1))\to
\mathfrak{L}_d(\mathfrak{W}(P),k+1)
\end{equation}
given by mapping the element
$\big(\sum\limits_{j=1}^{d} \partial_jy\otimes e_j\wedge v+I_n\mathfrak{L}_d(P,k+1)\big)\otimes x^n$ to
$ \sum\limits_{j=1}^{d}(\partial_jy+I_nP)\otimes x^n\otimes e_j \wedge v$.
Now from Lemma \ref{lemma-main3} and Lemma \ref{lemma-dim}, we have
$$\dim \mathfrak{L}_d(P,k+1)/I_n\mathfrak{L}_d(P,k+1)=r{{d-1}\choose k},\ \forall\ n\in\ensuremath{\mathbb{Z}}\xspace^d\setminus\{0\},$$
and it is also known that
$$\dim \big(\mathfrak{L}_d(\mathfrak{W}(P), k+1)\big)_n=r{{d-1}\choose k},\ \forall\ n\in\ensuremath{\mathbb{Z}}\xspace^d\setminus\{0\}.$$
Thus $\ker\phi$ in \eqref{epi5.1} is a trivial submodule of $\mathfrak{W}(N)$.
On the other hand, $\mathcal{F}(A_d, V(\delta_k,k))$ does not have any nonzero trivial submodule.
So $\ker(\phi)=0$ by \eqref{long} and $\mathfrak{W}(N)\cong\mathfrak{L}_d(\mathfrak{W}(P),k+1)\cong\mathfrak{L}_d(A_d,k+1)^r$,
which implies $\mathfrak{L}_d(A_d,k+1)$ is a direct summand of $\mathcal{F}(A_d, V(\delta_k,k))$, which is impossible since $\mathcal{F}(A_d, V(\delta_k,k))$ is indecomposable.
So we must have $N=\mathfrak{L}_d(P,k)$.
Now suppose that $N'$ is a maximal $W_d$-submodule of $\mathcal{F}(P, V(\delta_k,k))$. We know that ${\mathfrak{L}}_d(P,k)\subset N'$.
If $N'\neq \tilde{\mathfrak{L}}_d(P,k)$, then $N'+\tilde{\mathfrak{L}}_d(P,k)=\mathcal{F}(P, V(\delta_k,k))$ and
$$\mathcal{F}(P, V(\delta_k,k))/N'\cong \tilde{\mathfrak{L}}_d(P,k)/\big(N'\cap\tilde{\mathfrak{L}}_d(P,k)\big)$$
is trivial, since $\mathfrak{L}_d(P,k)\subseteq N'\cap\tilde{\mathfrak{L}}_d(P,k)$.
Take any $w\in \mathcal{F}(P, V(\delta_k,k))\setminus N'$. Then there exists
$n\in\ensuremath{\mathbb{Z}}\xspace^d$ such that $w\notin I_n\mathcal{F}(P, V(\delta_k,k))$. Thus $w+I_n\mathcal{F}(P, V(\delta_k,k))$ is nonzero in $\mathfrak{W}(\mathcal{F}(P, V(\delta_k,k)))\cong \mathcal{F}(A_d, V(\delta_k,k))^r$ and gives rise to a
nonzero trivial quotient module of $\mathcal{F}(A_d, V(\delta_k,k))^r$, impossible.
So we must have $N'=\tilde{\mathfrak{L}}_d(P,k)$.
\end{proof}
\begin{corollary} \label{cor5.5} Let $M$ be a simple $W_d$-module that is free of finite rank when restricted to $U(\mathfrak{h})$.
Then $M$ is isomorphic to $\mathcal{F}( P, V)$ for a simple $\mathcal{K}_d$-module $P$ that is a free $U(\mathfrak{h})$-module of finite rank, and a finite dimensional simple $\mathfrak{gl}_d$-module $V$ which is not isomorphic to $V(\delta_k,k)$ for any $k=1,2,\ldots,d$.
\end{corollary}
\begin{proof}It follows directly from Theorem \ref{thm5.5}, Lemmas \ref{lemma-main3} and \ref{trivial}. \end{proof}
\section{Simple $\mathcal{K}_d$-modules that are finitely generated $U(\mathfrak{h})$-modules}
From Corollary \ref{free-3} we know that any $\mathcal{K}_d$-module that is a finitely generated $U(\mathfrak{h})$-module is a free $U(\mathfrak{h})$-module of finite rank. In this section we will characterize such $\mathcal{K}_d$-modules.
Let \begin{equation}\label{polynomail}
f_i=1+\sum_{j=1}^{n_i}a_{ij}x_i^j\in \mathcal{K}_d,\quad i=1,2,\ldots,d,
\end{equation} where $n_i\ge 1, a_{ij}\in U(\mathfrak{h}), a_{in_i}\in \ensuremath{\mathbb C}\xspace^\star$.
Define the $\mathcal{K}_d$-module $$S_{f_1,\ldots,f_d}=\mathcal{K}_d\Big/\Big(\sum_{i=1}^d \mathcal{K}_df_i\Big).$$
\begin{lemma}
\begin{itemize}
\item[(1).] Any quotient $\mathcal{K}_d$-module of $S_{f_1,\ldots,f_d}$ is a finitely generated $U(\mathfrak{h})$-module.
\item[(2).] As $\mathcal{K}_d$-module, $S_{f_1,\ldots,f_d}$ has finite composition length.
\item[(3).] Any simple $\mathcal{K}_d$-module that is a finitely generated $U(\mathfrak{h})$-module is isomorphic to a quotient of $S_{f_1,\ldots, f_d}$ for some $f_i\in \mathcal{K}_d$ of the form in (\ref{polynomail}).
\end{itemize}
\end{lemma}
\begin{proof} (1). It is easy to see that
$$\mathcal{K}_d=\sum_{i=1}^{d}\mathcal{K}_df_i+\sum_{0\le r_i\le n_i-1} U(\mathfrak{h})x^r.$$
So $S_{f_1,\ldots,f_d}$ is a finitely generated $U(\mathfrak{h})$-module, and hence any quotient $\mathcal{K}_d$-module of $S_{f_1,\ldots,f_d}$ is a finitely generated $U(\mathfrak{h})$-module.
(2) follows from Corollary 4.9.
(3). Now suppose that $M$ is a simple $\mathcal{K}_d$-module that is a finitely generated $U(\mathfrak{h})$-module when restricted to $U(\mathfrak{h})$. Take a nonzero vector $v\in M$. For any $j=1,\cdots,d$, consider the $U(\mathfrak{h})$-submodule $M_j$ of $M$ generated by $\ensuremath{\mathbb C}\xspace[x_j^{\pm1}]v$ which is finitely generated as a $U(\mathfrak{h})$-module.
There exist some $s,k\in \ensuremath{\mathbb{Z}}\xspace$ with $s\le k$ such that $M_j=\sum_{i=s}^k U(\mathfrak{h})x_j^i v$.
Since $x_j^{s-1}v, x_j^{k+1}v\in M$, we can find $h_i, g_i\in U(\mathfrak{h})$ such that
$$x_j^{s-1}v=\sum_{i=s}^k h_i(\partial)x_j^iv, \ \ \ \ x_j^{k+1}v=\sum_{i=s}^k g_i(\partial)x_j^iv.$$
Clearly $(x_j^{s-1}-\sum_{i=s}^k(h_i+g_i)x_j^i+x_j^{k+1})v=0$. We can take
$$f_j(x)=x_j^{1-s}\Big(x_j^{s-1}-\sum_{i=s}^k(h_i+g_i)x_j^i+x_j^{k+1}\Big),\ \ \ j=1,\cdots,d.$$
Thus $M$ is isomorphic to a simple quotient of $S_{f_1,\ldots, f_d}$.
\end{proof}
\begin{example} For $k=1, 2, \cdots, d$, let $A^{(k)}=\ensuremath{\mathbb C}\xspace[x_k^{\pm1}]$ and $\mathcal{K}^{(k)}=\ensuremath{\mathbb C}\xspace[x_k^{\pm 1}, \partial _k]$. Let $f_k=\partial_k-g_k(x_k)$, where $g_k(x_k)=\sum\limits_{i=-m_k}^{n_k} a_{k,i}x_k^i \in A^{(k)}$ with
$a_{k,i}\in\ensuremath{\mathbb C}\xspace$, $a_{k,-m_k}a_{k,n_k}\ne 0$ and $m_k, n_k>0$. Then we have the $\mathcal{K}^{(k)}$-module $S^{(k)}_{f_k}=\mathcal{K}^{(k)}/\mathcal{K}^{(k)}f_k\cong A^{(k)}$ with the actions:
$$x_k^i\, x_k^l=x_k^{i+l} , \ \ \partial_kx_k^l=x_k^l(l+g_k(x_k)),\ \forall\ i,l\in\ensuremath{\mathbb{Z}}\xspace.$$
From Theorem 12 (3) in \cite{LGZ} we know that $S^{(k)}_{f_k}$ is a simple module over $\mathcal{K}^{(k)}.$
It is easy to see that $S^{(k)}_{f_k}$ is a finitely generated $\ensuremath{\mathbb C}\xspace[\partial_k]$-module. Moreover, $S^{(k)}_{f_k}$ is a free $\ensuremath{\mathbb C}\xspace[\partial_k]$-module of rank $m_k+n_k$.
In particular, we obtain simple $\mathcal{K}^{(k)}$-modules that are free $\ensuremath{\mathbb C}\xspace[\partial_k]$-modules of any given positive rank.
By taking tensor products, we obtain simple $\mathcal{K}_d$-modules $S_{f_1, f_2, \cdots, f_d}$ that are free $U(\mathfrak{h})$-modules of any given positive rank. Using Corollary \ref{cor5.5} we then obtain simple $W_d$-modules that are free $U(\mathfrak{h})$-modules of any given positive rank.
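As a minimal illustration of the rank count (our own, with $d=1$ and the index $k$ dropped): take $g(x)=x+x^{-1}$, so $m=n=1$ and the action above gives $\partial\cdot x^{l}=l\,x^{l}+x^{l+1}+x^{l-1}$. Hence
$$x^{l+1}=(\partial-l)\,x^{l}-x^{l-1},\qquad x^{l-1}=(\partial-l)\,x^{l}-x^{l+1},$$
so, by induction on $|l|$, every power $x^{l}$ lies in $\ensuremath{\mathbb C}\xspace[\partial]1+\ensuremath{\mathbb C}\xspace[\partial]x$, consistent with $S_{f}$ being a free $\ensuremath{\mathbb C}\xspace[\partial]$-module of rank $m+n=2$.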
\end{example}
\
\begin{center}
\bf Acknowledgments
\end{center}
\noindent
X.G. is partially supported by NSF of China (Grant 11471294) and the Outstanding Young Talent Research Fund of Zhengzhou University (Grant 1421315071);
G.L. is partially supported by NSF of China (Grant 11301143) and the grants of Henan University (2012YBZR031, 0000A40382);
R.L. was partially supported by the NSF of China (Grant 11471233, 11371134);
K.Z. is partially supported by NSF of China (Grant 11271109) and NSERC.
\label{sec:introduction}
After the discovery of a non-zero $\theta_{13}$~\cite{An:2012eh,Ahn:2012nd,Abe:2012tg,Adamson:2011qu,Abe:2011sj} the emerging picture from the last decades of neutrino oscillation searches consolidates a structure for the PMNS matrix~\cite{Pontecorvo:1957cp,Pontecorvo:1957qd,Maki:1960ut,Maki:1962mu,Pontecorvo:1967fh} describing lepton flavour mixing strikingly different from its CKM counterpart in the quark sector, making the Standard Model flavour puzzle even more intriguing. Far from the hierarchical structure described through the tiny mixing angles of the CKM, large mixing angles characterize the lepton mixing. The ``atmospheric'' mixing angle $\theta_{23}$ is presently compatible with maximal mixing as well as with a large but non-maximal value in either the first or the second octant. Similarly, the ``solar'' mixing angle $\theta_{12}$ is around $33^\circ$ and only $\theta_{13} \sim 8-9^\circ$ is relatively small and its value is still comparable in magnitude to the Cabibbo angle, the largest in the CKM. The large mixing opens the window to the present and next generation of neutrino oscillation experiments to tackle new questions that could provide answers to fundamental open problems.
Present experiments such as T2K~\cite{Abe:2017uxa, Abe:2019vii} and NO$\nu$A~\cite{Acero:2019ksn} have started to provide the first hints on the potentially CP-violating phase $\delta$. The discovery of the violation of the particle-antiparticle symmetry in the lepton sector would be extremely suggestive, given that CP-violation is a necessary ingredient to explain the matter over antimatter excess to which we owe our existence and that the CKM contribution has been shown to be insufficient~\cite{Gavela:1993ts,Gavela:1994dt} for this purpose. Similarly, present neutrino oscillation experiments already show some preference for normal ordering (positive $\Delta m^2_{31}$) with respect to inverted ordering. This parameter is a fundamental input to combine with the searches for the neutrinoless double beta decay process in order to probe the Majorana nature of neutrinos. Finally, present experiments as well as their successors T2HK~\cite{Abe:2015zbg} and DUNE~\cite{Acciarri:2015uup} will also provide even more precise measurements of the oscillation parameters that could hold the key to discriminate among different flavour models addressing the flavour puzzle.
The European Spallation Source (ESS) at Lund provides an opportunity to build a new-generation, long-baseline neutrino oscillation experiment with an unprecedented neutrino luminosity through an upgrade of the ESS Linac~\cite{Baussan:2013zcy}. Its $2.5$~GeV protons would lead to a rather low energy neutrino flux, between 200 and 600~MeV. This energy range is very well suited for a water Cerenkov detector of the MEMPHYS type~\cite{deBellefon:2006vq,Agostino:2012fd}. In Ref.~\cite{Baussan:2013zcy} a greenfield study optimizing the physics reach to leptonic CP-violation was performed for this ESS neutrino Super-Beam facility (ESS$\nu$SB). Interestingly, the outcome of this optimization, as well as follow-up studies~\cite{Agarwalla:2014tpa,Chakraborty:2017ccm,Chakraborty:2019jlv}, was that the best baseline at which to study the neutrino beam from the ESS facility at a MEMPHYS-type detector would be between 400 and 600~km. Two candidate mines that could host the detector were identified: Garpenberg at 540~km and Zinkgruvan at 360~km from the ESS site. This choice makes the ESS$\nu$SB design unique, as the neutrino flux observed by the detector mainly corresponds to the second maximum of the $\nu_\mu \to \nu_e$ oscillation probability, with a marginal contribution of events at the first oscillation peak.
For the value of $\theta_{13} = 8.6^\circ$ currently preferred~\cite{Esteban:2018azc} by Daya Bay~\cite{Adey:2018zwh} and RENO~\cite{Bak:2018ydk}, the ``atmospheric'' term of the $\nu_\mu \to \nu_e$ oscillation probability~\cite{Cervera:2000kp}, which is governed by oscillations driven by the large frequency $\Delta m^2_{31}$ and with an amplitude $\sin^2 2\theta_{13}$, dominates over the sub-leading ``solar'' term driven by $\Delta m^2_{21}$ with amplitude $\sin^2 2\theta_{12}$ at the first oscillation maximum. Thus, the interference between the two, which is the only term dependent on the yet unknown CP-violating phase $\delta$, will also be a sub-leading contribution to the full oscillation probability at the first peak and potentially hidden by systematic uncertainties. Conversely, at the second oscillation maximum the slower ``solar'' oscillation has had more time to develop and thus the CP-violating interference term can give a significant contribution to the oscillation probability, thus increasing the sensitivity to CP violation~\cite{Coloma:2011pg}.
The price to pay in order to observe the oscillation probability at its second maximum is high. Despite this being the optimal choice to maximize the dependence of the oscillation probability on the leptonic CP violating phase, the ratio of the oscillation baseline to the neutrino energy ($L/E$) needs to be a factor 3 larger compared to the first maximum. This implies roughly an order of magnitude less statistics than if the experiment had been designed at the first peak. Indeed, the neutrino flux decreases with $L^{-2}$ from the beam divergence and the neutrino cross section and beam collimation increase with the neutrino energy. Despite the unprecedented neutrino luminosity from the upgraded ESS linac and the megaton-class MEMPHYS detector, only around 100 signal events for each beam polarity would be accumulated after 10 years data taking (2 years in neutrinos and 8 years in antineutrinos) at the 540~km Garpenberg baseline (see Fig.~7 of Ref.~\cite{Baussan:2013zcy}). Conversely, the 360~km Zinkgruvan baseline has a 2.25 times larger neutrino flux. However, the neutrino spectrum for this baseline is rather centered at the first oscillation minimum while the first and second peaks are sampled by the high and low energy tails respectively. Overall this gives similar statistics at the second oscillation maximum when compared to the Garpenberg option, but also some additional statistics at the first peak and in between.
For the ESS$\nu$SB the increased dependence on the CP violating phase of the probability is well worth the loss of precious neutrino events at the second maximum. Indeed, it could provide unprecedented discovery potential to leptonic CP-violation or the most precise measurement of the corresponding phase after discovery, which could be instrumental in tackling the flavour puzzle. Moreover, as pointed out in Ref.~\cite{Coloma:2011pg} and as we will elaborate in later sections, this choice also makes the physics reach much more resilient against unexpected sources of systematic errors, since the signal, while small, has a leading dependence on the unknown parameters. Conversely, statistics will be the bottleneck of the ESS$\nu$SB physics reach and thus longer periods of data taking would greatly increase its capabilities.
On the other hand, other potential oscillation searches, different from the CP violation search, will be negatively impacted by the choice of the second oscillation maximum baseline. In particular the sensitivity to the octant of $\theta_{23}$ is severely reduced by this choice. Indeed, this measurement mainly relies on the ``atmospheric'' term of the oscillation probability, which is leading at the first maximum instead, together with $\theta_{13}$ information from reactor measurements and $\Delta m^2_{31}$ and $\sin^2 2\theta_{23}$ from $\nu_\mu$ disappearance. Similarly the $\nu_\mu$ disappearance data and hence the precise determination of $\Delta m^2_{31}$ and $\sin^2 2\theta_{23}$ are negatively affected by the choice of the second oscillation maximum. The lack of knowledge on the octant of $\theta_{23}$ can lead to ``octant degeneracies''~\cite{Fogli:1996pv} that in turn somewhat limit the CP discovery potential of the ESS$\nu$SB~\cite{Ghosh:2019sfi}. The sensitivity to the mass ordering is also limited at the ESS$\nu$SB given the small matter effects from the low energy and short baseline. However, since these matter effects are small, the resulting ``sign degeneracies''~\cite{Minakata:2001qm} do not compromise the sensitivity to $\delta$ of the facility~\cite{Baussan:2013zcy,Ghosh:2019sfi}.
A very effective and convenient way of increasing both the octant and mass ordering sensitivity of a neutrino Super Beam experiment is to combine the signal from the neutrino beam with the huge atmospheric neutrino sample that can be collected at such a detector~\cite{Huber:2005ep,Campagne:2006yx}. In the case of the ESS$\nu$SB this combination is particularly synergistic. Indeed, the atmospheric neutrino sample can provide not only significantly increased sensitivity to the octant and the mass ordering to solve parametric degeneracies, but also improved precision to $\Delta m^2_{31}$ and $\sin^2 2\theta_{23}$ which is otherwise one of the main drawbacks of the setup.
In this work we will combine the observation of the ESS$\nu$SB flux tuned for the second maximum of the $\nu_e$ appearance probability with the complementary atmospheric neutrino data, more strongly dominated by the first maximum and $\nu_\mu$ disappearance, and characterized by stronger matter effects. We will explore how the physics reach of the facility improves when beam data is considered together with the atmospheric neutrino sample and then review the optimization of the ESS$\nu$SB facility using both data sets.
Finally, we will discuss which sources of systematic errors among the ones considered impact the final sensitivity more significantly.
This paper is organized as follows. In Section~\ref{sec:theory} we discuss the peculiarities of the neutrino oscillation probability and the appearance of parametric degeneracies when observing at the second oscillation maximum. In Section~\ref{sec:setup} we describe the experimental setup considered and the details of the numerical simulations performed. Section~\ref{sec:results} describes the results of the simulations and in Section~\ref{sec:conclusions} we present our conclusions and summarize our work.
\section{Measurements at the second oscillation peak}
\label{sec:theory}
The determination of the oscillation parameters at beam experiments is, in general, hindered by the appearance of degenerate solutions,
cf. e.g., Refs.~\cite{BurguetCastell:2001ez,Barger:2001yr,Minakata:2013hgk,Coloma:2014kca,Ghosh:2015ena}. These degeneracies have been extensively studied for the experimental setups of T2HK~\cite{Coloma:2012ji,C.:2014ika,Ghosh:2014rna,Abe:2014oxa,Ghosh:2017ged,Abe:2018uyc} and DUNE~\cite{Coloma:2012ji,Adams:2013qkq,Barger:2013rha,Ghosh:2013pfa,Agarwalla:2013vyc,Barger:2014dfa,Bora:2014zwa,Acciarri:2015uup,Nath:2015kjg,DeRomeri:2016qwo,Ghosh:2017ged,Abi:2018dnh,deGouvea:2019ozk,Ghoshal:2019pab,Meloni:2018xnk} (and also their combination~\cite{Fukasawa:2016yue,Ballett:2016daj}).
As stated in Section~\ref{sec:introduction}, the $L/E$ range which the ESS$\nu$SB focuses on is different from those of other forthcoming experiments.\footnote{
The MOMENT proposal~\cite{Cao:2014bea,Blennow:2015cmn,Bakhti:2016prn,Tang:2019wsv} with $L=150$ km can access the oscillation probability at similar $L/E$ to the ESS$\nu$SB.
The T2HKK proposal~\cite{Ishitsuka:2005qi,Hagiwara:2005pe,Hagiwara:2006vn,Kajita:2006bt,Hagiwara:2006nn,Hagiwara:2009bb,Hagiwara:2011kw,Hagiwara:2012mg,Hagiwara:2016qtb,Abe:2016ero,Raut:2017dbh}, in which the first and the second oscillation maxima are measured with two detectors located at different sites, would also cover a similar $L/E$ range to the ESS$\nu$SB.}
Therefore, here we will discuss the peculiarities of the ESS$\nu$SB and its differences from other experiments in the determination of the oscillation parameters, before presenting our numerical results. The $\nu_e$ appearance oscillation probability in matter is given by~\cite{Cervera:2000kp} (see also~\cite{Freund:1999gy, Akhmedov:2004ny, Minakata:2015gra}):
\begin{equation}
\begin{split}
P(\barparenb{\nu}_{\mu}\rightarrow&\barparenb{\nu}_e) = s_{23}^2\sin^2{2\theta_{13}}\left(\frac{\Delta_{31}}{\tilde{B}_{\mp}}\right)^2\sin^2{\left(\frac{\tilde{B}_{\mp}L}{2}\right)}+c_{23}^2\sin^2{2\theta_{12}}\left(\frac{\Delta_{21}}{A}\right)^2\sin^2{\left(\frac{A L}{2}\right)}\\
& + \tilde{J}\frac{\Delta_{21}}{A}\frac{\Delta_{31}}{\tilde{B}_{\mp}}\sin{\left(\frac{A L}{2}\right)}\sin{\left(\frac{\tilde{B}_{\mp}L}{2}\right)}\left[\cos{\delta} \cos{\left(\frac{\Delta_{31}L}{2}\right)}\mp\sin{\delta}\sin{\left(\frac{\Delta_{31}L}{2}\right)}\right],
\end{split}
\label{Eq:Probability}
\end{equation}
where $\Delta_{i j} \equiv \Delta m^2_{i j}/ 2E$, $\tilde{J}=c_{13}\sin{2\theta_{12}}\sin{2\theta_{23}}\sin{2\theta_{13}}$, $A = \sqrt{2} G_F n_e$ is the matter potential with $n_e$ the electron density and $G_F$ the Fermi constant, and $\tilde{B}_{\mp}\equiv |A\mp \Delta_{13}|$. In this expression the only dependence on the CP violating phase $\delta$ appears in the last term, which is the interference between the ``atmospheric'' oscillation in the first term and the ``solar'' one in the second. Since $\sin 2 \theta_{13} \sim 0.3$ while $\Delta_{21} L \sim 0.05$ at the first oscillation peak, the ``atmospheric'' term tends to dominate the oscillation probability and the interesting CP interference is only subleading. Conversely, at the second oscillation maximum $\Delta_{21} L \sim 0.1$, so that the dependence of the oscillation probability on $\delta$ is much stronger, which improves the sensitivity to this parameter~\cite{Coloma:2011pg}. This can be seen in Fig.~\ref{Fig:probs}, where the change in the probability upon changing the value of $\delta$ is much more significant at the second oscillation maximum than at the first.
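This enhancement is easy to verify numerically. The following is a minimal, self-contained sketch of ours (not part of the analysis chain described in Section~\ref{sec:setup}): it takes the vacuum limit $A \to 0$ of Eq.~(\ref{Eq:Probability}) and assumes illustrative oscillation parameters close to the current best fit.
\begin{verbatim}
import numpy as np

S12, S13, S23 = 0.310, 0.0224, 0.550   # sin^2(theta_ij); illustrative values
DM21, DM31 = 7.4e-5, 2.51e-3           # Delta m^2 in eV^2; illustrative values

def p_mue_vac(E, L, delta, antinu=False):
    """Vacuum limit (A -> 0) of the nu_mu -> nu_e probability above.
    E in GeV, L in km; 1.267 converts Dm2[eV^2] L[km] / (4 E[GeV]) to rad."""
    th12, th13, th23 = [np.arcsin(np.sqrt(s)) for s in (S12, S13, S23)]
    d21 = 1.267 * DM21 * L / E         # Delta_21 L / 2
    d31 = 1.267 * DM31 * L / E         # Delta_31 L / 2
    J = np.cos(th13)*np.sin(2*th12)*np.sin(2*th23)*np.sin(2*th13)
    sgn = 1.0 if antinu else -1.0      # the -/+ sign of the sin(delta) term
    return (np.sin(th23)**2 * np.sin(2*th13)**2 * np.sin(d31)**2
            + np.cos(th23)**2 * np.sin(2*th12)**2 * d21**2
            + J * d21 * np.sin(d31)
              * (np.cos(delta)*np.cos(d31) + sgn*np.sin(delta)*np.sin(d31)))

L = 540.0                              # km (Garpenberg)
for E in (1.09, 0.36):                 # ~ first / second oscillation maximum
    p_plus, p_minus = p_mue_vac(E, L, np.pi/2), p_mue_vac(E, L, -np.pi/2)
    print(f"E = {E} GeV: P(+pi/2) = {p_plus:.3f}, P(-pi/2) = {p_minus:.3f}")
\end{verbatim}
With these inputs the neutrino probability changes between $\delta=+\pi/2$ and $\delta=-\pi/2$ by a factor of roughly $2$ at the first maximum but roughly $5$ at the second, illustrating the enhanced $\delta$-dependence, at the price of an overall smaller event rate.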
\begin{figure}[t]
\centering
\hspace*{-1cm}
\includegraphics[width=13.5cm]{Biprob/probs-norm.pdf}
\caption{Oscillation probabilities for the Zinkgruvan (upper panels) and Garpenberg (lower panels) baselines as a function of the energy for neutrinos (left panels) and antineutrinos (right panels). The red (blue) lines are for normal (inverted) ordering and three different values of $\delta = -\pi/2$, $0$ and $\pi/2$ are represented by the dashed, solid and dotted lines respectively. The grey histograms show the number of events that would be obtained in each energy bin for a 2/8 time splitting between neutrino/antineutrino mode if the oscillation probability was $1$. Thus, they serve as a guide of what energies of the oscillation probability would be well-sampled by the ESS$\nu$SB setup.}
\label{Fig:probs}
\end{figure}
In Eq.~(\ref{Eq:Probability}) the leading dependence on the mass ordering comes from the ``atmospheric'' term, as it goes as the inverse of the square of $\tilde{B}_{\mp}$. For $E \sim |\Delta m_{31}^2|/(2A)$ there will be a resonance which will produce an enhancement for neutrinos against antineutrinos or vice versa depending on the mass ordering. For a typical average matter density of $3.0~\text{g}/\text{cm}^3$ one finds that the approximate energy for this resonance to happen is $E \sim \mathcal{O}(10)~\text{GeV}$. Given that the peak of the flux for the ESS$\nu$SB happens at $E\sim \mathcal{O}(100)~\text{MeV}$ (see Fig.~\ref{Fig:probs}), the importance of the matter effects and hence of the sensitivity to the mass ordering for this facility is not expected to be significant.
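Explicitly (a back-of-the-envelope estimate, assuming an electron fraction $Y_e\simeq 0.5$ and $|\Delta m^2_{31}|\simeq 2.5\times 10^{-3}~\text{eV}^2$),
$$A=\sqrt{2}G_F n_e \simeq 7.6\times 10^{-14}\,\Big(\frac{\rho\, Y_e}{\text{g}/\text{cm}^3}\Big)~\text{eV} \simeq 1.1\times 10^{-13}~\text{eV}
\quad\Rightarrow\quad E \sim \frac{|\Delta m^2_{31}|}{2A}\simeq 11~\text{GeV},$$
more than an order of magnitude above the energies relevant for the ESS$\nu$SB flux.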
The bi-probability plots~\cite{Minakata:2001qm} shown in Fig.~\ref{Fig:biP-540} help to illustrate the degeneracy problem at the ESS$\nu$SB experiment.
Here all oscillation parameters other than $\delta$, the octant of $\theta_{23}$, and
the sign of $\Delta m_{31}^{2}$ are fixed at the current best fit
values~\cite{Esteban:2018azc}, and the matter density along the neutrino
baseline is assumed to be constant with an average density of 3.0 g/cm$^{3}$.
The baseline length $L$ and the neutrino energies $E$ are set to $L=540$ km (ESS-Garpenberg) and $E=\{280, 380, 480\}$ MeV.
The ellipses show the variation of the appearance probabilities for the
neutrino and antineutrino channels from changes in $\delta$.
The four ellipses in each plot correspond to the different choices of the octant of $\theta_{23}$ and the mass ordering.
When the ellipses overlap sharing the same region in the $P(\nu_{\mu} \rightarrow \nu_{e})$-$P(\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e})$ plane, the same oscillation probabilities can be obtained by changing $\delta$, the octant of $\theta_{23}$ and/or the mass ordering, implying the existence of degenerate solutions.
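As a quick orientation for these energy choices, note that the $n$-th oscillation maximum, defined by $|\Delta m_{31}^{2}|L/(4E)=(2n-1)\pi/2$, corresponds to
$$E_{n}\simeq \frac{2\times 1.267\,|\Delta m_{31}^{2}|[\text{eV}^2]\,L[\text{km}]}{(2n-1)\pi}~\text{GeV};$$
taking $|\Delta m_{31}^{2}|\simeq 2.5\times 10^{-3}~\text{eV}^2$ and $L=540$ km, this gives $E_{1}\simeq 1.1$ GeV and $E_{2}\simeq 0.36$ GeV.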
Let us first focus on the middle plot with $E=380$ MeV where the
oscillation probabilities are close to the second maximum, $|\Delta m_{31}^{2}|L/(4E) \sim 3\pi/2$.
The centres of the ellipses are located on the CP conserving line $P(\nu_{\mu} \rightarrow \nu_{e}) = P(\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e})$, which reflects the fact that the matter effect, which could induce an explicit difference between the neutrino and antineutrino oscillation probabilities unrelated to the intrinsic CP violation from $\delta$, is irrelevant for this energy and baseline.
The major axes of the ellipses extend widely along the diagonal line orthogonal to the CP conserving line. This means that the CP violating term proportional to $\sin\delta$ in Eq.~(\ref{Eq:Probability}) is very relevant in the oscillation probability for this energy and baseline, leading to the improved CP sensitivity at the second oscillation peak.
The ``fake'' CP violation effect due to the matter effect separates the two ellipses with opposite mass ordering at the first oscillation maximum, on which T2HK focuses, causing the $\delta$-sign$(\Delta m_{31}^{2})$ degeneracy in the CP violation search, cf. the rightmost plot in Fig.~\ref{Fig:biP-360}.
Conversely, the CP violation search at the second oscillation maximum is not noticeably affected by the matter effect~\cite{Bernabeu:2018use,Ghosh:2019sfi}.
Changing the value of $\theta_{23}$, the ellipses almost keep the same shape and move in parallel along the CP conserving line, which causes the $\delta$-$\theta_{23}$ degeneracy~\cite{Minakata:2013hgk,Coloma:2014kca}.
The vertices of the ellipses are located at $\delta=\{\pi/2, -\pi/2\}$, where the oscillation probabilities do not change much with a change of $\delta$. As a consequence, the precision in the determination of $\delta$ becomes worse close to the oscillation maxima~\cite{Coloma:2012wq}.
In other words, since the two points with $\delta$ and $\pi-\delta$ on an ellipse are close to each other around $\delta=\{\pi/2,-\pi/2\}$, it is hard to separate them~\cite{Coloma:2012wq}.
Although at the probability level from Fig.~\ref{Fig:biP-540} the expectation would be that this quasi-degeneracy effect occurs similarly at $\delta=\pi/2$ and $\delta=-\pi/2$, the numerical simulations we will report in Section~\ref{sec:results} show that the ESS$\nu$SB suffers this effect more severely at $\delta=-\pi/2$ than at $\delta=\pi/2$. This is due to the significant difference in event rates between these two points. Indeed, for $\delta=-\pi/2$, the oscillation probability for neutrinos is enhanced while the antineutrino one is suppressed. Since both the flux and the cross section are also smaller for antineutrinos, this strongly penalizes the measurement at $\delta = -\pi/2$: the antineutrino sample is essentially lost, given that the event rate at the second oscillation peak is already necessarily small.
On the other hand, at $\delta=\pi/2$, the oscillation probability for neutrinos is suppressed, but the larger cross section and flux compensate for it and prevents such a big loss of sensitivity.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Biprob/ESS540-biprob-crop.pdf}
\caption{Bi-probability plots for the ESS-Garpenberg setup $L=540$ km. Three plots for three different neutrino energies: $E=\{280, 380, 480\}$ MeV from left to right.
%
The four ellipses in each plot for the different choices of ($s_{23}^{2} \equiv \sin^{2}\theta_{23}$, sign[$\Delta m_{31}^{2}$]):
blue solid for ($0.45,+$), orange solid for ($0.45,-$), blue dashed for ($0.55,+$), and orange dashed for ($0.55,-$).
The energies $E=380$ MeV and $E=480$ MeV correspond to the vicinity of the second oscillation maximum and the first oscillation minimum.}
\label{Fig:biP-540}
\end{figure}
In the energy region that the ESS$\nu$SB focuses on, the oscillation phase changes rapidly. As a consequence, the shape and location of the ellipses change very significantly even within the same energy bin.
In Fig.~\ref{Fig:biP-540},
we also show the bi-probability plots with $E=$280 and 480 MeV where the oscillation probabilities are approaching the minima, which are also well-covered by the ESS$\nu$SB flux.
The ellipses are not distributed symmetrically to the CP conserving line,
which means that, contrary to the second peak, matter effects do have some impact on the oscillation probabilities.
However, this impact is still subleading, given the rather low energy, and does not shift the energies where the extrema are located, cf. Fig.~\ref{Fig:probs}.
As a result, the two ellipses for the different mass orderings are not separated in the entire energy region.
The drastic shape change of the ellipses when varying the energy is largely due to the ratio of the $\sin\delta$ and the $\cos\delta$ terms in the oscillation probability, see Eq.~(\ref{Eq:Probability}).
The $\sin\delta$ term is most significant close to the oscillation peak with $|\Delta m_{31}^{2}| L/(4E) \simeq 3 \pi/2$ for $E \simeq 380$ MeV.
As the probabilities depart from the maximum, the major axes of the ellipses start following along the direction of the CP conserving line, which means that the $\cos\delta$ term increases in importance as we approach the minima with $|\Delta m_{31}^{2}| L/(4E) \simeq \pi$ (right panel of Fig.~\ref{Fig:biP-540}) or $|\Delta m_{31}^{2}| L/(4E) \simeq 2 \pi$ (left panel).
In the left and the right plots, the ellipses with different mass orderings intersect each other at points with different values of $\delta$ at different energies.
Therefore, in principle, with precise enough measurements at various energies, one could determine the value of $\delta$ and the sign of $\Delta m_{31}^{2}$ separately. However, the oscillations are too fast to be resolved with the $\sim 100$~MeV resolution achievable at these energies with a water Cerenkov detector, and the event rate at the second maximum is not large enough to allow a very fine binning.
Thus, it is not possible to track the rapid oscillations in Fig.~\ref{Fig:probs}, although some mild sensitivity to the mass ordering can be achieved.
A large overlap between the two ellipses with different mass orderings and different octants at the oscillation maximum (middle panel in Fig.~\ref{Fig:biP-540}), where most of the statistics is concentrated, suggests that the mass ordering sensitivity at the beam experiment is affected by the octant degeneracy.
The ellipses for different octants barely separate in the entire energy region, which implies a rather poor sensitivity to $\theta_{23}$ in the appearance channel, leading to octant degeneracies that can spoil both the determination of $\delta$ and of the mass ordering at the ESS$\nu$SB. Conversely, for experiments focusing on the first maximum the two ellipses for different octants are more separated~\cite{Ghosh:2019sfi}, cf. the right panel in Fig.~\ref{Fig:biP-360}.
Therefore, we will explore the impact of the addition of the atmospheric neutrino data collected at the far detector of the ESS$\nu$SB to the beam data since atmospheric neutrinos can provide both sensitivity to the $\theta_{23}$ octant and the mass ordering helping to lift parametric degeneracies~\cite{Huber:2005ep,Campagne:2006yx}.
The mass ordering sensitivity from an observation of atmospheric neutrinos comes from the oscillation signals driven by $\Delta m_{31}^{2}$ and the matter effect (first term in Eq.~(\ref{Eq:Probability})) and therefore, it does not depend on the value of $\delta$.
On the other hand, the sensitivity is better for $\theta_{23}$ in the second octant than the first octant, since the term is proportional to $\sin^{2} \theta_{23}$~\cite{Akhmedov:2012ah}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Biprob/ESS360-biprob-crop.pdf}
\caption{Bi-probability plots for $L=360$ km (ESS-Zinkgruvan). In this
energy range $E=250-600$ MeV, the oscillation probabilities
experience the second maximum, the first minimum, and the first
maximum.}
\label{Fig:biP-360}
\end{figure}
If the shorter baseline $L=360$ km (ESS-Zinkgruvan) is instead considered, the neutrino flux at the high energy tail up to $E\sim600$ MeV covers the first oscillation maximum.
This situation corresponds to the bi-probability ellipses presented in the right panel of Fig.~\ref{Fig:biP-360}, which show the same shape and position characteristic of other experiments located at the first oscillation maximum such as T2HK.
The matter effect is not significant enough to completely separate the two mass orderings.
In the relevant energy range (200-600 MeV), the oscillation probabilities go from the first maximum (right panel) to the first minimum (middle panels) and to the second maximum (left panel).
The leftmost panel with $E=250$ MeV, where the second oscillation peak would be located, looks very similar to that with $E=380$ MeV in the case of $L=540$ km.
The ellipses for the different mass orderings are separated more clearly in the case of $L=360$ km than $L=540$ km in a large energy region, which leads to a slightly better sensitivity to the mass ordering even though the baseline is shorter.
From the information at the first oscillation maximum, the ESS$\nu$SB with $L=360$ km also has better sensitivity to $\theta_{23}$ than the $L=540$ km option, so that it is expected that the longer baseline option will benefit more from the addition of the atmospheric neutrino data, which helps to determine $\theta_{23}$ and its octant.
\section{Simulation and experimental details}
\label{sec:setup}
The simulation of the ESS$\nu$SB data has been performed with the GLoBES software~\cite{Huber:2004ka, Huber:2007ji}. We have assumed that the neutrino beam will shine on a near and a far detector to reduce the systematic uncertainties~\cite{Baussan:2013zcy}. The far detector is a 1~Mt MEMPHYS-like water Cerenkov detector~\cite{Agostino:2012fd}, while the near detector has been assumed to be identical to the far detector in terms of efficiencies and background rejection capabilities with a fiducial mass of 0.1 kt. The response of the detectors has been implemented through migration matrices, both for the signal efficiency and the background rejection from Ref.~\cite{Agostino:2012fd}.
A beam power of 5~MW with 2.5~GeV protons and an exposure of $1.7\times 10^{7}$ operating seconds per year has been assumed~\cite{Baussan:2013zcy}. The fluxes have been simulated explicitly at 1~km for the near detector~\cite{Blennow:2014fqa}, accounting for possible geometrical effects since the source cannot be considered point-like, as well as for 100~km (and consequently rescaled) for the longer baselines considered for the far detector~\cite{Baussan:2013zcy}. The event rate peaks around $\mathcal{O}(100)$ MeV energies (see Fig.~\ref{Fig:probs}), so the dominant contribution to the cross section will be in the quasi-elastic regime (QE). For the cross section we use the results from the Genie~\cite{Andreopoulos:2015wxa} tune G18$\_$10a$\_$00$\_$000.
We have assumed a total running time of 10 years. Nonetheless, we will also study the dependence of the physics reach on the relative running time spent in positive and negative focusing in order to optimize it for the measurement of CP violation. Likewise, although the preferred location of the far detector for the ESS$\nu$SB is the Garpenberg mine at 540~km~\cite{Baussan:2013zcy}, different baselines, with emphasis in the alternative Zinkgruvan option at 360~km, will be studied to address the optimal choice. Finally, we will also study how the CP discovery potential depends on the total exposure.
Throughout all the simulations we adopt the same treatment of the systematic errors from Table~\ref{Tab:Systematics} as in Ref.~\cite{Coloma:2012ji}. Unless otherwise specified, we will assume the ``Optimistic'' systematics from the first ``Opt.'' column in Table~\ref{Tab:Systematics} although we will also show how the results are affected when the more conservative ones in the second column ``Cons.'' are considered instead.
All systematics have been introduced as nuisance parameters and the results presented have been obtained minimizing the $\chi^2$ over all of them. The systematic uncertainties associated to fluxes and cross sections have been assumed to be fully correlated between near and far detector and uncorrelated between neutrino and antineutrino components and different flavours. The uncertainties on the fiducial volumes of the near and far detectors were not assumed to be correlated. Additionally, to account for the uncertainty in the cross section between the near and far detector, arising from the different flavour composition of the beam (mainly $\nu_{\mu}$ in the near site and $\nu_e$ for the signal in the far detector), a completely uncorrelated systematic is included for their ratio (last row of Table~\ref{Tab:Systematics}). Therefore, the $\chi^2$ will be given by
\begin{equation}
\chi^2=\min_{n_{s_i}}\left(\hat{\chi}^2_{FD}[n_{s_C}]+\hat{\chi}^2_{ND}[n_{s_C},n_{s_U}] + \frac{n_{s_C}^2}{\sigma_{n_{s_C}}^2}+\frac{n_{s_U}^2}{\sigma_{n_{s_U}}^2}\right),
\end{equation}
where $\hat{\chi}^2_{FD}$ ($\hat{\chi}^2_{ND}$) corresponds to the far (near) detector and $n_{s_C}$ ($n_{s_U}$) are the correlated (uncorrelated) systematic uncertainties.
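As an illustration of this pull-method treatment, the following is a schematic sketch of ours (not the actual GLoBES implementation: the energy binning, the full list of nuisances in Table~\ref{Tab:Systematics} and the oscillation-parameter dependence are omitted, and all function and variable names are ours).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def chi2_stat(n_obs, n_exp):
    """Poissonian chi^2 between observed and expected event vectors."""
    n_exp = np.clip(n_exp, 1e-12, None)
    return 2.0*np.sum(n_exp - n_obs
                      + n_obs*np.log(np.clip(n_obs, 1e-12, None)/n_exp))

def chi2_pull(obs_fd, exp_fd, obs_nd, exp_nd, sig_c=0.05, sig_u=0.035):
    """Minimise over a FD/ND-correlated flux normalisation n_c and an
    FD-only uncorrelated one n_u (e.g. the nu_e/nu_mu ratio)."""
    def chi2(pulls):
        nc, nu = pulls
        return (chi2_stat(obs_fd, (1.0+nc)*(1.0+nu)*exp_fd)
                + chi2_stat(obs_nd, (1.0+nc)*exp_nd)
                + (nc/sig_c)**2 + (nu/sig_u)**2)
    return minimize(chi2, x0=[0.0, 0.0], method="Nelder-Mead").fun
\end{verbatim}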
We have added to the resulting $\chi^2$ a gaussian prior with the central values and $1\sigma$ errors from Ref.~\cite{Esteban:2018azc} for ``solar'' and ``reactor'' parameters. For the ``atmospheric'' parameters we set a prior on $\sin^2{2 \theta_{23}}$ and $|\Delta m_{31}^2|$ given that the octant for $\theta_{23}$ and the mass ordering are still unknown. Since the determination of these two parameters comes primarily from atmospherics, when adding this sample to the beam data no prior has been added on $\theta_{23}$ and $\Delta m_{31}^2$.
\begin{table}
\centering
\begin{tabular} {|| c c c ||}
\hline
Systematics & Opt. & Cons. \\
\hline
\hline
Fiducial volume ND & 0.2\% & 0.5\% \\
Fiducial volume FD & 1\% & 2.5\% \\
Flux error $\nu$ & 5\% & 7.5\% \\
Flux error $\bar{\nu}$ & 10\% & 15\% \\
Neutral current background & 5\% & 7.5\% \\
Cross section $\times$ eff. QE & 10\% & 15\% \\
Ratio $\nu_e/\nu_{\mu}$ QE & 3.5\% & 11\% \\
\hline
\end{tabular}
\caption{Systematic uncertainties for a super beam as described in Ref.~\cite{Coloma:2012ji} for two different scenarios, the ``Optimistic'' one and the ``Conservative'' scenario where systematics are larger.}
\label{Tab:Systematics}
\end{table}
The simulation of the atmospheric neutrino sample in MEMPHYS is the one used in the analysis from Ref.~\cite{Campagne:2006yx}, where the neutrino fluxes at Gran Sasso from the Honda calculations~\cite{Honda:2004yz} were used. This is a conservative estimate, as fluxes become larger at higher geomagnetic latitudes such as Garpenberg or Zinkgruvan. In the simulation the events are separated between fully and partially contained events in the detector and between stopping and through-going muon events. The neutral current contamination in each bin was included assuming the same ratio as Super-Kamiokande between neutral-current and unoscillated charged-current events~\cite{Ashie:2005ik}. For further details on the atmospheric sample see~\cite{Campagne:2006yx}.
\section{Results}
\label{sec:results}
In Fig.~\ref{Fig:CP_atmvsbeam} we show the impact on the CP discovery potential of the ESS$\nu$SB before (dashed lines) and after (solid lines) the inclusion of the atmospheric sample for the Zinkgruvan (360~km) and Garpenberg (540~km) options in the left and right panels, respectively. The plots represent the $\sqrt{\Delta \chi^2}$ with which CP conserving values of $\delta = 0$ or $\pi$ can be disfavoured as a function of the true value of $\delta$. We take the minimum of $\Delta \chi^2$ between $\delta=0$ and $\pi$. The $\sqrt{\Delta \chi^2}$ can be interpreted as the significance for exclusion of CP-conserving values (and hence evidence for CP violation) as long as the assumptions behind Wilks' theorem hold~\cite{Wilks:1938dza}. Deviations from these assumptions can be sizable for presently running experiments, but are expected to be smaller for next generation facilities~\cite{Blennow:2014sja}.
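Schematically, the test statistic can be written as follows (a sketch of ours; \texttt{chi2\_min} stands for a hypothetical function returning the $\chi^2$ already minimised over all other oscillation and nuisance parameters, including the mass ordering and octant, for given true and test values of $\delta$).
\begin{verbatim}
import numpy as np

def cpv_significance(chi2_min, delta_true):
    """sqrt(Delta chi^2) with which CP conservation is disfavoured:
    take the minimum over the two CP-conserving test values of delta."""
    return np.sqrt(min(chi2_min(delta_true, 0.0),
                       chi2_min(delta_true, np.pi)))
\end{verbatim}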
\begin{figure}
\centering
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_Sigma_AtmvsBeam_NO_360.pdf}
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_Sigma_AtmvsBeam_NO_540.pdf}
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_Sigma_AtmvsBeam_IO_360.pdf}
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_Sigma_AtmvsBeam_IO_540.pdf}
\caption{Significance with which CP conserving values of $\delta$ can be excluded for the Zinkgruvan 360~km (left panels) and Garpenberg 540~km (right panels) options. The upper (lower) plots are for normal (inverted) mass ordering while the red (blue) curves correspond to $\theta_{23}$ in the first (second) octant. The dashed lines correspond to the beam data only, while the continuous lines correspond to the results studying events from the beam and from atmospheric neutrinos. The running time splitting has been assumed to be $t_{\nu}$=$t_{\bar{\nu}}=5$ years.}
\label{Fig:CP_atmvsbeam}
\end{figure}
Even though the sensitivity of the atmospheric neutrino dataset to $\delta$ is almost negligible, the improvement of the ESS$\nu$SB physics reach upon its inclusion is quite remarkable. The improvement is generally larger for the longer 540~km baseline than for the Zinkgruvan 360~km option. This is in line with the expectations discussed in Section~\ref{sec:theory} of the atmospheric sample being more complementary to the beam information at the longer baseline. Indeed, at the second oscillation maximum the $\nu_\mu$ disappearance oscillation is not sampled as efficiently as at the first peak and this deteriorates the determination of the atmospheric oscillation parameters $\theta_{23}$ and $\Delta m^2_{31}$, which play an important role in the measurement of $\delta$. Conversely, the 360~km baseline has higher statistics and some events also cover the first oscillation maximum such that the atmospheric oscillation information is less complementary and the gain upon its inclusion is less noticeable. From these results we can conclude that the ESS$\nu$SB setup combined with the atmospheric neutrino sample would be able to rule out CP-conserving values of $\delta$ for $\sim 60 \%$ ($\sim 55 \%$) of the possible values of $\delta$ at the $5 \sigma$ level regardless of the octant and the mass ordering when observing at the 540~km (360~km) baseline.
Figure~\ref{Fig:CP_atmvsbeam} also shows that the gain in CP discovery potential is much more pronounced in some particular regions of the parameter space, especially for $\delta < 0$ and $\theta_{23}$ in the first octant or $\delta > 0$ and the second octant. In these examples the dashed curves for beam only often show a kink that reduces the slope and the values of $\delta$ for which CP-violation could be discovered with high significance. Conversely, the corresponding solid curves with atmospheric data either do not display the kink or develop it at higher significance, so that the resulting CP-discovery potential is much larger. These kinks occur due to the presence of an unresolved octant degeneracy at a CP-conserving value of $\delta$ that prevents drawing conclusions regarding CP violation. When atmospheric data is added, the sensitivity to the octant improves and these degeneracies are either lifted or only show up at much higher significance.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_th23_FO_dCPm40_chi2_25_2.pdf}
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_th23_SO_dCP150_chi2_25_MC.pdf}
\caption{Allowed regions at $\Delta \chi^2 = 25$ for different assumed values of $\sin^2\theta_{23}$ and $\delta$ represented by the star for a 540~km baseline (Garpenberg location). The red curves correspond to the atmospheric dataset alone, the blue to the beam-only information and the black curves to the combination of both. Dotted regions are allowed with the wrong mass ordering. The running time splitting has been assumed to be $t_{\nu}$=$t_{\bar{\nu}}=5$ years.}
\label{Fig:CP_sens_potato}
\end{figure}
This situation is illustrated in Fig.~\ref{Fig:CP_sens_potato}, where the allowed regions at the $\Delta \chi^2 = 25$ level are shown in the $\delta$-$\sin^2 \theta_{23}$ plane. The left (right) panels assume the true values $\delta=-40^\circ$ ($\delta=150^\circ$), $\sin^2 \theta_{23}=0.418$ ($\sin^2 \theta_{23}=0.582$) and normal ordering. As can be seen, when only the beam information is taken into account (blue curves), an octant degeneracy that spreads the allowed region towards CP conserving values appears. Conversely, the atmospheric data on their own (red curves) have no capability to determine $\delta$ at all, but can instead rule out the wrong octant of $\theta_{23}$. Thus, the combination of the two data sets (black curves) very significantly improves the CP discovery potential of the facility in these areas of parameter space. The dotted lines correspond to ``sign'' degeneracies with the opposite mass ordering to the one chosen as true value. In the right panel this degeneracy is also solved with atmospheric data, while for the values of $\delta$ and $\theta_{23}$ chosen in the left panel a small sign degeneracy remains between the 4 and $5 \sigma$ level. Notice that an ``intrinsic degeneracy''~\cite{BurguetCastell:2001ez} at $\delta \simeq \pi-\delta_{true}$ also shows up at the $5 \sigma$ level when only the beam information is taken into account. As for the ``sign'' degeneracy, the atmospheric neutrino data is enough to lift it for the parameters chosen in the right panel, while a small remnant is present in the left. In any case, both the ``intrinsic'' and the ``sign'' degeneracies appear at $\delta \simeq \pi-\delta_{true}$, given the comparatively small matter effects for the setup, and their allowed regions are smaller or comparable to that of the true solution, so that only the ``octant'' degeneracy plays a significant role in reducing the CP-discovery potential when atmospheric data is not exploited to lift it.
\begin{figure}
\centering
\includegraphics[width=10.5cm]{Octant/Oct_NO_540.pdf}
\caption{Significance with which the wrong octant would be disfavoured as a function of the actual value of $\theta_{23}$ with beam-only information (blue lines) and including also the atmospheric dataset (red lines) for the baseline to Garpenberg ($L=540$~km) and normal mass ordering. The running time splitting has been assumed to be $t_{\nu}$=$t_{\bar{\nu}}=5$ years. The results for the Zinkgruvan site ($L=360$~km) and for inverted ordering are very similar. The vertical line represents the present best fit for $\theta_{23}$ from~\cite{Esteban:2018azc}.}
\label{Fig:Oct_sens}
\end{figure}
In Fig.~\ref{Fig:Oct_sens} we show the significance with which the ESS$\nu$SB would be able to disfavour the wrong octant of $\theta_{23}$ as a function of the true value of $\theta_{23}$ (blue lines). As already anticipated in Section~\ref{sec:theory}, this capability improves dramatically upon the inclusion of the atmospheric neutrino sample (red lines), and thus the potentially dangerous ``octant'' degeneracies are lifted. The curves are almost identical for both mass orderings and for the Zinkgruvan and Garpenberg baselines.
The significance with which the ESS$\nu$SB would be able to disfavour the wrong mass ordering is shown in Fig.~\ref{Fig:Hierarchy_sens}, where dashed (solid) lines correspond to beam only data (beam and atmospheric data). The left (right) panels correspond to the 360~km (540~km) baseline and upper (lower) panels are for the scenario in which the true ordering is normal (inverted). As can be seen, the ESS$\nu$SB beam data alone can disfavour the wrong mass ordering at around the $3 \sigma$ ($2 \sigma$) level for the 360~km (540~km) baseline for any value of $\delta$ and any octant. When the atmospheric data is added, the sensitivity to the wrong ordering is boosted to the 4-5$\sigma$ level, or even higher for the particular case of normal ordering and second octant of $\theta_{23}$ ($\sin^2{\theta_{23}}=0.582$ from Ref.~\cite{Esteban:2018azc}), for which the signal in atmospheric neutrinos is enhanced, as expected from Eq.~(\ref{Eq:Probability}). For normal ordering (upper panels) the inclusion of the atmospheric neutrino data also changes the shape of the curves; in particular, a larger increase in the significance is seen around $\delta=0$ than for other values. This is due to the solution of the octant degeneracy since, as can be seen in the middle panel of Fig.~\ref{Fig:biP-540} or the first panel of Fig.~\ref{Fig:biP-360}, for $\delta=0$ and normal ordering the ellipse with opposite octant and ordering has a significant overlap.
\begin{figure}[h]
\centering
\includegraphics[width=7.5cm]{Degeneracies/Hierarchy/Hier_AtmvsBeam_360_5+5_NO.pdf}
\includegraphics[width=7.5cm]{Degeneracies/Hierarchy/Hier_AtmvsBeam_540_5+5_NO.pdf}
\includegraphics[width=7.5cm]{Degeneracies/Hierarchy/Hier_AtmvsBeam_360_5+5_IO.pdf}
\includegraphics[width=7.5cm]{Degeneracies/Hierarchy/Hier_AtmvsBeam_540_5+5_IO.pdf}
\caption{Significance with which the wrong mass ordering would be disfavoured for $\theta_{23}$ in the first octant (red lines) or second octant (blue lines) and the true mass ordering being normal (upper plots) or inverted (lower plots). Dashed lines correspond to the beam only data while solid lines correspond to the addition of the atmospheric sample. The left panels correspond to the baseline to Zinkgruvan while the right ones to the location of the Garpenberg mine. The running time has been assumed to be $t_{\nu}$=$t_{\bar{\nu}}=5$ years.}
\label{Fig:Hierarchy_sens}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=7.5cm]{Precision/Prec_AtmvsBeam_5+5_NO-SO_360_3.pdf}
\includegraphics[width=7.5cm]{Precision/Prec_AtmvsBeam_5+5_NO-SO_540_3.pdf}
\includegraphics[width=7.5cm]{Precision/Prec_Best_360_SO-NO3.pdf}
\includegraphics[width=7.5cm]{Precision/Prec_Best_540_SO-NO3.pdf}
\caption{Precision (spread of the $1 \sigma$ allowed region) on the determination of $\delta$ for the baseline to Zinkgruvan $L=360$~km (left panels) and Garpenberg $L=540$~km (right panels) for the current best-fit parameters~\cite{Esteban:2018azc}. In the upper panels we show the comparison between the precision obtained with (solid lines) and without (dashed lines) the atmospheric sample for a running time of 5 years in each focusing. In the lower plots we show the dependence of the precision on the relative running time in each mode, where $t_{\nu}$ ($t_{\bar{\nu}}$) corresponds to the time the experiment would run in neutrino (antineutrino) mode, combining atmospheric and beam datasets.}
\label{Fig:Precision_RunTime}
\end{figure}
In Fig.~\ref{Fig:Precision_RunTime} we analyze the precision with which the ESS$\nu$SB experiment would be able to measure the CP-violating phase $\delta$. In this figure we assume the currently preferred option of normal ordering and the second octant of $\theta_{23}$. In the upper panels we show the improvement in the $1 \sigma$ allowed region with which $\delta$ would be constrained by adding the atmospheric neutrino sample (solid lines) to the beam information alone (dashed lines). As can be seen, both for the 360~km (left panel) and the 540~km baseline (right panel), the precision with which $\delta$ could be determined depends strongly on its true value. For CP-violating values of $\delta$ around $\pm 90^\circ$, the $1 \sigma$ uncertainty in the measurement peaks, leading to the poorest precision, while for $\delta$ around $0$ or $180^\circ$ the most precise measurements would be achieved.
As discussed in Ref.~\cite{Coloma:2012wq}, this structure follows from the dependence of the oscillation probability on $\delta$ shown in Eq.~(\ref{Eq:Probability}). At an oscillation peak, $|\Delta m_{31}^{2}| L/(4E) = (2n-1)\pi/2$, and thus mainly $\sin \delta$ is probed. Since the derivative of $\sin \delta$ vanishes at $\delta = \pm 90^\circ$, the precision with which $\delta$ can be determined is worst close to these values. In order to constrain $\delta$ around $\delta = \pm 90^\circ$, measurements away from the oscillation maxima that determine $\cos \delta$ would instead be necessary. These off-peak measurements are easier at the Zinkgruvan 360~km baseline, since the statistics are higher and the beam is not exactly centered at the maximum, while they are very challenging at Garpenberg, since very few events away from the oscillation peak are expected. This explains why the reconstructed sensitivities around $\delta = \pm 90^\circ$ are much worse in the right panel compared to the left. Moreover, the double-peak structure that can be seen for $\delta = - 90^\circ$ at 540~km corresponds to the ``intrinsic'' degeneracies depicted in Fig.~\ref{Fig:CP_sens_potato}, which merge into one bigger allowed region. Since, as seen in Fig.~\ref{Fig:CP_sens_potato}, the addition of atmospheric data can lift these degeneracies, in the solid lines, where this information is included, the difference between the two baselines is significantly reduced.
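Schematically, parametrizing the $\delta$-dependent on-peak event rate as $N(\delta) \simeq N_0(1+\kappa\sin\delta)$, with $\kappa$ the relative amplitude of the interference term (a simple error-propagation estimate, not the actual ESS$\nu$SB rates), one finds
\begin{equation}
\sigma_\delta \simeq \frac{\sigma_N}{\left|\partial N/\partial \delta\right|} = \frac{\sigma_N}{N_0\,\kappa\,\left|\cos\delta\right|}\,,
\end{equation}
which diverges as $\delta \to \pm 90^\circ$ and is the reason why off-peak information on $\cos\delta$ is needed there.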
Conversely, for $\delta = 0$ or $180^\circ$ the on-peak measurement is what allows $\delta$ to be determined and, since the peak is better covered at the longer 540~km baseline, the precision is slightly better there. This fact also translates into the better CP-discovery potential observed for the 540~km baseline in Fig.~\ref{Fig:CP_atmvsbeam}: since the error in $\delta$ is smaller around CP-conserving values, the 540~km option could get closer to these values and still allow the discovery of CP violation to be claimed with high significance.
In the lower panels of Fig.~\ref{Fig:Precision_RunTime}, the impact of changing the relative running times in positive focusing (neutrino mode) and negative focusing (antineutrino mode) is shown. Since off-peak measurements are required for $\delta = \pm 90^\circ$, statistics are crucial and easier to accumulate in neutrino mode, since fluxes and cross sections are larger, and thus the best precision would be obtained by devoting longer periods of data taking to positive focusing. Conversely, around $\delta=0$ or $180^\circ$ the complementarity between the neutrino and antineutrino samples pays off and more even splits of the running time provide better sensitivity.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{Precision/Prec_Best_360.pdf}
\includegraphics[width=7.5cm]{Precision/Prec_Best_540.pdf}
\caption{Precision on the measurement of $\delta$ for a total running time of 10 years when the relative running time in neutrino and antineutrino modes is optimized for each value of $\delta$. This corresponds to running similar times in neutrino and antineutrino modes around $\delta=0, 180^\circ$ and maximizing the neutrino runs around $\delta = \pm 90^\circ$.}
\label{Fig:Precision_Best}
\end{figure}
Since the ESS$\nu$SB would be a next-generation facility, its measurement strategy can profit from previous hints by preceding oscillation experiments and adapt the splitting between neutrino and antineutrino modes depending on the value of $\delta$ the data point to. If such a strategy is followed and the best splitting between neutrino and antineutrino modes is adopted for each value of $\delta$, the precision presented in Fig.~\ref{Fig:Precision_Best} would be obtained. If the mass ordering is confirmed to be normal and $\theta_{23}$ lies in the second octant, as present data prefer, the precision with which the ESS$\nu$SB facility would determine $\delta$ ranges from $16^\circ$ ($13^\circ$) for $\delta \sim -90^\circ$ to $6^\circ$ ($7^\circ$) for $\delta \sim 0$ or $\delta \sim 180^\circ$ for 540~km (360~km).
From Figs.~\ref{Fig:CP_atmvsbeam} and~\ref{Fig:Precision_Best} one can conclude that, if the experiments preceding the ESS$\nu$SB do not find any evidence for CP violation, the best option would be the 540~km baseline and a more or less even split of the neutrino and antineutrino running times. Indeed, this choice would minimize the errors with which $\delta$ would be determined around CP-conserving values and increase the CP-discovery potential. On the other hand, if the previous set of experiments determines $\delta$ to be close to maximally CP-violating, then the best scenario for the ESS$\nu$SB would be the shorter 360~km baseline and an increased neutrino run time, to determine $\delta$ with the best precision possible.
\begin{figure}
\centering
\includegraphics[width=10.5cm]{CP_Optimization/CP_fraction_Systematics.pdf}
\caption{Impact of different sources of systematic errors on the fraction of values of $\delta$ for which a $\Delta \chi^2 > 25$ exclusion of CP conservation would be possible at the Garpenberg mine. The orange circles correspond to the CP fraction with the ``Optimistic'' systematics from Table~\ref{Tab:Systematics}, red squares to assuming that particular uncertainty is 5 times larger, and blue triangles to reducing it by a factor of 5.}
\label{Fig:CP_frac_Sys}
\end{figure}
In Fig.~\ref{Fig:CP_frac_Sys} we show the impact of individual systematic uncertainties on the fraction of values of $\delta$ for which CP violation could be discovered ($\Delta\chi^2\geq 25$). The sources of uncertainty considered, summarized in Table~\ref{Tab:Systematics}, are the flux uncertainties for the signal ($\delta\phi_S$) and background ($\delta\phi_B$), the cross-section systematic ($\delta\sigma$), the neutral-current background ($\delta NC_B$), and the uncertainty on the ratio of the electron and muon neutrino cross sections ($\delta \sigma_e/\sigma_{\mu}$). The plot shows that the systematic uncertainties that most significantly affect the performance of the ESS$\nu$SB are the ones related to the background components of the beam, namely $\delta\phi_B$ and $\delta NC_B$, for which the determination at the near detector is more challenging, as well as $\delta \sigma_e/\sigma_{\mu}$, since the only $\nu_e$ present at the near detector that would allow this parameter to be fixed are those from the intrinsic background contamination of the beam. Among these, the strongest impact on the sensitivity is due to the cross-section ratio since not only is it difficult to constrain, but it is also the most relevant to the signal at the far detector, which consists of $\nu_e$. Indeed, reducing or increasing this particular source of systematic error has the biggest impact on the physics reach. The impact is in any event limited, since the main bottleneck to the performance when observing at the second oscillation peak is statistics. In particular, a reduction of this systematic by a factor of 5 improves the CP fraction by $\sim 2\%$ (no impact for $\bar{\nu}$) while the same factor in the opposite direction worsens the sensitivity by $\sim 9\%$ ($\sim 4\%$).
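The way such uncertainties enter the analysis can be illustrated with a toy version of the pull-method $\chi^2$ used here, in which each systematic is a nuisance parameter that shifts the predicted rates and is penalized by its prior uncertainty before being minimized over. All rates and uncertainties below are illustrative placeholders rather than the actual ESS$\nu$SB inputs:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy pull-method chi^2: one signal-normalization nuisance "a" with prior
# uncertainty sigma_a, profiled (minimized over) before quoting the result.
sig, bkg = 100.0, 50.0             # expected signal and background events
obs = 130.0                        # "observed" event count
sigma_a = 0.05                     # 5% prior on the signal normalization

def chi2(a):
    pred = (1.0 + a) * sig + bkg   # the systematic shifts the prediction
    poisson = 2.0 * (pred - obs + obs * np.log(obs / pred))
    return poisson + (a / sigma_a)**2   # pull term penalizes large shifts

best = minimize_scalar(chi2, bounds=(-0.5, 0.5), method="bounded")
print(f"profiled chi^2 = {best.fun:.2f} at a = {best.x:+.3f}")
```

Profiling over the nuisance parameter in this way shows why a poorly constrained systematic degrades the reach: the larger its prior uncertainty, the more freely the prediction can absorb a would-be signal.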
\begin{figure}
\centering
\includegraphics[width=7.5cm]{CP_Optimization/CP_fraction_L/CP_fraction_L.pdf}
\includegraphics[width=7.5cm]{CP_Optimization/CP_fraction_RT/CP_fraction_TotalRT.pdf}
\caption{Fraction of values of $\delta$ for which CP violation could be discovered above $5\sigma$ for different baselines to the far detector (left panel) for the two different sets of systematics from Table~\ref{Tab:Systematics}. In the right panel we show the CP fraction for the Garpenberg ($L=540$~km) and Zinkgruvan ($L=360$~km) mines, assuming the current best fit values for the oscillation parameters and the ``Optimistic'' systematics for increasing total exposure.}
\label{Fig:CP_fraction_Opt}
\end{figure}
The importance of these systematic errors for the physics reach depends crucially on the baseline of the experiment. In the left panel of Fig.~\ref{Fig:CP_fraction_Opt} we show the fraction of all possible values of $\delta$ for which it would be possible to rule out $\delta =0$ or $\delta = 180^\circ$ with a significance of $\Delta \chi^2 = 25$ or higher. The upper blue line is for the more optimistic systematics from Table~\ref{Tab:Systematics} and the lower red one for the more conservative values. As can be seen, the fraction of values of $\delta$ at which a $5 \sigma$ discovery would be possible peaks between 400~km and 700~km in both cases, but the peak is much more pronounced when the more conservative values are assumed for the systematic uncertainties. Indeed, for larger values of the systematics, the shorter baselines are strongly penalized, since the dependence of the oscillation probability on $\delta$ is subleading around the first peak and easily hidden by the systematics. Conversely, if very small systematic errors can be achieved, then the main limiting factor would be statistics and shorter baselines would perform better. Thus, by measuring at the second oscillation maximum, the ESS$\nu$SB setup becomes much more resilient to unaccounted-for sources of systematic error than it would be observing only at the first peak.
In the right panel of Fig.~\ref{Fig:CP_fraction_Opt} we show how the fraction of values of $\delta$ for which CP violation would be discovered at the $5 \sigma$ level by the ESS$\nu$SB beam and atmospheric data increases with the exposure. As expected for an observation at the second oscillation peak, statistics are the main factor controlling the final reach of the experiment. Indeed, for 5 years of data taking the CP fraction is around $46\%$; by 10 years it increases to $62\%$, and it reaches $70\%$ for 20 years of exposure. The slope only flattens significantly after 25 years.
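For concreteness, the ``CP fraction'' quoted throughout is simply the measure of true values of $\delta$ for which the CP-conservation test exceeds the chosen threshold. A minimal sketch of how it would be extracted from an exclusion curve, with a made-up $\Delta\chi^2(\delta)$ standing in for the simulation output:

```python
import numpy as np

# CP fraction: share of true delta values for which CP conservation
# (delta = 0 or 180 deg) is excluded above a threshold, here Delta chi^2 = 25.
delta = np.linspace(-180.0, 180.0, 721)       # true delta, in degrees
# Made-up stand-in for a simulated exclusion curve like Fig. CP_atmvsbeam:
dchi2 = 40.0 * np.sin(np.radians(delta))**2
cp_fraction = np.mean(dchi2 >= 25.0)
print(f"CP fraction above 5 sigma: {cp_fraction:.0%}")
```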
\section{Conclusions}
\label{sec:conclusions}
In this paper we have performed an exhaustive analysis of the physics reach of the ESS$\nu$SB facility, exploring its capability to determine the presently unknown neutrino oscillation parameters, such as the mass ordering and the octant of $\theta_{23}$, with a focus on the discovery of leptonic CP violation and a precision measurement of $\delta$, which are the main declared goals of the experiment. For the first time, we have combined the atmospheric neutrino sample that would also be observed at the facility with the beam information and studied the complementarity between the two data sets. We have studied how the physics reach of the facility could be optimized by exploring different baselines, focusing on the two candidate sites of Zinkgruvan at 360~km and Garpenberg at 540~km. We have also explored how the time split between neutrino and antineutrino modes can be exploited to improve the physics reach.
We conclude that the inclusion of the atmospheric data set can significantly increase the ESS$\nu$SB physics reach. Due to the peculiarities of observing the oscillation probability at the second oscillation maximum, we find this combination to be particularly synergistic. The atmospheric neutrino sample not only significantly increases the sensitivity to the mass ordering, as for other similar facilities~\cite{Huber:2005ep,Campagne:2006yx}, but it is also very effective in improving the constraints on $\Delta m^2_{31}$ and on $\theta_{23}$ and its octant. These measurements are especially challenging for the beam alone when sitting at the second maximum, given the low statistics, particularly in antineutrinos and in the $\nu_\mu$ disappearance channel. However, the determination of $\delta$ can be affected by correlations with $\theta_{23}$~\cite{Coloma:2014kca} and degeneracies with the wrong octant, and thus the atmospheric information is also crucial to indirectly increase the CP discovery potential of the ESS$\nu$SB. We find this complementarity is somewhat more pronounced for the longer 540~km baseline since there the flux is more centered at the second oscillation peak and the statistics are smaller, so it benefits more from the information gained from the atmospheric neutrino data.
Regarding the optimal baseline, we find the choice is rather dependent on the actual value of $\delta$. For $\delta \sim \pm 90^\circ$ a precise measurement needs events away from the oscillation maximum. In this sense the shorter 360~km baseline is better, since the statistics for off-peak events are higher and this leads to a more precise measurement. Conversely, if $\delta$ is close to CP-conserving values and the previous set of measurements has not been able to claim the discovery of CP violation, the longer 540~km baseline would cover a larger part of the parameter space. Indeed, after 10 years of data taking, the fraction of values of $\delta$ for which a $5 \sigma$ discovery would be possible is $56\%$ for Zinkgruvan and $62\%$ for Garpenberg.
As for the splitting of the data-taking time between neutrino and antineutrino modes, the optimal strategy also depends on the value of $\delta$. This fact could be exploited, since previous and present data at the time of the measurement should already show a strong preference for some part of the parameter space. Thus, the running strategy can be adapted to the situation, optimizing the precision with which this measurement can be performed. In particular, we find again that, given the need to go beyond on-peak measurements for $\delta \sim \pm 90^\circ$, statistics are much more relevant and maximizing the time in neutrino mode translates into the best precision for these values. Conversely, close to CP-conserving values of $\delta$, the information from events on-peak is most relevant and the complementarity between neutrino and antineutrino modes pays off, so that a more even split of the running time would provide the best precision.
Finally, we explored the possible bottlenecks for the physics reach of the facility, studying how it is affected by variations of the different systematic errors considered as well as of the total exposure. As expected, the choice of observing the oscillation probability at its second maximum significantly reduces the impact of the systematic errors. We find that around the first oscillation peak the fraction of values of $\delta$ for which a $5 \sigma$ discovery is possible is reduced by more than a factor of 2 when considering the more conservative values of Table~\ref{Tab:Systematics}. On the other hand, at the second peak the reduction is only by a factor of around $1.2$. Among the different sources of systematic uncertainties considered, the most important is the possible difference in the ratio of the electron to muon neutrino cross sections. This uncertainty is difficult to constrain from near-detector information, since the flux is mainly composed of $\nu_\mu$ while the far-detector signal consists of $\nu_e$. Conversely, the observation at the second maximum considerably reduces the number of events, and statistics play a much more relevant role. At the longer 540~km baseline, the fraction of values of $\delta$ allowing for a discovery would go from $47 \%$ to $62 \%$ and $70 \%$ for data-taking periods of 5, 10, and 20 years, respectively.
\section*{Acknowledgements}
We are extremely grateful to Michele Maltoni for providing us with the simulations of the atmospheric neutrino dataset that would be collected at the MEMPHYS detector, used in Ref.~\cite{Campagne:2006yx}. We are also indebted to Budimir Klicek and Marco Roda for suggestions and help with the GENIE tunes most appropriate for the ESS$\nu$SB energy range. We also want to thank WP6 of the ESS$\nu$SB design study, in particular Monojit Ghosh, for comments on our manuscript.
This work is supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements 674896-Elusives, 690575-InvisiblesPlus, and 777419-ESSnuSB, as well as by the COST Action CA15139 EuroNuNet.
MB, EFM, and SR acknowledge support from the ``Spanish Agencia Estatal de Investigaci\'on'' (AEI) and the EU ``Fondo Europeo de Desarrollo Regional'' (FEDER) through the project FPA2016-78645-P; and the Spanish MINECO through the ``Ram\'on y Cajal'' programme and through the Centro de Excelencia Severo Ochoa Program under grant SEV-2016-0597. MB also acknowledges support from the G\"oran Gustafsson foundation.
\bibliographystyle{JHEP}
\section{Introduction}
\label{sec:introduction}
After the discovery of a non-zero $\theta_{13}$~\cite{An:2012eh,Ahn:2012nd,Abe:2012tg,Adamson:2011qu,Abe:2011sj} the emerging picture from the last decades of neutrino oscillation searches consolidates a structure for the PMNS matrix~\cite{Pontecorvo:1957cp,Pontecorvo:1957qd,Maki:1960ut,Maki:1962mu,Pontecorvo:1967fh} describing lepton flavour mixing strikingly different from its CKM counterpart in the quark sector, making the Standard Model flavour puzzle even more intriguing. Far from the hierarchical structure described through the tiny mixing angles of the CKM, large mixing angles characterize the lepton mixing. The ``atmospheric'' mixing angle $\theta_{23}$ is presently compatible with maximal mixing as well as with a large but non-maximal value in either the first or the second octant. Similarly, the ``solar'' mixing angle $\theta_{12}$ is around $33^\circ$ and only $\theta_{13} \sim 8-9^\circ$ is relatively small and its value is still comparable in magnitude to the Cabibbo angle, the largest in the CKM. The large mixing opens the window to the present and next generation of neutrino oscillation experiments to tackle new questions that could provide answers to fundamental open problems.
Present experiments such as T2K~\cite{Abe:2017uxa, Abe:2019vii} and NO$\nu$A~\cite{Acero:2019ksn} have started to provide the first hints on the potentially CP-violating phase $\delta$. The discovery of the violation of the particle-antiparticle symmetry in the lepton sector would be extremely suggestive, given that CP-violation is a necessary ingredient to explain the matter over antimatter excess to which we owe our existence and that the CKM contribution has been shown to be insufficient~\cite{Gavela:1993ts,Gavela:1994dt} for this purpose. Similarly, present neutrino oscillation experiments already show some preference for normal ordering (positive $\Delta m^2_{31}$) with respect to inverted ordering. This parameter is a fundamental input to combine with the searches for the neutrinoless double beta decay process in order to probe the Majorana nature of neutrinos. Finally, present experiments as well as their successors T2HK~\cite{Abe:2015zbg} and DUNE~\cite{Acciarri:2015uup} will also provide even more precise measurements of the oscillation parameters that could hold the key to discriminate among different flavour models addressing the flavour puzzle.
The European Spallation Source (ESS) at Lund provides an opportunity to build a new-generation, long-baseline neutrino oscillation experiment with an unprecedented neutrino luminosity through an upgrade of the ESS Linac~\cite{Baussan:2013zcy}. Its $2.5$~GeV protons would lead to a rather low energy neutrino flux, between 200 and 600~MeV. This energy range is very well suited for a water Cerenkov detector of the MEMPHYS type~\cite{deBellefon:2006vq,Agostino:2012fd}. In Ref.~\cite{Baussan:2013zcy} a greenfield study optimizing the physics reach to leptonic CP-violation was performed for this ESS neutrino Super-Beam facility (ESS$\nu$SB). Interestingly, the outcome of this optimization, as well as follow-up studies~\cite{Agarwalla:2014tpa,Chakraborty:2017ccm,Chakraborty:2019jlv}, was that the best baseline at which to study the neutrino beam from the ESS facility at a MEMPHYS-type detector would be between 400 and 600~km. Two candidate mines that could host the detector were identified: Garpenberg at 540~km and Zinkgruvan at 360~km from the ESS site. This choice makes the ESS$\nu$SB design unique, as the neutrino flux observed by the detector mainly corresponds to the second maximum of the $\nu_\mu \to \nu_e$ oscillation probability, with a marginal contribution of events at the first oscillation peak.
For the value of $\theta_{13} = 8.6^\circ$ currently preferred~\cite{Esteban:2018azc} by Daya Bay~\cite{Adey:2018zwh} and RENO~\cite{Bak:2018ydk}, the ``atmospheric'' term of the $\nu_\mu \to \nu_e$ oscillation probability~\cite{Cervera:2000kp}, which is governed by oscillations driven by the large frequency $\Delta m^2_{31}$ and with an amplitude $\sin^2 2\theta_{13}$, dominates over the sub-leading ``solar'' term driven by $\Delta m^2_{21}$ with amplitude $\sin^2 2\theta_{12}$ at the first oscillation maximum. Thus, the interference between the two, which is the only term dependent on the yet unknown CP-violating phase $\delta$, will also be a sub-leading contribution to the full oscillation probability at the first peak and potentially hidden by systematic uncertainties. Conversely, at the second oscillation maximum the slower ``solar'' oscillation has had more time to develop and thus the CP-violating interference term can give a significant contribution to the oscillation probability, thus increasing the sensitivity to CP violation~\cite{Coloma:2011pg}.
The price to pay in order to observe the oscillation probability at its second maximum is high. Despite this being the optimal choice to maximize the dependence of the oscillation probability on the leptonic CP violating phase, the ratio of the oscillation baseline to the neutrino energy ($L/E$) needs to be a factor 3 larger compared to the first maximum. This implies roughly an order of magnitude less statistics than if the experiment had been designed at the first peak. Indeed, the neutrino flux decreases with $L^{-2}$ from the beam divergence and the neutrino cross section and beam collimation increase with the neutrino energy. Despite the unprecedented neutrino luminosity from the upgraded ESS linac and the megaton-class MEMPHYS detector, only around 100 signal events for each beam polarity would be accumulated after 10 years data taking (2 years in neutrinos and 8 years in antineutrinos) at the 540~km Garpenberg baseline (see Fig.~7 of Ref.~\cite{Baussan:2013zcy}). Conversely, the 360~km Zinkgruvan baseline has a 2.25 times larger neutrino flux. However, the neutrino spectrum for this baseline is rather centered at the first oscillation minimum while the first and second peaks are sampled by the high and low energy tails respectively. Overall this gives similar statistics at the second oscillation maximum when compared to the Garpenberg option, but also some additional statistics at the first peak and in between.
For the ESS$\nu$SB the increased dependence on the CP violating phase of the probability is well worth the loss of precious neutrino events at the second maximum. Indeed, it could provide unprecedented discovery potential to leptonic CP-violation or the most precise measurement of the corresponding phase after discovery, which could be instrumental in tackling the flavour puzzle. Moreover, as pointed out in Ref.~\cite{Coloma:2011pg} and as we will elaborate in later sections, this choice also makes the physics reach much more resilient against unexpected sources of systematic errors, since the signal, while small, has a leading dependence on the unknown parameters. Conversely, statistics will be the bottleneck of the ESS$\nu$SB physics reach and thus longer periods of data taking would greatly increase its capabilities.
On the other hand, other potential oscillation searches, different from the CP violation search, will be negatively impacted by the choice of the second oscillation maximum baseline. In particular the sensitivity to the octant of $\theta_{23}$ is severely reduced by this choice. Indeed, this measurement mainly relies on the ``atmospheric'' term of the oscillation probability, which is leading at the first maximum instead, together with $\theta_{13}$ information from reactor measurements and $\Delta m^2_{31}$ and $\sin^2 2\theta_{23}$ from $\nu_\mu$ disappearance. Similarly the $\nu_\mu$ disappearance data and hence the precise determination of $\Delta m^2_{31}$ and $\sin^2 2\theta_{23}$ are negatively affected by the choice of the second oscillation maximum. The lack of knowledge on the octant of $\theta_{23}$ can lead to ``octant degeneracies''~\cite{Fogli:1996pv} that in turn somewhat limit the CP discovery potential of the ESS$\nu$SB~\cite{Ghosh:2019sfi}. The sensitivity to the mass ordering is also limited at the ESS$\nu$SB given the small matter effects from the low energy and short baseline. However, since these matter effects are small, the resulting ``sign degeneracies''~\cite{Minakata:2001qm} do not compromise the sensitivity to $\delta$ of the facility~\cite{Baussan:2013zcy,Ghosh:2019sfi}.
A very effective and convenient way of increasing both the octant and mass ordering sensitivity of a neutrino Super Beam experiment is to combine the signal from the neutrino beam with the huge atmospheric neutrino sample that can be collected at such a detector~\cite{Huber:2005ep,Campagne:2006yx}. In the case of the ESS$\nu$SB this combination is particularly synergistic. Indeed, the atmospheric neutrino sample can provide not only significantly increased sensitivity to the octant and the mass ordering to solve parametric degeneracies, but also improved precision to $\Delta m^2_{31}$ and $\sin^2 2\theta_{23}$ which is otherwise one of the main drawbacks of the setup.
In this work we will combine the observation of the ESS$\nu$SB flux tuned for the second maximum of the $\nu_e$ appearance probability with the complementary atmospheric neutrino data, more strongly dominated by the first maximum and $\nu_\mu$ disappearance, and characterized by stronger matter effects. We will explore how the physics reach of the facility improves when beam data is considered together with the atmospheric neutrino sample and then review the optimization of the ESS$\nu$SB facility using both data sets.
Finally, we will discuss which sources of systematic errors among the ones considered impact the final sensitivity more significantly.
This paper is organized as follows. In Section~\ref{sec:theory} we discuss the peculiarities of the neutrino oscillation probability and the appearance of parametric degeneracies when observing at the second oscillation maximum. In Section~\ref{sec:setup} we describe the experimental setup considered and the details of the numerical simulations performed. Section~\ref{sec:results} describes the results of the simulations and in Section~\ref{sec:conclusions} we present our conclusions and summarize our work.
\section{Measurements at the second oscillation peak}
\label{sec:theory}
The determination of the oscillation parameters at beam experiments is, in general, hindered by the appearance of degenerate solutions,
cf. e.g., Refs.~\cite{BurguetCastell:2001ez,Barger:2001yr,Minakata:2013hgk,Coloma:2014kca,Ghosh:2015ena}. These degeneracies have been extensively studied for the experimental setups of T2HK~\cite{Coloma:2012ji,C.:2014ika,Ghosh:2014rna,Abe:2014oxa,Ghosh:2017ged,Abe:2018uyc} and DUNE~\cite{Coloma:2012ji,Adams:2013qkq,Barger:2013rha,Ghosh:2013pfa,Agarwalla:2013vyc,Barger:2014dfa,Bora:2014zwa,Acciarri:2015uup,Nath:2015kjg,DeRomeri:2016qwo,Ghosh:2017ged,Abi:2018dnh,deGouvea:2019ozk,Ghoshal:2019pab,Meloni:2018xnk} (and also their combination~\cite{Fukasawa:2016yue,Ballett:2016daj}).
As stated in Section~\ref{sec:introduction}, the $L/E$ range which the ESS$\nu$SB focuses on is different from those of other forthcoming experiments,\footnote{
The MOMENT proposal~\cite{Cao:2014bea,Blennow:2015cmn,Bakhti:2016prn,Tang:2019wsv} with $L=150$ km can access to the oscillation probability with similar $L/E$ to the ESS$\nu$SB.
The T2HKK proposal~\cite{Ishitsuka:2005qi,Hagiwara:2005pe,Hagiwara:2006vn,Kajita:2006bt,Hagiwara:2006nn,Hagiwara:2009bb,Hagiwara:2011kw,Hagiwara:2012mg,Hagiwara:2016qtb,Abe:2016ero,Raut:2017dbh}, in which the first and the second oscillation maxima are measured with two detectors located at different sites, would also cover the similar $L/E$ range to the ESS$\nu$SB.}
Therefore, here we will discuss the peculiarities of ESS$\nu$SB and the differences from other experiments in the determination of the oscillation parameters before presenting our numerical results. The $\nu_e$ appearance oscillation probability in matter is given by~\cite{Cervera:2000kp} (see also~\cite{Freund:1999gy, Akhmedov:2004ny, Minakata:2015gra}):
\begin{equation}
\begin{split}
P(\barparenb{\nu}_{\mu}\rightarrow&\barparenb{\nu}_e) = s_{23}^2\sin^2{2\theta_{13}}\left(\frac{\Delta_{31}}{\tilde{B}_{\mp}}\right)^2\sin^2{\left(\frac{\tilde{B}_{\mp}L}{2}\right)}+c_{23}^2\sin^2{2\theta_{12}}\left(\frac{\Delta_{21}}{A}\right)^2\sin^2{\left(\frac{A L}{2}\right)}\\
& + \tilde{J}\frac{\Delta_{21}}{A}\frac{\Delta_{31}}{\tilde{B}_{\mp}}\sin{\left(\frac{A L}{2}\right)}\sin{\left(\frac{\tilde{B}_{\mp}L}{2}\right)}\left[\cos{\delta} \cos{\left(\frac{\Delta_{31}L}{2}\right)}\mp\sin{\delta}\sin{\left(\frac{\Delta_{31}L}{2}\right)}\right],
\end{split}
\label{Eq:Probability}
\end{equation}
where $\Delta_{i j} \equiv \Delta m^2_{i j}/ 2E$, $\tilde{J}=c_{13}\sin{2\theta_{12}}\sin{2\theta_{23}}\sin{2\theta_{13}}$, $A = \sqrt{2} G_F n_e$ is the matter potential with $n_e$ the electron density and $G_F$ the Fermi constant, and $\tilde{B}_{\mp}\equiv |A\mp \Delta_{13}|$. In this expression the only dependence in the CP violating phase $\delta$ appears in the last term, which is the interference between the ``atmospheric'' oscillation in the first term and the ``solar'' in the second. Since $\sin 2 \theta_{13} \sim 0.3$ while $\Delta_{12} L \sim 0.05$ at the first oscillation peak, the ``atmospheric'' term tends to dominate the oscillation probability and the interesting CP interference is only subleading. Conversely, at the second oscillation maximum $\Delta_{12} L \sim 0.1$ so that the dependence on $\delta$ of the oscillation probability is much higher which allows to improve the sensitivity to this parameter~\cite{Coloma:2011pg}. This can be seen in Fig.~\ref{Fig:probs} where the change in the probability upon changing the values of $\delta$ is much more significant at the second peak maximum compared to the first.
\begin{figure}[t]
\centering
\hspace*{-1cm}
\includegraphics[width=13.5cm]{Biprob/probs-norm.pdf}
\caption{Oscillation probabilities for the Zinkgruvan (upper panels) and Garpenberg (lower panels) baselines as a function of the energy for neutrinos (left panels) and antineutrinos (right panels). The red (blue) lines are for normal (inverted) ordering and three different values of $\delta = -\pi/2$, $0$ and $\pi/2$ are represented by the dashed, solid and dotted lines respectively. The grey histograms show the number of events that would be obtained in each energy bin for a 2/8 time splitting between neutrino/antineutrino mode if the oscillation probability was $1$. Thus, they serve as a guide of what energies of the oscillation probability would be well-sampled by the ESS$\nu$SB setup.}
\label{Fig:probs}
\end{figure}
In Eq.~(\ref{Eq:Probability}) the leading dependence on the mass ordering comes from the ``atmospheric'' term, as it goes as the inverse of the square of $\tilde{B}_{\mp}$. For $E \sim |\Delta m_{31}^2|/(2A) $ there will be a resonance which will produce an enhancement in neutrinos against antineutrinos or viceversa depending on the mass ordering. For a typical average matter density of $3.0~\text{g}/\text{cm}^3$ one finds that the approximate energy for this resonance to happen is $E \sim \mathcal{O}(\text{GeV})$. Given that the peak of the flux for ESS$\nu$SB happens at $E\sim \mathcal{O}(100)~\text{MeV}$ (see Fig.~\ref{Fig:probs}), the importance of the matter effects and hence of the sensitivity to the mass ordering for this facility is not expected to be significant.
The bi-probability plots~\cite{Minakata:2001qm} shown in Fig.~\ref{Fig:biP-540} help to illustrate the degeneracy problem at the ESS$\nu$SB experiment.
Here all oscillation parameters other than $\delta$, the octant of $\theta_{23}$, and
the sign of $\Delta m_{31}^{2}$ are fixed at the current best fit
values~\cite{Esteban:2018azc}, and the matter density along the neutrino
baseline is assumed to be constant with an average density of 3.0 g/cm$^{3}$.
The baseline length $L$ and the neutrino energies $E$ are set to $L=540$ km (ESS-Garpenberg) and $E=\{280, 380, 480\}$ MeV.
The ellipses show the variation of the appearance probabilities for the
neutrino and antineutrino channels from changes in $\delta$.
The four ellipses in each plot correspond to the different choices of the octant of $\theta_{23}$ and the mass ordering.
When the ellipses overlap sharing the same region in the $P(\nu_{\mu} \rightarrow \nu_{e})$-$P(\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e})$ plane, the same oscillation probabilities can be obtained by changing $\delta$, the octant of $\theta_{23}$ and/or the mass ordering, implying the existence of degenerate solutions.
Let us first focus on the middle plot with $E=380$ MeV where the
oscillation probabilities are close to the second maximum, $|\Delta m_{31}^{2}|L/(4E) \sim 3\pi/2$.
The centres of the ellipses are located on the CP conserving line $P(\nu_{\mu} \rightarrow \nu_{e}) = P(\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e})$, which reflects the fact that the matter effect, which could induce an explicit difference between the neutrino and antineutrino oscillation probabilities unrelated to the intrinsic CP violation from $\delta$, is irrelevant for this energy and baseline.
The major axes of the ellipses extend widely along the diagonal line orthogonal to the CP conserving line. This means that the CP violating term proportional to $\sin\delta$ in Eq.(\ref{Eq:Probability}) is very relevant in the oscillation probability for this energy and baseline, leading to the improved CP sensitivity at the second oscillation peak.
The ``fake'' CP violation effect due to the matter effect separates the two ellipses with opposite mass ordering at the first oscillation maximum, where T2HK focuses on, causing the $\delta$-sign$(\Delta m_{31}^{2})$ degeneracy in the CP violation search, cf. the right most plot in Fig.~\ref{Fig:biP-360}.
Conversely, the CP violation search at the second oscillation maximum is not noticeably affected by the matter effect~\cite{Bernabeu:2018use,Ghosh:2019sfi}.
Changing the value of $\theta_{23}$, the ellipses almost keep the same shape and move in parallel along the CP conserving line, which causes the $\delta$-$\theta_{23}$ degeneracy~\cite{Minakata:2013hgk,Coloma:2014kca}.
The vertices of the ellipses are located at $\delta=\{\pi/2, -\pi/2\}$, where the oscillation probabilities do not change much with a change of $\delta$. As a consequence, the precision in the determination of $\delta$ becomes worse close to the oscillation maxima~\cite{Coloma:2012wq}.
In other words, since the two points with $\delta$ and $\pi-\delta$ on an ellipse are close to each other around $\delta=\{\pi/2,-\pi/2\}$, it is hard to separate them~\cite{Coloma:2012wq}.
Although at the probability level from Fig.~\ref{Fig:biP-540} the expectation would be that this quasi-degeneracy effect occurs similarly at $\delta=\pi/2$ and $\delta=-\pi/2$, the numerical simulations we will report in Section~\ref{sec:results} show that the ESS$\nu$SB suffers this effect more severely at $\delta=-\pi/2$ than at $\delta=\pi/2$. This is due to the significant difference in event rates between these two points. Indeed, for $\delta=-\pi/2$, the oscillation probability for neutrinos is enhanced while the antineutrino one is suppressed. Since both the flux and the cross section are also smaller for antineutrinos, this strongly penalizes the measurement at $\delta = -\pi/2$ since the antineutrino sample is essentially lost given that the event rate at the second oscillation peak is already necessarily small.
On the other hand, at $\delta=\pi/2$, the oscillation probability for neutrinos is suppressed, but the larger cross section and flux compensate for it and prevents such a big loss of sensitivity.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Biprob/ESS540-biprob-crop.pdf}
\caption{Bi-probability plots for the ESS-Garpenberg setup $L=540$ km. Three plots for three different neutrino energies: $E=\{280, 380, 480\}$ MeV from left to right.
%
The four ellipses in each plot for the different choices of ($s_{23}^{2} \equiv \sin^{2}\theta_{23}$, sign[$\Delta m_{31}^{2}$]):
blue solid for ($0.45,+$), orange solid for ($0.45,-$), blue dashed for ($0.55,+$), and orange dashed for ($0.55,-$).
The energies $E=380$ MeV and $E=480$ MeV correspond to the vicinity of the second oscillation maximum and the first oscillation minimum.}
\label{Fig:biP-540}
\end{figure}
In the energy region that the ESS$\nu$SB focuses on, the oscillation phase changes rapidly. As a consequence, the shape and location of the ellipses changes very significantly even within the same energy bin.
In Fig.~\ref{Fig:biP-540},
we also show the bi-probability plots with $E=$280 and 480 MeV where the oscillation probabilities are approaching the minima, which are also well-covered by the ESS$\nu$SB flux.
The ellipses are not distributed symmetrically to the CP conserving line,
which means that, contrary to the second peak, matter effects do have some impact on the oscillation probabilities.
However, this impact is still subleading, given the rather low energy, and does not shift the energies where the extrema are located, cf. Fig.~\ref{Fig:probs}.
As a result, the two ellipses for the different mass hierarchies are not separated in the entire energy region.
The drastic shape change of the ellipses when varying the energy is largely due to the ratio of the $\sin\delta$ and the $\cos\delta$ terms in the oscillation probability, see Eq.~(\ref{Eq:Probability}).
The $\sin\delta$ term is most significant close to the oscillation peak with $|\Delta m_{31}^{2}| L/(4E) \simeq 3 \pi/2$ for $E \simeq 380$ MeV.
As the probabilities depart from the maximum, the major axes of the ellipses start following along the direction of the CP conserving line, which means that the $\cos\delta$ term increases in importance as we approach the minima with $|\Delta m_{31}^{2}| L/(4E) \simeq \pi$ (right panel of Fig.~\ref{Fig:biP-540}) or $|\Delta m_{31}^{2}| L/(4E) \simeq 2 \pi$ (left panel).
In the left and the right plots, the ellipses with different mass orderings intersect each other at points with different values of $\delta$ at different energies.
Therefore, in principle, with precise enough measurements at various energies, one could determine the value of $\delta$ and the sign of $\Delta m_{31}^{2}$ separately. However, the oscillations are too fast for the $\sim 100$~MeV resolution achievable at these energies with a water Cerenkov detector to resolve and also the event rate at the second maximum is not large enough to perform a very fine binning.
Thus, it is not possible to track the rapid oscillations in Fig.~\ref{Fig:probs}, although some mild sensitivity to the mass ordering can be achievable
A large overlap between the two ellipses with different mass orderings and different octants at the oscillation maximum (middle panel in Fig.~\ref{Fig:biP-540}), where most of the statistics is concentrated, suggests that the mass ordering sensitivity at the beam experiment is affected by the octant degeneracy.
The ellipses for different octants barely separate in the entire energy region, which implies a rather poor sensitivity to $\theta_{23}$ in the appearance channel leading to octant degeneracies that can spoil both the determination of $\delta$ and of the mass ordering at the ESS$\nu$SB. Conversely, for experiments focusing on the first maxium the two ellipses for different octants are more separated~\cite{Ghosh:2019sfi}, cf. the right panel in Fig.~\ref{Fig:biP-360}.
Therefore, we will explore the impact of the addition of the atmospheric neutrino data collected at the far detector of the ESS$\nu$SB to the beam data since atmospheric neutrinos can provide both sensitivity to the $\theta_{23}$ octant and the mass ordering helping to lift parametric degeneracies~\cite{Huber:2005ep,Campagne:2006yx}.
The mass ordering sensitivity from an observation of atmospheric neutrinos comes from the oscillation signals driven by $\Delta m_{31}^{2}$ and the matter effect (first term in Eq.~(\ref{Eq:Probability})) and therefore, it does not depend on the value of $\delta$.
On the other hand, the sensitivity is better for $\theta_{23}$ in the second octant than the first octant, since the term is proportional to $\sin^{2} \theta_{23}$~\cite{Akhmedov:2012ah}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Biprob/ESS360-biprob-crop.pdf}
\caption{Bi-probability plots for $L=360$ km (ESS-Zinkgruvan). In this
energy range $E=250-600$ MeV, the oscillation probabilities
experience the second maximum, the first minimum, and the first
maximum.}
\label{Fig:biP-360}
\end{figure}
If the shorter baseline $L=360$ km (ESS-Zinkgruvan) is instead considered, the neutrino flux at the high energy tail up to $E\sim600$ MeV covers the first oscillation maximum.
This situation corresponds to the bi-probability ellipses presented in the right panel of Fig.~\ref{Fig:biP-360}, which show the same shape and position characteristic of other experiments located at the first oscillation maximum such as T2HK.
The matter effect is not significant enough to completely separate the two mass orderings.
In the relevant energy range (200-600 MeV), the oscillation probabilities go from the first maximum (right panel) to the first minimum (middle panels) and to the second maximum (left panel).
The leftmost panel with $E=250$ MeV, where the second oscillation peak would be located, looks very similar to that with $E=380$ MeV in the case of $L=540$ km.
The ellipses for the different mass orderings are separated more clearly in the case of $L=360$ km than $L=540$ km in a large energy region, which leads to a slightly better sensitivity to the mass ordering even though the baseline is shorter.
From the information at the first oscillation maximum, the ESS$\nu$SB with $L=360$ km also has better sensitivity to $\theta_{23}$ than the $L=540$ km option, so that it is expected that the longer baseline option will benefit more from the addition of the atmospheric neutrino data, which helps to determine $\theta_{23}$ and its octant.
\section{Simulation and experimental details}
\label{sec:setup}
The simulation of the ESS$\nu$SB data has been performed with the GLoBES software~\cite{Huber:2004ka, Huber:2007ji}. We have assumed that the neutrino beam will shine on a near and a far detector to reduce the systematic uncertainties~\cite{Baussan:2013zcy}. The far detector is a 1~Mt MEMPHYS-like water Cerenkov detector~\cite{Agostino:2012fd}, while the near detector has been assumed to be identical to the far detector in terms of efficiencies and background rejection capabilities with a fiducial mass of 0.1 kt. The response of the detectors has been implemented through migration matrices, both for the signal efficiency and the background rejection from Ref.~\cite{Agostino:2012fd}.
A beam power of 5~MW with 2.5~GeV protons and an exposure of $1.7\times 10^{7}$ operating seconds per year has been assumed~\cite{Baussan:2013zcy}. The fluxes have been simulated explicitly at 1~km for the near detector~\cite{Blennow:2014fqa}, accounting for possible geometrical effects since the source cannot be considered point-like, as well as for 100~km (and consequently rescaled) for the longer baselines considered for the far detector~\cite{Baussan:2013zcy}. The event rate peaks around $\mathcal{O}(100)$ MeV energies (see Fig.\ref{Fig:probs}), so the dominant contribution to the cross section will be in the quasi-elastic regime (QE). For the cross section we use the results from the Genie~\cite{Andreopoulos:2015wxa} tune G18$\_$10a$\_$00$\_$000.
We have assumed a total running time of 10 years. Nonetheless, we will also study the dependence of the physics reach on the relative running time spent in positive and negative focusing in order to optimize it for the measurement of CP violation. Likewise, although the preferred location of the far detector for the ESS$\nu$SB is the Garpenberg mine at 540~km~\cite{Baussan:2013zcy}, different baselines, with emphasis in the alternative Zinkgruvan option at 360~km, will be studied to address the optimal choice. Finally, we will also study how the CP discovery potential depends on the total exposure.
Throughout all the simulations we adopt the same treatment of the systematic errors from Table~\ref{Tab:Systematics} as in Ref.~\cite{Coloma:2012ji}. Unless otherwise specified, we will assume the ``Optimistic'' systematics from the first ``Opt.'' column in Table~\ref{Tab:Systematics} although we will also show how the results are affected when the more conservative ones in the second column ``Cons.'' are considered instead.
All systematics have been introduced as nuisance parameters and the results presented have been obtained minimizing the $\chi^2$ over all of them. The systematic uncertainties associated to fluxes and cross sections have been assumed to be fully correlated between near and far detector and uncorrelated between neutrino and antineutrino components and different flavours. The uncertainties on the fiducial volumes of the near and far detectors were not assumed to be correlated. Additionally, to account for the uncertainty in the cross section between the near and far detector, arising from the different flavour composition of the beam (mainly $\nu_{\mu}$ in the near site and $\nu_e$ for the signal in the far detector), a completely uncorrelated systematic is included for their ratio (last row of Table~\ref{Tab:Systematics}). Therefore, the $\chi^2$ will be given by
\begin{equation}
\chi^2=\text{min}_{n_{s_i}}\left(\hat{\chi}^2_{FD}[n_{s_C}]+\hat{\chi}^2_{ND}[n_{s_C},n_{s_U}] + \frac{n_{s_C}^2}{\sigma_{n_{s_C}}^2}+\frac{n_{s_U}^2}{\sigma_{n_{s_U}}^2}\right),
\end{equation}
where $\hat{\chi}^2_{FD}$ ($\hat{\chi}^2_{ND}$) corresponds to the far (near) detector and $n_{s_C}$ ($n_{s_U}$) are the correlated (uncorrelated) systematic uncertainties.
We have added to the resulting $\chi^2$ a gaussian prior with the central values and $1\sigma$ errors from Ref.~\cite{Esteban:2018azc} for ``solar'' and ``reactor'' parameters. For the ``atmospheric'' parameters we set a prior on $\sin^2{2 \theta_{23}}$ and $|\Delta m_{31}^2|$ given that the octant for $\theta_{23}$ and the mass ordering are still unknown. Since the determination of these two parameters comes primarily from atmospherics, when adding this sample to the beam data no prior has been added on $\theta_{23}$ and $\Delta m_{31}^2$.
\begin{table}
\centering
\begin{tabular} {|| c c c ||}
\hline
Systematics & Opt. & Cons. \\
\hline
\hline
Fiducial volume ND & 0.2\% & 0.5\% \\
Fiducial volume FD & 1\% & 2.5\% \\
Flux error $\nu$ & 5\% & 7.5\% \\
Flux error $\bar{\nu}$ & 10\% & 15\% \\
Neutral current background & 5\% & 7.5\% \\
Cross section $\times$ eff. QE & 10\% & 15\% \\
Ratio $\nu_e/\nu_{\mu}$ QE & 3.5\% & 11\% \\
\hline
\end{tabular}
\caption{Systematic uncertainties for a super beam as described in Ref.~\cite{Coloma:2012ji} for two different scenarios, the ``Optimistic'' one and the ``Conservative'' scenario where systematics are larger.}
\label{Tab:Systematics}
\end{table}
The simulation of the atmospheric neutrino sample in MEMPHYS is the one used in the analysis from Ref.~\cite{Campagne:2006yx} where the neutrino fluxes at Gran Sasso from Honda calculations~\cite{Honda:2004yz} were used. This is a conservative estimate as fluxes become larger at higher geomagnetic latitudes such as Garpenberg or Zinkgruvan. In the simulation the events are separated between fully and partially contained events in the detector and stopping from through-going muon events. The neutral current contamination in each bin was included assuming the same ratio as Super-Kamiokande between neutral-current and unoscillated charged-current events~\cite{Ashie:2005ik}. For further details on the atmospheric sample see~\cite{Campagne:2006yx}.
\section{Results}
\label{sec:results}
In Fig.~\ref{Fig:CP_atmvsbeam} we show the impact on the CP discovery potential of the ESS$\nu$SB before (dashed lines) and after (solid lines) the inclusion of the atmospheric sample for the Zinkgruvan (360~km) and Garpenberg (540~km) options in the left and right panels, respectively. The plots represent the $\sqrt{\Delta \chi^2}$ with which CP conserving values of $\delta = 0$ or $\pi$ can be disfavoured as a function of the true value of $\delta$. We take the minimum of $\Delta \chi^2$ between $\delta=0$ and $\pi$. The $\sqrt{\Delta \chi^2}$ can be interpreted as the significance for exclusion of CP-conserving values (and hence evidence for CP violation) as long as the assumptions behind Wilks' theorem hold~\cite{Wilks:1938dza}. Deviations from these assumptions can be sizable for presently running experiments, but are expected to be smaller for next generation facilities~\cite{Blennow:2014sja}.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_Sigma_AtmvsBeam_NO_360.pdf}
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_Sigma_AtmvsBeam_NO_540.pdf}
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_Sigma_AtmvsBeam_IO_360.pdf}
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_Sigma_AtmvsBeam_IO_540.pdf}
\caption{Significance with which CP conserving values of $\delta$ can be excluded for the Zinkgruvan 360~km (left panels) and Garpenberg 540~km (right panels) options. The upper (lower) plots are for normal (inverted) mass ordering while the red (blue) curves correspond to $\theta_{23}$ in the first (second) octant. The dashed lines correspond to the beam data only, while the continuous lines correspond to the results studying events from the beam and from atmospheric neutrinos. The running time splitting has been assumed to be $t_{\nu}$=$t_{\bar{\nu}}=5$ years.}
\label{Fig:CP_atmvsbeam}
\end{figure}
Even though the sensitivity of the atmospheric neutrino dataset to $\delta$ is almost negligible, the improvement of the ESS$\nu$SB physics reach upon its inclusion is quite remarkable. The improvement is generally larger for the longer 540~km baseline than for the Zinkgruvan 360~km option. This is in line with the expectations discussed in Section~\ref{sec:theory} of the atmospheric sample being more complementary to the beam information at the longer baseline. Indeed, at the second oscillation maximum the $\nu_\mu$ disappearance oscillation is not sampled as efficiently as at the first peak and this deteriorates the determination of the atmospheric oscillation parameters $\theta_{23}$ and $\Delta m^2_{31}$, which play an important role in the measurement of $\delta$. Conversely, the 360~km baseline has higher statistics and some events also cover the first oscillation maximum such that the atmospheric oscillation information is less complementary and the gain upon its inclusion is less noticeable. From these results we can conclude that the ESS$\nu$SB setup combined with the atmospheric neutrino sample would be able to rule out CP-conserving values of $\delta$ for $\sim 60 \%$ ($\sim 55 \%$) of the possible values of $\delta$ at the $5 \sigma$ level regardless of the octant and the mass ordering when observing at the 540~km (360~km) baseline.
Figure~\ref{Fig:CP_atmvsbeam} also shows that the gain in CP discovery potential is much more pronounced in some particular regions of the parameter space, especially for $\delta < 0$ and $\theta_{23}$ in the first octant or $\delta > 0$ and the second octant. In these examples the dotted curves for beam only often show a kink that reduces the slope and the values of $\delta$ for which CP-violation could be discovered with high significance. Conversely, the corresponding solid curves with atmospheric data either do not display the kink or develop it at higher significance so that the resulting CP-discovery potential is much larger. These kinks occur due to the presence of an unresolved octant degeneracy at a CP-conserving value of $\delta$ that prevents drawing conclusions regarding CP violation. When atmospheric data is added, the sensitivity to the octant improves and these degeneracies are either lifted or only show up at much higher significance.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_th23_FO_dCPm40_chi2_25_2.pdf}
\includegraphics[width=7.5cm]{Degeneracies/CP/CP_th23_SO_dCP150_chi2_25_MC.pdf}
\caption{Allowed regions at $\Delta \chi^2 = 25$ for different assumed values of $\sin^2\theta_{23}$ and $\delta$ represented by the star for a 540~km baseline (Garpenberg location). The red curves correspond to the atmospheric dataset alone, the blue to the beam-only information and the black curves to the combination of both. Dotted regions are allowed with the wrong mass ordering. The running time splitting has been assumed to be $t_{\nu}$=$t_{\bar{\nu}}=5$ years.}
\label{Fig:CP_sens_potato}
\end{figure}
This situation is illustrated in Fig.~\ref{Fig:CP_sens_potato}, where the allowed regions at the $\Delta \chi^2 = 25$ level are shown in the $\delta$-$\sin^2 \theta_{23}$ plane. The left (right) panels assume the true values $\delta=-40^\circ$ ($\delta=150^\circ$), $\sin^2 2 \theta_{23}=0.418$ ($\sin^2 2 \theta_{23}=0.582$) and normal ordering. As can be seen, when only the beam information is taken into account (blue curves), an octant degeneracy that spreads the allowed region towards CP conserving values appears. Conversely, the atmospheric data on their own (red curves) have no capability to determine $\delta$ at all, but can instead rule out the wrong octant of $\theta_{23}$. Thus, the combination of the two data sets (black curves) very significantly improves the CP discovery potential of the facility in these areas of parameter space. The dotted lines correspond to ``sign'' degeneracies with the opposite mass ordering to the one chosen as true value. In the right panel this degeneracy is also solved with atmospheric data while for the values of $\delta$ and $\theta_{23}$ chosen in the left panel a small sign degeneracy remains between the 4 and $5 \sigma$ level. Notice that an ``intrinsic degeneracy''~\cite{BurguetCastell:2001ez} at $\delta \simeq \pi-\delta_{true}$ also shows up at the $5 \sigma$ level when only the beam information is taken into account. As for the ``sign'' degeneracy, the atmospheric neutrino data is enough to lift it for the parameters chosen in the right panel while a small remnant is present in the left. In any case, both the ``intrinsic'' and the ``sign'' degeneracies appear at $\delta \simeq \pi-\delta_{true}$, given the comparatively small matter effects for the setup, and their allowed regions are smaller or comparable to that of the true solution so that only the ``octant''degeneracy plays a significant role in reducing the CP-discovery potential when atmospheric data is not exploited to lift it.
\begin{figure}
\centering
\includegraphics[width=10.5cm]{Octant/Oct_NO_540.pdf}
\caption{Significance with which the wrong octant would be disfavoured as a function of the actual value of $\theta_{23}$ with beam-only information (blue lines) and including also the atmospheric dataset (red lines) for the baseline to Garpenberg ($L=540$~km) and normal mass ordering. The running time splitting has been assumed to be $t_{\nu}$=$t_{\bar{\nu}}=5$ years. The results for the Zinkgruvan site ($L=360$~km) and for inverted ordering are very similar. The vertical line represents the present best fit for $\theta_{23}$ from~\cite{Esteban:2018azc}.}
\label{Fig:Oct_sens}
\end{figure}
In Fig.~\ref{Fig:Oct_sens} we show how the significance with which the ESS$\nu$SB would be able to disfavour the wrong octant of $\theta_{23}$ as a function of the true value of $\theta_{23}$ (blue lines). As already anticipated in Section~\ref{sec:theory}, this capability improves dramatically upon the inclusion of the atmospheric neutrino sample (red lines) and thus the potentially dangerous ``octant'' degeneracies are lifted. The curves are almost identical for both mass orderings and for the Zinkgruvan and Garpenberg baselines.
The significance with which the ESS$\nu$SB would be able to disfavour the wrong mass ordering is shown in Fig.~\ref{Fig:Hierarchy_sens}, where dotted (solid) lines correspond to beam-only data (beam and atmospheric data). The left (right) panels correspond to the 360~km (540~km) baseline, and the upper (lower) panels are for the scenario in which the true ordering is normal (inverted). As can be seen, the ESS$\nu$SB beam data allow the wrong mass ordering to be disfavoured at around the $3 \sigma$ ($2 \sigma$) level for the 360~km (540~km) baseline for any value of $\delta$ and the octant. When the atmospheric data are added, the sensitivity to the wrong ordering is boosted to the 4-5$\sigma$ level, or even higher for the particular case of normal ordering and second octant of $\theta_{23}$ ($\sin^2{\theta_{23}}=0.582$ from Ref.~\cite{Esteban:2018azc}), for which the signal in atmospheric neutrinos is enhanced, as expected from Eq.(\ref{Eq:Probability}). For normal ordering (upper panels) the inclusion of the atmospheric neutrino data also changes the shape of the curve; in particular, a larger increase in the significance is seen around $\delta=0$ than for other values. This is due to the solution of the octant degeneracy since, as can be seen in the middle panel of Fig.~\ref{Fig:biP-540} or the first panel of Fig.~\ref{Fig:biP-360}, for $\delta=0$ and normal ordering the ellipse with opposite octant and ordering has a significant overlap.
\begin{figure}[h]
\centering
\includegraphics[width=7.5cm]{Degeneracies/Hierarchy/Hier_AtmvsBeam_360_5+5_NO.pdf}
\includegraphics[width=7.5cm]{Degeneracies/Hierarchy/Hier_AtmvsBeam_540_5+5_NO.pdf}
\includegraphics[width=7.5cm]{Degeneracies/Hierarchy/Hier_AtmvsBeam_360_5+5_IO.pdf}
\includegraphics[width=7.5cm]{Degeneracies/Hierarchy/Hier_AtmvsBeam_540_5+5_IO.pdf}
\caption{Significance with which the wrong mass ordering would be disfavoured for $\theta_{23}$ in the first octant (red lines) or second octant (blue lines) and the true mass ordering being normal (upper plots) or inverted (lower plots). Dashed lines correspond to the beam-only data, while solid lines correspond to the addition of the atmospheric sample. The left panels correspond to the baseline to Zinkgruvan, while the right ones to the location of the Garpenberg mine. The running time has been assumed to be $t_{\nu}=t_{\bar{\nu}}=5$ years.}
\label{Fig:Hierarchy_sens}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=7.5cm]{Precision/Prec_AtmvsBeam_5+5_NO-SO_360_3.pdf}
\includegraphics[width=7.5cm]{Precision/Prec_AtmvsBeam_5+5_NO-SO_540_3.pdf}
\includegraphics[width=7.5cm]{Precision/Prec_Best_360_SO-NO3.pdf}
\includegraphics[width=7.5cm]{Precision/Prec_Best_540_SO-NO3.pdf}
\caption{Precision (spread of the $1 \sigma$ allowed region) on the determination of $\delta$ for the baseline to Zinkgruvan $L=360$~km (left panels) and Garpenberg $L=540$~km (right panels) for the current best-fit parameters~\cite{Esteban:2018azc}. In the upper panels we show the comparison between the precision obtained with (solid lines) and without (dashed lines) the atmospheric sample for a running time of 5 years in each focusing. In the lower plots we show the dependence of the precision on the relative running time in each mode, where $t_{\nu}$ ($t_{\bar{\nu}}$) corresponds to the time the experiment would run in neutrino (antineutrino) mode, combining atmospheric and beam datasets.}
\label{Fig:Precision_RunTime}
\end{figure}
In Fig.~\ref{Fig:Precision_RunTime} we analyze the precision with which the ESS$\nu$SB experiment would be able to measure the CP-violating phase $\delta$. In this figure we assumed the currently preferred option of normal ordering and second octant of $\theta_{23}$. In the upper panels we show the improvement in the $1 \sigma$ allowed region with which $\delta$ would be constrained by adding the atmospheric neutrino sample (solid lines) to the beam information alone (dashed lines). As can be seen, both for the 360~km (left panel) and the 540~km baseline (right panel), the precision with which $\delta$ could be determined shows a very pronounced dependence on $\delta$ itself. For CP-violating values of $\delta$ around $\pm 90^\circ$, the $1 \sigma$ uncertainty in the measurement peaks, leading to the poorest precision, while for $\delta$ around $0$ or $180^\circ$ the most precise measurements would be achieved.
As discussed in Ref.~\cite{Coloma:2012wq}, this structure follows from the dependence of the oscillation probability on $\delta$ shown in Eq.(\ref{Eq:Probability}). At an oscillation peak, $|\Delta m_{31}^{2}| L/(4E) = (2n-1)\pi/2$, and thus mainly $\sin \delta$ is probed. Since the derivative of $\sin \delta$ vanishes at $\delta = \pm 90^\circ$, the precision with which $\delta$ can be determined is worst close to these values. In order to constrain $\delta$ around $\delta = \pm 90^\circ$, measurements away from the oscillation maxima to determine $\cos \delta$ would instead be necessary. These off-peak measurements are easier at the Zinkgruvan 360~km baseline, since the statistics are higher and the beam is not exactly centered at the maximum, while they are very challenging at Garpenberg, since very few events away from the oscillation peak are expected. This explains why the reconstructed sensitivities around $\delta = \pm 90^\circ$ are much worse in the right panel compared to the left. Moreover, the double-peak structure that can be seen for $\delta = - 90^\circ$ for 540~km corresponds to the ``intrinsic'' degeneracies depicted in Fig.~\ref{Fig:CP_sens_potato} that merge into one bigger allowed region. Since, as seen in Fig.~\ref{Fig:CP_sens_potato}, the addition of atmospheric data can lift these degeneracies, in the solid lines where this information was included the difference between the two baselines is significantly reduced.
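To make the location of the oscillation maxima concrete, the following short sketch (in Python) evaluates the energies of the first and second maxima for the two candidate baselines; the value of $|\Delta m^2_{31}|$ is an illustrative input, not the one used in our fits:
\begin{verbatim}
import numpy as np

# Hedged numerical sketch: energy E_n of the n-th oscillation maximum from
# |Dm31^2| L / (4E) = (2n-1) pi/2, using the standard phase
# 1.267 * Dm2[eV^2] * L[km] / E[GeV]. The Dm2 value is illustrative only.
Dm2 = 2.5e-3                      # eV^2, assumed |Delta m^2_31|
for L in (360.0, 540.0):          # km: Zinkgruvan and Garpenberg
    for n in (1, 2):
        E = 2.0 * 1.267 * Dm2 * L / ((2 * n - 1) * np.pi)
        print(f"L = {L:3.0f} km, maximum n = {n}: E ~ {E:.2f} GeV")
\end{verbatim}
The second maximum indeed falls at a few hundred MeV for both sites, i.e., in the energy region covered by the ESS$\nu$SB flux.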
Conversely, for $\delta = 0$ or $180^\circ$ the on-peak measurement is what allows $\delta$ to be determined and, since the peak is better covered at the longer 540~km baseline, the precision is slightly better there. This fact also translates into the better CP-discovery potential observed for the 540~km baseline in Fig.~\ref{Fig:CP_atmvsbeam}. Since the error in $\delta$ is smaller around CP-conserving values, the 540~km option could get closer to these values while still allowing the discovery of CP violation to be claimed with high significance.
In the lower panels of Fig.~\ref{Fig:Precision_RunTime}, the impact of changing the relative running times in positive focusing (neutrino mode) and negative focusing (antineutrino mode) is shown. Since off-peak measurements are required for $\delta = \pm 90^\circ$, statistics are crucial and easier to accumulate in neutrino mode, since fluxes and cross sections are larger, and thus the best precision would be obtained by devoting longer periods of data taking to positive focusing. Conversely, around $\delta=0$ or $180^\circ$ the complementarity between the neutrino and antineutrino samples pays off and more even splits of the running time provide better sensitivity.
\begin{figure}
\centering
\includegraphics[width=7.5cm]{Precision/Prec_Best_360.pdf}
\includegraphics[width=7.5cm]{Precision/Prec_Best_540.pdf}
\caption{Precision on the measurement of $\delta$ for a total running time of 10 years when the relative running time in neutrino and antineutrino modes is optimized for each value of $\delta$. This corresponds to running similar times in neutrino and antineutrino modes around $\delta=0, 180^\circ$ and maximizing the neutrino runs around $\delta = \pm 90^\circ$.}
\label{Fig:Precision_Best}
\end{figure}
Since the ESS$\nu$SB would be a next-generation facility, its measurement strategy can profit from previous hints from preceding oscillation experiments and adapt the splitting between neutrino and antineutrino modes depending on the value of $\delta$ that the data point to. If such a strategy is followed and the best splitting between neutrino and antineutrino modes is adopted for each value of $\delta$, the precision presented in Fig.~\ref{Fig:Precision_Best} would be obtained. If the mass ordering is confirmed to be normal and $\theta_{23}$ lies in the second octant, as present data prefer, the precision with which the ESS$\nu$SB facility would determine $\delta$ ranges from $16^\circ$ ($13^\circ$) for $\delta \sim -90^\circ$ to $6^\circ$ ($7^\circ$) for $\delta \sim 0$ or $\delta \sim 180^\circ$ for 540~km (360~km).
From Figs.~\ref{Fig:CP_atmvsbeam} and~\ref{Fig:Precision_Best} one can conclude that if the experiments preceding the ESS$\nu$SB do not find any evidence for CP violation, the best option would be the 540~km baseline and a more or less even split of the neutrino and antineutrino running times. Indeed, this choice would minimize the errors with which $\delta$ would be determined around CP-conserving values and would increase the CP-discovery potential. On the other hand, if the previous set of experiments determine $\delta$ to be close to maximally CP-violating, then the best scenario for the ESS$\nu$SB would be the shorter 360~km baseline and an increased neutrino run time, so as to determine $\delta$ with the best precision possible.
\begin{figure}
\centering
\includegraphics[width=10.5cm]{CP_Optimization/CP_fraction_Systematics.pdf}
\caption{Impact of different sources of systematic errors on the fraction of values of $\delta$ for which a $\Delta \chi^2 > 25$ exclusion of CP conservation would be possible at the Garpenberg mine. The orange circles correspond to the CP fraction with the ``Optimistic'' systematics from Table~\ref{Tab:Systematics}, red squares correspond to assuming that particular uncertainty to be 5 times larger and blue triangles to reducing the uncertainty by a factor of 5.}
\label{Fig:CP_frac_Sys}
\end{figure}
In Fig.~\ref{Fig:CP_frac_Sys} we show the impact of individual systematic uncertainties on the fraction of values of $\delta$ for which CP violation could be discovered ($\Delta\chi^2\geq 25$). The sources of uncertainty considered, summarized in Table~\ref{Tab:Systematics}, are the flux uncertainties for the signal ($\delta\phi_S$) and background ($\delta\phi_B$), the cross section systematic ($\delta\sigma$), the neutral current background ($\delta NC_B$), and the uncertainty on the ratio of the electron and muon flavour neutrino cross sections ($\delta \sigma_e/\sigma_{\mu}$). The plot shows that the systematic uncertainties that most significantly affect the performance of the ESS$\nu$SB are the ones related to the background components of the beam, since for these the determination at the near detector is more challenging. These are $\delta\phi_B$ and $\delta NC_B$, as well as $\delta \sigma_e/\sigma_{\mu}$, since the only $\nu_e$ present at the near detector that would allow this parameter to be fixed are those from the intrinsic background contamination of the beam. Among these, the strongest impact on the sensitivity is due to the cross section ratio since, not only is it difficult to constrain, but it is also most relevant to the signal at the far detector, which consists of $\nu_e$. Indeed, reducing or increasing this particular source of systematic error has the biggest impact on the physics reach. The impact is in any event limited, since the main bottleneck to the performance when observing at the second oscillation peak is statistics. In particular, a reduction of this systematic by a factor of 5 improves the CP fraction by $\sim 2\%$ (no impact for $\bar{\nu}$), while the same factor in the opposite direction worsens the sensitivity by $\sim 9\%$ ($\sim 4\%$).
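To illustrate the mechanics by which such nuisance parameters dilute the sensitivity, the sketch below minimizes a toy $\chi^2$ with a single normalization pull term; the bin contents and uncertainties are invented for illustration and are unrelated to our actual simulation:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# Hedged toy chi^2 with one pull term: a larger prior uncertainty sigma_a
# on the normalization gives the fit more freedom and lowers Delta chi^2.
N_obs = np.array([120.0, 80.0, 40.0])   # invented "data" in three bins
N_fit = np.array([100.0, 90.0, 45.0])   # prediction at the test point

def delta_chi2(sigma_a):
    def chi2(a):
        pred = N_fit * (1.0 + a)
        return np.sum((pred - N_obs) ** 2 / N_obs) + (a / sigma_a) ** 2
    return minimize_scalar(chi2).fun

for sigma_a in (0.05, 0.25):
    print(f"sigma_a = {sigma_a:4.2f}: Delta chi^2 = {delta_chi2(sigma_a):.2f}")
\end{verbatim}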
\begin{figure}
\centering
\includegraphics[width=7.5cm]{CP_Optimization/CP_fraction_L/CP_fraction_L.pdf}
\includegraphics[width=7.5cm]{CP_Optimization/CP_fraction_RT/CP_fraction_TotalRT.pdf}
\caption{Fraction of values of $\delta$ for which CP violation could be discovered above $5\sigma$ for different baselines to the far detector (left panel) for the two different sets of systematics from Table~\ref{Tab:Systematics}. In the right panel we show the CP fraction for the Garpenberg ($L=540$~km) and Zinkgruvan ($L=360$~km) mines, assuming the current best fit values for the oscillation parameters and the ``Optimistic'' systematics for increasing total exposure.}
\label{Fig:CP_fraction_Opt}
\end{figure}
The importance of these systematic errors in the physics reach is crucially dependent on the baseline of the experiment. In the left panel of Fig.~\ref{Fig:CP_fraction_Opt} we show the fraction of all the possible values of $\delta$ for which it would be possible to rule out $\delta =0$ or $\delta = 180^\circ$ with a $\Delta \chi^2 = 25$ or higher significance. The upper blue line is for the more optimistic systematics from Table~\ref{Tab:Systematics} and the lower red one for the more conservative values. As can be seen, the fraction of values of $\delta$ at which a $5 \sigma$ discovery would be possible peaks between 400~km and 700~km in both cases, but this peak is much more pronounced when the more conservative values are assumed for the systematic uncertainties. Indeed, for larger values of the systematics, the shorter baselines are strongly penalized, since the dependence of the oscillation probability on $\delta$ is subleading around the first peak and easily hidden by the systematics. Conversely, if very small systematic errors can be achieved, then the main limiting factor would be statistics and shorter baselines would perform better. Thus, by measuring at the second oscillation maximum, the ESS$\nu$SB setup becomes much more resilient to unaccounted-for sources of systematic errors than when observing only at the first peak.
In the right panel of Fig.~\ref{Fig:CP_fraction_Opt} we show how the fraction of values of $\delta$ for which CP violation would be discovered at the $5 \sigma$ level by the ESS$\nu$SB beam and atmospheric data increases with the exposure. As expected from an observation at the second oscillation peak, statistics is the main factor controlling the final reach of the experiment. Indeed, for 5 years of data taking the CP fraction is around $46\%$; by 10 years it increases to $62\%$, and it reaches $70\%$ for 20 years of exposure. The slope only flattens significantly after 25 years.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have performed an exhaustive analysis of the physics reach of the ESS$\nu$SB facility, exploring its capability to determine all the presently unknown neutrino oscillation parameters, such as the mass ordering and the octant of $\theta_{23}$, but with a focus on the discovery of leptonic CP violation and a precision measurement of $\delta$, which are the main declared goals of the experiment. For the first time, we combined the atmospheric neutrino sample that would also be observed at the facility with the beam information and studied the complementarity between the two data sets. We investigated how the physics reach of the facility could be optimized by exploring different baselines, focusing on the two candidate sites of Zinkgruvan at 360~km and Garpenberg at 540~km. We have also explored how the time split between neutrino and antineutrino modes can be exploited to improve the physics reach.
We conclude that the inclusion of the atmospheric data set can significantly increase the ESS$\nu$SB physics reach. Due to the peculiarities of observing the oscillation probability at the second oscillation maximum, we find that this combination is particularly synergistic. The atmospheric neutrino sample not only significantly increases the sensitivity to the mass ordering, as for other similar facilities~\cite{Huber:2005ep,Campagne:2006yx}, but it is also very effective in improving the constraints on $\Delta m^2_{31}$ and on $\theta_{23}$ and its octant. These measurements are especially challenging for the beam alone when sitting at the second maximum, given the low statistics, particularly in antineutrinos and in the $\nu_\mu$ disappearance channel. However, the determination of $\delta$ can be affected by correlations with $\theta_{23}$~\cite{Coloma:2014kca} and by degeneracies with the wrong octant, and thus the atmospheric information is also crucial to indirectly increase the CP discovery potential of the ESS$\nu$SB. We find this complementarity is somewhat more pronounced for the longer 540~km baseline since there the flux is more centered at the second oscillation peak and the statistics are smaller, so it benefits more from the information gained from the atmospheric neutrino data.
Regarding the optimal baseline, we find the choice is rather dependent on the actual value of $\delta$. For $\delta \sim \pm 90^\circ$ a precise measurement needs events away from the oscillation maximum. In this sense the shorter 360~km baseline is better, since the statistics for off-peak events are higher and this leads to a more precise measurement. Conversely, if $\delta$ is close to CP-conserving values and the previous set of measurements has not been able to claim the discovery of CP violation, the longer 540~km baseline would allow a larger part of the parameter space to be covered. Indeed, after 10 years of data taking, the fraction of values of $\delta$ for which a $5 \sigma$ discovery would be possible is $56\%$ for Zinkgruvan and $62\%$ for Garpenberg.
As for the splitting of the data taking time between neutrino and antineutrino modes, the optimal strategy also depends on the value of $\delta$. This fact could be exploited, since previous and present data at the time of the measurement should already show a strong preference for some part of the parameter space. Thus, the running strategy can be adapted to the situation, optimizing the precision with which this measurement can be performed. In particular, we find again that, given the need to go beyond on-peak measurements for $\delta \sim \pm 90^\circ$, statistics are much more relevant and maximizing the time in neutrino mode translates into the best precision for these values. Conversely, close to CP-conserving values of $\delta$, the information from on-peak events is most relevant and the complementarity between neutrino and antineutrino modes pays off, so that a more even split of the running time would provide the best precision.
Finally, we explored the possible bottlenecks for the physics reach of the facility by studying how it is affected by the values of the different systematic errors considered, as well as by the total exposure. As expected, the choice of observing the oscillation probability at its second maximum significantly reduces the impact of the systematic errors. We find that around the first oscillation peak the fraction of values of $\delta$ for which a $5 \sigma$ discovery is possible is reduced by more than a factor of 2 when considering the more conservative values of Table~\ref{Tab:Systematics}. On the other hand, at the second peak the reduction is only by a factor of around $1.2$. Among the different sources of systematic uncertainties considered, the most important one is the possible difference in the ratio of the electron to muon neutrino cross sections. This uncertainty is difficult to constrain from near detector information, since the flux is mainly composed of $\nu_\mu$, while the far detector signal consists of $\nu_e$. Conversely, the observation at the second maximum considerably reduces the number of events, and statistics play a much more relevant role. At the longer 540~km baseline, the fraction of values of $\delta$ allowing for a discovery would go from $47 \%$ to $62 \%$ and $70 \%$ for data taking periods of 5, 10, and 20 years, respectively.
\section*{Acknowledgements}
We are extremely grateful to Michele Maltoni for providing us with the simulations of atmospheric neutrino dataset that would be collected at the MEMPHYS detector used in Ref.~\cite{Campagne:2006yx}. We are also indebted to Budimir Klicek and Marco Roda for suggestions and help with the GENIE tunes most appropriate for the ESS$\nu$SB energy range. We also want to thank WP6 of the ESS$\nu$SB design study, in particular Monojit Ghosh, for comments on our manuscript.
This work is supported in part by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements 674896-Elusives, 690575-InvisiblesPlus, and 777419-ESSnuSB, as well as by the COST Action CA15139 EuroNuNet.
MB, EFM, and SR acknowledge support from the ``Spanish Agencia Estatal de Investigaci\'on'' (AEI) and the EU ``Fondo Europeo de Desarrollo Regional'' (FEDER) through the project FPA2016-78645-P; and the Spanish MINECO through the ``Ram\'on y Cajal'' programme and through the Centro de Excelencia Severo Ochoa Program under grant SEV-2016-0597. MB also acknowledges support from the G\"oran Gustafsson foundation.
\bibliographystyle{JHEP}
\section{Introduction}
The high energy behavior of the hadronic total cross sections remains one of the biggest unsolved problems in the theory of the strong interactions. The problem is that, even at the very high energies
$s\rightarrow \infty$, there is a range of scales probed in such a process. Many exclusive processes
with an additional large scale
can be treated using perturbative methods, thanks to the property of asymptotic freedom and the factorization theorems. On the other hand, the total cross sections are notoriously difficult to evaluate from first principles, and therefore one has to rely on phenomenological models. The high energy asymptotics of the hadronic interactions was first investigated within the S-matrix and Regge theory. Powerful methods based on a few general principles were elaborated, despite the lack of information on the microscopic dynamics. The high energy limit in QCD was calculated within the leading logarithmic approximation in the logarithms of energy. The result was the famous BFKL Pomeron, which indeed has the Regge behavior. More recently, in string theory, the AdS/CFT conjecture opened up a new path for understanding the large coupling limit
of gauge theories. In this approach the high-energy scattering of hadrons is dominated by gravitational scattering, with the Pomeron Regge trajectory being identified with the graviton trajectory. The picture might be complicated by the unitarity corrections, which have to be taken into account, and also by the fact that the AdS/CFT conjecture is tested for a UV finite and conformal theory, while so far the dual description of QCD is not known. In this lecture I will bring together some of these ideas; namely, I will give a short and elementary introduction to the Regge theory, the high energy limit in QCD, and the strong coupling limit within string theory.
I will also discuss the progress in the resummation at high energy in QCD, which in principle allows one to interpolate
between small and large couplings (at least in the case of N=4 SYM theory).
\section{S-matrix and the Regge theory}
The S-matrix theory, which was developed in an attempt to understand the theory of strong interactions, relied on a few assumptions based on very general and fundamental principles, see \cite{pdb_collins}. The postulates for the scattering S-matrix $\langle out| in \rangle$ were the following:
\begin{itemize}
\item Lorentz invariance. The S-matrix had to be therefore a function of the invariants $s,t,u$ and possibly masses of the incoming and outgoing particles.
\item Unitarity of the $S$ matrix: $SS^{\dagger}=S^{\dagger}S=1$. The unitarity really comes from the conservation of probability. The probability for the incoming state to scatter into some outgoing state must be one if we sum over all possible outgoing states.
\item Short range of the strong interactions. This allows the incoming and outgoing states to be treated as free
in the far past and the far future, $t\rightarrow \pm\infty$ (with $t$ here denoting time).
\item Analyticity. The $S$-matrix should be an analytic function of $s,t,u$, with only the singularities
due to stable or unstable particles and those which are required by unitarity. This postulate is very important for the construction of the S-matrix theory, but at the same time it is very controversial.
\item Crossing. This is really a consequence of the analyticity postulate. The physical kinematic regime for the process
$$
a+b\rightarrow c+d \; ,
$$ is when $s>0$ and $t,u<0$. According to the analyticity postulate, the amplitude
${\cal A}(s,t,u)_{ab\rightarrow cd}$ is an analytic function, and it can therefore be continued to another region, where $t>0$ and $s,u<0$, which gives the amplitude for a different process
$$
a+\bar{c} \rightarrow \bar{b} +d \; .
$$
Thus the same function describes both processes and one can identify
$$
{\cal A}(s,t,u)_{a\bar{c}\rightarrow \bar{b}d}={\cal A}(t,s,u)_{ab\rightarrow cd} \; .
$$
\end{itemize}
These postulates lie at the foundations of the S-matrix approach. A particular insight into the behaviour of the amplitude at high energy was gained by looking into its properties in the angular momentum plane. By performing the partial wave decomposition
for $2\rightarrow 2$ scattering in the $t$-channel, one can show that the amplitude admits the representation
\begin{equation}
{\cal A}_{a\bar{c}\rightarrow \bar{b}d}(s,t) = \sum_{l=0}^{\infty} \,a_l(s) \,P_l(1+2t/s) \; ,
\label{eq:partial_wave}
\end{equation}
where $a_l(s)$ is the partial wave amplitude and $P_l$ is the Legendre polynomial. The continuation
to the $s$-channel and the Sommerfeld-Watson transform allow the above relation to be rewritten as
\begin{equation}
{\cal A}(s,t) = \frac{1}{2i} \oint_C\, dl\, (2l+1)\,\frac{a(l,t)}{\sin \pi l}\, P_l(1+2s/t) \; ,
\label{eq:SW}
\end{equation}
where now $a(l,t)$ are the functions which are the analytic continuation of the amplitudes $a_l$ in (\ref{eq:partial_wave}). The contour $C$ is shown in the left plot in Fig.~\ref{fig:contourC}; it goes around the positive real axis and encompasses all the poles given by the $\sin \pi l$ in the denominator of (\ref{eq:SW}).
One can then deform the contour $C$ so that it is parallel to the imaginary axis in the $l$-plane.
There might be poles and cuts which must be encircled, and so the amplitude can be rewritten as a sum over the poles, the cuts, and the integral which runs along the line $(-1/2-i \infty,-1/2+i \infty)$, see the right-hand plot in Fig.~\ref{fig:contourC}. We are here primarily interested in the Regge limit, i.e., in the limit when $s \gg -t$, that is, at very high energies and small-angle scattering. In this limit the contribution to the amplitude is dominated by the rightmost pole in the complex angular momentum plane, and the background integral over the contour $C'$ vanishes, see the right-hand plot in Fig.~\ref{fig:contourC}. In the case when a simple pole (rather than a cut) dominates, the amplitude can be approximated as
\begin{equation}
{\cal A}(s,t) \rightarrow \frac{\eta+e^{-i\pi \alpha(t)}}{2} \, \beta(t)\, s^{\alpha(t)} \; .
\label{eq:Aregge}
\end{equation}
In this equation $\alpha(t)$ is the leading Regge pole, which depends on the momentum transfer $t$ and controls the high energy behavior of the amplitude; $\eta=\pm 1$ is the signature, and all the normalization
and the residue of the pole are absorbed into the function $\beta(t)$.
The amplitude (\ref{eq:Aregge}) can be thought of as coming from the exchange of an object, the Reggeon, in the $t$-channel. Its angular momentum is equal to $\alpha(t)$. This is a rather complicated object: its spin depends on $t$, and we cannot think of it as an ordinary particle, since it does not belong to a definite representation of the Lorentz group.
\begin{figure}[htb]
\centerline{\epsfig{file=lplane1.eps,width=0.4\textwidth}\hfill\epsfig{file=lplane2.eps,width=0.4\textwidth}}
\caption{Shape of the contour in the angular momentum plane.}
\label{fig:contourC}
\end{figure}
So far we have considered the process with negative values of $t$, but if we now look at positive values of $t$, we expect the amplitude to have poles which correspond to actual physical particles, $\alpha(m_i^2)=J_i$. Here $J_i$ is the spin of a physical particle with mass $m_i$.
An interesting observation made by Chew and Frautschi in the early sixties \cite{chew_frautschi} was that when plotting
the spin of the mesons as a function of their squared mass, the points lie on a universal straight line. This dependence was parametrized as
$$
\alpha(t) = \alpha(0) + \alpha' \, t \; ,
$$
with $\alpha(0)$ being the intercept and $\alpha'$ the slope parameter.
These straight lines were called Regge trajectories. Interestingly, the linear behavior continues to negative values of $t$, where it corresponds to the scattering process with the exchange of a reggeon with the same quantum numbers (except spin, of course, which is not defined there) as the mesons lying on the trajectory. For example, the process $\pi^- p\rightarrow \pi^0 n$ could be well described using the $\rho$ trajectory. Thus the Regge trajectories turned out to be universal quantities: for positive $t$
they contain physical particles with distinct values of masses and spins, whereas for negative $t$ values they control the energy behavior of the scattering process.
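As a simple numerical illustration of such a fit, the sketch below determines the slope and intercept of a trajectory from two mesons; the masses are rounded, PDG-like values quoted purely for illustration:
\begin{verbatim}
import numpy as np

# Hedged Chew-Frautschi sketch: fit J = alpha(0) + alpha' m^2 through two
# states commonly placed on the rho trajectory (illustrative inputs).
masses = np.array([0.775, 1.690])   # GeV: rho(770), rho_3(1690)
spins  = np.array([1.0, 3.0])
slope, intercept = np.polyfit(masses ** 2, spins, 1)
print(f"alpha(t) ~ {intercept:.2f} + {slope:.2f} t   (t in GeV^2)")
\end{verbatim}
The resulting numbers are close to the often-quoted intercept $\alpha(0)\simeq 0.5$ and slope $\alpha' \simeq 0.9\ {\rm GeV}^{-2}$ of the $\rho$ trajectory.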
\subsection{The Pomeron}
The $\rho$ trajectory has an intercept $\alpha(0)<1$. From the optical theorem one obtains that the total cross section behaves as
$$
\sigma_{\rm TOT} \sim s^{\alpha(0)-1} \; .
$$
Thus the $\rho$ trajectory discussed in the previous section, which corresponds to the exchange of an object with isospin $I=1$, leads to a cross section which decreases with energy.
Pomeranchuk showed that if there is charge exchange in a process, then its cross section decreases at very large energies. On the other hand, if the cross section increases, it should be dominated by a reggeon with the quantum numbers of the vacuum. Such a Reggeon is called the Pomeron. The situation could be further complicated by the Odderon, a Reggeon which is odd under charge conjugation, whose contribution could be constant with energy \cite{Bartels:1999yt}.
The experimental data on $pp$ and $p\bar{p}$ scattering exhibit a slow increase of the total cross section with increasing c.m.s.\ energy. This increase can be universally parametrized by the small power $\alpha(0)-1\simeq 0.08$, both for $pp$ and $p\bar{p}$ collisions \cite{Donnachie:1992ny}. The two cross sections differ at small energies, but they exhibit a universal growth at large energies. In fact, it is very interesting that all the hadronic cross sections ($pp,p\bar{p},\pi^+ p,\pi^- p, K^+ p, K^- p$) show this universal behavior \cite{Donnachie:1992ny}. The same growth is also seen in the photoproduction cross section $\gamma p$.
Thus we conclude that the total cross sections in strong interactions have an intriguing property of universality at high energies.
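The size of this universal growth is easy to illustrate numerically; in the sketch below the intercept is set to the value quoted above, while the normalization $X$ is an arbitrary illustrative constant, not a fit to the data:
\begin{verbatim}
import numpy as np

# Hedged illustration of the soft-Pomeron growth sigma_TOT ~ s^(alpha(0)-1).
eps, X = 0.08, 21.7          # intercept minus one; X in mb (illustrative)
for sqrt_s in (20.0, 200.0, 2000.0):   # GeV
    sigma = X * (sqrt_s ** 2) ** eps
    print(f"sqrt(s) = {sqrt_s:6.0f} GeV: sigma ~ {sigma:5.1f} mb")
\end{verbatim}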
\section{Gauge theory}
\label{sec:qcd}
The $S$-matrix approach provided an important insight into the high energy asymptotics. It could not, however,
answer more detailed questions about the exact behavior, since it lacked the microscopic dynamics.
The first attempt to derive the Pomeron from QCD was made by Low and Nussinov, who considered the 2-gluon exchange process. This simple model did not, however, have the features expected from Regge theory; for example, it is not a Regge pole. The improved approach, based on the resummation of the leading logarithms of energy, was pioneered by Lipatov and collaborators \cite{Lipatov:1996ts}. The original 2-gluon exchange model was dressed with subsequent gluon emissions in the approximation $s\gg-t$. More precisely, the gluon emissions were resummed in the limit where each power of the strong coupling is accompanied by a logarithm of the energy. The set of diagrams resummed in this approximation is shown in Fig.~\ref{fig:bfkl}, where each gluon exchanged in the $t$-channel acquires the `reggeized' propagator
$$
D^{\mu\nu}(\hat{s}_i,k_{i,T}^2)=\frac{ig^{\mu\nu}}{k_{i,T}^2} \bigg ( \frac{\hat{s}_i}{k_{i,T}^2}\bigg)^{\epsilon_G(k_{i,T}^2)} \; ,
$$
where $\hat{s}_i=(k_{i-1}-k_{i+1})^2$ with $k_i=(k_i^+,k_i^-,k_{i,T})$ being the momenta exchanged in the ladder and
\begin{equation}
\epsilon_G(q_T^2)=\frac{N_c \alpha_s}{4\pi}\int d^2 k_T \frac{-q^2_T}{k_T^2 (k_T-q_T)^2} \; ,
\label{eq:Reggetrajectory}
\end{equation}
is the gluon Regge trajectory. The latter object was obtained by the summation of the diagrams with virtual exchanges of gluons in the leading logarithmic approximation. As seen from (\ref{eq:Reggetrajectory}), this object is infrared divergent, so formally one needs a cutoff on the small momenta to properly define it. The vertices between the ordinary emitted gluons and the reggeized gluons are effective vertices. They result from the summation of different tree-level single gluon emission diagrams. The final result for the imaginary part of the amplitude with an arbitrary number of gluon emissions is rather complicated, but it turns out that it can be succinctly represented as the solution to an integral equation of the Bethe-Salpeter type
\begin{equation}
\omega f_{\omega}(k_{1T},k_{2T},q_T) = \delta^{(2)}(k_{1T}-k_{2T}) + \int d^2 k'_{T}\, K(k_{1T},k'_{T},q_T) \, f_{\omega}(k'_{T},k_{2T},q_T) \; .
\label{eq:bfklequation}
\end{equation}
\begin{figure}[htb]
\centerline{\epsfig{file=bfkl.ps,width=0.5\textwidth}}
\vspace*{-2cm}
\caption{The schematic representation of the diagrams summed in the BFKL calculation. The blobs represent the effective Lipatov vertex. The gluons exchanged in the $t$-channel are reggeized.
They are represented by the zigzag lines.}
\label{fig:bfkl}
\end{figure}
Here $\omega$ is the Mellin variable conjugate to $\ln s$, and $K$ is the (energy independent)
integral BFKL kernel, which contains the real part coming from the square of the effective vertex in Fig.~\ref{fig:bfkl} and the virtual part from the Regge trajectory. The function $f$ is called the gluon Green's function, and it depends on the off-shell transverse momenta and the rapidity (or $\omega$).
An important property of this equation is that it is infrared safe, unlike the gluon Regge trajectory.
The solution to this equation was found by employing the fact that the kernel has a conformal symmetry in two dimensions \cite{Lipatov:1985uk}. Therefore one can diagonalize this operator using the conformal eigenfunctions. For the purposes of this lecture it is sufficient to know the solution for zero momentum transfer, $t=-q_T^2=0$.
The eigenvalue equation can be written as
$$
K \times \phi_{\nu}^n = \frac{\alpha_s N_c}{\pi} \, \chi(\nu,n)\, \phi_{\nu}^n\, ,\hspace*{1cm} \phi_{\nu}^n(k_T) = \frac{1}{\pi \sqrt{2}} (k_T^2)^{1/2+i \nu} e^{in\theta}\; ,
$$
where the eigenvalue function is
\begin{equation}
\chi(\nu,n) = 2{\rm Re} [\psi(1)-\psi(1/2+i\nu+n/2)] \; .
\label{eq:eigenvalue}
\end{equation}
The dominant contribution is at $n=0$. The eigenvalue function has simple poles at $$\gamma=1/2+i\nu=\dots,-2,-1,0,1,2,\dots \; ,$$ and a saddle
point at $\gamma=1/2$.
The BFKL equation gives rise to a cut singularity, which can be seen by solving
\begin{equation}
1=\frac{\alpha_sN_c}{\pi} \frac{1}{\omega} \chi(n=0,\gamma) \; ,
\label{eq:polebfkl}
\end{equation}
for $\omega$. The cut structure is clear, since as $\gamma$ varies along the line $(1/2-i\infty,1/2+i\infty)$ the value of $\omega$ from this equation varies from $-\infty$ to $4\ln2 \, \alpha_s N_c/\pi$.
One can also find the solution by the saddle point method. To this end, one expands the kernel around the saddle point $\nu=0$ to get $$\chi(\nu)\simeq 4\ln2 -14 \zeta(3) \nu^2 \; .$$ This leads to the following solution in the diffusion approximation
\begin{multline}
f(\ln s/s_0,k_{1T},k_{2T}) \simeq \\ \simeq {\cal N}(\alpha_s,s,k_{1T},k_{2T}) \bigg( \frac{s}{s_0}\bigg)^{\omega_0} \exp\bigg(-\frac{\pi \ln ^2 \frac{k_{1T}^2}{k_{2T}^2}}{28 \zeta(3) \alpha_s N_c \ln s/s_0}\bigg) \; ,
\label{eq:bfklsol_diff}
\end{multline}
where the normalization function ${\cal N}$ depends on the energy and momenta, but the leading behavior has been factored out.
We see that the energy dependence of the solution is governed by a power behavior, with the power equal to the value of the kernel at its minimum, $\omega_0=4\ln 2 \frac{\alpha_s N_c }{\pi}$. The last term on the right hand side of Eq.~(\ref{eq:bfklsol_diff})
is the diffusion term. The transverse momenta play the role of the coordinates, and the logarithm of the energy plays the role of the imaginary time. The diffusion in the transverse momenta is then controlled by the second derivative of the kernel around its minimum. Therefore the BFKL resummation of the leading logarithms in the energy showed that the gluons `reggeize', i.e., they form composite objects at high energy, and that the amplitude is dominated by a Regge cut.
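The eigenvalue (\ref{eq:eigenvalue}) and its saddle-point expansion are easy to check numerically; the sketch below assumes a SciPy version whose digamma implementation accepts complex arguments (otherwise mpmath can be used instead):
\begin{verbatim}
import numpy as np
from scipy.special import digamma  # assumed to accept complex arguments

# Hedged check of the LO BFKL eigenvalue chi(nu) at n = 0 and of the
# expansion chi(nu) ~ 4 ln 2 - 14 zeta(3) nu^2 around the saddle point.
def chi(nu):
    return 2.0 * np.real(digamma(1.0) - digamma(0.5 + 1j * nu))

abar = 0.2                         # alpha_s N_c / pi, illustrative value
print(f"chi(0)  = {chi(0.0):.4f}  (4 ln 2 = {4 * np.log(2):.4f})")
print(f"omega_0 = {abar * chi(0.0):.3f}")
h = 1e-3                           # finite-difference curvature check
curv = (chi(h) - 2.0 * chi(0.0) + chi(-h)) / h ** 2
print(f"-chi''(0)/2 = {-0.5 * curv:.3f}  (14 zeta(3) = {14 * 1.2020569:.3f})")
\end{verbatim}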
Unfortunately, the BFKL leading logarithmic resummation turned out to be incompatible with the experimental data. The power behavior $s^{\omega_0}$ with $\omega_0=4 \ln2 \frac{\alpha_s N_c}{\pi}\simeq 0.5$ (say for typical values of $\alpha_s\simeq 0.2$) is much too strong not only for the total proton-proton cross sections but also for the growth of the structure function $F_2$ in Deep Inelastic Scattering of electrons on a proton target, where the behavior is roughly $F_2(x) \sim x^{-\lambda_{\rm eff}}\, , \; \lambda_{\rm eff}=0.2-0.3$\footnote{We mean here that the effective behavior can be parametrized by a power of this value. The data are very well described by the conventional renormalization group equations, which do not possess this type of singularity.}. Therefore it became clear that there is a need for the higher order terms. We will come back to this problem in Sec.~\ref{sec:resum}.
\section{Graviton and string theory in $AdS_5$ background}
\label{sec:string}
The graviton is thought to be a quantum of the gravitational field and, if it exists, it must be a particle of spin two, see for example \cite{Veltman:1975vx}.
Since it couples to the energy-momentum tensor, it cannot be a scalar. It cannot be a vector particle either,
since that would lead to a difference between particles and antiparticles, which contradicts the
experiments. It has to be a massless object, since gravity is a long range force. The universality of its couplings to particles can be shown by analyzing the amplitudes for the emissions of soft gravitons and employing Ward identities \cite{Weinberg:1964ew}. Then, directly
from the condition of energy-momentum conservation, it follows that all the couplings of gravitons to particles are equal. Therefore the principle of equivalence is a natural consequence of Lorentz invariance for massless spin 2 particles. In string theory the graviton emerges as a particular
closed string state.
The AdS/CFT conjecture gives a tool for analyzing gauge theories in a regime where the standard perturbative methods are insufficient. It states that two theories, a conformal field theory in $d=D-1$ dimensions
and a string theory in an anti-de Sitter space-time in $D$ dimensions, are related to each other \cite{Maldacena:2003nj}. More precisely, it states that the limits of these two different theories, which contain different degrees of freedom,
are interchanged when the coupling $g^2 N$ is varied. When the coupling $g^2 N \gg 1$, the gauge theory is strongly coupled but the string theory is weakly coupled. On the other hand, when $g^2 N \ll 1$ the gauge theory
is weakly coupled, but the gravity is strongly coupled. The conjecture relates the boundary values of the fields on the gravity side to the local operators on the gauge theory side.
The correspondence was checked in the particular case of the conformal field theory $N=4$ super Yang-Mills. This theory, apart from the gauge field $A_{\mu}$, contains also six scalar fields $\phi_i$
and four fermions $\chi_j$. All the fields transform in the adjoint representation. The theory is UV finite and the coupling does not run; this fact makes it quite different from QCD.
Nevertheless, the infrared regime is similar to that of QCD, and since the computations are easier in this theory, it can be thought of as a useful laboratory for QCD.
The high energy limit of the scattering amplitudes was investigated in the gravity dual, and it turned out that the exchange is dominated by the graviton state with $j_0=2$ \cite{Janik:1999zk,Brower:2006ea}.
What is also interesting is that the same diffusion pattern was found for the amplitude as in the weak coupling limit. This was interpreted as a diffusion in the fifth (radial) coordinate of the AdS space,
which on the gauge theory side corresponds to the diffusion in the transverse momenta along the ladder.
The only difference is in the value of the power and of the diffusion coefficient
\begin{eqnarray}
j_0= \omega_0+1& = & 2-\frac{2}{\sqrt{g^2 N}}, \; \;{\cal D}=\frac{1}{2\sqrt{g^2 N}}, \; \; g^2 N \gg 1 \label{strong} \\
j_0=\omega_0+1& = & 1+4 \ln2 \frac{\alpha_s N_c}{\pi}, \; \; {\cal D}=7 \zeta(3) \frac{\alpha_s N_c}{\pi}, \; \; g^2 N \ll 1
\label{weak}
\end{eqnarray}
where $j_0=\omega_0+1$, with $\omega_0$ as in the previous notation.
At small values of the coupling we have a linear increase with the coupling, according to the leading logarithmic
approximation (\ref{weak}). At large values of the coupling the intercept becomes exactly $2$, with a correction that vanishes as $1/\sqrt{g^2 N}\;\;$ (\ref{strong}).
We thus have two results which should be good approximations in two different regions of the coupling.
The problem is that they are totally disconnected from each other, and it is hard to see that they actually describe the same object.
\section{Resummation at small $x$}
\label{sec:resum}
The leading logarithmic approximation gave a very large value for the intercept of the Pomeron (\ref{weak}). Assuming a typical value of the coupling of about $0.2$, this calculation
gives $\omega_0 \simeq 0.5$, i.e., a Pomeron intercept $j_0 \simeq 1.5$. This is in blatant disagreement with the experimental data, especially the structure function data in deep inelastic scattering.
The next-to-leading correction \cite{NLLx} turned out to be very large,
$$
j_0=1+4 \ln 2 \frac{\alpha_s N_c}{\pi} (1-6.45 \frac{\alpha_s N_c}{\pi}) \;.
$$
Therefore it became immediately clear that there is a need for a resummation of this series. There are several sources of very large corrections. The first of them is the running coupling.
The coupling is fixed in the leading logarithmic calculation, since its running originates from gluon loop contributions which are subleading from the point of view of the leading logarithms in energy.
It starts to run only at the next-to-leading level (NLLx). The other important corrections include the kinematical constraint and the requirement of energy-momentum conservation.
The latter was shown \cite{Kwiecinski:1996td} to give an important contribution
even before the explicit NLLx calculation. Finally, there are also corrections coming from the quarks in the evolution.
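A quick numerical illustration of the size of the problem, for illustrative values of the coupling, is:
\begin{verbatim}
import numpy as np

# Hedged sketch: LO vs NLLx Pomeron power, using
# omega_0 = 4 ln 2 * abar * (1 - 6.45 * abar), abar = alpha_s N_c / pi.
for abar in (0.1, 0.15, 0.2):
    lo = 4.0 * np.log(2.0) * abar
    nll = lo * (1.0 - 6.45 * abar)
    print(f"abar = {abar:4.2f}: LO omega_0 = {lo:5.3f}, "
          f"NLLx omega_0 = {nll:6.3f}")
\end{verbatim}
Already at $\bar{\alpha}_s \simeq 0.2$ the correction overwhelms the leading term and even drives $\omega_0$ negative, which makes the need for a resummation evident.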
Here we will only consider the corrections which come from the kinematics, since they are common to both QCD and $N=4$ SYM theory. The kinematical constraint
comes from a more careful treatment of the final state phase space. The leading logarithmic approximation is performed both on the level of the amplitude and on the phase space
of the emitted gluons. A careful analysis shows that there is a region of momenta for which the emitted gluons are off-shell. The kinematical constraint imposed
onto the real emission part of the kernel corrects this problem. The result is an all-order resummation of subleading terms. In particular, it was shown that this
constraint is responsible for the triple collinear poles which appear in the next-to-leading calculation and which numerically constitute a large part of the corrections \cite{Salam:1998tj}.
It turns out that this is still insufficient, since energy-momentum is not conserved exactly. Various schemes were proposed \cite{resum}; here we will consider a very simple
model which has energy-momentum conservation imposed on the level of the eigenvalue \cite{ams}. It is a rather brute-force method, but it does qualitatively give the results
which are expected from the gravity calculation at strong coupling.
The anomalous dimensions in the usual renormalization group approach satisfy the constraint
\begin{eqnarray}
\gamma_{gg}(j=2)+2 N_f \gamma_{qg}(j=2) & = & 0 \; , \nonumber \\
\gamma_{gq}(j=2)+ \gamma_{qq}(j=2) & = & 0 \; ,
\label{eq:emco}
\end{eqnarray}
which is independent of the order of perturbation theory. In $N=4$ SYM the condition is much simpler
$$\gamma_{\rm uni}(j=2)=0 \; ,$$
where $\gamma_{\rm uni}$ is defined for example in \cite{Kotikov:2003fb,Kotikov:2004er}.
One can evaluate the anomalous dimension from the BFKL calculation by solving the equation (\ref{eq:polebfkl}) for $\gamma$. This anomalous dimension does not satisfy the energy-momentum constraint.
This is due to the fact that, as mentioned above, the approximations are made both on the level of the amplitude and on the level of the phase space integral.
A simple model that satisfies energy-momentum conservation was taken in \cite{ams} as
\begin{eqnarray}
1& =& \bar{\alpha}_s \, \gamma_{gg}(\omega) \, \chi(\omega,\gamma) \; ,\nonumber \\
\chi({\omega,\gamma}) & =& -2 \gamma_E - \psi(\gamma+\omega/2)-\psi(1-\gamma+\omega/2) \; .
\label{eq:simple_model}
\end{eqnarray}
The shifts in the arguments of the kernel eigenvalue come from the kinematical constraint. The anomalous dimension in front of the eigenvalue guarantees energy-momentum conservation
when $j=\omega+1=2$. The multiplicative model above is probably too naive. Nevertheless, it gives the result that the intercept becomes $2$ for large values of the coupling $\alpha_s$, see also \cite{Kotikov:2003fb,Kotikov:2004er}.
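The qualitative behaviour of such a model is easy to reproduce numerically. In the sketch below the true anomalous dimension is replaced by the toy form $\gamma_{gg}(\omega)=1/\omega-1$, which merely mimics its two essential features (the small-$\omega$ pole and the zero at $\omega=1$, i.e., at $j=2$); it is a stand-in for illustration, not the expression used in \cite{ams}:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

# Hedged toy version of the kinematically shifted eigenvalue at gamma = 1/2,
# multiplied by a toy anomalous dimension gamma_gg(omega) = 1/omega - 1.
def chi_shifted(omega, gamma=0.5):
    return (-2.0 * np.euler_gamma - digamma(gamma + omega / 2.0)
            - digamma(1.0 - gamma + omega / 2.0))

def intercept(abar):
    f = lambda w: abar * (1.0 / w - 1.0) * chi_shifted(w) - 1.0
    return 1.0 + brentq(f, 1e-6, 1.0 - 1e-9)   # j_0 = 1 + omega_0

for abar in (0.05, 0.2, 1.0, 10.0):
    print(f"abar = {abar:6.2f}: j_0 = {intercept(abar):.3f}")
\end{verbatim}
The resulting intercept grows monotonically with the coupling and approaches $j_0=2$ from below, in qualitative agreement with Fig.~\ref{fig:intercept}.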
\begin{figure}[htb]
\centerline{\epsfig{file=chieffresum.eps,width=0.7\textwidth}}
\caption{The solution for $\omega$ from Eq.~(\ref{eq:simple_model}). The fixed points result from the energy-momentum constraint.}
\label{fig:eigenvalue}
\end{figure}
\begin{figure}[htb]
\centerline{\epsfig{file=resum.eps,width=0.7\textwidth}}
\caption{The value of the intercept, calculated from the minimum of the resummed eigenvalue as a function of the coupling constant $\alpha_s N_c/\pi$.}
\label{fig:intercept}
\end{figure}
Solving this equation for $\omega$ gives the result shown in Fig.~\ref{fig:eigenvalue}. We see that the constraint forces the curve to have two fixed points. Unlike the leading logarithmic case, where the kernel eigenvalue can take arbitrary values at
large values of the coupling constant, the minimum of this kernel, $j_0=1+\omega_0$, is constrained to the interval $[0,2]$. The first correction goes as $1/\sqrt{\alpha_s}$ at large values of the coupling, compare (\ref{strong}).
The second derivative goes as $1/\alpha_s$, which is probably an artefact of this particular multiplicative simple model. One can evaluate the minimum of this eigenvalue as a function of the coupling constant,
which is shown in Fig.~\ref{fig:intercept}. We see that the model provides a very nice interpolation between the small and large values of the coupling.
The other interesting feature is the behavior
of the diffusion pattern in the weak and strong coupling limits. From (\ref{weak},\ref{strong}) we see that the diffusion coefficient vanishes both at weak and at strong values of the coupling. This is also clear from Fig.~\ref{fig:eigenvalue}. The vanishing
at small coupling is clear, since the coefficient is proportional to the coupling. At strong coupling the eigenvalue becomes very flat as it tends to a constant. The second derivative then vanishes in this limit.
The physical interpretation is that this region is dominated by soft gluons with vanishing energy. The qualitative behavior of the diffusion parameter as a function of the coupling is shown in Fig.~\ref{fig:diffusion}.
It is zero at $\alpha_s N_c=0$ and $\alpha_s N_c=\infty$, and it has to have a maximum at some intermediate value of $\alpha_s N_c$.
\begin{figure}[htb]
\centerline{\epsfig{file=diffusionc.eps,width=0.6\textwidth}}
\caption{The value of the diffusion coefficient as a function of the coupling constant.}
\label{fig:diffusion}
\end{figure}
\section{Conclusions}
In these lectures I gave a brief overview of the high energy limit in hadronic collisions. It is expected that the high energy limit is governed by the exchange of an
object with the quantum numbers of the vacuum, called the Pomeron. In QCD it can be calculated by the summation of the Feynman diagrams in the leading
logarithmic approximation in the logarithms of the energy. The result leads to a very strong increase of the amplitude with the energy, which is not
compatible with the experimental data. The resummation of the subleading corrections was shown to tame this rapid growth and to reduce the value of the intercept of the Pomeron.
A large part of the corrections comes from the exact treatment of the kinematics: the energy-momentum conservation constraint and the kinematical constraint.
By imposing these two constraints onto the kernel, one can show that there is a limit on the value of the Pomeron intercept when the coupling constant becomes infinite.
It corresponds to $\omega_0=1$, which is the value corresponding to the exchange of an object with spin two. Several important questions remain.
The unitarity corrections should become equally important in addition to the single Pomeron exchange \cite{Hatta:2007he}. The graviton itself emerges here as an object
which consists of very soft gluons, in the limit where the infrared divergences of the gauge theory cancel. The considerations so far were only performed at the level
of fixed coupling, in a model which is close to N=4 SYM theory rather than to QCD. Running coupling effects and mixing with quarks must be taken into
account when considering real QCD.
\section*{Acknowledgments}
I would like to thank the organizers of the Cracow School of Theoretical Physics for the opportunity to give this presentation and for the
very interesting school.
This research is supported by the U.S. D.O.E. under grant
number DE-FG02-90ER-40577 and by the
Polish Committee for Scientific
Research grant No. KBN 1 P03B 028 28.
\section*{Abstract}
Topological data analysis is a recent and fast growing field that approaches the analysis of datasets using techniques from (algebraic) topology. Its main tool, persistent homology (PH), has seen a notable increase in applications in the last decade. Often cited as the most favourable property of PH, and the main reason for its practical success, are the stability theorems that give theoretical results about noise robustness, since real data are typically contaminated with noise or measurement errors. However, little attention has been paid to what these stability theorems mean in practice. To gain some insight into this question, we evaluate the noise robustness of PH on the MNIST dataset of greyscale images. More precisely, we investigate to what extent PH changes under typical forms of image noise, and quantify the loss of performance in classifying the MNIST handwritten digits when noise is added to the data. The results show that the sensitivity to noise of PH is influenced by the choice of filtrations and persistence signatures (respectively the input and output of PH), and in particular, that PH features are often not robust to noise in a classification task.
\vskip 0.25cm
\noindent \emph{Keywords: topological data analysis, persistent homology, filtration, persistence signature, stability, noise robustness, image analysis, classification.}
\section{Introduction}
\label{section_introduction}
Homology goes back to the beginnings of topology in the influential papers of Poincaré, who represented the notion of the connectivity of a space with its cycles of different dimensions (e.g., 0-, 1-, and 2-dimensional cycles respectively correspond to connected components, loops, and cavities). These cycles are shown to organize themselves into abelian groups, called homology groups, and their ranks (referred to as the Betti numbers of the space) are non-negative integers corresponding to the number of independent cycles in each dimension \cite{epstein2011topological}. This homology information can be very useful, as it allows one to classify spaces and to uncover the underlying structure of a space. For a detailed study of homology, we refer to \cite{hatcher2005algebraic}.
Real data are a finite set of observations and do not directly reveal any topological information, since topological features are usually associated with continuous spaces. To circumvent this issue, the underlying topological structure of the data (e.g., a point cloud, a finite set of data points in space) can be estimated at different scales with a nested family of topological spaces, called a filtration. The filtration is used to calculate the information about $k$-dimensional cycles that \emph{persist} across different scales of data, referred to as persistent homology (PH) \cite{zomorodian2005computing, edelsbrunner2008persistent, edelsbrunner2010computational}. More precisely, $k$-dimensional PH registers the scale (also referred to as resolution, or time) at which every $k$-dimensional cycle appears and disappears in the filtration. This PH information can be represented using different signatures, e.g., using sets, vectors, functions, or scalars. The pipeline for PH is visualized in Figure~\ref{fig-pipeline}, and explained in greater detail in the next section. For a gentle, but detailed introduction to PH for a broad range of computational scientists, see \cite{otter2017roadmap}.
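As a minimal illustration of this pipeline (in the spirit of the bottom panel of Figure~\ref{fig-pipeline}), the sketch below computes the PH of a noisy circle from a Vietoris-Rips filtration; it assumes that the ripser.py package is available, and all numbers are illustrative:
\begin{verbatim}
import numpy as np
from ripser import ripser  # assumes the ripser.py package is installed

# Hedged sketch: persistence diagrams of a noisy circle (point cloud).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

dgms = ripser(X, maxdim=1)["dgms"]        # diagrams in dimensions 0 and 1
lifetimes = dgms[1][:, 1] - dgms[1][:, 0]
print("most persistent 1-cycle lifetime:", lifetimes.max())  # the loop
\end{verbatim}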
\begin{figure}[!h]
\centering
\small
\begin{tabular}{p{1.5cm} c p{7.5cm} c p{1.65cm} c p{1.65cm}}
\toprule
space && filtration && \multicolumn{3}{l}{PH, i.e., persistence signature} \\
$S$ & $\rightarrow$ & $S_{r_1} \subset S_{r_2} \subset S_{r_3} \subset \dots \subset S_{r_t}$ & $\rightarrow$ & $PD$ & ($\rightarrow$ & $PL$ or $PI$) \\
\midrule
\includegraphics[width=1.5cm]{figures/pipeline_image.png} &&
\includegraphics[width=7.5cm]{figures/pipeline_image_filtration.png} &&
\includegraphics[width=1.5cm]{figures/pipeline_image_PD.png} &&
\includegraphics[width=1.5cm]{figures/pipeline_image_PL.png} \\
\midrule
\includegraphics[width=1.5cm]{figures/pipeline_point_cloud.png} &&
\includegraphics[width=7.5cm]{figures/pipeline_point_cloud_filtration.png} &&
\includegraphics[width=1.5cm]{figures/pipeline_point_cloud_PD.png} &&
\includegraphics[width=1.5cm]{figures/pipeline_point_cloud_PI.png} \\
\bottomrule
\end{tabular}
\caption{Persistent homology pipeline. PH can be calculated for different types of spaces $S,$ which can represent a single data observation (typical for classification tasks) or a complete dataset. In this figure, we calculate the PH information for an image. The input for PH is a filtration, a nested family of spaces that approximate the structure of $S$ at different scales $r_1 < r_2 < \dots r_t.$ For example, to approximate the structure of an image at scale $r,$ we can look only at pixels within distance $r$ from the top left pixel (top panel). Alternatively, we can look at an image as a point cloud, and approximate its structure at resolution $r$ by constructing an edge between two points whenever they are within distance $r$ (bottom panel). For a homological dimension $k$ (in the figure, $k=1$), PH registers the birth and death time $r$ of every $k$-dimensional cycle (connected component, loop, void, etc.) within the filtration, and is commonly summarized with a scatter plot of birth and death coordinates, referred to as persistence diagram $(PD).$ It is often interesting or even necessary to transform the $PD$ into a different persistence signature, such as a persistence landscape ($PL,$ top panel) or a persistence image ($PI,$ bottom panel).}
\label{fig-pipeline}
\end{figure}
Over the past two decades, persistent homology has found many applications in data science, e.g., in the analysis of local behaviour of the space of natural images \cite{carlsson2008local}, analysis of images of hepatic lesions \cite{adcock2014classification}, human and monkey fibrin \cite{berry2018functional}, fingerprints \cite{giansiracusa2017persistent}, or diabetic retinopathy images \cite{garside2019topological}, analysis of 3D shapes \cite{skraba2010persistence, turner2014persistent}, neuronal morphology \cite{kanari2018topological}, brain artery trees \cite{bendich2016persistent, biscio2019accumulated}, fMRI data \cite{riecktopological, cassidy2015brain, stolz2018topological}, protein binding \cite{kovacev2016using}, genomic data \cite{camara2016inference}, orthodontic data \cite{heo2012topological}, coverage in sensor networks \cite{de2007coverage}, plant morphology \cite{li2018persistent}, fluid dynamics \cite{kramar2016analysis}, dynamical systems describing the movement of biological aggregations \cite{topaz2015topological}, cell motion \cite{bonilla2020tracking}, models of biological experiments \cite{ulmer2019topological}, force networks in granular media \cite{kramar2013persistence}, structure of amorphous and nanoporous materials \cite{nakamura2015persistent, lee2017quantifying}, spatial structure of the locus of afferent neuron terminals in crickets \cite{brown2012structure}, or the spread of the Zika virus \cite{lo2018modeling}. An exhaustive collection of applications of topological data analysis to real data can be found at \cite{giunti2021applications}.
The main reason behind the recent popularity of persistent homology in data analysis is its proven stability: PH is robust under small perturbations in the input, which is of crucial importance for practical applications due to the unavoidable presence of noise or measurement error in real data \cite{adams2017persistence}. Moreover, PH is commonly assumed to be a topological invariant and therefore robust under affine transformations.
However, it is often overlooked in the literature how strongly the stability theorems are influenced by the choice of:
\begin{itemize}
\item filtration, the input for PH, or the medium through which the homology information is extracted from data.
Indeed, it is important to remember that PH is not directly calculated on the data (e.g., an image, or a point cloud, see the first column in Figure~\ref{fig-pipeline}), but on the filtration that approximates the shape of data at different scales (see the second column in Figure~\ref{fig-pipeline}). The filtration must satisfy the underlying assumptions in the stability theorem, which then ensures robustness under minor perturbations of the input (the filtration), but not necessarily under minor perturbations of the data. Moreover, the level of robustness is directly determined by the filtration.
\item persistence signature, the output of PH, or the medium used to represent PH. Indeed, the stability theorems do not provide a guarantee of the noise robustness of PH in general, but rather prove the stability of a selected signature with its corresponding metric (a particular choice in the third or fourth column of Figure~\ref{fig-pipeline}).
\end{itemize}
In addition, the choice of filtration influences the type of information captured with PH: for some filtrations, PH can reveal geometric information and thus fail to be invariant under, e.g., rotation or translation. Furthermore, even if the stability theorem holds for the given filtration and signature, little attention has been paid to what these stability theorems mean in practice. In particular, it is unclear if the stability results imply the noise robustness of PH features in a classification task.
To investigate these issues, we carry out computational experiments that evaluate the noise robustness of PH on the MNIST dataset of greyscale images, under different types of noise. More precisely, the main objective of this work is to address the following research questions, across different filtrations and persistence signatures:
\begin{itemize}
\item[(RQ1)] How much does PH change under noise in the data?
\item[(RQ2)] How discriminative is PH if the data contains noise?
\end{itemize}
The findings of this paper can therefore help to guide the choice of appropriate filtrations and signatures, especially in the presence of noise in the data. To the best of our knowledge, this issue has not been studied in the literature so far. In the majority of studies that apply PH to tackle a particular problem (and are thus not concerned with noise robustness in particular), a single filtration and signature are commonly adopted, without a discussion on the motivation, assumptions, and implications behind the specific choice. There are a few noteworthy examples in the literature, such as \cite{garin2019topological}, which do consider multiple filtrations and/or signatures (on the MNIST dataset), but they focus on the discriminative power, rather than the noise robustness of PH features. The authors do conclude, however, that PH is reputed for its robustness to noise, and suggest conducting a similar study under different types of image noise \cite{garin2019topological}.
The next sections introduce the filtrations (Section~\ref{section_filtrations}), persistence signatures (Section~\ref{section_signatures}) and stability theorems (Section~\ref{section_stability_theorems}). We focus on a few common examples of filtrations and signatures that will be used in our computational experiments, and also discuss our choice of parameters. We then proceed to evaluate the robustness of PH on the MNIST image dataset of handwritten digits in Section~\ref{section_experiments}. The final section summarizes the findings and limitations of this work, and provides suggestions for future research.
\section{Filtrations}
\label{section_filtrations}
Persistent homology can be calculated for various types of space $S,$ whether it represents point cloud, time series, graph or image data. To extract the PH information from a space, one must define a suitable filtration. The construction of a filtration in general relies on structured complexes, a class of topological spaces that are particularly important in algebraic topology because their combinatorial nature allows for the computation of homology.
When the space is a point cloud $X \subset \mathbb{R}^n$, the most common choice for a structured complex is the simplicial complex, a set composed of simplices (points, line segments, triangles, tetrahedrons, and their $k$-dimensional counterparts, embedded in $\mathbb{R}^n$), that is closed under taking subsets (so that, for instance, if a triangle is in the simplicial complex, then all its edges and vertices are also elements of the simplicial complex) \cite{edelsbrunner2010computational, otter2017roadmap}. Probably the most well-known is the Vietoris-Rips simplicial complex $VR(X, r)$ \cite{vietoris1927hoheren}, built by constructing (i) a line segment for any pair of points in $X$ within distance $r$ of each other, (ii) a triangle, if the points in a triplet are all within distance $r$ of each other, and so forth. Different values of the so-called resolution parameter $r$ create different simplicial complexes and reveal different cycles. Hence, a single value of $r$ captures information about the space $X$ only at the given scale. However, the filtration $VR(X),$ defined as the nested family of subspaces
$$VR(X, r_1) \subseteq VR(X, r_2) \subseteq \dots \subseteq VR(X, r_t),$$
can be used to depict how $k$-dimensional cycles persist across different values $r_1 < r_2 < \dots < r_t$ of the resolution $r$ (see Figure~\ref{fig-pipeline}, bottom panel).
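Concretely, the Vietoris-Rips filtration and its persistent homology can be computed in a few lines with the python GUDHI library \cite{gudhi:urm} used in our experiments; the following is a minimal sketch (the point cloud and the parameter values are arbitrary):
\begin{verbatim}
import numpy as np
import gudhi

X = np.random.rand(100, 2)  # a point cloud in the plane

# Vietoris-Rips filtration up to resolution r = 0.5
rips = gudhi.RipsComplex(points=X, max_edge_length=0.5)
simplex_tree = rips.create_simplex_tree(max_dimension=2)

# multi-set of (dimension, (birth, death)) pairs
diagram = simplex_tree.persistence()
loops = simplex_tree.persistence_intervals_in_dimension(1)
\end{verbatim}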
A \emph{filtration} can alternatively be calculated using a \emph{filtration function} $\phi: \mathbb{R}^n \rightarrow \mathbb{R}$, simply by considering the sublevel sets of $\phi,$ determined by a scale cut-off $r:$
$$X_r = \{ \mathbf{y} \in \mathbb{R}^n \mid \phi(\mathbf{y}) \leq r \} \quad (r \in \mathbb{R}).$$
The Rips filtration is obtained with $\phi = \delta_X,$ where $\delta_X(\mathbf{y})$ is the minimum distance between $\mathbf{y} \in \mathbb{R}^n$ and any point $\mathbf{x} \in X$ on the point cloud. Indeed, according to the definition of the sublevel set of the distance function $\delta_X,$
$$X_r = \{ \mathbf{y} \in \mathbb{R}^n \mid \delta_X(\mathbf{y}) \leq r \} = \cup_{x \in X} B(\mathbf{x}, r),$$ where $B(\mathbf{x}, r)$ is a ball with radius $r$ centred around $\mathbf{x}$; this union of balls approximates $VR(X, r)$ \cite{chazal2011geometric, adams2017persistence}. However, the distance function $\delta_X$ is extremely sensitive to outliers and noise (``even one outlier is deadly'', or, in the language of robust statistics, the distance function has breakdown point zero \cite{chazal2017robust}). To circumvent this issue, \cite{chazal2011geometric} propose to rather consider the distance-to-a-measure (DTM) $\delta_{X, m}$ as the filtration function, which is defined as the average distance from a given number of nearest neighbours in $X$ (and is thus a smooth version of the distance function) \cite{anai2020dtm}. The number of neighbours that are considered is determined by the parameter $m,$ which represents a percentage of the total number of points in the point cloud $X.$ In our computational experiments, we will quantify how much robustness to noise is actually gained in practice with PH on the DTM (with $m=0.1,$ a commonly suggested and typical default value) compared to the Rips filtration.
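Since the DTM is simply an average distance to nearest neighbours, it can also be sketched directly; the following minimal illustration assumes the Euclidean distance and the averaging convention described above (the function name is ours, and recent versions of GUDHI provide analogous utilities):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def dtm(X, query_points, m=0.1):
    # average distance from each query point to its k nearest
    # neighbours in X, where k is a fraction m of the size of X
    k = max(1, int(np.ceil(m * len(X))))
    dists, _ = cKDTree(X).query(query_points, k=k)
    dists = np.asarray(dists).reshape(len(query_points), k)
    return dists.mean(axis=1)
\end{verbatim}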
In this paper, we are interested in calculating PH for an image. Let $Z$ be a greyscale image, i.e., $Z=[z_{uv}],$ where $z_{uv}$ is the greyscale value of the pixel $(u, v),$ $u \in \{1, 2, \dots, n_x \},$ $v \in \{1, 2, \dots, n_y \},$ $n_x$ and $n_y$ are the numbers of pixels in respectively $x$ and $y$ direction. We can consider the image $Z$ as a 2D point cloud $X(Z, z_0) \subset \mathbb{R}^2$ consisting of all $(u, v) \in \mathbb{R}^2$ corresponding to pixels with a greyscale value above a fixed user-given threshold $z_0.$ A possibility is then to define the filtration for the image $Z$ via the Rips (Figure~\ref{fig-pipeline}, bottom panel) or DTM filtration of the point cloud $X(Z, z_0).$
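This conversion takes only a couple of lines of numpy (a sketch; the function name is ours):
\begin{verbatim}
import numpy as np

def image_to_point_cloud(Z, z0):
    # coordinates (u, v) of all pixels with greyscale value above z0
    return np.argwhere(Z >= z0).astype(float)
\end{verbatim}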
However, a point cloud (and its simplicial complex) is not the most natural representation of an image. Indeed, this representation does not exploit the natural grid structure of images \cite{garin2019topological}. For an image $Z,$ we can rather consider its so-called cubical complex $K(Z)$, the cubical analogue to a simplicial complex, in which the role of simplices is played by cubes of different dimension (points, line segments, squares, cubes, and their $k$-dimensional counterparts) \cite{kaczynski2006computational}. The squares correspond to the image pixels, the edges to the sides of the pixels, and the points to the pixel corners.\footnote{Another way to construct a cubical complex from an image is to consider the dual of the cubical complex defined above: the points reflect the pixels, the line segments the intersections of pairs of non-diagonally neighbouring pixels, and squares reflect the intersections of four pixels \cite{skraba2020wasserstein}.}
To define a filtration function $\phi: K(Z) \rightarrow \mathbb{R},$ it is thus necessary to define the value of $\phi$ on each cube in $K(Z).$ A natural filtration function on a cubical complex assigns to each square the value of the image on the corresponding pixel, and we use $\phi(u,v)$ to denote the value of the filtration function on the square corresponding to pixel $(u, v).$ The filtration function on the line segments and points is defined as the minimum value of all bordering pixels.\footnote{A natural filtration function on the dual cubical complex assigns the pixel values as the values of the function on the points, and sets the function values for line segments and squares as the maximum value of all bordering simplices. These two methods differ with respect to the diagonally neighbouring pixels, as they are considered connected with the first approach, but not the second, which can result in substantially different persistent homology \cite{skraba2020wasserstein}.} From such a filtration function, it is straightforward to build a filtration $K_{r_1} \subseteq K_{r_2} \subseteq \dots \subseteq K_{r_t},$ where $K_r$ is the union of all cubes corresponding to pixels $(u, v)$ with $\phi(u, v) \leq r$ (Figure~\ref{fig-pipeline}, top panel, and Figure~\ref{fig-cubical-filtration}). This family of nested subspaces is commonly referred to as the filtered cubical complex. Note that this means that (the cubes corresponding to) the pixels with the lowest filtration function value appear first and persist the longest in the filtration.
\begin{figure}[!h]
\includegraphics[width=17cm]{figures/cubical_filtration.png}
\caption{Filtration on a cubical complex. The first image represents the values $[0, 100]$ of the filtration function $\phi$. The next nine figures show the cubical complexes $K_{10} \subseteq K_{20} \subseteq K_{30} \subseteq \dots \subseteq K_{90},$ where $K_r$ corresponds to the union of all cubes, i.e., pixels $(u, v)$ with the filtration value $\phi(u,v) \leq r.$ There is only one 1-dimensional cycle, i.e., hole (one-pixel hole in the third row and third column), which is first seen in $K_{40},$ and then disappears or closes in $K_{70}.$}
\label{fig-cubical-filtration}
\end{figure}
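In GUDHI, such a filtered cubical complex is built directly from the matrix of filtration function values on the pixels, i.e., on the top-dimensional cells; a minimal sketch (the filtration values are arbitrary):
\begin{verbatim}
import numpy as np
import gudhi

# filtration function values on the pixels (arbitrary example)
phi = np.array([[ 0., 10., 20.],
                [30., 90., 40.],
                [50., 60., 70.]])

cubical = gudhi.CubicalComplex(top_dimensional_cells=phi)
diagram = cubical.persistence()  # (dimension, (birth, death)) pairs
holes = cubical.persistence_intervals_in_dimension(1)
\end{verbatim}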
In this paper, we consider the following filtration functions $K(Z) \rightarrow \mathbb{R}$ on the cubical complex $K(Z)$ of an image $Z=[z_{uv}],$ described previously in \cite{garin2019topological}.
\begin{itemize}
\item binary: The binary filtration function considers binary values of pixels by introducing a greyscale threshold $z_0$:
$$
\phi_{z_0} (u, v) =
\begin{cases}
0 & z_{uv} \geq z_0 \\
1 & \text{otherwise.} \\
\end{cases}
$$
PH with respect to this filtration function corresponds to the homology of the image \cite{garin2019topological}, meaning that it only determines the \emph{number} of cycles (Betti numbers). It is of crucial importance that the greyscale threshold parameter $z_0$ is sufficiently low, so that all dark pixels are part of the filtration immediately at scale $r=0.$ Indeed, if only a single pixel along some hole has a greyscale value below the given threshold $z_0,$ this pixel will only become part of the filtration at resolution $r=1,$ like any other pixel in the image, so that the hole is never seen at any scale in the filtration.
\item greyscale: In order to study how cycles persist with respect to the greyscale value, a nonbinary filtration function is a more natural choice. The greyscale filtration function relates each pixel to its inverted greyscale value, so that the darkest pixels enter the filtration first:
$$\phi_{\text{grsc}}(u, v) = \max(Z) - z_{uv}.$$
An advantage of the greyscale compared to other considered filtrations is that it is parameter-free. In particular, it does not require an a-priori defined greyscale threshold. Next to the number of cycles, PH with respect to the greyscale filtration function thus also captures information about the brightness of the cycles.
\item density: If the greyscale value of a single pixel changes significantly (e.g., from black to white), an existing hole in an image might get disconnected, or an additional single-pixel hole might appear. To avoid such sensitivity to outlying greyscale values, we can rather consider the density filtration function. To this end, we relate each pixel to the number of ``dark-enough'' pixels in its neighbourhood. More precisely, let the neighbourhood $N((u, v), d_0, z_0)$ be the set of all pixels $(u', v')$ with $z_{u'v'} \geq z_0$ (for a given threshold $z_0)$, that are within a given distance $d_0$ of the pixel $(u, v):$
$$\| (u', v') - (u, v) \|_2 \leq d_0.$$
The density filtration function is then defined as:
$$\phi_{d_0, z_0}(u, v) = N(d_0) - |N((u, v), d_0, z_0)|,$$
where $N(d_0)$ is the total number of pixels within distance $d_0,$ for any $(u, v).$ The threshold parameter $z_0$ is not of crucial importance. For instance, if only one pixel along a hole is very bright, the hole will never be seen in the binary filtration, but it will persist from early on in the density filtration, for most of the values of $z_0.$ A good choice for the size of the neighbourhood $d_0$ obviously depends on the size of the image. For the dataset of $28 \times 28$ MNIST images, we take $d_0=1.$
\item radial: While the greyscale and density filtration capture information about the brightness of cycles, it is possible to capture other information. For example, the \emph{position} of cycles is captured with PH if one considers the radial filtration function defined as the distance from a given reference pixel $(u_0, v_0):$
$$\phi_{(u_0, v_0), z_0}(u, v) =
\begin{cases}
\| (u, v) - (u_0, v_0) \|_2 & z_{uv} \geq z_0 \\
\max_{(u', v')} \| (u', v') - (u_0, v_0) \|_2 & \text{otherwise} \\
\end{cases}
$$
Similarly to the binary filtration function, the greyscale threshold $z_0$ is crucial for the radial filtration as well, whereas the density and Rips filtration are less sensitive to this parameter (point cloud points corresponding to non-neighbouring pixels can still be connected with an edge, for a sufficiently large resolution $r$). However, to be consistent, we take the same threshold value $z_0 = 0.5 \max(Z)$ for the Rips, DTM, binary, density and radial filtrations.
The choice of the reference pixel $(u_0, v_0)$ depends on where the important topological features are expected to be located in an image, and how this location differs across classes of data. For instance, if we consider $(u_0, v_0)$ to be a pixel in the centre of the image, the holes in digits 6 and 9 would be seen at the same resolution $r$ in the filtration. Since we aim to differentiate between digits 6 and 9, we will consider $(u_0, v_0)=(0, 0).$
\end{itemize}
Figure~\ref{fig-filtrations} visualizes the filtration functions discussed in this section.
\begin{figure}[!h]
\includegraphics[width=17cm]{figures/filtrations.png}
\caption{Filtrations. The first plot shows an example MNIST image $Z,$ with greyscale pixel values in $[0, 250].$ The next four plots respectively show the heatmap for the binary, greyscale, density and radial filtration function $\phi: K(Z) \rightarrow \mathbb{R},$ where $K(Z)$ is the cubical complex corresponding to the given example image. The final two plots visualize the heatmap of $\phi: K(Z) \rightarrow \mathbb{R},$ where $\phi$ is the discretized version of the Rips and DTM filtration functions $\delta_{X(Z, z_0)}: \mathbb{R}^2 \rightarrow \mathbb{R}$ and $\delta_{X(Z, z_0), m}: \mathbb{R}^2 \rightarrow \mathbb{R},$ and $X(Z, z_0)$ is the point cloud corresponding to the image $Z.$}
\label{fig-filtrations}
\end{figure}
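For reference, the four cubical filtration functions above admit a direct implementation that mirrors their definitions; the following is a minimal numpy sketch (function names are ours, and pixel coordinates follow the row-column convention):
\begin{verbatim}
import numpy as np
from scipy.ndimage import convolve

def binary_filtration(Z, z0):
    # 0 on pixels at least as dark as the threshold, 1 elsewhere
    return np.where(Z >= z0, 0.0, 1.0)

def greyscale_filtration(Z):
    # inverted greyscale: the darkest pixels enter the filtration first
    return Z.max() - Z

def density_filtration(Z, d0, z0):
    # number of "missing" dark pixels within distance d0 of each pixel
    r = int(np.floor(d0))
    du, dv = np.mgrid[-r:r + 1, -r:r + 1]
    kernel = (du ** 2 + dv ** 2 <= d0 ** 2).astype(int)
    counts = convolve((Z >= z0).astype(int), kernel,
                      mode='constant', cval=0)
    return kernel.sum() - counts

def radial_filtration(Z, u0, v0, z0):
    # distance from the reference pixel; bright pixels receive the
    # maximum distance over the whole image
    uu, vv = np.indices(Z.shape)
    dist = np.sqrt((uu - u0) ** 2 + (vv - v0) ** 2)
    return np.where(Z >= z0, dist, dist.max())
\end{verbatim}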
Persistent homology information in dimension $k$ captures the values of resolution $r$ at which each $k$-dimensional cycle is born and at which it dies in a filtration, denoted with $b$ and $d$. The cardinality of this multi-set of persistence intervals $(b_i, d_i)$ $(i \in \mathbb{N}^+)$ counts the number of $k$-dimensional cycles (although many, or even a majority, might only show up in the filtration for a brief while, i.e., for a small range of resolution values $r,$ yielding very short lifespans or persistence $l_i=d_i-b_i$; as will be explained later in the paper, such cycles can thus be considered irrelevant). However, the choice of the filtration defines the interpretation of the birth and death values, which can reflect additional topological or geometric information, such as the size or position of cycles (Figure~\ref{fig-ph-filtrations}).
\begin{figure}[!h]
\centering
\scalebox{0.84}{
\begin{tabular}{p{1.5cm} p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{1.75cm} p{2.5cm}}
\toprule
filtration & \multicolumn{7}{c}{1-dimensional PH} & topological or geometric information about a hole \\ \midrule
& \multicolumn{7}{l}{\includegraphics[width=14.5cm]{figures/choice_of_filtration_images.png}} & \\ \midrule
binary
& \footnotesize $\emptyset$
& \footnotesize (0, 1)
& \footnotesize (0, 1)
& \footnotesize {\noindent \color{blue} (0, 1)}
& \footnotesize {\noindent \color{blue} $\emptyset$}
& \footnotesize (0, 1)
& \footnotesize (0, 1) \newline (0, 1)
& - \\ \midrule
greyscale
& \footnotesize $\emptyset$
& \footnotesize (17, 255) \newline (159, 171)
& \footnotesize (17, 255) \newline (159, 171)
& \footnotesize {\noindent \color{blue} (0, 255)}
& \footnotesize {\noindent \color{blue} $\emptyset$}
& \footnotesize (14, 255) \newline (213, 231) \newline
& \footnotesize (9, 255) \newline (19, 255) \newline (65, 86) \newline (223, 242) \newline (234, 255)
& greyscale along \newline and within hole \\ \midrule
density
& \footnotesize $\emptyset$
& \footnotesize (1, 5)
& \footnotesize (1, 5)
& \footnotesize (2, 5)
& \footnotesize (3, 5)
& \footnotesize (1, 5) \newline (3, 4) \newline (4, 5)
& \footnotesize (1, 5) \newline (2, 5) \newline (3, 4)
& density along \newline and within hole \\ \midrule
radial
& \footnotesize $\emptyset$
& \footnotesize (25.00, 38.18)
& \footnotesize (25.00, 38.18)
& \footnotesize {\noindent \color{blue} (24.76, 38.18)}
& \footnotesize {\noindent \color{blue} $\emptyset$}
& \footnotesize (20.00, 38.18)
& \footnotesize (20.12, 38.18) \newline (24.60, 38.18)
& position \\ \midrule
Rips
& \footnotesize (1.00, 1.41)*
& \footnotesize (1.00, 1.41)* \newline {\color{red} (1.00, 8.94)} \newline (2.00, 2.24)
& \footnotesize (1.00, 1.41)* \newline {\color{red} (1.00, 6.40)} \newline (2.00, 2.24) \newline {\color{red} (5.00, 6.00)}
& \footnotesize (1.00, 1.41)* \newline (1.00, 5.00)
& \footnotesize (1.00, 1.41)* \newline (2.00, 5.00)
& \footnotesize (1.00, 1.41)* \newline (1.00, 4.12) \newline (2.83, 3.00)
& \footnotesize (1.00, 1.41)* \newline (1.00, 3.00) \newline (1.00, 3.60) \newline (2.83, 3.00)
& sparsity, \newline distance across \newline hole \\ \midrule
DTM
& \footnotesize (4.13, 4.36)**
& \footnotesize (4.40, 4.60)** \newline (4.94, 12.81)
& \footnotesize (4.08, 4.38)** \newline (11.51, 11.95) \newline (4.94, 12.32)
& \footnotesize (4.08, 4.33)** \newline (5.49, 8.60)
& \footnotesize (4.39, 4.54)** \newline (6.10, 8.74)
& \footnotesize (4.16, 4.58)** \newline (6.17, 6.19) \newline (4.16, 7.43)
& \footnotesize (4.02, 4.33)** \newline (6.43, 6.51)** \newline (4.34, 6.57) \newline (5.00, 7.43)
& sparsity, \newline size \\
\bottomrule
\end{tabular}
}
\caption{Persistent homology across filtrations. Persistent homology is a multi-set of persistence intervals $(b_i, d_i),$ where $b_i$ and $d_i$ are respectively the time when a cycle $i$ (a connected component, loop, void, etc.) is born, and when it dies in a filtration. The table lists 1-dimensional PH calculated for a few example MNIST images (or an image with an outlying pixel), across selected filtrations. The notation $(b, d)^*$ implies that multiple cycles appear and disappear at the same time (thus, PH is a \emph{multi}-set, where each element has its multiplicity). The notation $(b, d)^{**}$ implies that there are multiple intervals with a similar birth and death value. The cardinality of the set of persistence intervals determines the number of cycles. However, the definition of the filtration implies the interpretation of birth and death times, so that PH with different filtrations captures different topological (and geometric) information, which further influences its noise robustness and discriminating power. For example, an additional point at an outlying distance from a point cloud can have an important influence on PH with the Rips filtration (e.g., an additional black pixel within a hole will change the persistence of that hole, see persistence intervals in red), but this is less true for the DTM filtration, as the outlier will have a large distance from the nearest point cloud neighbours and will thus appear only very late in the filtration. A reverse example is a pixel with an outlying greyscale value (e.g., white pixel in a dark region) which has an important influence on PH with the binary, greyscale and radial filtration (in blue), but much less for the density, Rips and DTM filtration. If geometric information is captured, PH becomes sensitive under some affine transformations. Furthermore, 1-dimensional PH with binary, greyscale and density filtration cannot differentiate between digits 0, 6 and 9 (as they all have one hole of similar brightness), but the radial filtration allows one to discriminate between digits 6 and 9 (as the holes have a different position), and the Rips and DTM filtrations make it possible to distinguish between 0 and 6 (as the holes are of different size).}
\label{fig-ph-filtrations}
\end{figure}
Obviously, the choice of filtration has an important influence on the noise robustness and discriminative power of PH. If PH only registers the \emph{number} of holes, it is of topological nature and is invariant under rotations, translations, or stretching (in topology, a coffee mug is equivalent to a donut), which can be useful in some applications, such as recognition of animals, cars, or people in images. If PH also captures the position of holes, it is sensitive to rotation, but able to differentiate, e.g., between digits 6 and 9. If the size of the holes is also captured, the PH information is not robust to rescaling, but it enables us to differentiate between a 6 and a 0.
In this paper, we consider the \textbf{binary-, greyscale-, density-, and radial-filtered cubical complexes, and the Rips and DTM-filtered simplicial complexes} as the input for PH. Other filtrations have been introduced in the literature, such as the kernel distance or kernel density estimate (which inherit some reconstruction properties of DTM) \cite{phillips2013geometric}, dilation (the smallest distance to a black pixel, thus representing a cubical analogue to the Rips filtration), erosion (inverse dilation), signed distance (a combination of dilation and erosion) \cite{garin2019topological}, etc. Our goal is to emphasize how PH with different filtrations captures different information, and our selection is thus sufficient to illustrate this issue.
\section{Persistence signatures}
\label{section_signatures}
As already mentioned, $k$-dimensional persistent homology is a multi-set of intervals $(b_i, d_i),$ with $b_i$ and $d_i$ corresponding to the scale $r \in \mathbb{R}$ when a $k$-dimensional cycle $i$ appears and disappears in the filtration. In order to represent this information visually, or to apply statistical inference or machine learning on PH, different methods are available, where the multi-set of birth-death values is represented using diagrams, functions, vectors, or even scalars. Below we provide an intuitive introduction to the persistence signatures used in this paper, which are the most common in the literature, and refer the reader to the relevant references for more details.
\begin{itemize}
\item \textbf{persistence diagram} ($PD$): Persistence diagram is the most straightforward representation of PH as a scatter plot of points $(b_i, d_i),$ counted with their multiplicity, together with all points on the diagonal, counted with infinite multiplicity \cite{edelsbrunner2000topological, cohen2007stability} (Figure~\ref{fig-pipeline}).
An advantage of $PD$s compared to other persistence signatures is that they are parameter-free, but they also have an important disadvantage: they are not convenient for statistical inference, because their complicated structure makes common algebraic operations (such as addition, division, and multiplication) challenging \cite{berry2018functional} (so that, for instance, the mean might not be unique \cite{mileyko2011probability}). Furthermore, although $PD$s can be equipped with a metric structure (discussed below), which makes it possible to apply some machine learning techniques such as certain clustering algorithms, many other machine learning tools (decision tree classification, neural networks, feature selection, some dimension reduction methods, and others) require more than a metric structure: they require a vector space \cite{adams2017persistence}.
\item \textbf{(vectorized) persistence landscape} ($PL$): Persistence landscape is a function $\lambda: \mathbb{N} \times \mathbb{R} \rightarrow \mathbb{R}$ obtained by ``stacking isosceles triangles'' whose bases are the PH intervals $(b_i, d_i),$ and whose heights reflect the so-called lifespans (or persistence) $l_i=d_i-b_i$ (Figure~\ref{fig-pipeline}, top panel). Alternatively, it may be thought of as a sequence of functions $\lambda_j: \mathbb{R} \rightarrow \mathbb{R},$ where $\lambda_j(r) = \lambda(j, r)$ depicts how long the $j$-th most dominant cycle has lived until the moment $r$ in the filtration $(r-b_i)$, or how long from $r$ before it dies $(d_i-r)$ (Figure~\ref{fig-pipeline}, top panel, two functions in green and orange).
In contrast to $PD$s, persistence landscapes lie in a Banach space, and are thus easy to combine with tools from statistics: they obey a strong law of large numbers and a central limit theorem, and the space of landscapes does have a unique mean \cite{bubenik2015statistical}. However, for many machine learning tasks, it is necessary to consider finite vectors rather than functions, and a discretization of the function $\lambda$ into a vector $PL$ requires two additional parameters: we need to decide on the maximum number of first landscape functions $\lambda_j$ to consider, and on the number of points where each of these functions is evaluated, referred to as the landscape resolution. The number of main connected components or holes in the MNIST dataset is typically 0, 1 or 2. However, additional cycles might appear in noisy images, and we thus consider the first 10 landscapes $(j \in \{1, 2, \dots, 10 \});$ although we immediately note that this means that $PL$s and $PD$s do not necessarily capture the same information (e.g., if there are more than 10 cycles in this case). Obviously, this number should be higher if we expect a large number of important cycles that discriminate between classes of data. We set the landscape resolution to $100.$
\item \textbf{persistence image} ($PI$): Persistence image is constructed by superimposing a grid over a $PD,$ and depicting the volume below the weighted sum of Gaussian probability density functions, on each grid cell \cite{adams2017persistence} (Figure~\ref{fig-pipeline}, bottom panel). This is a more sophisticated variant of counting the number of cycles in each of the grid bins \cite{rouse2015feature}. Since there are no points below the $PD$ diagonal, it makes sense to first apply a linear transformation which transforms the multiset of birth-death $(b, d)$ to birth-persistence $(b, d-b)=(b, l)$ coordinates. Each of the Gaussian functions is centered at a point $(b, l),$ with the height of the peak influenced by a given non-negative weight function $\rho: \mathbb{R}^2 \rightarrow \mathbb{R}.$
Typically, $\rho$ reflects some information about the cycles, and it usually depends only on the vertical persistence coordinate $l$ (corresponding to the lifespan of the cycle, $l=d-b$); we choose $\rho(b, l)=l^2.$ In our experiments, we consider a grid of size $10 \times 10,$ and set the Gaussian function variance to $5\%$ of the maximum death value in $PD$s for the given filtration function and homological dimension. An important advantage of $PI$s is their flexibility, since it is possible to tweak their definition with different grid resolution, weight function, but also different probability density function (and their associated parameters). However, this requirement to make a choice about the $PI$ parameters is also its weakness, since the choice is noncanonical \cite{adams2017persistence}.
\end{itemize}
A more detailed, step-by-step procedure to construct $PL$s and $PI$s from $PD$s can be found in the literature, see, for example, \cite[Figure 6, Figure 7]{stolz2018topological}. Figures~\ref{fig-noise-robustness-example-homdim0} and \ref{fig-noise-robustness-example-homdim1} show 0- and 1-dimensional $PD$s, $PL$s and $PI$s for an example MNIST image, across selected filtrations.
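Both vectorizations are available in the gudhi.representations module; the following minimal sketch uses the parameter values adopted in this paper (the example diagram and the bandwidth value are arbitrary, and we assume the module's convention that the weight function acts on birth-persistence coordinates):
\begin{verbatim}
import numpy as np
from gudhi.representations import Landscape, PersistenceImage

# an example diagram: birth-death pairs with finite death values
diagram = np.array([[0.0, 2.0], [0.5, 1.0], [1.5, 3.0]])

# first 10 landscape functions, each sampled at 100 points
pl = Landscape(num_landscapes=10,
               resolution=100).fit_transform([diagram])

# 10 x 10 persistence image with weight rho(b, l) = l^2
pi = PersistenceImage(bandwidth=0.15,
                      weight=lambda x: x[1] ** 2,
                      resolution=[10, 10]).fit_transform([diagram])
\end{verbatim}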
In order to evaluate the noise robustness of PH, we are interested in computing the distance between PH information of two images. These two images will, for example, be the non-noisy and noisy version of an image. Different metrics can be considered on the space of any persistence signatures. The most common distance between $PD$s is the Wasserstein distance:
\begin{equation} \label{eq_def_wasserstein_distance}
W_{p}(PD_1, PD_2) = \inf_\tau \Big( \sum_i \| (b_i, d_i) - \tau(b_i, d_i) \|_\infty^p \Big)^\frac{1}{p},
\end{equation}
where the infimum is taken across all bijections $\tau: PD_1 \rightarrow PD_2,$ and the sum across all persistence intervals $(b_i, d_i) \in PD_1$ \cite{skraba2020wasserstein}. There exists a bijection between any two $PD$s, since it is possible to add as many points on the $PD$ diagonal as necessary \cite{cohen2007stability}. This notion of distance is popular in computer vision \cite{cohen2010lipschitz}, and it is the common metric for the optimal transportation problem \cite{kantorovich2006translocation} (with a bijection $\tau$ from $PD_1$ to $PD_2$ corresponding to a transport plan). In the computational experiments, we will consider the Wasserstein $W_2$ metric between $PD$s, and the $l_2=\|\cdot\|_2$ metric for the vector persistence signatures $PL$s and $PI$s. The parameter $p$ in both $W_p$ and $l_p$ determines the importance of long compared to short distances.
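The $W_2$ distance between two diagrams can be computed, for instance, with the gudhi.wasserstein module (which relies on the POT optimal transport library); a minimal sketch with the $l_\infty$ ground metric, as in the definition above:
\begin{verbatim}
import numpy as np
from gudhi.wasserstein import wasserstein_distance

pd1 = np.array([[0.0, 1.0], [2.0, 3.5]])
pd2 = np.array([[0.0, 1.2]])

d = wasserstein_distance(pd1, pd2, order=2.0, internal_p=np.inf)
\end{verbatim}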
Furthermore, for a chosen $p,$ the choice of persistence signature also influences the importance of cycle lifespans. Indeed, it is easy to see that the Wasserstein $W_p^p$ and $l_p^p$ distances between $PD$s and $PL$s or $PI$s corresponding to $PH_1=\{(b,d)\}$ and $PH_2=\emptyset$ reflect $(d-b)^p,$ $(d-b)^{p+1}$ and $\rho^p(b, d-b).$ Since we consider $\rho(b, d-b)=(d-b)^2$ as the weight function for persistence images, this means that the cycles that persist for a short time matter the least for $PI$s, and the most for $PD$s (Table~\ref{tab-signs}).
\begin{table}[!ht]
\caption{Persistent homology across signatures. The choice of persistence signature, and the corresponding metric, determines how sensitive PH is to cycles $(b, d)$ with short persistence, or lifespan, $l=d-b.$ The table lists the limiting behaviour, or growth rate, of the function $\delta^2(PH, \emptyset) = \delta^2(\{(b,d)\}, \emptyset) = f(d-b)=f(l),$ where distance $\delta$ represents the Wasserstein $W_2$ distance between persistence diagrams, or $l_2$ distance between persistence landscapes or persistence images. The growth rate reflects the importance of a cycle with lifespan $l,$ which influences the noise robustness and discriminative power of PH.}
\centering
\begin{tabular}{ll}
\toprule
{\bf Persistence signature} & {\bf Limiting behaviour of $\delta^2(PH, \emptyset)$} \\
\midrule
\noalign{\smallskip}
$PD$ & $O(l^2)$ \\
$PL$ & $O(l^3)$ \\
$PI$, with weight function $\rho(b, l)=l^2$ & $O(l^4)$ \\
\bottomrule
\end{tabular}
\label{tab-signs}
\end{table}
The choice of persistence signature, and the corresponding metric, therefore has an important influence on the noise robustness and discriminative power of PH, although, surprisingly, little research has been carried out in this area before \cite{fasy2020comparing}. Recently, \cite{fasy2020comparing} evaluated the overlap between the $l_p$ distances between persistence landscapes and persistence images, and the Wasserstein $W_p$ distances between persistence diagrams, on three different datasets (including MNIST images). The results clearly show that the distances between vectorized persistence summaries greatly differ from the distances between $PD$s. Another recent and detailed investigation of the distance correlation between different persistence signatures can be found in \cite{turner2020same}: the authors conclude that the considered signatures are ``same but different'', as they commonly contain the same information, but are shown to yield different results from statistical analyses since they lie in different metric spaces. In addition, the classification accuracy is shown to vary greatly when distances between shapes are given by the distances between their $PD$s, $PL$s or $PI$s in \cite[Table 1]{adams2017persistence}.
Some other persistence signatures have been introduced in the literature, such as: Betti numbers (across scales) \cite{islambekov2019harnessing, umeda2017time}; silhouette \cite{chazal2014stochastic}, which combines all layers of persistence landscape functions into a single, weighted average function, with greater weight assigned to features with longer lifespans \cite{berry2018functional}; persistence intensity function \cite{chen2015statistical}, which is evaluated on a grid to obtain a persistence image \cite{berry2018functional}; or Euler characteristic, the difference between the number of connected components and the number of holes (across scales) \cite{li2018persistent}. As already indicated in the introduction, persistent homology information can also be summarized with a scalar, for instance with: amplitude, the distance from the empty persistence diagram \cite{garin2019topological}; entropy, a real number calculated using the lifespans of all features \cite{garin2019topological}, which thus only depends on the persistence but not on the particular birth or death times; or an algebraic function of $b_i$ and $d_i-b_i$, e.g., $\sum b_i ^p (d_i-b_i)^q,$ so that $p$ and $q$ determine the importance of some of the qualities of cycles (e.g., size of holes) \cite{adcock2013ring}. To avoid the difficult task of choosing among the ``zoo of persistence signatures'' \cite{turner2020same}, one can learn the best vector summary of a persistence diagram (with, e.g., PersLay, a simple neural network layer \cite{carriere2020perslay}, or ATOL, an unsupervised vectorization method \cite{royer2019atol}). We do not adopt this approach, as our goal is to illustrate the differences in the noise robustness of PH across signatures. For this purpose, we investigate their behaviour separately, and limit our study to common persistence signatures.
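As an illustration of one of these scalar summaries, persistent entropy can be computed directly from the lifespans; a minimal sketch (assuming all death values are finite):
\begin{verbatim}
import numpy as np

def persistent_entropy(diagram):
    # diagram: array of (birth, death) pairs with finite deaths
    lifespans = diagram[:, 1] - diagram[:, 0]
    p = lifespans / lifespans.sum()
    return -np.sum(p * np.log(p))
\end{verbatim}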
\section{Stability theorems}
\label{section_stability_theorems}
Stability theorems are among the most important results in applied and computational topology \cite{skraba2020wasserstein}, as they may be viewed as a precise statement about robustness to noise \cite{cohen2007stability}: stable representations of PH are not sensitive to noise in the input.
More precisely, for persistence diagrams calculated with respect to filtration functions $\phi$ and $\psi$, a stability theorem ensures that there exists a constant $c \in \mathbb{R}$ such that:
$$W_p(PD(\phi), PD(\psi)) \leq c \|\phi-\psi\|_p.$$
The stability of $PD$s was first proved for $p = \infty$ (the easiest case, since $W_\infty$ is the least sensitive to details in the diagrams \cite{edelsbrunner2010computational}), under some mild conditions on the underlying space $S$ and the filtration functions $\phi$ and $\psi$ \cite{cohen2007stability, chazal2009proximity, dlotko2018rigorous}. A few years later, the stability was shown to hold for large enough $p$ and under additional assumptions \cite{cohen2010lipschitz}, and recently, for any $p$ \cite{skraba2020wasserstein}.
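The $p=\infty$ case is also easy to verify empirically: perturbing each value of a cubical filtration function by at most $\varepsilon$ changes the bottleneck ($W_\infty$) distance between the resulting diagrams by at most $\varepsilon$. A minimal sketch of such a check (the filtration values are arbitrary):
\begin{verbatim}
import numpy as np
import gudhi

rng = np.random.default_rng(0)
phi = rng.random((28, 28))
psi = phi + rng.uniform(-0.05, 0.05, size=phi.shape)

def diagram(f, dim=1):
    cc = gudhi.CubicalComplex(top_dimensional_cells=f)
    cc.persistence()
    return cc.persistence_intervals_in_dimension(dim)

d = gudhi.bottleneck_distance(diagram(phi), diagram(psi))
assert d <= 0.05  # stability: W_inf <= ||phi - psi||_inf
\end{verbatim}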
A stability theorem for other persistence signatures $PH$ states the following:
$$\|PH(\phi) - PH(\psi)\|_p \leq c W_p(PD(\phi), PD(\psi)).$$
Persistence landscapes are shown to be stable for large enough $p$ \cite[Theorem 13, Theorem 16]{bubenik2015statistical}, but this fails to be true for $p=2$ \cite[Theorem 7.7]{skraba2020wasserstein}. Stability of persistence images holds for $p=1$ (under some assumptions on the weight function $\rho$) \cite{adams2017persistence}, but not for $p=2$ \cite[Theorem 3]{reininghaus2015stable}, \cite[Remark 6]{adams2017persistence}.
In the remainder of this section, we discuss the importance of the choice of filtration, signature, and dataset in the interpretation of stability theorems, that is often overlooked in the literature. This discussion then motivates our computational experiments in the next section.
\subsection{Stability theorems and the choice of filtration}
\label{section_stability_theorems_subsection_filtration}
The choice of filtration plays a crucial role in the existence and practical value of stability theorems. First of all, in order for the stability theorem to hold for a particular filtration, the filtration function must satisfy the underlying assumptions.
Second, note that the stability theorems ensure that PH is robust under minor perturbations of its input, the filtration, and not under minor perturbations in the data space itself. Small changes in the space do not always imply small changes in the filtration function, so that stability theorems provide no guarantee of robustness in such a scenario. For instance, if $Z'$ is obtained by changing the image $Z$ only slightly, $\|\delta_{X(Z, z_0)} - \delta_{X(Z', z_0')}\|_p$ can be large (and it corresponds to the Gromov-Hausdorff distance between point clouds $X(Z, z_0)$ and $X(Z', z_0'),$ for $p=\infty$ \cite{chazal2017introduction}). Although $PD$s are theoretically stable with respect to the Rips filtration (with the distance function $\delta_{X(Z, z_0)}: \mathbb{R}^n \rightarrow \mathbb{R}$ as its filtration function), the upper bound for $W_p(PD(\delta_{X(Z, z_0)}), PD(\delta_{X(Z', z_0')}))$ is so large that it makes little sense in practice: these $PD$s are sensitive to outliers.
Finally, stability theorems are worst-case results, as they do not necessarily ensure tightness of the upper bound provided for the distance between PH information. This is true even if small perturbations in the data result only in small perturbations of the filtration. Let us consider an image $Z,$ and another image $Z'$ obtained with some transformation $\pi: Z \rightarrow Z'.$ If we apply the stability theorem to the space $Z$ and filtration functions $\phi_{\text{grsc}}: K(Z) \rightarrow \mathbb{R}$ and $\psi_{\text{grsc}} = \phi_{\text{grsc}} \circ \pi: K(Z) \rightarrow \mathbb{R},$ the right-hand side of the stability theorem, $\|\phi_{\text{grsc}}-\psi_{\text{grsc}}\|_p$ (which precisely corresponds to the change in the greyscale values of the image), is an upper bound for the change in $PD$s.
If $Z$ is the MNIST image of digit 6, and $Z'$ the same image but with one pixel changed from black to white (Figure~\ref{fig-ph-filtrations}), then $\|\phi_{\text{grsc}} - \psi_{\text{grsc}}\|_p=255$ is sufficiently large to allow $PD(\phi_{\text{grsc}})$ with one hole to change to $PD(\psi_{\text{grsc}})$ with no holes. However, if $Z$ is the MNIST image of a digit 0, and $Z'$ the same image but with one pixel changed from white to black (Figure~\ref{fig-ph-filtrations}), we again have $\|\phi_{\text{grsc}}-\psi_{\text{grsc}}\|_p = 255,$ but $PD$ remains unchanged. As another example, we can consider $Z'$ to be the translated image $Z,$ when $\|\phi - \psi\|_p$ is large for both the greyscale and radial filtration functions. However, $W_p(PD(\phi), PD(\psi))$ is zero when $\phi$ is greyscale (as $PD$s then only register the number and brightness of cycles), but it is large for the radial filtration function (which also captures the position of cycles).
\subsection{Stability theorems and the choice of persistence signature}
\label{section_stability_theorems_subsection_signature}
It is clear from the introduction of this section that stability theorems only hold for some signatures, and some metrics. We already mentioned that $PL$s and $PI$s are shown not to be stable with respect to the $l_2$ metric in \cite{skraba2020wasserstein}, although this is the standard choice in applications, when it is commonly assumed that these are stable representations. This is one of the reasons why \cite{skraba2020wasserstein} recently emphasized that ``the stability theorems are one of the most misunderstood and miscited results within the field of topological data analysis''. It is, however, interesting to see if the stability holds in practice, and to which degree.
\subsection{Stability theorems and the choice of dataset}
\label{section_stability_theorems_subsection_dataset}
If the stability theorem holds for a chosen filtration and persistence signature, it does not imply the noise robustness of PH features in a classification task; this depends on the application domain, i.e., the choice of dataset.
Let us go back to the example of $Z$ being the MNIST image of digit 6, and $Z'$ being the same image but with one pixel changed from black to white (Figure~\ref{fig-ph-filtrations}). As already indicated, the upper bound for the greyscale filtration is large enough to allow $PD(\phi_{\text{grsc}})$ with one hole to change to $PD(\psi_{\text{grsc}})$ with no holes. This is problematic for the classification of the MNIST dataset using $PD$s, since any image contains none, one or two holes, but it would pose less of an issue if there were a greater variety in the number of holes across data classes.
\section{Results and discussion}
\label{section_experiments}
We start this section by describing the dataset of greyscale images, and the different types of noise considered in our experiments. In the next subsection, we investigate how sensitive the persistent homology information is to these types of noise, by evaluating the distance between PH for noisy and non-noisy images. This information, however, paints only a part of the picture, since in practical use cases, the PH information must also vary sufficiently among data points in order to form discriminative features in, e.g., classification tasks. In the final subsection, we thus investigate the noise robustness of persistent homology together with its discriminative power, by evaluating the drop in classification accuracy when the test data consists of noisy rather than non-noisy images.
The code is available at \url{https://renata-turkes.github.io/}. The code allows to replicate our study, or to easily investigate the noise robustness of PH with different filtrations and persistence signatures, and their parameters and corresponding metric, for other datasets of greyscale images, types of noise and/or classifiers.
\subsection{(Noisy) Datasets}
\label{section_experiments_subsection_noisy_data}
We consider the MNIST dataset \cite{lecun1998mnist}, as it is a well-defined benchmark of greyscale images, and the shape of each of the digits is well understood. To reduce the computation time, we restrict the study to the first 1000 images in the dataset. We investigate three types of affine transformations, changes in image brightness and contrast, and three types of pure noise transformations, each at two different levels, and in different directions, if applicable (Table~\ref{tab-noise}).\footnote{The greyscale pixel values are clipped to the interval [0, 255].}
\begin{table}[!ht]
\caption{Image noise.}
\centering
\begin{tabular}{p{4cm}p{11cm}}
\toprule
{\bf Transformation} & {\bf Definition of transformation} \\
\midrule
rotation & Rotation by 45 degrees clockwise (rotation 45), or 90 degrees counterclockwise (rotation -90). \\ \midrule
translation & Translation by 1 pixel right and down (translation 1 1), or 2 pixels left and up (translation -2 -2). \\ \midrule
stretch, shear and reflect & Stretch, shear and flip respectively by a factor of 1.5 (i.e., an image is scaled down by a factor of 1.5 in the $x$ direction, whereas it remains unchanged in the $y$ direction, so that the image is stretched), 10 degrees and horizontal (stretch-shear-flip 1.5 10 h), or by a factor 0.75, -20 degrees and vertical (stretch-shear-flip 0.75 -20 v). \\ \midrule
brightness & -50 or 100 is added to the greyscale value of each pixel (brightness -50 and brightness 100, respectively). \\ \midrule
contrast & Greyscale value of each pixel is multiplied with 2 or 0.5 (contrast 2 and contrast 0.5, respectively). \\ \midrule
gaussian noise & Random noise drawn from normal distribution $\mathcal{N}(0, 10)$ or $\mathcal{N}(0, 20)$ is added to the greyscale value of each pixel (gaussian noise 10 and gaussian noise 20, respectively). \\ \midrule
salt and pepper noise & 5\% or 10\% of random pixels in an image are changed, with equal probability, to either white (i.e., salt) or black (i.e., pepper) (salt and pepper noise 5 and salt and pepper noise 10 respectively). \\ \midrule
shot noise & Greyscale value of each pixel is replaced with a random number drawn from the Poisson distribution, with the distribution mean corresponding to the original greyscale pixel value, scaled down with a factor 50 (shot noise 50) or 100 (shot noise 100), as Poisson distribution is spread out more for lower means. Since the Poisson distribution with mean zero is equal to zero, the shot noise only changes non-white pixels. \\
\bottomrule
\end{tabular}
\label{tab-noise}
\end{table}
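For concreteness, the three pure noise transformations from Table~\ref{tab-noise} admit a short numpy sketch that follows the definitions above (function names are ours; for the shot noise, we assume that the pixel values are scaled back up after the Poisson sampling):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(Z, sigma):
    return np.clip(Z + rng.normal(0.0, sigma, size=Z.shape), 0, 255)

def salt_and_pepper_noise(Z, percent):
    out = Z.astype(float).copy()
    n = int(percent / 100 * Z.size)
    idx = rng.choice(Z.size, size=n, replace=False)
    out.flat[idx] = rng.choice([0.0, 255.0], size=n)  # pepper or salt
    return out

def shot_noise(Z, scale):
    # Poisson sample with mean Z / scale, rescaled to [0, 255]
    return np.clip(rng.poisson(Z / scale) * scale, 0, 255)
\end{verbatim}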
For every (non-noisy and noisy) dataset, i.e., for each image in each of the datasets, we calculate the values of filtration functions on each pixel, and the 0- and 1-dimensional persistent homology\footnote{For 0-dimensional homology, we truncate the death value of infinite intervals to the maximum finite death value for the given filtration function, across all transformations.} information with respect to all considered filtrations and persistence signatures (with the specified values of the parameters) using the python GUDHI library \cite{gudhi:urm}.
\subsection{Noise robustness}
\label{section_experiments_subsection_noise_robustness}
The goal of this section is to understand in what way, and to which degree, the persistent homology information is sensitive to noise, across different filtrations and persistence signatures. In order to address this question, we start by visualizing the different filtration functions and persistent homology information for an \emph{example} MNIST image, under different data transformations (Figures~\ref{fig-noise-robustness-example-homdim0} and \ref{fig-noise-robustness-example-homdim1}). We can conclude the following.
\begin{itemize}
\item affine transformations (rotation, translation, stretch-shear-flip): PH on the binary and greyscale filtrations remains unchanged under any affine transformation\footnote{In the computational experiments, the affine transformations sometimes slightly disturb the greyscale values, so that, e.g., some cycles can appear or disappear in an image (see, for instance, the additional one-pixel hole for the binary filtration under rotation $45$ in Figure~\ref{fig-noise-robustness-example-homdim1}).}, since it only registers the number and brightness of cycles (it is a topological invariant). However, under stretch-shear-flip the density along and within a hole changes, which results in a change of birth and death values for 1-dimensional cycles with respect to the density filtration. The radial filtration function captures the position of the cycles, so that the birth and death values of cycles can also change significantly under any affine transformation. PH on the Rips and DTM filtrations is robust under rotation and translation. However, PH with the Rips and DTM filtrations captures the size of cycles, and is thus sensitive to affine transformations that rescale the image. For instance, under a stretch-shear-flip that enlarges a digit, the number of point cloud points increases, resulting in many additional short persisting 0-dimensional cycles for these filtrations. The death value of 1-dimensional cycles for the Rips and DTM filtrations also changes under stretch-shear-flip, as PH in this case reflects the size of the hole.
\item brightness: PH on binary and radial filtration does not see important changes if the brightness of an image is adjusted. However, a change in image brightness does result in changes of the birth or death values in 1-dimensional PH on greyscale or density filtration, and additional cycles can be captured with density filtration. A change of thickness of a digit also results in additional 0-dimensional cycles for Rips and DTM filtration, that are of short persistence, but there are many. For these filtrations, there is also a minor change in the death value for 1-dimensional cycles, as it captures the size of the hole that can change under a change in brightness.
\item contrast: PH with respect to most of the considered filtrations is invariant under changes in the contrast of an image. The only exception is 1-dimensional PH with greyscale filtration, where the birth or death value of cycles can change.
\item salt and pepper noise: Gaussian, salt and pepper, and shot noise change the greyscale value of some random pixels. For each black pixel that the salt and pepper noise introduces on a white background, a new one-pixel connected component (a long persisting 0-dimensional cycle) appears for PH on the binary, greyscale, and radial filtrations. If a pixel in a neighbourhood of black pixels is changed to white, an additional long persisting 1-dimensional cycle (one-pixel hole) can appear for PH on these filtrations. Also, an existing hole in the non-noisy image may become disconnected in the noisy image, and thus not registered. The additional 0-dimensional cycles are all born at birth value 0 for PH on the Rips filtration, but they die earlier (as soon as they are connected to another point cloud point), so that Rips is more robust under this transformation, but still severely impacted by the outliers. One-pixel or disconnected holes are not an issue for PH on the Rips filtration, but the death value of 1-dimensional cycles can decrease due to the additional pixels within a hole (see also the image of digit 0, and the same image with a single outlier in Figure~\ref{fig-ph-filtrations}). PH on the DTM filtration is significantly more robust to salt and pepper noise, as the outliers are ``washed out''.
\item gaussian noise: The gaussian noise produces a similar type of perturbation as the salt and pepper noise, but the change in the greyscale value is much less prominent, so that no additional cycles are typically seen with the binary or radial filtration (which take the binary image as input), and any additional cycles have a very low persistence for the greyscale filtration.
\item shot noise: Shot noise only changes the non-white pixels (to lighter or darker), so that a digit might become disconnected into a few components, a hole might become disconnected, and many one-pixel holes may appear. The additional 0-dimensional cycles have a long lifespan for the binary and radial filtrations, but a short one for PH on the greyscale filtration (or more precisely, their lifespan is directly related to the strength of the change of the greyscale pixel values) and the density filtration. 1-dimensional PH with these filtrations exhibits similar behaviour. As already mentioned, PH with the Rips and DTM filtrations is more robust under this type of noise, since disconnected components or holes can still be captured, as the Rips and DTM filtrations connect non-neighbouring pixels once the resolution $r$ in the filtration is sufficiently large.
\end{itemize}
$PD$s, $PL$s and $PI$s reflect the same information about the cycles, and Figures~\ref{fig-noise-robustness-example-homdim0} and \ref{fig-noise-robustness-example-homdim1} show that they change accordingly. However, without considering the metric on these spaces of persistence signatures, we cannot derive any insights regarding the difference in the noise robustness from these figures.
\begin{figure}[!h]
\centering
\scalebox{0.725}{
\begin{tabular}{p{1.5cm} p{0.45cm} p{0.45cm} p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm} p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}}
\toprule
\rotatebox{90}{filtration} & \rotatebox{90}{persistence signature} & \rotatebox{90}{no noise} & \rotatebox{90}{rotation 45} & \rotatebox{90}{rotation -90} & \rotatebox{90}{translation 1 1} & \rotatebox{90}{translation -2 -2} & \rotatebox{90}{stretch-shear-flip 1.5 10 h} & \rotatebox{90}{stretch-shear-flip 0.75 -20 v} & \rotatebox{90}{brightness -50} & \rotatebox{90}{brightness 100} & \rotatebox{90}{contrast 2} & \rotatebox{90}{contrast 0.5} & \rotatebox{90}{gaussian noise 10} & \rotatebox{90}{gaussian noise 20} & \rotatebox{90}{salt and pepper noise 5} & \rotatebox{90}{salt and pepper noise 10} & \rotatebox{90}{shot noise 50} & \rotatebox{90}{shot noise 100} \\
\midrule
- & - & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_image.png}}} \\
\midrule
\multirow{7}{1.5cm}{binary} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_binary.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_binary_0-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_binary_0-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_binary_0-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{greyscale} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_grsc.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_grsc_0-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_grsc_0-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_grsc_0-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{density} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_density.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_density_0-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_density_0-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_density_0-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{radial} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_radial.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_radial_0-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_radial_0-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_radial_0-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{Rips} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_Rips.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_Rips_0-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_Rips_0-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_Rips_0-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{DTM} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_DTM.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_DTM_0-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_DTM_0-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_DTM_0-dim_PI.png}}} \\
\bottomrule
\end{tabular}
}
\caption{Noise robustness of 0-dimensional persistent homology on an example image. Illustration of the effect of various image transformations when the image is represented with its filtration function values (1st row of each filtration), its 0-dimensional persistence diagram (2nd row), persistence landscape (3rd row), or persistence image (4th row).}
\label{fig-noise-robustness-example-homdim0}
\end{figure}
\begin{figure}[!h]
\centering
\scalebox{0.725}{
\begin{tabular}{p{1.5cm} p{0.45cm} p{0.45cm} p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm} p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}p{0.45cm}}
\toprule
\rotatebox{90}{filtration} & \rotatebox{90}{persistence signature} & \rotatebox{90}{no noise} & \rotatebox{90}{rotation 45} & \rotatebox{90}{rotation -90} & \rotatebox{90}{translation 1 1} & \rotatebox{90}{translation -2 -2} & \rotatebox{90}{stretch-shear-flip 1.5 10 h} & \rotatebox{90}{stretch-shear-flip 0.75 -20 v} & \rotatebox{90}{brightness -50} & \rotatebox{90}{brightness 100} & \rotatebox{90}{contrast 2} & \rotatebox{90}{contrast 0.5} & \rotatebox{90}{gaussian noise 10} & \rotatebox{90}{gaussian noise 20} & \rotatebox{90}{salt and pepper noise 5} & \rotatebox{90}{salt and pepper noise 10} & \rotatebox{90}{shot noise 50} & \rotatebox{90}{shot noise 100} \\
\midrule
- & - & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_image.png}}} \\
\midrule
\multirow{7}{1.5cm}{binary} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_binary.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_binary_1-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_binary_1-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_binary_1-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{greyscale} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_grsc.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_grsc_1-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_grsc_1-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_grsc_1-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{density} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_density.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_density_1-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_density_1-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_density_1-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{radial} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_radial.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_radial_1-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_radial_1-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_radial_1-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{Rips} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_Rips.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_Rips_1-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_Rips_1-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_Rips_1-dim_PI.png}}} \\
\midrule
\multirow{7}{1.5cm}{DTM} & - & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_DTM.png}}} \\
& $PD$ & \multicolumn{17}{l}{\raisebox{-0.3cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_DTM_1-dim_PD.png}}} \\
& $PL$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_DTM_1-dim_PL.png}}} \\
& $PI$ & \multicolumn{17}{l}{\raisebox{-0.35cm}{\includegraphics[width=14.7cm]{figures/noise_robustness_example_DTM_1-dim_PI.png}}} \\
\bottomrule
\end{tabular}
}
\caption{Noise robustness of 1-dimensional persistent homology on an example image. Illustration of the effect of various image transformations when the image is represented with its filtration function values (1st row of each filtration), its 1-dimensional persistence diagram (2nd row), persistence landscape (3rd row), or persistence image (4th row).}
\label{fig-noise-robustness-example-homdim1}
\end{figure}
Furthermore, the discussion above is largely based on a single example image. We therefore calculate the ($l_2$ or $W_2$) distance between each image in the dataset and its noisy variant, when images are represented with their filtration functions or persistent homology information (Table~\ref{tab-noise-robustness-distances}); a sketch of this computation is given below. The results on the complete dataset align well with the findings discussed above for the example image. In addition, Table~\ref{tab-noise-robustness-distances} clearly shows that, for any given filtration and homological dimension, the relative robustness of $PD$s, $PL$s and $PI$s differs across transformations. For instance, 0-dimensional PH on Rips filtration is more sensitive to salt and pepper noise than to shot noise for any persistence signature, but this difference is much more pronounced for $PL$s, and in particular for $PI$s, compared to $PD$s.
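As an illustration, the distances reported in Table~\ref{tab-noise-robustness-distances} can be computed along the following lines. This is a sketch reusing the (hypothetical) helpers \texttt{signatures} and \texttt{shot\_noise} from the snippets above; \texttt{gudhi}'s Wasserstein distance additionally requires the POT package.
\begin{verbatim}
import numpy as np
from gudhi.wasserstein import wasserstein_distance  # requires POT

# img: a greyscale image array (placeholder); the table reports
# these distances averaged across 1000 MNIST images.
pd0, pl0, pi0 = signatures(img)                  # clean image
pd1, pl1, pi1 = signatures(shot_noise(img, 50))  # noisy variant

d_pd = wasserstein_distance(pd0, pd1, order=2, internal_p=2)  # W_2
d_pl = np.linalg.norm(pl0 - pl1)                              # l_2
d_pi = np.linalg.norm(pi0 - pi1)                              # l_2
\end{verbatim}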
\begin{table}[!ht]
\caption{Noise robustness of persistent homology on 1000 MNIST greyscale images. The table shows the distance $\|\phi - \psi\|_2$ between the filtration function values on the non-noisy and noisy image (1st row of each filtration), the Wasserstein distance $W_2(PD(\phi), PD(\psi))$ between 0- or 1-dimensional persistence diagrams (2nd and 5th row), the distance $\|PL(\phi) - PL(\psi)\|_2$ between persistence landscapes (3rd and 6th row), and the distance $\|PI(\phi) - PI(\psi)\|_2$ between persistence images (4th and 7th row), averaged across 1000 images in the MNIST dataset.}
\centering
\scalebox{0.6}{
\begin{tabular}{p{1.5cm} p{0.5cm} p{0.5cm} rrrrrrrr rrrrrrrr}
\toprule
\rotatebox{90}{\bf filtration} & \rotatebox{90}{\bf homological dimension} & \rotatebox{90}{\bf persistence signature} & \rotatebox{90}{\bf rotation 45} & \rotatebox{90}{\bf rotation -90} & \rotatebox{90}{\bf translation 1 1} & \rotatebox{90}{\bf translation -2 -2} & \rotatebox{90}{\bf stretch-shear-flip 1.5 10 h} & \rotatebox{90}{\bf stretch-shear-flip 0.75 -20 v} & \rotatebox{90}{\bf brightness -50} & \rotatebox{90}{\bf brightness 100} & \rotatebox{90}{\bf contrast 2} & \rotatebox{90}{\bf contrast 0.5} & \rotatebox{90}{\bf gaussian noise 10} & \rotatebox{90}{\bf gaussian noise 20} & \rotatebox{90}{\bf salt and pepper noise 5} & \rotatebox{90}{\bf salt and pepper noise 10} & \rotatebox{90}{\bf shot noise 50} & \rotatebox{90}{\bf shot noise 100} \\
\midrule
\multirow{ 7 }{2cm}{ binary } & - & - & 11.4 & 11.95 & 9.12 & 11.9 & 10.59 & 12.28 & 2.73 & 5.63 & 4.31 & \textbf{0.00} & 1.45 & 2.11 & 4.42 & 6.24 & 5.0 & 6.34 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & 0.01 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 0.02 & 0.01 & 0.01 & 0.02 & 0.01 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 1.80 & 2.43 & 0.17 & 0.64 \\
&& $ PL $ & 0.06 & \textbf{0.00} & \textbf{0.00} & 0.01 & 0.16 & 0.10 & 0.09 & 0.18 & 0.12 & \textbf{0.00} & 0.04 & 0.03 & 12.03 & 12.17 & 1.42 & 5.16 \\
&& $ PI $ & 10.82 & \textbf{0.00} & \textbf{0.00} & 1.91 & 27.37 & 15.92 & 17.19 & 30.56 & 21.01 & \textbf{0.00} & 5.73 & 5.73 & 8430.76 & 15135.00 & 282.02 & 1395.47 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & 0.04 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 0.05 & 0.05 & 0.03 & 0.11 & 0.08 & \textbf{0.00} & 0.01 & 0.02 & 0.32 & 0.52 & 0.67 & 0.66 \\
&& $ PL $ & 0.33 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 0.37 & 0.40 & 0.27 & 0.91 & 0.64 & \textbf{0.00} & 0.10 & 0.17 & 2.61 & 4.23 & 5.45 & 5.33 \\
&& $ PI $ & 18.59 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 20.20 & 20.20 & 14.14 & 51.72 & 35.36 & \textbf{0.00} & 5.05 & 8.89 & 170.32 & 333.36 & 503.68 & 494.59 \\
\midrule
\multirow{ 7 }{2cm}{ grsc } & - & - & 2454.9 & 2707.85 & 1906.79 & 2679.12 & 2335.78 & 2656.99 & 1268.72 & 2646.74 & 653.29 & 3295.03 & 207.45 & 412.55 & 1104.09 & 1560.62 & 814.52 & 1092.34 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & 22.23 & 0.04 & \textbf{0.00} & 0.34 & 26.36 & 24.21 & 1.64 & 13.44 & 13.64 & 12.78 & 46.77 & 92.04 & 454.15 & 611.47 & 90.59 & 125.29 \\
&& $ PL $ & 65.97 & 0.18 & \textbf{0.00} & 1.89 & 94.07 & 73.28 & 8.72 & 50.54 & 53.11 & 44.99 & 88.45 & 236.47 & 3059.20 & 3102.04 & 449.45 & 710.91 \\
&& $ PI $ & 2.46 & \textbf{0.00} & \textbf{0.00} & 0.10 & 3.44 & 2.78 & 0.66 & 1.88 & 2.33 & 1.69 & 5.67 & 19.13 & 795.05 & 1427.76 & 23.79 & 44.92 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & 19.29 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 22.02 & 21.96 & 22.24 & 46.86 & 26.27 & 52.87 & 17.31 & 34.23 & 77.16 & 118.86 & 97.97 & 132.88 \\
&& $ PL $ & 125.50 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 147.35 & 133.64 & 165.39 & 314.50 & 162.01 & 331.13 & 58.95 & 121.08 & 516.37 & 810.41 & 532.87 & 803.11 \\
&& $ PI $ & 19.24 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 18.80 & 18.97 & 20.95 & 20.49 & 19.09 & 19.23 & 13.05 & 19.03 & 29.18 & 49.70 & 32.54 & 57.68 \\
\midrule
\multirow{ 7 }{2cm}{ density } & - & - & 43.71 & 46.95 & 28.47 & 44.68 & 39.94 & 47.96 & 6.65 & 15.38 & 11.08 & \textbf{0.00} & 3.29 & 4.8 & 10.1 & 14.79 & 12.12 & 17.68 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & 0.60 & \textbf{0.00} & \textbf{0.00} & 0.01 & 0.90 & 0.69 & 0.61 & 0.78 & 0.70 & \textbf{0.00} & 0.29 & 0.41 & 1.75 & 2.35 & 1.52 & 2.10 \\
&& $ PL $ & 2.40 & 0.01 & 0.01 & 0.04 & 3.84 & 2.72 & 2.51 & 3.13 & 2.78 & \textbf{0.00} & 1.15 & 1.62 & 7.35 & 10.66 & 7.15 & 11.04 \\
&& $ PI $ & 6.10 & 0.01 & 0.01 & 0.08 & 11.77 & 7.37 & 6.96 & 8.60 & 7.30 & \textbf{0.00} & 2.42 & 3.58 & 22.52 & 33.69 & 21.81 & 35.23 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& PD & 0.32 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 0.45 & 0.39 & 0.25 & 0.60 & 0.43 & \textbf{0.00} & 0.09 & 0.17 & 0.66 & 1.21 & 0.69 & 1.06 \\
&& $ PL $ & 1.78 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 2.68 & 2.04 & 1.44 & 3.75 & 2.58 & \textbf{0.00} & 0.51 & 0.94 & 2.84 & 5.04 & 3.73 & 5.39 \\
&& $ PI $ & 7.04 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 10.41 & 7.81 & 5.64 & 16.54 & 11.27 & \textbf{0.00} & 1.95 & 3.64 & 8.32 & 17.30 & 12.49 & 14.21 \\
\midrule
\multirow{ 7 }{2cm}{ radial } & - & - & 205.27 & 216.86 & 172.73 & 202.44 & 196.24 & 233.76 & 49.75 & 102.54 & 78.48 & \textbf{0.00} & 25.93 & 38.34 & 84.06 & 119.18 & 90.77 & 115.35 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & 3.30 & 4.17 & 1.67 & 3.17 & 3.74 & 6.46 & 0.48 & 1.19 & 0.86 & \textbf{0.00} & 0.20 & 0.29 & 34.17 & 46.66 & 3.84 & 12.07 \\
&& $ PL $ & 19.65 & 27.58 & 11.64 & 21.85 & 23.97 & 44.58 & 2.20 & 6.59 & 4.40 & \textbf{0.00} & 0.86 & 1.12 & 214.34 & 277.03 & 20.39 & 68.70 \\
&& $ PI $ & 18.11 & 23.74 & 16.06 & 23.37 & 19.76 & 33.39 & 1.57 & 5.97 & 3.60 & \textbf{0.00} & 0.51 & 0.87 & 92.99 & 154.38 & 6.78 & 21.56 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & 1.44 & 1.37 & 0.59 & 1.18 & 1.51 & 2.56 & 0.57 & 2.00 & 1.36 & \textbf{0.00} & 0.22 & 0.35 & 5.36 & 8.83 & 11.45 & 11.37 \\
&& $ PL $ & 8.07 & 7.33 & 3.60 & 6.51 & 8.36 & 13.99 & 3.00 & 10.82 & 7.31 & \textbf{0.00} & 1.19 & 1.88 & 29.14 & 48.57 & 63.27 & 62.90 \\
&& $ PI $ & 4.40 & 4.12 & 3.23 & 4.25 & 4.33 & 6.26 & 1.01 & 4.01 & 2.59 & \textbf{0.00} & 0.39 & 0.66 & 9.44 & 17.20 & 24.42 & 24.86 \\
\midrule
\multirow{ 7 }{2cm}{ Rips } & - & - & 75.71 & 95.39 & 28.75 & 55.67 & 81.14 & 84.12 & 6.55 & 14.88 & 10.92 & \textbf{0.00} & 3.3 & 4.79 & 89.41 & 106.4 & 10.27 & 12.58 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & 0.82 & 0.01 & 0.01 & 0.09 & 2.88 & 2.85 & 1.39 & 2.85 & 2.18 & \textbf{0.00} & 0.50 & 0.63 & 7.72 & 8.48 & 1.95 & 3.29 \\
&& $ PL $ & 0.21 & \textbf{0.00} & \textbf{0.00} & 0.07 & 0.37 & 0.25 & 0.24 & 0.46 & 0.31 & \textbf{0.00} & 0.11 & 0.09 & 35.45 & 30.59 & 2.11 & 4.77 \\
&& $ PI $ & 2.96 & 0.01 & 0.02 & 0.37 & 29.10 & 29.33 & 7.04 & 28.41 & 16.83 & \textbf{0.00} & 1.40 & 1.85 & 129.83 & 163.86 & 10.19 & 22.72 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & 0.51 & \textbf{0.00} & \textbf{0.00} & 0.02 & 1.21 & 1.05 & 0.64 & 1.34 & 1.01 & \textbf{0.00} & 0.25 & 0.34 & 1.15 & 1.80 & 1.21 & 1.56 \\
&& $ PL $ & 0.83 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 2.11 & 1.27 & 0.69 & 1.83 & 1.22 & \textbf{0.00} & 0.25 & 0.42 & 3.16 & 5.21 & 1.85 & 2.59 \\
&& $ PI $ & 0.68 & 0.01 & \textbf{0.00} & \textbf{0.00} & 2.12 & 1.68 & 0.68 & 2.05 & 1.28 & \textbf{0.00} & 0.24 & 0.38 & 1.71 & 2.77 & 1.56 & 2.46 \\
\midrule
\multirow{ 7 }{2cm}{ DTM } & - & - & 66.74 & 86.45 & 25.79 & 51.24 & 70.8 & 73.56 & 4.35 & 7.6 & 5.99 & \textbf{0.00} & 2.14 & 3.08 & 22.51 & 36.64 & 6.77 & 9.22 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & 1.37 & 0.01 & 0.01 & 0.10 & 3.73 & 4.06 & 1.57 & 2.97 & 2.16 & \textbf{0.00} & 0.81 & 1.05 & 4.00 & 5.88 & 2.21 & 3.30 \\
&& $ PL $ & 1.22 & \textbf{0.00} & 0.01 & 0.08 & 5.28 & 4.78 & 1.60 & 4.19 & 2.69 & \textbf{0.00} & 0.64 & 0.87 & 6.33 & 9.94 & 2.82 & 5.52 \\
&& $ PI $ & 1.66 & \textbf{0.00} & \textbf{0.00} & 0.12 & 14.20 & 12.01 & 3.38 & 12.00 & 7.33 & \textbf{0.00} & 0.81 & 1.08 & 6.89 & 13.81 & 4.93 & 10.58 \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & 0.60 & \textbf{0.00} & \textbf{0.00} & 0.02 & 1.22 & 1.23 & 0.59 & 1.08 & 0.83 & \textbf{0.00} & 0.31 & 0.41 & 0.85 & 1.16 & 0.83 & 1.08 \\
&& $ PL $ & 0.86 & \textbf{0.00} & \textbf{0.00} & 0.02 & 2.32 & 2.26 & 0.79 & 1.32 & 1.01 & \textbf{0.00} & 0.38 & 0.55 & 1.22 & 1.92 & 1.39 & 2.05 \\
&& $ PI $ & 0.23 & \textbf{0.00} & \textbf{0.00} & \textbf{0.00} & 0.60 & 0.56 & 0.18 & 0.36 & 0.25 & \textbf{0.00} & 0.09 & 0.14 & 0.30 & 0.52 & 0.42 & 0.65 \\
\bottomrule
\end{tabular}
}
\label{tab-noise-robustness-distances}
\end{table}
Finally, Table~\ref{tab-noise-robustness-distances} implies that stability theorems do not necessarily provide useful information about stability in practice. For example, the average value of $\|\phi_{\text{grsc}} - \psi_{\text{grsc}}\|_2$ is $2707.85$ under rotation by $-90$ degrees, but only $412.55$ under gaussian noise with parameter $20$. Nevertheless, the distance between PH on the noisy and non-noisy images is close to zero for rotation, while it is much larger under gaussian noise. The reason is that stability theorems only bound the change in PH from above by the change in the filtration function: a large distance between filtration functions therefore need not translate into a large distance between persistence signatures.
\subsection{Noise robustness and discriminative power}
\label{section_experiments_subsection_discriminative_power}
In the previous section, we assessed the distances between images and their noisy versions. In practical applications, however, these distances ought to be compared to the distances between the images in (other classes of) the dataset, since it is this comparison that determines the discriminative power in a classification task. Therefore, in this section, we discuss the noise robustness together with the discriminative power of persistent homology, across different filtrations and persistence signatures, for non-noisy and noisy datasets.
To do so, we investigate how much the performance of a classifier (more specifically, a Support Vector Machine (SVM)) drops when noise is added to the dataset. Since $PD$s are multisets rather than vectors, they cannot be fed to an SVM directly, and we therefore use an SVM with a gaussian kernel:
$$\kappa (Z, Z') = e^{-\frac{\delta^2(Z, Z')}{2\sigma^2}}.$$
For two images $Z$ and $Z'$, $\delta(Z, Z')$ corresponds to the Wasserstein $W_2$ distance\footnote{The space of $PD$s with the Wasserstein metric is not of negative type \cite[Theorem 3.2]{turner2020same}, so that this kernel is not an inner product \cite{carriere2017sliced}.} between their $PD$s, or the $l_2$ distance between their filtration function values, $PL$s or $PI$s. For each representation of the images, the SVM regularization parameter (typically denoted by $C$, which trades off correct classification of training examples against maximization of the decision function's margin) and the kernel parameter $\sigma^2$ are first tuned using 5-fold cross validation on the training set of $70\%$ non-noisy images.\footnote{We consider $C \in \{10^{-1}, 10^{0}, 10^{1}, 10^{2}\}$ and $\gamma = \frac{1}{2 \sigma^2} \in \{10^{-7}, 10^{-6}, 10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}\}$.} As we are focused on noise robustness, we calculate the relative decrease in accuracy on noisy compared to non-noisy test data (the remaining $30\%$ of images in the dataset); a minimal sketch of this procedure is given below. The results are summarized in Figure~\ref{fig-svm-acc-drop}.
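A minimal sketch of this evaluation follows, assuming precomputed distance matrices (e.g.\ pairwise $W_2$ distances between $PD$s): \texttt{D\_train} between training images, \texttt{D\_test\_clean} and \texttt{D\_test\_noisy} from test images to training images, with labels \texttt{y\_train} and \texttt{y\_test}; these names are placeholders. scikit-learn's \texttt{SVC} with \texttt{kernel='precomputed'} accepts the gaussian kernel matrix directly, and its cross-validation utilities slice precomputed kernels along both axes.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def gaussian_kernel(D, gamma):
    return np.exp(-gamma * D ** 2)       # gamma = 1 / (2 sigma^2)

# Tune C and gamma by 5-fold cross validation on the training kernel.
best = None
for gamma in [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2]:
    search = GridSearchCV(SVC(kernel="precomputed"),
                          {"C": [0.1, 1.0, 10.0, 100.0]}, cv=5)
    search.fit(gaussian_kernel(D_train, gamma), y_train)
    if best is None or search.best_score_ > best[0]:
        best = (search.best_score_, gamma, search.best_estimator_)

# Relative accuracy drop on noisy vs. non-noisy test data.
_, gamma, clf = best
acc_clean = clf.score(gaussian_kernel(D_test_clean, gamma), y_test)
acc_noisy = clf.score(gaussian_kernel(D_test_noisy, gamma), y_test)
relative_drop = (acc_clean - acc_noisy) / acc_clean
\end{verbatim}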
\begin{figure}[!h]
\centering
\scalebox{0.7}{
\begin{tabular}{p{2cm} p{0.5cm} p{0.5cm} cccccccc cccccccc}
\toprule
\rotatebox{90}{filtration} & \rotatebox{90}{homological dimension} & \rotatebox{90}{persistence signature} & \rotatebox{90}{rotation 45} & \rotatebox{90}{rotation -90} & \rotatebox{90}{translation 1 1} & \rotatebox{90}{translation -2 -2} & \rotatebox{90}{stretch-shear-flip 1.5 10 h} & \rotatebox{90}{stretch-shear-flip 0.75 -20 v} & \rotatebox{90}{brightness -50} & \rotatebox{90}{brightness 100} & \rotatebox{90}{contrast 2} & \rotatebox{90}{contrast 0.5} & \rotatebox{90}{gaussian noise 10} & \rotatebox{90}{gaussian noise 20} & \rotatebox{90}{salt and pepper noise 5} & \rotatebox{90}{salt and pepper noise 10} & \rotatebox{90}{shot noise 50} & \rotatebox{90}{shot noise 100} \\
\midrule
\multirow{ 7 }{2cm}{ binary } & - & - & {\color{red! 70.68 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 87.97 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 17.67 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 57.52 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 77.44 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 67.29 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.38 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.5 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.38 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 6.77 !blue} \scalebox{ 0.89 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 13.79 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 13.79 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.33 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 16.67 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 16.67 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.33 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.45 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.45 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.1 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.61 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 30.65 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 35.48 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 46.77 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 56.45 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.61 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 30.65 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 35.48 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 46.77 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 56.45 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.61 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 30.65 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 35.48 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 50.00 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 61.29 !blue} \scalebox{ 0.21 }{\rule{0.5cm}{0.5cm}}} \\
\midrule
\multirow{ 7 }{2cm}{ grsc } & - & - & {\color{red! 69.37 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 88.19 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 19.56 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 56.46 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 77.49 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 66.79 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.12 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 54.24 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.74 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 70.48 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.37 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.95 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.11 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.06 !blue} \scalebox{ 0.9 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 5.41 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 13.51 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 16.22 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.11 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 13.51 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 32.43 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 32.43 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 32.43 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 16.22 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 29.73 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 26.19 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 26.19 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 28.57 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 28.57 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 14.29 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.48 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.48 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.48 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.48 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 42.86 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.48 !blue} \scalebox{ 0.14 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.63 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 26.32 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 23.68 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 7.89 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 21.05 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 44.74 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 39.47 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 39.47 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 42.11 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 39.47 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & {\color{red! 1.15 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.15 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 21.84 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 33.33 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 17.24 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.23 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.60 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.05 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 44.83 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 51.72 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 50.57 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 64.37 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 6.98 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 6.98 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 12.79 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 19.77 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 46.51 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 22.09 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 44.19 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 6.98 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.49 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 43.02 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 58.14 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 47.67 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 65.12 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 11.11 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 19.75 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 11.11 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 35.80 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 51.85 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 14.81 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 54.32 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 12.35 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 16.05 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 41.98 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 45.68 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 44.44 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 49.38 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} \\
\midrule
\multirow{ 7 }{2cm}{ density } & - & - & {\color{red! 71.17 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 88.69 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 16.42 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 56.57 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 77.74 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 73.36 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.73 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.36 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.36 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.09 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.73 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 7.3 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & {\color{red! 6.67 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.22 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.22 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 17.78 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.89 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.44 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 22.22 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.00 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 44.44 !blue} \scalebox{ 0.15 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 25.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 7.50 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.50 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 37.50 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 17.50 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 42.50 !blue} \scalebox{ 0.13 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 0.00 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.08 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.08 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.04 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 12.24 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.08 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.04 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 51.02 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 48.98 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 34.69 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 46.94 !blue} \scalebox{ 0.16 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & {\color{red! 6.94 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 13.89 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.33 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.39 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.33 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 29.17 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 34.72 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 12.50 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 31.94 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 5.81 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.14 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 6.98 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 6.98 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 23.26 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 41.86 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 18.60 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 45.35 !blue} \scalebox{ 0.29 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 3.57 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.76 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.38 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.38 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.57 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.38 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 20.24 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.48 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 20.24 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 51.19 !blue} \scalebox{ 0.28 }{\rule{0.5cm}{0.5cm}}} \\
\midrule
\multirow{ 7 }{2cm}{ radial } & - & - & {\color{red! 68.7 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 87.4 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 18.32 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 60.69 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 77.86 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 70.99 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.38 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.76 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.38 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.44 !blue} \scalebox{ 0.87 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & {\color{red! 40.91 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 50.00 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 15.45 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 55.45 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 54.55 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 67.27 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.91 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 79.09 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 79.09 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 15.45 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 60.00 !blue} \scalebox{ 0.37 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 41.80 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 50.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 38.52 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 61.48 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 63.11 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 85.25 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 14.75 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.92 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.64 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.92 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 81.15 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 81.15 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 17.21 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 75.41 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 27.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 43.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 28.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 48.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 64.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 73.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 11.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 71.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 75.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 12.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 35.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & {\color{red! 22.78 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 21.52 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 18.99 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.53 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 32.91 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 29.11 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 36.71 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 49.37 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 60.76 !blue} \scalebox{ 0.26 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 20.99 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 19.75 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.23 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 23.46 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 22.22 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 16.05 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.23 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 27.16 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 39.51 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 64.20 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 64.20 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 19.75 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 24.69 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.64 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 32.10 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 27.16 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 20.99 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.47 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 22.22 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 37.04 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 50.62 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 51.85 !blue} \scalebox{ 0.27 }{\rule{0.5cm}{0.5cm}}} \\
\midrule
\multirow{ 7 }{2cm}{ Rips } & - & - & {\color{red! 76.56 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 84.98 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.79 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 26.37 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 69.23 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 77.29 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.73 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.56 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.83 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 82.42 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 86.08 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.73 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.1 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & {\color{red! 1.43 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 48.57 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 34.29 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 14.29 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 41.43 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 21.43 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.43 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 5.71 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.57 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 15.71 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 48.57 !blue} \scalebox{ 0.23 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 8.57 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.86 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.57 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.86 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 11.43 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 11.43 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.86 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.00 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 37.14 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.86 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 31.43 !blue} \scalebox{ 0.12 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 2.82 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 53.52 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 36.62 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 14.08 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 45.07 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 9.86 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 66.20 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 67.61 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 14.08 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 28.17 !blue} \scalebox{ 0.24 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & {\color{red! 7.41 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 15.74 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.85 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 14.81 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.85 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.63 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 25.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 39.81 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 17.59 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 33.33 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 8.26 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 5.50 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 9.17 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.92 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 28.44 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 57.80 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 21.10 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 17.43 !blue} \scalebox{ 0.36 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 0.00 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 26.09 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 12.17 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.74 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 13.91 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.74 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 13.04 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 47.83 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 11.30 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 25.22 !blue} \scalebox{ 0.38 }{\rule{0.5cm}{0.5cm}}} \\
\midrule
\multirow{ 7 }{2cm}{ DTM } & - & - & {\color{red! 76.92 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 89.74 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 14.65 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 29.3 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 79.49 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 75.82 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.47 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.73 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.47 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.0 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.1 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.47 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 24.18 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 54.21 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.83 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.4 !blue} \scalebox{ 0.91 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 0 }
& $ PD $ & {\color{red! 17.21 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.82 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.82 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 45.90 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 61.48 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 16.39 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.16 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 29.51 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.82 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 67.21 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 78.69 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 48.36 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 66.39 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 4.03 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 44.35 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 57.26 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.23 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 39.52 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 15.32 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 66.94 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 78.23 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 58.06 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 81.45 !blue} \scalebox{ 0.41 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 0.00 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 58.42 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 51.49 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 34.65 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 12.87 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 50.50 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 61.39 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 11.88 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 33.66 !blue} \scalebox{ 0.34 }{\rule{0.5cm}{0.5cm}}} \\
\cline{2- 19 }
& \multirow{ 3 }{*}{ 1 }
& $ PD $ & {\color{red! 0.75 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.75 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.75 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 36.84 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 42.86 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.51 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 12.78 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.50 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 22.56 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 51.88 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 21.05 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 39.10 !blue} \scalebox{ 0.44 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PL $ & {\color{red! 15.97 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.39 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 43.06 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 35.42 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 13.19 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 19.44 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 7.64 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 5.56 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.86 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 29.17 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 45.14 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 25.00 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 40.97 !blue} \scalebox{ 0.48 }{\rule{0.5cm}{0.5cm}}} \\
&& $ PI $ & {\color{red! 5.10 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 32.65 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 7.14 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 0.00 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.08 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 3.06 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 1.02 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 4.08 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 22.45 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} & {\color{red! 42.86 !blue} \scalebox{ 0.33 }{\rule{0.5cm}{0.5cm}}} \\
\midrule
&&& \multicolumn{16}{c}{\includegraphics[width=13.5cm]{figures/colorbar_blue_red.png}} \\
\bottomrule
\end{tabular}
}
\caption{Noise robustness and discriminative power of persistent homology on 1000 MNIST greyscale images. The figure shows the drop in SVM classification accuracy when the test dataset is noisy, compared to the non-noisy test set, averaged across 1000 images in the MNIST dataset. Each image is represented either with its filtration function values (1st row of each filtration), or with its 0- or 1-dimensional persistence diagram (2nd and 5th row), persistence landscape (3rd and 6th row) or persistence image (4th and 7th row). The size of the node reflects the absolute accuracy on the non-noisy test data. The colour of the node reflects the accuracy drop, indicated in the colour bar. In particular, the presence of red nodes for PH information (2nd to 7th row) shows that PH is not robust under every type of noise, whatever the filtration and persistence signature.}
\label{fig-svm-acc-drop}
\end{figure}
We observe at least a 35\% drop in SVM accuracy when images are represented with PH in the following scenarios:
\begin{itemize}
\item affine transformations: PH on the radial filtration under any affine transformation, and PH on the Rips and DTM filtrations for stretch-shear-flip.
\item brightness: PH on the greyscale, Rips and DTM filtrations.
\item contrast: PH on the greyscale filtration.
\item Gaussian, salt and pepper, shot noise: there is a drop in SVM accuracy for all filtrations under salt and pepper or shot noise. For Gaussian noise, the drop in accuracy is negligible for PH on all but the greyscale filtration.
\end{itemize}
The drop in accuracy also varies for different persistence signatures. For example, 1-dimensional $PL$s on Rips filtration are more sensitive to salt and pepper noise than $PD$s.
We note, however, that if the classification accuracy on the non-noisy data is low, the loss in performance is limited. For instance, 0-dimensional PH with respect to the binary filtration yields an accuracy of only 10\% (no better than a random guess), as it only counts the number of connected components (Figure~\ref{fig-ph-filtrations}), and every digit 0-9 commonly consists of a single component. This is why there is no drop in accuracy when the SVM is tested on images under salt and pepper noise (Figure~\ref{fig-svm-acc-drop}), although this transformation results in an image with many additional connected components (Figure~\ref{fig-noise-robustness-example-homdim0}) and thus significantly changes PH on the binary filtration. In such cases, the drop in accuracy gives us no reliable information about noise robustness.
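To make the binary-filtration example concrete, the following minimal sketch (our own illustration, assuming the GUDHI Python library rather than the pipeline used in our experiments) counts the 0-dimensional persistence intervals of a cubical complex before and after salt and pepper noise:
\begin{verbatim}
# Minimal sketch (assumes the GUDHI library; not the exact experimental
# pipeline): salt and pepper noise inflates the number of connected
# components seen by 0-dimensional PH on a cubical (binary) filtration.
import numpy as np
import gudhi

def n_components(img):
    cc = gudhi.CubicalComplex(top_dimensional_cells=img)
    cc.persistence()
    return len(cc.persistence_intervals_in_dimension(0))

rng = np.random.default_rng(0)
img = np.ones((28, 28))      # background pixels have value 1
img[8:20, 8:20] = 0.0        # one digit-like dark region: one component

noisy = img.copy()
flip = rng.random(img.shape) < 0.05  # flip 5% of the pixels
noisy[flip] = 1.0 - noisy[flip]

print(n_components(img), n_components(noisy))  # 1 vs. dozens of intervals
\end{verbatim}
Since every digit consists of a single dark component, a classifier trained on this (uninformative) feature is unaffected even by such a drastic change in PH.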
When images are represented with their filtration function values on each pixel (thus including the original representation of an image as a vector of greyscale pixel values), the SVM performance is significantly worse for test data consisting of images with affine transformations. However, the performance is relatively stable under changes in image brightness or contrast, or under noisy transformations (with some exceptions). This is the opposite of the trend for PH, which is often robust under affine, but sensitive under noisy transformations (with a significant difference across filtrations and persistence signatures). Even though PH is often reputed for its robustness to noise \cite{garin2019topological}, if data is expected to contain Gaussian, salt and pepper or shot noise, the raw representation of images with their greyscale pixel values is robust (there is no drop in SVM accuracy compared to non-noisy data), while this is often not the case for PH features.
Moreover, when images are summarized with any persistence signature with respect to any filtration, the absolute SVM accuracy on non-noisy data cannot match that obtained by representing an image with its filtration function values, which contain more detailed geometric information about the image. Indeed, persistent homology only captures information about cycles in an image, and for most of the filtrations, it can only differentiate between two and three classes among the ten MNIST digits 0-9. The classification accuracy can be significantly improved by concatenating PH across different signatures, homological dimensions and/or filtrations (e.g., a combination of PH on the radial and Rips filtrations captures information about both the position and size of cycles, and can thus discriminate better across classes). For instance, a set of only 28 features obtained from concatenated persistent homology information is shown in \cite{garin2019topological} to be sufficient to attain better classification accuracy than the set of greyscale pixel values. An alternative approach to simultaneously exploit PH from multiple filtrations is multi-dimensional persistence \cite{carlsson2009theory}. Since our goal is to gain insight into the noise robustness of individual PH representations, rather than to maximize the performance of classifiers, such an analysis is out of the scope of this work. However, under some types of transformations, the SVM accuracy is better for some PH features (even without concatenation across filtrations or signatures) than for the raw representation with greyscale pixel values.
\section{Conclusions, limitations and future research}
\label{section_conclusion}
Persistent homology, which captures information about connected components, holes, and cycles in higher dimensions, is commonly characterized in the literature as a \emph{topological} summary \emph{robust to noise}. The main motivation behind this paper is to illustrate how misleading this description can be, particularly for practical applications. We show that the validity of such a characterization, in theory, depends strongly on the choice of filtration and persistence signature (input and output of PH), and in practice, also on the particular application domain.
First of all, we emphasize that the type of information PH captures about cycles is determined by the choice of filtration. For some filtrations, this information is only of \emph{topological} nature, but for others, some \emph{geometric} information can be captured as well. Topological invariants are robust under affine transformations, but the same does not necessarily hold for geometric invariants, so that the choice of filtration directly influences the noise robustness of PH.
Moreover, we underline how stability theorems, which provide a theoretical guarantee of the noise robustness of PH, depend on the choice of filtration and persistence signature, as well as the distance metric between them. Firstly, stability theorems make some assumptions about the filtration function, e.g., the function must be tame (the corresponding persistence diagram has finitely many off-diagonal points \cite{chazal2009proximity}), monotonic, continuous, Lipschitz or piecewise constant. Secondly, the robustness of PH is only guaranteed under small changes of the input (the filtration), rather than small changes in the space itself. For instance, if one background white pixel in an image is changed to black, the distance between the filtration functions for the common Vietoris-Rips filtration between these two images is large, and indeed, PH with respect to the Rips filtration is sensitive to such outliers. Furthermore, the statement of stability theorems is restricted to the particular choice of persistence signature and metric. This is often overlooked in the literature: e.g., it is common to employ persistence landscapes or persistence images with the Euclidean metric, whereas the stability theorems do not hold in such scenarios.
Finally, even if a stability theorem holds for the particular choice of filtration and persistence signature, it does not imply that PH yields noise-robust features in a classification task: this is domain and application-specific. For instance, changing a single pixel in an image from black to white can result in an additional one-pixel hole, which can be persistent for some filtrations. This change in PH is substantial if the number of holes does not vary greatly across classes of data, in which case the presence of such noise can be expected to deteriorate the classification accuracy. Conversely, even if there is no theoretical guarantee of the stability of PH for the given filtration and signature, it is interesting to evaluate the noise robustness in practice. To gain a better understanding of the noise robustness of PH, we carry out some computational experiments on the well-known MNIST dataset of greyscale images, under some common types of noise to be expected on such data. We conclude that there is a considerable drop in accuracy of an SVM trained on PH information from non-noisy data and tested on noisy data, for at least 0- or 1-dimensional PH, for at least one of the considered signatures:
\begin{itemize}
\item rotation and translation: radial
\item stretch-shear-flip: radial, Rips, DTM
\item brightness: greyscale, Rips, DTM
\item contrast: greyscale
\item Gaussian noise: greyscale
\item salt and pepper, and shot noise: binary, greyscale, density, radial, Rips, DTM.
\end{itemize}
There is often also an important difference in the drop in accuracy across $PD$s, $PL$s and $PI$s. Taking all the above into consideration, it is clear that one needs to be more careful when referring to persistent homology as a noise-robust topological invariant: this is only true for some filtrations and signatures, and even in such cases, the stability of PH does not necessarily imply that the presence of noise will not weaken the discriminative power of PH features.
The main findings of this paper provide some guidelines on the choice of suitable filtration(s) and persistence signature(s), and the corresponding metric, for the given dataset and expected types of noise. Some important questions that should be addressed when using persistent homology are the following:
\begin{itemize}
\item choice of filtration: What information about cycles (number, brightness, position, size) is different across classes of data, but does not change much for the expected type of noise? Does the filtration function satisfy the assumptions in the stability theorem? Do small changes in the data result in small changes in the filtration function?
\item choice of persistence signature: Is the signature stable? Are the cycles with the longest persistence or lifespan the most important (i.e., should cycles with short lifespan be considered as noise)? If this is not the case, it is a good idea to use a flexible signature which allows cycles of, e.g., medium persistence to be the most crucial (such as $PI$s with an appropriate weight function, which is not immediately possible with $PD$s or $PL$s). How critical are the important cycles compared to the unimportant ones? Which statistical or machine learning methods do we want to apply to PH? If PH does not need to be summarized as a function or vector, it might be sufficient to use $PD$s. How important is computational efficiency? If the computation time is limited, it might be useful to avoid $PD$s and the expensive calculation of Wasserstein distances.
\item choice of metrics: Is the signature stable? How critical are the important cycles compared to the unimportant ones? The greater the $p$, the bigger the difference across cycles, for both the Wasserstein $W_p$ and $l_p$ metrics, i.e., PH becomes less sensitive to unimportant cycles.
\end{itemize}
In summary, the choice of filtration defines the persistence of different types of cycles (e.g., for PH with Rips filtration, small cycles have short persistence), the choice of signature defines which cycles are least important or noisy (e.g., these are typically the cycles with short persistence), and together with the choice of metric determines the level of sensitivity to noisy cycles.
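As a small numerical illustration of this last point, the sketch below (our own example, assuming GUDHI's \texttt{wasserstein} module, which relies on the POT package) shows how increasing the order $p$ downweights the aggregate contribution of many short-lived cycles:
\begin{verbatim}
# Sketch (assumes gudhi.wasserstein, which requires the POT package):
# one long-lived cycle plus ten tiny ones; a larger order p makes W_p
# less sensitive to the tiny, "noisy" cycles.
import numpy as np
from gudhi.wasserstein import wasserstein_distance

dgm = np.array([[0.0, 4.0]])                              # important cycle
tiny = np.column_stack([np.ones(10), np.ones(10) + 0.1])  # ten tiny cycles
dgm_noisy = np.vstack([dgm, tiny])

for p in (1.0, 2.0, 5.0):
    print(p, wasserstein_distance(dgm, dgm_noisy, order=p))
# The distance shrinks as p grows (exact values depend on the internal
# metric used for matching points to the diagonal).
\end{verbatim}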
Our findings are limited to the particular setting of our computational experiments: the choice of filtrations and persistence signatures, and their parameters, the choice of metric, dataset, noise, and classifier. For future research, it would be interesting to revisit similar research questions in a different context, e.g., for a different dataset. The MNIST images of digits 0-9 all typically have one connected component, and none, one or two holes. Both noise robustness and classification accuracy are expected to be better for datasets where the number of cycles (but also other properties, such as their size) differs greatly across classes, such as images with complex structure from materials science, astronomy, neuroscience, or plant morphology (e.g., images of the cosmic web, protein networks, brain arteries, plant roots).
\section*{Appendix A}
Table~\ref{tab-notation} summarizes the notation. For the purpose of clarity, throughout the manuscript we denote spaces with capitals, vectors with boldface lower case, scalars with lower case (with the exception of the standard notation $C$ for the SVM regularization parameter, referred to briefly in the paper), and functions with lower-case Greek letters (with the exceptions of the standard notation for the Wasserstein $W_p,$ and $l_p$ and $L_p$ distances).
\begin{table}[!ht]
\caption{Important notation and acronyms.}
\centering
\begin{tabular}{ll}
\toprule
{\bf Notation} & {\bf Interpretation} \\
\midrule
$S$ & space \\
$S_{r_1} \subseteq S_{r_2} \subseteq \dots \subseteq S_{r_t}$ & filtration ($S_r$ approximates $S$ at resolution, scale or time $r \in \mathbb{R}$) \\
\midrule
$Z = [z_{uv}]$ & image as a two-dimensional matrix, $z_{uv}$ is the greyscale value of pixel $(u, v)$ \\
$z_0$ & threshold greyscale value (to obtain binary image) \\
$n_x$ & number of pixels in $x$ direction \\
$n_y$ & number of pixels in $y$ direction \\
\midrule
$X \subset \mathbb{R}^n$ & point cloud \\
$VR(X, r)$ & Vietoris-Rips simplicial complex with resolution $r$ \\
$\delta_X: \mathbb{R}^n\rightarrow \mathbb{R}$ & distance (filtration) function \\
$\delta_{X, m}: \mathbb{R}^n\rightarrow \mathbb{R}$ & distance-to-a-measure (DTM) (filtration) function with parameter $m$ \\
\midrule
$K$ & cubical complex \\
$\phi_{z_0}: K \rightarrow \mathbb{R}$ & binary filtration function with parameter $z_0$ \\
$\phi_{\text{grsc}}: K \rightarrow \mathbb{R}$ & greyscale filtration function \\
$\phi_{d_0, z_0}: K \rightarrow \mathbb{R}$ & density filtration function with parameters $d_0$ and $z_0$ \\
$\phi_{(u_0, v_0), z_0}: K \rightarrow \mathbb{R}$ & radial filtration function with parameters $(u_0, v_0)$ and $z_0$ \\
\midrule
PH & persistent homology, information about $k$-dimensional cycles \\
$(b_i, d_i)$ & persistence interval, i.e., pair of birth and death values for cycle $i$ \\
$l_i=d_i-b_i$ & lifespan or persistence of a cycle $i$ \\
\midrule
$PD$ & persistence diagram \\
$W_p$ & Wasserstein distance between $PD$s (referred to as bottleneck distance for $p=+\infty$) \\
\midrule
$\lambda$ & persistence landscape (as a function) \\
$PL$ & vectorized persistence landscape \\
\midrule
$PI$ & persistence image \\
$\rho$ & $PI$ weight function \\
\bottomrule
\end{tabular}
\label{tab-notation}
\end{table}
\bibliographystyle{plain}
Neutron Stars (NSs) are some of the most fascinating astrophysical objects in multi-messenger astronomy. They contain matter at extreme conditions far beyond the ones accessible in a terrestrial laboratory. The core of a NS is believed to contain matter at a few times nuclear saturation density, $\rho_0$ \cite{Glendenning1996,Haensel2007,Most:2018hfd,Ruester:2005fm}. \footnote{$\rho_0$ = 0.16 fm$^{-3}$} While the structure of a NS can be determined using the Tolman-Oppenheimer-Volkoff (TOV) equations \cite{TOV1,TOV2}, this requires the knowledge of the dense matter Equation of State (EoS). Understanding the internal structure of neutron stars in terms of fundamental interactions between its constituents is an open problem in nuclear physics. To understand the physics of dense matter, at low densities of $\rho \sim 1-2\rho_0$, we can use \textit{ab initio} approaches derived from chiral effective field theory ($\chi$EFT) \cite{Hebeler:2009iv, Tews:2012fj, Hagen:2013yba, Roggero:2014lga, Krastev:2021reh}. At large densities of $\rho \geq 40\rho_0$, perturbative quantum chromodynamics (QCD) calculations converge and provide reliable estimates \cite{Freedman:1976xs, Kurkela:2009gj, Gorda:2018gpy, Krastev:2021reh}. However, in the intermediate density region around $\rho \sim 2-10\rho_0$ which is relevant for most structural descriptions of neutron stars, reliable calculations from first principles are currently unavailable \cite{Krastev:2021reh, Fujimoto:2021zas}. Quantum field theory calculations based on lattice QCD are challenging at these densities due to the sign problem that arises in Monte Carlo simulations \cite{Aarts:2015tyj}. As a result, structural descriptions of neutron stars rely on relativistic and non-relativistic phenomenological models for the EoS. The nuclear matter parameters (NMPs), which form the basis of the construction of these equations of state for neutron star matter, are not directly accessible. While lower-order NMPs can be empirically extracted through finite nuclei nuclear physics experiments \cite{Lalazissis:1996rd, Todd-Rutel:2005yzo, Klupfel:2008af, Sulaksono:2009rn}, in order to constrain higher-order NMPs, we need to rely on astrophysical observations \cite{Zhang:2018vrx, Cai:2020hkk, Gil:2020wqs}.
Recent developments in multi-messenger astronomy have provided important information about high-density nuclear matter physics relevant for NSs. Constraining the EoS is a joint task between nuclear physics and astrophysics. Measured astrophysical quantities such as NS observables can uncover properties of dense nuclear matter. Narrowing the constraints on NS observables, therefore, has the ability to constrain the behavior of matter under extreme conditions. It is expected that precise and simultaneous measurements of NS properties like mass, radius, moment of inertia and tidal deformability may help constrain the EoS to a narrow range \cite{Annala:2021gom, Altiparmak:2022bke, Margueron:2017eqc, Margueron:2017lup}. High-mass pulsars like PSR J1614--2230 ($M = 1.908 \pm 0.016~ M_{\odot}$) \cite{Demorest:2010bx, Fonseca:2016tux, NANOGrav:2017wvv}, PSR J0348--0432 ($M = 2.01 \pm 0.04~ M_{\odot}$) \cite{Antoniadis:2013pzd}, PSR J0740+6620 ($M = 2.08 \pm 0.07~ M_{\odot}$) \cite{Fonseca:2021wxt}, and very recently PSR J1810+1714 ($M = 2.13 \pm 0.04~ M_{\odot}$) \cite{Romani:2021xmb} have already placed tight constraints on the EoS. GW signals emitted from a NS merger event depend on the behaviour of the neutron star matter at high densities \cite{Faber:2009zz, Duez:2009yz}. Therefore, the values of tidal deformability obtained from GW events such as GW170817 associated with a binary NS merger, as well as the simultaneous measurement of NS masses and radii from high-precision X-ray space missions, such as NICER (Neutron star Interior Composition ExploreR), may help further constrain the EoS. One piece of current observational evidence is the simultaneous measurement of the NS mass $1.34_{-0.16}^{+0.15}$ $M_\odot$ and radius $12.71_{-1.19}^{+1.14}$ km for the pulsar PSR J0030+0451 by NICER \cite{Riley:2019yda}. Another independent analysis shows that the radius is $13.02_{-1.06}^{+1.24}$ km and the mass $1.44_{-0.14}^{+0.15}$ $M_\odot$ \cite{Miller:2019cac}. However, the recent measurement of the equatorial circumferential radius of the pulsar PSR J0740+6620, with mass $M=2.072_{-0.066}^{+0.067}$ $M_\odot$ and $R=12.39^{+1.30}_{-0.98}$ km (68 $\%$ CI) \cite{Riley:2021pdl}, cannot further constrain EoSs that already predict a NS with maximum mass above 2$M_\odot$ \cite{Malik:2022zol}.
The EoS -- to a good approximation -- can be expressed in terms of NMPs at saturation density. The NMPs usually considered for constructing the EoS are the incompressibility coefficient, the skewness parameter of the symmetric nuclear matter, the symmetry energy coefficient, its slope, and the curvature parameters characterizing the density dependence of the symmetry energy. Recently, there has been a comprehensive analysis of correlations of tidal deformability and other NS properties with NMPs \cite{Fattoyev:2017jql, Zhang:2018vrx, Carson:2018xri}; however, these correlations are found to be model-dependent \cite{Carson:2018xri,Ghosh:2022lam}. Ref. \cite{Malik:2020vwo} shows that the correlations are sensitive to finite nuclei properties, which are accessible to laboratories. Therefore, finite nuclei properties are important quantities to consider in a model while determining the correlations between NS properties and NMPs. Of late, the EoSs obtained by several meta-models have gained popularity owing to their cost-effectiveness in large-scale simulations \cite{Annala:2021gom, Annala:2019puf, Kurkela:2009gj}. These models are constrained by {\it ab initio} theoretical calculations of nucleon-nucleon chiral potentials for low-density neutron and nuclear matter \cite{Hebeler:2013nza,Drischler:2015eba}, and by perturbative QCD for asymptotically high-density regimes \cite{Kurkela:2009gj}. In the intermediate density region, the EoS is evolved in a thermodynamically consistent manner with either piece-wise polytropic segments \cite{Read:2008iy, Ozel:2015fia, Raithel:2017ity}, a speed-of-sound interpolation, or a spectral interpolation \cite{Lindblom:2010bb, Lindblom:2012zi}. These meta-models are limited in their ability to incorporate finite nuclei properties and differ in their results when establishing a bridge between NS properties and NMPs. Non-parametric models of the NS EoS have also been proposed based on Gaussian processes (GPs) \cite{Essick:2019ldf, Landry:2020vaw}, which use Bayesian methods to infer the EoS from multi-messenger data. These models are usually computationally expensive to implement or are highly sensitive to the choice of the training data sets and, therefore, might be limited by the current knowledge of the EoS \cite{Han:2021kjx}. Consequently, there is a need to search for alternative approaches to construct a model-independent EoS.
In recent years, deep learning (DL) has been extensively applied to a wide range of technological and scientific tasks. DL algorithms, which are a class of machine learning (ML) algorithms, are highly scalable and distributed computational techniques with the ability to learn intricate relationships from raw data using units called neurons arranged in a stacked fashion. The advent of high-performance computing and the development of parallel devices like graphics processing units have rendered DL the primary choice of algorithms for tasks such as computer vision \cite{He2016} or natural language processing \cite{Young2018}. DL models have been used as alternatives to conventional statistical frameworks and have been successfully applied to many problems in physics. Most applications of ML and DL in physics have been in analyzing data obtained from data-intensive experiments like LIGO, for the detection and denoising of GW signals \cite{George:2016hay, George:2017pmj, Carrillo:2016kvt}, and the Large Hadron Collider, for particle track reconstruction or anomaly detection \cite{Larkoski:2017jix, Guest:2018yhq, Bourilkov:2019yoi}. Significant progress has also been made in using these algorithms in the context of nuclear physics \cite{bedaque2020} and neutron star physics \cite{Fujimoto:2019hxv, Han:2021kjx, Morawski:2020izm}. While these applications are certainly promising, it remains to be seen to what extent a DL model can supplement existing physical models. Recent research also aims to address whether a trained DL model can produce correct predictions from experimental data alone and whether such predictions are comparable to mesoscopic phenomenological physics models \cite{Anil:2020lch}. Such ML and DL-based models do not have the feature richness possessed by physics-based models, but they offer other benefits like cost-effectiveness while dealing with a large amount of experimental or observational data. Recent works \cite{Fujimoto:2017cdo, Fujimoto:2019hxv, Fujimoto:2021zas} have studied the applications of machine learning methods to the neutron star EoS. They employ a feedforward neural network (FFNN) to map neutron star data to EoS parameters. Instead of considering the FFNN as merely an interpolation tool between EoS NMPs and neutron star observables, we adopt an approach similar to \cite{Han:2021kjx} by treating the FFNN as a representation of the EoS itself. The novel contribution in our study is the use of a trained neural network model within a Bayesian statistical framework to infer NMPs in light of recent observational data.
In the present work, we address two major points of interest: first, using a neural network, we mimic a realistic nuclear physics model that satisfies finite nuclei properties. We perform a robust test on the trained model to determine how accurately it captures the physics of finite nuclei. We establish that the trained model is as faithful to the underlying physics as a realistic physical model. Second, we perform a detailed statistical analysis of nuclear matter parameters via Bayesian inference, using recent observational data from multi-messenger astronomy with our trained model.
The paper is structured as follows: in Section \ref{sec:framework}, we discuss the parameterization of the EoS which leads to the emergence of NMPs, followed by a brief review of the theory of artificial neural networks and the DL approach used to map NMPs to NS observables. This is followed by a description of the Bayesian statistical framework and the procedure to obtain posterior distributions of the NMPs using observational data for the pulsar PSR J0740+6620. Results are presented in Section \ref{sec:results} and conclusions are given in Section \ref{sec:conclusion}.
\section{Framework}
\label{sec:framework}
In this section, we outline the different facets of the adopted framework for our analysis. In Sec. \ref{sec:nmps}, we briefly describe the neutron star EoS and the nuclear matter parameters involved in its construction. Then, in Sec. \ref{sec:data}, we describe the generation of the data set used to train the neural network model. Sec. \ref{sec:anns} describes at length the DL approach used to construct a deep neural network model which accepts nuclear matter saturation parameters as inputs and produces, as targets, neutron star properties obtained from a set of realistic nuclear physics EoSs. Finally, in Sec. \ref{sec:bayes} we present a Bayesian inference framework that is applied to the trained neural network model to perform a detailed statistical analysis of nuclear matter parameters in light of recent neutron star observational data.
\subsection{Nuclear matter parameters}
\label{sec:nmps}
The structure of neutron stars is obtained by solving the Tolman-Oppenheimer-Volkoff (TOV) equations with a given EoS for the nuclear matter. Here, the EoS is expressed as the variation of the energy per particle $e$ with density $\rho$, over a wide range of densities. To a good approximation, any EoS calculated from phenomenological nuclear models can be decomposed into two parts, (i) the EoS for symmetric nuclear matter $e(\rho, 0)$; and (ii) a term involving the symmetry energy coefficient $S(\rho)$ and the asymmetry $\delta$,
\begin{equation}
e(\rho,\delta) \simeq e(\rho,0) + S(\rho)\delta^2,
\label{eq:eden}
\end{equation}
where $\rho = \rho_n + \rho_p$ is the baryon density, $\rho_n$ and $\rho_p$ are the neutron and proton densities respectively, and the asymmetry $\delta = (\rho_n - \rho_p)/\rho$. We can then characterize the density dependence of the energy density of symmetric matter around the saturation density $\rho_0$ in terms of a few bulk parameters by constructing a Taylor expansion around $\rho_0$. That is,
\begin{equation}
e(\rho, 0) = e_0 + \frac{K_0}{2}\left(\frac{\rho - \rho_0}{3\rho_0}\right)^2 + \frac{Q_0}{6}\left(\frac{\rho - \rho_0}{3\rho_0}\right)^3 + \mathcal{O}(4)
\end{equation}
The coefficients $e_0, K_0, Q_0$ denote the energy per particle, the incompressibility coefficient, and the skewness coefficient (the third density derivative of the energy per particle) of symmetric matter at saturation density, respectively.
Similarly, the behaviour of the symmetry energy around saturation can also be characterized in terms of a few bulk parameters,
\begin{equation}
S(\rho) = J_{\rm sym,0} + L_{\rm sym,0}\left(\frac{\rho - \rho_0}{3\rho_0}\right) + \frac{K_{\rm sym, 0}}{2}\left(\frac{\rho - \rho_0}{3\rho_0}\right)^2 + \mathcal{O}(3)
\end{equation}
where $J_{\rm sym,0} = S(\rho_0)$ is the symmetry energy at saturation density. The incompressibility $K_0$, the skewness coefficient $Q_0$, the symmetry energy slope $L_{\rm sym,0}$, and its curvature $K_{\rm sym, 0}$ evaluated at saturation density, are defined in \cite{Vidana:2009is}. These quantities are the key nuclear matter parameters (NMPs) that describe any equation of state (EoS). Hence, an EoS can be represented by a point in the seven-dimensional parameter space of NMPs $\{e_0$, $\rho_0$, $K_0$, $Q_0$, $J_{\rm sym,0}$, $L_{\rm sym,0}$, and $K_{\rm sym, 0}\}$ \cite{Malik:2020vwo}. Symbolically, the $j^{\text{th}}$ EoS in this space is written as
\begin{align}
\text{EoS}_{j} &= \{e_0, \rho_0, K_0, Q_0, J_{\rm sym,0}, L_{\rm sym,0}, \text{and } K_{\rm sym, 0}\}_{j} \nonumber\\
&\approx \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)
\end{align}
where $ \mathcal{N}(\boldsymbol\mu, \boldsymbol\Sigma)$ is a multivariate Gaussian distribution with $\boldsymbol\mu$ being the mean value of the nuclear matter parameters $\mathbf{p}$ and $\boldsymbol\Sigma$ the covariance matrix. The diagonal elements of $\boldsymbol\Sigma$ represent the variance or the squared error for the parameters $p_i$. The off-diagonal elements of $\boldsymbol\Sigma$ are the covariances between different parameters $p_i$ and $p_j$, and encode the correlations between them. Hence, given a mean $\boldsymbol\mu$ and covariance matrix $\boldsymbol\Sigma$, a large number of EoSs can be obtained.
While the Taylor expansion of the symmetric and asymmetric energy is only truly accurate around the saturation density, we can treat these expansions as a parameterization of the EoS --- similar to other adopted parameterizations --- with the condition that this representation asymptotically approaches the Taylor expansion in the limit $\rho \rightarrow \rho_0$ \cite{Zhang:2018vrx, Ferreira:2019bny}. This lets us ignore any issue arising with the convergence of the approximation. However, the higher-order NMPs obtained via this parameterization might markedly diverge from the actual nuclear matter expansion coefficients. They can be thought of as effective parameters that incorporate the effects of missing higher-order terms. Moreover, Refs. \cite{Margueron:2017eqc, Ferreira:2019bny} indicate that an EoS obtained from the Skyrme framework can be well-reproduced by considering Taylor coefficients until the third or fourth order.
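For reference, the sketch below expresses this parameterization as code (our own helper; the function and variable names, as well as the example values, are purely illustrative):
\begin{verbatim}
# Sketch of the expansions above: energy per particle of asymmetric
# matter from the seven NMPs, truncated at the orders displayed.
def energy_per_particle(rho, delta, e0, rho0, K0, Q0, Jsym, Lsym, Ksym):
    x = (rho - rho0) / (3.0 * rho0)
    e_snm = e0 + 0.5 * K0 * x**2 + Q0 * x**3 / 6.0  # symmetric matter
    s = Jsym + Lsym * x + 0.5 * Ksym * x**2         # symmetry energy S(rho)
    return e_snm + s * delta**2                     # e(rho, delta)

# Illustration: pure neutron matter (delta = 1) at 2*rho_0, with typical
# NMP values (in MeV, and fm^-3 for rho0), gives about 44 MeV per particle.
print(energy_per_particle(0.32, 1.0, -16.0, 0.16, 230.0, -300.0,
                          32.0, 60.0, -50.0))
\end{verbatim}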
\subsection{Data set generation}
\label{sec:data}
In our analysis, the input data used to train an artificial neural network are the seven key nuclear matter parameters that govern the equation of state \{$e_0$, $\rho_0$, $K_0$, $Q_0$, $J_{\rm sym,0}$, $L_{\rm sym,0}$, $K_{\rm sym, 0}$\}. Six neutron star properties are the target variables: the maximum NS mass, $M_{\text{max}}$; the maximum NS radius, $R_{\text{max}}$; the radius for a 1.4 $M_\odot$ NS, $R_{1.4}$; and the tidal deformability $\Lambda_M$ for a NS having mass $M\in[1.0,1.4,1.8]M_{\odot}$. We generate our data set by sampling points from the multivariate Gaussian distribution $\mathcal{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, where $\boldsymbol\mu$ is the mean vector with components $\mu_i = \mathbb{E}[p_i]$ and $\boldsymbol\Sigma$ is the covariance matrix with entries $\Sigma_{ij} = \mathbb{E}[(p_i - \mu_i)(p_j - \mu_j)]$, for a NMP $p$. \footnote{$\mathbb{E}[X]$ is the expectation value of a random variable $X$ and is defined as $\mathbb{E}[X] = \int_{-\infty}^{\infty} x\,p(x)\,dx$, where $p(x)$ is the probability density function.} This method closely follows the procedure for generating the \texttt{Case-II} data set in Ref. \cite{Malik:2020vwo}. We assume an a priori inter-correlation coefficient between $L_{\rm sym,0}$ and $K_{\rm sym,0}$ of 0.8, which is a reasonable choice for nuclear physics models that satisfy finite nuclear properties. Models which satisfy finite nuclear properties exhibit different correlations between NMPs and NS properties as compared to meta-models or nuclear physics models for infinite nuclear matter that do not respect finite nuclear properties \cite{Carson:2018xri, Ghosh:2022lam}. Therefore, we consider the \texttt{Case-II} data of Ref. \cite{Malik:2020vwo} to mimic the microphysics information of finite nuclear properties. Figure \ref{fig:data-corr} presents the $7 \times 7$ matrix of the correlation coefficients between the NMPs in the sampled data. The central values and uncertainties on each NMP in the constructed distribution are listed in Table \ref{tab:nmp-params}.
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccc}
\hline\hline
$\boldsymbol{p_i}$ & $\boldsymbol{\mu_{p_i}}$ & $\boldsymbol{\sqrt{\Sigma_{p_ip_i}}}$\\
\hline
$e_0$ & $-$16.0 & 0.25 \\
$\rho_0$ & 0.16 & 0.005 \\
$K_0$ & 230.0 & 20.0 \\
$Q_0$ & $-$300.0 & 95.0 \\
$J_{\rm sym,0}$ & 32.0 & 3.0 \\
$L_{\rm sym,0}$ & 60.0 & 20.0 \\
$K_{\rm sym,0}$ & $-$50.00 & 100.0 \\
\hline\hline
\end{tabular}
\caption{The mean value $\boldsymbol{\mu_{p_i}}$ and error $\boldsymbol{\sqrt{\Sigma_{p_i p_i}}}$ for the nuclear matter parameters $p_i$ employed for the
multivariate Gaussian distribution. All quantities are in units of MeV except for $\rho_0$ which is in units of fm$^{-3}$.}
\label{tab:nmp-params}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{figures/data-corrs.png}
\caption{Correlations among the various NMPs for the sampled data set. The $L_{\rm sym,0}-K_{\rm sym, 0}$ pair is the only one with a nontrivial correlation; the small nonzero correlations visible between the other NMPs arise from finite-sample statistics.}
\label{fig:data-corr}
\end{figure}
We then construct the Skyrme EoS from a drawn NMP sample and check whether the EoS (a) predicts a NS maximum mass above 2 $M_\odot$; (b) predicts a tidal deformability below 800 for a NS with $M = 1.4 M_\odot$; and (c) satisfies the causality condition, i.e., the speed of sound $c_s = \sqrt{\dv{p}{e}} \leq c$ at the center of the maximum-mass NS, where $c$ is the speed of light in vacuum. Any samples that generate EoSs which do not satisfy these conditions are discarded. For each EoS, the TOV and deformability equations \cite{Hinderer:2007mb} are solved to obtain the six aforementioned NS observables. Naturally, realistic NS observations accrue experimental and instrumental errors, which result in corresponding uncertainties while reconstructing the NMPs. For simplicity, we choose to disregard NS observational errors and uncertainties while training the model. Following this procedure, we obtain 2106 filtered NMP samples and the corresponding NS observables, which form the data set.
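A schematic version of this sampling and filtering step is sketched below (our own code; the means and standard deviations are those of Table \ref{tab:nmp-params}, and the TOV and tidal-deformability solvers are only indicated):
\begin{verbatim}
# Sketch (assumed implementation): sample the seven NMPs from a
# multivariate Gaussian with an a priori Lsym-Ksym correlation of 0.8.
import numpy as np

names = ["e0", "rho0", "K0", "Q0", "Jsym", "Lsym", "Ksym"]
mu  = np.array([-16.0, 0.16, 230.0, -300.0, 32.0, 60.0, -50.0])
sig = np.array([0.25, 0.005, 20.0, 95.0, 3.0, 20.0, 100.0])

corr = np.eye(7)
iL, iK = names.index("Lsym"), names.index("Ksym")
corr[iL, iK] = corr[iK, iL] = 0.8          # Lsym-Ksym correlation
cov = np.outer(sig, sig) * corr            # covariance matrix Sigma

rng = np.random.default_rng(42)
samples = rng.multivariate_normal(mu, cov, size=10000)

# Each sample is then turned into a Skyrme EoS and solved with the TOV
# and tidal-deformability equations; samples failing the M_max, Lambda_1.4
# and causality cuts are discarded (solvers not shown here).
\end{verbatim}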
\subsection{Artificial neural networks}
\label{sec:anns}
Artificial Neural Networks (ANN) are a class of machine learning algorithms that have found widespread use over the past decade. The popularity of ANNs arises from their ability to model complex nonlinear relationships within data and their potential to generalize to a wide class of functions. An ANN with sufficiently many layers or neurons is theoretically capable of representing any continuous function \cite{Hornik1989, Cybenko1989}. Therefore, as a general rule, a more complex network with a larger number of parameters is able to learn more abstract features from the data.
Feedforward Neural Networks (FFNN) are the simplest type of neural network architecture. For an input $\vb{x}$, the objective of a feedforward neural network is to approximate some true mapping $f^*(\vb{x})$ by a mapping $f(\vb{x}; \boldsymbol\theta)$, parameterized by a set of weights $\boldsymbol\theta$. A typical feedforward neural network consists of a number of processing units called neurons, arranged into one or many layers composed in a sequential fashion. A neuron performs a linear operation by aggregating weighted inputs received from neurons in the previous layer. A FFNN generally consists of an input layer, followed by one or more hidden layers, and a final output layer consisting of one or more neurons. Computation in an FFNN flows in a linear fashion, starting at the input layer and moving successively through the hidden layers until it reaches the output layer. Figure \ref{fig:ffnn-fig} provides an illustration of a simple feedforward neural network with one input layer, two hidden layers and an output layer.
The parameters $\boldsymbol\theta$ typically represent the weights assigned to the connections between neurons in adjacent layers. The training data provides noisy, approximate examples of $f^*(\vb{x})$, and each $\vb{x}$ is paired with a corresponding label $y$. The goal of a learning algorithm is to learn a particular value of $\boldsymbol\theta$ that results in the best function approximation. During the training procedure, the learning algorithm does not prescribe what each layer does but instead decides how to use these layers to produce an optimal approximation of $f^*$. As the training data does not show the desired output for intermediate layers, they are called hidden layers. The number of hidden layers decides the depth of the network, and the dimensionality or the number of neurons in each hidden layer determines the width of the network. To introduce nonlinearity in the computation between two successive layers within the model, a nonlinear function, called the activation function, acts element-wise on the output of one hidden layer, and the output of the function is passed to the next layer in the computation. For most modern neural networks, the default recommendation is to use the rectified linear unit, or ReLU \cite{Jarrett2009, Nair2010} activation, defined as $g(z) = \max\{0, z\}$. Other commonly used choices for the activation function include the $\tanh$ function or the logistic function. For a textbook review of neural networks and training algorithms, see Ref. \cite{Goodfellow2016}.
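In code, the forward computation just described reduces to a few matrix-vector products interleaved with element-wise nonlinearities; the toy sketch below (illustrative only) mirrors the architecture of Figure \ref{fig:ffnn-fig}:
\begin{verbatim}
# Toy forward pass through a two-hidden-layer FFNN (illustrative only).
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, params):
    # params = [(W1, b1), (W2, b2), (W3, b3)]; no output activation
    (W1, b1), (W2, b2), (W3, b3) = params
    a1 = relu(W1 @ x + b1)   # first hidden layer
    a2 = relu(W2 @ a1 + b2)  # second hidden layer
    return W3 @ a2 + b3      # linear output layer

rng = np.random.default_rng(0)
n, m, k = 7, 15, 6           # input/hidden/output widths used in this work
params = [(rng.normal(size=(m, n)), np.zeros(m)),
          (rng.normal(size=(m, m)), np.zeros(m)),
          (rng.normal(size=(k, m)), np.zeros(k))]
print(forward(rng.normal(size=n), params))
\end{verbatim}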
The ability of neural networks to model highly nonlinear relationships between input and output variables makes them ideal for estimating neutron star properties from the equation of state parameters. This is important because the relationships between these two quantities are expected to be nontrivial and involve multiple intermediate steps composed of nonlinear operations. Moreover, a neural network offers two major advantages over a conventional approach with traditional physics models:
\begin{enumerate}
\item An ANN can efficiently map the sample NMPs to NS properties without calculating the EoS from nuclear physics models. Similar works \cite{Wei:2020xrl, Chua:2019wwt} demonstrate that ANNs offer up to a two-fold speedup over conventional astrophysical models.
\item ANNs can also accurately capture finite nuclear information, which can be computationally expensive to verify with a traditional physics model in a Bayesian setting.
\end{enumerate}
At this step, we wish to note that other machine learning models can also be used for an identical purpose, albeit with varying degrees of success. We performed a preliminary comparison of FFNNs with other ML models, specifically linear regression, support vector regression and eXtreme Gradient Boosting (XGBoost) \cite{xgboost}. We observed that FFNNs outperformed all the other models in the study, and therefore, we chose to use FFNNs for this work.
\begin{figure}[pt]
\centering
\includegraphics{figures/ann-cartoon.pdf}
\caption{An example of a fully-connected feedforward neural network with two hidden layers. Each hidden layer consists of $m$ neurons, denoted by $a^{(l)}_{j}$, where $l$ denotes an index over the hidden layers and the index $j$ runs over the dimension of the hidden layer. This particular network accepts an $n$-dimensional input through the input layer (in green), passes the input through the hidden layers (in lavender), and produces a $k$-dimensional output at the output layer (in red). Solid lines between individual nodes
denote weighted connections which are learnt during the training procedure. Activation functions are not shown in this figure.}
\label{fig:ffnn-fig}
\end{figure}
In supervised learning instances, the data set is generally partitioned into non-overlapping training, testing, and validation sets. The training data set is used by the neural network to learn, the validation set is used to see whether the network is learning properly during the training procedure and to select the optimal values for the hyperparameters, and the testing data set is used to assess the performance of the trained model. It should be noted that the model does not see any instances from the testing data set until after the training is completed. Accordingly, we randomly partition the original generated data set following a 70\%-20\%-10\% split into a training set, a validation set and a testing set consisting of 1515, 422, and 169 instances, respectively. Before training, the features in the split data sets are standardized by removing the mean and scaling them to unit variance. Standardization of data is a common requirement for many ML algorithms, as they tend to perform poorly if the individual features are not standard normally distributed, i.e., Gaussian with zero mean and unit variance. Standardization is performed using Scikit-Learn's \cite{pedregosa11a} \texttt{StandardScaler} method. The output of the network is scaled back to the original distribution before computing the loss.
The choice of the ANN architecture, such as the number of layers and the number of neurons in each layer, represents a trade-off between the quality of fit and the overfitting problem. The model should contain sufficiently many trainable parameters to learn the ground truth function accurately. At the same time, the ANN must avoid overfitting the training data by losing its ability to generalize well. As there is no panacea for choosing an optimal model configuration, the final architecture for the ANN was chosen based on empirical tests performed on the data and selecting the set of configurations that performed best on the validation set. Our network consists of two hidden layers with 15 neurons each and uses the rectified linear unit (ReLU) activation function, which is a standard choice for most deep learning applications and is known to mitigate the vanishing gradients problem \cite{Lecun2015}. The ReLU activation function behaves as an identity function for positive inputs and saturates at 0 for negative inputs. No activation function is applied to the input or the output layer. The design of our feedforward architecture is summarized in Table \ref{tab:ANN-arch}.
The neural network is implemented and trained using the Python Keras library \cite{chollet2015keras} with a TensorFlow \cite{abadi2016tf} backend. Neural network parameters are initialized with the Glorot uniform distribution \cite{pmlr-v9-glorot10a}. We use the Adam optimizer \cite{kingma2014} to update the weights of the ANN. For training, we use an initial learning rate of $ 1 \times 10^{-3}$ and the training data is batched into batches of size 16. Training is performed for a total of 50 epochs, or until the validation loss stops decreasing. The model is trained on a 2-core Intel Xeon CPU @ 2.20GHz. The network seeks to optimize a root mean squared error (RMSE) loss between the predictions and actual neutron star properties.
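A sketch of this setup in Keras is given below (hyperparameters follow the text; loading the data and performing the split are assumed to happen elsewhere, the weight initialization is the Keras default, i.e., Glorot uniform, and, for brevity, only the inputs are standardized in this sketch):
\begin{verbatim}
# Sketch of the training setup (assumes X_train, y_train, X_val, y_val
# already exist as numpy arrays).
import tensorflow as tf
from sklearn.preprocessing import StandardScaler

def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(15, activation="relu", input_shape=(7,)),
    tf.keras.layers.Dense(15, activation="relu"),
    tf.keras.layers.Dense(6),   # 6 NS observables, no activation
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss=rmse)

# scaler = StandardScaler().fit(X_train)
# model.fit(scaler.transform(X_train), y_train,
#           validation_data=(scaler.transform(X_val), y_val),
#           batch_size=16, epochs=50,
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
\end{verbatim}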
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccc}
\hline\hline
\textbf{Layer} & \textbf{Number of neurons} & \textbf{Activation function}\\
\hline
0 (Input) & 7 & None\\
1, 2 & 15 & ReLU\\
3 (Output) & 6 & None\\
\hline\hline
\end{tabular}
\caption{Neural network architecture used in this work. The input layer consists of seven neurons corresponding to nuclear matter parameters.
In the output layer, the six neurons correspond to the six neutron star properties.}
\label{tab:ANN-arch}
\end{table}
\subsection{Bayesian statistical framework}
\label{sec:bayes}
The Bayesian statistical framework allows us to carry out a detailed analysis of the parameters of a model for a given set of fit data \cite{Wesolowski:2015fqa, Furnstahl:2015rha, Ashton2019}. The hypothesis or the prior knowledge of the model parameters and various constraints on them is encoded through the prior distributions. The technique yields a joint posterior distribution of the model parameters by updating the probability for the hypothesis using the available observational data according to Bayes' theorem. The posterior distribution of the model parameters $\boldsymbol\theta$ can be written as
\begin{equation}
P(\boldsymbol \theta|D) = \frac{\mathcal{L}(D | \boldsymbol\theta) P(\boldsymbol\theta)}{\mathcal{Z}}
\end{equation}
where $\boldsymbol\theta$ and $D$ denote the set of model parameters and the fit data, respectively. $ P(\boldsymbol\theta| D)$ is the joint posterior probability of the parameters, $\mathcal{L}(D | \boldsymbol\theta)$ is the likelihood function, $P(\boldsymbol\theta)$ is the prior for the model parameters and $\mathcal{Z}$ is the evidence. The posterior distribution of a given parameter can be obtained by marginalizing $P(\boldsymbol \theta | D)$ over the remaining parameters. The marginalized posterior distribution for a parameter $\theta_i$ is obtained as
\begin{equation}
P(\theta_i | D) = \int P(\boldsymbol\theta | D) \prod_{j \neq i} d\theta_j
\end{equation}
We use the Gaussian likelihood function defined as,
\begin{equation}
\mathcal{L}(D | \boldsymbol\theta) = \prod_{j=1}^{N_d}\left(\frac{1}{\sqrt{2\pi}\sigma_{j}}\right)\exp\left[-\frac{1}{2}\left( \frac{d_j - m_j(\boldsymbol\theta)}{\sigma_j} \right)^2\right]
\label{eq:likelihood}
\end{equation}
Here the index $j$ runs over all the data, $d_j$ and $m_j$ are the data and the corresponding model values, respectively. The model $m$ is parameterized by a set of parameters, $\boldsymbol\theta$. $\sigma_j$ are the adopted uncertainties. The evidence is used to compare the compatibility of different models with the available data. In our present work, the evidence $\mathcal{Z}$ is not relevant and thus can be ignored.
To obtain the marginalized posterior distributions of the NMPs within the Bayesian framework, we require a set of fit data, a model, and a set of priors for the nuclear matter parameters. The likelihood function for a given set of fit data is evaluated for a sample of NMPs populated according to their prior distributions. The joint probability distribution of the NMPs is obtained using the product of the likelihood function and the prior distribution. The fit data for the likelihood function is provided by the neural network for a sample drawn from the prior distribution. To compute the likelihood, we use the maximum neutron star mass $M_{\rm max}$, the maximum radius $R_{\rm max}$, the radius $R_{1.4}$, and the tidal deformability $\Lambda_{1.4}$, from the set of outputs generated by the neural network for a given input of NMPs. Instead of using a distinct value for each data point, we fix $d_j$ in Eq. (\ref{eq:likelihood}) to the mean value $\mu_j$ of the corresponding observable. Therefore, for our study, Eq. (\ref{eq:likelihood}) is modified to
\begin{equation}
\mathcal{L}(D | \boldsymbol{p}) = \prod_{j \in \{ M_{\rm max}, R_{\rm max}, R_{1.4}, \Lambda_{1.4} \}}\left(\frac{1}{\sqrt{2\pi}\sigma_{j}}\right)\exp\left[-\frac{1}{2}\left( \frac{\mu_j - \text{ANN}(\boldsymbol{p})_j}{\sigma_{j}} \right)^2\right]
\end{equation}
for a set of NMPs $\boldsymbol{p}$ and the neural network $\text{ANN}(\cdot)$ which accepts NMPs as inputs. We define the set of priors over the NMPs as a multivariate Gaussian distribution. The mean and standard deviation on each NMP in the distribution are listed in Table \ref{tab:priors}.
The calculations are performed for a set of multi-messenger observational data. Table \ref{tab:likelihood} lists the observations and the means and standard deviations used to construct the likelihood function. The data set contains the latest observational measurements of the NS maximum mass, $M_{\rm max}$, the tidal deformability, $\Lambda_{1.4}$, the radius $R_{1.4}$ for $M = 1.4M_\odot$, and the radius for the maximum NS mass, $R_{\rm max}$, along with their $1\sigma$ uncertainties. Measurements for $M_{\rm max}$ and $R_{\rm max}$ are obtained from the pulse-profile modelling of NICER data for the millisecond pulsar PSR J0740+6620 \cite{Riley:2021pdl}, while the measurement for $R_{1.4}$ is obtained from NICER and XMM-Newton data for PSR J0740+6620 \cite{Miller:2021qha}. All three observations are reported with their 68\% confidence intervals (CI). The value of the tidal deformability $\Lambda_{1.4}$, along with a 90\% CI, is obtained from the measurements of the gravitational wave signal GW170817 reported by the LIGO and Virgo collaborations \cite{LIGOScientific:2018cki}.
\begin{table}[]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccc}
\hline \hline
\textbf{Parameters} & $\boldsymbol{\mu_{p_i}}$ & $\boldsymbol{\sqrt{\Sigma_{p_i p_i}}}$ \\
\hline
$\rho_0$ & 0.16 & 0.005 \\
$e_0$ & $-$16 & 0.26 \\
$K_0$ & 230 & 40 \\
$Q_0$ & $-$100 & 200 \\
$J_{\rm sym,0}$ & 32.5 & 1.8 \\
$L_{\rm sym,0}$ & 45 & 30 \\
$K_{\rm sym,0}$ & $-$100 & 200 \\
\hline \hline
\end{tabular}
\caption{The mean value $\boldsymbol{\mu_{p_i}}$ and error $\boldsymbol{\sqrt{\Sigma_{p_i p_i}}}$ for the nuclear matter parameters $p_i$ in the prior multivariate Gaussian distribution. All quantities are in units of MeV except for $\rho_0$ which is in units of fm$^{-3}$.}
\label{tab:priors}
\end{table}
Bayesian parameter estimation is commonly carried out using Markov Chain Monte Carlo (MCMC) algorithms, which accept a proposed set of parameters with a probability proportional to the ratio of the posterior densities at the proposed and current points. However, MCMC approaches can have difficulty converging to a stable posterior. To overcome this problem, we use the dynamic nested sampling algorithm \cite{Skilling2004,higson2019,speagle2020}. In dynamic nested sampling, the posterior is broken into many nested ``slices'', maintained by a number of \texttt{n-live} points that varies dynamically as the sampling progresses; samples are generated from each slice and then recombined to construct the posterior distribution. We use the Dynesty dynamic nested sampler interfaced in BILBY \cite{Ashton2019} with 5000 \texttt{n-live} points to sample from the posterior distributions of the nuclear matter parameters. Dynesty enables flexible Bayesian inference over complex, multi-modal distributions without needing to converge to the posterior before generating valid samples \cite{speagle2020}.
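For concreteness, the sketch below shows how such a run could be set up through BILBY's interface to Dynesty. The surrogate \texttt{ann} and the arrays \texttt{MU}, \texttt{SIGMA} are the placeholders from the likelihood sketch above, and, for brevity, the prior is written as a product of independent Gaussians with the parameters of Table \ref{tab:priors}; the correlated case would instead use a multivariate Gaussian prior.
\begin{lstlisting}[backgroundcolor = \color{lightgray}, language=Python]
import numpy as np
import bilby

class NSLikelihood(bilby.Likelihood):
    """Gaussian likelihood over the four NS observables, evaluated through
    the surrogate ann (a placeholder for the trained NS-ANN)."""
    def __init__(self, ann, mu, sigma, names):
        super().__init__(parameters={n: None for n in names})
        self.ann, self.mu, self.sigma, self.names = ann, mu, sigma, names

    def log_likelihood(self):
        p = np.array([self.parameters[n] for n in self.names])
        pred = np.asarray(self.ann(p))
        return float(np.sum(-0.5 * ((self.mu - pred) / self.sigma) ** 2
                            - np.log(np.sqrt(2.0 * np.pi) * self.sigma)))

names  = ["rho0", "e0", "K0", "Q0", "Jsym0", "Lsym0", "Ksym0"]
mus    = [0.16, -16.0, 230.0, -100.0, 32.5, 45.0, -100.0]
sigmas = [0.005, 0.26, 40.0, 200.0, 1.8, 30.0, 200.0]
priors = {n: bilby.core.prior.Gaussian(mu=m, sigma=s, name=n)
          for n, m, s in zip(names, mus, sigmas)}

result = bilby.run_sampler(
    likelihood=NSLikelihood(ann, MU, SIGMA, names),  # ann, MU, SIGMA as above
    priors=priors, sampler="dynesty", nlive=5000)
\end{lstlisting}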
\begin{table}[]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccc}
\hline \hline
& $\boldsymbol\mu$ & $\boldsymbol\sigma$ \\
\hline
$M_{\rm max}$ & 2.072 & 0.066 \\
$R_{\rm max}$ & 12.35 & 0.75 \\
$R_{1.4}$ & 12.45 & 0.65 \\
$\Lambda_{1.4}$ & 255 & 130 \\ \hline\hline
\end{tabular}
\caption{Values of the mean, $\mu$, and error $\sigma$ of the observables used to construct the likelihood function. $\Lambda_{1.4}$ is dimensionless, $R_{\rm max}$ and $R_{1.4}$ are in units of km and $M_{\rm max}$ is in units of $M_{\odot}$. }
\label{tab:likelihood}
\end{table}
\section{Results}
\label{sec:results}
In this section, we present the main findings of our analysis. After selecting the best model, which we call the NS-ANN, by performing a grid search over hyperparameters such as the model depth, model width, and learning rate, we determine its performance on the test set. We wish to reemphasize that the test set is never used during the training or validation phase; evaluating the model's performance on this set quantifies the generalization capacity of the model, i.e., its predictive power on unseen data. The RMSE values obtained for each NS observable on the test set are summarized in Table \ref{tab:ann-metrics}. We also include the root mean squared relative error, as it provides a scale-independent measure of the generalization capacity and eases comparison between multiple dependent variables. In Figure \ref{fig:losscurve}, we plot the learning curve: the root mean squared error as a function of the number of elapsed epochs, i.e., the number of full passes over the training data set. We plot two learning curves for a single training instance: one for the training loss (in blue) and one for the validation loss (in red). Figure \ref{fig:losscurve} also shows a $1\sigma$ (68\% confidence interval) band centered around the training curves, computed over 10 independent runs, to indicate the variability arising from training the model on different subsets of the original data set. The training and validation losses are the loss functions computed on the training and validation sets, respectively. The former measures how well the ANN learns the training data, while the latter indicates how well the model generalizes the learnt information.
For a NS mass of 1.4$M_{\odot}$, we obtain a prediction error below 2\% for the radius $R_{1.4}$ and below 5\% for the tidal deformability $\Lambda_{1.4}$. This implies that, using the ANN, we can infer $\Lambda_{1.4}$ and $R_{1.4}$ with an average error of 19.239 and 0.194 km, respectively. The NS maximum mass, $M_{\rm max}$, and the corresponding radius, $R_{\rm max}$, are also predicted with an error below 2\%. Moreover, the saturation of the loss curves for the training and validation sets indicates that the training has converged to a minimum and that the model generalizes sufficiently well.
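For reference, the two metrics in Table \ref{tab:ann-metrics} amount to the following computation on the held-out test set; this is a sketch with \texttt{y\_true} and \texttt{y\_pred} as arrays of true and predicted values of one observable:
\begin{lstlisting}[backgroundcolor = \color{lightgray}, language=Python]
import numpy as np

def rmse_metrics(y_true, y_pred):
    """RMSE and the scale-independent relative RMSE (in percent)."""
    rmse = np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))
    rel = 100.0 * rmse / np.mean(y_true)
    return rmse, rel
\end{lstlisting}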
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccc}
\hline\hline
$\boldsymbol{\hat{y}}$ & \textbf{RMSE} & \textbf{(RMSE/$\boldsymbol{\bar{y}}$) $\boldsymbol\times \boldsymbol{100}$}\\
\hline
$M_{\rm max}$ & 0.024 & 1.1\\
$R_{\text{max}}$ & 0.088 & 0.8 \\
$R_{1.4}$ & 0.194 & 1.5 \\
$\Lambda_{1.0}$ & 123.800 & 3.7 \\
$\Lambda_{1.4}$ & 19.239 & 4.3 \\
$\Lambda_{1.8}$ & 5.344 & 8.2 \\
\hline\hline
\end{tabular}
\caption{Root Mean Squared Error (RMSE) on the test set, defined as $\sqrt{(1/N)\sum_{i=1}^{N}(\hat{y}_i - y_i)^2 }$, where $N$ is the
total number of samples in the test set. We also show the root mean squared relative error, (RMSE$/\bar{y}) \times 100$, where $\bar{y} = (1/N)\sum_{i=1}^{N}y_i$
is the mean of the true value. All quantities are dimensionless except $R_{\rm max}$ and $R_{1.4}$, which are in units of km and $M_{\rm max}$, which is in units of $M_{\odot}$.}
\label{tab:ann-metrics}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{figures/loss-curve.png}
\caption{Learning curves for the training (blue, solid) and validation (red, dashed) data sets plotted against the number of elapsed epochs. The shaded region around the solid curve indicates a $1\sigma$ or 68\% CI region around the central value computed for 10 independent training runs.}
\label{fig:losscurve}
\end{figure}
Having learnt the non-linear mapping between the empirical nuclear matter parameters and the NS observables, we must first validate the NS-ANN model before using it in the context of a Bayesian inference framework to constrain the NMPs. In Figure \ref{fig:corr-ellipse} we plot the $1\sigma$ confidence ellipses for $\Lambda_M$ versus $L_{\rm sym,0}$ and $K_{\rm sym,0}$, for NS masses $M = 1.0, 1.4,$ and $1.8 M_{\odot}$, predicted by the NS-ANN for three different pseudo-sets of NMPs. These pseudo-NMP distributions were constructed as multivariate Gaussian distributions by setting the inter-correlation between $L_{\rm sym,0}$ and $K_{\rm sym, 0}$ to $r = 0.3, 0.6$ and $0.9$, and all other parameters of the distribution to the values listed in Table \ref{tab:nmp-params}. Once again, all NMPs that generate EoSs which do not satisfy the conditions from Sec. \ref{sec:anns} are discarded. For each NMP sample in the data set, we use the NS-ANN model to generate a corresponding set of NS observables. Through this analysis, we wish to establish that a neural network is capable of replicating the underlying microphysical information conveyed by a conventional EoS model. The values of the correlation coefficients for the results illustrated in the figure are summarized in Table \ref{tab:corr-results}. For the first set, we observe that the correlations of $\Lambda_{1.0, 1.4, 1.8}$ with $K_{\rm sym, 0}$ are $\chi \sim 0.5 - 0.8$, and those with $L_{\rm sym, 0}$ are $\chi \sim 0.2 - 0.8$. For the second set, we see a non-trivial narrowing of the confidence ellipses, indicating stronger correlations of $\Lambda_{1.0}$ and $\Lambda_{1.4}$ with $L_{\rm sym, 0}$ and $K_{\rm sym, 0}$, $\chi \sim 0.7 - 0.8$, while these correlations become moderate for $\Lambda_{1.8}$. We also observe that the $\Lambda_M - L_{\rm sym, 0}$ correlations decrease with increasing NS mass, while the opposite trend is observed for the $\Lambda_M - K_{\rm sym, 0}$ correlations. Moreover, for the same NS mass, the $\Lambda_M - L_{\rm sym, 0}$ correlations are much more sensitive to the $L_{\rm sym, 0} - K_{\rm sym, 0}$ correlation than the $\Lambda_M - K_{\rm sym, 0}$ correlations are. These results emphasize that the correlations of the tidal deformability with $L_{\rm sym, 0}$ and $K_{\rm sym, 0}$ are sensitive to the physical correlations among the empirical NMPs arising from a set of physical constraints \cite{Margueron:2017eqc, Margueron:2017lup}. Additionally, the observations made in this analysis are in excellent agreement with results from previous studies of the effect of correlations of the tidal deformability with the slope of the symmetry energy and its curvature \cite{Malik:2020vwo, Fattoyev:2017jql, Ferreira:2019bgy}.
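The pseudo-sets used above can be reproduced along the following lines; this is a sketch in which the hypothetical \texttt{mu} and \texttt{sig} arrays carry the distribution parameters, and \texttt{i}, \texttt{j} index $L_{\rm sym,0}$ and $K_{\rm sym,0}$:
\begin{lstlisting}[backgroundcolor = \color{lightgray}, language=Python]
import numpy as np

def correlated_nmp_samples(mu, sig, i, j, r, n=50_000, seed=1):
    """Multivariate Gaussian NMP samples with correlation r between
    parameters i and j; all other parameters are left uncorrelated."""
    cov = np.diag(np.asarray(sig, dtype=float) ** 2)
    cov[i, j] = cov[j, i] = r * sig[i] * sig[j]
    return np.random.default_rng(seed).multivariate_normal(mu, cov, size=n)

# Pearson correlation of a predicted observable (e.g. Lambda_1.4 from the
# surrogate) with one input parameter:
# chi = np.corrcoef(samples[:, i], lam14)[0, 1]
\end{lstlisting}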
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccccc}
\hline\hline
$r$ & & $\Lambda_{1.0}$ & $\Lambda_{1.4}$ & $\Lambda_{1.8}$\\
\hline
\multirow{2}{*}{0.3} & $L_{\rm sym, 0}$ & 0.81 & 0.56 & 0.23\\
& $K_{\rm sym, 0}$ & 0.57 & 0.78 & 0.81\\
\hline
\multirow{2}{*}{0.6} & $L_{\rm sym, 0}$ & 0.86 & 0.69 & 0.43\\
& $K_{\rm sym, 0}$ & 0.69 & 0.80 & 0.79\\
\hline
\multirow{2}{*}{0.9} & $L_{\rm sym, 0}$ & 0.91 & 0.84 & 0.71\\
& $K_{\rm sym, 0}$ & 0.86 & 0.86 & 0.79\\
\hline\hline
\end{tabular}
\caption{The values of the correlation coefficients for $\Lambda_{1.0, 1.4, 1.8}$ with $L_{\rm sym, 0}$ and $K_{\rm sym, 0}$ for three different $L_{\rm sym, 0}-K_{\rm sym, 0}$ correlation coefficients, $r$.}
\label{tab:corr-results}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.35]{figures/confidence-ellipse.png}
\caption{The 1$\sigma$ confidence ellipses in the planes of $\Lambda_M$ - $L_{\rm sym,0}$ (top) and $\Lambda_M$ - $K_{\rm sym, 0}$ (bottom) with $M = 1.0, 1.4$ and $1.8 M_{\odot}$ generated by the NS-ANN model for the correlation coefficient, $r$, between $L_{\rm sym,0}$ and $K_{\rm sym, 0}$ set to $0.3, 0.6$ and $0.9$.}
\label{fig:corr-ellipse}
\end{figure}
Next, we perform a statistical analysis of the NMPs within the Bayesian framework using the trained NS-ANN model in light of recent observational data. The likelihood and fit data used for this analysis are defined in Sec. \ref{sec:bayes}. We perform the analysis for two different scenarios of the prior: (i) an uncorrelated multivariate Gaussian distribution of NMPs, and (ii) the same distribution with the $L_{\rm sym,0}-K_{\rm sym,0}$ correlation coefficient, $r$, set to 0.9. Figure \ref{fig:corner-plot} presents the corner plots obtained for the posterior distributions of the NMPs for both cases of the prior. The labels $r = 0$ (in salmon) and $r = 0.9$ (in violet) represent cases (i) and (ii), respectively. The one-dimensional marginalized posterior distributions of the NMPs are plotted along the diagonal; vertical lines represent $1\sigma$ confidence intervals. Two-dimensional marginalized posterior distributions for individual pairs of NMPs are shown in the off-diagonal plots. The different tonalities of the confidence ellipses, from light to dark, indicate the 1$\sigma$, 2$\sigma$, and 3$\sigma$ confidence intervals. In Table \ref{tab:bayes-result} we report the median and the minimum and maximum values of the NMPs for the 50\%, 68\%, and 90\% confidence intervals obtained for both cases of the prior. Some conclusions can be drawn from the obtained posterior distributions. We observe that if the symmetry energy slope $L_{\rm sym,0}$ and its curvature parameter $K_{\rm sym,0}$ are correlated, then the present observational constraints can further change the median values of these parameters by 8.98\% and $-$32.47\%, respectively, and their corresponding $1\sigma$ (68\% CI) regions narrow by 23.50\% and 24.52\%, respectively. This correlation typically arises from finite-nuclei constraints. We also note that the nuclear matter parameters pertaining to symmetric nuclear matter are affected by the presence of an inter-correlation between $L_{\rm sym,0}$ and $K_{\rm sym,0}$. This can be understood as follows: for a given NS matter EoS, there are degeneracies between the symmetric nuclear matter EoS and the symmetry energy, so the uncertainties in the higher-order NMPs pertaining to the symmetry energy propagate to the higher-order NMPs of the symmetric nuclear matter EoS. In the future, a systematic study of the uncertainty of the NMPs with stricter pseudo-observational constraints is required to precisely infer the effects of these errors.
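The entries of Table \ref{tab:bayes-result} follow from the posterior samples by taking equal-tailed intervals; a minimal sketch:
\begin{lstlisting}[backgroundcolor = \color{lightgray}, language=Python]
import numpy as np

def summarize(samples, ci=68.0):
    """Median and equal-tailed bounds of a given confidence interval
    for the 1-D marginal posterior samples of one NMP."""
    lo, hi = np.percentile(samples, [50.0 - ci / 2.0, 50.0 + ci / 2.0])
    return np.median(samples), lo, hi
\end{lstlisting}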
\begin{figure}
\centering
\includegraphics[scale=0.35]{figures/corner-plot.png}
\caption{Corner plots for the marginalized posterior distribution of the nuclear matter parameters obtained from the Taylor expansion of the EoS for two different values of the Pearson correlation coefficient, $r$, between $L_{\rm sym, 0}$ and $K_{\rm sym, 0}$ set to $0.0$ and $0.9$. The one dimensional posterior distributions are plotted along the diagonal. The vertical lines indicate $1\sigma$ confidence intervals on the nuclear matter parameters and the different tonalities from light to dark indicate the $1\sigma$, $2\sigma$, and $3\sigma$ confidence intervals, respectively.}
\label{fig:corner-plot}
\end{figure}
\begin{table*}[]
\centering
\setlength{\tabcolsep}{4.5pt}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccccccccc}
\hline \hline
\multirow{3}{*}{$\boldsymbol{r}$} & \multirow{3}{*}{\textbf{NMP}} & \multirow{3}{*}{\textbf{Median}} & \multicolumn{6}{c}{\textbf{Confidence Interval (CI)}} \\ \cline{4-9}
& & & \multicolumn{2}{c}{$\boldsymbol{50\%}$} & \multicolumn{2}{c}{$\boldsymbol{68\%}$} & \multicolumn{2}{c}{$\boldsymbol{90\%}$} \\
& & & \textbf{Min} & \textbf{Max} & \textbf{Min} & \textbf{Max} & \textbf{Min} & \textbf{Max} \\
\hline
\multirow{7}{*}{0} & $\rho_0$ & $0.161$ & $0.157$ & $0.164$ & $0.156$ & $0.165$ & $0.153$ & $0.168$ \\
& $e_0$ & $-15.99$ & $-16.15$ & $-15.84$ & $-16.23$ & $-15.77$ & $-16.37$ & $-15.62$ \\
& $K_0$ & $210.01$ & $193.48$ & $229.44$ & $186.25$ & $239.16$ & $169.97$ & $261.43$ \\
& $Q_0$ & $-370.18$ & $-513.05$& $-206.03$ & $-584.49$ & $-125.57$ & $-716.81$ & $29.85$ \\
& $J_{\rm sym,0}$ & $32.80$ & $31.59$ & $33.96$ & $31.09$ & $34.45$ & $30.01$ & $35.53$ \\
& $L_{\rm sym,0}$ & $54.34$ & $44.94$ & $63.27$ & $40.57$ & $67.58$ & $31.88$ & $77.28$ \\
& $K_{\rm sym,0}$ & $40.94$ & $-25.18$ & $98.39$ & $-70.95$ & $124.79$ & $-155.10$ & $189.68$ \\
\hline
\multirow{7}{*}{0.9} & $\rho_0$ & $0.161$ & $0.158$ & $0.164$ & $0.156$ & $0.165$ & $0.154$ & $0.168$ \\
& $e_0$ & $-16.00$ & $-16.16$ & $-15.83$ & $-16.24$ & $-15.76$ & $-16.39$ & $-15.62$ \\
& $K_0$ & $217.62$ & $202.30$ & $233.90$ & $194.77$ & $242.05$ & $179.27$ & $260.65$ \\
& $Q_0$ & $-385.28$ & $-517.08$ & $-247.22$ & $-581.46$ & $-169.05$ & $-716.75$ & $-33.48$ \\
& $J_{\rm sym,0}$ & $32.82$ & $31.63$ & $33.99$ & $31.13$ & $34.53$ & $30.13$ & $35.54$ \\
& $L_{\rm sym,0}$ & $59.22$ & $52.55$ & $66.58$ & $49.26$ & $69.92$ & $42.55$ & $77.18$ \\
& $K_{\rm sym,0}$ & $27.67$ & $-22.90$ & $74.54$ & $-51.84$ & $95.89$ & $-126.78$ & $142.13$ \\
\hline
\hline
\end{tabular}
\caption{The median and the minimum (min) and maximum (max) values of the associated 50\%, 68\%, and 90\% confidence intervals obtained for the marginalized posterior distributions of the NMPs for two multivariate Gaussian priors with parameters as listed in Table \ref{tab:priors} and data from Table \ref{tab:likelihood}. $r\in\{0,0.9\}$ is the Pearson correlation coefficient between $L_{\rm sym,0}$ and $K_{\rm sym,0}$ in the prior. All quantities are in units of MeV except for $\rho_0$ which is in units of fm$^{-3}$.}
\label{tab:bayes-result}
\end{table*}
\section{Conclusions}
\label{sec:conclusion}
In this work, we have demonstrated the application of ANNs to predict NS observables such as the radius, mass, and tidal deformability from a set of seven parameters, the nuclear saturation properties that characterize the equation of state of cold dense matter. We have shown that a neural network which models such a mapping is able to learn the microphysical information of finite nuclei, namely the inter-correlations arising between the NMPs. Based on similar work and empirical observations, we also expect this mapping to be computationally efficient. We have then applied the trained ANN model in a Bayesian setting and studied the effect of correlations between the slope of the symmetry energy and its curvature on the confidence intervals of the NMPs themselves. This study demonstrates that, in situations where speed and computational efficiency are desired, a trained neural network can function as a surrogate for traditional physics-based EoS models, and possibly also provide insights that might be difficult to obtain otherwise. However, it is essential to clarify that this methodology only complements other approaches, and does not seek to replace them in any form.
Let us briefly summarize what we have done in this work. First, we generate a set of pseudo-NMP data. A suitable set of NMPs is chosen so that the resulting neutron star EoSs are consistent with the currently observed maximum mass of $\sim 2M_{\odot}$ and satisfy causality constraints. Next, we train an artificial neural network on this data set to learn a non-linear mapping between the set of nuclear matter parameters and the NS observables. We demonstrate that the ANN is capable of inferring, with reasonable accuracy, NS observables from empirical parameters, in contrast to conventional physics models, which require the computation of an EoS followed by tedious equation-solving. This NS-ANN model is then validated to ensure that it respects the microphysics of finite nuclei. Specifically, we study whether a trained ANN model is able to capture correlations between the tidal deformability $\Lambda_{1.4}$ for a NS with mass $1.4M_{\odot}$ and the symmetry energy slope $L_{\rm sym,0}$ as well as its curvature $K_{\rm sym,0}$. We find that the NS-ANN model learns a mapping that is sensitive to $L_{\rm sym,0} - K_{\rm sym,0}$ correlations, in agreement with previous studies. Using the Bayesian inference framework, we find that, in the presence of correlations between the symmetry energy slope and its curvature, recent astrophysical data can modify these NMPs further by 8.98\% and $-$32.47\%, respectively.
Presently, our framework is a proof-of-concept that demonstrates the applicability of ANNs to NS physics. For a more realistic application of our framework, empirical uncertainties ought to be considered. This can be achieved, for example, by using a class of ANNs called Bayesian neural networks to perform the inference task. Bayesian networks cast the problem into a probabilistic setting by inferring probability distributions, rather than point estimates, for the observables given a prior over the NMP inputs. A model which predicts uncertainties may also further reveal the effects of NS observational constraints on the NMPs. Presently, the validity of the ANN over its entire input domain cannot be automatically ensured: owing to a lack of training data in some parts of the modelled output space, the ANN might be unable to predict accurately in regions that it has not learnt.
This work is limited to hadronic NS compositions, i.e., the set of $\beta$-equilibrated EoSs employed in this work is composed of neutrons, protons, electrons, and muons. Present observational constraints on NS properties cannot rule out the possibility of exotic degrees of freedom or deconfined quark phases inside the NS core. In future work, a detailed and systematic analysis with an ANN trained on a set of EoSs with different particle compositions is required to investigate the uncertainties on higher-order NMPs with the available observational constraints. The ANN map can also be extended to determine the number of NS observations, and the precision, required to establish the existence of quark phases inside the NS core.
\section*{Data availability}
The generated data set of filtered NMPs and corresponding NS observables used to train the neural network, the trained NS-ANN model, and the sampled NMP posterior distribution used to generate Figure \ref{fig:corner-plot} are made publicly available through a GitHub repository. \footnote{\url{https://github.com/ameya1101/NS-ANN}}
\section*{Acknowledgements}
T.M. would like to thank FCT (Fundação para a Ciência e a Tecnologia, I.P., Portugal) for support through national funds under the Projects No. UID/\-FIS/\-04564/\-2019, No. UIDP/\-04564/\-2020, No. UIDB/\-04564/\-2020, and No. POCI-01-0145-FEDER-029912, the latter with financial support from FEDER. The authors acknowledge the Laboratory for Advanced Computing at the University of Coimbra for providing {HPC} resources that have contributed to the research results reported within this paper.
\section{Introduction}
Let $S$ be a set of $n$ points in $\mathbb{R}^2$,
and let $U$ be the union of the unit discs
centered at the points of $S$. We would like
to maintain the boundary $\partial U$ of $U$,
as new points are added to $S$.
Even for discs of varying radii, the
complexity of $\partial U$ is
$O(n)$~\cite{DBLP:journals/dcg/KedemLPS86},
and it can be computed in $O(n\log n)$ time
using \emph{power diagrams}~\cite{Aurenhammer1988}. An incremental
algorithm~\cite{spirakis1983very} can maintain $\partial U$ in a total of $O(n^2)$ time. This is worst-case optimal,
as the overall complexity of the structural
changes to $\partial U$ under $n$ insertions
may be $\Omega(n^2)$; see Figure~\ref{f:worst_case_example}.
Here,
we describe in Section~\ref{sec:union_maintain}
an output-sensitive algorithm that uses $O(n)$
space and updates $\partial U$ in
$O(k\log^2 n)$ time per
insertion of a disc, where $k$ is the combinatorial
complexity of the structural changes to
$\partial U$ due to the insertion.
Some of our ideas resemble those of de
Berg et al.~\cite{de2016fine}, who present
a semi-dynamic (insertion only) point-location
data structure for $U$.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\textwidth]{worst_case_example.png}
\caption{An example where $n$ insertions of unit discs in $\mathbb{R}^2$ cause $\Omega(n^2)$ changes on the boundary of their union.
We first insert $\frac{n}{2}$ discs (in black) with equidistant centers lying on an imaginary circle (in green) of radius $2$, whose center is denoted by $c$. We then insert a unit disc (in red), centered at $c$. We then insert the rest of the unit discs (in blue) incrementally, such that the center of the $i$th inserted blue disc is $\epsilon i$ above the point $c$, where $\epsilon>0$ is some small constant.}
\label{f:worst_case_example}
\end{figure}
The efficient manipulation of collections
of unit discs is a widely and frequently studied
topic, for example in the context of sensor networks,
where every disc represents the area covered by a
sensor.
Here, we are motivated by multi-agent coverage of a
region in search of a
target~\cite{bat-coverage-2018}, where we
investigate the pace of coverage and wish to
estimate at each stage the portion of the overall
area covered up to a certain point in time.
Since the simulation is discretized (i.e.,
each agent is modeled by a unit disc whose
motion is simulated by changing its location
at fixed time steps), we can apply the
structure above to update the area of the
union within the same time bound.
We give more details
in Section~\ref{sec:union_maintain}.
A set of pseudo-lines in the plane is a set of
infinite $x$-monotone curves each pair of which
intersects at exactly one point. Arrangements of
pseudo-lines have been intensively studied in
discrete and computational geometry;
see the recent survey on arrangements~\cite{hs-a-18}
for a review of combinatorial bounds and algorithms
for arrangements of pseudo-lines.
At the heart of our solution to the dynamic
maintenance of $U$ lies an efficient data structure
for the following problem: Given $n$
pseudo-lines in the plane, dynamically maintain their
lower envelope such that one can efficiently answer
vertical ray shooting queries from $y=-\infty$. Here,
the dynamization allows insertions and deletions.
For the case of lines (rather than pseudo-lines),
there are several efficient data
structures to choose from~\cite{DBLP:conf/focs/BrodalJ02,DBLP:journals/jacm/Chan01,Overmars1981,BrodalJ00,KaplanTT01}; these
are, however, not directly applicable for pseudo-lines.
Also, there are powerful general structures based on shallow
cuttings~\cite{DBLP:journals/algorithmica/AgarwalM95,DBLP:journals/jacm/Chan10,DBLP:conf/soda/KaplanMRSS17}.
These structures
can handle general families of algebraic curves of bounded
description complexity and typically also work in $\mathbb{R}^3$.
However, the additional flexibility comes at a cost:
the algorithms are quite involved, the performance
guarantees are in the expected and amortized sense, and the
operations have (comparatively) large polylogarithmic
running times. For pseudo-lines, Chan's
method~\cite{DBLP:journals/jacm/Chan10}, with
improvements by Kaplan et
al.~\cite{DBLP:conf/soda/KaplanMRSS17}, yields
$O(\log^3 n)$ amortized expected insertion time,
$O(\log^5 n)$ amortized expected deletion time, and
$O(\log^2 n)$ worst-case query time.
The solution that we propose here is, however, considerably simpler and more efficient:
We devise a fully dynamic data structure with $O(\log^2 n)$ worst-case update-time, $O(\log n)$ worst-case ray-shooting query-time, and $O(n)$ space.
Additionally, we describe how to find all pseudo-lines
below a given query point in $O(\log n + k\log^2 n)$ time,
where $k$ is the output size.
The structure is an adaptation of the Overmars-van Leeuwen
structure~\cite{Overmars1981}, matching the
performance of the original structure for the case of lines.
The key innovation is a new algorithm for finding the
intersection between two lower envelopes of planar pseudo-lines
in $O(\log n)$ time, using \emph{tentative} binary search (where each pseudo-line in one envelope is ``smaller'' than every pseudo-line in the other envelope,
in a sense to be made precise below).
To the best of our knowledge this is the most efficient data structure for the case of pseudo-lines to date.
For our solution to the union-maintenance problem, we
need to answer intersection-searching queries of the form:
Given the collection ${\cal C}$ of unit-radius circular arcs
that comprise $\partial U$ and a query unit disc $D$,
report the arcs in ${\cal C}$ intersecting $D$. This problem is a special case of the intersection searching problem in which we wish to preprocess a set of geometric objects into a data structure so that the set of objects intersected by a query object can be reported efficiently.
Intersection-searching queries are typically answered using multi-level partition trees; see the recent survey~\cite{a-rs-18} for a comprehensive review.
Our final result is a data structure for the intersection-searching problem in which the input objects are arbitrary unit-radius circular arcs rather than arcs
forming the boundary of the union of the unit discs, and the query is a unit disc.
We present a linear-size data structure with $O(n \log n)$ preprocessing
time, $O(n^{1/2+\delta} + \ell)$ query time and $O(\log^2 n)$
amortized update time, where $\ell$ is the size of the output
and $\delta>0$ is a small constant.
\section{Dynamic lower envelope for pseudo-lines}
\label{sec:dynamic_lower_env}
We describe a data structure to dynamically
maintain the lower envelope of an arrangement
of planar pseudo-lines under insertions and
deletions.
Even though we present our data structure for pseudo-lines,
it holds for more general classes of planar curves; see below.
\subsection{Preliminaries}
Let $E$ be a planar family of pseudo-lines, and
let $\ell$ be a vertical line strictly to the left of
the first intersection point in $E$. The line $\ell$
defines a total order $\leq$ on the pseudo-lines in $E$,
namely for $e_1, e_2 \in E$, we have $e_1 \leq e_2$ if and
only if $e_1$ intersects $\ell$ below $e_2$. Since each
pair of pseudo-lines in $E$ crosses exactly once, it follows
that if we consider a vertical line $\ell'$ strictly to the
right of the last intersection point in $E$, the order
of
the intersection points between $\ell'$ and $E$, from
bottom to top, is exactly reversed.
The \emph{lower envelope} $\mathcal{L}(E)$ of $E$ is the
$x$-monotone curve obtained by taking the pointwise
minimum of the pseudo-lines in $E$. Combinatorially, the
lower envelope $\mathcal{L}(E)$ is a sequence of connected
segments of the pseudo-lines in $E$, where the first
and last segment are unbounded. Two properties
are crucial for our data structure: (A) every pseudo-line
contributes at most one segment to $\mathcal{L}(E)$; and (B) the
order of these segments corresponds exactly to the
order $\leq$ on $E$ defined above.
In fact, our data structure works for every set of planar curves
with properties (A) and (B) (with an appropriate
order $\leq)$, even if they
are not pseudo-lines in the strict sense; this fact will
prove useful in Section~\ref{sec:union_maintain} below.
We assume a computational model in which primitive
operations on pseudo-lines, such as computing the
intersection point of two pseudo-lines or determining
the intersection point of a pseudo-line
with a vertical line
can be performed in constant time.
\subsection{Data structure and operations}
\subparagraph{The tree structure.}
Our primary data structure is a balanced binary
search tree $\Xi$.
Such a tree data structure supports insert and delete, each
in $O(\log n)$ time.
The leaves of $\Xi$ contain
the pseudo-lines, from left to right in the sorted order defined above.
An internal node $v \in \Xi$ represents the
lower envelope of the pseudo-lines in its subtree.
More precisely, every leaf $v$ of $\Xi$ stores a
single pseudo-line $e_v \in E$. For an inner node $v$
of $\Xi$, we write $E(v)$ for the set of
pseudo-lines in the subtree rooted at $v$.
We denote the lower envelope of $E(v)$ by
$\mathcal{L}\big(v\big)$.
The inner node $v$ has the following variables:
\begin{itemize}
\item $f$, $\ell$, $r$: pointers to the parent,
the left child, and the right child of $v$, respectively;
\item $\max$: the \emph{last} pseudo-line in $E(v)$ (with respect to the order $\leq$ defined above);
\item $\Lambda$:
a balanced binary search tree that stores the
prefix or suffix of $\mathcal{L}(v)$ that
is not on the lower envelope
$\mathcal{L}(f)$ of the parent (in the root,
we store the lower envelope of $E$).
The leaves of $\Lambda$ store the pseudo-lines that support the
segments on the lower envelope,
with the
endpoints of the segments, sorted from left to right.
An inner node of $\Lambda$ stores the common point of the last segment
in the left subtree and the first segment in the right subtree.
We will need split and join operations on these binary trees, which can be implemented in $O(\log n)$ time; a sketch of the resulting node layout appears after this list.
\end{itemize}
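To make the layout concrete, the following is a minimal sketch of a node of $\Xi$; the field names mirror the variables above, and the $\Lambda$ tree itself would be any balanced BST supporting split and join:
\begin{lstlisting}[backgroundcolor = \color{lightgray}, language=Python]
class EnvelopeNode:
    """One node of Xi. A leaf stores a single pseudo-line; an inner node
    stores the part of L(v) hidden by its parent as a balanced BST of
    envelope segments (the Lambda tree)."""
    __slots__ = ("f", "l", "r", "max_line", "env", "line")

    def __init__(self, line=None):
        self.f = self.l = self.r = None   # parent, left child, right child
        self.max_line = line              # last pseudo-line of E(v) in <=
        self.env = None                   # Lambda: part of L(v) not on L(f)
        self.line = line                  # the pseudo-line (leaves only)
\end{lstlisting}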
\subparagraph*{Queries.}
We now describe the query operations available
on our data structure.
In a \emph{vertical ray-shooting query}, we are
given a value $x_0 \in \mathbb{R}$,
and we would like to find the pseudo-line
$e \in E$ where the vertical
line $\ell: x = x_0$ intersects $\mathcal{L}(E)$.
Since the root of $\Xi$ explicitly stores
$\mathcal{L}(E)$ in a balanced binary search tree,
this query can be answered easily
in $O(\log n)$ time.
\begin{restatable}{restatable_lemma}{verticalRs}
\label{lem:vertical_rs}
Let $\ell: x = x_0$ be
a vertical ray shooting query.
We can find the pseudo-line(s) where
$\ell$ intersects $\mathcal{L}(E)$ in $O(\log n)$
time.
\end{restatable}
\begin{proof}
Let $r$ be the root of $\Xi$. We perform an
explicit search for $x_0$ in
$r.\Lambda$ and return the result. Since $r.\Lambda$
is a balanced binary search tree, this takes
$O(\log n)$ time.
\end{proof}
\begin{restatable}{restatable_lemma}{multiVerticalRs}
Let $q \in \mathbb{R}^2$. We can
report all pseudo-lines in $E$ that
lie below $q \in \mathbb{R}^2$ in total time
$O(\log n + k \log^2 n)$, where $k$
is the output size.
\end{restatable}
\begin{proof}
Let $q_x$ be the $x$-coordinate of $q$.
We do a vertical ray shooting query for
$q_x$ and use Lemma~\ref{lem:vertical_rs}
to determine the pseudo-line $e$ where
the vertical line through $q_x$ intersects
$\mathcal{L}(E)$. If $q$ is below $e$, we are done.
Otherwise, we store $e$ in the result set,
and we delete $e$ from $\Xi$.
We repeat until $\Xi$ is empty
or until $q$ is below the current lower
envelope. Then, we reinsert all elements
in the result set to restore the original
set in $\Xi$.
Overall, we need $k + 1$ ray shooting queries,
$k$ deletions, and $k$ insertions. By
Lemma~\ref{lem:vertical_rs},
one ray shooting query needs $O(\log n)$ time, and below
we show that an update operation requires
$O(\log^2 n)$ time.
\end{proof}
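In code, the reporting procedure is the following loop; this is a sketch assuming the tree exposes the operations above as \texttt{shoot} (vertical ray shooting), \texttt{delete} and \texttt{insert}, and that each pseudo-line can be evaluated at an abscissa:
\begin{lstlisting}[backgroundcolor = \color{lightgray}, language=Python]
def report_below(tree, q):
    """Report all pseudo-lines lying below the point q by repeated ray
    shooting, deletion and reinsertion; q has fields x and y."""
    hit = []
    while not tree.empty():
        e = tree.shoot(q.x)        # pseudo-line of the envelope at x = q.x
        if q.y < e.eval(q.x):      # q is below the current envelope: done
            break
        hit.append(e)
        tree.delete(e)             # expose the next layer of the envelope
    for e in hit:                  # restore the original set
        tree.insert(e)
    return hit
\end{lstlisting}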
\subparagraph*{Update.}
To insert or delete a pseudo-line
$e$ in $\Xi$, we follow
the method of Overmars and van
Leeuwen~\cite{Overmars1981}.
We delete or insert a leaf for $e$ in $\Xi$
using standard binary search tree techniques (the $v.\max$ pointers
guide the search in $\Xi$). As we go down,
we construct the lower envelopes for
the nodes hanging off the search path, using
split and join operations on the $v.\Lambda$ trees. Going
back up, we recompute the information $v.\Lambda$ and
$v.\max$.
To update the $v.\Lambda$ trees, we need the following
operation: given two lower envelopes $\mathcal{L}_\ell$
and $\mathcal{L}_r$, such that all pseudo-lines in $\mathcal{L}_\ell$
are smaller than all pseudo-lines in $\mathcal{L}_r$,
compute the intersection point $q$ of $\mathcal{L}_\ell$
and $\mathcal{L}_r$. In the next section, we see
how to do this in $O(\log n)$ time,
where $n$ is the size of $E$.
Since there are $O(\log n)$ nodes in $\Xi$
affected by an update, this procedure takes
$O(\log^2 n)$ time. More details can be found
in~\cite{Overmars1981,PreparataSh85}.
\begin{lemma}
It takes $O(\log^2 n)$ time to
insert or remove a pseudo-line in $\Xi$.
\end{lemma}
\subsection{Finding the intersection point of two lower envelopes}
Given two lower envelopes $\mathcal{L}_\ell$
and $\mathcal{L}_r$ such that all pseudo-lines
in $\mathcal{L}_\ell$ are smaller than all
pseudo-lines in $\mathcal{L}_r$, we would like to
find the
intersection point $q$ between $\mathcal{L}_\ell$
and $\mathcal{L}_r$ in $O(\log n)$ time. We assume that $\mathcal{L}_\ell$
and $\mathcal{L}_r$ are represented as balanced
binary search trees. The leaves of $\mathcal{L}_\ell$
and $\mathcal{L}_r$ store the pseudo-line segments
on the lower envelopes, sorted from left to
right.
We assume that the pseudo-line segments in the
leaves are half-open, containing
their right, but not their left endpoint in
$\mathcal{L}_\ell$; and their left, but not their
right endpoint in $\mathcal{L}_r$.\footnote{We
actually store both endpoints in the trees,
but the intersection algorithm uses only one
of them, depending on the role the tree plays in the
algorithm.}
Thus, it is
uniquely determined which leaves of $\mathcal{L}_\ell$
and $\mathcal{L}_r$ contain the intersection point $q$.
A leaf $v$ stores the pseudo-line $\mathcal{L}(v)$
that supports the segment for $v$,
as well as an endpoint $v.p$ of the segment,
namely the left endpoint
if $v$ is a leaf of $\mathcal{L}_\ell$,
and the right endpoint if
$v$ is a leaf of $\mathcal{L}_r$.\footnote{If the
segment is unbounded, the endpoint
might not exist. In this case, we use
a symbolic endpoint at infinity that
lies below every other pseudo-line.}
An inner node $v$
stores the intersection point $v.p$ between the
last pseudo-line in the left subtree
$v.\ell$ of $v$ and the first pseudo-line
in the right subtree $v.r$ of $v$, together with
the lower envelope $\mathcal{L}(v)$ of these two
pseudo-lines. These trees can be obtained by
appropriate split and join operations from the
$\Lambda$ trees stored in $\Xi$.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]
{hard_case_pseudo_lines2.png}
\caption{An example of Case~3.
$\mathcal{L}_\ell$ is blue; $\mathcal{L}_r$ is red.
The solid pseudo-lines are fixed.
The dashed pseudo-lines are optional, namely, either
none of the dashed pseudo-lines
exists or exactly one of them exists. $u.p$
and $v.p$ are the current points;
and Case~3 applies.
Irrespective of the local
situation at $u$ and $v$, the intersection point
can be to the left of $u.p$, between $u.p$ and
$v.p$ or to the right of $v.p$, depending
on which one of the dashed pseudo-lines
exists.}
\label{f:hard_case_pseudo_lines}
\end{figure}
Let $u^* \in \mathcal{L}_\ell$ and
$v^* \in \mathcal{L}_r$ be the leaves whose segments
contain $q$. Let
$\pi_\ell$ be the path in $\mathcal{L}_\ell$
from the root to $u^*$ and
$\pi_r$
the path in $\mathcal{L}_r$ from the root to $v^*$.
Our strategy is as follows: we simultaneously
descend in $\mathcal{L}_\ell$ and in $\mathcal{L}_r$.
Let $u$ be the current node in $\mathcal{L}_\ell$
and $v$ the current node in $\mathcal{L}_r$.
In each step, we perform a local test on
$u$ and $v$ to decide how to proceed.
There are three possible outcomes:
\begin{enumerate}
\item $u.p$ is on or above $\mathcal{L}(v)$: the
intersection point $q$
is equal to or to the left of $u.p$. If $u$ is
an inner node, then $u^*$ cannot lie in $u.r$;
if $u$ is a leaf, then $u^*$ lies strictly
to the left of $u$;
\item $v.p$ lies on or above $\mathcal{L}(u)$:
the intersection point $q$
is equal to or to the right of $v.p$.
If $v$ is an inner node, then $v^*$ cannot lie in
$v.\ell$; if $v$ is a leaf, then $v^*$ lies
strictly to the right of $v$;
\item $u.p$ lies below $\mathcal{L}(v)$ and $v.p$ lies
below $\mathcal{L}(u)$: then, $u.p$ lies
strictly to the left of $v.p$ (since
we are dealing with pseudo-lines). It must be
the case that $u.p$ is strictly to the left of
$q$ or $v.p$ is strictly to the right of $q$ (or both).
In the former case, if $u$ is an inner node,
$u^*$ lies in or to the right of $u.r$ and if $u$ is
a leaf, then $u^*$
is $u$ or a leaf to the right of $u$. In the
latter case, if $v$ is an inner node, $v^*$ lies in
or to the left of
$v.\ell$ and if $v$ is a leaf, then $v^*$ is $v$
or a leaf to the left of $v$; see Figure~\ref{f:hard_case_pseudo_lines}.
\end{enumerate}
Although it is clear how to proceed in the first two cases, it is not immediately
obvious how to proceed in the third case, because the correct step
might be either to go to $u.r$ or to $v.\ell$.
In the case of lines, Overmars and van Leeuwen
can solve this ambiguity by comparing the slopes
of the relevant lines. For pseudo-lines, however,
this does not seem to be possible. For an example,
refer to Figure~\ref{f:hard_case_pseudo_lines}, where
the local situation at $u$ and $v$ does not determine
the position of the intersection point $q$. Therefore,
we present an alternative strategy.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.85]{invariant.pdf}
\caption{The invariant:
the current search nodes are $u$ and $v$.
\texttt{uStack} contains all nodes on the
path from the root to $u$ where the path goes to a right
child (orange squares), \texttt{vStack} contains all
nodes from the root to $v$ where the path goes to a left child (orange squares). The final leaves $u^*$ and $v^*$ are in one of the
gray subtrees; at least one of them is under $u$ or under $v$.}
\label{fig:invariant}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.67]{case3.pdf}
\caption{Comparing $u$ to $v$: in Case~3,
we know that $u^*$ is in $u.r$ or $v^*$ is in $v.\ell$; we go to
$u.r$ and to $v.\ell$.}
\label{fig:case3}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.67]{case1.pdf}
\caption{Comparing $u$ to $v$:
in Case~1, we know that
$u^*$ cannot be in $u.r$.
We compare $u'$ and $v$ to decide
how to proceed:
in Case~1, we know that $u^*$ cannot be in $u'.r$; we go to
$u'.\ell$; in Case 2, we know that $u^*$ cannot be in $u.r$ and that
$v^*$ cannot be in $v.\ell$; we go to $u.\ell$ and to $v.r$; in Case~3,
we know that $u^*$ is in $u'.r$ (and hence in $u.\ell$) or in $v.\ell$;
we go to $u.\ell$ and to $v.\ell$. Case~2 is not
shown as it is symmetric.}
\label{fig:case1}
\end{figure}
We will maintain
the invariant that the subtree at $u$ contains $u^*$ or the
subtree at $v$ contains $v^*$ (or both). When comparing
$u$ and $v$, one of the three
cases occurs. In Case~3, $u^*$ must be
in $u.r$, or $v^*$ must be in $v.\ell$;
see Figure~\ref{fig:case3}.
We move $u$ to $u.r$
and $v$ to $v.\ell$. One of these moves must be correct,
but the other move might be mistaken: we might have gone
to $u.r$ even though $u^*$ is in $u.\ell$ or to $v.\ell$
even though $v^*$ is in $v.r$. To correct this,
we remember the current $u$ in a stack \texttt{uStack} and
the current $v$ in a stack \texttt{vStack}, so that we can
revisit $u.\ell$ or $v.r$ if it becomes necessary. This leads
to the general situation shown in Figure~\ref{fig:invariant}:
$u^*$ is below $u$ or in a left subtree of a node
on $\texttt{uStack}$, and $v^*$ is below $v$ or in a right
subtree of a node on $\texttt{vStack}$, and at least one of
$u^*$ or $v^*$ must be below $u$ or $v$, respectively. Now, if
Case~1 occurs when comparing $u$ to $v$, we can exclude the
possibility that $u^*$ is in $u.r$. Thus, $u^*$ might be in
$u.\ell$, or in the left subtree of a node in \texttt{uStack};
see Figure~\ref{fig:case1}.
To make progress, we now compare $u'$, the top of \texttt{uStack},
with $v$. Again, one of the three cases occurs. In Case~1,
we can deduce that going to $u'.r$ was mistaken, and we move
$u$ to $u'.\ell$, while $v$ does not move. In the other cases,
we cannot rule out that $u^*$ is to the right of $u'$, and we
move $u$ to $u.\ell$, keeping the invariant that $u^*$ is either
below $u$ or in the left subtree of a node on \texttt{uStack}.
However, to ensure that the search progresses, we now must also
move $v$. In Case~2, we can rule out $v.\ell$, and we move
$v$ to $v.r$. In Case~3, we move $v$ to $v.\ell$. In this way,
we keep the invariant and always make progress: in each step,
we either discover a new node on the correct search
paths, or we pop one erroneous move from one of the two stacks.
Since the total length of the correct search paths is
$O(\log n)$, and since we push an element onto the stack
only when discovering a new correct node,
the total search time is $O(\log n)$; see Figure~\ref{fig:algodemo} for an example run.
The following pseudo-code gives the details
of our algorithm, including all corner cases.
\begin{lstlisting}[backgroundcolor = \color{lightgray}, mathescape=true]
oneStep($u$, $v$)
do compare($u$, $v$):
Case 3:
if $u$ is not a leaf then
uStack.push($u$); $u \leftarrow u.r$
if $v$ is not a leaf then
vStack.push($v$); $v \leftarrow v.\ell$
if $u$ and $v$ are leaves then
return $u = u^*$ and $v = v^*$
Case 1:
if uStack is empty then
$u \leftarrow u.\ell$
else if $u$ is a leaf then
$u \leftarrow \texttt{uStack.pop().}\ell$
else
$u' \leftarrow \texttt{uStack.top()}$
do compare($u'$, $v$)
Case 1:
uStack.pop(); $u \leftarrow u'.\ell$
Case 2:
$u \leftarrow u.\ell$
if $v$ is not a leaf then
          $v \leftarrow v.r$
Case 3:
$u \leftarrow u.\ell$
if $v$ is not a leaf then
vStack.push($v$); $v \leftarrow v.\ell$
Case 2:
symmetric
\end{lstlisting}
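The primitive \texttt{compare($u$, $v$)} used above distinguishes the three cases from the beginning of this subsection. A sketch, assuming a constant-time predicate \texttt{strictly\_below(p, env)} that tests a point against the local envelope $\mathcal{L}(\cdot)$ stored at a node:
\begin{lstlisting}[backgroundcolor = \color{lightgray}, language=Python]
def compare(u, v):
    """Three-way case analysis for the tentative binary search."""
    if not strictly_below(u.p, v.env):   # u.p on or above L(v): Case 1
        return 1
    if not strictly_below(v.p, u.env):   # v.p on or above L(u): Case 2
        return 2
    return 3                             # both points below the other envelope
\end{lstlisting}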
We will show that the search procedure maintains
the following invariant:
\begin{invariant}
\label{inv:intersection}
The leaves in all subtrees $u'.\ell$, for
$u' \in \texttt{uStack}$, together with the
leaves under $u$ constitute a contiguous
prefix of the leaves in $\mathcal{L}_\ell$, which
contains $u^*$.
Also, the leaves in all subtrees
$v'.r$, $v' \in \texttt{vStack}$,
together with the leaves under $v$ constitue a
contiguous suffix of the leaves of $\mathcal{L}_r$,
which contains $v^*$. Furthermore,
either $u \in \pi_\ell$ or $v \in \pi_r$ (or both).
\end{invariant}
Invariant~\ref{inv:intersection} holds at the
beginning, when both stacks are empty,
$u$ is the root of $\mathcal{L}_\ell$ and $v$ is the
root of $\mathcal{L}_r$. To show that the invariant
is maintained, we first consider the special case
when one of the two searches has already discovered
the correct leaf:
\begin{lemma}
\label{lem:leaf_inv}
Suppose that Invariant~\ref{inv:intersection} holds and
that Case~3 occurs when comparing $u$ to $v$.
If $u = u^*$, then
$v \in \pi_r$ and, if $v$ is not a leaf, $v.\ell \in \pi_r$.
Similarly, if $v = v^*$, then $u \in \pi_\ell$ and,
if $u$ is not a leaf, $u.r \in \pi_\ell$.
\end{lemma}
\begin{proof}
We consider the case $u = u^*$; the other case is symmetric.
Let $e_u$ be the segment of $\mathcal{L}_\ell$ stored in $u$.
By Case~3, $u.p$ is strictly to the left of $v.p$.
Furthermore, since $u = u^*$, the intersection point
$q$ must be on $e_u$. Thus, $q$ cannot
be to the right of $v.p$, because otherwise
$v.p$ would be a point on $\mathcal{L}_r$ that lies below $e_u$
and to the left of $q$, which is impossible.
Since $q$ is strictly to the left of $v.p$;
by Invariant~\ref{inv:intersection}, if $v$ is
an inner node, $v^*$ must be in $v.\ell$, and hence
both $v$ and $v.\ell$ lie on $\pi_r$. If $v$ is a
leaf, then $v = v^*$.
\end{proof}
We can now show that the invariant is maintained.
\begin{lemma}
The procedure \texttt{oneStep} either correctly
reports that $u^*$ and $v^*$ have been found, or it maintains Invariant~\ref{inv:intersection}. In the latter case, it either
pops an element from one of the two stacks, or it
discovers a new node on $\pi_\ell$ or $\pi_r$.
\end{lemma}
\begin{proof}
First, suppose Case~3 occurs. The invariant
that \texttt{uStack} and $u$ cover a prefix of
$\mathcal{L}_\ell$ and that \texttt{vStack} and $v$
cover a suffix of $\mathcal{L}_r$ is maintained.
Furthermore, if both $u$ and $v$ are inner nodes,
Case~3 ensures that $u^*$ is in $u.r$ or
to the right of $u$, or that $v^*$ is in $v.\ell$ or to the left of $v$. Suppose the former case
holds. Then, Invariant~\ref{inv:intersection}
implies that $u^*$ must be in $u.r$, and
hence $u$ and $u.r$ lie on $\pi_\ell$.
Similarly, in the second case,
Invariant~\ref{inv:intersection} gives that
$v$ and $v.\ell$ lie in $\pi_r$.
Thus,
Invariant~\ref{inv:intersection} is maintained
and we discover a new node on $\pi_\ell$ or
on $\pi_r$.
Next, assume $u$ is a leaf and $v$ is an inner node.
If $u \neq u^*$, then as above,
Invariant~\ref{inv:intersection} and Case~3 imply
that $v \in \pi_r$ and $v.\ell \in \pi_r$,
and the lemma holds.
\begin{figure}
\centering
\begin{subfigure}{0.801\textwidth}
\includegraphics[width=\textwidth]{intersection_point_demo_2.pdf}
\caption{Demonstration of two sets of pseudo-lines and their lower envelopes: (i) the blue and green pseudo-lines, (ii) the red and orange pseudo-lines. The blue and red dots represent the intersection points on the lower envelopes.}
\label{f:intersection_point_demo2}
\end{subfigure}
\begin{subfigure}{0.8\textwidth}
\includegraphics[width=\textwidth]{intersection_point_demo.pdf}
\caption{The top figure shows the lower envelopes of (a). The bottom figure shows the trees which maintain the lower envelopes. $u(i)$ and $v(i)$ show the positions of the pointers $u$ and $v$ at step $i$ of the search procedure.}
\label{f:intersection_point_demo}
\end{subfigure}
\caption{Example of finding the intersection point of two lower envelopes:}
\begin{tabular}{|c|cc|cc|c|}
\hline
Step & $u$ & $v$ & uStack & vStack & Procedure case \\\hline
1 & 4 & 4 & $\emptyset$ & $\emptyset$ & Case~3 \\
2 & 6 & 2 & 4 & 4 & Case~2 $\rightarrow$ Case~2 \\
3 & 6 & 6 & 4 & $\emptyset$ & Case~3 \\
4 & 7 & 5 & 4, 6 & 6 & Case~1 $\rightarrow$ Case~3 \\
5 & 7* & 5* & 4, 6 & 6, 5 & Case~3 $\rightarrow$ End\\\hline
\end{tabular}
\label{fig:algodemo}
\end{figure}
If $u = u^*$, the lemma follows from Lemma~\ref{lem:leaf_inv}.
The case that $u$ is an inner node and $v$ a
leaf
is symmetric. If both $u$ and $v$ are leaves, Lemma~\ref{lem:leaf_inv}
implies that \texttt{oneStep} correctly reports $u^*$ and
$v^*$.
Second, suppose Case~1 occurs. Then,
$u^*$ cannot be in $u.r$, if $u$ is an
inner node, or $u^*$ must be a leaf strictly
to the left of $u$, if $u$ is
a leaf.
Now, if \texttt{uStack}
is empty, Invariant~\ref{inv:intersection}
and Case~1 imply that $u$ cannot be a leaf
(because $u^*$ must be in the subtree of $u$)
and that $u.\ell$ is a new node on $\pi_\ell$.
Thus, the lemma holds in this case.
Next, if $u$ is a leaf,
Invariant~\ref{inv:intersection} and
Case~1 imply that $v \in \pi_r$. Thus, we pop
\texttt{uStack} and maintain the invariant;
the lemma holds.
Now, assume that \texttt{uStack} is not
empty and that $u$ is not a leaf.
Let $u'$ be the top of $\texttt{uStack}$.
First, if the comparison between $u'$ and $v$ results
in Case~1, then $u^*$ cannot be in
$u'.r$, and in particular, $u \not\in \pi_\ell$.
Invariant~\ref{inv:intersection} shows
that $v \in \pi_r$,
and we pop an element from \texttt{uStack},
so the lemma holds.
Second, if the comparison between $u'$ and $v$
results in Case~2, then $v^*$ cannot
be in $v.\ell$, if $v$ is an inner node.
Also, if $u \in \pi_\ell$, then necessarily also
$u.\ell \in \pi_\ell$, since Case~1
occurred between $u$ and $v$. If $v \in \pi_r$,
since Case~2 occurred between $u'$ and $v$, the node
$v$ cannot
be a leaf and $v.r \in \pi_r$. Thus, in either case
the invariant is maintained and we discover a new
node on $\pi_\ell$ or on $\pi_r$.
Third, assume the comparison between
$u'$ and $v$ results in Case~3. If
$u \in \pi_\ell$, then also $u.\ell \in \pi_\ell$,
because $u.r \in \pi_\ell$ was excluded by
the comparison between $u$ and $v$. In this case,
the lemma holds. If $u \not\in \pi_\ell$,
then also $u'.r \not \in \pi_\ell$, so the fact
that Case~3 occurred between $u'$ and $v$ implies that
$v.\ell$ must be on $\pi_r$ (in this case,
$v$ cannot be a leaf, since otherwise we would
have $v^* = v$ and Lemma~\ref{lem:leaf_inv} would
give $u'.r \in \pi_\ell$, which we have already ruled out).
The argument for Case~2 is symmetric.
\end{proof}
\begin{lemma}
The intersection point $q$ between $\mathcal{L}_\ell$ and
$\mathcal{L}_r$ can be found in $O(\log n)$ time.
\end{lemma}
\begin{proof}
In each step, we either discover a new node of
$\pi_\ell$ or of $\pi_r$, or we pop an element
from \texttt{uStack} or \texttt{vStack}.
Elements are pushed only when
at least one new node on $\pi_\ell$ or
$\pi_r$ is discovered.
As $\pi_\ell$ and $\pi_r$ are each a path from the root to a leaf in a balanced binary tree,
we need $O(\log n)$
steps.
\end{proof}
\section{Maintaining the union of unit discs under insertions}
\label{sec:union_maintain}
To maintain the union of unit discs under insertions, we maintain dynamic data structures for representing the boundary of the union, for reporting the arcs of the boundary that intersect with the next disc to be inserted, and for updating the boundary representation due to the insertion of the new disc. This section is dedicated to these data structures.
\subparagraph*{Overview of the algorithm.}
We denote by $D(x)$ the unit disc centered at $x$.
Let $U$ be the union of $n$ unit discs and let $D(x)$ be the new unit disc, which we wish to insert. In order to report the arcs of $\partial U$ that intersect $D(x)$, we overlay the plane with an implicit grid, where only cells that intersect with $U$ are stored, and where the size of the diagonal of a grid cell is~$1$. The arcs of $\partial U$ are divided into the cells of the grid---each arc of $\partial U$ is associated with the cell that contains it. Note that if an arc belongs to more than one cell then we split it into (sub)arcs at the boundaries of the cells that it crosses (see Figure~\ref{f:grid_structure}). We divide the arcs of a given cell into four sets:
\textit{top}, \textit{right}, \textit{bottom} and \textit{left}, which we denote by $E_t$, $E_r$, $E_b$ and $E_l$ respectively (see Section~\ref{subsec:prelim}). The algorithm consists of the following main steps: (1) Find the cells that $D(x)$ intersects. (2) For each such cell find the arcs of each one of the sets $E_t$, $E_r$, $E_b$ and $E_l$ that $D(x)$ intersects. Cells of the union that contain no boundary arcs are treated in a special way. (3) Update $\partial U$ using the arcs we found in the previous step and with $\partial D(x)$.
Step~1 of the algorithm is implemented using a balanced binary tree $\Omega$ on the \emph{active cells}, namely cells that have non-empty intersection with the current union $U$.
The key of each active cell is the pair of coordinates of its bottom left corner. The active cells are stored at the leaves of the tree in ascending lexicographic order. Finding the cells intersected by a new disc, inserting or deleting a cell, take $O(\log n)$ time each. For details, see, e.g.,~\cite{DBLP:journals/comgeo/HalperinO98}. As we will see below, the structure $\Omega$ will also be used to decide whether a new disc is fully contained in the current union or lies completely outside the current union (Section~\ref{subsec:union_area}).
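Since the diagonal of a grid cell is $1$, the cell side is $1/\sqrt{2}$, and a unit disc can meet only a constant number of cells. A sketch of the corresponding key computation (slightly over-approximating by the disc's bounding box; each candidate key is then looked up in $\Omega$):
\begin{lstlisting}[backgroundcolor = \color{lightgray}, language=Python]
import math

S = 1.0 / math.sqrt(2.0)   # cell side length, so that the diagonal is 1

def cell_of(p):
    """Grid key (index of the bottom-left corner) of the cell containing p."""
    return (math.floor(p[0] / S), math.floor(p[1] / S))

def candidate_cells(x):
    """Keys of every cell the unit disc D(x) can possibly intersect."""
    i0, j0 = cell_of((x[0] - 1.0, x[1] - 1.0))
    i1, j1 = cell_of((x[0] + 1.0, x[1] + 1.0))
    return [(i, j) for i in range(i0, i1 + 1) for j in range(j0, j1 + 1)]
\end{lstlisting}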
Most of this section is dedicated to a description of Steps~2 and~3 of the algorithm for the set $E_t$. The sets $E_r$, $E_b$, and $E_l$ can be handled in a similar manner. The basic property that we use is that $D(x)$ intersects an arc $e$ if and only if $x$ belongs to $e\oplus D_1$, namely the Minkowski sum of $e$ with a unit disc.
We divide the boundaries of the Minkowski sums of $E_t$ into upper and lower curves at the $x$-extremal points; in what follows we will refer to them as upper and lower curves, and denote their respective sets by $\Gamma^+$ and $\Gamma^-$.
(To avoid confusion we will refer to portions of the boundary of the union as \emph{arcs} and to
portions of the boundary of the Minkowski sums as \emph{curves}.)
The disc $D(x)$ intersects the arc $e\in E_t$ if and only if $x$ lies above the lower curve induced by $e$ and below the upper curve induced by $e$.
We will store the curves of $\Gamma^+$ in a dynamic structure $\Delta^+$ and the curves of $\Gamma^-$ in a dynamic structure $\Delta^-$ (both described in Section~\ref{subsec:data_strcutures}).
Another property that we use is the following (see Lemma~\ref{lem:upper_above_lower} below): Let $\ell$ be a vertical line that passes through $x$, the center of the new disc. Then the intersection points of curves in $\Gamma^+$ with $\ell$ are all above the intersection points of curves of $\Gamma^-$ with $\ell$.
Assume for the sake of exposition that we are given
the point $\xi$ of intersection between $\ell$ and
the upper envelope of the curves in $\Gamma^-$.
If the center $x$ of our new disc is above $\xi$ then,
since $x$ is above all the lower curves that cross
$\ell$ we only need to search the structure
$\Delta^+$ for the upper curves that lie above
$x$---these will determine the arcs of $E_t$ that
are intersected by $D(x)$. If the point $x$ coincides
with or lies below $\xi$ then we only need to search
the structure $\Delta^-$ for the lower curves that lie
below $x$---now these will determine the arcs of $E_t$
that are intersected by $D(x)$.
However, we cannot easily obtain the point $\xi$, and hence querying the data structures is a little more involved: We use $\Delta^+$ to iterate over the upper curves that lie above $x$. For every upper curve we check in $O(1)$ time whether its corresponding arc (of $E_t$) intersects with $D(x)$. If it intersects then we add this arc to the output list
and continue to the next upper curve. If all the upper curves above $x$ turn out to be induced by arcs intersecting $D(x)$ we output this list of arcs and stop.
If all the reported arcs from the query of $\Delta^+$ indeed intersect $D(x)$, then we are guaranteed that $x$ is above $\xi$ and this is the complete answer. Due to Lemma~\ref{lem:upper_above_lower}, if we detect that the arc induced by a curve reported by $\Delta^+$ to lie above $x$ does not intersect $D(x)$, then we are guaranteed that $x$ is on or below $\xi$ and we will obtain the full reply by querying $\Delta^-$.
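The following Python sketch (our illustration; \texttt{curves\_above}, \texttt{curves\_below} and \texttt{arc\_hits\_disc} are assumed interfaces of $\Delta^+$, $\Delta^-$ and of a constant-time arc/disc intersection test) summarizes this two-phase query:
\begin{verbatim}
def report_intersected_arcs(x, delta_plus, delta_minus, arc_hits_disc):
    """Try the upper curves first; fall back to the lower curves as
    soon as one candidate fails the O(1) intersection test."""
    reported = []
    for curve in delta_plus.curves_above(x):    # gradual iteration
        if arc_hits_disc(curve.arc, x):
            reported.append(curve.arc)
        else:
            # x lies on or below xi; by the upper-above-lower lemma the
            # arcs whose lower curve is below x are exactly the answer.
            return [c.arc for c in delta_minus.curves_below(x)]
    return reported                             # x was above xi
\end{verbatim}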
We review the geometric foundations needed by our algorithms and data structures in Section~\ref{subsec:prelim}, then describe the data structures in Section~\ref{subsec:data_strcutures}. Finally, in Section~\ref{subsec:union_area} we explain how we solve our motivating problem---dynamically reporting the area of the union.
\begin{figure}[ht]
\centering
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{grid_structure.pdf}
\captionsetup{justification=centering}
\caption{}
\label{f:grid_structure}
\end{subfigure}
\begin{subfigure}{0.45\textwidth}
\includegraphics[width=\textwidth]{Section_3_Overview.pdf}
\captionsetup{justification=centering}
\caption{}
\label{f:Section_3_Overview}
\end{subfigure}
\caption{(a) The grid laid over the union of unit discs. The active cells are highlighted in pale red. (b) Illustration of the search procedure. There are four pairs of upper and lower curves (each pair has a distinct color). The point $x$ is a query point and $\ell$ is a vertical line that passes through $x$. The points $p_1,p_2,p_3$ and $p_4$ are the intersection points of the upper curves with $\ell$, and $q_1,q_2,q_3$ and $q_4$ are the intersection points of the lower curves with $\ell$. The set $\Gamma^+$ contains four upper curves while $\Gamma^-$ contains three lower curves (red, blue and orange). The search procedure proceeds as follows: we iterate over the curves of $\Gamma^+$ one by one as long as the corresponding arc (of $E_t$) intersects with $D(x)$. If it intersects then we report that arc and continue to the next upper curve. Otherwise we stop the iteration over $\Gamma^+$ and iterate over the curves of $\Gamma^-$. In (b), no matter what the order of the iteration over the curves of $\Gamma^+$ is, we always stop the search when we reach (and test) the green curve. Therefore we will examine at most three curves.}
\end{figure}
\subsection{Preliminaries}
\label{subsec:prelim}
Let $B$ be an axis-parallel square, which represents one grid cell with unit-length diagonal, and let $\ell_1$ and $\ell_2$ be lines that support the diagonals of $B$. These lines divide the plane into top, right, bottom and left quadrants, which we denote by $Q_t$, $Q_r$, $Q_b$ and $Q_l$, respectively.
Let $U$ be the union of $n$ unit discs. We divide the arcs of $\partial U$ that are contained in $B$ into four sets according to the quadrant that contains the center of the disc whose boundary contains the arc. If such a center lies on one of the lines, the arc is assigned either to the top or to the bottom set. Denote these four sets of arcs by $E_t$, $E_r$, $E_b$ and $E_l$. The power of this subdivision into quadrants is that the projections of the arcs in any one set onto a major axis (the $x$-axis for $E_t$ or $E_b$, and the $y$-axis for $E_l$ or $E_r$) are pairwise interior disjoint.
For example, $E_t$ contains the arcs whose centers are located in $Q_t$, and the projections of the arcs in $E_t$ onto the $x$-axis are pairwise interior disjoint, as we show below in Lemma~\ref{lemma:disoint_x_projection}.
\begin{definition}
For two bounded $x$-monotone arcs $e_i$ and $e_j$ we write $e_i \leq_x e_j$ if and only if the right endpoint of $e_i$ is to the left of or coincides with the left endpoint of $e_j$.
\label{d:E_t}
\end{definition}
\begin{restatable}{restatable_lemma}{openLowerSemiCircle}
\label{l:open_lower_semi_circle}
Each arc in $E_t$ is a portion of a lower semicircle.
\end{restatable}
\begin{proof}
Let $e_i$ be an arc in $E_t$ centered at the point $c_i$. Since the length of the diagonal of $B$ is $1$, $c_i$ must lie above the line supporting the top edge of $B$ and therefore only (portions of) the open lower semi-circle of $\partial D(c_i)$ can intersect $B$.
\end{proof}
\begin{restatable}{restatable_lemma}{disointXProjection}
\label{lemma:disoint_x_projection}
The $x$-projections of the (relative interiors of) arcs in $E_t$ are pairwise disjoint.
\end{restatable}
\begin{proof}
Let $e_i$ and $e_j$ be two arcs of $E_t$. Assume toward a contradiction that the $x$-projections of $e_i$ and $e_j$ are not interior disjoint. Then there is a vertical line $\ell$ that intersects both arcs. Assume, without loss of generality, that $p:=e_i\cap\ell$ is below $q:=e_j\cap\ell$. The point $p$ lies on a lower semicircle (Lemma~\ref{l:open_lower_semi_circle}) whose center, $c_i$, lies above $B$. This implies that the vertical segment that connects $p$ to the top edge of $B$ is fully contained in $D(c_i)$. But then $q$ cannot be on $\partial U$.
\end{proof}
Relying on Lemma~\ref{lemma:disoint_x_projection}, henceforth we assume that the arcs in $E_t$ are ordered from left to right: $e_1,\ldots,e_m$.
We wish to find which arcs of the set $E_t$ intersect with the new unit disc $D(x)$ to be inserted. For this purpose, we compute the Minkowski sum of each arc $e_i$ of $E_t$ with a unit disc centered at the origin. Then, we divide the boundary of each Minkowski sum into upper and lower curves at the $x$-extremal points: denote the top curve by $\gamma_i^+$ and the bottom curve by $\gamma_i^-$. We denote the set of the upper curves, $\{\gamma_i^+|e_i \in E_t\}$, by $\Gamma^+$ and the set of the lower curves, $\{\gamma_i^-|e_i \in E_t\}$, by $\Gamma^-$.
In the rest of this section we prove some useful properties regarding the curves in $\Gamma^+$ and $\Gamma^-$:
\begin{description}
\item [P1] Every lower curve in $\Gamma^-$ can appear at most once on the lower envelope of the curves in $\Gamma^-$. Furthermore, if $\gamma_i^-$ and $\gamma_j^-$ appear on the lower envelope then $\gamma_i^-$ appears to the left of $\gamma_j^-$ if and only if $e_i<_x e_j$.
\item[P2] Let $e_i$, $e_{i+1}$ and $e_{i+2}$ be an ordered sequence of arcs in $E_t$ and $q$ be a point. If $q$ lies below $\gamma_i^+$ and $\gamma^+_{i+2}$ then $q$ lies also below $\gamma^+_{i+1}$.
\item[P3] For every vertical line $\ell$, the intersection points of the lower curves with $\ell$ are below the intersection points of the upper curves with $\ell$.
\end{description}
In order to prove Property~\textbf{P1}, we first need to show that every pair of lower curves intersects exactly once.
\begin{restatable}{restatable_lemma}{lowerCurvesIntersectOnes}
Let $e_i$ and $e_j$ be arcs of $E_t$. Then $\gamma_i^-$ and $\gamma^-_j$ intersect in exactly one point.
\label{l:lower_curves_intersect_ones}
\end{restatable}
\begin{proof}
Assume that $e_i \leq_x e_j$ and assume toward a contradiction that $\gamma_i^-$ and $\gamma^-_j$ intersect in more than one point. This implies that there are two points, $p^-_i$ and $p^-_j$, on the lower envelope of $\gamma_i^-$ and $\gamma^-_j$, where $p^-_i \in \gamma_i^-$ and $p^-_j \in \gamma^-_j$, such that $p^-_j <_x p^-_i$. The point $p^-_i$ lies on the lower semicircle of $\partial D(p_i)$, where $p_i \in e_i$. This means that $p_i$ lies on the upper semicircle $\sigma^+_i$ of $\partial D(p^-_i)$. The same argument implies that there is a point $p_j$ on the upper semicircle $\sigma^+_j$ of $\partial D(p^-_j)$. The upper semicircles $\sigma^+_i$ and $\sigma^+_j$ intersect exactly once and, since $p^-_j <_x p^-_i$, $\sigma^+_i$ appears to the right of $\sigma^+_j$ on the upper envelope of $\sigma^+_i$ and $\sigma^+_j$. The point $p_i$ must be on that upper envelope, since otherwise $p^-_j$ would be inside $D(p_i)$, contradicting the fact that $p^-_j$ belongs to the lower envelope of $\gamma_i^-$ and $\gamma^-_j$. A similar argument applies to $p_j$. This implies that $p_i >_x p_j$, which contradicts the assumption that $e_i \leq_x e_j$. Finally, notice that the curves $\gamma_i^-$ and $\gamma^-_j$ intersect at least once since the distance between any two points inside $B$ is at most one (see Figure~\ref{f:lower_curves_intersect_ones_and_upper_lower_curves_not_intersect}).
\end{proof}
For two $x$-monotone curves $\ell_1,\ell_2$ that intersect exactly once, we say that $\ell_1<\ell_2$ when $\ell_1$ appears on their joint lower envelope immediately to the left of their intersection point and $\ell_2<\ell_1$ otherwise. The proof of Lemma~\ref{l:lower_curves_intersect_ones} also implies,
\begin{corollary}
For any pair of curves $\gamma_i^-,\gamma^-_j\in\Gamma^-$, $\gamma_i^- < \gamma^-_j$ if and only if $e_i <_x e_j$.
\label{c:pseudo-lines}
\end{corollary}
We now turn to discuss the upper curves. To prove Property~P2 (Lemma~\ref{l:upper_curves_order}), we first consider the structure of the upper envelope of the upper curves.
\begin{observation}
Let $p$ and $q$ be the endpoints of the arc $e_i$ in $E_t$. The upper curve $\gamma^+_i$ is the upper envelope of the upper boundaries (namely, semicircles) of the discs $D(p)$ and $D(q)$.
\label{o:upper_envelop}
\end{observation}
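As an illustration (a sketch of ours, assuming unit radii throughout), Observation~\ref{o:upper_envelop} gives a direct way to evaluate $\gamma^+_i$ and to test whether a point lies below it:
\begin{verbatim}
import math

def upper_curve_height(p1, p2, x):
    """Height of gamma_i^+ at abscissa x: the upper envelope of the
    upper unit semicircles centered at the arc endpoints p1 and p2.
    Returns -inf if x is outside the x-range of both semicircles."""
    best = -math.inf
    for p in (p1, p2):
        dx = x - p[0]
        if abs(dx) <= 1.0:
            best = max(best, p[1] + math.sqrt(1.0 - dx * dx))
    return best

def below_upper_curve(q, p1, p2):
    return q[1] < upper_curve_height(p1, p2, q[0])
\end{verbatim}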
\begin{restatable}{restatable_lemma}{upperCurvesOrder}
Let $e_i$, $e_{i+1}$ and $e_{i+2}$ be an ordered sequence of arcs in $E_t$ and $q$ be a point. If $q$ is below $\gamma_i^+$ and $\gamma^+_{i+2}$ then $q$ is also below $\gamma^+_{i+1}$.
\label{l:upper_curves_order}
\end{restatable}
\begin{proof}
Let $p_1$, $p_2$ and $p_3$ be points on arcs that belong to $E_t$ such that $p_1 <_x p_2 <_x p_3$. Let $\sigma^+_1$, $\sigma^+_2$ and $\sigma^+_3$ be the upper semicircles of $\partial D(p_1)$, $\partial D(p_2)$ and $\partial D(p_3)$, respectively. Let $p^+_{12}$ and $p^+_{23}$ be the intersection points of $\sigma^+_1 \cap \sigma^+_2$ and $\sigma^+_2 \cap \sigma^+_3$, respectively. Note that these intersection points exist since the distance between every pair of points in $B$ is at most one. By the assumption, $p_1 <_x p_2$, which means that $\sigma^+_1$ appears to the left of $\sigma^+_2$ on the upper envelope of $\sigma^+_1$ and $\sigma^+_2$. Let $c$ be the center of the arc $e$ of $E_t$ on which $p_2$ lies. The point $c$ is on $\sigma^+_2$ since $p_2$ belongs to a lower semicircle of radius 1. In addition, $c$ is not below $\sigma^+_1$, since otherwise $p_1 \in D(c)$, which would contradict that $p_1$ is a point on an arc in $E_t$. This means that $p^+_{12} \leq_x c$. The same argument implies that $p^+_{23} \geq_x c$ and therefore $p^+_{12} \leq_x p^+_{23}$. This in turn implies that the intersection point, $p^+_{13}$, between $\sigma^+_1$ and $\sigma^+_3$ is below or on $\sigma^+_2$, and therefore every point that lies below $\sigma^+_1$ and $\sigma^+_3$ also lies below $\sigma^+_2$. The only condition on $p_1$, $p_2$ and $p_3$ is that they be $x$-ordered; since $e_i \leq_x e_{i+1} \leq_x e_{i+2}$, the claim holds (see Figure~\ref{f:E_t_example_and_upper_curves_order}).
\end{proof}
\begin{figure}[ht]
\centering
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\textwidth]{E_u_example3.pdf}
\captionsetup{justification=centering}
\label{f:E_t_example}
\end{subfigure}
\hspace{3em}
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\textwidth]{upper_curves_order.pdf}
\captionsetup{justification=centering}
\label{f:upper_curves_order}
\end{subfigure}
\caption{(On the left) An example of $\partial U \cap B$. $e_1$, $e_2$ and $e_3$ are the arcs of $E_t$ whose centers are $c_1$, $c_2$ and $c_3$, respectively. The red, green and blue outer shapes are the boundaries of the Minkowski sums of $e_1$, $e_2$ and $e_3$ with a disc of radius $1$, respectively.
$\gamma^+_1$ and $\gamma^-_1$ are the upper and lower red curves whose endpoints are $q_1$ and $q_2$, respectively. (On the right) Illustration of the proof of Lemma~\ref{l:upper_curves_order}.}
\label{f:E_t_example_and_upper_curves_order}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\textwidth]{lower_curves_intersect_ones.pdf}
\captionsetup{justification=centering}
\label{f:lower_curves_intersect_ones}
\end{subfigure}
\hspace{3em}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{upper_lower_curves_not_intersect.pdf}
\captionsetup{justification=centering}
\label{f:upper_lower_curves_not_intersect}
\end{subfigure}
\caption{(On the left) Illustration of the proof that $\gamma_i^-$ and $\gamma^-_j$ intersect exactly once (see Lemma~\ref{l:lower_curves_intersect_ones}). This lemma implies that the set $\Gamma^-:=\{\gamma_i^-|e_i \in E_t\}$ has Property~P1. (On the right) Illustration of the proof that $\gamma_i^-$ and $\gamma^+_j$ do not intersect (see Lemma~\ref{lem:upper_above_lower}).}
\label{f:lower_curves_intersect_ones_and_upper_lower_curves_not_intersect}
\end{figure}
For $p$ an endpoint of $e_i\in E_t$, we call the upper semi-circle of the disc $D(p)$ \emph{the upper curve} of $p$. We denote the upper envelope of the curves in $\Gamma^+$ by $\mathcal{U}(\Gamma^+)$. The following corollary stems from the proof of Lemma~\ref{l:upper_curves_order}.
\begin{corollary}
(i) The upper curve of each endpoint of every arc of $E_t$ appears on $\mathcal{U}(\Gamma^+)$. (ii) The $x$-order of the curves on $\mathcal{U}(\Gamma^+)$ corresponds to the $x$-order of the endpoints of the arcs of $E_t$. Note that some of the upper curves may appear on $\mathcal{U}(\Gamma^+)$ as a single point, namely, they coincide with one of the breakpoints of $\mathcal{U}(\Gamma^+)$.
\end{corollary}
Next, we prove that for any pair of arcs $e_i,e_j\in E_t$, $\gamma_i^+$ and $\gamma^-_j$ are disjoint. Furthermore, we show that $\gamma_i^+$ is above $\gamma^-_j$, and by that prove Property~P3.
\begin{restatable}{restatable_lemma}{upperAboveLower}
\label{lem:upper_above_lower}
Let $e_i$ and $e_j$ be two distinct arcs in $E_t$ and let $\ell$ be a vertical line. If $\ell$ intersects with $\gamma_i^+$ and $\gamma^-_j$ at $p$ and $q$, respectively, then $p >_y q$.
\end{restatable}
\begin{proof}
We start the proof by showing that $\gamma_i^-$ and $\gamma^+_j$ do not intersect. Assume toward a contradiction that $\gamma_i^-$ and $\gamma^+_j$ intersect at a point $p$. This implies that $p$ is one of the intersection points of $\partial D(p_i)$ and $\partial D(p_j)$, where $p_i \in e_i$ and $p_j \in e_j$. The point $p$ is \emph{below} $p_i$ (namely, $p$ has a smaller $y$-coordinate than $p_i$) and it is above $p_j$, since it belongs to $\gamma_i^-$ and $\gamma^+_j$. The same argument applies to the second intersection point, $q$, between $\partial D(p_i)$ and $\partial D(p_j)$. Assume that $p <_x q$, which implies that $p$ and $q$ belong to $Q_l$ and $Q_r$, respectively. Let $c_j$ be the center point of $e_j$. The point $c_j$ lies on the upper semicircle of $D(p_j)$ and it belongs to $Q_t$, which means that $c_j \in D(p_i)$. This means that $p_i \in D(c_j)$, which contradicts the fact that $e_i$ belongs to $E_t$ (see Figure~\ref{f:lower_curves_intersect_ones_and_upper_lower_curves_not_intersect}).
The above property ($\gamma_i^+$ and $\gamma^-_j$ do not intersect) implies that in the common $x$-interval of $\gamma_i^+$ and $\gamma^-_j$, $\gamma_i^+$ is strictly above or below $\gamma^-_j$. The arc $e_i$ is above $\gamma^-_j$ and therefore $\gamma_i^+$ is above $\gamma^-_j$.
\end{proof}
\subsection{Data structures}
\label{subsec:data_strcutures}
In this section we describe two data structures. The data structure $\Delta^+$ (resp.\ $\Delta^-$), dynamically maintains the set $\Gamma^+$ of the upper curves (resp.\ $\Gamma^-$ of lower curves). The purpose of these structures is to efficiently answer the following queries: given a point $x$, report on the upper (resp.\ lower) curves which are above (resp.\ below) $x$. For the structure $\Delta^+$ it is required that we can get the answer gradually, one curve after the other, since we need to test each curve for being relevant (in addition to being above $x$), and stop as soon as we detect the first irrelevant curve.
\subsubsection{Dynamically maintaining the lower curves}
\label{subsec:lower}
For maintaining the lower curves $\Gamma^-$ induced by the arcs in $E_t$, we implement $\Delta^-$ using the data structure described in Section~\ref{sec:dynamic_lower_env}.
Recall that the data structure of Section~\ref{sec:dynamic_lower_env} dynamically maintains a set of curves fulfilling property~P1 and supports the following \textbf{query}: given a point $x$ report the curves in $\Gamma^-$ that are below $x$.
\noindent
\textbf{Update.} After we insert a new unit disc we may have to delete and insert many lower curves. If a lower curve $\gamma_i^-$ is split into subcurves, then we delete $\gamma_i^-$ and create two new subcurves instead. In order for Property~P1 to hold at all times, we first delete the old lower curves from $\Delta^-$ and then insert the new ones.
\subsubsection{Dynamically maintaining the upper curves}
\label{subsubsec:upper}
\subparagraph*{Description.} Let $p_1,p_2,\dots,p_r$ be the endpoints of the arcs of $E_t$ sorted in ascending $x$-order. Recall that $\mathcal{U}(\Gamma^+)$ denotes the upper envelope of $\Gamma^+$. Let $s_1,s_2,\dots,s_r$ be the arcs of $\mathcal{U}(\Gamma^+)$ ordered from left to right. Note that each endpoint of $E_t$ corresponds to an arc in $\mathcal{U}(\Gamma^+)$, i.e., $p_i$ corresponds to the curve $s_i$.
The data structure $\Delta^+$ is a balanced binary search tree. We store the points $p_i$ in the leaves of the tree in their left-to-right order. We also store in each leaf pointers \texttt{rn} and \texttt{ln} to its right and left neighboring leaves, respectively, if they exist. Each internal node stores a pointer \texttt{lml} to the leftmost leaf of its right subtree.
To keep the structure simple, if two arcs of $E_t$ meet at a single point, we keep only one of the endpoints incident to this point in the list $\{p_i\}$. However, we mark in the leaf of $p_i$ which are the two arcs incident to it. Below, when we traverse the leaves of the tree and test the respective arcs of $E_t$ for intersection with the new disc, in some nodes we may need to test two arcs.
\subparagraph*{Query.} Let $q$ be a query point. By following a path from the root, we first find the leaf $v$ such that the vertical line through $q$ intersects the arc $s_v$. The search down the tree is carried out as follows. Suppose we reached an internal node $u$. We use the pointer \texttt{lml}($u$) to obtain the leaf $w$, and use \texttt{ln}($w$) to find the point immediately to its left in the sequence $\{p_i\}$. These two points determine the breakpoint of $\mathcal{U}(\Gamma^+)$ that separates between the left and right portions of the envelope, which are represented by the subtrees rooted at the left and right children of $u$.
Recall that the structure $\Delta^+$ plays the extra role of deciding whether the center $x$ of the new disc lies above the point $\xi$ or not (see the overview of the algorithm at the beginning of Section~\ref{sec:union_maintain}). Therefore the query process is somewhat more involved than if we used the structure only to determine which curves of $\Gamma^+$ pass above $x$.
Once we find the point $p_i$ whose arc $s_i$ of the envelope intersects the vertical line through the query point $q$, we will be traversing leaves of $\Delta^+$ starting at $v$ going both rightward and leftward. At each leaf $u$ we test whether $q$ lies below the curve $s_j$ stored at $u$ and if the answer is yes, we check whether $D(x)$ intersects the relevant arc of $E_t$.
If the answer to both predicates is true then we continue the search in the same direction. If, while we search rightwards, the first predicate is false, then we go leftwards starting from $v$. If the first predicate is false while we search leftwards, then we stop the search and report the arcs that we found. If the first predicate is true and the second predicate is false, then we continue with $\Delta^-$.
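A schematic rendering of this traversal (our sketch; \texttt{rn}/\texttt{ln} are the neighbour pointers stored at the leaves, both predicates are assumed constant-time tests, and the leftward pass of the sketch starts at the left neighbour of $v$ so that $v$ is not tested twice):
\begin{verbatim}
def query_delta_plus(q, v, q_below_curve, arc_hits_disc):
    """Walk the leaves of Delta^+ from the start leaf v, rightwards
    and then leftwards, applying the two predicates."""
    found = []
    for start, step in ((v, "rn"), (v.ln, "ln")):
        u = start
        while u is not None and q_below_curve(q, u):  # first predicate
            if not arc_hits_disc(u.arc, q):           # second predicate
                return found, True    # incomplete: continue with Delta^-
            found.append(u.arc)
            u = getattr(u, step)
    return found, False               # complete answer from Delta^+
\end{verbatim}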
\subparagraph*{Update.} After we insert a new disc, many arcs may be deleted from $E_t$ and many new arcs may be inserted into $E_t$. We simply remove the endpoints of the deleted arcs and insert the endpoints of the new arcs into $\Delta^+$.
The correctness of the query procedure follows from Lemma~\ref{l:upper_curves_order}. The performance of the structure is summarized in the following lemma whose proof is straightforward.
\begin{lemma}
The query time of the data structure is $O(\log n + k)$, where $k$ is the number of reported arcs. The update requires $O(\log n)$ time per operation.
\end{lemma}
When querying the data structures $\Delta^+$ and $\Delta^-$ we obtain the set $I$ of arcs of the existing union-boundary that need to be deleted or partially cut since they are covered by the new disc $D(x)$ to be inserted. However, we also need to update the structures with the arcs that the boundary of the new disc contributes to the union boundary.
To find which portions of $\partial D(x)$ appear on the boundary of the union $U\cup D(x)$, we construct the arrangement $\mathcal{A}(I \cup \partial D(x))$ and look for the faces of this arrangement that abut $\partial D(x)$ and are not contained in the union $U$. One can show that the arcs of $\partial U$ appear concave on a face $f$ of this type, meaning that any point inside $f$ lies outside the discs whose boundaries contain these arcs. Denote the size of $I$ by $k$.
Assume first that $k\ge 1$. We can construct the arrangement in $O(k\log k)$ time (recall that the arcs in $I \cup \partial D(x)$ are pairwise interior disjoint).
Finding the arcs of $\partial D(x)$ that should be inserted takes another $O(k)$ time.
If $k=0$, there are two cases based on whether $D(x) \cap U$ is (i) $D(x)$ or (ii) the empty set.
To distinguish between the cases we need to either (i) find a point that belongs to both $D(x)$ and $U$, or (ii) find a point that belongs to $D(x)$ but not to $U$. Recall that in order to find $I$ we overlay the plane with a grid of cells of unit-length diagonal each. This implies that at least one of the cells, denoted by $\omega$, is fully contained in $D(x)$. If $\omega$ is an \emph{active cell}, i.e., $\omega \cap U \neq \emptyset$, then $\omega$ is fully contained in $U$ (recall that $I$ is empty, so no arc of $\partial U$ crosses $\omega$) and therefore $D(x) \cap U=D(x)$; otherwise $D(x) \cap U=\emptyset$. To check whether $\omega$ is active, we search for it in the structure $\Omega$.
In case (i) we do nothing further, and in case (ii) we make all the grid cells covered by $D(x)$ active, and we update the data structures of each grid cell crossed by $\partial D(x)$ by the relevant portions of $\partial D(x)$.
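In code, the containment test amounts to a single lookup (our sketch, reusing \texttt{cell\_of} from the grid sketch above); correctness rests on the fact that the cell containing $x$ has diagonal $1$ and hence lies inside the closed disc $D(x)$:
\begin{verbatim}
def disc_status_when_k_is_zero(x, active):
    """Case k = 0: no boundary arc meets D(x), so the cell omega
    containing x is crossed by no arc of the union boundary; it is
    active iff it meets U, which decides containment."""
    omega = cell_of(x)
    return "contained in U" if omega in active else "disjoint from U"
\end{verbatim}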
In conclusion of this section,
\begin{theorem}
\label{thm:Near_Optimal_Overhead}
We can dynamically maintain the arcs on the boundary of the union of unit discs under insertion in $O(k\log^2 n)$ time and $O(n)$ space, where $n$ is the number of inserted discs and $k$ is the total number of changes on the boundary of the union.
\end{theorem}
\subsection{Maintaining the area of the union}
\label{subsec:union_area}
We are now ready to solve our motivating problem, namely dynamically reporting the \emph{area} of the union as we insert discs. At a high level our algorithm proceeds as follows:
\begin{enumerate}
\item Find the set $I$ of the arcs on the boundary of the union $U$ that intersect with the new disc $D(x)$ to be inserted.
\item Compute the arrangement $\mathcal{A}(I \cup \partial D(x))$.
\item Calculate the extra area (over the area of the union before inserting $D(x)$) that $D(x)$ covers, using $\mathcal{A}(I \cup \partial D(x))$.
\end{enumerate}
In order to find $I$ we make use of the data structures described above and summarized in Theorem~\ref{thm:Near_Optimal_Overhead}. Let $k$ denote the number of arcs in $I$ and assume that $k\ge 1$. We use a sweep-line algorithm to compute the arrangement $\mathcal{A}(I \cup \partial D(x))$ in time $O(k\log k)$. To calculate the extra area that $D(x)$ covers, we go over the faces of the arrangement and sum up the area of the faces that are contained in $D(x) \setminus U$. If $k=0$ then either the disc is fully contained in the current union (see above for how to determine this), in which case we do nothing, or it is disjoint from the union before the insertion of the disc, in which case we increase the area by $\pi$.
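To make the face-area summation concrete, the area of a face bounded by counterclockwise circular arcs can be accumulated with Green's theorem; the sketch below (ours) shows the standard per-arc term, with each arc given as (center, radius, start angle, end angle):
\begin{verbatim}
import math

def arc_area_term(c, r, t1, t2):
    """Contribution of one ccw circular arc to (1/2) * the integral of
    (x dy - y dx) along the face boundary."""
    cx, cy = c
    return 0.5 * (r * r * (t2 - t1)
                  + cx * r * (math.sin(t2) - math.sin(t1))
                  - cy * r * (math.cos(t2) - math.cos(t1)))

def face_area(arcs):
    """Area of a face; e.g. the full unit circle ((0,0),1,0,2*pi)
    yields pi."""
    return sum(arc_area_term(c, r, t1, t2) for (c, r, t1, t2) in arcs)
\end{verbatim}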
In conclusion of this section,
\begin{theorem}
Given a sequence of $n$ unit discs in $\mathbb{R}^2$ to be inserted one after the other, reporting the area of the union of the discs after each insertion can be carried out in
$O(k\log^2 n)$ time and $O(n)$ space, where $k$ is the total number of structural changes to the boundary of the union incurred by the insertions.
\end{theorem}
\begin{proof}
Finding the set $I$ takes $O(k\log^2 n)$ time (Theorem~\ref{thm:Near_Optimal_Overhead}). Computing the arrangement $\mathcal{A}(I \cup \partial D(x))$ takes $O(k\log k)$ time. Going over the faces of the arrangement and calculating the area of those faces that were not in the union before the insertion of the new disc takes $O(k)$ time. Deciding the special cases, where the new disc is fully inside or fully outside the union, takes $O(\log n)$ time using the structure $\Omega$. All the data structures employed throughout the algorithm require $O(n)$ space each.
\end{proof}
\def\mathcal{C}{\mathcal{C}}
\def\mathcal{L}{\mathcal{L}}
\section{Intersection-searching of unit arcs with unit disc}
\label{sec:range-search}
In this section we address the following intersection-searching problem:
Preprocess a collection ${\cal C}$ of circular arcs of unit radius into a data structure so that for a
query unit disc $D(x)$, centered at the point $x$, the arcs in ${\cal C}$ intersecting $D(x)$ can be reported efficiently.
We assume for simplicity that every arc in ${\cal C}$ belongs to the lower semicircle of its supporting unit circle.
Let $e\in \mathcal{C}$ be a unit-radius circular arc, and let $p_1$ and $p_2$ be its endpoints. A unit disc $D(x)$ intersects $e$ if and only if $e\oplus D(0)$, the Minkowski sum of $e$ with a unit disc, contains the center $x$.
Let $c$ be the center of the unit circle containing $e$, let $z := D(p_1) \cup D(p_2)$, and let $D^+(c)$ be the disc of radius $2$ centered at~$c$;
$z$ divides $D^+(c)$ into three regions (see Fig.~\ref{f:zones_and_line_intersect_q}):
(i) $z^+$, the portion of $D^+(c)\setminus z$ above $z$, (ii) $z$ itself, and (iii) $z^-$, the portion of
$D^+(c)\setminus z$ below $z$.
It can be verified that $e\oplus D(0) = z\cup z^-$. We give an alternate characterization of $z\cup z^-$, which will help in developing the data structure.
Let $\ell$ be a line that passes through the tangent points, $p'_1$ and $p'_2$, of $D(p_1)$ and $D(p_2)$
with $D^+(c)$, respectively, and let $\ell^-$ be the halfplane lying below $\ell$. Set $L(e) = D^+(c)\cap \ell^-$.
\begin{lemma}
If $\partial D(p_1)$ and $\partial D(p_2)$ intersect at two points (one of which is always $c$)
then $\ell$ passes through $q := (\partial D(p_1)\cap \partial D(p_2)) \setminus \{c\}$. Otherwise $c \in \ell$.
\label{l:line_intersect_q}
\end{lemma}
\begin{proof}~
Assume that $q$ exists. The quadrilateral $(c, p_1, q, p_2)$ is a rhombus since all its edges have length $1$.
Let $\alpha$ be the angle $\angle p_1qp_2$ and $\beta$ be the angle $\angle cp_1q$.
The angle $\angle qp_1p'_1$ is equal to $\alpha$ since the segment $(c,p'_1)$ is a diameter of $D(p_1)$.
The angle $\angle p_1qp'_1$ is equal to $\frac{\beta}{2}$ since $\triangle p_1qp'_1$ is an isosceles triangle.
The same arguments apply to the angle $\angle p_2qp'_2$ implying that the angle $\angle p'_1qp'_2$ is equal to $\pi$.
If $q$ does not exist, then the segment $(p_1,p_2)$ is a diameter of $D(c)$.
The segment $(c,p'_1)$ is a diameter of $D(p_1)$, and it overlaps $(p_1,p_2)$ in the segment $(c,p_1)$; in particular, $p'_1$ lies on the line through $p_1$ and $p_2$.
The same argument applies to the segment $(c,p'_2)$, implying that the angle $\angle p'_1cp'_2$ is equal to $\pi$ (see Fig.~\ref{f:zones_and_line_intersect_q}).
\end{proof}
\begin{figure}[ht]
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{zones.png}
\captionsetup{justification=centering}
\label{f:zones}
\end{subfigure}
\hspace{3em}
\begin{subfigure}{0.3\textwidth}
\includegraphics[width=\textwidth]{line_intersect_q.png}
\captionsetup{justification=centering}
\label{f:line_intersect_q}
\end{subfigure}
\caption{(On the left) Partition of $\disc{2}{c}$ into three regions: $z^+$, $z$ and $z^-$. (On the right) Illustration of Lemma~\ref{l:line_intersect_q}.}
\label{f:zones_and_line_intersect_q}
\end{figure}
The following corollary summarizes the criteria for the intersection of a unit circular arc with a unit disc.
\begin{corollary}
\label{c:criteria_unit_arc}
Let $e$ be a circular arc in $\mathcal{C}$ with endpoints $p_1$ and $p_2$ and center $c$. Then
(i) $z\cup z^- = z\cup L(e)$.
(ii) $e$ intersects a unit disc $D(x)$ if and only if at least one of the following conditions
is satisfied: (a) $x \in D(p_1)$ (or $p_1 \in D(x)$), (b) $x \in D(p_2)$ (or $p_2\in D(x)$), and
(c) $x \in L(e)$.
\end{corollary}
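A direct implementation of these criteria (our sketch; it assumes a lower arc with distinct endpoints, and encodes $\ell^-$ as the side of $\ell$ opposite to the topmost point of $D^+(c)$, which is valid for lower arcs):
\begin{verbatim}
import math

def unit_arc_hits_unit_disc(x, p1, p2, c):
    """Corollary test: D(x) meets the lower unit arc e with endpoints
    p1, p2 on the circle centered at c iff (a), (b) or (c) holds."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    if dist(x, p1) <= 1.0 or dist(x, p2) <= 1.0:  # (a) or (b)
        return True
    if dist(x, c) > 2.0:                          # x must lie in D^+(c)
        return False
    # tangent points p'_i = c + 2 (p_i - c) of D(p_i) with D^+(c)
    t1 = (c[0] + 2 * (p1[0] - c[0]), c[1] + 2 * (p1[1] - c[1]))
    t2 = (c[0] + 2 * (p2[0] - c[0]), c[1] + 2 * (p2[1] - c[1]))
    def side(q):  # sign of q relative to the line through t1 and t2
        return ((t2[0] - t1[0]) * (q[1] - t1[1])
                - (t2[1] - t1[1]) * (q[0] - t1[0]))
    top = (c[0], c[1] + 2.0)                      # topmost point of D^+(c)
    return side(x) * side(top) <= 0               # (c): x lies in L(e)
\end{verbatim}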
We thus construct three separate data structures. The first data structure preprocesses the left endpoints of arcs in $\mathcal{C}$ for unit-disc range searching, the second data structure preprocesses the right endpoints of arcs in $\mathcal{C}$ for unit-disc range searching, and the third data structure preprocesses $\mathcal{L} = \{L(e) \mid e\in \mathcal{C}\}$ for inverse range searching, i.e., reporting all regions in $\mathcal{L}$ that contain a query point. Using standard circle range searching data structures (see e.g.~\cite{Agarwal1994,Agarwal2017}), we can build these three data structures so that each of them takes $O(n)$ space and answers a query in $O(n^{1/2+\epsilon}+k)$ time, where $k$ is the output size. Furthermore, these data structures can handle insertions/deletions in $O(\log^2 n)$ time. We omit the remaining details and conclude the following:
\begin{theorem}
Let $\mathcal{C}$ be a set of $n$ unit-circle arcs in $\mathbb{R}^2$. $\mathcal{C}$ can be preprocessed into a data structure of linear size so that for a query unit disc $D$, all arcs of $\mathcal{C}$ intersecting $D$ can be reported in
$O(n^{1/2+\epsilon} + k)$ time,
where $\epsilon$ is an arbitrarily small constant and $k$ is the output size.
Furthermore the data structure can be updated under insertion/deletion of a unit-circle arc
in $O(\log^2 n)$ amortized time.
\end{theorem}
\subparagraph*{Acknowledgement.}\hspace{-2ex}
We thank Haim Kaplan and Micha Sharir for helpful discussions. Work by P.A. has been supported by
NSF under grants CCF-15-13816, CCF-15-46392, and IIS-14-08846, by ARO
grant W911NF-15-1-0408, and by grant 2012/229 from the U.S.-Israel
Binational Science Foundation. Work by D.H.\ and R.C.\ has been supported in part by the Israel Science Foundation
(grant no.~825/15), by the Blavatnik Computer Science Research Fund,
by the Blavatnik Interdisciplinary Cyber Research Center at Tel Aviv
University, and by grants from Yandex and from Facebook. Work by W.M. has been partially supported by ERC STG 757609 and GIF grant 1367/2016.
\bibliographystyle{plainurl}
Let $G$ be a finitely generated profinite group, and let $\Irr(G)$
denote the collection of all continuous irreducible complex characters
of~$G$. We observe that each $\chi \in \Irr(G)$ has finite degree and
for every positive integer $n \in \mathbb{N}$ we put $r_n(G) = \lvert \{ \chi
\in \Irr(G) \mid \chi(1) = n \} \rvert$. From Jordan's theorem on
finite linear groups in characteristic~$0$
(see~\cite[Theorem~9.2]{We73}) one deduces that $r_n(G)$ is finite for
all $n \in \mathbb{N}$ if and only if $G$ is FAb, i.e., if $H/[H,H]$ is finite
for every open subgroup $H$ of~$G$.
Suppose that $G$ is FAb. Then the arithmetic sequence $r_n(G)$, $n
\in \mathbb{N}$, is encoded in the Dirichlet generating function
\begin{equation*}
\zeta_G(s) = \sum_{n=1}^\infty r_n(G) n^{-s} = \sum_{\chi \in
\Irr(G)} \chi(1)^{-s}
\end{equation*}
which is known as the \emph{representation zeta function} of~$G$. If
the representation growth of $G$ is polynomially bounded, i.e., if
$\sum_{n=1}^N r_n(G) = O(N^d)$ for some constant $d$, then
$\zeta_G(s)$ defines an analytic function on a non-empty right
half-plane of~$\mathbb{C}$. Under favourable circumstances, this function
admits a meromorphic continuation, possibly to the entire complex
plane~$\mathbb{C}$.
In recent years representation growth and representation zeta
functions have been investigated for various kinds of groups,
including compact $p$-adic Lie groups; for instance,
see~\cite{JZ06,LaLu08,AKOV12a,AKOV13} or the short introductory
survey~\cite{Kl13}. An intriguing, but mostly unexplored aspect is
the significance of special values of representation zeta functions.
In particular, one may be curious about the locations of zeros and
poles. While there is some theoretical understanding of the latter
(see~\cite[Theorem~B]{AKOV13}, and also compare~\cite{dS94} for the
pole spectra of related zeta functions), almost nothing is known about
the former.
In the present paper we establish that $\zeta_G(s)$ vanishes at $s=-2$
for every member $G$ of a certain class of profinite groups, including
all infinite FAb compact $p$-adic Lie groups for $p \geq 3$. Indeed,
let $G$ be a finitely generated profinite group which is FAb and
virtually pro-$p$ for some prime~$p$. We say that $G$ has
\textit{rational representation zeta function} (with respect to $p$),
$\mathrm{r.r.z.f.}_{(p)}$ for short, if there exist finitely many positive
integers $m_1, \ldots, m_k$ and rational functions $R_1, \ldots, R_k
\in \mathbb{Q}(X)$ such that
\begin{equation} \label{equ:formula} \zeta_G(s) = \sum_{i=1}^k
m_i^{-s} R_i(p^{-s}).
\end{equation}
In~\cite{JZ06}, Jaikin-Zapirain proved that, for $p \geq 3$, every FAb
compact $p$-adic Lie group has~$\mathrm{r.r.z.f.}_{(p)}$. It is
conjectured that the result extends to $2$-adic Lie groups; presently,
it is known that every uniformly powerful pro-$2$ group
has~$\mathrm{r.r.z.f.}_{(2)}$.
There is only a small number of FAb compact $p$-adic Lie groups $G$
for which the representation zeta function $\zeta_G(s)$ has been
computed explicitly; see~\cite{JZ06,AKOV12a,AKOV13}. By inspection of
the formula given in~\cite[Theorem~7.5]{JZ06}, Motoaki Kurokawa and
Nobushinge Kurokawa noticed that the representation zeta function of
the $p$-adic Lie group $\SL_2(\mathbb{Z}_p)$ has zeros at~$s=-1$ and $s=-2$.
The purpose of the present paper is to explain the zero at $s=-2$
which reflects a more general phenomenon.
\begin{teo} \label{thm:main} Let $G$ be a FAb profinite group which is
infinite and virtually a pro\nobreakdash-$p$ group. If $G$ has
rational representation zeta function with respect to~$p$ then
$\zeta_G(-2) = 0$.
\end{teo}
As indicated, using~\cite[Theorem~1.1]{JZ06} we derive the following
corollary.
\begin{cor} \label{cor:main} Let $G$ be a FAb compact $p$-adic Lie
group and suppose that $p \geq 3$. If $G$ is infinite then
$\zeta_G(-2) = 0$.
\end{cor}
\begin{rem} \label{rem:Wedderburn} Wedderburn's structure theorem for
semisimple algebras implies that $\zeta_G(-2) = \sum_{\chi \in \Irr
(G)} \chi(1)^2 = |G|$ for every finite group~$G$. For an infinite
profinite group $G$ one can evaluate $\zeta_G(s)$ at $s=-2$ only if
the function defined by the Dirichlet series has a suitable
continuation.
\end{rem}
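This finite identity is easy to check numerically; a minimal Python sketch (ours), using the standard character degree lists of two small groups:
\begin{verbatim}
examples = {
    "D4, order 8": (8, [1, 1, 1, 1, 2]),
    "SL(2,3), order 24": (24, [1, 1, 1, 2, 2, 2, 3]),
}
for name, (order, degrees) in examples.items():
    value = sum(d * d for d in degrees)  # zeta_G(-2) for a finite group
    assert value == order
    print(name, "-> sum of squared degrees =", value)
\end{verbatim}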
\begin{rem}
The representation functions of compact open subgroups of
semi\-simple $p$-adic Lie groups, such as $\SL_n(\mathbb{Z}_p)$, occur
naturally as factors in Euler products for the representation zeta
functions of arithmetic lattices in semisimple groups, such as $\Gamma
= \SL_n(\mathbb{Z}) \subseteq \SL_n(\mathbb{R})$; see
\cite[Proposition~1.3]{LaLu08}. However, since the Euler product
formula is not valid for $s=-2$, one cannot use
Corollary~\ref{cor:main} directly to investigate potential
properties of $\zeta_\Gamma(s)$ at $s=-2$. For instance, the
inverse of the Riemann zeta function $\zeta(s)^{-1} =
\sum_{n=1}^\infty \mu(n) n^{-s}$ satisfies $\zeta(s)^{-1} = \prod_p
(1-p^{-s})$ and $\zeta(0)^{-1}=-2$.
\end{rem}
In the next section we prove Theorem~\ref{thm:main} and its corollary,
by considering the $p$-adic limit of $\zeta_G(s)$ at $s=-2$. We also
offer an alternative proof of Corollary~\ref{cor:main}, which is
closer to the character theoretic set-up in~\cite{JZ06}. In the last
section we provide further comments and highlight some open problems.
In conclusion, we remark that related questions regarding zeros and
special values of Witten $L$-functions associated to real Lie groups,
in particular to the groups $\mathrm{SU}(2)$ and $\mathrm{SU}(3)$,
have been considered by Kurokawa and Ochiai~\cite{KuOc13} and also by
Min~\cite{Mi13}.
\section{The proofs}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Let $m_1, \ldots, m_k \in \mathbb{N}$ and $R_1, \ldots, R_k \in \mathbb{Q}(X)$ such
that the Dirichlet series $\zeta_G(s) = \sum_{n=1}^\infty r_n(G)
n^{-s} = \sum_{\chi \in \Irr(G)} \chi(1)^{-s}$
satisfies~\eqref{equ:formula}. Then the degrees of the irreducible
characters of $G$ are of the form $m_i p^r$ with $i \in \{1, \ldots,
k\}$ and $r \geq 0$. In particular, for every positive integer $j$
there are at most finitely many characters $\chi \in \Irr(G)$ with
$p^j \nmid \chi(1)$. Consequently, the series $\zeta_G(s)$
converges, with respect to the $p$-adic topology, at every negative
integer $-e \in -\mathbb{N}$ to an element in the ring $\mathbb{Z}_p$ of $p$-adic
integers: we obtain a function
\[
\zeta_G^\text{$p$-adic} \colon -\mathbb{N} \rightarrow \mathbb{Z}_p, \quad -e
\mapsto \sum_{n=1}^\infty r_n(G) n^e = \sum_{\chi \in \Irr(G)}
\chi(1)^e.
\]
For the last equality recall that in the $p$-adic topology every
converging series converges unconditionally so that its summands can
be re-arranged freely.
Equation~\eqref{equ:formula} reflects more than the equality of two
complex functions: by expansion of the right hand side we obtain a
Dirichlet series whose coefficients must agree with the defining
coefficients $r_n(G)$ of the zeta function on the left hand side.
This implies that for every negative integer $-e$,
\begin{equation} \label{equ:p-adic-zeta} \zeta_G^\text{$p$-adic}(-e)
= \sum_{i=1}^k m_i^e R_i(p^e) = \zeta_G(-e).
\end{equation}
Consequently, it suffices to prove that $\zeta_G^\text{$p$-adic}(-2)
= 0$.
Fix a positive integer~$j$. As seen above, there are only finitely
many characters $\chi \in \Irr(G)$ such that $p^j \nmid \chi(1)$.
We define
\begin{equation*}
N_j = \bigcap_{\chi \in \Irr (G),\; p^j \nmid \chi (1)} \ker \chi,
\end{equation*}
where each $\ker \chi$ coincides with the kernel of a
representation affording~$\chi$. Then $N_j$ is an open normal
subgroup of~$G$, and
\begin{equation*}
\zeta_G^\text{$p$-adic}(-2) = \sum_{\chi \in \Irr(G)} \chi(1)^2 =
\sum_{\substack{\chi \in \Irr(G) \text{ with} \\ N_j \subseteq
\ker \chi}}\chi(1)^2 + \sum_{\substack{\chi \in \Irr(G) \text{
with} \\ N_j \not \subseteq \ker \chi}} \chi (1)^2.
\end{equation*}
The first sum is equal to the order of the finite group $G/N_j$,
while all the terms in the second sum are divisible by~$p^{2j}$.
Thus
\begin{equation} \label{equ:take-limit}
\zeta_G^\text{$p$-adic}(-2) = \lvert G : N_j \rvert + p^{2j} a_j,
\end{equation}
for some $a_j \in \mathbb{Z}_p$.
Since $G$ is infinite and virtually a pro-$p$ group, the indices $\lvert G
: N_j \rvert$ are divisible by ever higher powers of $p$ as $j \to
\infty$; hence $\lvert G : N_j \rvert + p^{2j} a_j \to 0$ in the
$p$-adic topology. Thus \eqref{equ:take-limit} yields
$\zeta_G^\text{$p$-adic}(-2) = 0$.
\end{proof}
Next we give an alternative proof of Corollary~\ref{cor:main}, which
is closer to the set-up in~\cite{JZ06} and does not rely on $p$-adic
limits.
\begin{pro} \label{pro:1} Suppose that $p \geq 3$ and let $N$ be a FAb
uniformly powerful pro-$p$ group. Then for every $m \geq 0$,
\[
\zeta_{N^{p^m}}(s) = \lvert N : N^{p^m} \rvert \, \zeta_N(s).
\]
\end{pro}
\begin{proof}
This is a consequence of the analysis in~\cite[Section~3]{AKOV13}
of a formula given in~\cite[Corollary~2.13]{JZ06}.
\end{proof}
\begin{pro} \label{pro:2} Suppose that $p \geq 3$ and let $G$ be a FAb
compact $p$-adic Lie group. Let $H$ be an open subgroup of $G$.
Then
\[
\zeta_G(-2) = \lvert G : H \rvert \, \zeta_H(-2).
\]
\end{pro}
\begin{proof}
Choose an open normal subgroup $N$ of $G$ which is $2$-uniform (in
the sense of \cite[Section~2]{JZ06}) and contained in~$H$. We show
that
\[
\zeta_G(-2) = \lvert G : N \rvert \, \zeta_N(-2).
\]
The same reasoning yields $\zeta_H(-2) = \lvert H : N \lvert
\zeta_N(-2)$, and combining the two equations proves the
proposition.
We adapt the set-up in~\cite[Sections~5 and~6]{JZ06}. As in the
proof of \cite[Theorem~1.1]{JZ06}, we decompose the representation
zeta function of $G$ as
\[
\zeta_G(s) = \sum_{N \leq K \leq G} \sum_{\substack{\theta \in
\Irr(N) \\ \text{with } \St_G(\theta)=K}} \lvert G : K
\rvert^{-1-s} f_{(K,N,\theta)}(s) \cdot \theta(1)^{-s},
\]
where for each character triple $(K,N,\theta)$ one defines
\[
f_{(K,N,\theta)}(s) = \sum_{\chi \in \Irr(K \vert \theta)}
\left( \frac{\chi(1)}{\theta(1)} \right)^{-s}
\]
summing over all $\chi \in \Irr(K)$ such that $\theta$ is a
component of~$\red_N^K(\chi)$. We observe that for each character
triple $(K,N,\theta)$,
\begin{equation} \label{equ:f(-2)} f_{(K,N,\theta)}(-2) = \sum_{\chi
\in \Irr(K \vert \theta)} \left( \frac{\chi(1)}{\theta(1)}
\right)^2 = \frac{\red_N^K(\ind_N^K(\theta))(1)}{\theta(1)} =
\lvert K : N \rvert.
\end{equation}
It is proved in \cite{JZ06} that for each group $K$ with $N \leq K
\leq G$ the set $\Irr(N)_K = \{ \theta \in \Irr(N) \mid
\St_G(\theta) = K \}$ can be partitioned into finitely many subsets
$\Irr(N)_{K,v}$, labelled by $v \in V_K$, such that
\begin{enumerate}
\item[(i)] for each $v \in V_K$ and $\theta \in \Irr(N)_{K,v}$,
\[
f_{(K,N,\theta)}(s) = f_v(s)
\]
depends only on $v$ and
\item[(ii)] for each $v \in V_K$,
\[
g_v(s) = \sum_{\theta \in \Irr(N)_{K,v}} \theta(1)^{-s}
\]
is a rational function over $\mathbb{Q}$ in $p^{-s}$.
\end{enumerate}
The equations
\begin{align*}
\zeta_G(s) & = \sum_{N \leq K \leq G} \; \sum_{v \in V_K} \lvert G
: K \rvert^{-1-s} f_v(s) g_v(s), \\
\zeta_N(s) & = \sum_{N \leq K \leq G} \; \sum_{v \in V_K} g_v(s)
\end{align*}
combined with \eqref{equ:f(-2)} give
\begin{align*}
\zeta_G(-2) & = \sum_{N \leq K \leq G} \; \sum_{v \in V_K} \lvert
G : K \rvert \lvert K : N \rvert g_v(-2) \\
& = \lvert G : N \rvert \zeta_N(-2).
\end{align*}
\end{proof}
\begin{proof}[Second proof of Corollary~\ref{cor:main}]
By Proposition~\ref{pro:2} it is enough to prove the result for a
uniformly powerful pro-$p$ group $N$. By Propositions~\ref{pro:1}
and \ref{pro:2} we have
\[
\lvert N : N^p \rvert \zeta_N(-2) = \zeta_{N^p}(-2) = \lvert N : N^p
\rvert^{-1} \zeta_N(-2).
\]
Since $\lvert N : N^p \rvert > 1$, this implies $\zeta_N(-2) = 0$.
\end{proof}
\section{Open questions}
We highlight three questions which arise naturally from
Theorem~\ref{thm:main}, Corollary~\ref{cor:main} and their proofs.
In view of \eqref{equ:p-adic-zeta} we record the following problem.
\begin{que}
Let $G$ be a FAb compact $p$-adic analytic group. What are the
values of $\zeta_G(s)$ at other negative integers $s = -e$ and is
there a suitable interpretation of these?
\end{que}
Of course, we would like to extend Corollary~\ref{cor:main} to the
prime~$p=2$. More generally, one can ask the following.
\begin{que}
Let $G$ be a FAb profinite group and suppose that $\zeta_G(s)$
converges in some right half-plane $\{ s \in \mathbb{C} \mid \textrm{Re}(s)
> \alpha \}$. Suppose further that $\zeta_G(s)$ has a meromorphic
continuation so that $\zeta_G(-2)$ is defined. Is it true that
$\zeta_G(-2) = 0$?
\end{que}
For instance it would be natural to investigate this question for
compact analytic groups over compact discrete valuation rings of
positive characteristic, e.g., over~$\mathbb{F}_p[\![t]\!]$. The
representation zeta functions of such groups are still rather poorly
understood. In particular, no analogue of Proposition~\ref{pro:1} is
known. However, a direct computation in~\cite{JZ06} shows that, for
$p \geq 3$, the group $\SL_2(\mathbb{F}_p[\![t]\!])$ has the same
representation zeta function as the $p$-adic analytic group
$\SL_2(\mathbb{Z}_p)$.
The last question is inspired by Brauer's Problem~$1$, which asks:
what are the possible degree patterns for irreducible characters of
finite groups; see~\cite{Br63,Hu91,Mo07}. Given a profinite group $G$
with $\mathrm{r.r.z.f.}_{(p)}$, the completed group algebra $\mathbb{C}[\![G]\!] =
\varprojlim_{N \trianglelefteq G} \mathbb{C}[G/N]$, formed with respect to the
directed set of normal open subgroups of $G$, determines the
representation zeta function of~$G$ and, conversely, $\zeta_G(s)$
determines $\mathbb{C}[\![G]\!]$. Furthermore, if $G$ is a pro-$p$ group,
then $\zeta_G(s)$ is a rational function over $\mathbb{Q}$ in $p^{-s}$. The
following can be regarded as an extension of Brauer's Problem~$1$ to
FAb pro-$p$ groups.
\begin{que}
Which rational functions $R(p^{-s})$ over $\mathbb{Q}$ in $p^{-s}$ are
representation zeta functions of infinite FAb pro-$p$ groups with
$\mathrm{r.r.z.f.}_{(p)}$?
\end{que}
Theorem~\ref{thm:main} provides a first necessary criterion:
$R(p^2)=0$.
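As a quick illustration of our own: a function such as $R(X) = \frac{1}{1-pX}$ is ruled out, because $R(p^2) = \frac{1}{1-p^{3}} \neq 0$, whereas a candidate of the shape
\[
R(X) = \frac{1-p^{-2}X}{1-pX}
\]
does satisfy $R(p^{2})=0$.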
\section{Introduction}
{\noindent}Global features of complex cognitive functions in the brain arise from the systematic coordination of basic functions organized in localized brain regions \citep{luria1980}. These two aspects of brain organization are reflected in cortical function and evolve continuously through internal and external perturbations. Human cognition can thus be likened to a water surface on which ripple-like patterns dwell upon receiving perturbations from outside. The fluidity of consciousness is reflected in the highly dynamic behaviour of brain activity \citep{Chialvo2010}. We can access these spatio-temporal patterns of brain activity only through various electro-physiological and neuro-imaging techniques in order to understand the underlying dynamics. The electroencephalogram (EEG) is one such technique, measuring electrical brain activity with high temporal resolution. The functional spatio-temporal clustering of neurons in the cortical layers of the brain is a consequence of various biological factors affecting the concentration of neurotransmitters, the flow of ions through ion channels and the voltage generated across the membranes of pre- and post-synaptic neurons \citep{Klinshov2014, Voglis2006}. These biological factors determine the strength of the diffused signal among neurons and the neuronal organization. Such recurring signals strengthen the circuitry of a neuron cluster, allowing it to learn and memorize a specific brain function, and rewiring among these complex functional patterns leads to the varied cognitive abilities of the brain \citep{Colicos2006,Eguiluz2005a,Sporns2000,Sporns2002,Sporns2002a}. How synchronous neuronal activation and the generation of functional patterns lead to the execution of a cognitive task, however, remains a riddle to solve. Understanding complex cognitive functions therefore begins with studying the network dynamics of the interacting local and global neurons. The interaction range of the cortical functional regions of the brain, and of the neurons within them, plays a significant role in shaping the functional network topology of the brain, and it would be interesting to know how this range is determined for information processing and for shaping neural circuitry among the neurons. The Ising model is a simple physical model that explains ferromagnetic behaviour, with domain coarsening of a two-spin lattice system arising through inter-molecular interactions of a given range, the Hamiltonian energy acting as the driving factor for the system's evolution \cite{McCoy2014}. In our foregoing work, we proposed the \textit{neuron activity pattern (NAP) model}, which is based on the kinetic Ising model \cite{Gundh2015}. In that work, we established a critical interaction range that demarcates the long and short ranges of interaction. Our approach evaluates the role of various interaction ranges on the coupling strength and the overall Hamiltonian dynamics, in contrast to the Onsager result, which solved only the nearest-neighbour case \citep{onsager1944}. It has been shown in \citep{Gundh2015} that each interaction/coupling range has its own critical temperature for generating spin clusters in the near-critical regime of the phase transition, although the overall dynamical behaviour follows universal scaling laws.
Our anticipation led to the conjecture that cortical neurons might undergo a second-order phase transition near criticality to generate patterns of active neurons in response to a signal. However, the emergence of these functional clusters and their organization remain open questions. We term these patterns of neuronal activity functional cortical patterns, or FCPs.\\
{\noindent}In this study, we generate model-based networks on the basis of, first, different ranges of interaction (long, critical and short) and, second, another important parameter, the temperature, which defines the below-, near- and above-criticality regimes. We take this architecture further to characterize the temporal-correlation-based network connectivity of the FCPs in our model data, and we find it congruent with the task-specific functional brain connectivity attained through empirical data, since functional connectivity reflects statistical dependencies between spatially distributed neuronal groups. We compare the networks built from simulated and empirical data through their topological characterization. This study models the emergence of functional connectivity in the brain, and to assess its fidelity we apply several approaches: network theory attributes, community detection, functional cartography of nodes and multi-fractal analysis. We use EEG data of a specific visual task on healthy human subjects for comparison with our model data \citep{Begleiter1999}. We address important features of functional cluster formation, their organization and their properties below, near and above criticality for each interaction range. The major concern of this work is whether the FCPs generated through our neuron activity pattern model can mimic the topological characteristics of task-specific functional brain connectivity.
\begin{figure}
\label{fig1}
\begin{center}
\includegraphics[height=20cm,width=16cm]{Fig1.eps}
\caption{\textbf{The Recurrence plots of model-based functional cortical patterns (FCPs):} \textbf{A)} Outline of the workflow used to characterize the model and empirical time-series data and to attain congruency in their functional activity patterns.
\textbf{B)} For long ($n=3$), critical ($n=4$) and short ($n=6$) coupling
ranges at below-critical ($T = 1$), near-critical ($T\sim T_{c}$) and
above-critical ($T > T_{c}$) synaptic strengths (shown in black, red
and green, respectively), in a network of 4096 nodes.
\textbf{Note:- } Communities of active neurons form at near-criticality
for all coupling ranges.}
\end{center}
\end{figure}
\vskip 0.5cm
\section{Results and Discussion}
We first delineate the details of the model simulation in the next subsection, followed by a discussion of the results.
\subsection{Correlated neuron activity could optimize brain function}
{\noindent}The concept of self-organization rightly captures the dynamics of complex systems \citep{kauffman1993} and also aids in understanding brain functionality \citep{bassett2011}. Viewing neural dynamics through a statistical-mechanics window is challenging, but the approach has gained ground since the Ising model was used to explain dynamical avalanches of neuronal activity in the brain \citep{Chialvo2010,Eguiluz2005a}. In contrast to the Ising model, a local neural network is not an equilibrium system, as neurons constantly receive inputs from other neurons; however, the near-critical point of the phase transition, i.e. far from equilibrium, can explain the dynamic equilibrium of complex systems. Self-organizing dynamical systems likewise suggest that the brain works at a transition point when it receives an optimum signal strength \citep{Beggs2012}. Still, much remains to uncover in the complex dynamics of brain functionality: how does the brain make the transition towards a functional network that spans the cortical layers of the brain, acquiring non-random synaptic connectivity for performing a cognitive task \citep{Song2005}?\\
{\noindent}This work uses the NAP model introduced in our previous study \citep{Gundh2015}; therefore, the details regarding the Hamiltonian dynamics of the Ising model and its numerical simulation, using Monte Carlo Glauber kinetics with non-conserved magnetization, are only briefly discussed here. The Ising model is one of the primitive models for studying phase-ordering dynamics in a random distribution of spins under the influence of a critical parameter, the temperature (T). It has been used extensively to explain functional neuronal activity across functional brain networks \citep{Fraiman2009,goodarzinick2018}. This makes us strongly presume that the phase-transition state is the critical state of brain functional dynamics at which cognitive actions take place. We consider a random allocation of a two-spin configuration on a 2D square lattice with periodic boundary conditions in order to avoid edge effects. We contemplate the cortical neuronal circuits coarse-grained into firing and non-firing states of neurons (or groups of neurons), represented by the two states of a spin {s}: {s}$=+1$ for firing and {s}$=-1$ for resting or non-firing neurons. The total energy of the system, representing the various internal fluctuations and diffusive processes in neurons that result in their activity and influence the activity of surrounding neurons, is given by the following Hamiltonian, which models long- as well as short-range interactions,
\begin{eqnarray}
\label{2D}
H\:=-{\displaystyle {\displaystyle \sum_{<ij>}{\textstyle J(r_{ij},n)s_{i}s_{j}},}}\;\quad\;\:\;s_{i}=\pm1\,\forall\,i,
\end{eqnarray}
where $s_{i}$ and $s_{j}$ denote the activity of the neurons at site \textit{i} and a neighbouring site \textit{j}, and \textit{J} represents the coupling strength among neurons, defined as a function of the distance between the $i$th and $j$th neurons, $r_{ij}=|r_i-r_j|$, and confined by $n$, the coupling range of the system. One can then model $J$ by the power-law behaviour $J(r_{ij},n) =\frac{J}{r_{ij}^{n}}$ \citep{Cannas1996}, where $d$ is the dimension of the system and $n$ can be expressed as $n=d+\sigma=2+\sigma$ \citep{Fisher1972,Blanchard2013}, which qualifies as \textit{short-range} for $\sigma>2$ and as \textit{long-range} for $0<\sigma<2$ \citep{Picco2012,gruneberg2004}. Further, $J> 0$ always (the ferromagnetic case) and $n>0$ \citep{Cannas1996}. We take a 2D square lattice of spins with side $L$ and $N=L\times L$ total spin sites. To define the coupling range, we take the centre of the 2D square as the reference origin delimiting the maximum distance over which spins interact. The incircle of radius $r = L/2$ approximates the area of the spin distribution considered in calculating the potential energy, $U$, of the system. In order to capture the thermodynamic parameters of the system, one can define the following slowly decaying potential function $U(n)$ \citep{Chialvo2010,Cannas1996}, where $n =2 + \sigma$ for the 2D system; scaling $J\rightarrow J/N$ in the limit $N\rightarrow\infty$ and using the Euler-McLaurin sum formula \citep{bruijn1981}, we obtain the potential energy function for each interaction range \textit{n} as follows \citep{Gundh2015},
\begin{eqnarray}
\label{potential}
U\left(n\right)&=&\underset{N\rightarrow\infty}{\lim}\frac{1}{N}\sum_{<i,j>}^{N}\frac{J}{r_{ij}^{\sigma}}\approx \lim_{N\rightarrow\infty}J\int_{1}^{\sqrt{N}}drg(r)r^{3-n}\nonumber\\
&=&\lim_{N\rightarrow\infty}J\left[
\begin{matrix}
\frac{1}{2}\ln(N),~~~~~~~~~~~~~~~~for~~n=4~(critical)~~~~~~~~~~\\
\frac{1}{n-4}\left(1-N^{2-n/2}\right),~~for~~n>4~(short~range)~~~~~\\
\frac{1}{4-n}N^{2-n/2}~~~~~~~~~~~~for~~0<n<4~(long~range)~
\end{matrix}
\right.
\end{eqnarray}
where $g(r)$ is the pair distribution function, with $g(r)\sim 1$ for $r \gg 1$. The critical parameters of the 2D system can be calculated from the force derived from the potential in equation (\ref{potential}) at the critical point, where a singularity arises in the solution of the system \citep{Hiley1965}. Following this technique, one can estimate the critical temperature from the numerically derived closed-form approximation \citep{Hiley1965} given below,
\begin{eqnarray}
\label{closed}
\frac{U(n)}{k_BT_C}=1+\frac{f_2}{U(n)^2}+O(U^{-4})
\end{eqnarray}
where $f_2=\sum_{<i,j>}J(r_{ij},n)^2$, which can be calculated for the 2D system by following the procedure mentioned above, i.e. scaling $J\rightarrow J/N$ in the limit $N\rightarrow\infty$. Using the Euler-McLaurin sum formula \citep{bruijn1981}, $f_2$ is given by,
\begin{eqnarray}
\label{f2}
f_2&=&\underset{N\rightarrow\infty}{\lim}\frac{1}{N}\sum_{<i,j>}^{N}\left[\frac{J}{r_{ij}^{\sigma}}\right]^2\approx \lim_{N\rightarrow\infty}J^2\int_{1}^{\sqrt{N}}drg(r)r^{5-2n}\nonumber\\
&=&\lim_{N\rightarrow\infty}\frac{J^2}{2}\left[
\begin{matrix}
\left[1-\frac{1}{N}\right],~~~~~~for~~n=4~(critical)~~~~~~~~~~\\
\frac{1-N^{3-n}}{n-3},~~~~~~for~~n>4~(short~range)~~~~~\\
\frac{N^{3-n}-1}{3-n}~~~~~~~~for~~0<n<4~(long~range)~~
\end{matrix}
\right.
\end{eqnarray}
Now using equations (\ref{closed}), (\ref{potential}) and (\ref{f2}), we arrive at an expression for the finite critical temperature given below,
\begin{eqnarray}
\label{TC}
T_c(n,N)&=&\frac{1}{k_B}\,\frac{U(n)}{1+\frac{f_2}{U(n)^2}}\nonumber\\
&=&\frac{J}{k_B}\left[
\begin{matrix}
\lim_{N\rightarrow\infty}\frac{N[\ln(N)]^3}{2N+N[\ln(N)]^2-1},~~~~~~~~~~~~~~~~~~~~~~~~~~~for~~n=4~(Critical)~~~~~~~~~~\\
\frac{4(n-3)}{(n-2)^2(n-4)},~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~for~~n>4~(short~range)~~~~~\\
\lim_{N\rightarrow\infty}\frac{2}{4-n}
\frac{N^{3(4-n)/2}}{2N^{4-n}+(4-n)^2\left[\frac{N^{3-n}-1}{3-n}\right]}~~~~~~~~~~~~~for~~0<n<4~(long~range)~~
\end{matrix}
\right.
\end{eqnarray}
From the above equations (\ref{TC}), we can see that for fixed system size $N$, both short-range and long-range interactions of the neurons contribute to $J$ and lower $T_c$, allowing the neurons to work at a modified phase. On the other hand, for fixed $n$, $T_c$ increases as $N$ increases, both in the critical and in the long-range regime. The range of interaction $n$ (or coupling range) and the coupling strength $J$ control the order parameter, i.e. the temperature $T$, of a spin-lattice system. There are, however, debates on the estimation of the dependence of $T_c$ on $J$ in 2D systems. Even though the relation between $T_c$ and $J$ was obtained analytically for the 2D system as $\sinh(2J/k_BT_c)\approx 1$ \citep{onsager1944}, the value of $k_BT_c/J$ calculated numerically with Monte Carlo techniques in such systems, within the finite-size scaling formalism, was found to vary in the range $k_BT_c/J\rightarrow [2.269-2.29]$ \citep{Binder1988}. The reason for the variation in the critical parameter value could be the finite size ($N$) of the system \citep{Binder1988}, and $T_c$ depends on the size $N$ except for short-range interaction ($n>4$). The critical temperature $T_c$ can be defined in units of $J/k_B$; through our model simulations of random spin lattices we obtained non-integer values of $T_c$ for the various long and short interaction ranges (see Fig.~1 of ref.~\citep{Gundh2015}), signified by a sudden drop in the total magnetization (order parameter) of the system.\\
{\noindent}Our premise is that this modelling framework mimics the underlying dynamics of a population of neurons. This study intends to relate the terminology used in two different yet relatable systems, characterizing the brain as a dynamic physical system. We define the temperature, analogously, as the global synaptic strength given to a system of neurons (or groups of neurons), which generates the synchronous activation of neurons sharing a common synaptic strength. The critical temperature therefore mirrors the optimal perturbation (stimulus) imparted to the system, i.e. relayed and associated in the form of synaptic strength in the brain. We take a 2D $64\times 64$ lattice of 4096 spin sites, each depicting either a firing or a non-firing neuron state, allocated randomly, and simulate it for 500 Monte Carlo steps at a defined global synaptic strength. Our approach examines the inter-neuronal connections for long, short and critical coupling ranges at different \textit{global} synaptic strengths ($T$), categorized into three stages: below criticality, near criticality and above criticality. Below criticality is defined as a temperature below the phase-transition temperature but above the mean-field temperature, i.e. $0 < T < T_c$; we consider $T=1\frac{J}{k_B}$ as the below-critical temperature for all coupling ranges. The near-critical transition temperature, $T_c$, differs for the long $(T_c\sim 2.9\frac{J}{k_B})$, critical $(T_c\sim 1.9\frac{J}{k_B})$ and short $(T_c\sim 1.4\frac{J}{k_B})$ ranged interactions, based on their respective dynamics (see Fig.~1 of ref.~\citep{Gundh2015}). The spin-flip kinetics within the Glauber algorithm define the spin acceptance probability and assume ergodicity; the spin to update is chosen randomly \citep{Binder1988,newman1999}. The system quenches from a random disordered phase to an ordered lattice phase, leading to the formation of domains/patterns of functional activity \citep{Sporns2000}. The Monte Carlo simulations for a particular coupling range $n$ and global synaptic strength $T$ generate temporal activity data from the state of each neuron. In order to substantiate the global system behaviour, we average the temporal data over 10 ensemble sets in each case. The averaged time-series data are then used to construct binary undirected graphs, defining inter-neuronal connections through the Pearson correlation coefficient, for the different coupling ranges and strengths (see Methods). Thus, our analysis of a 2D lattice of neurons with fixed positions and randomly allocated activity at each site leads to the emergence of FCPs of synchronous activity at the near-critical temperature, the optimum synaptic strength. \\
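The protocol just described can be summarized in a minimal Python sketch (our own illustration; the lattice side, correlation threshold and random seed below are assumptions chosen for speed, not the exact settings of the paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

L = 32          # lattice side (the paper uses 64; smaller here for speed)
N = L * L
n_exp = 6       # coupling-range exponent n (6 = short range)
T = 1.4         # global synaptic strength, in units of J/k_B
J = 1.0
sweeps = 500    # Monte Carlo steps, as in the text

# Pairwise lattice distances (periodic boundaries) and power-law couplings,
# truncated at r = L/2, the maximal interaction radius used in the model.
xy = np.array([(i, j) for i in range(L) for j in range(L)], dtype=float)
d = np.abs(xy[:, None, :] - xy[None, :, :])
d = np.minimum(d, L - d)
r = np.hypot(d[..., 0], d[..., 1])
Jmat = np.where((r > 0) & (r <= L / 2), J / np.maximum(r, 1.0) ** n_exp, 0.0)

s = rng.choice([-1, 1], size=N)   # random firing (+1) / non-firing (-1) states
activity = np.empty((sweeps, N), dtype=np.int8)

for t in range(sweeps):
    for _ in range(N):            # one sweep = N randomly chosen spin updates
        i = rng.integers(N)
        dE = 2.0 * s[i] * (Jmat[i] @ s)       # energy cost of flipping s_i
        # Glauber acceptance probability 1 / (1 + exp(dE / T))
        if rng.random() < 1.0 / (1.0 + np.exp(np.clip(dE / T, -700, 700))):
            s[i] = -s[i]
    activity[t] = s

# Binary undirected functional network from pairwise Pearson correlations.
C = np.nan_to_num(np.corrcoef(activity.T))
A = (np.abs(C) >= 0.5).astype(int)   # illustrative threshold (our assumption)
np.fill_diagonal(A, 0)
print("edges:", A.sum() // 2)
\end{verbatim}
The correlation threshold plays the role of the criterion described in Methods; varying it trades network density against the significance of the retained links.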
\begin{figure}
\label{fig2}
\begin{center}
\includegraphics[height=20cm,width=16cm]{Fig2.eps}
\caption{\textbf{Basic network attributes of model-based FCPs and EEG-based
FNs:} \textbf{A)} FCPs for the three distinguished coupling ranges
at synaptic strengths below criticality (in black), near-criticality
(in red) and above criticality (in green). The columns exhibit a)
clustering coefficient, C(k), versus degree (k) plots with degree
exponent $\alpha$. b) The probability of degree distribution, P(k),
plots with degree exponent $\gamma$ c) The Neighborhood connectivity,
$C_{n}(k)$, plots with degree exponent $\beta$.
\textbf{B)} a) The connectivity trend of EEG data based FNs generated
for three visual stimulus conditions S1 (single image shown), S2\_match
(two similar images) and S2\_nomatch (two dissimilar images) of one
subject with 10 trials shown in different colours in each case. The
columns b), c) and d) illustrates the C(k), P(k) and $C_{n}(k)$
plots versus degree (k), respectively. The inset plots depict the power-law fitting trend of 10 subjects with their average degree exponents. \textbf{Note -} In all the coupling ranges (long, critical and short) there is a transition shift at near-criticality (shown in red) for all three basic network attributes, which characterize the hierarchical and scale-free organization through the power-law behaviour of their heavy-tailed distributions. The network measures follow a similar trend for near-critical FCPs and EEG-based FNs, apart from the obvious disparity among subjects.}
\end{center}
\end{figure}
\subsection{Model-based functional cortical patterns}
{\noindent}The neuron activity patterns can be imprinted in a connectivity matrix based on the correlation in the activity of neurons. The temporal correlations of the neural activity have been translated into binary connectivity matrices. We apply different approaches for building the connection matrix from the model and empirical time-series data, as summarized in the workflow chart of Fig.~1A. We tag the model-based functional clusters of synchronous activity as FCPs. On the other hand, the visual task-based EEG time series is transformed into a functional brain network, FN, using the visibility graph approach \citep{Lacasa2008}. We attain significant congruence between the FCPs and FNs in their topological properties and functional characterization by applying network theory and multi-fractal approaches \citep{Sporns2002a,Kantelhardt2002}. The model simulations at various interaction ranges and temperature strengths substantiate the emergence of functional activity patterns only at the near-critical strength, as depicted through the recurrence plots in Fig.~1B. In this work, we extend the interpretation of our model data and apply a fixed global synaptic strength, which determines the intensity of the overall coupling among the system of N neurons, classified as below criticality, near criticality and above criticality of the phase-transition dynamics. The critical temperature $T_{c}$, which signifies the state of near-critical phase transition, is found to mimic the optimum synaptic strength coordinating short- and long-range neurons in the brain for the generation of functional clusters of neurons at local and global levels.\\
{\noindent}The emergence of FCPs in our model is the result of self-organized criticality, which formulates the functional patterns of neurons in the brain \citep{Kitzbichler2009}. These FCPs can mimic the functional networking that emerges in the brain cortex with task-specific activation of neurons. The recurrence plots in Fig.~1B show the emergence of FCPs, most prominent at the near-critical temperature $T_{c}$ for each coupling range. The long-ranged coupling at $T = 1$ shows very little clustering; it therefore has many random connections that do not form ordered patterns or domains, as reflected in the recurrence plots. At near-critical synaptic strength ($T \sim T_{c}$), however, the transition to the critical phase allows the formation of a number of functional patterns or domains. We can interpret the near-critical transition as the state of attaining the optimal synaptic strength for constructing synchronous functional patterns of neural activity, which could mimic the conscious state of a simple cognitive task in the brain \citep{Eguiluz2005a}. Further, above criticality ($T > T_{c}$) these patterns are destroyed by the enormous increase in the randomness of neuronal connections, causing a breakdown of neuronal self-organization. The high synaptic strength could also signify an abnormal state with loss of functional connectivity, as observed in diseases such as Alzheimer's \citep{Stam2007}. A similar trend is followed by the critical and short coupling ranges (see Fig.~1B). However, the underlying dynamics differ for short- and long-range interactions. The long-range coupling ($n=3$) exhibits fast dynamical clustering at the near-critical temperature {$T_{c}= 2.9\frac{J}{k_B}$}, whereas the short-range coupling ($n=6$) exhibits slow dynamics at {$T_{c} = 1.5\frac{J}{k_B}$} \citep{Gundh2015}. From equation (\ref{TC}), the short-range $T_c$ $(n=6)$ is given by $T_c=\frac{3}{8}\frac{J}{k_B}$. On the other hand, since $\lim_{n\rightarrow 3}\frac{N^{3-n}-1}{3-n}=\lim_{n\rightarrow 3}\frac{-(\ln(N))N^{3-n}}{-1}=\ln(N)$, the long-range $T_c$ $(n=3)$ is given by,
\begin{eqnarray}
T_c(N)\approx\frac{J}{k_B}\frac{2N^{3/2}}{2N+\ln(N)}
\end{eqnarray}
This shows that $T_c(N)$ depends on the system size $N$ {for long-range coupling} only. These changes in $T_c$ suggest that the FCPs generated under short- and long-range coupling enlarge with $N$ as the synaptic strength increases, signifying local and global coupling over the population of neurons \citep{Kitzbichler2009, Deco2012}. \\
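The closed-form expressions above can be checked numerically; a small sketch of our own (the variable names are ours):
\begin{verbatim}
import numpy as np

def tc_short(n):
    # short-range branch of the closed-form T_c, in units of J/k_B
    return 4.0 * (n - 3) / ((n - 2) ** 2 * (n - 4))

def tc_long_n3(N):
    # n = 3 limit derived above: T_c(N) ~ 2 N^{3/2} / (2N + ln N)
    return 2.0 * N ** 1.5 / (2.0 * N + np.log(N))

print(tc_short(6))          # 0.375 = 3/8, as quoted in the text
print(tc_long_n3(64 * 64))  # grows with N, cf. the size dependence above
\end{verbatim}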
\subsection{Network topology and characterization.}
{\noindent}The complex network theory approach, originating from graph theory, delineates the interrelated activity of neurons in terms of quantitative measures \citep{Papo2014,Bullmore2009}. We apply network theory attributes here to characterize the functional networks generated from the neuro-physiological EEG data set and from the simulated FCPs of the population of neurons.
\subsubsection{Properties of model-based FCPs:}
{\noindent}We computed network theory attributes to study the topological properties of the FCPs generated using our model approach (as shown in Fig.~1). The network attributes calculated here represent the topological information of the connected undirected networks generated for the different coupling ranges of interaction. The sheer transition in the clustering coefficient distribution, $C(k)$, from the below-critical (black) to the near-critical state (red) (Fig.~2A column $a$) classifies the organization of a functional cluster as hierarchical, as exhibited through the power-law scaling behaviour $C(k)\sim k^{-\alpha}$ as a function of degree $k$, with scaling exponent $\alpha$, for all the distinguished coupling ranges (long, critical and short). The data are plotted on a logarithmic scale with non-linear curve fitting to expose the power-law scaling behaviour. In the cases with an initial scattering of data points, the power law is exhibited only by the long heavy tail, whereas the other cases show power-law scaling over the whole or the majority of the data points. The power-law distributions were confirmed by the algorithm of Clauset et al. \citep{clauset2009}. At the above-critical state (green), however, due to the increased randomness of neuronal firing, there are too few connections to form a functional pattern. Since $\alpha$ for long-range coupling takes a larger value ($\alpha=0.58$) than for short-range coupling ($\alpha=0.35$), long-range coupling at the near-critical temperature probably enhances the active self-organization of neurons and allows local effects to participate in global phenomena. The probability of degree distribution, $P(k)$, again exhibits a well-defined fractal nature, $P(k)\sim k^{-\gamma}$, at the near-critical temperature, with a larger scaling parameter, $\gamma=1.04$, for long-range coupling, and shows a similar transition into a power-law function, thus substantiating the presence of high-degree nodes of low occurrence probability in the neural network. Similarly, the neighbourhood connectivity distribution also follows a power law, $C_{n}(k)\sim k^\beta$, with $k$; at the near-critical temperature it exhibits positive $\beta$ values for short-range and critical couplings, indicating the possibility of rich-club formation, where some of the hubs work hand in hand to control the network. Long-range coupling, however, drives the neurons against this rich-club formation (negative $\beta$). This assures that long-range coupling maintains the hierarchical organization of the functional cortical pattern in the near-critical temperature phase, whereas short- and critical-range couplings bring the network towards near scale-free features, where $C_n(k)$ becomes nearly independent of $k$ and $\beta$ becomes positive, indicating the significant importance of a group of hubs (rich-club formation), respectively. Further, the behaviour at below-critical temperature exhibits positive $\beta$ values (black data) for all coupling ranges (short, critical and long), a clear indication of rich-club formation and of the significantly important role of hub groups in regulating the FCPs. On the other hand, in the above-criticality regime, clear behaviour of the network properties of the FCPs is not exhibited for any coupling range.\\
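For a given binary graph, the three degree-dependent quantities and their exponents can be computed along the following lines (a sketch of our own using networkx; the function names are ours):
\begin{verbatim}
import numpy as np
import networkx as nx

def degree_dependent_measures(G):
    """C(k), P(k) and neighbourhood connectivity Cn(k) of a binary graph."""
    deg = dict(G.degree())
    clust = nx.clustering(G)
    nbr = nx.average_neighbor_degree(G)
    ks = np.array(sorted(set(deg.values())))
    P_k = np.array([sum(deg[v] == k for v in G) / len(G) for k in ks])
    C_k = np.array([np.mean([clust[v] for v in G if deg[v] == k])
                    for k in ks])
    Cn_k = np.array([np.mean([nbr[v] for v in G if deg[v] == k])
                     for k in ks])
    return ks, P_k, C_k, Cn_k

def loglog_exponent(k, y):
    """Slope of a least-squares fit in log-log space, e.g. C(k) ~ k^slope."""
    m = (k > 0) & (y > 0)
    return np.polyfit(np.log(k[m]), np.log(y[m]), 1)[0]
\end{verbatim}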
\subsubsection{Properties of EEG based functional networks:}
{\noindent}The EEG time-series data of the whole brain of human subjects visualizing a particular single object (S1), two matching objects (S2 match) or two different objects (S2 nomatch) have been taken from the UCI database. Since the networks, or visibility graphs, constructed from the EEG time-series data carry the properties embedded in the EEG data \citep{Lacasa2008}, characterization of these networks across subjects may highlight how these brain functional networks work and organize themselves. Each node in the constructed networks corresponds to a time step of the EEG time series, which is the resultant of the algebraic sum of all possible interacting neuronal signals (including all possible coupling ranges) at that time step. Hence, the nodes in the networks can be thought of as representations of the interacting neurons at various cortical regions of the brain at various time steps, collected by the EEG probes as signals. Further, the functional networks (FNs) thus constructed from the EEG data of the subjects, recorded while performing a visual task, exhibit well-defined clusters of represented neurons, with slight variation in their sizes and properties due to the different paradigms (S1, S2 match and S2 nomatch) (Fig.~2B column $a$). These FNs of subjects exposed to a specific visual stimulus task reveal the dynamics of the neuronal functional patterns specific to the visual cortex.\\
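A minimal implementation of the natural visibility criterion of \citep{Lacasa2008} (our own $O(T^{2})$ sketch, adequate for short EEG segments) is:
\begin{verbatim}
import numpy as np
import networkx as nx

def natural_visibility_graph(x):
    """Natural visibility graph (Lacasa et al., 2008): time points a and b
    are linked iff every intermediate point lies strictly below the
    straight line joining (a, x[a]) and (b, x[b])."""
    x = np.asarray(x, dtype=float)
    G = nx.Graph()
    G.add_nodes_from(range(len(x)))
    for a in range(len(x) - 1):
        G.add_edge(a, a + 1)     # consecutive samples always see each other
        slope_max = -np.inf      # steepest line of sight from a so far
        for b in range(a + 2, len(x)):
            slope_max = max(slope_max, (x[b - 1] - x[a]) / (b - 1 - a))
            if (x[b] - x[a]) / (b - a) > slope_max:
                G.add_edge(a, b)
    return G

# e.g. G = natural_visibility_graph(eeg_channel) for one channel/trial
\end{verbatim}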
{\noindent}The connectivity plots and topological characterization of the EEG-based FNs show the clear emergence of clusters/modules/domains of varied sizes in all subject cases (S1, S2 match and S2 nomatch) (Fig.~2B a). Similar to our theoretically obtained results (Fig.~1), the topological properties of these networks follow power-law behaviour, indicating the fractal nature of the system (we present the results for only 10 trials of a single subject; the same holds for all trials of the subjects taken from the UCI database for analysis). More specifically, the properties of our neuron activity pattern model near the critical temperature are closely similar to the properties of the FNs constructed from the EEG data. These topological parameters can then be represented by,
\begin{eqnarray}
\bf{\Gamma(k)}\sim k^{F};~~\bf{\Gamma(k)}=\left[\begin{matrix}P(k)\\C(k)\\C_n(k) \end{matrix}\right],~~F=\left[\begin{matrix}\gamma\\ \alpha\\ \beta\end{matrix} \right]\rightarrow\left[\begin{matrix}-2.16-2.31\\-0.73-0.8\\0.21-0.23 \end{matrix}\right]
\end{eqnarray}
The positive value of the exponent $\beta$ in $C_n$ indicates the assortativity property of the network (Methods), which enables significant hubs in the network to coordinate among themselves \citep{Colizza2006}. Further, the emerged modules/clusters follow hierarchical features and hence have significant correlations among them (Methods). For these two reasons, these FNs most likely exhibit \textit{rich-club} formation, which is consistent with experimental reports on the brain \citep{teller2014}. This means that the rich-club formation of significant hubs in the hierarchically self-organized neuronal system probably acts to stabilize the system at both local and global levels.
{\noindent}Our interpretation professes that a slow and weak signal (say at $T < T_{c}$) propagates among the cluster of neurons, creating a pre-conscious state, and a perturbation (signal) then provides the optimal synaptic strength ($T_{c}$) that drives the system towards a functional cognitive state. (This could make sense of the condition in which many processes or thoughts pass through our mind in an unconscious state and an external stimulus is required to attain the conscious state.) The $C(k)$ and $P(k)$ distributions are found to be in accordance with the brain functional connectivity network attributes based on the experimental data reported in \citep{Eguiluz2005a,Sporns2004}. The results shown in \citep{Eguiluz2005a} reflect the functional network attributes while listening to music and finger tapping. The complexity of functional brain networks has been classified as scale-free by considering spatio-temporal neural activity recorded through neuro-electrical and imaging techniques \citep{Beggs2003}. From this, we can affirm the validity of our model approach in characterizing the emerging FCPs formed in the brain while a specific cognitive task is performed \citep{ThomasYeo2015}. Thus, the network representations of nodes at $T\simeq T_{c}$ for long, critical and short-range interactions assure the self-organization of nodes into an FCP that exhibits scale-invariant and hierarchical organization. This supports our understanding that the self-organization of the interacting neurons is maintained with the emergence of FCPs for disparate ranges at near-criticality \citep{Deco2012}. \\
\begin{figure}
\label{fig3}
\begin{center}
\includegraphics[height=12cm,width=16cm]{Fig3.eps}
\caption{\textbf{Comparative analysis of network attributes: }The model-based
FCPs have been analysed to mark the near-critical phase transition
in comparison with the EEG data based FNs. a) The closeness centrality,
$C_{C}(k)$, versus degree (k) plot with degree exponent $\delta$.
b) The betweenness centrality, $C_{B}(k),$ plot with degree exponent
$\lambda$. c) The eigen-vector centrality, $C_{E}(k)$, plot with
degree exponent $\epsilon$. d) The rich-club coefficient, $R_{C}(k)$,
plotted against $N_{k>k_{level}}$, i.e. the number of nodes with degree
(k) greater than a particular $k_{level}$. \textbf{Note}- The near-critical
FCPs show the right transition shift towards the FNs of the visual
task-based data. For a node of a given degree, the $C_{B}$ and $C_{E}$
values have increased, making its role more economical, though not much
change in the $C_{C}$ values is observed. The $R_{C}(k)$ plot shows a
power-law trend with a heavy-tailed distribution.}
\end{center}
\end{figure}
\subsection{Centrality and rich-club measures: hierarchical to scale-free transition}
{\noindent}The roles and characterization of the significant hubs in the brain networks, in which emerged modules and sparsely distributed hubs work in a self-organized manner, can be studied and compared using centrality measurements and rich-club analysis of the networks constructed from both the neuron activity pattern model and the EEG data. The centrality measures, as shown in Fig.~3, characterize the FCPs at below and near criticality for all coupling ranges and enable a fair comparison with the FNs generated from the EEG data of a visual stimulus. Since the closeness centrality $C_C$ of a node is estimated by the inverse of the average distance of the node to all remaining nodes \citep{Freeman1978}, a large value of $C_C$, corresponding to a small average path distance, indicates the significant importance of the node in regulating the network. $C_C$ as a function of $k$ follows a power law, $C_C(k)\sim k^{\delta}$, both in the proposed model network and in the network constructed from the EEG data, except that the value of $C_C$ in the EEG networks is comparatively smaller than in the model network (Fig.~3(a)). The behaviour of $C_C$ for the model network at the near-critical temperature, $T\sim T_c$, for the critical interaction range is closely similar to that of the EEG network: $C_C$ increases with $k$, indicating a significant role of high-degree hubs in regulating the brain network in terms of fast information processing of the neurons, possibly by coordinating the high-degree nodes in all the modules of the networks. On the other hand, short- and long-range correlations of the neurons again give more importance to the hubs, moving towards the scale-free phase, where hubs are more important than modules in controlling the networks. For higher-degree nodes the value is smaller at $T\simeq T_{c}$, which confirms the rareness of hubs in the network.\\
{\noindent}Betweenness centrality, another centrality measure derived from the number of paths passing through each node from the rest of the nodes in the network, quantifies the volume of traffic diffusing through each node \citep{Freeman1978,borgatti2005}; we find that it follows power-law behaviour, $C_B\sim k^\lambda$, in both the model and EEG data networks (Fig.~3(b)). Since $C_B$ increases as $k$ increases, high-degree nodes carry a high traffic of information and play a crucial role in controlling and stabilizing the network. In this case, the behaviour of $C_B$ for all neuron coupling regimes (short, critical and long range) closely mimics that of the EEG networks in a qualitative sense (indicated by the nearly parallel fitting lines through the data points). This nature of $C_{B}(k)$ characterizes the cluster at $T\simeq T_{c}$, with higher values for high-degree nodes, and thus ensures the greater significance of those nodes in fast information transmission across the network \citep{Bullmore2009}. Similarly, the eigenvector centrality, a measure of the influence of a node on all other nodes of the network, which can induce long-term traffic risk \citep{bonacich1972}, shows power-law behaviour, $C_E(k)\sim k^\epsilon$, for both the model and EEG networks, again in closely mimicking fashion (nearly parallel fitted lines). The eigenvector centrality, $C_{E}(k)$, further depicts the intensity of the most prominent nodes in the networks, and we find that the network is highly communicative at $T\simeq T_{c}$ at low cost. Hence, high-degree nodes have a significant capability to influence other nodes, as well as a higher risk of attack. These high-degree nodes could generally be the target nodes of any brain disorder (disease, functional disorder, etc.).\\
{\noindent}The behaviour of all three centrality measures ($C_C$, $C_B$, $C_E$) indicates the significantly important roles of high-degree nodes in both the model and EEG-based networks in terms of fast signal processing, regulation of information traffic and influence on other nodes. This is a clear signature of rich-club formation in these networks, indicating the significant role of the high-degree hubs in regulating the networks, although removal of these hubs would not cause network breakdown. Our results also show the presence of high-degree nodes in long- as well as short-range connections. This is in accordance with the experimental results of \citep{Nigam2016}, who found high-degree neurons through direct recording from up to 500 neurons in slice cultures of mouse somatosensory cortex. In all centrality measures of the model-based FCPs at $T\sim T_c$ and $T<T_c$, the contribution of long-range neuronal coupling is higher than that of the critical and short-range interactions, indicating a significantly more important role of long-range interactions in FCPs. One possible reason is the enhanced propagation of localized properties/perturbations, driven by short-range neuronal interactions, to the global level, allowing better cross-talk that keeps the network stabilized in a self-organized manner.\\
{\noindent}The plot in Fig.~3d quantifies the rich-club coefficient, $R_{C}$, as a function of $N_{k>k_{level}}$, the number of nodes with degree (k) greater than a particular $k_{level}$, where $k_{level}=1,\ldots,\max(k)$ (see Methods). This reveals the presence of hubs (high-degree nodes) connecting modules/communities patterned in a rich-club arrangement, maximizing communication at low cost while maintaining the robustness of the network. The comparative analysis shows that the FCPs at below-critical strength are far from the behaviour of the empirical data networks used here. At near-critical strength, however, the FCPs shift towards the behaviour of the generated FNs. The degree exponents of the various network attributes are given in Table~1 for all coupling ranges and strengths, along with those of the EEG data. Apart from the visible differences in the degree exponents of our model and EEG clusters, the trends they follow in the network attributes are quite similar.
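The centrality and rich-club quantities of Fig.~3 are available in standard graph libraries; a sketch of our own using networkx:
\begin{verbatim}
import networkx as nx

def hub_profile(G):
    """Centrality and rich-club quantities compared in Fig. 3."""
    cc = nx.closeness_centrality(G)
    cb = nx.betweenness_centrality(G)
    ce = nx.eigenvector_centrality(G, max_iter=1000)
    # phi(k): edge density among the nodes of degree > k (unnormalized here)
    rc = nx.rich_club_coefficient(G, normalized=False)
    return cc, cb, ce, rc
\end{verbatim}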
\subsection{Modular organization: is the near-critical state active?}
{\noindent}Modularity signifies the formation of community or modular structure in the network \citep{Newman2006}. The network centrality measures characterize the important nodes that determine the functional efficiency of the network; however, defining the role of a node by its position within a module and its participation in inter-linking modules is an effective way to portray complex networks \citep{Guimera2005}. In Fig.~4A(a), we compare the community affiliation structure, ($C_{i}$), of the long, critical and short-ranged coupling networks with the FNs of the EEG data. In column Fig.~4A(b), the within-module degree, $Z_{i}$, versus participation coefficient ($P_{i}$) plots relate the organization of influential nodes to their intra- and inter-module links. The results show that the near-critical FCPs display good coherence with the EEG data. In Fig.~4B, based on the categorization of nodes in \citep{Guimera2005}, we show the transition in the role and position of module nodes in the near-critical modular networks. The $P_{i}$ versus degree $k$ plots for the distinguished coupling ranges at below-critical and near-critical synaptic strengths are shown in columns Fig.~4B(a) and (b), respectively. Nodes are labelled as module hubs and non-hubs with $Z_{i}\geq2.5$ and $Z_{i}<2.5$, respectively. Further segregation by their $P_{i}$ values describes nodes as: $R1$ ultra-peripheral non-hubs ($P_{i}\leq0.05$), with all edge connections in the same module; $R2$ peripheral non-hubs ($0.05<P_{i}\leq0.62$), with mostly intra-modular edges; $R3$ connector non-hubs ($0.62<P_{i}\leq0.80$), with many inter-modular edges; $R4$ kinless non-hubs ($P_{i}>0.80$), with homogeneous sharing of connections among modules; $R5$ provincial hubs ($P_{i}\leq0.30$), with mostly intra-modular connections; $R6$ connector hubs ($0.30<P_{i}\leq0.75$), with a majority of inter-modular associations; and $R7$ kinless hubs ($P_{i}>0.75$), with homogeneous associations among all the modules \citep{Guimera2005}. In our results of Fig.~4B, the transition from the below- to the near-critical regime leads to the occurrence of non-hub connector nodes, $R3$ (shown in green), in all coupling ranges. The early emergence of such hub nodes in long-range coupling could be due to its fast neuronal dynamics. The near-critical FCPs also show a transition to an optimal hierarchical organization of connector hub nodes, $R6$, and peripheral non-hub nodes, $R2$, with respect to degree for all coupling ranges, validating their functional modularity. The FNs of the EEG data (shown in violet), being complex functional brain networks, match favourably with our model FCPs in the near-criticality regime. Further, in Fig.~5 we analyse the probability of the degree distribution of nodes within a particular community, as constructed through the community affiliation Louvain method \citep{Blondel2008}. The modules generated from the networks of below- and near-critical FCPs are shown in Fig.~5(a) and (b), in a different colour for each module. For comparative analysis, the communities underlying the EEG-based networks are shown in column (c), with quite a similar trend among the various trials (shown in respective colours) of a single subject. The below- to near-critical transition signifies an increase in the number of communities.
Also, the near-critical communities at short range organize themselves towards a small-world architecture and tend to become hierarchical and scale-free at long range, as depicted through their degree distributions. This could be a signature for differentiating local and global coupling mechanisms in functional brain networks, which may help tackle the specificity of brain disorders.
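The cartography of Fig.~4 can be reproduced along the following lines (a sketch of our own; we substitute networkx's greedy modularity communities for the Louvain routine used in the paper):
\begin{verbatim}
import numpy as np
import networkx as nx
from networkx.algorithms import community

def cartography(G):
    """Within-module degree Z_i and participation coefficient P_i
    (Guimera & Amaral, 2005) for a modularity-based partition."""
    comms = list(community.greedy_modularity_communities(G))
    module = {v: m for m, nodes in enumerate(comms) for v in nodes}
    # within-module degree of each node
    kin = {v: sum(module[u] == module[v] for u in G[v]) for v in G}
    Z, P = {}, {}
    for nodes in comms:             # standardize kin within each module
        vals = [kin[v] for v in nodes]
        mu, sd = np.mean(vals), np.std(vals)
        for v in nodes:
            Z[v] = (kin[v] - mu) / sd if sd > 0 else 0.0
    for v in G:
        k = G.degree(v)
        if k == 0:
            P[v] = 0.0
            continue
        kms = np.zeros(len(comms))  # links of v into each module
        for u in G[v]:
            kms[module[u]] += 1
        P[v] = 1.0 - float(np.sum((kms / k) ** 2))
    return Z, P
\end{verbatim}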
\begin{figure}
\label{fig4}
\begin{center}
\includegraphics[height=20cm,width=16cm]{Fig4.eps}
\caption{\textbf{Community structure in networks of the model and EEG data: A)}
The model-based FCPs for the long, critical and short-ranged coupling
has been compared with the EEG data based FNs (shown in violet). a)
The community affiliation index, ($C_{i}$), versus degree (k) plots.
b) The within-module degree, $Z_{i}$, versus participation coefficient,
$P_{i}$, plots exhibit the presence of module hubs based on the classification
scheme defined as R1- ultra peripheral non-hubs, R2- peripheral non-hubs,
R3- connector non-hubs, R4- kinless non-hubs, R5- provincial hubs,
R6- connector hubs, R7- kinless hubs. \textbf{B) }The classification
of module nodes, based on the $Z_{i}$ and $P_{i}$ values, and their
organization depicted through the $P_{i}$ versus degree (k) plots
in the model generated FCPs for a) Below critical synaptic strength
at T = 1.0 and b) Near critical strength of their respective $T_{C}$.
The presence of various hub and non-hub nodes has been coded in respective
colors shown in the figure. \textbf{Note- }The congruency of the EEG
data (shown in violet) with the near-critical dynamics (shown in red)
attests to the robustness of the model. There is an emergence of R3
non-hub connector nodes and R7 kinless hubs in the near-critical FCPs.
Also, the organization of R2 peripheral non-hub nodes and R6 connector
hub nodes shows the right hierarchy with respect to degree upon the
transition to near-criticality.}
\end{center}
\end{figure}
\begin{figure}
\label{fig5}
\begin{center}
\includegraphics[height=12cm,width=16cm]{Fig5.eps}
\caption{\textbf{Analysis of Probability of degree-distributions in modules
of networks: }In a) and b) columns for long, critical and short-range
coupling we have shown probability of degree distribution, $P_{k_{m}}$,
of each node degree $k$ in module $m$ of the below and near-critical
FCPs, respectively. Here, different colors represent different communities.
c) The column shows $P_{k_{m}}$ versus $k_{m}$ plots of the communities
underlying the EEG data based networks for the three paradigms S1,
S2 match and S2 nomatch. Here, different colors represent communities
of various trials of a single subject. Note- The modules of near-critical
FCPs at short range follow a Poisson distribution that transitions
to a scale-free organization at long-range coupling.}
\end{center}
\end{figure}
\subsection{Multi-fractal signature due to complex brain organization}
{\noindent}Fractal characterization of a complex temporal signal is carried out using multi-fractal detrended fluctuation analysis (MFDFA) \citep{Kantelhardt2002, Song2005}. The power-law scaling behaviour exhibited by the networks at near-critical strength characterizes approximately self-similar temporal pattern formation, which in turn indicates the existence of a fractal dimension describing the complex organization and the role of short- and long-range correlations of the components of the system. The comprehensive methodology for $MFDFA$ in \citep{Song2005, Ihlen2012} signifies the presence of multi-fractality in complex biological systems, reflected in the time-series data of the system. We study the presence of multi-fractality in the self-organizing FCPs at below- and near-critical strengths and compare it with the EEG-generated FNs. In Fig.~6 plot (a), the overall root-mean-square fluctuation function, $F_{q}$, for $q$ in the range $-5$ to $+5$, shows the variation of fast and slowly evolving fluctuations over segments of scale $s$ in the time series, for each coupling range (long, short and critical) at $T<T_{c}$ and $T\simeq T_{c}$, and for the EEG data, in the respective colours. Scales in the range 16 to 1024 are used to calculate $F_{q}$ at each scale $s$, the same for both the model and the empirical EEG data. The slope of this scaling function $F_{q}$ determines the q-order Hurst exponent $H_{q}$. $H_{q}$ is an important parameter for characterizing multi-fractality, as it captures the scaling behaviour of small and large fluctuations in the time-series data; its dependence on $q$ validates the multi-fractal nature. In Fig.~6(b), negative $q$ exhibits the scaling trend of small fluctuations and positive $q$ represents the scaling behaviour of large fluctuations in the data segments at a particular scale. Fig.~6(b) shows variation mainly in the scaling nature of small fluctuations at negative $q$'s and an almost similar trend for large fluctuations at positive $q$'s. The EEG case shows the result of one subject each for S1, $S2_{match}$ and $S2_{nomatch}$, whereas the model data are the averaged result of 10 time-series trials for each of the three cases (long, short and critical). The $H_q$ of small and large fluctuations in the model data may be affected by this averaging; however, we are concerned with the similarity of the scaling trend of the near-critical transition to that of the EEG data, rather than with the below-critical case. Fig.~6(c) shows the q-order mass exponent $t_{q}$ versus $q$, exhibiting the variation of $t_q$ with respect to large ($+ve$ $q$) and small ($-ve$ $q$) fluctuations. Fig.~6(d), partitioned into two subplots, shows the generalized fractal dimension $D_q$ versus the singularity exponent $h_q$, for below criticality in the left subplot and for near criticality and the EEG data in the right subplot. The mass exponent $t_{q}$ is the factor used to define the fractal dimension.
In Fig.~6(d), the clear transition in $h_{q}$ and $D_{q}$ from the below- to the near-critical state exhibits the presence of multiple scaling exponents, represented by the arc of the multi-fractal spectrum for the long, critical and short coupling ranges, signifying an increase in complexity. The near-critical multi-fractal spectrum curves show high congruence with the spectrum curves of the FNs of the visual task-based EEG data.
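A compact sketch of the MFDFA computation (our own implementation following \citep{Kantelhardt2002}; forward segmentation only, first-order detrending):
\begin{verbatim}
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """MFDFA (Kantelhardt et al., 2002): fluctuation functions F_q(s),
    q-order Hurst exponents h(q) and mass exponents t(q) = q h(q) - 1."""
    y = np.cumsum(x - np.mean(x))           # profile of the series
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        nseg = len(y) // s
        t = np.arange(s)
        # mean squared residual of a polynomial detrend in each segment
        F2 = np.array([
            np.mean((y[v*s:(v+1)*s]
                     - np.polyval(np.polyfit(t, y[v*s:(v+1)*s], order),
                                  t)) ** 2)
            for v in range(nseg)])
        for i, q in enumerate(qs):
            if q == 0:                      # logarithmic average for q = 0
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq[i, j] = np.mean(F2 ** (q / 2.0)) ** (1.0 / q)
    # h(q) from the slope of log F_q(s) versus log s
    hq = np.array([np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0]
                   for i in range(len(qs))])
    return Fq, hq, qs * hq - 1

scales = 2 ** np.arange(4, 11)              # 16 ... 1024, as used in the text
qs = np.arange(-5.0, 5.5, 0.5)
\end{verbatim}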
\begin{figure}
\label{fig6}
\begin{center}
\includegraphics[height=12cm,width=16cm]{Fig6.eps}
\caption{\textbf{Multi-fractal detrended fluctuation analysis (MFDFA)
:} The technique has been applied to the model-based FCPs at below
and near-critical synaptic strengths and compared with the EEG-based
FNs to determine multi-fractal nature. a) The q-order RMS fluctuation
function $F_{q}$versus scale $s,$ quantifies the scaling trend in
fluctuations. b) The q-order Hurst exponent $H_{q}$, determines the
scale of the fluctuation function $F_{q}$. c) The q-order mass exponent
$t_{q}$ d) The q-order singularity exponent $h_{q}$ versus Dimension
$D_{q}$ exhibits the multifractal spectrum. \textbf{Note-} The transition
of FCPs at near-critical state is clear in the $D_{q}$ versus $h_{q}$
plot that exhibits a perfect arc of multifractal spectrum around the
range of exponents and complements with the FNs of a visual task-based
EEG data.}
\end{center}
\end{figure}
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{\textbf{Degree exponents:}} & \ensuremath{\alpha} & \ensuremath{\beta} & \ensuremath{\gamma} & \ensuremath{\delta} & \ensuremath{\epsilon} & \ensuremath{\lambda} & $\kappa$\tabularnewline
\hline
\multirow{3}{*}{FCPs ($T < T_{c}$)} & Long & -0.32 & -0.03 & 0.1 & 0.1 & 1.04 & 1.99 & -0.15\tabularnewline
\cline{2-9}
& Critical & -0.15 & -0.01 & 0.42 & 0.2 & 0.99 & 1.98 & -0.08\tabularnewline
\cline{2-9}
& Short & -0.34 & -0.07 & 0.14 & 0.26 & 0.96 & 2.0 & -0.10\tabularnewline
\hline
\multirow{3}{*}{FCPs (T $\simeq$ $T_{c}$)} & Long & -0.58 & -0.34 & -1.04 & 0.0 & 0.98 & 2.16 & -1.11\tabularnewline
\cline{2-9}
& Critical & -0.4 & -0.07 & -0.6 & 0.08 & 0.88 & 1.97 & -0.72\tabularnewline
\cline{2-9}
& Short & -0.35 & 0.0 & -0.6 & 0.01 & 0.98 & 1.93 & -0.78\tabularnewline
\hline
\multirow{3}{*}{EEG FNs} & S1 & -0.8 & 0.21 & -2.31 & 0.10 & 1.6 & 2.92 & -0.7\tabularnewline
\cline{2-9}
& S2 match & -0.73 & 0.23 & -2.1 & 0.10 & 1.8 & 2.34 & -0.78\tabularnewline
\cline{2-9}
& S2 nomatch & -0.73 & 0.21 & -2.1 & 0.11 & 1.56 & 2.15 & -0.87\tabularnewline
\hline
\end{tabular}
\caption{The measure of degree exponents for various network theory attributes
(as mentioned in section 4.3).}
\end{table}
\section{Conclusion}
We have made an attempt to characterize the cortical neuronal circuitry that emerges in the brain during a cognitive assignment. In our model, the two-state dynamics shows ordered pattern dynamics with the emergence of functional clusters when it evolves in the near-critical regime for all interaction ranges. The emergent network connectivity at near-critical strength, which we call FCPs, symbolizes the emergence of functional connectivity in the brain. We have carried out an extensive topological characterization of the FCPs and compared them with task-specific functional brain networks of human subjects. In this comprehensive work, we have validated our insights into brain functionality by characterizing properties of self-organized functional connectivity embedded in the time-series data of task-specific activity. We have analyzed the combined effect of both coupling strength and coupling range as drivers defining the interactions in our model.
Through graph-theory measures, the effect of coupling range on the network of the neural population shows a clear transition in the dynamic state of connectivity at the near-critical strength ($T\simeq T_{c}$). This illustrates the transition from a randomly connected network to a hierarchical network with scale-free characteristics and the presence of hubs, as shown by the heavy-tailed degree distribution plots. The probability of degree distribution $P(k)$ and the clustering coefficient $C(k)$ plots clearly explain the emergent complex dynamics of FCPs. The dynamics of the networks at $T\simeq T_{c}$ for long, critical and short-range coupling is found to be congruent with the emergent functional connectivity in the brain while performing a specific cognitive task. Self-organization in the brain leads to the emergence of FCPs in the form of hierarchically organized functional modules, as reflected in the power-law nature of the rich-club organization and the centrality measures of the constructed networks \citep{ThomasYeo2015, Meunier2010}. The presence of network centrality and rich-club behaviour, in a trend similar to the EEG-based FNs, ensures the presence of coordinating hubs or dense modular clusters of neurons for effective communication in the model-based FCPs. This makes the functional pattern of neurons robust enough to maintain connectivity while wiring/rewiring among neurons, as the dynamic pattern of connectivity among neurons affects the cost of information transmission \citep{DeDomenico2016}. The near-critical FCPs show the presence of $R6$ connector hubs and $R7$ kinless hubs (Fig.4B), with inter-modular connections to broadcast information. The FCPs of neurons exhibit a hierarchical and scale-free organization with distinct community/pattern formation, where coordination of the hubs via cross-talking communities in the network is one of the key factors of information processing at both local and global levels of neuronal organization. In our study, it is evident that this mechanism of FCP organization is corroborated by local and global perturbations, which can propagate throughout the network via the various ranges of interaction (short, critical and long-range). This manifests the fractal properties of these FCPs, which further exhibit multifractal spectrum formation at near-critical strength. Thus, the near-critical transition state marks the onset of the cognitive state through self-organizing dynamics. \\
{\noindent} Brain dynamics is known to function at near-criticality. Recent studies depict the non-linearity of functional brain dynamics in the near-critical regime \citep{Tagliazucchi2016, Cocchi2017, Ezaki2020}. Our results on network-theory attributes, fractal analysis and functional cartography have confirmed the generation of a scale-free hierarchical network (FCPs) only at the near-critical phase transition. We draw the following insights regarding the generation of these FCPs. The dynamics of these FCPs could generate diverse cognitive states in the brain. These cognitive states could be thought of as a collection of meta-stable states in a system near dynamic equilibrium. The near-critical regime could accommodate a set of dynamic functional states whose synchronous permutational amalgamation could lead to cognitive behaviours. Also, from our study of near-critical dynamics for long-, critical- and short-ranged functional patterns, we interpret that criticality is a range and not just a point, as we found that the long-ranged dynamics requires a higher coupling strength to attain criticality than the short-ranged one. This explains how an increase in synaptic strength due to multiple stimulus attempts would add more long-range connections to the complex functional pattern in the brain. The dynamicity of long-range connections defines task-specific functional connectivity at the global level \citep{Park2013}. Also, such analytical modelling of functional networks of neurons based on the inter-neuronal coupling range could further assist in the extensive assessment of brain disorders characterized by aberrant functional connectivity. The research in \citep{goodarzinick2018} analyzed a 2D Ising model and showed that the topology of functional networks of neuronal activity at criticality is more robust against structural defects. However, in brain disorders such as ASD, irregularity in the structural connectivity causes less intricate long-ranged and denser short-ranged functional connectivity \citep{Conti2017}. Moreover, the extent to which structural connectivity changes are reflected in the topology of the functional network cannot be determined to date and remains debatable. Nevertheless, the research in \citep{goodarzinick2018} strengthens our belief in the critical phase-transition state as the emerging functional state in the brain. We believe that our approach towards understanding the generation and characterization of functional patterns of neuronal activity in the brain cortex provides new understanding of brain functional irregularities and also motivates the modelling of functional connectivity in brain disorders.\\
{\noindent}An obvious limitation is that our model dynamics is binary and exhibits only firing/non-firing states of a neuron or group of neurons at a single lattice site. It overlooks the biochemical fluxes of a nerve cell and instead focuses on the global connectivity state. In our study, we have considered a $64\times 64$ lattice system for our model, while the empirical data come from 64 electrodes over the brain; this is admittedly a large gap. However, brain activity is known to follow a statistical fractal nature, which allows its characteristic properties to remain similar across system sizes \citep{Franca2018}. Also, we have shown analytically that with an increase in system size the critical temperature will increase in the long-range regime, but this would not affect the near-critical dynamics. In spite of the huge complexity gap, we have obtained significant congruence in the characteristic power-law trends of the network topology between the task-specific functional brain networks and the FCPs, at the near-critical regime only. This research strongly implies that the functional neuronal system supports far-from-equilibrium dynamics upon receiving a stimulus, leading to a second-order phase transition, and that self-organization is the key event that generates functional patterns in the brain cortex. This work gives very useful insights into the mechanistic view of emerging brain functional connectivity; there is still much more to explore, and modelling through such a simplistic model aids in understanding the concepts through a coarse lens that can surely be investigated down to microscopic levels. \\
\section{Methodology}
\subsection{Network Construction:}
{\noindent}The functional connectivity of a neuronal population in the brain can be examined through a binary connectivity matrix, where '1' represents a connection between two nodes and '0' its absence \citep{sporns2007}. We have computed the functional connectivity based on the statistical interdependence of neuronal time series, using the Pearson correlation coefficient to measure inter-neuronal correlations \citep{sporns2007}. The Pearson correlation coefficient $r_{xy}$ defines the correlation between nodes $x$ and $y$ using the formula,
\begin{eqnarray}
r_{xy}=\frac{\sum(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum(x_{i}-\bar{x})^{2}}\sqrt{\sum(y_{i}-\bar{y})^{2}}}
\end{eqnarray}
where $x_{i}$ and $y_{i}$ represent the temporal data of the two spins $x$ and $y$ over the total number of Monte Carlo steps. The binary undirected matrix is transformed into a graph $G(S,E)$ consisting of the node set S = \{$s_{i}\mid i\:\in1$, ..., $N$\}, where $N$ is the total number of spin sites, i.e., 4096 in our study. Then, based on the functional correlation of spins, the edge list is defined as E = $\{e_{ij}\mid(s_{i},s_{j})\in S\times S\}$. We generated the binary connectivity matrix by averaging the data of 10 time series from our model for each coupling range and global synaptic strength (temperature). We applied a common threshold for defining connections among the spins (neurons): we calculated the average correlation value in each case and then averaged again to obtain a unique threshold, i.e., 0.2 for our model. This adjacency matrix is used to construct undirected networks for the long, short and critical coupling ranges ($n$) at different overall synaptic strengths ($T$), employing the NetworkX module in Python; a minimal sketch of this construction is given below. As we are studying the emergent behaviour of our model-generated functional patterns, we also constructed the functional brain cortical networks using the visibility-graph approach on the EEG time-series data.
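As a concrete illustration (a sketch rather than the exact analysis code), the construction above can be reproduced with NumPy and NetworkX; the random placeholder data and variable names are ours, while the threshold of 0.2 and the node count of 4096 follow the description above.
\begin{verbatim}
import numpy as np
import networkx as nx

# Placeholder temporal data: N spin sites observed over T Monte Carlo steps.
# In the actual study this is the (averaged) model or EEG time-series data.
N, T = 4096, 5000
data = np.random.randn(N, T)

# Pearson correlation r_xy between every pair of spin time series.
corr = np.corrcoef(data)

# Binarize with the common threshold (0.2 in our study); whether to threshold
# the signed or the absolute correlation is a modelling choice (signed here).
adjacency = (corr > 0.2).astype(int)
np.fill_diagonal(adjacency, 0)   # no self-loops

# Undirected graph G(S, E) from the binary connectivity matrix.
G = nx.from_numpy_array(adjacency)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
\end{verbatim}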
\subsection{Visibility graph approach:}
{\noindent}The EEG data have been taken from the UCI database, where the subjects were shown three types of visual stimuli: S1 (a single image), S2\_match (two similar images) and S2\_nomatch (two dissimilar images); recordings were made with 64 electrodes at a sampling frequency of 256 Hz for 1 second in healthy human subjects \citep{Beggs2012}. We applied the visibility-graph approach to the EEG time-series data to construct the functional brain networks (FNs) generated while performing the visual task \citep{Lacasa2008}. We took 10 trials for each type of stimulus. In this approach, each time or neuronal state in the EEG time series (or in the model-based neuron-activity-pattern time series), which is the resultant signal of the interacting neurons in the brain at that time, is taken as a node of the constructed network. A connection between any two time or neuronal states with data values $n_{a}$ and $n_{b}$ at time points $t_{a}$ and $t_{b}$, respectively, is defined if every intermediate time
step $n_{c}(t_{c})$ satisfies the following condition,
\begin{eqnarray}
\frac{n_{b}-n_{c}}{t_{b}-t_{c}}>\frac{n_{b}-n_{a}}{t_{b}-t_{a}}
\end{eqnarray}
According to the algorithm, the extracted network is connected (each node is linked at least to its neighbors), undirected, and invariant under affine transformations of the series data. The characteristic properties of the time series are delineated in the form of the resultant network; a naive implementation of the criterion is sketched below.
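The sketch below checks the visibility condition for every intermediate sample and scales as $O(l^{3})$, which is acceptable for the short EEG segments used here; it is an illustration, not the exact analysis code.
\begin{verbatim}
import numpy as np
import networkx as nx

def natural_visibility_graph(series):
    """Natural visibility graph of a 1-D time series (Lacasa et al., 2008).

    Node i corresponds to the sample (t_i, n_i); nodes a and b are linked
    if every intermediate sample c satisfies the visibility condition above.
    """
    n = np.asarray(series, dtype=float)
    t = np.arange(len(n))
    G = nx.Graph()
    G.add_nodes_from(range(len(n)))
    for a in range(len(n)):
        for b in range(a + 1, len(n)):
            visible = True
            for c in range(a + 1, b):
                # (n_b - n_c)/(t_b - t_c) > (n_b - n_a)/(t_b - t_a)
                if (n[b] - n[c]) / (t[b] - t[c]) <= \
                   (n[b] - n[a]) / (t[b] - t[a]):
                    visible = False
                    break
            if visible:
                G.add_edge(a, b)
    return G

# Example on a short synthetic signal (stands in for one EEG channel).
G_vis = natural_visibility_graph(np.random.randn(256))
\end{verbatim}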
\subsection{Network theory attributes:}
{\noindent}Network theory is a widely applied theoretical approach for characterizing and studying complex brain networks \citep{Sporns2002a}. We list below the important attributes used in our analysis.\\
{\noindent}\textbf{Degree distribution}\\
The probability that a node has degree $k$ in a network $G(N,E)$, where $N$ and $E$ are the sets of nodes and edges of the network, is given by $P(k)=\frac{n_k}{N}$, where $n_k$ is the number of nodes having degree $k$ and $N$ is the size of the network \citep{Newman2009}. {The degree distribution, $P(k)$, for the Erd$\ddot{o}$s-R\'{e}nyi random network follows a Poisson distribution and deviates from it for small-world networks}, whereas for scale-free and hierarchical networks it follows a power law, $P(k)\sim k^{-\gamma}$, with $2<\gamma<3$ \citep{albert2002,barabasi2004}. \\
{\noindent}\textbf{Clustering co-efficient}\\
The clustering coefficient of the $i$th node, which characterizes how strongly the node's neighborhood is interconnected, can be estimated as the ratio of the number of edges among its nearest neighbors to the total possible number of such edges for degree $k_i$: $C(k_i)=\frac{E_i}{^{k_i}C_2}$, where $E_i$ is the number of connected pairs among the nearest neighbors of the $i$th node and $k_i$ is its degree \citep{Newman2009}. $C(k)\rightarrow constant$ for scale-free, random and small-world networks, whereas for hierarchical networks $C(k)\sim k^{-\alpha}$, with $\alpha\sim 1$ \citep{albert2002,barabasi2004}.\\
{\noindent}\textbf{Neighborhood connectivity}\\
It is characterized by the mean connectivity of the nearest neighbors of each node in a network, given by $C_n(k)=\sum_{u}uP(u|k)$, where $P(u|k)$ is the probability that a link of a node with degree $k$ points to a node with connectivity $u$ \citep{maslov2002,pastor2001}. The hierarchical network follows $C_n(k)\sim k^{-\beta}$, with $\beta\sim 0.5$ \citep{pastor2001}. However, if $C_n(k)\sim k^{\beta}$ (positive $\beta$), then the network exhibits an assortative nature, indicating the possibility of coordinating high-degree hubs in the network.\\
{\noindent}\textbf{Betweenness centrality}\\
Betweenness centrality of a node $w$ measures the extent to which a node $i$ needs $w$ to reach $j$ via shortest paths, and is given by $C_B(w)=\sum_{i,j;i\ne j\ne w}\frac{d_{ij}(w)}{d_{ij}}$, where $d_{ij}(w)$ is the number of geodesic paths from node $i$ to $j$ passing through $w$, and $d_{ij}$ is the total number of geodesic paths from node $i$ to $j$ \citep{Freeman1978}. It characterizes the amount of information traffic diffusing from each node to every other node in the network \citep{borgatti2005}.\\
{\noindent}\textbf{Closeness centrality}\\
Closeness centrality of a node $u$ is characterized by the harmonic mean of the geodesic paths connecting $u$ and any other node in the network, $C_C(u)=\frac{n}{\sum_id_{ui}}$, where $d_{ui}$ is the geodesic path length between nodes $u$ and $i$, and $n$ is the total number of nodes connected to node $u$ in the network \citep{Canright2004}. It generally measures the rate of information flow in the network: a larger value of $C_C$ corresponds to shorter path lengths, and hence faster information processing in the network, and vice versa \citep{borgatti2005}.\\
{\noindent}\textbf{Eigen-vector centrality}\\
Eigenvector centrality of a node $w$ in a network, which indicates the spreading power of that node in the network, can be expressed as $C_E(w)=\frac{1}{\lambda}\sum_{i\in nn(w)}u_i$, where $nn(w)$ denotes the nearest neighbors of node $w$ and $\lambda$ is the eigenvalue of the eigenvector $u$ satisfying $Au=\lambda u$, with $A$ the adjacency matrix of the network \citep{Canright2004}. The principal eigenvector of $A$, which satisfies $Au=\lambda_{max}u$, is taken so that the eigenvector centrality scores are positive.\\
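In practice, these centrality measures can be obtained directly from NetworkX; the following usage sketch operates on the graph G constructed earlier.
\begin{verbatim}
import networkx as nx

# G is the functional network constructed above.
C_B = nx.betweenness_centrality(G)   # information-traffic share per node
C_C = nx.closeness_centrality(G)     # inverse mean geodesic distance
# Eigenvector centrality may need extra iterations on large sparse graphs.
C_E = nx.eigenvector_centrality(G, max_iter=1000)

# Candidate coordinating hubs: nodes with the highest eigenvector centrality.
hubs = sorted(C_E, key=C_E.get, reverse=True)[:10]
\end{verbatim}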
{\noindent}\textbf{Rich-club co-efficient}\\
Rich nodes are nodes having a large number of links in the network, and they tend to form a tight subgraph among themselves, which is referred to as rich-club formation in the network \citep{zhou2004}. This rich-club phenomenon can be quantified by the rich-club coefficient, defined as $R_C(k)=\frac{E_{>k}}{^{N_{>k}}C_2}$, where $E_{>k}$ is the number of edges remaining among the $N_{>k}$ nodes after removing nodes of degree less than $k$ \citep{Colizza2006}. It characterizes many important properties of the network, such as the information-traffic backbone and the mixing properties of the network. The rich-club organization analysis has been done to estimate the presence of connector hubs. The Brain Connectivity Toolbox has been used to compute the rich-club coefficient and the network centrality measures \citep{sporns2007,Rubinov2010}.\\
{\noindent}\textbf{Participation index}\\
{\noindent}The participation coefficient ($P_{i}$) signifies the distribution of the connections of a particular node $i$ among the different communities \citep{Guimera2005}, given by,
\begin{eqnarray}
P_{i}=1-\sum_{c=1}^{N}\left(\frac{k_{ic}}{k_{i}}\right)^{2},
\end{eqnarray}
where $k_{ic}$ is the number of connections made by node $i$ to nodes in module $c$ and $k_{i}$ is the total degree of node $i$. The $P_{i}$ value, ranging from 0 to 1, determines the distributional uniformity of the neuronal connections: higher values signify a more homogeneous allocation of links among all the modules.\\
{\noindent}The \textbf{within-module degree} $Z_{i}$ is another measure to quantify the role of a particular node $i$ in its module $c_{i}$. High values of $Z_{i}$ indicate more intra-community connections than inter-community ones, and vice versa \citep{Guimera2005}.
\begin{eqnarray}
Z_{i}=\frac{k_{i}-\bar{k_{c_{i}}}}{\sigma_{k_{c_{i}}}},
\end{eqnarray}
where $k_{i}$ is the number of connections of node $i$ to other nodes in its module $c_{i}$, $\bar{k_{c_{i}}}$ is the average of $k$ over all nodes in module $c_{i}$ and $\sigma_{k_{c_{i}}}$ is the standard deviation of $k$ in $c_{i}$.\\
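The two cartography measures can be computed together from the adjacency matrix and a community partition. The sketch below is our own illustration of the definitions of $P_i$ and $Z_i$ above, not the Brain Connectivity Toolbox code.
\begin{verbatim}
import numpy as np

def cartography(A, labels):
    """Participation coefficient P_i and within-module degree Z_i.

    A: binary undirected adjacency matrix (N x N numpy array);
    labels: length-N array of community labels (e.g. a Louvain partition).
    """
    A = np.asarray(A)
    labels = np.asarray(labels)
    k = A.sum(axis=1)                        # total degree k_i

    # P_i = 1 - sum_c (k_ic / k_i)^2 ; isolated nodes (k_i = 0) set to 0.
    P = np.ones(len(A))
    for c in np.unique(labels):
        k_ic = A[:, labels == c].sum(axis=1) # links of node i into module c
        P -= (k_ic / np.where(k > 0, k, 1)) ** 2
    P[k == 0] = 0.0

    # Z_i = (k_i^in - <k^in>_c) / sigma_c, computed module by module.
    Z = np.zeros(len(A))
    for c in np.unique(labels):
        m = labels == c
        k_in = A[np.ix_(m, m)].sum(axis=1)   # intra-module degree
        Z[m] = (k_in - k_in.mean()) / (k_in.std() + 1e-12)
    return P, Z
\end{verbatim}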
\noindent\textbf{Community detection method:}\\
{\noindent}A complex network structure can be partitioned into communities or modules, with fewer than expected connections among them. We have used the Louvain community method to obtain non-overlapping communities of the network \citep{Blondel2008}. This method has outperformed other methods in terms of both modularity and computational time \citep{Blondel2008}. To create a significant division of a network, the benefit function called modularity (Q) is defined as,
\begin{eqnarray}
Q=\frac{1}{W}\sum_{i,j=1}^{N}[A_{ij}-B_{ij}]\delta_{c_{i},c_{j}},
\end{eqnarray}
The modularity Q is maximized for a good partitioning of the graph $G(S,E)$ with $N$ total nodes. $A_{ij}$ and $B_{ij}$ are the actual and expected numbers of connections between nodes $i$ and $j$, $W=\underset{i,j}{\sum}A_{ij}$, $c_{i}$ and $c_{j}$ are the communities to which nodes $i$ and $j$ belong, and $\delta_{c_{i},c_{j}}$ equals 1 when nodes $i$ and $j$ fall in the same community and 0 otherwise. A usage sketch is given below.
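The Louvain partition and the corresponding modularity can be obtained with the \texttt{python-louvain} package, which implements \citep{Blondel2008}; the package and function names here are assumptions about the toolchain, not part of the method.
\begin{verbatim}
import community as community_louvain  # pip install python-louvain

# Non-overlapping communities of the functional network G.
partition = community_louvain.best_partition(G)   # dict: node -> community
Q = community_louvain.modularity(partition, G)    # modularity of the division
print(len(set(partition.values())), "communities, Q =", round(Q, 3))
\end{verbatim}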
\subsection{MFDFA analysis:}
The multi-fractal detrended fluctuation analysis (MFDFA) approach has been used to ascertain the fractality of dynamic patterns in non-stationary temporal data of complex biological systems \citep{Kantelhardt2002, Eke2002}. The statistical fractals generated by physiological time-series data show self-affinity, i.e., different scaling with respect to direction \citep{Eke2002}. The presence of multiple scaling exponents in a time series can be examined using the Matlab formulation of the MFDFA method \citep{Ihlen2012}. The parameters characterizing multifractality are the scaling function (F), the Hurst exponent (H), the mass exponent (t), the singularity exponent (h) and the dimension (D), as explained in \citep{Ihlen2012}. For a time-series signal $x_{j}$ of finite length \textit{l}, a random-walk-like profile is computed as the cumulative sum $X_{i}=\sum_{j=1}^{i}(x_{j}-\left\langle x\right\rangle)$, where $\left\langle x\right\rangle$ is the mean value of the signal and $i = 1,2, \dots, l$. The profile X is divided into $n_{s}=int(\frac{l}{s})$ non-overlapping segments of equal size \textit{s}. To avoid leftover short segments at the end, the counting is done from both ends, so that $2n_{s}$ segments are taken into account. This defines the scale (\textit{s}) used to estimate the local fluctuations in the time series. Thus, the overall RMS, F, for multiple scales can be computed using the equation,
\begin{eqnarray}
F^{2}(s,v)=\frac{1}{s}\sum_{i=1}^{s}\left\{ X[(v-1)s+i]-x_{v}(i)\right\} ^{2}
\end{eqnarray}
where $v$ = 1, 2, ..., $2n_{s}$ and $x_{v}(i)$ is the fitted trend in segment $v$. The q-order RMS fluctuation function further determines the impact of scale $s$ on large (positive $q$'s) and small (negative $q$'s) fluctuations, as follows,
\begin{eqnarray}
F_{q}(s)=\left\{ \frac{1}{2n_{s}}\sum_{v=1}^{2n_{s}}[F^{2}(v,s)]^{\frac{q}{2}}\right\} ^{\frac{1}{q}}
\end{eqnarray}
The q-dependent fluctuation function $F_{q}(s)$ for each scale (s) will quantify the scaling behaviour of the fluctuation function for each q,
\begin{eqnarray}
F_{q}(s)\thicksim s^{H_{q}}
\end{eqnarray}
where $H_{q}$ is the generalized Hurst exponent, one of the parameters that characterize multi-fractality through small and large fluctuations (negative and positive $q$'s) in the time series. The $H_{q}$ is related
to the q-order mass exponent $t_{q}$ as follows,
\begin{eqnarray}
t_{q}=qH_{q}-1
\end{eqnarray}
From $t_{q}$, the singularity exponent $h_{q}$ and the dimension $D_{q}$ are defined as,
\begin{eqnarray}
h_{q}=\frac{dt_{q}}{dq},\quad\quad D_{q}=qh_{q}-t_{q}
\end{eqnarray}
The plot of $D_{q}$ versus $h_{q}$ represents the multifractal spectrum of the time-series.
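To make the pipeline concrete, a compact NumPy sketch of the above steps is given below; it follows the equations directly (segments counted from both ends, polynomial detrending of order 1, the $q\rightarrow0$ case handled by a logarithmic average) and is meant as an illustration rather than a substitute for the Matlab formulation of \citep{Ihlen2012}.
\begin{verbatim}
import numpy as np

def mfdfa(x, scales, qs, order=1):
    """Minimal MFDFA of a 1-D signal, following the steps above."""
    x = np.asarray(x, dtype=float)
    X = np.cumsum(x - x.mean())              # random-walk profile X_i
    Fq = np.zeros((len(qs), len(scales)))
    for si, s in enumerate(scales):
        ns = len(X) // s
        # 2*ns segments: counted from the start and from the end
        segs = np.concatenate([X[:ns * s].reshape(ns, s),
                               X[-ns * s:].reshape(ns, s)])
        t = np.arange(s)
        F2 = np.empty(2 * ns)
        for v, seg in enumerate(segs):
            trend = np.polyval(np.polyfit(t, seg, order), t)
            F2[v] = np.mean((seg - trend) ** 2)
        for qi, q in enumerate(qs):
            if q == 0:                       # q = 0: logarithmic average
                Fq[qi, si] = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq[qi, si] = np.mean(F2 ** (q / 2)) ** (1.0 / q)
    # H_q from the log-log slope of F_q(s) against s
    Hq = np.array([np.polyfit(np.log(scales), np.log(Fq[qi]), 1)[0]
                   for qi in range(len(qs))])
    tq = qs * Hq - 1                         # mass exponent
    hq = np.gradient(tq, qs)                 # singularity exponent
    Dq = qs * hq - tq                        # multifractal spectrum
    return Fq, Hq, tq, hq, Dq

scales = 2 ** np.arange(4, 11)               # 16 ... 1024, as in Fig. 6
qs = np.linspace(-5, 5, 21)
Fq, Hq, tq, hq, Dq = mfdfa(np.random.randn(4096), scales, qs)
\end{verbatim}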
\section{Conflict of Interest Statement}
The authors declare no competing financial interests.
\section{Author Contributions}
R.K.B.S. and J.G. conceived the model. J.G. performed the numerical experiments and prepared the figures of the numerical results. J.G. and R.K.B.S. analyzed and interpreted the results and wrote the manuscript. All authors read and approved the manuscript.
\section{Funding}
J.G. and R.K.B.S. are financially supported by the Council of Scientific and Industrial Research through the sanctioned project 25(0221)/13/EMR-II.
\section{Acknowledgments}
We acknowledge the funding agencies and Jawaharlal Nehru University. This manuscript has been released as a pre-print at https://www.biorxiv.org/content/10.1101/569244v1 \citep{Jasleen}.
\section{Data Availability Statement}
The dataset analyzed in this study can be found in the UCI machine learning repository: EEG database data set, https://archive.ics.uci.edu/ml/datasets/EEG+Database.
For classical mechanics (field theory in $0{+}1$ dimensions) there exists
a rich landscape of ${\cal N}{=}\,8$ supersymmetric models, distinguished by
the number~$b$ of propagating bosonic degrees of freedom and by the nature of
the supersymmetry transformations (linear or nonlinear)
\cite{BeIvKrLe1,BeIvKrLe2,IvLeSu}.
Restricting to the linear type, the notation $(b,{\cal N},{\cal N}{-}b)$ counts their
propagating bosonic, fermionic and auxiliary components.
As was already observed in~\cite{IvKrPa,DoPaJuTs}, an important role is played by
a potential inhomogeneity in the supersymmetry transformation of the fermions.
The parameters appearing there may be viewed as a constant shift of the auxiliary
components and are introduced through the superfield constraints. Together with
Fayet-Iliopoulos terms, they create a bosonic potential, lead to central charges
and partial supersymmetry breaking.
To accommodate these inhomogeneous terms, we apply the techniques discussed
in \cite{PaTo} and~\cite{KuRoTo} and produce the most general inhomogeneous linear
supermultiplets compatible with the ordinary supersymmetry algebra
$\{Q_i,Q_j\}=\delta_{ij} H$ (without central extensions).
Here, we concentrate on the classical mechanics of a (2,8,6)~particle.
The Lagrangian and Hamiltonian of this model has been formulated for a general
prepotential~$F$ in~\cite{BeKrNe} (without inhomogeneity) and in~\cite{BeKrNeSh}
(with inhomogeneity).
Here, we specialize to the conformal case and investigate the classical dynamics
of the conformal (2,8,6)~particle.
The inhomogeneous (2,8,6) ${\cal N}{=}\,8$ supermultiplet, under the requirement of
scale-invariance for the action, defines a unique superconformal mechanical
system. The only free parameters are the scale-setting Fayet-Iliopoulos coupling
and the dimensionless shift entering the inhomogeneous supersymmetry transformations.
We review the inhomogeneous supersymmetry transformations for ${\cal N}{\le}\,8$
and rederive the invariant conformal action for the inhomogeneous (2,8,6) multiplet
including Fayet-Iliopoulos terms, without using superspace technology.
After eliminating the auxiliary components we arrive at a very specific
(non-isotropic and indefinite) Weyl factor and bosonic potential in the two-dimensional
target space. It proves to be legitimate (at least classically) to restrict to
a (positive-definite) half-space, where we present some typical particle trajectories.
The inhomogeneous supersymmetry transformations that we investigate here
close the ordinary supersymmetry algebra without central extensions.
This is the case because we work within the Lagrangian framework.
Central extensions of the supersymmetry algebra can arise,
both in the classical and quantum cases, as a consequence of
the Hamiltonian formulation and the closure of the Noether-(super)charge algebra
under the Poisson bracket structure~\cite{IvKrPa}.
It is tempting to push the idea of this paper to even higher-extended supersymmetry.
For example, by coupling two inhomogeneous (2,8,6) multiplets linked by an extra, 9th,
supersymmetry, one should be able to construct an ${\cal N}{=}\,9$ superconformal mechanics
model with a four-dimensional target. This might be related with the standard reduction of
${\cal N}{=}\,4$ super Yang-Mills to an off-shell multiplet of type (9,16,7) in one dimension.
\newpage
\section{Inhomogeneous minimal linear supermultiplets}
Minimal linear supermultiplets of extended supersymmetry in one dimension are usually
formulated with homogeneous transformations for their component fields. However, in
some cases it is possible to extend the supersymmetry transformations by the addition
of an inhomogeneous term. This is admissible at
\begin{itemize}
\addtolength{\itemsep}{-6pt}
\item ${\cal N}{=}\,2$ for the supermultiplet $(0,2,2)$
\item ${\cal N}{=}\,4$ for the supermultiplets $(0,4,4)$ and $(1,4,3)$
\item ${\cal N}{=}\,8$ for the supermultiplets $(0,8,8)$ and $(1,8,7)$ and $(2,8,6)$
\end{itemize}
The remaining ${\cal N}=2,4,8$ supermultiplets do not admit an inhomogeneous extension,
as can be easily verified by investigating the closure of the
ordinary ${\cal N}$-extended supersymmetry algebra.
\par
Let $x$ and $y$ be physical bosons, $\psi$, $\psi_i$, $\lambda$ and $\lambda_i$
denote fermions, and $g$, $g_i$, $f$ and $f_i$ describe auxiliary fields.
Here, the isospin index $i$ runs over a range depending on the number of supersymmetries.
The presence of an inhomogeneous term requires the following mass dimension for the fields:
\begin{equation}
[t]=-1 \qquad\longrightarrow\qquad [x]=-1\ ,\quad [\psi]=-\sfrac12\ ,\quad [g]=0\ .
\end{equation}
In all the above cases, by a suitable R~transformation, the inhomogeneous terms can be
rotated to point only in a specific iso-direction. We choose the one with the highest
iso-index, i.e.~$i=2,3$ or~$7$, depending on the case. With this choice, let us list
the various supersymmetry transformations~$Q_i$ for the six cases listed above.
{\bf (0,2,2).}\quad
For the inhomogenous ${\cal N}{=}\,2$ $(0,2,2)$ supermultiplet, the two supersymmetry
transformations, without loss of generality, can be expressed as
($j,k=1,2$, $\epsilon_{12}=1$)
\begin{eqnarray}
&
\begin{array}{ll}
Q_1 \psi_j =g_j\ , &
Q_1 g_j={\dot \psi_j}\ ,
\\[4pt]
Q_2 \psi_j = \epsilon_{jk} {\tilde g_k}\ ,\quad &
Q_2 g_j= \epsilon_{jk} {\dot \psi_k} \ ,
\end{array}&
\end{eqnarray}
where the inhomogeneous extension hides in
\begin{equation}
\tilde g_k\ :=\ g_k+c_k \qquad\textrm{with}\quad c_k\in\mathbb R\ ,
\end{equation}
and we rotate to $c_1=0$, $c_2\equiv c>0$.
{\bf (0,4,4).}\quad
For the ${\cal N}{=}\,4$ $(0,4,4)$ multiplet, we have ($i,j,k=1,2,3$, $\epsilon_{123}=1$)
\begin{eqnarray}
&
\begin{array}{llll}
Q_0\psi=g\ , &Q_0 \psi_j =g_j\ , &
Q_0 g= {\dot \psi}\ , & Q_0 g_j={\dot \psi_j}\ ,
\\[4pt]
Q_i\psi = g_i\ ,\ &Q_i \psi_j = -\delta_{ij} {g}+\epsilon_{ijk} {\tilde g_k}\ ,\ &
Q_i g= -{\dot \psi_i}\ ,&\ Q_i g_j= \delta_{ij} {\dot\psi}-\epsilon_{ijk} {\dot\psi_k}\ ,
\end{array}&
\end{eqnarray}
and we may choose
\begin{equation}
\tilde g_1=g_1\ ,\quad \tilde g_2=g_2\quad\textrm{but}\quad\tilde g_3=g_3+c\ .
\end{equation}
{\bf (1,4,3).}\quad
The ${\cal N}{=}\,4$ $(1,4,3)$ multiplet looks slightly different,
\begin{eqnarray}
&
\begin{array}{llll}
Q_0 x=\psi\ , &Q_0\psi={\dot x}\ , &Q_0 \psi_j =g_j\ , & Q_0 g_j={\dot \psi_j}\ ,
\\[4pt]
Q_i x=\psi_i\ ,\quad& Q_i\psi=-g_i\ ,\quad&
Q_i\psi_j=\delta_{ij}{\dot x}+\epsilon_{ijk} {\tilde g_k}\ ,\quad&
Q_i g_j= -\delta_{ij} {\dot\psi}-\epsilon_{ijk} {\dot \psi_k}\ ,
\end{array}&
\end{eqnarray}
with the same $\tilde g_k$ as in (0,4,4).
{\bf (0,8,8).}\quad
Without loss of generality, we can generate the ${\cal N}{=}\,8$ multiplets from the
${\cal N}{=}\,4$ ones by replacing the quaternionic structure constants~$\epsilon_{ijk}$
by the (totally antisymmetric) octonionic structure constants~$c_{ijk}$, with $i,j,k=1,\ldots,7$ and
\begin{equation}
c_{123}=c_{147}=c_{165}=c_{246}=c_{257}=c_{354}=c_{367}=1\ ,
\end{equation}
together with $c_{ijk}=0$ for all other index combinations.
Therefore, the case of (0,8,8) yields
\begin{eqnarray}
&
\begin{array}{llll}
Q_0\psi=g\ , &Q_0 \psi_j =g_j\ , & Q_0 g= {\dot \psi}\ , & Q_0 g_j={\dot \psi_j}\ ,
\\[4pt]
Q_i\psi = g_i\ ,\quad&Q_i \psi_j = -\delta_{ij} g+ c_{ijk} {\tilde g_k}\ ,\quad&
Q_i g= -{\dot \psi_i}\ ,\quad & Q_i g_j= \delta_{ij} {\dot\psi}-c_{ijk} {\dot \psi_k}\ ,
\end{array}&
\end{eqnarray}
and we take
\begin{equation}
\tilde g_k = g_k + \delta_{k,7}\,c\ .
\end{equation}
{\bf (1,8,7).}\quad
In analogy with (1,4,3), we get
\begin{eqnarray}
&
\begin{array}{llll}
Q_0 x=\psi\ , &Q_0\psi={\dot x}\ , &Q_0 \psi_j =g_j\ , & Q_0 g_j={\dot \psi_j}\ ,
\\[4pt]
Q_i x=\psi_i\ ,\quad& Q_i\psi = -g_i\ ,\quad&
Q_i \psi_j = \delta_{ij} {\dot x}+c_{ijk} {\tilde g_k}\ ,\quad&
Q_i g_j= -\delta_{ij} {\dot\psi}-c_{ijk} {\dot \psi_k}\ ,
\end{array}&
\end{eqnarray}
and again $\tilde g_k=g_k$ except for $\tilde g_7=g_7+c$ with $c>0$.
{\bf (2,8,6).}\quad
This is the most interesting multiplet. It is convenient to present it in
quaternionic form, by fusing $(1,4,3)\oplus(1,4,3)=(2,8,6)$, with components
labeled by $(x,\psi_{(i)},g_{(i)})$ and $(y,\lambda_{(i)},f_{(i)})$, respectively,
where $i=1,2,3$.
The supersymmetry transformations are collected in the following table,
{\tiny
\begin{eqnarray}\label{table1}
&
\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}\hline
&x&g_1&g_2&g_3&y&f_1&f_2&f_3&\psi&\psi_1&\psi_2&\psi_3&\lambda&\lambda_1&\lambda_2&
\lambda_3\\ \hline
Q_0&\psi&{\dot \psi_1}&{\dot\psi_2}&{\dot\psi_3}&\lambda&{\dot\lambda_1}&{\dot\lambda_2}&{\dot\lambda_3}&
{\dot x}&g_1&g_2&g_3&{\dot y}&f_1&f_2&f_3\\\hline
Q_1&\psi_1&-{\dot\psi}&-{\dot\psi_3}&{\dot\psi_2}&\lambda_1&
-{\dot\lambda}&{\dot\lambda_3}&-{\dot\lambda_2}&-g_1&{\dot x}&{\tilde g_3}&-{\tilde g_2}&-f_1&{\dot y}&-{\tilde f_3}&{\tilde f_2}\\\hline
Q_2&\psi_2&{\dot \psi_3}&-{\dot \psi}&-{\dot\psi_1}&\lambda_2&-{\dot\lambda_3}&-{\dot\lambda}&{\dot\lambda_1}&
-g_2&-{\tilde g_3}&{\dot x}&{\tilde g_1}&-f_2&{\tilde f_3}&{\dot y}&-{\tilde f_1}\\\hline
Q_3&\psi_3&-{\dot \psi_2}&{\dot\psi_1}&-{\dot\psi}&\lambda_3&{\dot\lambda_2}&-{\dot\lambda_1}&-{\dot\lambda}&
-g_3&{\tilde g_2}&-{\tilde g_1}&{\dot x}&-f_3&-{\tilde f_2}&{\tilde f_1}&{\dot y}\\\hline
Q_4&\lambda&-{\dot\lambda_1}&-{\dot\lambda_2}&-{\dot\lambda_3}&-\psi&{\dot\psi_1}&
{\dot\psi_2}&{\dot\psi_3}&-{\dot y}&f_1&f_2&f_3&{\dot x}&-g_1&-g_2&-g_3\\\hline
Q_5&\lambda_1&{\dot\lambda}&{\dot \lambda_3}&-{\dot\lambda_2}&-\psi_1&-{\dot\psi}&{\dot\psi_3}&-{\dot\psi_2}&-f_1&-{\dot y}&-{\tilde f_3}&{\tilde f_2}&g_1&{\dot x}&-{\tilde g_3}&{\tilde g_2}\\\hline
Q_6&\lambda_2&-{\dot\lambda_3}&{\dot\lambda}&{\dot\lambda_1}&-\psi_2&-{\dot\psi_3}&
-{\dot\psi}&{\dot\psi_1}&-f_2&{\tilde f_3}&-{\dot y}&-{\tilde f_1}&g_2&{\tilde g_3}&{\dot x}&-{\tilde g_1}\\\hline
Q_7&\lambda_3&{\dot\lambda_2}&-{\dot\lambda_1}&{\dot\lambda}&-\psi_3&
{\dot\psi_2}&-{\dot\psi_1}&-{\dot\psi}&-f_3&-{\tilde f_2}&{\tilde f_1}&-{\dot y}&g_3&-{\tilde g_2}&{\tilde g_1}&{\dot x}\\\hline
\end{array}
&\nonumber
\end{eqnarray}
}
Inspection shows that $Q_0,Q_1,Q_2,Q_3$ act within each of the two (1,4,3) submultiplets,
while the additional supersymmetries $Q_4,Q_5,Q_6,Q_7$ mix the two.
Having SO(3)-rotated inside each (1,4,3) submultiplet to
\begin{equation}
\tilde g_k = g_k + \delta_{k3}\,c \qquad\text{and}\qquad \tilde f_k = f_k + \delta_{k3}\,c'
\end{equation}
we may employ a further SO(2) rotation, acting on the $\psi_3\lambda_3$ and
$g_3f_3$ planes, to remove the $c'$ contribution and align the inhomogeneity
with one of the two ${\cal N}{=}\,4$ submultiplets.
\section{Invariant action for a (2,8,6) particle}
To investigate the dynamics of superconformal particles on a line, based on the
various inhomogeneous supermultiplets, we shall need to construct invariant actions
for them. For ${\cal N}{\ge}\,4$ and the presence of at least one physical boson,
there exists a canonical method~\cite{KuRoTo} to generate such actions, by setting
\begin{equation} \label{N4action}
{\cal S} \= \int\!\mathrm d t\;{\cal L} \= \int\!\mathrm d t\ Q_1Q_2Q_3Q_4\,F(x,y,\ldots)\ ,
\end{equation}
where $F(x,y,\ldots)$ is an unconstrained prepotential.
In order to obtain conformally invariant mechanics, the action should not contain
any dimensionful coupling parameter, and therefore, due to $[Q_i]=\sfrac12$, we
demand that $[F]=-1$. One can prove that the ensuing scale invariance extends to
full conformal invariance.
Without the inhomogeneous extension, (\ref{N4action}) yields only a kinetic term with
some metric. It is the inhomogeneous term which will give rise to a Calogero-type
potential. The action may be complemented by the addition of a Fayet-Iliopoulos term
\begin{equation}
{\cal S}_{\textrm{FI}} \= \int\!\mathrm d t\;\sum_i(q_i g_i + r_i f_i)
\qquad\textrm{with}\quad [q_i]=[r_i]=1\ ,
\end{equation}
introducing dimensionful couplings compatible with conformal invariance.
These Fayet-Iliopoulos terms produce an oscillatorial damping, via the DFF
trick of conformal mechanics~\cite{AlFuFu}.
For the (1,4,3) multiplet (only $x$ and $g_i$, no $y$ or $f_i$), the proper choice
for the prepotential is
\begin{equation}
F(x)\= \sfrac14\,x\ln x \qquad\longrightarrow\qquad
{\cal L}+{\cal L}_{\textrm{FI}}\=
F''(x)\bigl({\dot x}^2+g_i^2\ +\ c\,g_3\bigr)+q_ig_i
\ +\ \textrm{fermions}\ .
\end{equation}
After eliminating the auxiliary components $g_i$ via their equations of motion
and putting the fermions to zero, one gets
\begin{eqnarray}
{\cal L}'_{\textrm{bos}}&=&
F''(x)\bigl({\dot x}^2-\sfrac14c^2\bigr)\ -\ \sfrac14q_i^2/F''(x)\ -\ \sfrac12c\,q_3
\nonumber\\[4pt]
&=&\sfrac14\bigl({\dot x}^2-\sfrac14c^2\bigr)/x\ -\ q_i^2x\ -\ \sfrac12c\,q_3
\label{143pot} \\[4pt]
&=&\sfrac12{\dot w}^2-\sfrac18c^2w^{-2}\ -\ \sfrac12q_i^2w^2\ -\
\sfrac12c\,q_3\ ,\nonumber
\end{eqnarray}
and we have recovered the standard conformal action
after the coordinate change $x=\sfrac12w^2$.
Stepping up to ${\cal N}{=}\,8$, we change the iso-labelling to make $Q_0,Q_1,Q_2,Q_3$
manifest,
\begin{equation} \label{N8action}
{\cal S} \= \int\!\mathrm d t\;{\cal L} \= \int\!\mathrm d t\ Q_0Q_1Q_2Q_3\,F(x,y,\ldots)\ .
\end{equation}
Demanding invariance under the additional four supersymmetries by requiring
\begin{equation} \label{N8constraint}
Q_l{\cal L} \= \partial_t W_l \qquad\textrm{for}\quad l=4,5,6,7
\end{equation}
imposes severe constraints on~$F$.
In fact, for the (1,8,7) multiplet no action can be invariant under the inhomogeneous
supersymmetry transformations.\footnote{
In the homogeneous case the constraint reads $F^{\prime\prime\prime\prime}(x)=0$,
which produces ${\cal L}=(ax{+}b)\,\dot x^2+\ldots$.}
However, the situation is much more interesting for the (2,8,6) multiplet.
Here, the constraint~(\ref{N8constraint}) says that, like in the homogeneous
case~\cite{GoRoTo}, the prepotential~$F(x,y)$ must be harmonic,
\begin{equation}
\Box F \ \equiv\ F_{xx}+F_{yy}\=0\ .
\end{equation}
The general solution is encoded in a meromorphic function~$H(z)$ via
\begin{equation} \label{harmonicF}
F(x,y) \= H(z) + \overline{H(z)} \= 2\,\mathrm{Re} H(z)\ ,
\end{equation}
where it is convenient to pass to complex coordinates,
\begin{eqnarray}
&
\begin{array}{llll}
z=x+\mathrm i y\ ,\quad& \partial_z=\sfrac12(\partial_x-\mathrm i\partial_y)\ ,\quad&
h_i=g_i+\mathrm i f_i\ ,\quad& \chi_{(i)}=\psi_{(i)}+\mathrm i\lambda_{(i)} \\[4pt]
\bar z=x-\mathrm i y\ ,& \partial_{\bar z}=\sfrac12(\partial_x+\mathrm i\partial_y)\ ,&
\bar h_i=g_i-\mathrm i f_i\ ,& \bar\chi_{(i)}=\psi_{(i)}-\mathrm i\lambda_{(i)}\ .
\end{array}&
\end{eqnarray}
Inserting (\ref{harmonicF}) into~(\ref{N8action}), we obtain
\begin{eqnarray}
{\cal L}&=&
2\,\mathrm{Re}\,\bigl\{ H_{zz} (\dot{\bar z}\dot{z}\,+\,\bar h_ih_i\,+\,c\,h_3
\,+\,\sfrac12\dot{\bar\chi}\chi-\sfrac12\bar\chi\dot\chi
\,+\,\sfrac12\dot{\bar\chi}_i\chi_i-\sfrac12\bar\chi_i\dot\chi_i) \nonumber\\[4pt]&+&
H_{zzz} (\chi\chi_ih_i\,+\,\sfrac12\epsilon_{ijk}\chi_i\chi_j h_k\,+\,c\,\chi\chi_3)
\ +\ \sfrac16 H_{zzzz} \epsilon_{ijk}\chi\chi_i\chi_j\chi_k \bigr\}\ ,
\end{eqnarray}
where the inhomogeneous extension is clearly visible in the terms
containing the parameter~$c$.
The bosonic metric $g_{z\bar z}=H_{zz}{+}\bar H_{\bar z\bar z}$ is special
K\"ahler of rigid type~\cite{fre}.
Reverting to real notation and introducing the Weyl factors
\begin{equation}
\Phi \= 2\,\mathrm{Re} H_{zz}\=\sfrac12(F_{xx}{-}F_{yy}) \qquad\text{and}\qquad
\widetilde\Phi\= -2\,\mathrm{Im} H_{zz}\=F_{xy}\ ,
\end{equation}
the Lagrangian reads
\begin{eqnarray}
{\cal L}&=&\Phi\bigl({\dot x}^2+{\dot y}^2+{g_i}^2+{f_i}^2-\psi{\dot\psi}
-\lambda{\dot\lambda}-\psi_i{\dot\psi_i}-\lambda_i{\dot\lambda_i}\bigr)
\nonumber\\[4pt]&+&
\Phi_x\bigl(\psi\psi_ig_i-\psi\lambda_if_i-\lambda\psi_if_i-\lambda\lambda_ig_i\,+\,
\epsilon_{ijk}(\sfrac12 g_i\psi_j\psi_k-\sfrac12 g_i\lambda_j\lambda_k-f_i\lambda_j\psi_k)
\bigr) \nonumber\\[4pt]&+&
\Phi_y\bigl(\lambda\psi_ig_i-\lambda\lambda_if_i+\psi\psi_if_i+\psi\lambda_ig_i\,+\,
\epsilon_{ijk}(\sfrac12 f_i\psi_j\psi_k-\sfrac12 f_i\lambda_j\lambda_k+g_i\lambda_j\psi_k)
\bigr) \nonumber\\[4pt]&+&
\sfrac12(\Phi_{xx}{-}\Phi_{yy})\epsilon_{ijk}\bigl(\sfrac16\psi\psi_i\psi_j\psi_k+\sfrac16\lambda\lambda_i\lambda_j\lambda_k-\sfrac12\psi\psi_i\lambda_j\lambda_k-\sfrac12\lambda\lambda_i\psi_j\psi_k\bigr)
\nonumber\\[4pt]&+&
\Phi_{xy}\,\epsilon_{ijk}\bigl(\sfrac16\lambda\psi_i\psi_j\psi_k-\sfrac16\psi\lambda_i\lambda_j\lambda_k+\sfrac12\psi\lambda_i\psi_j\psi_k-\sfrac12\lambda\psi_i\lambda_j\lambda_k)\bigr)
\nonumber\\[4pt]&+&
c\,\bigl( \Phi g_3 +{\widetilde \Phi}f_3 +\Phi_x(\psi\psi_3-\lambda\lambda_3)+\Phi_y(\lambda\psi_3+\psi\lambda_3) \bigr)\ ,
\end{eqnarray}
to which we add the Fayet-Iliopoulos terms
\begin{equation}
{\cal L}_{\textrm{FI}} \= q_ig_i+r_if_i\ .
\end{equation}
The harmonic prepotential with the correct scaling dimension~$[H]=-1$ is
\footnote{
Multiplying $H$ with a phase corresponds to an irrelevant rotation in the complex plane.}
\begin{equation}
H(z) \= \sfrac18\,z\ln z \qquad\longleftrightarrow\qquad
F(x,y)\=\sfrac18\,x\ln(x^2{+}y^2)-\sfrac14\,y\arctan\sfrac{y}{x}\ ,
\end{equation}
and the corresponding Weyl factors read
\begin{equation}
\Phi\=\sfrac14\,\mathrm{Re}\frac1z\=\sfrac14\frac{x}{x^2{+}y^2} \qquad\text{and}\qquad
\widetilde{\Phi}\=-\sfrac14\,\mathrm{Im}\frac1z\=\sfrac14\frac{y}{x^2{+}y^2}\ .
\end{equation}
Note that the corresponding metric is an indefinite one, as it must be for any harmonic
Weyl factor.
In the bosonic limit, obtained by setting all fermions equal to zero, we obtain
\begin{equation}
{\cal L}_{\textrm{bos}}+{\cal L}_{\textrm{FI}}\=
\Phi\,({\dot x}^2+{\dot y}^2+{g_i}^2+{f_i}^2) +c\,(\Phi\,g_3+{\widetilde\Phi}f_3)
+q_ig_i+r_if_i\ .
\end{equation}
We eliminate the auxiliary fields via their algebraic equations of motion,
\begin{eqnarray}
&
\begin{array}{lll}
g_1=-\frac{q_1}{2\Phi}\ ,\quad& g_2=-\frac{q_2}{2\Phi}\ ,\quad&
g_3=-\frac{q_3{+}c\Phi}{2\Phi}\ \\[4pt]
f_1=-\frac{r_1}{2\Phi}\ ,\quad& f_2=-\frac{r_2}{2\Phi}\ ,\quad&
f_3=-\frac{r_3{+}c{\widetilde\Phi}}{2\Phi}\ ,
\end{array}&
\end{eqnarray}
and arrive at
\begin{eqnarray}
{\cal L}'_{\textrm{bos}}&=&
\Phi\,\bigl({\dot x}^2+{\dot y}^2\bigr)\ -\ \sfrac1{4\Phi}\bigl(
q_1^2+q_2^2+(q_3{+}c\Phi)^2+r_1^2+r_2^2+(r_3{+}c\widetilde{\Phi})^2\bigr)\nonumber\\[4pt]
&=& \frac{x}{x^2{+}y^2}\frac{{\dot x}^2+{\dot y}^2}{4}\ -\
\frac{(q_i^2{+}r_i^2)(x^2{+}y^2)}{x}\ -\ c\,\frac{q_3x{+}r_3y}{2x}\ -\ \frac{c^2}{16x}\\[6pt]
&=:& K\ -\ V\ ,\nonumber
\end{eqnarray}
making explicit the effect of both the inhomogeneous supersymmetry transformation ($c$) and
the Fayet-Iliopoulos terms ($q_i,r_i$) on the potential~$V$.
It is tempting to perform the same coordinate change as for the (1,4,3) multiplet,
$x=\sfrac12w^2$, which yields
\begin{equation} \label{wy}
{\cal L}'_{\textrm{bos}}\=
\sfrac12(1{+}\gamma^2)^{-1}\Bigl({\dot w}^2+\frac{{\dot y}^2}{w^2}\Bigr)\ -\
\sfrac12(1{+}\gamma^2)(q_i^2{+}r_i^2)w^2\ -\ \sfrac12\,c\,(q_3{+}r_3\gamma)\ -\
\frac{c^2}{8w^2}\ ,
\end{equation}
where $\gamma=2y/w^2$. This form reveals both the oscillator and Calogero terms,
but also shows the added complexity in two dimensions (mostly hidden in~$\gamma$).
Putting $y\equiv0$ (also $\gamma{=}0$) brings back the (1,4,3) result~(\ref{143pot}).
\section{Trajectories of a (2,8,6) particle}
Without loss of generality, let us drop inessential Fayet-Iliopoulos terms and put
\begin{equation}
q_1=q_2=r_1=r_2=0 \qquad\text{and}\qquad q_3=:q\ ,\quad r_3=:r\ ,\quad q{+}\mathrm i r=:s\ .
\end{equation}
In complex coordinates, the kinetic and potential energies then read
\begin{eqnarray}
K&=& \Phi\,{\dot z}\dot{\bar z}\=
\sfrac18\frac{z{+}\bar z}{z\bar z}\,{\dot z}\dot{\bar z}\ ,\\[4pt]
V&=& \bigl( (q{+}c\Phi)^2+(r{+}c\widetilde\Phi)^2\bigr)/4\Phi\=
\sfrac18\frac{1}{z{+}\bar z}\bigl(4s\bar z+c\bigr)\bigl(4\bar s z+c\bigr)\ .
\end{eqnarray}
\begin{figure}[ht]
\centerline{
\lower2ex\hbox{
\includegraphics[width=9cm]{pot.eps}
}
\hfill
\includegraphics[width=5cm]{lev.eps}
}
\caption{Potential~$V$ and its level curves for $(c,q,r)=(4,1,2)
\quad\longrightarrow\quad z_{\textrm{min}}=\sfrac15(1{-}2\mathrm i)$. }
\label{fig:1}
\end{figure}
The level curves of this potential are circles of center and radius
\begin{equation}
z_0(V)=\frac{2V-c\,s}{4(q^2{+}r^2)} \qquad\text{and}\qquad
r(V)=\frac{\sqrt{V(V{-}c\,q)}}{2(q^2{+}r^2)}\ ,
\end{equation}
respectively, and its only minimum $V_{\textrm{min}}=cq$ is located at
\begin{equation}
z_{\textrm{min}}=z_0(cq)=\frac{c\,\bar s}{4(q^2{+}r^2)}\ .
\end{equation}
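For the reader's convenience, these statements follow by completing the square. Multiplying the potential by $8(z{+}\bar z)$ gives
\begin{equation}
8V\,(z{+}\bar z)\=16|s|^2 z\bar z\,+\,4c\,(s\bar z+\bar s z)\,+\,c^2\ ,
\end{equation}
and dividing by $16|s|^2$ one finds $|z-z_0(V)|^2=|z_0(V)|^2-\sfrac{c^2}{16|s|^2}=r(V)^2$, reproducing the center and radius quoted above.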
The parameter~$r$ governs the asymmetry under $y\to-y$.
The reflection $x\to-x$ flips the sign of $V{-}\sfrac12cq_3$.
Due to the factor of $z{+}\bar z=2x$, both the Weyl factor and the potential are
strictly positive on the right half-space $x{>}0$ and strictly negative for $x{<}0$.
Therefore, the (2,8,6) particle is a reasonable dynamical system only if its trajectories
do not cross the $x{=}0$ dividing line.
Seen from the right half-space, the potential barrier for $x{\to}0$ has a hole at $y{=}0$
if $c{=}0$, but the Weyl factor explodes precisely there.
For large coordinate values, the potential grows linearly with~$x$ and
quadratically with~$y$, so the $x{>}0$ trajectories remain bounded.
The equation of motion takes the form
\begin{eqnarray}
0&=&\Phi^3\ddot z\ +\ \Phi^2\Phi_z {\dot z}^2\ -\
\sfrac14\Phi_{\bar z}\bigl(q^2+(r+2\mathrm i cH_{zz})^2\bigr) \nonumber\\[4pt]
&\propto &(z{+}\bar z)^3 z\bar z\,\ddot z\ -\ (z{+}\bar z)^2 \bar z^2{\dot z}^2\ +\
z^2\bar z^2\bigl( (4qz)^2+(4rz{+}\mathrm i c)^2\bigr)\ ,
\end{eqnarray}
which in real coordinates reads
\begin{eqnarray}
0&=& \ddot x-\frac{1}{2x}\frac{x^2{-}y^2}{x^2{+}y^2}({\dot x}^2{-}{\dot y}^2)
-\frac{2y}{x^2{+}y^2}\,\dot x\,\dot y
+\frac{x^2{+}y^2}{x^3}\bigl(2(q^2{+}r^2)(x^2{-}y^2)-cr\,y-\sfrac18 c^2\bigr)
\ ,\nonumber\\[4pt]
0&=& \ddot y+\frac{y}{x^2{+}y^2}({\dot x}^2{-}{\dot y}^2)
-\frac{1}{x}\frac{x^2{-}y^2}{x^2{+}y^2}\,\dot x\,\dot y
+\frac{x^2{+}y^2}{x^3}\bigl(4(q^2{+}r^2)\,x\,y+cr\,x\bigr)\ .
\end{eqnarray}
The only constant of motion of this system is the energy $E=T+V$, so the generic particle
motion is not integrable. Figure~2 shows the trajectory for the $(c,q,r)$-value
chosen in Figure~1 and a couple of initial conditions.
\begin{figure}[ht]
\centerline{
\includegraphics[width=7.7cm]{tra_1_0.eps}
\hfill
\includegraphics[width=6.3cm]{tra_0.1_1.eps}
}
\caption{Trajectories for $(c,q,r)=(4,1,2)$ with initial conditions
$(z,\dot z)(0)=(1,0)$ (left) and $(z,\dot z)(0)=(\sfrac{1}{10}{+}\mathrm i,0)$ (right). }
\label{fig:2}
\end{figure}
One sees that the curve does not fill out the region~$V(x,y)\le E$,
an effect of the position-dependent effective mass $M=2\Phi(x,y)$.
It is also clear that the $x{=}0$ barrier is impenetrable.
Therefore, it makes sense to substitute $w=\sqrt{2x}$ and introduce the dynamics
in the $wy$-plane according to~(\ref{wy}). The trajectories of Figure~2 get
somewhat distorted in these variables, but their qualitative behavior is unchanged.
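The trajectories are straightforward to reproduce numerically. The following Python sketch (assuming SciPy is available) integrates the real-coordinate equations of motion above for the parameters of Figure~2; the time span and the tolerances are our own choices, and energy conservation can be monitored as a consistency check.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

c, q, r = 4.0, 1.0, 2.0      # parameter values of Figures 1 and 2

def eom(t, u):
    # u = (x, y, vx, vy); equations of motion of the bosonic sector
    x, y, vx, vy = u
    rho2 = x**2 + y**2
    ax = ((x**2 - y**2) / (2*x*rho2) * (vx**2 - vy**2)
          + 2*y/rho2 * vx*vy
          - rho2/x**3 * (2*(q**2 + r**2)*(x**2 - y**2) - c*r*y - c**2/8))
    ay = (-y/rho2 * (vx**2 - vy**2)
          + (x**2 - y**2) / (x*rho2) * vx*vy
          - rho2/x**3 * (4*(q**2 + r**2)*x*y + c*r*x))
    return [vx, vy, ax, ay]

# initial condition (z, z')(0) = (1, 0) of the left panel of Figure 2
sol = solve_ivp(eom, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0],
                rtol=1e-10, atol=1e-10, dense_output=True)
x, y = sol.y[0], sol.y[1]    # the trajectory stays in the x > 0 half-plane
\end{verbatim}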
\bigskip
\noindent
{\bf Acknowledgements}
\noindent
O.L. thanks CBPF for warm hospitality. This work was partially supported by
CNPq and by DFG grant Le-838/9-2.
\newpage
\label{sec:intro}
Audio source separation has been intensively studied over the last decade \cite{OzerovF10,LiutkusFRPD14,Roux15,FitzgeraldLB16,Takahashi17} owing to its wide range of applications such as karaoke, remixing, spatial audio, and many other downstream tasks. Although the separation performance has greatly improved thanks to recent advances in deep neural network (DNN)-based methods, it remains far from perfect in many challenging scenarios including the separation of music containing many instrumental sounds mixed in a stereo format
\cite{JanssonHMBKW17,Takahashi18MMDenseLSTM,JLee2019,Liu2019mss,defossez2019music,Takahashi19, Takahashi21}.
In some cases, such as music production, sources can be assumed to be known during the mixing stage. Informed source separation (ISS) takes advantage of this and computes side-information during the so-called \textit{encoding} stage.
Side-information can be either embedded into mixtures by using watermark approaches \cite{Parvaix10, Liutkus12} or simply transmitted along with the mixtures \cite{Ozerov11}. Separation models are designed to utilize the side-information to improve the performance. In \cite{Parvaix10}, the modified discrete cosine transform (MDCT) coefficients are used to form the side-information, and separation is performed by estimating a mask from the encoded MDCT coefficients at each time-frequency point by assuming the sparsity of sources. In \cite{Ozerov11, Liutkus12}, local Gaussian models (LGM) are adopted to solve ISS problems. Bl\"{a}ser \textit{et al. } applied non-negative tensor factorization (NTF) to ISS, where factorized matrices are compressed and transmitted as the side-information and reconstructed matrices are used to calculate the parameters of the Wiener filter \cite{Max18}.
Another line of work closely related to ISS uses extra information such as a music score or text \cite{Miron17,Manilow20,Kinoshita2015TextinformedSE}, or an extra model trained for other tasks such as automatic speech recognition \cite{Takahashi20}, to leverage knowledge from other domains.
Existing ISS approaches heavily rely on the side-information, and separation models are specifically designed to use the side-information. Therefore, such models cannot perform separation or perform poorly if the side-information is not available.
In this work, different from previous works, we adopt a pretrained DNN-based separation model that is trained without any side-information for ISS. Rather than modifying the pretrained separation model to use the side-information, we compute an imperceptible perturbation that is carefully designed to improve the separation of the model and add it to the mixture.
The proposed approach is closely related to adversarial examples, which were originally discovered in image classification \cite{Szegedy2014}, that is, imperceptibly small perturbations can significantly alter DNN predictions. Since adversarial examples can be a crucial problem for many DNN-based systems, they have been intensively investigated from different aspects including attack methods \cite{Su2019}, defense methods \cite{Madry2018, Bai2019}, transferability \cite{Dong2018, Wu2020}, and the cause of network vulnerabilities \cite{Goodfellow2015, Ilyas2019}.
Recently, Takahashi \textit{et al. } investigated adversarial examples on audio source separation and reported that some attack methods are effective with limited transferability \cite{Takahashi21adv}. The proposed method in this paper can be seen as an application of adversarial attacks for the opposite purpose, namely, the perturbation is computed to improve the separation rather than degrade it. In this analogy, we refer to samples computed by the proposed method as \textit{amicable examples}. However, the effectiveness of amicable examples is unclear, as they can have potentially different properties from adversarial examples. This is because (i) while (untargeted) adversarial examples only need to move away from the targets, and many possible perturbations can degrade the separation, amicable examples have concrete targets; thus, amicable examples may be difficult to find or less effective; (ii) if the loss curve becomes flat around the target $y$ but steep away from the target, the improvement of an amicable example may not be as significant as that of an adversarial example; (iii) amicable examples may be more prone to getting stuck in local optima. In our experiment, we provide both quantitative and qualitative evaluations of the effect of amicable examples.
An advantage of the proposed method is that since the separation model is not modified to use side-information, the model can be used for both standard mixtures and amicable examples in a unified manner.
Moreover, we show that amicable examples are robust against audio signal compression, which allows us to transmit amicable examples at low bit rates.
As shown in our experiment in Sec.~\ref{sec:untargeted}, amicable examples can selectively improve the performance of the targeted separation model and have very limited effects on untargeted separation models. Although this is often a desirable property, explicit control of the effects of amicable examples on multiple models is even more useful. To this end, we propose the use of multi-model multi-purpose perturbation learning (MMPL) to control the effect of the perturbation in both positive and negative ways depending on the separation model.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/advamic.pdf}
\caption{Simplified visualization of a loss curve for the separation $f(x)$. For simplicity, we consider the $l_\infty$ norm for the constraint ($\|\nu\|_\infty<\epsilon$). For the adversarial example $x+\eta$, the input $x$ is perturbed in the direction that increases the criterion $d()$ within an $\epsilon$-ball, while the perturbation $\nu$ for the amicable example decreases it. The minimum loss can be obtained when $x+\nu=y$, which may be outside of the $\epsilon$-ball.}
\label{fig:advamic}
\end{figure}
The contributions of this work are summarized as follows:
{
\setlength{\leftmargini}{20pt}
\begin{enumerate}
\item We propose amicable examples, the opposite optimization problem to adversarial examples, and apply them to ISS. The proposed method allows us to use the same separation model universally under both informed- and non-informed conditions.
\item We investigate the effectiveness of amicable examples on targeted and untargeted models and show the selective effectiveness for the targeted model. We further show the robustness of amicable examples against distortions caused by signal compression.
\item We further propose MMPL to control the effects of the perturbation against multiple models individually.
\item We show that, by using MMPL, amicable and adversarial examples can co-exist, namely, a perturbation can significantly improve the performance of some models and significantly degrade the performance of others.
\end{enumerate}
}
\section{Amicable-example-based informed Source Separation}
Given $N$ sources $y=[y_1, \dots, y_N]$ and a mixture $x=\sum_{i=1}^N y_i$, a DNN-based separation model $f_\theta()$ is trained to minimize the expectation of a training criterion $d()$ across data $D$ as
\begin{equation}
\min_\theta \mathbb{E}_{(x,y)\in D}[d(f_\theta(x), y)],
\label{eq:ss}
\end{equation}
where $\theta$ denotes network parameters. A typical choice for $d()$ is $l1$ or $l2$ distance. We use $l2$ distance in this work. Unlike conventional ISS, where the separation model is designed to use the side-information $\psi_\omega(y)$ encoded from $y$ and optimize the parameters $\theta, \omega$ as $\min_{\theta, \omega}d(f_\theta(x, \psi_\omega(y)),y)$, we fix the separation model parameters $\theta$ and instead compute a perturbation $\nu$ that minimizes the criterion under a constraint $\mathcal{C}$ on the perturbation as
\begin{equation}
\min_{\nu \in \mathcal{V}} d(f_\theta(x+\nu), y),~ \mathcal{V} = \{ \nu ~|~ \mathcal{C}(\nu) < \epsilon\}.
\label{eq:amic}
\end{equation}
The perceptibility of the perturbation $\nu$ depends strongly on the input mixture $x$ to which it is added; for example, low-level noise can be perceptible when the mixture is also low-level, while high-level noise can be hardly perceptible when the mixture is also high-level. To incorporate the masking effect, we use short-term power ratio (STPR) regularization \cite{Takahashi21adv} as the constraint, i.e.,
\begin{equation}
\mathcal{C}_{\STPR}(\nu) = \|\vartheta(\nu, l) / \vartheta(x, l)\|_1,
\label{eq:mask}
\end{equation}
where $\vartheta(\nu, l) = [\|\nu_1\|_2, \cdots, \|\nu_N\|_2]$ is the patchwise $l2$ norm function, which computes the norms of short segments $\nu_n=[\nu(t_n),\cdots,\nu(t_n+l)]$ of length $l$ starting from time index $t_n=(n-1)l$.
Unlike adversarial examples, where the perturbation can become arbitrarily large without a constraint on its magnitude, amicable noise $\nu$ can be self-regularized even without the constraint, because injecting excessively large noise into the mixture itself makes it difficult to bring the separation close to the target. Nevertheless, we found that the constraint is essential not only for regularizing the magnitude of the perturbation but also for robustly obtaining improvements in the separation.
By introducing a Lagrange weight $\lambda$, \eqref{eq:amic} can be solved by minimizing the loss function $L$ using stochastic gradient descent:
\begin{equation}
L(\nu) = \|f(x+\nu) - y\|_2^2 + \lambda\mathcal{C}_{\STPR}(\nu).
\label{eq:amicloss}
\end{equation}
We omit $\theta$ for the sake of clarity.
If the negative of the first term is used instead, \eqref{eq:amicloss} results in the loss function for the adversarial example. However, the optimization behavior can differ depending on the loss surface, as shown in Fig.~\ref{fig:advamic}. If the loss curve becomes flat around the (local) optimal point but steep away from it, the improvement by amicable examples may not be as significant as that by adversarial examples, and vice versa.
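To make the procedure concrete, a minimal PyTorch sketch of this optimization is given below. The segment length of the STPR patches, the learning rate and the mean reduction of the $\ell_2$ term are our assumptions rather than tuned values, and \texttt{separator} stands for any pretrained model $f_\theta$ whose parameters are frozen.
\begin{verbatim}
import torch

def stpr(nu, x, l=2048):
    # Short-term power ratio constraint C_STPR of Eq. (3);
    # the patch length l is an assumption, not a tuned value.
    def patch_norms(s):
        T = s.shape[-1] // l * l          # drop the trailing partial patch
        return s[..., :T].reshape(-1, l).norm(dim=-1)
    return (patch_norms(nu) / (patch_norms(x) + 1e-8)).sum()

def amicable_example(separator, x, y, lam=1.0, steps=300, eps=0.01, lr=1e-3):
    # Solve Eq. (4): perturb the mixture x towards better separation of y.
    for p in separator.parameters():      # theta is fixed; only nu is learned
        p.requires_grad_(False)
    nu = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    opt = torch.optim.Adam([nu], lr=lr)
    for _ in range(steps):
        loss = ((separator(x + nu) - y) ** 2).mean() + lam * stpr(nu, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return nu.detach()
\end{verbatim}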
\section{Incorporating multiple models for multiple purposes}
An amicable example perturbs the mixture towards the direction where the separator \textit{believes} it sounds more like the target sources. A natural question is ``\textit{Is the amicable example universal for other separation models?}". As shown in our experiment in Sec.~\ref{sec:untargeted}, we found that an amicable example is specific to the separation model used to compute it, which we call the \textit{targeted model}, and does not markedly improve the performance of untargeted models. This property is useful when one wants to minimize the side effects on other models or to design a system in which a targeted model exclusively benefits from the amicable example. However, it is more useful to have individual control over the separation behavior of multiple separation models. To this end, we propose MMPL as
\begin{equation}
L(\nu) = \sum_i\alpha_i\|f^i(x+\nu) - y\|_2^2 + \lambda\mathcal{C}_{\STPR}(\nu),
\label{eq:mmpl}
\end{equation}
where $f^i$ denotes the $i$th separation model and $\alpha_i$ the weight to control the effect on each model. Note that $\alpha_i$ can be a negative value, in which case the term promotes the perturbation to be an adversarial example for model $f^i$. When all $\alpha_i$ are negative and $\sum_i\alpha_i=-1$, the loss becomes similar to the adversarial attack against an ensemble of models \cite{Liu17ensambleattack}. However, MMPL is more general and flexible as it can produce both amicable and adversarial examples at the same time depending on the model, i.e., with $\Gamma=\{i|\alpha_i>0\}$ and $\Lambda=\{i|\alpha_i<0\}$, the perturbation $\nu$ acts as an amicable example for models $f^{i\in\Gamma}$ but as an adversarial example for models $f^{i\in\Lambda}$. To the best of our knowledge, this is the first attempt to learn a perturbation that serves as both an amicable and an adversarial example simultaneously.
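A sketch of \eqref{eq:mmpl} under the same assumptions as above (the sign of each $\alpha_i$ selects the amicable or adversarial regime for the corresponding model):
\begin{verbatim}
def mmpl_loss(models, alphas, x, nu, y, seg_len, lam):
    """MMPL: alpha_i > 0 makes nu amicable for f_i, alpha_i < 0 adversarial."""
    sep = sum(a * (f(x + nu) - y).pow(2).sum() for f, a in zip(models, alphas))
    return sep + lam * stpr(nu, x, seg_len)
\end{verbatim}
For instance, two models with $(\alpha_1, \alpha_2)=(1,-100)$ reproduce the amicable-adversarial scenario evaluated in Sec.~\ref{sec:MMPL}.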
\label{sec:perturblevel}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/DISDRvsSDRi.pdf}
\caption{SDR improvement with different amicable noise levels. Higher $\DISDR$ indicates lower noise level.}
\label{fig:perturblevel}
\end{figure}
\section{Experiments}
\label{sec:exSS}
\subsection{Setup}
Experiments are conducted on the \textit{test} set of the MUSDB18 dataset \cite{sisec2018}, which contains 50 songs recorded in stereo format at 44.1 kHz.
Four sources ({\it vocals, drums, bass, other}) and their mixture are available for each song. To speed up the evaluation for our extensive experiments, we crop 10s clips from each song and use them for the evaluation.
The signal-to-distortion ratio (SDR) is used as the evaluation metric of separation performance. As in SiSEC 2018 \cite{sisec2018}, SDR is computed using the {\it museval} package and the median over all tracks of the median of each track is reported. To evaluate how much the mixture is distorted by the perturbation, we use the SDR between the mixture $x$ and the perturbed mixture $x+\nu$,
\begin{equation}
\DISDR = \SDR(x, x+\nu),
\label{eq:metric}
\end{equation}
which we call the degradation of input $\DISDR$.
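Since the distortion in \eqref{eq:metric} is the perturbation itself, the metric reduces to $\DISDR = 10\log_{10}\left(\|x\|_2^2/\|\nu\|_2^2\right)$ under the plain SDR definition; a minimal sketch is given below (note that \textit{museval} computes a framewise variant, so the values can differ slightly):
\begin{verbatim}
import numpy as np

def disdr(x: np.ndarray, nu: np.ndarray) -> float:
    """Global SDR between mixture x and perturbed mixture x + nu, in dB."""
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(nu ** 2))
\end{verbatim}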
For the separation models, we use three open-source libraries, namely Open-Unmix (UMX) \cite{stoter19}, D3Net \cite{Takahashi21}, and Demucs \cite{defossez2019music}, to ensure a variety of separation algorithms. UMX is based on bidirectional long-short term memory (BLSTM) layers and performs the separation in the frequency domain. D3Net is a convolutional neural network and also operates in the frequency domain. Demucs consists of both convolution and BLSTM layers and performs separation in the time domain. All models are trained on the MUSDB18 \textit{train} set. In addition, UMX and Demucs have their variants: UMX$_{{\rm HQ}}$ is trained on the uncompressed MUSDB18 \textit{train} set and Demucs$_{ex}$ is trained with 150 additional songs.
The initial perturbation is a uniform noise $[-\epsilon, \epsilon], \epsilon=0.01$, and is optimized using Adam for 300 iterations.
\begin{table}[t]
\caption{\label{tab:subj} {\it Subjective test on perceptibility of amicable examples.}}
\vspace{2mm}
\centering{
\begin{tabular}{c | c}
\hline
$\DISDR$ [dB] & Accuracy\\
\hline\hline
27.8 & 51.0\%\\
30.1 & 49.0\%\\
\hline
\end{tabular}
}
\end{table}
\begin{table}[t]
\caption{\label{tab:untargeted} {\it SDRs of the separation of original mixtures and amicable example computed using UMX$_{{\rm HQ}}$ (in dB). The amicable example selectively improves the performance of the targeted model.}}
\vspace{2mm}
\centering{
\footnotesize
\begin{tabular}{c | c | c c c c c}
\hline
Model & input & vocals & drums & bass & other & Avg.\\
\hline\hline
UMX$_{{\rm HQ}}$ & \multirow{4}{*}{Original} & 6.25 & 6.24 & 5.07 & 3.40 & 5.24\\
Demucs & & 6.71 & 5.92 & \textbf{5.31} & 2.41 & 5.09\\
D3Net & & \textbf{7.08} & \textbf{6.79} & 5.08 & \textbf{3.56} & \textbf{5.63}\\
UMX & & 6.65 & 5.91 & 4.86 & 3.39 & 5.20\\
\hline
UMX$_{{\rm HQ}}$ & \multirow{4}{*}{\begin{tabular}{p{0.9cm}}Amicable \\{\scriptsize (UMX$_{{\rm HQ}}$)}\end{tabular}} & \textbf{8.44} & \textbf{8.03} & \textbf{6.76} & \textbf{5.61} & \textbf{7.21}\\
Demucs & & 6.66 & 5.96 & 5.49 & 2.51 & 5.16\\
D3Net & & 7.18 & 6.73 & 5.12 & 3.62 & 5.66\\
UMX & & 7.53 & 6.74 & 5.66 & 4.46 & 6.10\\
\hline
\end{tabular}
}
\end{table}
\begin{table*}[t]
\caption{\label{tab:mmpl} {\it SDRs for separation of perturbed samples computed using MMPL in two scenarios ($\alpha_i$ are both positive and $\alpha_i$ have opposite signs). Values in brackets indicate the SDR improvement over the separation of the original mixture.}}
\vspace{2mm}
\centering{
\begin{tabular}{c | c | c | c c c c c}
\hline
Model & $\alpha_{i}$ & $\DISDR$ & vocals & drums & bass & other & Avg.\\
\hline\hline
UMX$_{{\rm HQ}}$ & positive &\multirow{2}{*}{29.21} & 7.90 (+1.65) & 8.26 (+2.02) & 6.22 (+1.15) & 5.08 (+1.68) & 6.87 (+1.63)\\
Demucs$_{ex}$ & positive & & 9.10 (+1.60) & 9.89 (+2.01)& 9.75 (+2.19) &5.87 (+2.56) &8.65 (+2.09)\\
\hline
UMX$_{{\rm HQ}}$ & positive & \multirow{2}{*}{28.82} & 7.89 (+1.64) & 8.26 (+2.02) & 6.22 (+1.15) & 5.08 (+1.68) & 6.86 (+1.62)\\
Demucs$_{ex}$ & negative& & 0.51 (-6.99) & 0.50 (-7.38) & 1.47 (-6.09) & -1.08 (-4.39) & 0.35 (-6.21)\\
\hline
\end{tabular}
}
\end{table*}
\subsection{Level of amicable noise and separation improvement}
First, we investigate the relationship between the level of perturbation and separation performance improvement. We use UMX$_{{\rm HQ}}$ for the evaluation and set different $\lambda$ values to control the perturbation level. Fig. \ref{fig:perturblevel} shows the SDR improvement $\SDRi$ over the original mixture with different $\DISDR$. As expected, the SDR improvement becomes more significant with increasing perturbation level. For 27.8 dB $\DISDR$, an improvement of more than 2 dB is obtained on average.
To evaluate the perceptibility of the amicable noise, we conduct a subjective test similar to the double-blind triple-stimulus with hidden reference format (ITU-R BS.1116), where the reference is the original mixture, either A or B is the same as the reference, and the other is an amicable example. Evaluators are asked to identify which of A or B is the same as the reference. We test two amicable noise levels, and 40 audio engineers evaluated five songs of 10~s duration for each noise level. Table \ref{tab:subj} shows that the accuracy of correctly identifying the reference is close to the chance rate (50\%) at a $\DISDR$ of 27.8 dB; thus, the amicable noise is imperceptible.
\subsection{Effects on untargeted models}
\label{sec:untargeted}
Next, we test the amicable example on untargeted models. The amicable example is computed using UMX$_{{\rm HQ}}$ and tested on different separation models. Table \ref{tab:untargeted} shows the SDR values computed on the separations of the original mixture and amicable examples. By comparing the results of the original mixture and amicable example for each model, we observe that the amicable example significantly improves the SDRs of the targeted model UMX$_{{\rm HQ}}$ but only slightly improves the SDRs of Demucs and D3Net. This indicates that the loss surfaces \eqref{eq:amic} of these models are very different and thus the amicable noise does not generalize to different models. In contrast, the SDR improvement of UMX is more significant than that of Demucs and D3Net, probably because the architectures of UMX and UMX$_{{\rm HQ}}$ are identical and they were trained on very similar datasets (only their high-frequency components are different); thus, their loss surfaces are probably also similar.
\subsection{Robustness against signal compression}
\label{sec:robustness}
As audio signals are often compressed to reduce the bandwidth or file size for transmission, it is important to assess the robustness of an amicable example against compression to verify its usability in realistic scenarios. To this end, we study how SDR improvements change by compressing the amicable example using an MP3 encoder with different compression levels. Fig. \ref{fig:compress} shows that even after the amicable example is compressed at 256 kbps, the SDR improvement over the original mixture is nearly the same as that of the uncompressed example. Although more aggressive compression rates slightly degrade the effectiveness, the amicable example still improves the SDR by 1.57 dB on average at 128 kbps.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/compressed.pdf}
\caption{SDR improvement with different MP3 compression bit rates.}
\label{fig:compress}
\end{figure}
\subsection{Amicable adversarial example with MMPL}
\label{sec:MMPL}
Finally, we evaluate MMPL in two scenarios using UMX$_{{\rm HQ}}$ and Demucs$_{ex}$. In the first scenario, $\alpha_{i}$ in \eqref{eq:mmpl} is set to be positive for both UMX$_{{\rm HQ}}$ and Demucs$_{ex}$. In this case, the perturbation is computed to improve both models. In the second scenario, we use a negative $\alpha_i$ for Demucs$_{ex}$ instead. This makes the perturbation amicable for UMX$_{{\rm HQ}}$ but adversarial for Demucs$_{ex}$. To balance the magnitudes of the losses for both models, we set $(\alpha_{\rm{UMX}}, \alpha_{\rm{Demucs}})$ to $[1,100]$ in the former case and to $[1,-100]$ in the latter.
The results are shown in Table \ref{tab:mmpl}. As observed, when we include both models to compute the amicable example, the performance of both models is improved, which is not the case when only one model is used, as shown in Sec. \ref{sec:untargeted}. More interestingly, in the second scenario, where we use opposite signs for the two models, we observe that the same perturbation significantly improves UMX$_{{\rm HQ}}$ but significantly degrades Demucs$_{ex}$. The results show that we can design a perturbation to be both an amicable example and an adversarial example depending on the model. We believe that this finding is important as it is closely related to security applications, e.g., even if a sample can be separated well with some models, it can still be an adversarial example for other models. We will investigate this further in the future.
\section{Conclusion}
We propose amicable example-based informed source separation, where an imperceptible perturbation added to the mixture is computed to improve the separation. Experimental results show that amicable examples selectively improve the performance of the targeted model and are robust against audio compression. We further propose multi-model multi-purpose learning (MMPL) to individually control the effect of the perturbation on multiple models. MMPL is shown to be capable of computing a perturbation that works as both an amicable example and an adversarial example depending on the model.
\ninept
\bibliographystyle{IEEEbib}
\section{Introduction}\label{intro}
Recent advances in harnessing the topological properties of light beams have led to exciting new developments in various fields \cite{Vector-vortex-Ref1-Forbes, Dunlop-review}. Out of numerous complex and exotic configurations \cite{Vector-vortex-Ref1-Forbes}, vortex and vector beams are perhaps among the paramount examples of structured light. On the one hand, vortex beams carrying Orbital Angular Momentum (OAM) manifest azimuthally twisting wavefront given by $exp (i\ell \phi)$ around the beam propagation axis \cite{OAM-Allen}, where the topological charge $\ell$, also known as the phase winding number, signifies the number of $2\pi$ shifts across the azimuthal coordinate $\phi$ in the transverse plane. On the other hand, vector beams are characterized by their spatially varying polarization, which
is encoded in their spin angular momentum (SAM) \cite{vector-vortex-Ref4-Vector-Zhan}. A vector beam can also be interpreted as a superposition of two optical vortices with counterrotating circular polarization (right circular RCP and left circular LCP) and equal but opposite topological charges, thus null OAM: $\ell_{RCP} = -\ell_{LCP}$. Consequently, the phase difference between the two circularly polarized components governs the spatially varying linear polarization distribution. Contrastingly, vector-vortex beams (VVB) that are tailored simultaneously in their SAM and OAM, can be interpreted as a superposition of two circularly polarized optical vortices of opposite handedness and distinct topological charges: $\ell_{RCP} = \ell_{p} + 1$ and $\ell_{LCP} = \ell_{p} - 1$, where the topological Pancharatnam charge $\ell_{p}$ defined in terms of the Pancharatnam-Berry phase \cite{vector-vortex-Ref46} accounts for the intertwined phase and polarization properties of the VVB \cite{vector-vortex-Ref38}.
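To make the decomposition explicit, adopt the circular basis $\hat{e}_{R,L}=(\hat{x}\mp i\hat{y})/\sqrt{2}$ (one possible sign convention, used here only for illustration). The superposition of the two circularly polarized vortices then factorizes as
$$\hat{e}_{R}\,e^{i(\ell_{p}+1)\phi}+\hat{e}_{L}\,e^{i(\ell_{p}-1)\phi}=e^{i\ell_{p}\phi}\left(\hat{e}_{R}\,e^{i\phi}+\hat{e}_{L}\,e^{-i\phi}\right)=\sqrt{2}\,e^{i\ell_{p}\phi}\left(\hat{x}\cos\phi+\hat{y}\sin\phi\right),$$
i.e. a radially polarized field carrying the common helical phase $e^{i\ell_{p}\phi}$; an additional relative phase of $\pi$ between the two components turns the distribution azimuthal, and $\ell_{p}=0$ recovers a pure vector beam with zero net OAM.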
Structured light beams have enabled numerous interdisciplinary applications \cite{Vector-vortex-Ref1-Forbes, Dunlop-review, vector-vortex-Ref4-Vector-Zhan}. In particular, structured light has opened new frontiers in light-matter interactions: photonic induction and control of ultrafast currents in semiconductors, thus opening the possibilities of reconfigurable optoelectronic circuits \cite{semiconductor-current1, semiconductor-current2}, OAM transfer to valence electrons \cite{OAM-bound-electron}, photoelectrons \cite{OAM-free-electron}, observation of magnetic helicoidal dichroism \cite{Ruchon-3}, and selective excitation of multipolar modes in localized surface plasmons \cite{Multipolar-surface-plasmon}, to name a few. Given the plenitude of applications, controlled generation and manipulation of light beams exhibiting OAM, SAM, or both in the short-wavelength regime are particularly relevant as they can allow extending their applications to nanometric spatial and subfemtosecond temporal scales \cite{Garcia-Review2}. We note that short-wavelength structured beams have been demonstrated at large-scale synchrotron and X-ray free-electron lasers facilities utilizing diffractive optical elements \cite{OAM-FEL, OAM-FEL2}. However, these alternatives often suffer from poor throughput, lack of widespread accessibility, and less tunability in the sense that each desired configuration requires a custom-designed diffractive optical element. On the other hand, high-order harmonic generation (HHG) resulting from the highly-nonlinear interaction of intense laser light with atoms \cite{vector-vortex-Ref48-Schafer-ATI}, leads to highly coherent, ultrashort, and high-frequency (odd multiples of the driving field) radiation rainbow that can extend up to soft-X-ray spectral range \cite{Popmintchev1}. On the macroscopic level, when the emitted radiation from a large number of individual atoms is phase-matched, the HHG allows mapping certain characteristics of the driving beam into the harmonic radiation. Exploiting the field-driven coherent nature of the HHG process, long-wavelength structured-beam driven HHG in noble gases has made long strides, enabling control and manipulation of OAM and/or SAM in the extreme-ultraviolet (EUV) spectral range \cite{Garcia2013, Gariepy, Geneaux, David-article, vector-vortex-Ref66, vector-vortex-Ref68, Self-Torque, vector-vortex-Ref53, Garcia-Vector, Garcia-Fractional, Optica-vector-vortex, ACS-Photonics-Pandey2022, Sanson1}.
In this work, we resort to HHG in a 15 mm long argon-filled gas cell to demonstrate the generation and characterization of EUV structured beams. We show that the HHG process allows for a continuous tunability of the polarization state of the upconverted vector beam from radial to azimuthal. Moreover, by driving HHG with IR vortex beams, we report the production, and single-shot intensity, wavefront characterization of EUV vortex beams exhibiting very high topological charges. Furthermore, we demonstrate that the HHG process allows combining the spatially inhomogeneous polarization distribution of a vector beam and the twisted wavefront of vortex beams in a controlled manner, yielding EUV vector-vortex beams carrying large OAM, and tunable polarization state. Controlled generation and manipulation of EUV light with phase and/or polarization singularities pave the way for their applications in fundamental and applied studies using a table-top high-harmonic source.
\section{Experimental setup}
In Figure \ref{vector-vortex-setup}, we depict the experimental setup to generate EUV vector and VVB from a linearly polarized near-infrared (IR) Gaussian beam of central wavelength 815 nm, pulse duration $\sim$40 fs, $\sim$15 mJ maximum energy, and diameter $\sim$24 mm at $1/e^{2}$. On the one hand, the vertically polarized incoming Gaussian beam is transformed into a vector beam using a large aperture (3-inch diameter) segmented polarization converter composed of eight half-wave plates with azimuthally varying optical axis orientation. On the other hand, the IR vector-vortex driver with topological charge $\ell_{p1}$ is obtained by inserting both a spiral phase plate (SPP) and the polarization converter in the incoming Gaussian beam. We remark that the utilized configuration allows continuous tuning of the polarization state of the vector and vector-vortex driving beams from radial to azimuthal by altering the polarization of the fundamental IR beam from vertical (s-polarization) to horizontal (p-polarization) using a half-waveplate.
We focus the IR vector, vortex, or vector-vortex driving beam into a 15 mm long argon-filled gas cell using a 2-meter focal length lens to generate high harmonics. The remaining IR driving beam after the generation medium is filtered using a 300 nm thick Al filter. Thereafter, we steer the HHG structured beam towards a high-resolution EUV Hartmann wavefront sensor (EUV-HASO, Imagine Optic) through a narrowband 45-degree multilayer flat mirror \cite{XUV-Mirror}. This narrowband flat mirror serves two purposes: spectral and polarization filtering of the structured HHG beams. The vertical polarization intensity component of the $q = 25^{\rm th}$ harmonic of the driving beam centered at $\lambda_{EUV} = $ 32.6 nm is guided to the EUV wavefront sensor: the extinction of neighboring orders exceeds 90\% \cite{XUV-Mirror}, while the experimentally determined polarization selectivity is better than $\sim$10:1. As we will show later, our approach based on spectral and polarization filtering allows for an unambiguous interpretation of the topological charge and/or polarization state of the EUV structured beams. Additionally, a high-magnification ($\sim$6) spectrally selective imaging system facilitates the characterization of the vertical polarization intensity component of HHG beams at the exit plane of the generation medium. The EUV imaging system is composed of a narrowband near-normal incidence ($\sim1.2^{0}$) multilayer converging mirror of focal length $\sim500$ mm. Though this configuration is not sensitive to the polarization due to the small angle of incidence, we image the exit plane of the gas cell through the polarizing 45-degree multilayer flat mirror placed before the EUV CCD (see Fig. \ref{vector-vortex-setup}). This allows us to characterize the vertical polarization intensity component of the harmonic beam at the gas cell exit plane, hence permitting an unambiguous affirmation of the HHG beams' polarization state at the source. As the used EUV multilayer mirrors offer high reflectivity within a small spectral range, we use a spectral-phase control of the IR driver by an acousto-optic modulator (Dazzler, FASTLITE) to spectrally tune the high-harmonic beam. We remark that the truncation of a vector beam can deform its structure upon propagation \cite{Vector-truncation1, Vector-truncation2}. Therefore, in the results presented here, we avoid the aperturing of IR vector and vector-vortex driving beams (see Fig. \ref{vector-vortex-setup}). On the other hand, in the case of HHG driven by IR vortex, we aperture the incoming beam by an iris of 18 mm diameter to optimize the phase matching.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\linewidth]{setup.pdf}
\caption{Experimental setup for the generation and characterization of structured HHG beams. To obtain the IR vector driver, a segmented polarization converter (pol. converter) is inserted in the incoming vertically-polarized fundamental Gaussian beam. The polarization converter and a spiral phase plate (SPP) are concatenated to yield the vector-vortex driving beam. The IR driving beam is focused into a 15 mm long argon-filled gas cell by a 2 m focal length lens to generate high harmonics. The remaining IR driver is removed using a 300 nm thick Al filter. The harmonic beam is guided towards either the EUV Hartmann wavefront sensor (EUV-HASO) located $\sim1.98\: \rm m$ from the HHG source or the nearfield imaging system. Spectral and polarization filtering by the $45^{0}$ multilayer plane mirror allows detection of the vertical polarization component of the $25^{\rm th}$ harmonic beam centered at $\lambda_{EUV} = $ 32.6 nm in the far field of HHG source (at EUV HASO) and the exit of generation medium.}
\label{vector-vortex-setup}
\end{figure*}
\section{Results and discussion}\label{results}
\subsection{EUV vector beams via HHG}
Owing to their remarkable properties, particularly under high numerical aperture focusing conditions \cite{vector-vortex-Ref16, Garcia-vector-2}, vector beams have enabled a wide range of novel applications \cite{vector-vortex-Ref4-Vector-Zhan, vector-vortex-Ref16, Gupta, Payeur}. Consequently, obtaining intense vector beams in the EUV domain is highly desirable. We also note that the frequency upconversion of vector beams in a thin (500 $\rm \mu m$ gas jet) generation medium was reported in \cite{Garcia-Vector}, where the authors used a Rowland-circle type spectrometer as a polarizer to affirm the vectorial polarization of HHG beams. Here, we use a 15 mm long gas cell and exploit our EUV imaging system (Fig. \ref{vector-vortex-setup}) to unambiguously confirm the spatially variant polarization distribution of the HHG beam.
In Figure \ref{HHG-vector-beam}, we show the measured intensity distribution of radially (a) and azimuthally (b) polarized IR vector beams at the waist. In both cases, the intensity distribution presents an annular structure with a dark hole at the center. We note that in contrast to the vortex beams exhibiting a null on-axis intensity due to their helical wavefront \cite{OAM-Allen}, the donut-like profile of a vector beam arises from an on-axis polarization singularity resulting from spatially varying linear polarization \cite{vector-vortex-Ref4-Vector-Zhan}. We mark the tentative polarization distribution using white arrows in Fig. \ref{HHG-vector-beam}(a, b). To affirm their polarization state, we image the vertical polarization intensity component of the IR vector beams. Indeed, the two vertical intensity lobes for radial (Fig. \ref{HHG-vector-beam}(c)), and the two horizontal lobes for azimuthal (Fig. \ref{HHG-vector-beam}(f)), confirm their respective polarization states. As noted before, by rotating the polarization of the fundamental IR Gaussian beam from vertical to horizontal using a $\lambda/2$ wave plate, we can continuously vary the polarization distribution from radial ($\lambda/2 = 0^{0}$) to azimuthal ($\lambda/2 = 45^{0}$). In Figure \ref{HHG-vector-beam}(d, e), we present the vertical polarization intensity distributions for two intermediate polarization states: the $\lambda/2$ wave plate is rotated by (d) $\sim-30^{0}$, and (e) $\sim30^{0}$ from the neutral axis. In all the cases, the two-lobed intensity profile signifies their vectorial polarization distribution, while their orientation represents radial, azimuthal, or intermediate polarization states.
To generate high-harmonics, the IR vector driving beam is focused into the argon-filled gas cell, as depicted in Fig. \ref{vector-vortex-setup}. Subsequently, the vertical polarization intensity component of the $25^{\rm th}$ harmonic at the gas cell exit plane is acquired by employing the polarization-selective monochromatic imaging system (see Fig. \ref{vector-vortex-setup}). In the bottom row of Fig. \ref{HHG-vector-beam}, we show the intensity distribution of the vertical polarization component of HHG vector beams exhibiting radial (g), azimuthal (j), and intermediate polarization distributions (h, i). Remarkably, akin to the vertical polarization components of IR vector beams (Fig. \ref{HHG-vector-beam}(c-f)), the HHG beams also display a two-lobed profile whose orientation closely follows the trend of the IR driving beams. Thus, the coherent nature of the HHG process indeed permits the generation of EUV beams with vectorial polarization distribution in an extended generation medium (15 mm gas cell) from the IR vector drivers. An even looser focusing geometry and a longer gas cell could therefore allow the production of intense EUV vector beams, opening the prospect of their practical applications in the EUV domain. Additionally, we note that HHG allows upscaling of the photon energies up to the soft X-ray wavelength range \cite{Popmintchev1}, hence further adding to the utility of HHG-based polarization-structured short-wavelength pulses. We also remark that unlike frequency upconversion in nonlinear crystals, where the spatially varying linear polarization of the fundamental vector beam is unfavorable, the uniqueness of the HHG process permits the generation of EUV vector beams.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\linewidth]{Vector.pdf}
\caption{Polarization-structured EUV vector beams via HHG. Left panel: experimental intensity distribution of the IR vector beams with (a) radial and (b) azimuthal polarization distributions at the waist. Right panel: top rows shows the vertical polarization (‘s’-polarization) component of the driving beam bearing radial (c), azimuthal (f), and intermediate polarization states (d, e). Note that the white arrows in the left panel represent tentative polarization distribution, which is affirmed through the intensity profiles of vertically polarized components depicted in the right panel. In the bottom row, we show the vertical polarization components of the HHG vector beams ($25^{\rm th}$ harmonic) manifesting radial (g), azimuthal (j), and in-between polarization distributions (h, i). These intensity distributions of HHG vector beams are acquired using the monochromatic polarization selective EUV imaging system that is configured to image the exit plane of the gas cell (see Fig. \ref{vector-vortex-setup}).}
\label{HHG-vector-beam}
\end{figure*}
\subsection{Generation of high-harmonic vector-vortex beams}
Here, we detail the high-harmonic frequency upconversion of IR VVB driving beams of different topological Pancharatnam charges that are obtained by concatenating SPP and the polarization converter (see Fig. \ref{vector-vortex-setup}). In Figure \ref{IR-vector-vortex}, we depict the theoretical (left panel) and experimental (right panel) characterization of the IR vector-vortex driving beam of $\lvert \ell_{p1}\rvert = 1$ (top row) and $\lvert \ell_{p1}\rvert = 2$ (bottom row). The theoretical panel shows the evolution of the intensity profile and the polarization distribution—denoted by the embedded polarization ellipses—as the driving beam propagates from the polarization converter (a, b) to the gas cell (c, d). In the experimental panel, in addition to the intensity distribution of the IR driver at the waist (e, f), we present the separated RCP and LCP components comprising the total beam for the respective cases in (g, h). The topological charge of the RCP and LCP components is inferred by using the relationship between the topological charge and the radius of maximum intensity \cite{R-max}. We note that the divergence of Laguerre-Gauss modes depends on the topological charge \cite{Divergence-Padgett}. As the two components comprising the total beam exhibit different topological charges, vector-vortex beams manifest complex behavior upon propagation. In particular, the radial polarization right after the polarization converter (a, b) transforms into a complex polarization distribution resembling a full-Poincaré beam \cite{vector-vortex-Ref40, vector-vortex-Ref44}. Nevertheless, a ring of linear polarization (indicated by the green lines in (c, d)) occurs wherever RCP and LCP components overlap with equal intensity. Interestingly, in the region of linear polarization, the dephasing induced between the RCP and LCP components due to the difference in their Gouy phase leads to the evolution of the initial polarization distribution from radial at the polarization converter (a, b) to azimuthal at the gas target (c, d). Contrastingly, in the case of a pure vector beam comprising two counterrotating circularly polarized vortices with equal and opposite topological charges ($\ell_{RCP} = -\ell_{LCP}$), there is no polarization rotation due to the absence of phase shift. This aspect is evident from the experimental characterization presented in Fig. \ref{HHG-vector-beam}(c to f), where the initial polarization state remains static as the IR vector beams propagate towards the waist after the focusing lens.
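This rotation of the polarization distribution can be estimated from the Gouy phases of the constituent modes (a simple estimate, assuming pure Laguerre-Gauss modes with zero radial index). The relative phase accumulated between the two circular components from the collimated region to the focus is
$$|\Delta\psi| \simeq \bigl||\ell_{RCP}|-|\ell_{LCP}|\bigr|\,\Delta\!\left[\arctan(z/z_R)\right] = 2\times{\pi\over 2} = \pi,$$
where $z_R$ is the Rayleigh range; note that $\bigl||\ell_{RCP}|-|\ell_{LCP}|\bigr|=2$ both for $\ell_{p1}=\pm1$ ($|\ell|=2$ and $0$) and for $\ell_{p1}=\pm2$ ($|\ell|=3$ and $1$). Since the local linear polarization rotates by half the relative phase between the circular components, the distribution on the ring of linear polarization rotates by $\pi/2$, consistent with the radial-to-azimuthal evolution from (a, b) to (c, d).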
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\linewidth]{IR-vector-vortex.pdf}
\caption{Theoretical (left panel) and experimental (right panel) characterization of the IR vector-vortex driver of topological charges $\lvert \ell \rvert = 1$ (top row) and $\lvert \ell \rvert = 2$ (bottom row). For radially polarized driving beams, we show the theoretical intensity maps with embedded polarization ellipses right after the polarization converter (a, b) and after propagating to the gas cell (c, d). Note that the clean input beam (a, b) with linear radial polarization transforms into a complex polarization distribution at the gas target (c, d). The right panel depicts the experimental profiles (e, f), and the separated RCP and LCP components at the gas target for IR VVB of topological charge (g) $\ell_{p1} = +1$, (i) $\ell_{p1} = -1$, (h) $\ell_{p1} = +2$, and (j) $\ell_{p1} = -2$. The helicity of the circular polarization between the two parts that compose the total beam reverses when the sign of topological charge is changed from positive (g, h) to negative (i, j).}
\label{IR-vector-vortex}
\end{figure*}
To drive HHG, the IR VVB is focused into the gas cell (see Fig. \ref{vector-vortex-setup}). In Figure \ref{vector-vortex-l2-rad-pol}, we depict the far field vertical polarization intensity (top row) and wavefront (bottom row) profiles of the HHG VVB driven by the IR beam of topological Pancharatnam charge $\ell_{p1} = -1$ (c, d), and $\ell_{p1} = +2$ (g, h). To contrast this with the behavior of harmonic vortex beams exhibiting uniform vertical polarization, we present the intensity and wavefront maps of the HHG-OAM beam for IR drivers of topological charge $\ell_{1} = -1$ (a, b) and $\ell_{1} = +2$ (e, f). Note that all the wavefronts here are represented in the unit of the central wavelength of EUV light ($\lambda_{EUV} = 32.6 \: \rm nm$). Regarding uniformly polarized HHG vortex beams, we observe an annular intensity distribution (a, e) and azimuthally twisting wavefront (b, f). Interestingly, the $25^{\rm th}$ harmonic beam bears a total peak-to-valley (PtV) wavefront variation of $\sim -24.9 \lambda_{EUV}$ (b) and $\sim +49.2 \lambda_{EUV}$ (f) for the IR vortex driver of $\ell_{1} = -1$ and $\ell_{1} = +2$, respectively. Therefore, the total wavefront twist in the transverse plane, which indicates the topological charge of the vortex beams, is within $2\%$ of the theoretical expectation. These results unambiguously affirm the topological charge scaling of the HHG-OAM beams with harmonic order \cite{Garcia2013, ACS-Photonics-Pandey2022}: $\ell_{q} = q\ell_{1}$. Moreover, the sense of wavefront rotation changes from anticlockwise (b) to clockwise (f) following the sign of the topological charge of the driving beam, indicating tunability of OAM helicity for HHG-OAM beams. We also remark that the intensity and wavefront of the harmonic vortex beams are reconstructed from single-shot acquisitions, therefore showing that an extended generation medium allows the production of intense HHG-OAM beams without degradation of their high-charge vortex structure.
\begin{figure*}[ht]
\centering
\includegraphics[width=1\linewidth]{HHG-vector-vortex.pdf}
\caption{Characterization of high-OAM HHG vortex and vector-vortex beams. Intensity (top row) and wavefront (bottom row) profiles of the $25^{\rm th}$ harmonic beam for IR vortex driver of (a, b) $\ell_{1} = -1$ and (e, f) $\ell_{1} = +2$. In (c, d) and (g, h), we depict the vertical polarization intensity and wavefront components of the HHG beam for radially polarized vector-vortex drivers of $\ell_{p1} = -1$ and $\ell_{p1} = +2$, respectively. In comparison to harmonic vortex beams with uniform linear-vertical polarization, the two-lobed profiles with vertical orientation in (c, d) and (g, h) represent spatially inhomogeneous linear polarization with the radial distribution. Additionally, azimuthally twisting wavefront in both the lobes with clockwise (h) and anticlockwise (d) rotation for the positive and negative sign of the topological charge indicates the control of the OAM helicity of harmonic VVB.}
\label{vector-vortex-l2-rad-pol}
\end{figure*}
In contrast to the uniformly polarized EUV vortex, the vertical polarization intensity projection of the HHG vector-vortex beams displays a two-lobed profile with vertical orientation, indicating spatially inhomogeneous polarization distribution in the farfield of the HHG source. Note that though the driving beam at the gas cell plane exhibits a complex polarization distribution (Fig. \ref{IR-vector-vortex}(c, d)), the strong dependence of the harmonic efficiency on the ellipticity of the driving beam restricts efficient generation to the regions of low ellipticity \cite{vector-vortex-Ref72}. We remark that as the initial radial polarization at the polarization converter transforms to azimuthal at the focus of the driver (see Fig. \ref{IR-vector-vortex}(a-d)), the high-harmonics are generated initially with azimuthal polarization at the gas cell. However, as the Gouy phase-induced phase shift is the same for the IR driving and HHG beams \cite{Optica-vector-vortex}, the azimuthal polarization at the gas target remarkably transforms to near-radially polarized high-order harmonics in the far-field. Consequently, in the farfield of the HHG source (Fig. \ref{vector-vortex-l2-rad-pol}(c, d) and Fig. \ref{vector-vortex-l2-rad-pol}(g, h)), the vertical polarization projections exhibit two vertically oriented lobes. Furthermore, in both cases, the wavefront manifests an azimuthal twist whose direction of rotation follows the sign of the topological Pancharatnam charge of the IR driver. On the one hand, these results demonstrate that the harmonic beam indeed manifests combined characteristics of vector and vortex beams. On the other hand, they also reveal that the HHG process allows controlling the OAM helicity of the harmonic VVB.
We remark that, as for the HHG vector beams (Fig. \ref{HHG-vector-beam}), we were able to generate EUV VVB with intermediate polarization states by varying the polarization of the fundamental IR beam, as depicted in the left panel of Fig. \ref{vector-vortex-l2-inter-pol}. For the azimuthally polarized IR driver of $\ell_{p1} = -2$, the right panel shows the vertical polarization components of (e) intensity and (f) wavefront. In all the cases, vector and vortex characteristics of the HHG beams are apparent from the two-lobed intensity distribution and the azimuthally twisting wavefront, respectively. Besides, as expected, the direction of wavefront rotation reverses with a change in the sign of $\ell_{p1}$ (Fig. \ref{vector-vortex-l2-inter-pol}(f)). In conclusion, the experimental results in Figs. \ref{vector-vortex-l2-rad-pol}, \ref{vector-vortex-l2-inter-pol} demonstrate the controlled generation of HHG VVB and the manipulation of their polarization state as well as their OAM helicity.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\linewidth]{HHG-vector-vortex-int-pol.pdf}
\caption{HHG vector-vortex beams driven by $\ell_{p1} = +2$ and $\ell_{p1} = -2$ IR beam with intermediate and azimuthal polarization states. Left panel: measured intensity (top row) and wavefront (bottom row) of the HHG vector-vortex beams with polarization states in between radial and azimuthal for $\ell_{p1} = +2$ IR driver. Note that the IR driving beams with intermediate polarization states are generated by varying the polarization of the fundamental Gaussian beam using a half-waveplate. The orientation of the half-waveplate is noted on top of each column: $0^{0}$ corresponds to the neutral axis that leaves the initial vertical polarization of the Gaussian beam unchanged. The vertical polarization intensity distributions of the HHG beam show a rotated two-lobed profile, while the wavefronts manifest a clockwise twisting wavefront. Right panel: (e) intensity, and (f) wavefront of vertical component for azimuthally polarized HHG VVB driven by $\ell_{p1} = -2$ IR beam. Remarkably, the handedness of the wavefront twist reverses from clockwise (b, d) to anticlockwise (f) when the sign of topological charge changes from positive (b, d) to negative (f).}
\label{vector-vortex-l2-inter-pol}
\end{figure*}
\subsubsection{Topological charge scaling of HHG VVB}
In the case of a complete intensity distribution, as for the uniformly polarized harmonic vortex beams (Fig. \ref{vector-vortex-l2-rad-pol}(a, b) and Fig. \ref{vector-vortex-l2-rad-pol}(e, f)), the topological charge can be conveniently retrieved from the overall wavefront twist along the azimuthal coordinate. However, as the HHG vector-vortex beams exhibit a non-uniform linear polarization distribution, the spectrally selective EUV mirror at $45^{0}$ angle of incidence offers maximum reflectivity for the vertically polarized component, while the horizontal polarization component is poorly reflected (a factor of 10:1). Consequently, the HHG vector-vortex beams detected by the wavefront sensor present a two-lobed intensity distribution (Figs. \ref{vector-vortex-l2-rad-pol}, \ref{vector-vortex-l2-inter-pol}). Therefore, to deduce the topological charge in these cases, we adopt a different approach. We subtract the theoretically defined phase of different topological charges from the experimental wavefront map. Thereafter, the theoretical order that yields the minimum root-mean-square (RMS) wavefront residue designates the topological charge of the HHG VVB.
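A minimal Python sketch of this procedure is given below (the beam axis is taken at the array centre by default, whereas in practice it would be estimated from the measured intensity; negative charges are handled by scanning a negative range):
\begin{verbatim}
import numpy as np

def retrieve_charge(wavefront, mask, charges=range(15, 36), center=None):
    """Return the charge whose azimuthal phase leaves the smallest RMS
    residue on the measured wavefront map (given in waves)."""
    cy, cx = center or ((wavefront.shape[0] - 1) / 2,
                        (wavefront.shape[1] - 1) / 2)
    yy, xx = np.indices(wavefront.shape)
    phi = np.arctan2(yy - cy, xx - cx)            # azimuthal coordinate

    def rms_residue(l):
        d = 2 * np.pi * wavefront[mask] - l * phi[mask]
        d = np.angle(np.exp(1j * d))              # wrap residual to (-pi, pi]
        return d.std() / (2 * np.pi)              # RMS residue in waves

    return min(charges, key=rms_residue)
\end{verbatim}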
\begin{figure*}[ht]
\centering
\includegraphics[width=0.8\linewidth]{vector-vortex-charge.pdf}
\caption{Deducing the topological charge of experimental HHG vector-vortex beams. (a) For the radially polarized HHG vector-vortex driven by the $\ell = -1$ IR beam, we show the RMS wavefront residue in $\lambda$ units after subtracting the theoretical phase of various topological charges. A minimum RMS wavefront residue is obtained for $\ell = 25$, which coincides with the theoretically formulated scaling of the Pancharatnam topological charge with the harmonic order: $\ell_{pq} = q\ell_{p1}$. Residual phase after subtracting (b) $\ell = 20$, (c) $\ell = 25$, and (d) $\ell = 30$.}
\label{vector-vortex-charge}
\end{figure*}
We present the result of this analysis for the selected case of HHG vector-vortex driven by the radially polarized beam of $\ell_{p1} = -1$ in Fig. \ref{vector-vortex-charge}. Remarkably, a minimum RMS wavefront residue is obtained after subtracting the theoretical azimuthal phase of order $\ell = 25$ (Fig. \ref{vector-vortex-charge}(a)). To further corroborate this aspect, we show the residual wavefronts after subtracting the theoretical orders of $\ell = 20$, $\ell = 25$, and $\ell = 30$ in Figs. \ref{vector-vortex-charge}(b-d), respectively. Indeed, the flattest wavefront, with the minimum RMS residue, is obtained after subtracting the theoretical phase of topological charge $\ell = 25$ (comparing Figs. \ref{vector-vortex-charge}(b-d)). This behavior shows that for vector-vortex driven HHG, the topological charge of the $q^{\rm th}$ harmonic scales linearly with the Pancharatnam topological charge of the IR driving beam \cite{Optica-vector-vortex}: $\ell_{pq} = q\ell_{p1}$. Indeed, by considering separate conservation of SAM and OAM while satisfying parity conservation that restricts the number of IR photons participating in the HHG process to an odd integer, we can show that HHG VVB results from the superposition of two circularly polarized vortices with opposite helicity and distinct topological charges \cite{vector-vortex-Ref77, Optica-vector-vortex, Garcia-Fractional}: $\lvert \ell_{q, \: RCP} - \ell_{q, \: LCP}\rvert = 2$. For the IR driver of Pancharatnam topological charge $\ell_{p1} = -1$, the harmonic VVB is composed of: $\ell_{q, \: RCP} = q\ell_{p1} - 1$ and $\ell_{q, \: LCP} = q\ell_{p1} + 1$. Therefore, the HHG Pancharatnam charge, which is the average of the topological charge of RCP and LCP components, is equal to $-25$ in this case. On the one hand, these results demonstrate that the selection rule for EUV vectorial-vortices obtained through HHG is governed by the upscaling of the topological Pancharatnam charge and not the OAM of the constituent RCP and LCP modes \cite{Optica-vector-vortex}. On the other hand, they simultaneously reflect on the merit of our experimental approach based on EUV wavefront sensing for the characterization of HHG structured beams.
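These selection rules can be made transparent by photon counting. If the $q$th harmonic photon is built from $n_{+}$ and $n_{-}$ driver photons absorbed from the two circular components of the VVB (carrying OAM $\ell_{p1}\pm1$ and SAM $\pm1$, with the convention of Sec.~\ref{intro}), energy and SAM conservation impose
$$n_{+}+n_{-}=q, \qquad n_{+}-n_{-}=\pm1,$$
both compatible with parity since $q$ is odd, and the emitted photon then carries
$$\ell_{q}=n_{+}(\ell_{p1}+1)+n_{-}(\ell_{p1}-1)=q\,\ell_{p1}\pm1.$$
The two allowed channels thus yield circularly polarized harmonic vortices of opposite helicity separated by two units of OAM, whose average, the Pancharatnam charge, is $\ell_{pq}=q\,\ell_{p1}$; for $q=25$ and $\ell_{p1}=-1$ this gives $\ell_{pq}=-25$, in agreement with the analysis above.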
\section{Conclusion}\label{CONCLUSIONS}
We demonstrate frequency upconversion of near-infrared structured beams to the EUV spectral range via HHG in a 15 mm long generation medium. Our work shows that HHG facilitates the generation of EUV vector beams exhibiting spatially variant linear polarization with tunable distribution ranging from radial to azimuthal. Concerning OAM-driven HHG, we demonstrate production and complete spatial characterization of intensity and wavefront of EUV vortex beams carrying topological charge up to 50. Owing to the long generation medium and thus a larger emitting volume, the signal level was sufficiently high to permit single-shot intensity and wavefront characterization of high topological charge HHG vortex beams. The characterization method based on EUV wavefront sensing allowed us to unambiguously assess the topological charge and OAM helicity of the HHG vortex beams, affirming the linear scaling of HHG beam topological charge with harmonic order \cite{Garcia2013, ACS-Photonics-Pandey2022}. Finally, through synchronous control of SAM and OAM of the driving beam, we show that the HHG process allows controlled generation of vector-vortex beams, thus merging the spatially varying polarization of a vector beam and the twisted wavefront of vortex beams. Moreover, the presented results reveal the possibility of tuning not only the topological charge but also the polarization state and the OAM helicity of the EUV VVB. Through wavefront characterization of HHG VVB, we experimentally demonstrate that the selection rule for EUV vectorial-vortices obtained through HHG is governed by the upscaling of topological Pancharatnam charge and not the OAM of the constituent RCP and LCP modes.
Ultrafast structured EUV light further widens the scope of table-top high-harmonic sources for fundamental and applied studies. On the one hand, tightly focused high topological charge ultrafast EUV vortex beams carrying large OAM may enable light-matter OAM transfer to atoms and molecules \cite{vector-vortex-Ref34}, and photonic induction of skyrmionic defects in magnetic materials \cite{Skyrmion-OAM}. On the other hand, intense EUV vector beams obtained through HHG in an extended generation medium pave the way for short-wavelength lithography \cite{Garcia-vector-67, Garcia-vector-68}, microscopy \cite{Garcia-vector-65}, and diffraction imaging applications \cite{Garcia-vector-25,Garcia-vector-66}. Our future work involves investigating the tight focusing of these structured EUV beams using a Kirkpatrick-Baez focusing system. Concerning HHG VVB with combined characteristics of vector and vortex beams, our work opens the possibility of synthesizing attosecond light-springs \cite{Garcia2013} tailored to exhibit spatially varying polarization \cite{Optica-vector-vortex}, therefore offering a new degree of freedom in attosecond beams.
\backmatter
\bmhead{Acknowledgments}
The project leading to this publication has received funding from the ERC under the European Union’s Horizon 2020 research and innovation program (ATTOSTRUCTURA - grant agreement No 851201). We acknowledge the computer resources at MareNostrum and the technical support provided by Barcelona Supercomputing Center (FI-2020-3-0013). The authors thank CEMOX installation at IOGS, Palaiseau, France, for the design and fabrication of the multilayer optics. We acknowledge the technical support of IJC Lab staff J. Demailly and O. Neveu. We are thankful to F. Quéré (Institut rayonnements matière du CEA, Saclay) for providing the polarization converter used in this study. \newline
\noindent \textbf{Funding:} European Research Council (851201); Ministerio de Ciencia de Innovación y Universidades, Agencia Estatal de Investigación and European Social Fund (PID2019-106910GB-I00, RYC-2017-22745); Junta de Castilla y León and FEDER Funds (SA287P18); Université Paris-Saclay (2012-0333TOASIS, 50110000724-OPTX, PhOM REC-2019-074-MAOHAm); Conseil Régional, Île-de-France (501100003990); Barcelona Supercomputing Center (FI-2020-3-0013). \newline
\noindent \textbf{Data availability:} The raw data reported in this study are available from the corresponding authors upon request. \newline
\noindent \textbf{Competing interests:} The authors declare no competing interests.
\bibliographystyle{naturemag}
\section{Introduction}
One of the most characteristic predictions of chiral soliton models, such as the Skyrme model, is the possible existence
of the $SU(3)-$multiplets of dibaryons, tribaryons, etc., whose strange components could be stable relative to strong
interactions \cite{1} - \cite{5}. These predictions followed the paper by Jaffe \cite{6} where the existence of a strange
dibaryon, the $H-$particle, was predicted in the MIT bag model. The experimental checking of these predictions would be
very important since it would either provide evidence for the whole concept of baryons as solitons of some effective Lagrangian,
or lead to substantial modifications of these models.
The purpose of the present note is to clear up some general properties of exotic baryon systems in the framework of chiral
soliton models and the collective coordinates quantization picture which have not been well understood in previous
investigations. We shall use here the rigid rotator model criticized, e.g. in \cite{7}, so the results obtained should
be considered as a starting point for more elaborate investigations. Arguments based on a very simple expression for the
quantum correction due to rotation in a "strange" direction show that this correction has the property of stabilizing
the baryon systems more strongly than the quantum corrections due to other zero modes, the value of the binding
energy being increased with increasing baryon number $B$ of the system.
Below we shall consider objects which can be interpreted as being built of a minimal number of quarks (valence quarks
only) as well as those which contain additional quark-antiquark pairs. The latter may have positive strangeness as well
as a ratio of strangeness to baryon number $|S/B| > 3$, for some of the possible multiplets. The 'price' to be paid for the addition of each $q\bar q$-pair is estimated, and it is shown that this price is modest. Finally, arguments are given that in the case of strange skyrmion crystals quantum corrections due to zero modes stabilize them more strongly than $SU(2)-$quantized
skyrmion crystals (neutron crystals).
\section{$SU(3)$ - quantum corrections for nonexotic and exotic systems}
Our consideration is based on the well known expression for the energy of baryon systems obtained from the $SU(2)-$solitons
by means of rotations in the space of $SU(3)$ collective coordinates \cite{1} - \cite{5}. This energy consists of the
classical mass of the soliton $M_{cl}$ and quantum corrections due to zero modes $E_{rot}$ as well as nonzero modes
(vibration, breathing, etc.) which are difficult to calculate and for this reason are usually omitted:
$$E= M_{cl} + E_{rot} = $$
$$= M_{cl} + {1\over \lambda_s}\left[C_2(SU(3)) - {3Y'^2\over 4} - N(N+1)\right] + {N(N+1)\over \lambda_r} +... \eqno(1) $$
$M_{cl}$ is bound, at least for not very large baryon numbers $B$ (the masses of toroidal bound skyrmions with $B=2,\,3,\,4$
have been calculated at first in ref. \cite{8}). $C_2(SU(3))=[p^2+q^2+pq + 3(p+q)]/3$ is the $SU(3)$ Casimir operator, $p$
and $q$ being the numbers of upper (lower) indices in the spinor describing the $SU(3)$ multiplet under consideration,
$Y'$ and $N$ are the right hypercharge and isospin (the mistake in the interpretation of $N$ in \cite{3} has been corrected in
\cite{5}), $\lambda_s$ and $\lambda_r$ are the moments of inertia characterizing the system, they differ by a factor of two
from their standard definition (accepted in classical mechanics, e.g.). The terms depending on the orbital momentum, like
$J(J+1)/\Lambda_r$, will be discussed later. The expression $(1)$ was obtained for systems possessing the generalized axial
symmetry \cite{8,5}. As it was recently established in \cite{9} for systems with baryon numbers $B = 3 - 6$ at least,
there exist configurations of more complicated form than the torus-like one, which have even lower energy. However, we
expect that some changes in the spatial form of the solitons will not influence the overall structure of the expression $(1)$,
so our arguments hold also for this case.
Let us consider the $SU(3)-$multiplets for which the right hypercharge $Y'$ is the largest in the multiplet (we call them the
"minimal" multiplets):
$$ Y' = {p+2q \over 3}, \eqno(2) $$
From the quantization condition \cite{10} it follows that $Y' = N_cB/3 $, but here we shall restrict ourselves to the case
of the number of colors $N_c=3$. Since the right isospin $N = p/2$, it is easy to establish that the coefficient of $1/\lambda_s$
in $(1)$ is equal to $(p+2q)/2 = 3B/2$. So, for the whole family of minimal multiplets with fixed baryon number and $N$ varying
from $0$ or $1/2$ up to $3B/2$ we obtain the universal relation
$$E_{rot} = {3B\over 2\lambda_s} +{N(N+1)\over \lambda_r} +{J(J+1)\over \Lambda_r} +$$
$$+ J_z^2 \left[{1\over B^2\lambda_z} - {1\over B^2\lambda_r} -{1\over \Lambda_r} \right]. \eqno (3) $$
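For completeness, substituting $Y'=B=(p+2q)/3$ and $N=p/2$ into the coefficient of $1/\lambda_s$ in $(1)$ one finds
$$C_2(SU(3))-{3Y'^2\over 4}-N(N+1)={p^2+q^2+pq+3(p+q)\over 3}-{(p+2q)^2\over 12}-{p\over 2}\left({p\over 2}+1\right)={p+2q\over 2}={3B\over 2}, $$
the quadratic terms cancelling identically, so that the coefficient depends on $p$ and $q$ only through the combination $p+2q=3B$. This is the reason why all multiplets of the family share the same "strange" rotational energy.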
It was already noted in \cite{1} that the first coefficient in $(3)$ is the same for the octet and decuplet of baryons (hyperons).
Therefore, measurements of their masses cannot help in cross-checking the $\lambda_s$. The same holds also for families of
baryonic systems with arbitrary baryon numbers. These families consist of $\overline{10}-,\; 27-,\;35-,\;28-$plets for $B=2$,
of $\overline{35}-,\; 64-,\;81-,\;80-$ and $55-$plets for $B=3$, etc \cite{11}.
For the toroidal few-baryon systems considered in \cite{4,8}, $\lambda_s(B)$ increases proportionally to the baryon number of the system, or even faster. The same holds also for $\lambda_r(B)$. The orbital inertia $\Lambda_r(B)$ is considerably larger than $\lambda_s$ and $\lambda_r$; it grows with increasing $B$ faster than $B^2$. For this reason the $\Lambda_r$-dependent contribution in $(3)$ is smaller than the first two terms, if $J$ is not very large, and does not play a crucial role in the binding of the system\footnote{Even for a rotating neutron star the energy of the orbital rotation is considerably smaller than the sum of the rotational energies of individual skyrmions (neutrons) if the angular velocity of rotation is not very large (period $T>10^{-3}\,s$).}. It follows immediately from $(3)$ that the energy (due to zero modes) per unit
baryon number decreases with increasing $B$ and goes to zero for rotations in the "strange" direction. The energy due to
isotopic rotations decreases like $1/B^2$ for the multiplets with the smallest value of $N$ ($1/2$ or $1$), but is
approximately constant for $N=N_{max} = 3B/2$.
The contribution to the binding energy per unit baryon number for the systems with smallest $N$ increases with increasing $B$
and approaches the value
$${\epsilon\over B} \simeq {3\over 2\lambda_s(B=1)} + {3\over 4 \lambda_r(B=1)}. \eqno (4) $$
Since usually $\lambda_s$ is a few times smaller than $\lambda_r$, "strange" rotational energy binds baryonic systems more
strongly than isotopic rotations \cite{1,4,5}. The numerical results for the binding energy depend on the values of the
model parameters. It follows from $(4)$ that the asymptotic relative binding energy per baryon equals 0.33 in the Skyrme
model \cite{1,4} and 0.46 in the model with an explicit scalar meson \cite{5}. These values seem to be too large, but they
are smaller in the case of few-baryon systems. The relative binding energy of quantized dibaryons (i.e. divided by the total
energy of possible final states) is less sensitive to the type of the model and to the values of the parameters, and in many
cases of interest is close to $0.2$.
Let us now go to the "nonminimal" representations for which the largest possible hypercharge of the system is larger than its baryon number. Such multiplets have to contain components with positive strangeness. In the quark model language these systems
could contain some number $m$ of quark-antiquark pairs. We have now
$$ {p+2q\over 3} = B+m $$ and
$$ Y' ={p+2q \over 3} - m, $$
if we assume that in a nonminimal representation the number of indices is increased by $m$ for both $p$ and $q$ in comparison with the corresponding minimal representation, i.e. $(p,q)=(p_0+m,\,q_0+m)$. Of course, this is not the only possibility. In this case the right isospin
$N=(p+m)/2 = N_0 +m$, and an elementary calculation yields
$$ C_2(SU(3)) - N(N+1) - {3B^2\over 4}\; = \;{3B\over 2} +m\left({3B\over 2} +m -N +1\right). \eqno(5)$$
So, the increase in energy of the system due to the addition of $m\;\, q\bar q $ pairs contains terms linear in $m$ as well
as quadratic ones. At fixed $B$ and $m$ expression $(5)$ decreases with increasing $N$ and is minimal for
$$ N_{max} = {3B\over 2} +m, $$
the second term in the above formula being equal to $m$. Taking into account the contribution to the energy due to isotopic
rotations we obtain for the total increase of the energy due to the addition of $m$ quark-antiquark pairs:
$$\delta E_{rot} = m\left[{3B/2 +1 +m -N\over \lambda_s} + {2N+1 -m\over \lambda_r}\right]. \eqno(6) $$
For an octet of baryons, e.g., $\delta E_{rot} =2/\lambda_s + 3/\lambda_r$, at $m=1$ \footnote{In this case for starting
configuration $(p_0,q_0) =(1,1)$, $N_0=1/2$, and after addition of one $q\bar q-$pair we have $(p,q)=(2,2)$, $N=3/2$.}.
The contribution of the rotation into the "strange" direction decreases with increasing $N$ down to $m/\lambda_s$,
so, when strange quarks are "dissolved" in multiplets of higher dimension (in $p$ or $N$) the energy necessary
for this "dissolving" decreases. However, the energy of the isotopic rotations increases.
There are also other ways to go to nonminimal representations by means of asymmetric increase of $p$ and $q$. E.g.,
one can increase separately $p$ or $q$ by $3m$, which will correspond to the addition of $m$ or $2m$ quark-antiquark pairs.
In the first case $N=(m+p)/2$, in the second one $N=p/2 + m$, the expression for $\delta E$ remains the same with
the substitution $m\to 2m$ in the latter case. In the Skyrme model with the parameters $F_\pi = 108\,MeV$, $e=4.84$
we have $\lambda_s^{-1} \simeq 0.3\,GeV$ and $\lambda_r^{-1} \simeq 0.1\,GeV$, and the energy surplus for $m=1$ equals
$$\delta E_{rot} = \left[0.3\left({3B\over 2} +2 -N\right) + 0.2\, N\right]\, GeV. $$
For the octet example above ($B=1$, $N=3/2$, $m=1$) this gives $\delta E_{rot} = 0.9\, GeV$.
In the model with an additional scalar meson \cite{5} it is larger by a factor $\sim 1.5$. So, $\delta E$ has the
desired order of magnitude because it is expected that such states should be above the threshold for the
decay due to strong interactions.
\section{Quantum corrections in the case of skyrmion crystals}
The binding energy of the extended skyrmion crystal consists of two parts: the classical binding energy and the
binding energy due to quantum corrections \cite{12} - \cite{17}. The first one depends on the particular symmetry
of the crystal. We shall concentrate now on the contribution of the quantum corrections to the binding energy
and show that there is a principal distinction between the contribution due to the rotations in the "strange" and
"nonstrange" direction.
Let us recall the arguments of ref. \cite{12} which have led to the conclusion that there is an important contribution
of the quantum corrections to the binding energy of the crystal. The basic assumption is that flavor rotations of
the whole crystal can be described by one and the same set of collective coordinates, i.e. the crystal is coherently
rotated in flavor space. It follows immediately that the corresponding moment of inertia of the crystal $\Lambda = n\lambda$,
where $n$ is the number of unit cells in the crystal, and $\lambda$ is the moment of inertia of the unit cell. The
rotational energy of the whole crystal $E_{rot}= T(T+1)/n\lambda$ should be compared with $nt(t+1)/\lambda$ where $t=1/2$
is the isospin of the unit cell. For the neutron crystal $T=nt=n/2$, and for large $n$ we have
$$E_{rot}= nt^2/\lambda\,<\, nt(t+1)/\lambda, \eqno(7) $$
i.e. the effect of binding arises. Its physical sources are the quantum fluctuations of the isospin momentum of the
free neutron which make $\vec t^2$ three times greater than $t_z^2$. These fluctuations are suppressed inside the
crystal, and for the whole crystal they are negligible. It was assumed in the above arguments that the inertia of the
unit cell does not differ too much from that of the free neutron.
If we consider now the $SU(3)$ flavor rotations we obtain under the same assumptions for the energy of the crystal:
$$E_{rot} = {1\over n\lambda_s}\left[C_2(SU(3))[P,Q] -{3B^2\over 4}-N^{tot}(N^{tot} +1)\right]+
{N^{tot}(N^{tot}+1)\over n\lambda_r},
\eqno(8) $$
$B=n$, which should be compared with the analogous expression for $n$ unit cells. $P$ and $Q$ are the numbers of indices
in the spinor describing the crystal in the $SU(3)$ space, $N^{tot}$ is the corresponding right isospin, $N^{tot} = P/2$.
The simplest and probably most natural possibility is to assume that $P=np$ (by analogy with the neutron crystal).
In this case we obtain
$$E_{rot} = {3B\over 2\,n\lambda_s} +{n^2\over 4\,n\lambda_r}\, =\, {3\over 2\lambda_s} + {n\over 4 \lambda_r}. \eqno(9)$$
This expression is quite similar to the expression for $E_{rot}$ in the case of few-baryon systems \footnote{The first
term in Eq. $(9)$ should be compared with corresponding contribution for $B=n$ separate cells (baryons), equal to
$3n/2\lambda_s$}. The distinctions are
that now $n\lambda$ enters instead of $\lambda(B)$, $n/2$ instead of $N$, and the orbital rotation energy is absent (for
the crystal at rest). So, due to the cancellation of the quadratic terms in $P,\,Q$ we have obtained that the contribution
of the first term in $(9)$ to the energy of the crystal is constant and equal to $3/2\lambda_s$. This is the main difference
between "strange" and isotopical rotations. As in the case of few-baryon systems, this term is the same for all $SU(3)$
representations $(P,Q)$ which satisfy the relation $P+2Q = 3n$.
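To get a feeling for the size of this effect, one may compare $(9)$ with the corresponding sum for $n$ separate cells. Below is a minimal Python sketch, assuming the same numerical inertia values as above and $t=1/2$ per unit cell (illustrative only):
\begin{verbatim}
# Binding of the crystal from quantum corrections: Eq. (9) versus
# n free unit cells, cf. Eq. (7) and the footnote after Eq. (9).
inv_lambda_s, inv_lambda_r = 0.3, 0.1    # GeV, per unit cell

def E_crystal(n):           # Eq. (9): "strange" part constant in n
    return 1.5 * inv_lambda_s + 0.25 * n * inv_lambda_r

def E_separate(n, t=0.5):   # 3n/(2 lambda_s) + n t(t+1)/lambda_r
    return 1.5 * n * inv_lambda_s + n * t * (t + 1) * inv_lambda_r

for n in (10, 100, 1000):   # the gain grows linearly with n
    print(n, E_separate(n) - E_crystal(n))
\end{verbatim}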
The discussion of the astrophysical aspects of the possible existence of strange matter is beyond the scope of the present
paper. Observational results imply that there is a place for dark baryonic matter in the Universe, although arguments
were given that the strange matter should have evaporated at the early stages of the evolution of the Universe \cite{18}.
However, the possibility of the existence of strange matter does not seem to be excluded completely.
\\
\section{Discussion of some general problems}
We have shown that in the collective coordinate quantization approach a transparent expression for the quantum correction to
the energy of the baryon system can be obtained, illustrating that rotations into the "strange" direction stabilize few-baryon systems
as well as skyrmion crystals even more than isotopic rotations. The stabilizing property of zero-mode quantum corrections
seems to be a simple consequence of a natural property of solitons: their geometrical sizes and moments of inertia increase
with increasing baryon number.
The fact that the quantum corrections due to "strange" rotations decrease with increasing baryon number of the system may
also mean that the whole approach of the $SU(3)$ collective coordinates quantization becomes more self-consistent with
increasing $B$. Indeed, several attempts to describe the $B=1$ hyperon spectrum \cite{10,19,20} have met problems since
the strange inertia $\lambda_s$ is especially small for $B=1$, and therefore the quantum correction is especially large in
this case. As shown above, the relative value of this correction drops as $1/B$, so the problems in the $B=1$ sector disappear
for systems with large $B$. However, if this argument is correct, the same result should be obtained in other versions of
quantization, e.g. in that of Callan and Klebanov. In other words, it may turn out that systems with large $B$ are in some
sense simpler objects than the system with $B=1$. There remain, however, some effects whose role has not been clarified until now.
First of all, the vibrational corrections remain to be calculated. They should increase the energy of the system, as well as
of the whole crystal. For the system with $B=1$ they have been analyzed in \cite{21,22}. An elegant method for estimating
them has been proposed in \cite{23}, and for the case of the skyrmion crystal in \cite{15}.
The other problem is the applicability of the soliton approach with increasing baryon number. It is clear that under usual
conditions the soliton (skyrmion) cannot be too large, e.g. with baryon number of the order of the Avogadro number. But it
is unclear what parameter defines the maximal baryon number of the soliton.
We have not included in our consideration the meson mass terms in the effective lagrangian of the model. According to our
previous experience \cite{3} - \cite{5} the mass terms do not play a crucial role in the binding of quantized skyrmions, although
they define the mass splittings inside the multiplets, which turned out to be too small in comparison with experimental data,
especially in the Syracuse model \cite{5}. The mass terms also induce mass splittings of the quantized skyrmion crystals.
\\
The author is greatly indebted to Andreas Wirzba for many useful discussions and reading of the manuscript, and to Andy Jackson
for enlightening discussions of some points of principle.\\
\section{Introduction}
Over the past decade, wireless communication systems have transformed from being limited to serving low data rates, e.g., voice calls and text messages, to offering dependable high data rate services, notably for video content. The demand for massive amounts of data will only increase going forward, leading to potential bottlenecks. Two potential solutions offered towards alleviating network congestion are \textit{device-to-device (D2D) communications} and \textit{caching}. The former shifts some of the traffic load from the core network to the end users, while the latter shifts it from the peak to off-peak hours, i.e., to when network resources are underutilized. More specifically, \textit{D2D communications} utilize the radio channel for end users to communicate directly instead of routing via the network infrastructure \cite{doppler2009device,tehrani2014device,asadi2014survey}, while \textit{caching} stores, in off-peak hours, partial content that may be requested by users in the network, so as to reduce delivery rates to these users during peak-traffic hours \cite{dowdy1982comparative,almeroth1996use}. The seminal reference \cite{maddah2014fundamental} introduced coded caching and demonstrated that designing the downloading of partial data in off-peak hours and the delivery signal in peak hours so as to serve multiple users' file demands simultaneously provides gains above and beyond simply placing some partial content in the caches. In particular, it has been shown that jointly designing the cache placement and delivery phases provides a \textit{global caching gain}, which results from the ability to serve multiple users by a single transmission, in addition to the \textit{local caching gain}, which results from the fact that some of the requested data is locally available in the user's cache. There has been significant recent interest in caching systems, notably in designing coded caching strategies demonstrating gains in various network settings beyond the broadcast network setting of the original reference; see, for example, \cite{maddah2016coding,niesen2017coded,shariatpanahi2015multi,yang2018coded,bidokhti2017benefits,ibrahim2017centralized,ibrahim2017optimization,zewail2017combination,ji2016fundamental}.
Caching in D2D communications has been pioneered in reference \cite{ji2016fundamental}. In particular, a network has been considered where a server, with a database of $N$ files, each of size $F$ bits, is connected to $K$ users, each equipped with a cache memory of size $MF$ bits. In the cache placement phase, the server populates the cache memories of the users with partial content from the server's database. During the delivery phase, in contrast with the communication model in \cite{maddah2014fundamental}, the server remains inactive and the users' requests are to be satisfied via D2D communications only. Both centralized and decentralized schemes were provided. In the centralized schemes, the cache placement and delivery phases are jointly optimized, which requires the knowledge of the number of active users in the system while performing cache placement. In the decentralized scheme, this knowledge is not necessary. The fundamental limits of coded caching in device-to-device networks have been further investigated in \cite{ji2016wireless,ji2017fundamental,shabani2016mobility,tebbi2017coded,chorppath2017network}. For instance, references \cite{ji2016wireless,ji2017fundamental,shabani2016mobility} have studied the impact of coded caching on the throughput scaling laws of D2D networks under the protocol model of \cite{gupta2000capacity}.
Besides the need to reduce the network load during peak hours, maintaining secure access and delivery is also essential in several applications, e.g., subscription services. These concerns can be addressed by the so-called secure caching and secure delivery requirements studied in server-based models. For \textit{secure delivery} \cite{sengupta2015fundamental,awan2015fundamental,zewail2016coded,zewail2017combination}, any external entity that overhears the signals during the delivery phase must not obtain any information about the database files.
In particular, reference \cite{awan2015fundamental} has studied a device-to-device caching system with secure delivery. Utilizing one-time padding, a centralized scheme has been proposed by jointly optimizing the cache placement and delivery phases. The order-optimality of this scheme has been shown in \cite{awan2018bounds}, i.e., the multiplicative gap between the achievable delivery load in \cite{awan2015fundamental} and the lower bound developed in \cite{awan2018bounds} can be bounded by a constant that is independent of the system's parameters. For \textit{secure caching} \cite{ravindrakumar2016fundamental,zewail2016coded,zewail2017combination}, each user should be able to recover its requested file, but must not gain any information about the contents of the files it did not request.
In this paper, we investigate the fundamental limits of secure caching in \textit{device-to-device} networks. That is, unlike the settings in \cite{zewail2016coded,ravindrakumar2016fundamental,zewail2017combination}, the server disengages during the delivery phase, and users' requests must be satisfied via D2D communications only. By the end of the delivery phase, each user must be able to reconstruct its requested file, while not being able to obtain any information about the remaining $N\!-\!1$ files. For this D2D model, we derive lower and upper bounds on the rate-memory trade-off. We propose a centralized caching scheme, where the server encodes each file using proper \textit{non-perfect secret sharing schemes} \cite{ito1989secret,ito1987secret,li2015optimal,blakley1984security}, and generates a set of random keys \cite{shannon1949}. Then, the server carefully places these file shares and keys in the cache memories of the users. During the delivery phase, each user maps the contents of its cache memory into a signal transmitted to the remaining users over a shared multicast link. Next, motivated by the proposed schemes in \cite{jin2016new} under no secrecy requirements, we provide a semi-decentralized scheme, using a grouping-based approach that guarantees secure caching, and does not require the knowledge of the number of active users in the system while populating the users' cache memories. To evaluate the performance of these achievable schemes, we also develop a lower bound on the required transmission sum rate based on cut-set arguments. We show that the multiplicative gap between the lower and upper bounds is bounded by a constant. Furthermore, we observe numerically that this gap vanishes as the memory size increases.
By virtue of the D2D model, the delivery load has to be completely transferred from the server to the end users during cache placement in this network, so that no matter what file is demanded by a user, it can be delivered from other users. As such, imposing secure caching requirement will also facilitate secure delivery as we shall see in the sequel. In other words, for the proposed schemes, secure delivery \cite{sengupta2015fundamental,awan2015fundamental,zewail2016coded} is also satisfied as a byproduct.
This work demonstrates that D2D communications can effectively replace a server with full database access despite the fact that each user accesses only a portion of the database and that this is possible with a negligible transmission overhead, while keeping the users ignorant about the database contents. That is to say that, the performance of the system under investigation and the one in \cite{ravindrakumar2016fundamental} are very close to one another for realistic values of the system parameters. We note that while the centralized scheme and its performance were presented in brief in the conference paper \cite{zewail2016fundamental}, the decentralized coded caching scheme and the order-optimality results are presented for the first time in this paper, along with proof details of all results.
The remainder of the paper is organized as follows. In Section \ref{sec:sm}, we describe the system model. In Sections \ref{sec:ach} and \ref{sec:dec}, we detail the centralized and decentralized coded caching schemes, respectively. Section \ref{sec:lower} contains the derivation of the lower bound. In Section \ref{sec:numerical}, we demonstrate the system performance by numerical results. Section \ref{sec:con} summarizes our conclusions. In the following, we will use the notation $[L] \triangleq \{1,\ldots, L\}$, for a positive integer $L$.
\section{System Model}\label{sec:sm}
Consider a network where a server, with a database of $N$ files, $W_1,\ldots, W_N$, is connected to $K$ users. The files are assumed to be independent from one another, each has size $F$ bits and is uniformly generated. Each user is equipped with a cache memory with size $MF$ bits, i.e., each user is capable of storing $M$ files. We denote by $M$ the normalized cache memory size and define $Z_k$ to represent the contents of the cache memory at user $k$, where $k\in \{1,2,\ldots,K\}$. The system operates over two consecutive phases, as depicted in Fig. \ref{fig_sm}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.28]{system_model2.eps}
\centering
\caption{Device-to-device secure coded caching system.}\label{fig_sm}
\end{figure}
\subsection{Cache Placement Phase}
In this phase, the server allocates functions of its database into the users' cache memories without the knowledge of file demands. These allocations are designed to preserve the memory capacity constraint at each user. This is made precise by the following definition.
\begin{Definition1}
(Cache Placement): In the cache placement phase, the server maps the files of its database to the cache memories of the users. In particular, the content of the cache memory at user $k$ is given by
\begin{equation}
Z_k = \phi_k(W_1, W_2,\ldots,W_N), \qquad k =1,2,\ldots,K,
\end{equation}
where $\phi_k: [2^F]^N\rightarrow [2^F]^M$, such that $H(Z_k)\leq MF$.
\end{Definition1}
In this work, aligned with the caching literature, e.g., \cite{sengupta2015fundamental,awan2015fundamental,ravindrakumar2016fundamental,
zewail2016coded,zewail2017combination}, we assume that the cache placement phase is secure, i.e., it is not overheard by any unauthorized entity and the cache contents of each user are not accessible to any other user.
\subsection{Delivery Phase}
During peak traffic, each user requests a file. We assume that the demand distribution is uniform for all users,
and independent from one user to another \cite{maddah2014fundamental,ji2016fundamental}, i.e., each user can request each file with equal probability. The index of the file requested by user $k$ is $d_k\in[N]$, and $\bm d=(d_1,\ldots,d_K)$ represents the demand vector of all users.
Similar to \cite{ji2016fundamental}, we require that the delivery phase is carried out by D2D communications only, i.e., the server participates only in the cache placement phase. Therefore, we need the caches at the users to be able to store the whole library, collectively. Without secrecy requirements, we would need $KM\geq N$ to accomplish this. In Section \ref{sec:ach}, we will see that a larger total memory constraint will be required in order to satisfy the secrecy requirements. With the knowledge of the demand vector $\bm d$, user $k$ maps the contents of its cache memory, $Z_k$, into a signal that is transmitted to all network users over a noiseless interference-free multicast link. From the $K\!-\!1$ received signals and $Z_k$, user $k$ must be able to decode its requested file, $W_{d_k}$, with negligible probability of error. We have the following definition for encoding and decoding at each user.
\begin{Definition1}
(Coded Delivery): The transmitted signal by user $k$ is given by
\begin{equation}
X_{k,\bm d} = \psi_k(Z_k,\bm d),
\end{equation}
where $\psi_k: [2^F]^M\times [N]^K\rightarrow [2^F]^{R_k}$ is the encoding function, $R_k$ is the normalized rate of the transmitted signal by user $k$ and $k \in [K]$. In addition, user $k$ recovers its requested file as
\begin{equation}
\hat W_{d_k} = \mu_k(Z_k,\bm d, X_{1,\bm d},\ldots,X_{k-1,\bm d},X_{k+1,\bm d},\ldots,X_{K,\bm d}),
\end{equation}
where $\mu_k: [2^F]^M\times [N]^K\times [2^F]^{\sum_{i\neq k}R_i}\rightarrow [2^F]$ is the decoding function, and $k \in[K]$.
\end{Definition1}
Let $R_T=\sum_{i=1}^K R_i$ be the normalized sum rate of the transmitted signals by all users.
\subsection{System Requirements}
During the delivery phase, the server remains silent, and all users' requests must be satisfied via D2D communications. Therefore, we have the following reliability requirement.
\begin{Definition1}
(Reliability) For each user to recover its requested file from its received signals and the contents of its cache memory, we need
\begin{equation}\label{reliableconst}
\max_{\bm d, k\in[K]} Pr(\hat W_{d_k}\neq W_{d_k})\leq \epsilon,
\end{equation}
for any $\epsilon>0$.
\end{Definition1}
We impose secure caching constraints on the system. In particular, we require that each user must be able to decode only its requested file, and not be able to obtain any information about the content of the remaining $N-1$ files.
\begin{Definition1}
(Secure caching) For any $\delta_1>0$, we have
\begin{equation}\label{secrtiveconst}
\max_{\bm d, k\in[K]} I(\bm W_{-d_k}; \bm X_{-k,\bm d},Z_k)\leq \delta_1,
\end{equation}
where $\bm W_{-d_k}\!\!\!=\!\!\{W_1,\ldots,W_N\} \backslash \{W_{d_k}\}$, i.e., the set of all files except the one requested by user $k$ and $\bm X_{-k,\bm d}=\{X_{1,\bm d},\ldots,X_{K,\bm d} \}\setminus \{X_{k,\bm d}\}$, i.e., the set of all received signals by user $k$.
\end{Definition1}
We aim to minimize the sum rate during the delivery phase under reliability and secure caching requirements. Formally, we have the following definition.
\begin{Definition1}
The secure memory-rate pair $(M, R_T)$ is said to be \textbf{achievable} if $\forall \epsilon, \delta_1 >0$ and $F \rightarrow \infty$, there exists a set of caching functions, $\{\phi_k\}_{k=1}^K$, encoding functions, $\{\psi_k\}_{k=1}^K$, and decoding functions, $\{\mu_k\}_{k=1}^K$, such that (\ref{reliableconst}) and (\ref{secrtiveconst}) are satisfied. The optimal secure memory-rate trade-off is defined as $R_T^{*}=\inf\{R_T: (M,R_T) \mbox{ is achievable} \}.$
\end{Definition1}
We are also interested in the secure delivery requirement, defined below.
\begin{Definition1}
(Secure Delivery) Any eavesdropper that overhears the transmitted signals during the delivery phase must not obtain any information about the contents of the database files. Therefore, we have
\begin{align}\label{securedelivery}
\max_{\bm d} I(W_1,\ldots,W_N; X_{1,\bm d},\ldots,X_{K,\bm d})\leq \delta_2,
\end{align}
for any $\delta_2>0$.
\end{Definition1}
\begin{remark}
We will see that for the D2D setting we consider, our proposed schemes for secure caching will automatically satisfy the secure delivery requirement.
\end{remark}
\begin{remark}
In general, secure delivery and secure caching requirements do not have to imply one another. For instance, if $M\geq N$, secure delivery is trivially satisfied by storing the entire database at each user during the cache placement phase, violating the secure caching requirement. An example of the reverse scenario, i.e., where secure caching does not imply secure delivery, can be found in Subsection \ref{specialcase}.
\end{remark}
We aim to minimize the total delivery load, i.e., the total transmission rate, by designing the cache contents and the delivery strategy while maintaining the secure caching requirement. Since no user should be able to decode a file it did not request, and the requests of the users will only be revealed at the beginning of the delivery phase, we must design the cache contents of each user in a way that does not reveal any information about the system files. This makes the cache placement problem relevant to the problem of multiple assignment in secret sharing \cite{ito1987secret,ito1989secret,li2015optimal}, in the sense that we aim to distribute the library over the set of end users such that the shares assigned to each of them cannot reveal any information about the files. Here, the size of the shares allocated to each user must not exceed its cache storage capacity of $MF$ bits. Additionally, by the end of the delivery phase, each user must be able to decode only its requested file. With the D2D model, we require the system to maintain self-sustainability without the participation of the server during the delivery phase. Thus, the cache contents at all users must be able to regenerate the library files. In the following two sections, we provide two schemes that minimize the delivery load while maintaining the system's requirements.
\section{Centralized Coded Caching Scheme}\label{sec:ach}
In this section, we consider a scenario where the server is able to perform cache placement in a centralized manner. That is, the server knows the total number of users in the system, $K$, at the beginning of the cache placement phase. We utilize non-perfect secret sharing schemes \cite{yamamoto1986secret,blakley1984security,secretsharing} to encode the database files. The basic idea of non-perfect secret sharing schemes is to encode the secret in such a way that accessing an insufficient subset of shares does not reduce the uncertainty about the secret, while accessing all shares reveals it completely. For instance, if the secret is encoded into the scaling coefficient of a line equation, the knowledge of one point on the line does not reveal any information about the secret, as there remain an infinite number of possibilities to describe the line. One can learn the secret only if two points on the line are provided, and then can do so precisely. We will utilize \textit{non-perfect secret sharing schemes}, formally defined as follows.
\begin{Definition1} \cite{yamamoto1986secret,blakley1984security,secretsharing} For a file $W$ with size $F$ bits, an $(m,n)$ non-perfect secret sharing scheme generates $n$ shares, $S_1,S_2,\ldots,S_n$, such that accessing any $m$ shares does not reveal any information about the file $W$, i.e., \begin{equation}
I(W;\mathcal{S})=0, \quad \forall \mathcal{S}\subseteq\{S_1,S_2,\ldots,S_n\}, |\mathcal{S}|\leq m.
\end{equation}
Furthermore, we have
\begin{equation}
H(W|S_1,S_2,\ldots,S_n)=0,
\end{equation}
i.e., the file $W$ can be reconstructed from the $n$ shares.
\end{Definition1}
For large enough $F$, an $(m,n)$ non-perfect secret sharing scheme exists with shares of size equals to $\frac{F}{n-m}$ bits \cite{yamamoto1986secret,blakley1984security,secretsharing}. We chose to use schemes from this class as they give shares with sizes equal to the secret size divided by the gap, $\frac{F}{n-m}$ bits. By contrast, perfect secret sharing schemes \cite{shamir1979share} give shares of size equal to the secret size, $F$ bits. Therefore, the non-perfect secret sharing schemes are more efficient in our case in terms of storage and delivery load.
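To make the construction concrete, the following Python sketch implements one standard polynomial-based realization of an $(m,n)$ non-perfect (ramp) scheme over a prime field; it is illustrative only, and not necessarily the construction of the cited references. Note that the secret occupies $n-m$ field symbols, so each share indeed carries $\frac{F}{n-m}$ bits.
\begin{verbatim}
# A minimal (m, n) ramp scheme over GF(p): the secret fills the n-m
# low-order polynomial coefficients, m uniformly random symbols fill
# the high-order ones.  Any m shares reveal nothing (the m x m minor
# on the random coefficients is a scaled Vandermonde matrix, which is
# invertible for distinct nonzero points); all n shares recover it.
import random

p = 2_147_483_647                     # a prime; the field GF(p)

def share(secret, m, n):
    """secret: list of n-m field elements; returns n shares (x, y)."""
    coeffs = secret + [random.randrange(p) for _ in range(m)]
    return [(x, sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def reconstruct(shares, m, n):
    """Solve the n x n Vandermonde system mod p (Gauss-Jordan)."""
    A = [[pow(x, k, p) for k in range(n)] + [y] for x, y in shares]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], p - 2, p)     # modular inverse (Fermat)
        A[col] = [a * inv % p for a in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[col])]
    return [A[r][n] for r in range(n - m)]   # low-order coeffs = secret

m, n = 6, 12         # the (6,12) scheme of the illustrative example
secret = [random.randrange(p) for _ in range(n - m)]
assert reconstruct(share(secret, m, n), m, n) == secret
\end{verbatim}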
\subsection{Cache Placement Phase}\label{gencache}
First, we present a scheme that works for $M=\frac{Nt}{K-t}+\frac{1}{t}+1$, and $t\in[K-1]$, noting that the remaining values of $M$ can be achieved by memory sharing \cite{maddah2014fundamental}. That is, for any value of $M$, we pick the two closest values, $M_1$ and $M_2$, such that $M_i=\frac{Nt_i}{K-t_i}+\frac{1}{t_i}+1$, $i=1,2$, $t_i\in[K-1]$, and $M_1\leq M \leq M_2$. We determine the sharing parameter $\alpha\in [0,1]$ by solving the equation $M=\alpha M_1+ (1-\alpha) M_2$. Then, each file, $W_n$, is divided into two subfiles, $W_n^1$ and $W_n^2$, of sizes $\alpha F$ and $(1-\alpha) F$ bits, respectively. The achievability scheme is obtained by applying the scheme designed for the system with memory $M_i$ on the subfiles $W_n^i$, $i=1,2$.
As a first step, the server encodes each file in the database using a non-perfect secret sharing scheme \cite{yamamoto1986secret,blakley1984security,secretsharing}. In particular, a file, $W_n$, is encoded using a $\left(t {{K-1}\choose {t-1}},t{{K}\choose {t}}\right)$ non-perfect secret sharing scheme. Each share, with size $F_s$ bits, is denoted $S_{n,\mathcal{T}}^j$, where $j=1,\ldots,t$, and $\mathcal{T}\subseteq [K]$ with $|\mathcal{T}|=t$, and
\begin{equation}
F_s=\frac{F}{t {{K}\choose {t}}-t {{K-1}\choose {t-1}}}=\frac{F}{(K-t){{K-1}\choose {t-1}}}.
\end{equation}
We refer to the set $\mathcal{T}$ as the \textit{allocation set}, as it determines how the shares will be allocated in the users' caches. In particular, the server places the shares $S_{n,\mathcal{T}}^j$, $\forall j, n$, in the cache of user $k$ whenever $k\in \mathcal{T}$. Also, the parameter $t$ can be seen as the number of users that will store the same share.
Additionally, the server generates $(t+1){{K}\choose{t+1}}$ independent keys, i.e., they are independent from one another and independent from the library files. In particular, each key is uniformly distributed over $[2^{F_s}]$, and is denoted by $K_{\mathcal{T}_K}^i$, where $i=1,\ldots,t+1$, and $\mathcal{T}_K\subseteq [K]$ with $|\mathcal{T}_K|=t+1$. The server places the keys $K_{\mathcal{T}_K}^i$, $\forall i$, in user $k$'s cache if $k\in \mathcal{T}_K$, i.e., $\mathcal{T}_K$ represents the key allocation set. Therefore, the cached content by user $k$ at the end of the cache placement phase is given by
\begin{equation}\label{cenZ}
Z_k=\left\{S_{n,\mathcal{T}}^i,K_{\mathcal{T}_K}^j: k\in \mathcal{T},\mathcal{T}_K, \mbox{ and } \forall i,n,j \right\}.
\end{equation}
We summarize the cache placement procedure in Algorithm \ref{alg_place1}. In the following remark, we verify that this placement satisfies the cache memory capacity constraint.
\begin{remark}
In the aforementioned placement scheme, each user stores $t{{K-1}\choose{t-1}}$ shares of each file and $(t+1){{K-1}\choose{t}}$ distinct keys, thus the accumulated number of cached bits is given by
\begin{align}\label{memacc}
Nt&{{K-1}\choose{t-1}}F_s+(t+1){{K\!-\!1}\choose{t}}F_s=\frac{Nt}{K-t}F+(1\!+\!\frac{1}{t})F\!=MF.
\end{align}
It follows, by solving the quadratic equation $(N+M-1)t^2-\left((M-1)K+1\right)t+K=0$ implied by (\ref{memacc}), that
\begin{align}
t=\frac{1+(M-1)K+\sqrt{\left(1-(M-1)K\right)^2-4KN}}{2(N+M-1)}.
\end{align}
Clearly, the proposed allocation scheme satisfies the cache memory capacity constraint at each user.
\end{remark}
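The accounting in the remark is easy to double-check numerically; the following short Python sketch (illustrative, with all function names ours) recovers $t$ from $(K,N,M)$ and verifies (\ref{memacc}) at every corner point:
\begin{verbatim}
from math import comb, sqrt

def M_from_t(K, N, t):
    return N * t / (K - t) + 1 / t + 1

def t_from_M(K, N, M):
    disc = (1 - (M - 1) * K) ** 2 - 4 * K * N
    return (1 + (M - 1) * K + sqrt(disc)) / (2 * (N + M - 1))

K, N = 4, 4
for t in range(1, K):
    M = M_from_t(K, N, t)
    Fs = 1 / ((K - t) * comb(K - 1, t - 1))     # share size, in files
    cached = (N * t * comb(K - 1, t - 1)        # file shares
              + (t + 1) * comb(K - 1, t)) * Fs  # plus keys
    assert abs(cached - M) < 1e-9               # memory accounting holds
    assert abs(t_from_M(K, N, M) - t) < 1e-9    # quadratic root matches
\end{verbatim}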
\begin{algorithm}[t]
\begin{algorithmic}[1]
\REQUIRE $\{ W_{1}, \dots, W_{N}\}$
\ENSURE $ Z_k, k \in [K]$
\FOR{$l \in [N]$}
\STATE Encode $W_l$ using an $(t {{K-1}\choose {t-1}},t{{K}\choose {t}})$ non-perfect secret sharing scheme
$\rightarrow S_{l,\mathcal{T}}^{j}$, $j=1,\ldots,t$, and $\mathcal{T}\subseteq [K]$ with $|\mathcal{T}|=t$.
\ENDFOR
\FOR{$\mathcal{T}_K\subseteq [K]$ with $|\mathcal{T}_K|=t+1$}
\STATE Generate independent keys $K_{\mathcal{T}_K}^i$, $i=1,\ldots,t+1$.
\ENDFOR
\FOR{$k \in [K]$}
\STATE $Z_k \leftarrow \left\{K_{\mathcal{T}_K}^i: k\in \mathcal{T}_K, \forall i \right\} \bigcup \bigcup_{l \in [N]} \left\{S_{l,\mathcal{T}}^j: k\in \mathcal{T}, \forall j \right\}$
\ENDFOR
\end{algorithmic}
\caption{Cache placement procedure}\label{alg_place1}
\end{algorithm}
\begin{remark}\label{minmem}
We note that the minimum value of the normalized cache size, $M$, that is needed to apply the proposed scheme is $M_{\text{min}}=M\vert_{t=1}=2+\frac{N}{K-1}$. For a system without secrecy requirements \cite{ji2016fundamental}, we need $M\geq \frac{N}{K}$, while with secure delivery, the scheme in \cite{awan2015fundamental} requires $M\geq 2+\frac{N-2}{K}$. It is evident that, with secrecy requirements, more memory is required, as the users not only cache parts of the data but also cache the secret keys.
\end{remark}
\vspace{-0.25 in}
\subsection{Coded Delivery Phase}\label{cendel}
At the beginning of the delivery phase, each user requests one of the $N$ files and the demand vector is known to all network users. To derive an upper bound on the required transmission sum rate, we focus our attention on the worst case scenario. We concentrate on the more relevant scenario of $N\geq K$.
The delivery procedure consists of ${K \choose {t+1}}$ transmission instances. At each transmission instance, we consider a set of users $\mathcal{S}\subseteq[K]$, where $|\mathcal{S}|=t+1$. We refer to $\mathcal S$ as the transmission set. For $k\in \mathcal{S}$, user $k$ multicasts the following signal of length $F_s$ bits
\begin{equation}
X_{k,\bm d}^{\mathcal{S}}=\oplus_{l \in \mathcal{S}\setminus\{k\} } S^j_{d_l,\mathcal{S} \setminus\{l\}}\oplus K^i_{\mathcal{S}}.
\end{equation}
Note that the index $i$ is chosen such that each key is used only once, while the index $j$ is chosen to ensure that each transmission is formed by shares that had not been transmitted in previous transmissions by the other users in $\mathcal{S}$. For example, they can be chosen as the relative order of the user's index with respect to the indices of the remaining users in $\mathcal{S}$. Thus, in total, the transmitted signal by user $k$ can be expressed as
\begin{equation}
X_{k,\bm d}=\bigcup_{\mathcal{S}:\ \! k\in\mathcal{S}, \ \! \mathcal{S}\subseteq[K],|\mathcal{S}|=t+1 }\{X_{k,\bm d}^{\mathcal{S}}\}.
\end{equation}
Observe that the cache memories of the users from any subset, $\mathcal{S}_t\subset\mathcal{S}$, with $|\mathcal{ S}_t|=t$ contain $t$ shares of the file requested by the user in $\mathcal{S}\setminus \mathcal{ S}_t$, as can be seen from (\ref{cenZ}). Thus, utilizing its cache contents, each user in $\mathcal{S}$ obtains $t$ shares from its requested file during this instance of transmission, i.e., the user in $\mathcal{S}\setminus \mathcal{ S}_t$ obtains the shares $S_{d_{\mathcal{S}\setminus \mathcal{S}_t},\mathcal{S}_t}^j \ \forall j$.
Observe also that user $k$ belongs to ${{K-1} \choose {t}}$ different choices of such subsets of the users, thus at the end of the delivery phase, user $k$ obtains $t{{K-1} \choose {t}}$ \textit{new} shares of its requested file, in addition to the cached $t{{K-1} \choose {t-1}}$ shares. Therefore, user $k$ can decode its requested file from its $t{{K} \choose {t}}$ shares, i.e., the reliability requirement (\ref{reliableconst}) is satisfied. The delivery procedure is summarized in Algorithm \ref{alg_delv1}.
\begin{algorithm}[t]
\begin{algorithmic}[1]
\REQUIRE $\bm d$
\ENSURE $X_{k,\bm d}, k\in [K] $
\FOR{$k\in [K]$}
\FOR{$\ensuremath\mathcal S \subseteq [K], |\ensuremath\mathcal S|=t+1, k\in \ensuremath\mathcal S$}
\STATE $X_{k,\bm d}^{\mathcal{S}}\leftarrow \oplus_{l \in \mathcal{S}\setminus\{k\} } S^j_{d_l,\mathcal{S} \setminus\{l\}}\oplus K^i_{\mathcal{S}} $, for some choice of $i$ and $j$
\ENDFOR
\STATE $X_{k,\bm d}\leftarrow \bigcup_{\mathcal{S}\subseteq[K], |\ensuremath\mathcal S|=t+1, k \in \ensuremath\mathcal S} \{X_{k,\bm d}^{\mathcal{S}}\}$
\ENDFOR
\end{algorithmic}
\caption{Delivery procedure}\label{alg_delv1}
\end{algorithm}
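The interplay of Algorithms \ref{alg_place1} and \ref{alg_delv1} can be verified by a short label-level simulation. In the Python sketch below (illustrative only; the index conventions follow the "relative order" rule suggested above), shares and keys are represented by labels rather than bits, so XOR decoding reduces to removing from a received signal every label the receiver already caches; the parameters match the illustrative example given later ($K=N=4$, $t=2$, worst-case demands).
\begin{verbatim}
from itertools import combinations
from math import comb

K, N, t = 4, 4, 2
d = list(range(1, K + 1))            # worst case: user k requests W_k
users = range(1, K + 1)

Z = {k: set() for k in users}        # cache placement (Algorithm 1)
for T in combinations(users, t):
    for n in range(1, N + 1):
        for j in range(1, t + 1):
            for k in T:
                Z[k].add(('S', n, T, j))
for TK in combinations(users, t + 1):
    for i in range(1, t + 2):
        for k in TK:
            Z[k].add(('K', TK, i))

recovered = {k: set() for k in users}
for S in combinations(users, t + 1): # delivery (Algorithm 2)
    for k in S:                      # transmitter k in S
        payload = {('K', S, sorted(S).index(k) + 1)}
        for v in S:
            if v != k:
                Tv = tuple(u for u in S if u != v)
                j = sorted(Tv).index(k) + 1   # relative-order rule
                payload.add(('S', d[v - 1], Tv, j))
        for rx in S:                 # receivers decode by cancellation
            if rx != k:
                unknown = payload - Z[rx]
                assert len(unknown) == 1      # exactly one fresh share
                recovered[rx] |= unknown

for k in users:
    assert len(recovered[k]) == t * comb(K - 1, t)
    have = {s for s in Z[k] if s[:2] == ('S', d[k - 1])} | recovered[k]
    assert len(have) == t * comb(K, t)        # all shares of W_{d_k}
\end{verbatim}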
\subsection{Rate Calculation}
Now, we focus our attention on calculating the required transmission rate. Note that there are $K \choose {t+1}$ different choices of the set $\mathcal{S}$. For each choice, $t+1$ signals of length $F_s$ bits are transmitted, thus the total number of the transmitted bits is given by
\begin{equation}
R_TF=(t+1){K \choose {t+1}} F_s=\frac{K}{t}F.
\end{equation}
Consequently, we can achieve the following normalized sum rate
\begin{equation}
R_T=\frac{2K(N+M-1)}{1+(M-1)K+\sqrt{\left(1-(M-1)K\right)^2-4KN}}.
\end{equation}
Therefore, we can state the following theorem.
\begin{theorem}\label{them:cen}
Under centralized placement, for $M=\frac{Nt}{K-t}+\frac{1}{t}+1$, and $t\in[K-1]$, the secure sum transmission rate is upper bounded by
\begin{equation}
R_T^*\leq R_T^C\leq\!\frac{2K(N+M\!-\!1)}{1+(M\!-\!1)K+\sqrt{\left(1\!-\!(M\!-\!1)K\right)^2\!-\!4KN}}.
\end{equation}
In addition, using memory sharing \cite{maddah2014fundamental}, we can achieve the convex envelope of the points given by the values $M\!=\!\frac{Nt}{K-t}\!+\!\frac{1}{t}\!+\!1$, and $t\in[K-1]$.
\end{theorem}
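In numerical evaluations it is convenient to generate the corner points of the above trade-off directly from $R_T^C=K/t$; a short Python sketch (parameters illustrative):
\begin{verbatim}
def corner_points(K, N):
    # (M(t), R(t)) for t = 1, ..., K-1; intermediate M is obtained
    # from the lower convex envelope (memory sharing).
    return [(N * t / (K - t) + 1 / t + 1, K / t) for t in range(1, K)]

for M, R in corner_points(K=10, N=20):
    print(f'M = {M:7.3f}   R_T^C = {R:6.3f}')
\end{verbatim}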
\subsection{Secrecy Analysis}
For user $k$, the cache's contents, $Z_k$ given by (\ref{cenZ}), contains only $t {{K-1}\choose {t-1}}$ shares, from each file, resulting from a $\left(t {{K-1}\choose {t-1}},t{{K}\choose {t}}\right)$ non-perfect secret sharing scheme. Therefore, $Z_k$, by itself, cannot reveal any information about the files to user $k$.
During the delivery phase, if at any instance, user $k$ belongs to the transmission set, $\mathcal{S}$, then the transmitted signals are formed from the shares of the requested file by user $k$, $W_{d_k}$, and shares that have been already placed in the cache of user $k$ during the cache placement phase, i.e., from $Z_k$. When user $k$ does not belong to the transmission set, all the transmitted signals are encrypted using one-time pads, unknown to user $k$, thus, user $k$ cannot gain any information from these signals \cite{shannon1949}. Therefore, the secure caching constraint, (\ref{secrtiveconst}), is satisfied.
We observe that the server has generated $(t+1){{K} \choose {t+1}}$ independent keys with lengths equal to the share size. Thus, with a proper selection of the encrypting key for each transmission, we can ensure a unique use of each key, i.e., one-time padding. The above discussion implies that the secrecy of the transmitted signals, from any external wiretapper that accesses the network links during the delivery phase, is also guaranteed \cite{shannon1949}. One-time pads are essential to ensure the secure caching requirement in (\ref{secrtiveconst}), whereas the secure delivery requirement in (\ref{securedelivery}), is satisfied as a byproduct.
\subsection{An Illustrative Example}
Consider a system with four users and a library consisting of four files, $W_1,W_2,W_3,W_4$, i.e., $K=N=4$. Each user has a normalized memory size $M=\frac{11}{2}$, which gives $t=2$ and indicates that each of the resulting shares will be cached by two different users. The server encodes each file using a $(6,12)$ non-perfect secret sharing scheme. For a file, $W_n$, the server generates $12$ shares, which we label by $S_{n,\mathcal{T}}^i$ where $i=1,2$, $\mathcal{T}\subset\{1,2,3,4\}$ and $|\mathcal{T}|=2$, each of size $F/6$ bits.
Furthermore, the server generates the set of keys $K_{\mathcal{T}_K}^j$, uniformly distributed over $\{1,\ldots,2^{F/6}\}$, where $j=1,2,3$, $\mathcal{T}_K\subset\{1,2,3,4\}$ and $|\mathcal{T}_K|=3$.
User $k$ stores the shares $S_{n,\mathcal{T}}^i$, and the keys $K_{\mathcal{T}_K}^j$ whenever $k\in\mathcal{T}$ and $k\in\mathcal{T}_K$, $\forall n,j,i$, respectively. Therefore, the cache contents at the users are given by
\begin{equation}\nonumber
Z_{1}=\begin{Bmatrix}
S_{n,12}^i,S_{n,13}^i,S_{n,14}^i, \ \forall n,i, K_{123}^j,K_{124}^j,K_{134}^j, \ \forall j
\end{Bmatrix},
\end{equation}
\begin{equation}\nonumber
Z_{2}=\begin{Bmatrix}
S_{n,12}^i,S_{n,23}^i,S_{n,24}^i, \forall n,i,
K_{123}^j,K_{124}^j,K_{234}^j, \ \forall j
\end{Bmatrix},
\end{equation}
\begin{equation}\nonumber
Z_{3}=\begin{Bmatrix}
S_{n,13}^i,S_{n,23}^i,S_{n,34}^i, \forall n,i,
K_{123}^j,K_{134}^j,K_{234}^j, \ \forall j
\end{Bmatrix},
\end{equation}
\begin{equation}\nonumber
Z_{4}=\begin{Bmatrix}
S_{n,14}^i,S_{n,24}^i,S_{n,34}^i, \forall n,i,
K_{124}^j,K_{134}^j,K_{234}^j, \ \forall j
\end{Bmatrix}.
\end{equation}
Each user caches $6$ shares of each file. We observe that the cache contents cannot reveal any information about the files, thanks to the non-perfect secret sharing encoding. Also, note that the cache capacity constraints at all users are satisfied.
Now, consider the delivery phase, where user $k$ requests the file $W_k$, i.e., $\bm d=(1,2,3,4)$. In this case, the users transmit the following signals.
\begin{align}\nonumber
X_{1,\bm d}=\begin{Bmatrix}
&S_{2,13}^1 \oplus S_{3,12}^1 \oplus K_{123}^1, S_{4,13}^1 \oplus S_{3,14}^1 \oplus K_{134}^1, \nonumber \\
& \quad S_{2,14}^1 \oplus S_{4,12}^1 \oplus K_{124}^1
\end{Bmatrix},
\end{align}
\begin{align}\nonumber
X_{2,\bm d}=\begin{Bmatrix}
&S_{1,23}^2\oplus S_{3,12}^2 \oplus K_{123}^2, S_{4,23}^2\oplus S_{3,24}^2 \oplus K_{234}^2, \nonumber \\
& \quad S_{1,24}^2\oplus S_{4,12}^2 \oplus K_{124}^2
\end{Bmatrix},
\end{align}
\begin{align}\nonumber
X_{3,\bm d}=\begin{Bmatrix}
& S_{1,23}^1\oplus S_{2,13}^2 \oplus K_{123}^3, S_{4,13}^2\oplus S_{1,34}^1 \oplus K_{134}^2,
\nonumber \\
&S_{2,34}^1\oplus S_{4,23}^1 \oplus K_{234}^3
\end{Bmatrix},
\end{align}
\begin{align}\nonumber
X_{4,\bm d}=\begin{Bmatrix}
&S_{1,24}^1\oplus S_{2,14}^2 \oplus K_{124}^3, S_{1,34}^2\oplus S_{3,14}^2 \oplus K_{134}^3,
\nonumber \\
&S_{2,34}^2\oplus S_{3,24}^1 \oplus K_{234}^1
\end{Bmatrix}.
\end{align}
From its received signals, $X_{2,\bm d}$, $X_{3,\bm d}$ and $X_{4,\bm d}$, and utilizing its cached content, user $1$ gets $S_{1,23}^1$, $S_{1,23}^2$, $S_{1,24}^1$, $S_{1,24}^2$, $S_{1,34}^1$ and $S_{1,34}^2$. Thus, user $1$ can reconstruct its requested file, $W_1$, from its $12$ shares. Similarly, users $2$, $3$ and $4$ are able to decode files $W_2$, $W_3$ and $W_4$, respectively.
We observe that user $k$ will only obtain new shares of its requested file $W_k$, thus it cannot gain any information about the remaining files, $\{W_1,W_2,W_3,W_4\}\setminus \{W_k\}$. This is ensured by the proper selection of the keys, so that each user cannot gain any information about the remaining three files. In addition, each signal is encrypted using a one-time pad, which ensures the secrecy of the database files from any external eavesdropper as in \cite{awan2015fundamental}. In this delivery procedure, each user participates with $3$ distinct transmissions, each of size $F/6$ bits, thus $R_T^C=2$. Comparing with the system in \cite{ravindrakumar2016fundamental}, where the server is responsible for the delivery phase, we see that a normalized secure rate $\simeq 1.3$ is achievable for the same system parameters. This difference is due to the limited access to the shares at each user, unlike the case in \cite{ravindrakumar2016fundamental} where the server can access all shares during the delivery phase; this is the cost of having D2D delivery.
\subsection{Secure Caching without Secure Delivery for $M=N(K-1)$}\label{specialcase}
The scheme described above ensures that the requirements in (\ref{secrtiveconst}) (and (\ref{securedelivery})) are satisfied. The encryption keys are essential to achieve both. In the following, we study a special case where we can provide a scheme that achieves secure caching, i.e., satisfy (\ref{secrtiveconst}), without the necessity of satisfying the secure delivery constraint, i.e., (\ref{securedelivery}).
More specifically, when $M=N(K-1)$, we can achieve a normalized rate equal to $\frac{K}{K-1}$ without utilizing encryption keys. In particular, each file is encoded using a $((K-1)^2,K(K-1))$ non-perfect secret sharing scheme. The resulting shares, each of size $F_s=\frac{F}{K-1}$ bits, are indexed by $S_{n,i}^j$, where $n$ is the file index, $j=1,\ldots,K-1$, and $i=1,\ldots,K$. The server allocates the shares $S_{n,i}^j$, $\forall j,n$ and $i\neq k$, in the memory of user $k$, i.e.,
\begin{equation}\label{censpecial}
Z_k=\{S_{n,i}^j: \forall j,n \mbox{ and } i\neq k\}.
\end{equation}
Thus, each user stores $N(K-1)^2$ shares, which satisfies the memory capacity constraint.
At the beginning of the delivery phase, each user announces its request. Again, we assume that the users request different files. User $k$ multicasts the following signal to all other users
\begin{equation}
X_{k,\bm d}=\oplus_{l \in [K]\setminus\{k\} } S^j_{d_l,l},
\end{equation}
where $j$ is chosen to ensure that each transmission is formed by fresh shares which had not been included in the previous transmissions. From its received $K-1$ signals, user $k$ can extract the shares $S^j_{d_k,k}$, $\forall j$. By combining these shares with the ones in its memory, user $k$ recovers its requested file, $W_{d_k}$. The total number of bits transmitted under this scheme is $R_TF=KF_s.$
Thus, the following normalized sum rate, under the secure caching constraint (\ref{secrtiveconst}), is achievable for $M=N(K-1)$,
\begin{equation}
R_T=\frac{K}{K-1}.
\end{equation}
This rate matches the cut-set bound derived in Section \ref{sec:lower}.
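As with the general scheme, this keyless construction can be checked with a label-level simulation; the Python sketch below (illustrative only) mimics the XOR decoding by set difference against the receiver's cache:
\begin{verbatim}
from itertools import product

K, N = 4, 4
d = list(range(1, K + 1))            # worst case: distinct demands
users = list(range(1, K + 1))

# user k caches S[n,i,j] for all i != k, cf. Eq. (\ref{censpecial})
Z = {k: {('S', n, i, j)
         for n, i, j in product(range(1, N + 1), users, range(1, K))
         if i != k} for k in users}

recovered = {k: set() for k in users}
for k in users:                      # one multicast per user
    payload = set()
    for l in users:
        if l != k:
            j = [u for u in users if u != l].index(k) + 1  # fresh index
            payload.add(('S', d[l - 1], l, j))
    for rx in users:
        if rx != k:
            unknown = payload - Z[rx]
            assert len(unknown) == 1          # the missing share of W_{d_rx}
            recovered[rx] |= unknown

assert all(len(recovered[k]) == K - 1 for k in users)
# K signals of F/(K-1) bits each  ->  R_T = K/(K-1)
\end{verbatim}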
\subsection{Discussion}
The above scheme, in subsection \ref{specialcase}, satisfies only the secure caching constraint (\ref{secrtiveconst}), without ensuring the protection from any external eavesdropper that overhears the transmitted signals during the delivery phase. On the other hand, the general scheme, presented in subsections \ref{gencache} and \ref{cendel}, achieves the same rate, i.e., $R_T^C=\frac{K}{K-1}$, when $M=N(K-1)+\frac{K}{K-1}$, while satisfying the secure caching constraint (\ref{secrtiveconst}) and the secure delivery constraint (\ref{securedelivery}), simultaneously. In other words, an additional memory at each user with size $\frac{K}{K-1}F$ bits is required to ensure the additional requirement of secure delivery.
We observe that the encryption keys serve to satisfy both the secure caching and secure delivery requirements. Therefore, one can think about the satisfaction of the secure delivery requirement as a byproduct of the general scheme in subsections \ref{gencache} and \ref{cendel}, i.e., the secure delivery comes for free while satisfying the secure caching constraint, whenever $M \leq \frac{N(K-2)}{2}+\frac{1}{K-2}+1$.
Under different network topologies, secure delivery may require additional cost. For example, in recent reference \cite{zewail2017combination}, we have shown that there is no need to use encryption keys to satisfy the secure caching requirements in the setting of combination networks. This is possible due to the unicast nature of the network links, which is not the case in the system under investigation, as we assume that the users communicate with each other via multicast links.
\section{Decentralized Coded Caching Scheme}\label{sec:dec}
In this section, we provide a decentralized coded caching scheme \cite{maddah2015decentralized} for our setup. The proposed scheme is motivated by the ones in \cite{jin2016new} for the multicast coded caching setup without secrecy requirements \cite{maddah2014fundamental,maddah2015decentralized}. It does not require the knowledge of the number of active users in the delivery phase during cache placement. This scheme operates over two phases as follows.
\vspace{-.2 in}
\subsection{Cache Placement Phase}\label{dec:place}
The main idea of the cache placement scheme is to design the cache contents for a number of users $L$ that is less than the number of users in the system during the delivery phase, i.e., $K$. $L$ is in effect a lower bound on the expected number of active users in the system.
For a given $L$ and $M=\frac{Nt}{L-t}+\frac{2}{t}+1$, with $t\in[L-1]$, each file in the database is encoded using a suitable non-perfect secret sharing scheme. In particular, a file, $W_n$, is encoded using a $\left(t {{L-1}\choose {t-1}},t{{L}\choose {t}}\right)$ non-perfect secret sharing scheme. We obtain $t{{L}\choose {t}}$ shares, each of size $\bar F_s$ bits, where
\begin{equation}
\bar F_s=\frac{F}{t {{L}\choose {t}}-t {{L-1}\choose {t-1}}}=\frac{F}{(L-t){{L-1}\choose {t-1}}}.
\end{equation}
Each share is denoted by $S_{n,\mathcal{T}}^j$, where $n$ is the file index, i.e., $n\in[N]$, $j=1,\ldots,t$, and $\mathcal{T}\subseteq [L]$ with $|\mathcal{T}|=t$.
The server prepares the following set of cache contents, $\bar{Z_l}$,
\begin{equation}
\bar{Z_l}=\{S_{n,\mathcal{T}}^j: l\in \mathcal{T}, \ \ \forall j, n\}, \qquad \qquad l=1,2,\ldots,L.
\end{equation}
Once user $k$ joins the system, it caches the content $\bar Z_{l_k}$, where $l_k=((k-1) \bmod L)+1$. Such an allocation divides the set of active users into $\lceil\frac{K}{L}\rceil$ virtual groups. In particular, we group the first $L$ users to join the system in group $1$, the users from $L+1$ to $2L$ in group $2$, and so on. Note that each group from $1$ to $\lceil\frac{K}{L}\rceil-1$ contains $L$ users, and group $\lceil\frac{K}{L}\rceil$ contains $K-(\lceil\frac{K}{L}\rceil-1)L$ users. These groups are formed sequentially in time.
As explained in Section \ref{sec:ach}, we require the server to generate a set of random keys to be shared between the users. For group $u$, $u=1,\ldots,\lceil\frac{K}{L}\rceil-1$, the server generates the keys $K_{u,\mathcal{T}_K}^i$, where $i=1,\ldots,t+1$, $\mathcal{T}_K\subseteq [L]$ and $|\mathcal{T}_K|=t+1$. Each key is uniformly distributed over $[2^{\bar F_s}]$. User $l_k$ from group $u$ stores the keys $K_{u,\mathcal{T}_K}^i$, $\forall i$, whenever $l_k\in \mathcal{T}_K$.
In addition, the server generates the keys $K_{u^*,\mathcal{T}_K}^i$, where $i=1,\ldots,t+1$, and $\mathcal{T}_K\subseteq [L], |\mathcal{T}_K|=t+1$, and allocates these keys in the cache memories of the users in groups $1$ and $\lceil\frac{K}{L}\rceil$ as follows. The keys $\{K_{u^*,\mathcal{T}_K}^i, \ \forall i\}$ are cached by user $l_k$ from group $\lceil\frac{K}{L}\rceil$, as long as $l_k\in \mathcal{T}_K$. User $l_k$ from group $1$ stores the keys $K_{u^*,\mathcal{T}_K}^j$ for only one specific $j$ whenever $l_k\in \mathcal{T}_K$. This index $j$ is chosen such that the users from group $1$ store different keys.
In summary, at the end of cache placement, cache contents of user $k$ are given by
\begin{equation}\label{decenall}
Z_k=\begin{cases} &\{\bar Z_{l_k}, K_{1,\mathcal{T}_K}^i,K_{u^*,\mathcal{T}_K}^j: l_k\in \mathcal{T}_K, \forall i, \mbox{ for a specific } j\}, \\
& \qquad \qquad \qquad \qquad \mbox{if } 1 \leq k \leq L,\\
&\{ \bar Z_{l_k}, K_{u,\mathcal{T}_K}^i: u=\lceil\frac{k}{L}\rceil, l_k\in \mathcal{T}_K, \forall i\}, \\
& \qquad \qquad \qquad \qquad \mbox{if } L\!+\!1 \leq k \leq (\lceil\frac{K}{L}\rceil\!-\!1)L,\\
&\{ \bar Z_{l_k}, K_{u^*,\mathcal{T}_K}^i: l_k\in \mathcal{T}_K, \forall i\}, \\
& \qquad\qquad \qquad \qquad \mbox{if } (\lceil\frac{K}{L}\rceil\!-\!1)L\!+\!1 \leq k \leq K.
\end{equation}
\begin{remark}
We need to ensure that this allocation procedure does not violate the memory capacity constraint at each user. Observe that each user stores the same number of encoded file shares; however, the users from group $1$ store more keys than the other users. Thus, satisfying the memory constraint at the users in group $1$ implies satisfying the memory constraint at all network users.
Each user in group $1$ stores $Nt{{L-1}\choose{t-1}}$ shares and $(t+2){{L-1}\choose{t}}$ keys. Thus, the total number of the stored bits is given by
\begin{equation}\label{memacc2}
Nt{{L\!-\!1}\choose{t\!-\!1}}\bar F_s\!+\!(t\!+\!2){{L\!-\!1}\choose{t}}\bar F_s\!=\!\frac{Nt}{L\!-\!t}F\!+\!(1\!+\!\frac{2}{t})F\!=\!MF,
\end{equation}
and from (\ref{memacc2}), we get
\begin{align}
t=\frac{2+(M-1)L+\sqrt{\left(2-(M-1)L\right)^2-8LN}}{2(N+M-1)}.
\end{align}
Therefore, the proposed scheme satisfies the cache capacity constraint at each user.
\end{remark}
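The grouping and the accounting in (\ref{memacc2}) can likewise be checked in a few lines of Python (illustrative parameters; the group boundaries follow the sequential-join rule above):
\begin{verbatim}
from math import comb, ceil

K, L, N, t = 11, 4, 20, 2
p = K - (ceil(K / L) - 1) * L          # size of the last group
groups = [list(range(u * L + 1, min((u + 1) * L, K) + 1))
          for u in range(ceil(K / L))]
print(groups, 'p =', p)                # [[1..4], [5..8], [9,10,11]]

Fs = 1 / ((L - t) * comb(L - 1, t - 1))      # share size, in files
M = N * t / (L - t) + 2 / t + 1
cached = (N * t * comb(L - 1, t - 1)         # shares
          + (t + 2) * comb(L - 1, t)) * Fs   # keys (group-1 users)
assert abs(cached - M) < 1e-9                # Eq. (\ref{memacc2}) holds
\end{verbatim}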
\subsection{Coded Delivery Phase}\label{dec:delivery}
We focus our attention on the worst case demand, where $K$ users request $K$ different files. The delivery phase is divided into $\lceil\frac{K}{L}\rceil$ stages. At each stage, we focus on serving the users of one group. For any stage $u$, where $u=1,\ldots,\lceil\frac{K}{L}\rceil-1$, the delivery process during stage $u$ is performed in a way similar to the one described in subsection \ref{cendel} with $K=L$ to serve the requests of users in group $u$. In particular, at each transmission instance, we consider $\mathcal{S}\subseteq [L]$, where $|\mathcal{S}|=t+1$. User $k$, with $l_k\in \mathcal{S}$, multicasts a signal, of length $\bar F_s$ bits, given by
\begin{equation}
K^i_{u,\mathcal{S}} \oplus_{l_v \in \mathcal{S}\setminus\{l_k\} } S^j_{{d}_v,\mathcal{S} \setminus\{l_v\}},
\end{equation}
where the index $i$ is chosen in a way that guarantees the uniqueness of the key utilized for each transmission.
From the cache placement phase, we observe that any $t$ users belonging to the set $\mathcal{S}$ share $t$ shares of the file requested by the remaining user in $\mathcal{S}$. Thus, each user in $\mathcal{S}$ obtains $t$ shares of its requested file during this instance of transmission. At the end of stage $u$, each user from group $u$ can decode its requested file from its $t{L \choose {t}}$ shares.
Since there are ${L \choose {t+1}}$ different choices of the set $\mathcal S$, and for each choice $t+1$ signals of length $\bar F_s$ are transmitted, the total number of the transmitted bits to serve the users from group $u$ is
\begin{equation}
R_uF=(t+1){L \choose {t+1}} \bar F_s=\frac{L}{t}F, \qquad u=1,\ldots,\lceil\frac{K}{L}\rceil-1.
\end{equation}
Now, we focus on serving the users of the last group, i.e., group $\lceil\frac{K}{L}\rceil$. First, recall that the number of users in this group is $p\triangleq K-(\lceil\frac{K}{L}\rceil-1)L< L$, thus these users cannot satisfy their requests via device-to-device communications among themselves only. We require some of the users from group $1$ to participate in this last stage of the delivery phase. In particular, the users indexed by $l_k$, with $l_k=p+1,\ldots,L$, from group $1$ form a virtual group with the users from group $\lceil\frac{K}{L}\rceil$, such that the resulting group contains $L$ users.
Note that, at this stage, the requests of the users from group $1$ have already been served, during stage $1$. Therefore, at each transmission instance, we consider only the sets $\mathcal{S}\subseteq[L]$, where $|\mathcal{S}|=t+1$ with $l_k\in \mathcal{S}$ and $l_k \in[p]$. We define the sets $\mathcal S_{u^*}$ and $\mathcal S_{u^*}^c$ to represent the subsets of $\mathcal{S}$ that contain the users from group $\lceil\frac{K}{L}\rceil$ and group $1$, respectively, i.e., $\mathcal S_{u^*} \cup \mathcal S_{u^*}^c=\mathcal{S}$. Since we now only care about serving the users in group $\lceil\frac{K}{L}\rceil$, we neglect any set $\mathcal S$ with $\mathcal S_{u^*}=\emptyset$. For the sets that contain only one user, user $k$, from group $\lceil\frac{K}{L}\rceil$, i.e., $l_k \in \mathcal S_{u^*}$, $|\mathcal S_{u^*}|=1$, and $|\mathcal S_{u^*}^c|=t$, each user in the set $\mathcal S_{u^*}^c$ transmits
\begin{equation}
K^j_{{u^*},\mathcal{S}} \oplus S^i_{{d}_{k},\mathcal{S} \setminus\{l_k\}},
\end{equation}
where $i$ is chosen to ensure that from every transmission user $k$ obtains a different share from its requested file.\newline
For the sets that contain more than one user from group $\lceil\frac{K}{L}\rceil$, i.e., $|\mathcal S_{u^*}| \geq 2$, each user in the set $\mathcal{S}$ multicasts a signal of length $\bar F_s$ given by
\begin{equation}
K^j_{u^*,\mathcal{S}} \oplus_{l_v \in \mathcal S_{u^*}\setminus\{l_k\} } S^i_{{d}_v,\mathcal S_{u^*} \setminus\{l_v\}}.
\end{equation}
By taking into account all possible sets with $|\mathcal S_{u^*}|\geq 1$, the total number of the transmitted bits during this stage is given by
\begin{equation}
R_{u^*}F=pt {{L-p} \choose {t}} \bar F_s+\sum_{u=2}^{\min(p,t)} (t+1){{L-p} \choose {t-u+1}} \bar F_s.
\end{equation}
Consequently, we can obtain the following upper bound on the normalized sum rate
\begin{equation}
R_T^{D}= R_{u^*}+\left(\lceil\frac{K}{L}\rceil\!-\!1\right)R_u.
\end{equation}
\begin{theorem}
For any integer $L\leq K$, $M=\frac{Nt}{L-t}+\frac{2}{t}+1$, and $t\in[L-1]$, the secure sum rate under decentralized coded caching is upper bounded by
\begin{align}
R_T^*\leq R_T^D&\leq\frac{2L(N+M-1)\left(\lceil\frac{K}{L}\rceil-1\right)}{2\!+\!(M\!-\!1)L\!+\!\sqrt{\left(2\!-\!(M\!-\!1)L\right)^2\!\!-\!8LN}}\nonumber \\
& \qquad+\frac{pt \langle{{L-p} \choose {t}} \rangle+\sum_{u=2}^{\min(p,t)} (t+1)\langle{{L-p} \choose {t-u+1}}\rangle}{(L-t){{L-1} \choose {t-1}}},
\end{align}
where $\langle{{h} \choose {r}}\rangle={{h} \choose {r}}$ whenever $h\geq r$ and $0$ otherwise, and $p=K-(\lceil\frac{K}{L}\rceil-1)L$. In addition, the convex envelope of the above points, defined for each $M$, is also achievable.
\end{theorem}
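For evaluation, the bound of the theorem can be transcribed directly; a Python sketch, assuming an integer $t\in[L-1]$ (the truncated binomial $\langle\cdot\rangle$ is made explicit):
\begin{verbatim}
from math import comb, ceil

def tbinom(h, r):                    # <h choose r> from the theorem
    return comb(h, r) if 0 <= r <= h else 0

def R_T_D(K, L, t):
    p = K - (ceil(K / L) - 1) * L
    Fs = 1 / ((L - t) * comb(L - 1, t - 1))
    R_last = (p * t * tbinom(L - p, t)
              + sum((t + 1) * tbinom(L - p, t - u + 1)
                    for u in range(2, min(p, t) + 1))) * Fs
    return (ceil(K / L) - 1) * (L / t) + R_last

print(R_T_D(K=11, L=4, t=2))         # 4.5 for these toy values
\end{verbatim}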
Using memory sharing \cite{maddah2014fundamental}, we can achieve the convex envelope of the points given by the values $M=\frac{Nt}{L-t}+\frac{2}{t}+1$, and $t\in[L-1]$.
\subsection{Discussion}
In the decentralized coded caching scheme proposed in \cite{sengupta2015fundamental}, for server-based coded caching with secure delivery, key placement is done in a centralized manner after a decentralized caching of a fraction $\frac{M-1}{N-1}$ of each file, without the knowledge of the users' demands. For server-based systems with secure caching, a decentralized scheme was proposed in \cite{ravindrakumar2016fundamental}.
We note that developing decentralized schemes for D2D coded caching systems is more involved compared with decentralized schemes for server-based coded caching systems \cite{maddah2014fundamental}. This is due to the requirement that in D2D the server must disengage from the delivery process, i.e., the end users collectively must possess pieces of the entire library. When there are no secrecy requirements, reference \cite{ji2016fundamental} has proposed a decentralized D2D coded caching scheme, which utilizes maximum distance separable (MDS) codes to encode the files at the server so as to satisfy the users' requests without the participation of the server during the delivery phase.
For our D2D secure coded caching, we utilize a grouping-based approach that allows disengaging the server from the delivery phase, and the key placement is done during the cache placement phase, without the knowledge of the users' demands. We choose this grouping-based approach instead of utilizing the MDS encoding in \cite{ji2016fundamental}, as each user not only needs to store the keys used in encrypting its intended signals but also the keys that are used to encrypt its transmitted signals. Therefore, applying a decentralized cache placement based on MDS coding requires the users to dedicate a large fraction of their cache memories for the keys to be allocated by the server during the delivery phase after announcing the demand vector. By contrast, our proposed scheme ensures a practical self-sustainable system with reasonable fraction of each cache memory dedicated to encryption keys. Once a group of $L$ users joins the system, sequentially, the server can place the keys in the memories of these users. At the end of the cache placement phase, before the beginning of the delivery phase, the server allocates the keys to be used in encrypting the signals intended to the last group in the caches of the users of the first and the last group. We remark that this grouping-based approach can be used to develop new decentralized schemes for multicast coded caching scenarios with secure delivery and secure caching that were considered in \cite{sengupta2015fundamental,ravindrakumar2016fundamental,awan2015fundamental} as well.
We observe, from (\ref{decenall}), that the cache memory at each user is divided into two partitions: one for the shares of the files, and the other for the keys. The partition assigned for the shares, $\bar{Z_l}$, can be identical at multiple users. Thus, encrypting the signal with a one-time pad is necessary to satisfy the secure caching requirement. Note that each user from group $1$ that participates in the last stage of the delivery phase knows only the keys that it will use to encrypt its transmitted signal, and thus it cannot gain any new information from the signals transmitted during this last stage. The secure delivery requirement is satisfied as a byproduct, as in the centralized scheme of Section \ref{sec:ach}.
\subsubsection{The Choice of $L$}
A key element in designing the aforementioned semi-decentralized scheme is the choice of the parameter $L$, which can be determined by observing the number of users in the system during the peak traffic hours over a sufficient amount of time. Then, we can choose $L$ as the minimum value among the observed numbers of users in the system.
We note that the closer $L$ is to $K$, the exact number of active users in the system, the more multicast opportunities we can benefit from, which helps in reducing the overall delivery load.
We note that a minor potential drawback of the provided scheme is that some users' cache memories may be under-utilized by a small fraction. In particular, other than the users from group 1 who participate in serving the last group, a very small fraction of size $\frac{1}{t}$, which does not scale with the library size, is left unutilized in each user's memory. This fraction can be seen as a cost for disengaging the server from the delivery phase; it cannot be used to cache data directly due to the secure caching requirement. A good estimate of $L$, i.e., choosing $L$ close to $K$, will reduce the number of users that do not fully utilize their memory.
\subsubsection{User Mobility During the Placement Phase}
If a user, $f$, leaves the system during the cache placement phase, then its cached contents, $Z_f$, should be assigned by the server to populate the cache memory of the first user to join the system after this departure. If no user joins the system before the beginning of the delivery phase, then the server can update the contents of the last user that joined the system with $Z_f$.
\section{Lower Bound}\label{sec:lower}
In this section, we derive a lower (converse) bound on the required secure sum rate. The derivation is based on cut-set arguments \cite{cover2006elements}, similar to \cite{ji2016fundamental,sengupta2015beyond}.
Assume that the first $s$ users, where $s\in\{1,2,\ldots,\min(N/2,K)\}$, request the files from $1$ to $s$, such that user $i$ requests $W_i$, $i\in\{1,2,\ldots,s\}$. The remaining users are assumed to be given their requested files by a genie. We define $\bm X_1$ to represent the transmitted signals by the users to respond to these requests, i.e., $\bm X_1=\{X_{1,(1,\ldots,s)},\ldots,X_{K,(1,\ldots,s)}\}$. At the next request instance, the first $s$ users request the files from $s+1$ to $2s$, such that user $i$ requests $W_{s+i}$. These requests are served by transmitting the signals $\bm X_2=\{X_{1,(s+1,\ldots,2s)},\ldots,X_{K,(s+1,\ldots,2s)}\}$. We proceed in the same manner, such that at the request instance $q$, the first $s$ users request the files from $(q-1)s+1$ to $qs$, such that user $i$ requests $W_{(q-1)s+i}$, and the users transmit the signals $\bm X_q=\{X_{1,((q-1)s+1,\ldots,qs)},\ldots,X_{K,((q-1)s+1,\ldots,qs)}\}$, where $q\in\{1,\ldots,\lfloor N/s \rfloor\}$. In addition, we define $\bm{ \bar X_q}=\{X_{s+1,((q-1)s+1,\ldots,qs)},\ldots,X_{K,((q-1)s+1,\ldots,qs)}\}$ to denote the set of the transmitted signals by the users indexed by $s+1$ to $K$ at request instance $q$.
From the received signals over the request instances $1,2,\ldots,\lfloor N/s \rfloor$ and the information stored in its cache, i.e., $Z_i$, user $i$ must be able to decode the files $W_i, W_{i+s},\ldots,W_{i+(\lfloor N/s \rfloor-1)s}$. Consider the set of files $ \mathcal{ \bar W}=\{W_1,\ldots,W_{(q-1)s+k-1},W_{(q-1)s+k+1},\ldots,W_{s\lfloor N/s \rfloor} \}$, i.e., the set of all requested files excluding the file, $W_{(q-1)s+k}$, which was requested by user $k$ at the request instance $q$. Therefore, we have
\begin{align}
(s&\lfloor N/s\rfloor-1)F=H(\mathcal{ \bar W}) \nonumber \\
&\leq H(\mathcal{ \bar W})-H(\mathcal{ \bar W}|\bm X_1,\ldots,\bm X_{\lfloor N/s \rfloor}, Z_1,\ldots,Z_s)+\epsilon\label{lowerreliable}\\
&=I(\mathcal{ \bar W};\bm{ \bar X_1},\ldots,\bm {\bar X_{\lfloor N/s \rfloor}}, Z_1,\ldots,Z_s)+\epsilon\\
&=I(\mathcal{ \bar W};\bm{ \bar X_q}, Z_k)+I(\mathcal{ \bar W};\bm{ \bar X_1},\ldots,\bm{ \bar X_{q-1}},\bm{ \bar X_{q+1}},\ldots,\bm{ \bar X_{\lfloor N/s \rfloor}}, \nonumber \\
& \qquad \qquad \quad \qquad Z_1,\ldots,Z_{k-1},Z_{k+1},\ldots,Z_s|\bm{ \bar X_q}, Z_k)+\epsilon.\label{chain}
\end{align}
Step (\ref{lowerreliable}) follows from (\ref{reliableconst}), as the users must be able to decode their requested files utilizing their caches' contents and received signals; the subsequent equality holds since the signals transmitted by the first $s$ users are deterministic functions of their cache contents $Z_1,\ldots,Z_s$. To simplify the notation, we define
\begin{align}
& \bm{\mathcal{X}}=\{\bm{\bar X_1},\ldots,\bm{\bar X_{q-1}},\bm{\bar X_{q+1}},\ldots,\bm{ \bar X_{\lfloor N/s \rfloor}}\} \nonumber \\ &\qquad\mbox{ and } \qquad \bm{\mathcal Z} =\{ Z_1,\ldots,Z_{k-1},Z_{k+1},\ldots,Z_s\}\nonumber.
\end{align}
Now, (\ref{chain}) can be expressed as
\begin{align}
I(&\mathcal{ \bar W};\bm{ \bar X_q}, Z_k)+I(\mathcal{ \bar W};\bm{\mathcal{X}},\bm{\mathcal{Z}}|\bm{ \bar X_q}, Z_k)+\epsilon\nonumber \\& \leq I(\mathcal{ \bar W};\bm{\mathcal{X}},\bm{\mathcal{Z}}|\bm{ \bar X_q}, Z_k)+\epsilon+\delta \label{lowerconf}\\
&=H(\bm{\mathcal{X}},\bm{\mathcal{Z}}|\bm{ \bar X_q}, Z_k)-H(\bm{\mathcal{X}},\bm{\mathcal{Z}}|\mathcal{ \bar W},\bm{ \bar X_q}, Z_k)+\epsilon+\delta\\
&\leq H(\bm{\mathcal{X}},\bm{\mathcal{Z}}|\bm{ \bar X_q}, Z_k)+\epsilon+\delta\\
&\leq H(\bm{\mathcal{X}},\bm{\mathcal{Z}})+\epsilon+\delta\\
&=H(\bm{\mathcal{X}})+H(\bm{\mathcal{Z}}|\bm{\mathcal{X}})+\epsilon+\delta\\
&\leq H(\bm{\mathcal{X}})+H(\bm{\mathcal{Z}})+\epsilon+\delta\\
&\leq \sum_{j=1,j\neq q}^{\lfloor N/s \rfloor} H(\bm {\bar X_j})+\sum_{i=1,i\neq k}^s H(Z_i)+\epsilon+\delta\\
&\leq (\lfloor N/s \rfloor-1)\frac{K-s}{K}RF+(s-1)MF+\epsilon+\delta.
\end{align}
Note that step (\ref{lowerconf}) is due to (\ref{secrtiveconst}). Therefore, we obtain
\begin{equation}
R_T\geq \frac{K[(s\lfloor N/s\rfloor-1)-(s-1)M]}{(\lfloor N/s \rfloor-1)(K-s)}.
\end{equation}
Taking into account all possible cuts, we obtain the lower bound stated in the following theorem.
\begin{theorem}
The achievable secure rate is lower bounded by
\begin{equation}\label{d2dlower}
R_T^{*}\geq \max_{s\in\{1,2,\ldots,\min(K,N/2)\}}\frac{K[(s\lfloor N/s\rfloor-1)-(s-1)M]}{(\lfloor N/s \rfloor-1)(K-s)}.
\end{equation}
\end{theorem}
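As an illustration, the maximization over $s$ can be carried out by direct enumeration; the following Python sketch is our own (the guards against degenerate cuts are ours as well):
\begin{verbatim}
from math import floor

def secure_rate_lb(N, K, M):
    # Sketch: enumerate the cut-set bound over s = 1, ..., min(K, N/2).
    best = 0.0
    for s in range(1, min(K, N // 2) + 1):
        q = floor(N / s)                   # number of request instances
        if q <= 1 or s >= K:               # skip degenerate cuts
            continue
        best = max(best,
                   K * ((s * q - 1) - (s - 1) * M) / ((q - 1) * (K - s)))
    return best
\end{verbatim}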
\begin{remark}
We note that the bound $R_T^{*}\geq \frac{K}{K-1}$, obtained by setting $s=1$ in Theorem 3, can be achieved whenever $M\geq N(K-1)$, using the proposed scheme in subsection \ref{specialcase}.
\end{remark}
In addition, the multiplicative gap between the upper bound in Theorem \ref{them:cen} and the above lower bound is bounded by a constant as stated in the following theorem.
\begin{theorem}\label{them:gap}
For $M\geq 2 +\frac{N}{K-1}$, there exists a constant, $c$, independent of all the system parameters, such that
\begin{equation}
1\leq\frac{R_{T}^{C}}{R_{T}^*}\leq c.
\end{equation}
\end{theorem}
\begin{proof}
See the Appendix.
\end{proof}
\section{Numerical Results}\label{sec:numerical}
\begin{figure}[t]
\includegraphics[width=3.2 in,height=2.2 in]{diff_requirements.eps}
\centering
\caption{\small Comparison between the required transmission rates under different system requirements for $N=K=30$.}\label{fig_diff_requirements}
\end{figure}
In this section, we demonstrate the performance of the proposed schemes numerically. Fig. \ref{fig_diff_requirements} shows the performance of D2D coded caching systems under different requirements. In particular, we compare our system, which provides both secure caching and secure delivery, with the system with only secure delivery \cite{awan2015fundamental} and the one with no secrecy constraints \cite{ji2016fundamental}. For the latter two cases, the rate is equal to zero whenever $M\geq N$, as the entire database can be stored in each cache memory. However, by setting $s=1$ in the lower bound stated in Theorem $3$, we see that the sum rate under secure caching is bounded below by $\frac{K}{K-1}$.
\begin{figure}[t]
\includegraphics[width=3.2 in,height=2.2 in]{single_vs_D2D.eps}
\centering
\caption{\small The achievable secure rates for the single server and D2D coded caching for $N=K=30$.}\label{fig_single_vs_D2D}
\end{figure}
In Fig. \ref{fig_single_vs_D2D}, we compare the performance of our system with the one considered in \cite{ravindrakumar2016fundamental}. As expected, the system in \cite{ravindrakumar2016fundamental}, where the server, with full access to the file shares, is responsible for the delivery phase, achieves a lower transmission rate than the considered setup, where the delivery phase has to be performed by the users, each of which has limited access to the file shares. Interestingly, we observe that the gap between the required transmission rates vanishes as $M$ increases, i.e., the loss due to accessing a limited number of shares at each user is negligible when $M$ is sufficiently large.
Fig. \ref{fig_uppervslower} shows that the gap between the lower and upper bounds decreases as $M$ increases. As mentioned before, Theorem $3$ shows that the sum rate is bounded below by $\frac{K}{K-1}$, by setting $s=1$. For large enough $M$, our proposed scheme achieves a sum rate equal to $\frac{K}{K-1}$, which matches the lower bound.
In Fig. \ref{fig_decen_vs_cen}, we plot the achievable rates under different choices of $L$ in a system with $K=100$. It is worth noting that even with $L=60$, which is much smaller than the number of users in the system ($K$), the gap between the achievable rates of the decentralized and centralized schemes is negligible for realistic values of $M$. In other words, even with an inaccurate lower bound on the number of users in the system, $K$, the proposed decentralized scheme performs very close to the centralized one.
\begin{figure}
\includegraphics[width=3.2 in,height=2.2 in]{uppervslower.eps}
\centering
\caption{\small The upper bound vs the lower bound for $N=K=100$.}\label{fig_uppervslower}
\end{figure}
\begin{figure}
\vspace{.1 in}
\includegraphics[width=3.2 in,height=2.2 in]{decen_vs_cen.eps}
\centering
\caption{\small Achievable rates via decentralized and centralized schemes $N=K=100$.}\label{fig_decen_vs_cen}
\end{figure}
\vspace{-.15 in}
\section{Conclusions }\label{sec:con}
In this work, we have characterized the fundamental limits of secure device-to-device coded caching systems. We have investigated a cache-aided network where the users' requests must be served via D2D communications only. We have imposed a secure caching constraint on all users, i.e., a user cannot obtain any information about any file that it has not requested. We have developed an achievable centralized coded caching scheme for this network, where the server encodes each file using a proper non-perfect secret sharing scheme and generates a set of random keys. The resulting shares and keys are carefully placed in the users' caches during the cache placement phase. After the announcement of the users' demands, each user transmits a one-time padded signal to the remaining users. In addition, we have provided a sequential decentralized scheme that does not require knowledge of the number of active users for cache placement. As a byproduct of the proposed achievability schemes, the system also keeps the files secret from any external eavesdropper that overhears the delivery phase, i.e., secure delivery is also guaranteed. We have derived a lower (converse) bound based on cut-set arguments. Furthermore, we have shown that our proposed scheme is order-optimal in general and optimal in the large memory region. Our numerical results indicate that the gap between the lower and upper bounds decreases as the cache memory capacity increases. Similarly, the performance of the centralized and decentralized schemes is very close for large memories. Overall, we have shown that D2D communications can replace the server in the delivery phase with a negligible transmission overhead. This offers an affirmation of the significant role of D2D communications in upcoming communication systems.
\appendices
\vspace{-.1 in}
\section{Proof of Theorem \ref{them:gap} }
First, we show that the multiplicative gap between the achievable rate in \cite{ravindrakumar2016fundamental} for multicast coded caching and the achievable rate in Theorem \ref{them:cen} can be bounded by a constant. We recall the upper and lower bounds from \cite{ravindrakumar2016fundamental}.
\begin{equation}
R_{\text{Multicast}}\triangleq\frac{K(N+M-1)}{N+(K+1)(M-1)},
\end{equation}
\begin{equation}\label{multicatlower}
R_{\text{Multicast}}^{*}\geq \! \max_{s\in\{1,2,\ldots,\min(K,N/2)\}}\frac{(s\lfloor N/s\rfloor\!-\!1)\!-\!(s\!-\!1)M}{(\lfloor N/s \rfloor\!-\!1)}.
\end{equation}
Therefore, we have
\begin{align}
\frac{R_T^C}{R_{\text{Multicast}}}=\frac{2(N+(K+1)(M-1))}{1+(M-1)K+\sqrt{(1-(M-1)K)^2-4KN}}.
\end{align}
To simplify the notation, let $U=M-1$ and $V=\sqrt{(1-KU)^2-4KN}$. Then, we have
\begin{align}
\frac{R_T^C}{R_{\text{Multicast}}}\!&\!=\frac{2KU}{1\!+\!KU\!+\!V}\!+\!\frac{2U}{1\!+\!KU\!+\!V}\!+\!2\frac{N}{1\!+\!KU\!+\!V},\\
&\leq 2+1+2\frac{N}{1+KU+V}.
\end{align}
Note that the minimum value of $M$ is $\frac{N}{K-1}+2$; thus we have $U\geq \frac{N}{K-1}$ and
\begin{align}
\frac{R_T^C}{R_{\text{Multicast}}}\leq 3+2\frac{N}{1+\frac{KN}{K-1}+V}\leq 5 =c'.
\end{align}
Now, consider
\begin{align}
\frac{R_T^C}{R_T^*}&=\frac{R_T^C}{R_{\text{Multicast}}}\times \frac{R_{\text{Multicast}}}{R_T^*}\leq\frac{R_T^C}{R_{\text{Multicast}}}\times \frac{R_{\text{Multicast}}}{R_{\text{Multicast}}^*},\label{apped1}\\
&\leq c^{'} \times c^{''}=c.
\end{align}
We observe that, for any value of $s$, the RHS of (\ref{d2dlower}) equals $\frac{K}{K-s}$ times the RHS of (\ref{multicatlower}); thus we have $R_{\text{Multicast}}^*\leq R_{T}^*$, which gives (\ref{apped1}). The last step follows from \cite[Theorem 3]{ravindrakumar2016fundamental}, i.e., $\frac{R_{\text{Multicast}}}{R_{\text{Multicast}}^*}\leq c^{''}$, where $c^{''}$ is a constant independent of the system parameters.
\bibliographystyle{IEEEtran}
\label{sec:introduction}
Quantum mechanics and molecular mechanics (QM/MM) coupling methods have been widely used for simulations of large systems in materials science and biology \cite{bernstein09, csanyi04, gao02, kermode08, ogata01, warshel76, zhang12}. A QM model is required to accurately treat
bond breaking/formation, charge transfer, electron excitation and other electronic processes.
However, the QM calculations can only be applied to systems with hundreds/thousands of
atoms due to their demanding computational cost. By contrast, MM methods based on empirical inter-atomic potentials are able to treat millions of atoms or more, but with reduced accuracy and transferability (MM can be very accurate at reference configurations or near equilibrium, but may have significant error for general configurations).
QM/MM coupling methods promise (near-)QM accuracy at (near-)MM computational cost for large-scale atomistic simulations.
In QM/MM simulations, the computational domain is partitioned into QM and MM regions.
The region of primary interest is described by a QM model, and the QM region is embedded in an ambient environment (e.g., bulk crystal) that is described by an MM model.
Some coupling/embedding schemes are applied to link the QM and MM regions.
A natural and fundamental question is how to assign each atom (site) to QM or MM subsystems, in order to achieve the optimal balance between accuracy and computational cost.
Even for static problems this is not straightforward: we should include the active sites in the QM region when the region of interest is fairly localized and relatively well separated from the environment; however, how to find an optimal partition such that the computational cost is optimized without loss of accuracy remains unclear. For dynamic problems, this can be even more challenging, since some sites need to be reassigned as the environment evolves (see, e.g. \cite{csanyi04,duster17,kermode08}).
The goal of the adaptive QM/MM method is to offer the capability of automatic partition of QM/MM subsystems on the fly according to the error distribution in the process of a simulation.
This is a distinct advantage over conventional QM/MM methods, where a static partition is prescribed for the QM and MM subsystems.
The adaptive QM/MM method has been proposed in some applications, including the study of important molecular fragments in macromolecules, monitoring molecules entering/leaving binding sites, and tracking proton transfer via the Grotthuss mechanism (see \cite{duster17} and references therein).
Because the size of the QM region can be set as small as possible (up to the accuracy requirement) in the adaptive QM/MM method, the computational costs can be controlled. Small QM subsystems also facilitate the utilization of high-level
QM theory and make simulations on long time scales feasible, which may potentially lead to new insights on physical systems.
The efficiency of an adaptive algorithm is determined by the accuracy of the {\it a posteriori} error indicator, which provides the (QM/MM) classification criterion for the atomic sites.
Despite various existing implementations of adaptive QM/MM coupling methods, which mostly rely on empirical error indicators \cite{kerdcharoen1996, kerdcharoen2002, heyden2007, watanabe2014, waller2014, boereboom2016}, to the best of our knowledge there is as yet no rigorous {\it a posteriori} error estimate for QM/MM coupling.
In fact, recent developments in a similar field, atomistic/continuum coupling methods for crystalline defects
(see, e.g. \cite{Abdulle:2013,arndtluskin07c,Ortner:qnl.1d,OrtnerWang:2014,prud06,Shenoy:1999a,Wang:2017, Liao2018}) have provided valuable insights also into the study of QM/MM methods.
The purpose of this paper is to construct a rigorously justifiable {\it a posteriori} error indicator that is an upper bound of the true error ({\it reliability}), and further design an adaptive QM/MM algorithm.
In this work, we use a prototypical QM/MM model as a proof of concept, with the tight binding model as the QM model, and focus only on static problems.
We will investigate adaptive QM/MM coupling with more realistic QM models, such as density functional theory (DFT) models, and study dynamic problems in future work.
\subsubsection*{Outline}
In Section \ref{sec:pre} we briefly describe the tight binding model and QM/MM coupling methods for crystalline defects.
In Section \ref{sec:analysis}, we derive a residual based {\it a posteriori} error indicator for QM/MM coupling, prove its reliability, and further provide some sampling strategy to accelerate the evaluation of the error indicator.
In Section \ref{sec:adaptive}, we propose an adaptive QM/MM algorithm that automatically adjusts the QM and MM regions on the fly according to the proposed {\it a posteriori} error indicator.
In Section \ref{sec:numerics}, we present several numerical experiments for point defects in a two dimensional triangular lattice.
In Section \ref{sec:conclusion}, we make concluding remarks and point out some promising directions for future work.
\subsubsection*{Notation}
We use the symbol $\langle\cdot,\cdot\rangle$ to denote an abstract duality
pairing between a Banach space and its dual space. The symbol $|\cdot|$ normally
denotes the Euclidean or Frobenius norm, while $\|\cdot\|$ denotes an operator
norm.
For the sake of brevity of notation, we will denote $A\backslash\{a\}$ by
$A\backslash a$, and $\{b-a~\vert ~b\in A\}$ by $A-a$.
For $E \in C^2(X)$, the first and second variations are denoted by
$\<\delta E(u), v\>$ and $\<\delta^2 E(u) v, w\>$ for $u,v,w\in X$.
For a finite set $A$, we will use $\#A$ to denote the cardinality of $A$.
The symbol $C$ denotes a generic positive constant that may change from one line
of an estimate to the next. When estimating rates of decay or convergence, $C$
will always remain independent of the system size, the configuration of the lattice and the test functions. The dependence of $C$ will normally be clear from the context or stated explicitly.
\section{Model set up}
\label{sec:pre}
\setcounter{equation}{0}
\subsection{The tight binding model and its site energy}
\label{sec:tb}
\defR_{\rm cut}{R_{\rm cut}}
\defN{N}
In this paper, we use the tight binding model as the quantum mechanical model, which is a ``minimalist'' electronic structure model.
For simplicity of presentation, we consider a `two-centre' tight binding model \cite{goringe97,Papaconstantopoulos15} with a single orbital per atom and the identity overlap matrix.
All results in this paper can be extended directly to general non-self-consistent tight binding models, as described in~\cite[\S~2 and Appendix A]{chen15a}.
Consider a many-particle system consisting of $N$ atoms.
Let $d\in\{2,3\}$ be the space dimension and $\Omega\subset\mathbb{R}^d$ be an {\it index set} (or {\it reference configuration}), with $\#\Omega=N$.
An atomic configuration is a map $y : \Omega\to\mathbb{R}^d$ satisfying
\begin{equation} \label{eq:non-interpenetration}
|y(\ell)-y(k)| \geq \mathfrak{m}|\ell-k| \qquad\forall~\ell,k\in\Omega
\end{equation}
with {\em accumulation parameter} $\mathfrak{m} > 0$. In the following, we use $r_{\ell k}:=|y(\ell)-y(k)|$ for brevity of notation.
The `two-centre' tight binding model is formulated in terms of a discrete Hamiltonian, with the matrix elements
\begin{eqnarray}\label{tb-H-elements}
\Big(\mathcal{H}(y)\Big)_{\ell k}
=\left\{ \begin{array}{ll}
h_{\rm ons}\left(\sum_{j\neq \ell}
\varrho\big(|y({\ell})-y(j)|\big)\right)
& {\rm if}~\ell=k \\[1ex]
h_{\rm hop}\big(|y(\ell)-y(k)|\big) & {\rm if}~\ell\neq k,
\end{array} \right.
\end{eqnarray}
where $h_{\rm ons} \in C^{\mathfrak{n}}([0, \infty))$ is the on-site term,
$\varrho \in C^{\mathfrak{n}}([0, \infty))$ represents the charge density
with $\varrho(r) = 0~\forall r\in[R_{\rm cut},\infty)$ and $R_{\rm cut}>0$ stands for the cutoff radius,
$h_{\rm hop} \in C^{\mathfrak{n}}([0, \infty))$ is the hopping term with
$h_{\rm hop}(r)=0~\forall r\in[R_{\rm cut},\infty)$.
Throughout this paper, we will assume that $\mathfrak{n}\geq 4$.
With the above tight binding Hamiltonian $\mathcal{H}$,
we can define the band energy of the system
\begin{eqnarray}\label{e-band}
E^\Omega(y)=\sum_{s=1}^N f(\varepsilon_s)\varepsilon_s,
\end{eqnarray}
where $(\varepsilon_s)_{s = 1}^N$ are the eigenvalues of $\mathcal{H}(y)$ with associated eigenvectors $\psi_s$ such that
\begin{eqnarray}\label{eigen-H}
\mathcal{H}(y)\psi_s = \varepsilon_s\psi_s\quad s=1,2,\cdots,N,
\end{eqnarray}
and $f$ is the Fermi-Dirac distribution function for the energy states of a system consisting of particles that obey the Pauli exclusion principle,
\begin{eqnarray}\label{fermi-dirac}
f(\varepsilon) = \left( 1+e^{(\varepsilon-\mu)/(k_{\rm B}T)} \right)^{-1}
\end{eqnarray}
with $\mu$ a fixed chemical potential, $k_{\rm B}$ the Boltzmann constant, and $T>0$ the temperature of the system.
We note that it is reasonable to fix the chemical potential $\mu$ in the thermodynamic limit of the
grand canonical ensemble of the electrons \cite{chen16}.
Following \cite{finnis03}, we can distribute the energy to each atomic site
\begin{eqnarray}\label{E-El}
E^\Omega(y)=\sum_{\ell\in\Omega} E_{\ell}^{\Omega}(y)
\qquad{\rm with}\qquad
E_{\ell}^\Omega(y) := \sum_{s}f(\varepsilon_s)\varepsilon_s
\left|[\psi_s]_{\ell}\right|^2,
\end{eqnarray}
which formally defines a site energy $E_{\ell}^{\Omega}(y)$. For the purpose of molecular modeling, we need to justify the {\it regularity and locality}, the {\it isometry and permutation invariance}, and the existence of {\it thermodynamic limit} for this site energy.
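To make the above construction concrete, the following Python sketch (our own illustration; the on-site, hopping and density functions are placeholders, not a fitted parametrization) assembles the Hamiltonian \eqref{tb-H-elements} for a finite cluster and evaluates the band energy \eqref{e-band} together with its site decomposition \eqref{E-El}:
\begin{verbatim}
import numpy as np

def hamiltonian(y, h_ons, h_hop, rho):
    # y: (N, d) array of positions; h_ons, h_hop, rho: model functions.
    N = len(y)
    H = np.zeros((N, N))
    for l in range(N):
        for k in range(N):
            if l == k:
                dens = sum(rho(np.linalg.norm(y[l] - y[j]))
                           for j in range(N) if j != l)
                H[l, l] = h_ons(dens)
            else:
                H[l, k] = h_hop(np.linalg.norm(y[l] - y[k]))
    return H

def band_and_site_energies(H, mu=0.0, kT=0.1):
    eps, psi = np.linalg.eigh(H)                 # eigenpairs of H
    f = 1.0 / (1.0 + np.exp((eps - mu) / kT))    # Fermi-Dirac occupations
    site = (psi ** 2) @ (f * eps)                # E_l = sum_s f(e_s) e_s |[psi_s]_l|^2
    return site.sum(), site                      # band energy and site energies
\end{verbatim}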
Suppose $\Lambda$ is a countable index set or reference configuration, and $\Omega\subset\Lambda$ is a finite subset.
We denote by $E_\ell^\Omega$ the site energy with respect to the subsystem $\Omega \subset \Lambda$.
For a domain $A \subset \mathbb{R}^d$, we use the short-hand $E_\ell^{A} := E_\ell^{A \cap \Lambda}$.
In the tight binding Hamiltonian \eqref{tb-H-elements}, the interaction range of each atom is uniformly localized, which satisfies the assumptions on the Hamiltonian matrix elements in \cite{chen15a} (the interactions decay exponentially).
Then the following lemma from \cite[Theorem 3.1 (i)]{chen15a} implies the existence of the {\it thermodynamic limit} of $E_\ell^{\Omega}$ as $\Omega \uparrow \Lambda$, and guarantees that $E_{\ell}^{\Omega}$ defined in \eqref{E-El} can be taken as a proper (approximate) site energy.
\begin{lemma}\label{lemma-thermodynamic-limit}
If $y:\Lambda\rightarrow\mathbb{R}^d$ is a configuration satisfying \eqref{eq:non-interpenetration}, then,
\begin{itemize}
\item[(i)] {\rm (regularity and locality of the site energy)}
$E^{\Omega}_{\ell}(y)$ possesses $j$th order partial derivatives with
$1 \leq j \leq \mathfrak{n}-1$, and there exist positive constants $C_j$ and $\eta_j$ such that
\begin{eqnarray}\label{site-locality-tdl}
\left|\frac{\partial^j E^{\Omega}_{\ell}(y)}{\partial [y(m_1)]_{i_1}
\cdots\partial [y(m_j)]_{i_j}}\right|
\leq C_j e^{-\eta_j\sum_{l=1}^j|y(\ell)-y(m_l)|}
\end{eqnarray}
with $m_k\in\Omega$ and $1\leq i_k\leq d$ for any $1\leq k\leq j$;
\item[(ii)] {\rm (isometry and permutation invariance)}
If $g:\mathbb{R}^d\rightarrow\mathbb{R}^d$ is an isometry, then
$E^{\Omega}_{\ell}(y) = E^{\Omega}_{\ell}(g(y))$;
If $\mathcal{G}:\Omega\rightarrow\Omega$ is a permutation, then
$E^{\Omega}_{\ell}(y) = E^{\mathcal{G}^{-1}(\Omega)}_{\mathcal{G}^{-1}
(\ell)}(y\circ\mathcal{G})$;
\item[(iii)] {\rm (thermodynamic limit)}
$\displaystyle E_{\ell}(y):=\lim_{R\rightarrow\infty} E^{B_R(\ell)}_{\ell}(y)$
exists and satisfies (i), (ii).
\end{itemize}
\end{lemma}
For a finite subset $\Omega \subset \Lambda$, we define the (negative) force
\begin{eqnarray}\label{eq:force}
f^{\Omega}(y):=-\nabla E^{\Omega}(y),
\quad\text{and in component notation,} \quad
\big[f_{\ell}^{\Omega}(y)\big]_i =
-\frac{\partial E^{\Omega}(y)}{\partial [y(\ell)]_i}
\quad 1\leq i\leq d.
\quad
\end{eqnarray}
Using \eqref{E-El}, we have
\begin{eqnarray}\label{Fl-El}
\big[ f_{\ell}^{\Omega}(y) \big]_i = -\sum_{k\in\Omega}
\frac{\partial E_k^{\Omega}(y)}{\partial [y(\ell)]_i},
\end{eqnarray}
which, together with Lemma \ref{lemma-thermodynamic-limit}, yields the
thermodynamic limit of the force $f_{\ell}(y)$, as well as its regularity, locality, and isometry/permutation invariance.
\subsection{Variational formulation for crystalline defects}
\label{sec:defects}
\defR^{\rm def}{R^{\rm def}}
\def\mathscr{R}{\mathcal{R}}
\def\mathcal{N}{\mathcal{N}}
\defr_{\rm cut}{R_{\rm c}}
\def\Lambda^{\rm hom}{\Lambda^{\rm hom}}
\defD^{\rm def}{D^{\rm def}}
\def\Lambda^{\rm def}{\Lambda^{\rm def}}
\defR_{\rm DEF}{R_{\rm DEF}}
\def{\rm Adm}{{\rm Adm}}
\def\mathscr{E}{\mathcal{E}}
\def\Lambda{\Lambda}
\def\dot{\mathscr{U}}^{1,2}{{\mathscr{U}}^{1,2}}
\def\mathscr{U}_0{{\mathscr{U}^{\rm c}}}
\def{D}'{{\sf D}}
\def{\sf e}{{\sf e}}
A rigorous framework for modelling the geometry equilibration of crystalline defects has been developed in
\cite{chenpre_vardef,2013-defects}, which formulates the equilibration of crystal defects as a variational problem in a discrete energy space, and establishes qualitatively sharp far-field decay estimates for the corresponding equilibrium configuration.
We emphasize that these results rely heavily on a ``locality'' assumption on the models, which has been established for the tight binding model in Lemma \ref{lemma-thermodynamic-limit}.
For the sake of simplicity, we only present results on point defects here. All analysis and algorithms can be generalized to straight dislocations (see \cite{chenpre_vardef,ehrlacher13}).
Given $d \in \{1, 2, 3\}$, ${\sf A} \in \mathbb{R}^{d \times d}$ non-singular, $\Lambda^{\rm hom} := {\sf A} \mathbb{Z}^d$ is the
homogeneous reference lattice which represents a perfect single lattice crystal formed by identical atoms
and possessing no defects.
$\Lambda\subset \mathbb{R}^d$ is the reference lattice with some local defects. The mismatch between $\Lambda$ and $\Lambda^{\rm hom}$
represents possible defects, which are contained in some localized defect cores.
The generalization to multiple defects is straightforward.
For simplicity, we assume the defects are near the origin and
\begin{eqnarray}\label{ass:ref_config}
\Lambda \setminus B_{R_{\rm DEF}} = ({\sf A} \mathbb{Z}^d) \setminus B_{R_{\rm DEF}}
\end{eqnarray}
with $R_{\rm DEF}\geq 0$.
For analytical purposes, we assume that there exists a regular partition $\mathcal{T}_{\Lambda}$ of $\mathbb{R}^d$
into triangles if $d=2$ and tetrahedra if $d=3$, whose nodes are the reference sites $\Lambda$.
Recall that the deformed configuration of the infinite lattice $\Lambda$ is a map $y: \Lambda\rightarrow\mathbb{R}^d$, which can be decomposed as
\begin{eqnarray}\label{y-u}
y(\ell) = \ell + u(\ell) \qquad\forall~\ell\in\Lambda
\end{eqnarray}
with $u:\Lambda\rightarrow\mathbb{R}^d$ the displacement with respect to the reference configuration $\Lambda$.
If $\ell\in\Lambda$ and $\ell+\rho\in\Lambda$, then we define the finite difference
$D_\rho u(\ell) := u(\ell+\rho) - u(\ell)$. For a subset $\mathscr{R} \subset \Lambda-\ell$, we
define $D_\mathscr{R} u(\ell) := (D_\rho u(\ell))_{\rho\in\mathscr{R}}$,
and $Du(\ell) := D_{\Lambda-\ell} u(\ell)$.
For $\gamma > 0$ we define the (semi-)norms
\begin{eqnarray*}
\big|Du(\ell)\big|_\gamma := \bigg( \sum_{\rho \in \Lambda-\ell} e^{-2\gamma|\rho|}
\big|D_\rho u(\ell)\big|^2 \bigg)^{1/2}
\quad{\rm and}\quad
\| Du \|_{\ell^2_\gamma} := \bigg( \sum_{\ell \in \Lambda}
|Du(\ell)|_\gamma^2 \bigg)^{1/2}.
\end{eqnarray*}
All (semi-)norms $\|\cdot\|_{\ell^2_\gamma}, \gamma > 0,$ are equivalent, see \cite{ortner12} (also \cite[Appendix A]{chen15b}).
We can now define the natural function space of finite-energy displacements,
\begin{displaymath}
\dot{\mathscr{U}}^{1,2}(\Lambda) := \big\{ u : \Lambda \to \mathbb{R}^d, \| Du \|_{\ell^2_\gamma} < \infty \big\}.
\end{displaymath}
We denote $\dot{\mathscr{U}}^{1,2}(\Lambda)$ by $\dot{\mathscr{U}}^{1,2}$ whenever it is clear from the context.
Let $E_\ell$ denote the site energy we defined in Lemma \ref{lemma-thermodynamic-limit} (iii).
Due to its translation invariance, we define $V_{\ell} : (\mathbb{R}^d)^{\Lambda-\ell}\rightarrow\mathbb{R}$ by
\begin{eqnarray}
V_{\ell}(Du) := E_{\ell}(x_0+u) \qquad{\rm with}\quad
x_0:\Lambda\rightarrow\mathbb{R}^d ~~{\rm and}~~ x_0(\ell)=\ell~~\forall~\ell\in\Lambda.
\end{eqnarray}
For a displacement $u$ with $x_0+u$ satisfying \eqref{eq:non-interpenetration}, we can formally define the energy-difference functional
%
\begin{eqnarray}\label{energy-difference}
\mathcal{E}(u) := \sum_{\ell\in\Lambda}\Big(E_{\ell}(x_0+u)-E_{\ell}(x_0)\Big)
= \sum_{\ell\in\Lambda}\Big(V_{\ell}(Du(\ell))-V_{\ell}(\pmb{0})\Big).
\end{eqnarray}
It was shown in \cite[Theorem 2.7]{chenpre_vardef} (see also \cite{2013-defects}) that,
if $\delta\mathscr{E}(0) \in (\dot{\mathscr{U}}^{1,2})^*$, then $\mathscr{E}$ is well-defined on the space ${\rm Adm}_0$ and in fact
$\mathscr{E} \in C^{\mathfrak{n}-1}({\rm Adm}_0)$, where
\begin{displaymath}
{\rm Adm}_{\frak{m}}(\Lambda) := \big\{ u \in \dot{\mathscr{U}}^{1,2}(\Lambda), ~
|x_0(\ell)+u(\ell)-x_0(m)-u(m)| > \frak{m} |\ell-m|
\quad\forall~ \ell, m \in \Lambda
\big\}.
\end{displaymath}
Whenever it is clear from the context, we will denote ${\rm Adm}_{\frak{m}}(\Lambda)$ by ${\rm Adm}_{\frak{m}}$.
Let ${\rm Adm}_0=\cup_{\mathfrak{m}>0}{\rm Adm}_{\mathfrak{m}}$.
Due to the decay imposed by the condition $u\in\dot{\mathscr{U}}^{1,2}$, any displacement $u\in{\rm Adm}_0$ belongs to ${\rm Adm}_{\mathfrak{m}}$ with some constant $\mathfrak{m}>0$.
We can now rigorously formulate the variational problem for the equilibrium state as
\begin{equation}\label{eq:variational-problem}
\bar u \in \arg\min \big\{ \mathscr{E}(u), u \in {\rm Adm}_0 \big\},
\end{equation}
where ``$\arg\min$'' is understood as the set of local minima.
A minimizer $\bar{u}$ then satisfies the following first and second order (necessary) optimality conditions
\begin{eqnarray}
\label{eq:optimality-1}
\big\< \delta\mathscr{E}(\bar{u}) , v\big\> = 0 , \qquad
\big\< \delta^2\mathscr{E}(\bar{u}) v , v\big\> \geq 0,
& \qquad\forall~v\in\dot{\mathscr{U}}^{1,2} .
\end{eqnarray}
Alternatively, we may consider the force equilibrium formulation instead of the energy minimization formulation:
\begin{eqnarray}\label{eq:problem-force}
{\rm Find}~ \bar{u} \in {\rm Adm}_0, ~~ {\rm s.t.} \quad
f_{\ell}(\bar{u}) = 0 \qquad \forall~\ell\in\Lambda ,
\end{eqnarray}
where
\begin{eqnarray}\label{eq:force-Du}
f_{\ell}(u) = -\nabla_{\ell} \mathscr{E}(u) = - \sum_{\rho\in\ell-\Lambda}
V_{\ell-\rho,\rho}\big(Du(\ell-\rho)\big) + \sum_{\rho\in\Lambda-\ell} V_{\ell,\rho}\big(Du(\ell)\big). \qquad
\end{eqnarray}
Note that any minimizer of \eqref{eq:variational-problem} also solves \eqref{eq:problem-force}.
The second part of \eqref{eq:optimality-1} is usually difficult to verify analytically.
Hence we impose the following strong stability condition on the minimizer $\bar{u}$, namely,
\begin{equation}\label{eq:strong-stab}
\exists~ \bar{c} > 0 ~~\text{ s.t. }\quad
\big\< \delta^2 \mathscr{E}(\bar u) v, v\big\> \geq \bar{c}
\| Dv \|_{\ell^2_\gamma}^2 \qquad \forall v \in\dot{\mathscr{U}}^{1,2}.
\end{equation}
The constant $\bar{c}$ has a mild dependence on the parameter $\gamma$.
Nevertheless, since all norms $\|\cdot\|_{\ell^2_{\gamma}}$ are equivalent, we hereafter ignore this dependence.
The following result from \cite{chenpre_vardef} gives the decay estimates for the equilibrium state for point defects.
\begin{lemma} \label{lemma:regularity}
Let $\gamma>0$.
If $\bar u \in {\rm Adm}_0$ is a strongly stable solution to \eqref{eq:variational-problem}
in the sense that \eqref{eq:strong-stab} is satisfied,
then there exists a constant $C > 0$ such that
%
\begin{align}\label{decay-estimate}
|D\bar{u}(\ell)|_\gamma \leq C (1+|\ell|)^{-d}.
\end{align}
\end{lemma}
\subsection{QM/MM coupling}
\label{sec:qmmm}
\def{\rm Adm}{{\rm Adm}}
\def\mathscr{E}{\mathcal{E}}
\def\Lambda{\Lambda}
\def{D}'{{\sf D}}
\def{\sf e}{{\sf e}}
\def\Lambda^{\rm QM}{\Lambda^{\rm QM}}
\def\Lambda^{\rm MM}{\Lambda^{\rm MM}}
\def\Lambda^{\rm FF}{\Lambda^{\rm FF}}
\def\Lambda^{\rm BUF}{\Lambda^{\rm BUF}}
\def\Omega^{\rm QM}{\Omega^{\rm QM}}
\def\Omega^{\rm MM}{\Omega^{\rm MM}}
\def\Omega^{\rm FF}{\Omega^{\rm FF}}
\def\Omega^{\rm BUF}{\Omega^{\rm BUF}}
\defR_{\rm QM}{R_{\rm QM}}
\defR_{\rm MM}{R_{\rm MM}}
\defR_{\rm FF}{R_{\rm FF}}
\defR_{\rm BUF}{R_{\rm BUF}}
\defV^{\rm MM}{V^{\rm MM}}
\defV^{\rcut}_{\#}{V^{r_{\rm cut}}_{\#}}
\def\mathcal{E}^{\rm H}{\mathcal{E}^{\rm H}}
\def\bar{u}^{\rm H}{\bar{u}^{\rm H}}
\def\widetilde{D}{\widetilde{D}}
\def\Adm^{\rm H}_0{{\rm Adm}^{\rm H}_0}
\def\mathscr{U}^{\rm H}{\mathscr{U}^{\rm H}}
\def\bar{u}^{\rm H}{\bar{u}^{\rm H}}
To solve the variational problem \eqref{eq:variational-problem} approximately, we must restrict the infinite dimensional space ${\rm Adm}_0$ over $\Lambda$ to a finite dimensional subspace over some bounded domain with artificial boundary conditions.
The significant computational cost (roughly speaking, cubic in the number of degrees of freedom) drastically limits the system size that can be handled by QM models (in this paper, the tight binding model).
The QM/MM coupling schemes combine the accuracy of QM models with the low computational cost of MM models, and therefore allow simulations with much larger systems.
Generally speaking, QM/MM coupling schemes can be classified according to
whether they link the QM and MM regions on the level of energies or forces \cite{bernstein09,chen15b}:
the energy-based methods build a hybrid total energy
functional and look for the minimizer of this functional; while the force-based methods solve the force balance equation with QM and MM contributions
and possibly with an interpolation between the two in a transition region.
We will focus on energy-based methods in this paper, and all our analysis and algorithms can be generalized to force-based methods without too much difficulty.
The first step of QM/MM algorithm is to decompose the reference configuration $\Lambda$ into three disjoint sets,
$\Lambda = \Lambda^{\rm QM}\cup \Lambda^{\rm MM}\cup \Lambda^{\rm FF}$, where $\Lambda^{\rm QM}$ denotes the QM region, $\Lambda^{\rm MM}$ denotes the MM region, and $\Lambda^{\rm FF}$ denotes the far-field region where atom positions will be frozen according to the far-field predictor.
Moreover, we define a buffer region $\Lambda^{\rm BUF}\subset\Lambda^{\rm MM}$ surrounding $\Lambda^{\rm QM}$ such that all atoms in $\Lambda^{\rm BUF}\cup\Lambda^{\rm QM}$ are involved in the evaluation of the site energies in $\Lambda^{\rm QM}$ using the tight binding model (see Figure \ref{qmmmgeom} for a schematic plot of the case of a two dimensional point defect).
More precisely, we require
\begin{eqnarray}\label{buf}
B_{r_{\rm cut}}(\ell) \subset \Lambda^{\rm QM}\cup\Lambda^{\rm BUF} \qquad\forall~\ell\in\Lambda^{\rm QM}
\end{eqnarray}
with some cutoff distance $r_{\rm cut}>0$.
Due to the locality in Lemma \ref{lemma-thermodynamic-limit},
the error from truncating to the buffer layer $\Lambda^{\rm BUF}$ decays exponentially fast as $r_{\rm cut}$ increases. Therefore, $E^{\Lambda^{\rm BUF}\cup\Lambda^{\rm QM}}_{\ell}$ is a good approximation of $E_{\ell}$ for sufficiently large $r_{\rm cut}$.
For simple cases, we can use balls centred at the defect core to decompose $\Lambda$,
and use parameters $R_{\rm QM}$, $R_{\rm MM}$ and $R_{\rm BUF}(\geqr_{\rm cut})$ to represent the respective radii
(see also Figure \ref{fig:qmmmgd} for a schematic plot).
In the MM region, we approximate the tight binding site potential $V_{\ell}$ by some MM site potential $V^{\rm MM}(Du(\ell))$,
which will be constructed such that:
(a) it is cheap to evaluate, usually an explicit function of the atomic configuration;
(b) it only depends on finitely many atoms within a finite range neighbourhood, say, only on sites in $B_{r_{\rm cut}}(\ell)$;
(c) it is accurate enough when the local atomic configuration is close to perfect lattice.
Note that when $\ell\in \Lambda^{\rm MM}$ is far away from defects, e.g. $R_{\rm QM}>R_{\rm DEF}+r_{\rm cut}$, the potential $V^{\rm MM}$ becomes homogeneous and does not depend on $\ell$.
Typically, we can use a Taylor expansion with respect to the reference configuration $x_0$ as follows (see also \cite[eq. (36)]{chen15b}).
Define $V^{\rcut}_{\#}:\big(\mathbb{R}^d\big)^{\mathcal{R}}\rightarrow\mathbb{R}$ as,
\begin{eqnarray*}
V^{\rcut}_{\#}\big(D_{\mathcal{R}}u(\ell)\big) := E_{\ell}^{\Lambda\cap B_{r_{\rm cut}}(\ell)}(x_0+u) \quad\forall ~ |\ell|>R_{\rm DEF}+r_{\rm cut}
\quad{\rm with}~~\mathcal{R}=B_{r_{\rm cut}}\cap\big(\Lambda^{\rm hom}\backslash 0\big) .
\end{eqnarray*}
The MM potential is given by
\begin{eqnarray}\label{taylor}
V^{\rm MM}\big({\bm g}\big)
:= V^{\rcut}_{\#}({\bf 0}) + \sum_{j=1}^k \frac{1}{j!} \delta^j V^{\rcut}_{\#}({\bf 0})\left[{\bm g}^{\otimes j}\right]
\quad{\rm with}~~k\geq 2,
\end{eqnarray}
where $\delta^j V^{\rcut}_{\#}({\bf 0})\left[{\bm g}^{\otimes j}\right]$ denotes the $j$-th order variations,
e.g., $ \deltaV^{\rcut}_{\#}({\bf 0})\left[{\bm g}\right] = \langle\deltaV^{\rcut}_{\#}({\bf 0}),{\bm g}\rangle$ and
$\delta^2V^{\rcut}_{\#}({\bf 0})\left[{\bm g}^{\otimes 2}\right] = \langle\delta^2V^{\rcut}_{\#}({\bf 0}){\bm g},{\bm g}\rangle$.
This construction is used throughout the numerical experiments in Section \ref{sec:numerics}.
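As an illustration of this construction, the coefficients of \eqref{taylor} with $k=2$ can be assembled by finite differences (a minimal sketch, assuming the exact site potential can be evaluated on the stencil; all names and the difference scheme are ours):
\begin{verbatim}
import numpy as np

def make_Vmm(V, m, d=2, h=1e-4):
    # V: exact site potential on an m-site stencil; V(g), g of shape (m, d).
    n = m * d
    g0 = np.zeros(n)
    Vf = lambda g: V(g.reshape(m, d))
    V0, I = Vf(g0), np.eye(n)
    grad = np.array([(Vf(g0 + h * I[i]) - Vf(g0 - h * I[i])) / (2 * h)
                     for i in range(n)])         # first variation
    hess = np.zeros((n, n))                      # second variation
    for i in range(n):
        for j in range(n):
            hess[i, j] = (Vf(g0 + h * (I[i] + I[j]))
                          - Vf(g0 + h * (I[i] - I[j]))
                          - Vf(g0 - h * (I[i] - I[j]))
                          + Vf(g0 - h * (I[i] + I[j]))) / (4 * h * h)
    def Vmm(g):                                  # second order Taylor expansion
        gv = g.reshape(-1)
        return V0 + grad @ gv + 0.5 * gv @ (hess @ gv)
    return Vmm
\end{verbatim}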
The QM/MM hybrid energy difference functional approximates the QM energy difference functional $\mathscr{E}$ by
\begin{eqnarray}\label{eq:hybrid_energy}
\quad \mathscr{E}^{\rm H}(u)
= \sum_{\ell\in \Lambda^{\rm QM}}
\Big( V_{\ell}\big(D u(\ell)\big) - V_{\ell}\big(\pmb{0}\big) \Big)
+ \sum_{\ell\in \Lambda^{\rm MM}\cup\Lambda^{\rm FF}}
\Big( V^{\rm MM}\big(D u(\ell)\big) - V^{\rm MM}\big(\pmb{0}\big) \Big) \qquad
\end{eqnarray}
and replace the admissible set ${\rm Adm}_0$ by
\begin{eqnarray}\label{e-mix-space}
{\rm Adm}_{0}^{\rm H} := {\rm Adm}_{0} \cap \mathscr{U}^{\rm H}
\qquad \text{with} \qquad
\mathscr{U}^{\rm H} := \left\{ u \in \dot{\mathscr{U}}^{1,2} ~\lvert~ u=0~{\rm in}~\Lambda^{\rm FF} \right\} .
\end{eqnarray}
Finally, the energy-based QM/MM coupling scheme, as an approximation of \eqref{eq:variational-problem}, is the following finite dimensional minimization problem:
\begin{eqnarray}\label{problem-e-mix}
\bar{u}^{\rm H} \in \arg\min\big\{ \mathcal{E}^{\rm H}(u) ~\lvert~ u\in \Adm^{\rm H}_0 \big\}.
\end{eqnarray}
Let $\bar{u}^{\rm H}$ be the approximate equilibrium state of \eqref{problem-e-mix}. The Taylor expansion construction of the MM site potential about the far-field lattice state (see \cite[\S 4.1]{chen15b})
and the decay estimate in Lemma \ref{lemma:regularity} lead to the convergence of $\bar{u}^{\rm H}$ to $\bar u$ and {\it a priori} error estimates with respect to the size of QM
and MM regions (see \cite[\S 4.2]{chen15b}).
To be more precise, for a two dimensional triangular lattice with point defects, if the MM site potential is given by second order Taylor expansion \eqref{taylor},
then we have the following {\it a priori} error estimate for the QM/MM approximation \eqref{problem-e-mix} (a special case of \cite[Theorem 4.1]{chen15b})
\begin{eqnarray}\label{a_priori}
\|\bar{u}^{\rm H}-\bar{u}\|_{\dot{\mathscr{U}}^{1,2}} \leq C\Big( R_{\rm QM}^{-3} + R_{\rm MM}^{-1} + \exp(-\kappar_{\rm cut}) \Big)
\end{eqnarray}
with some constants $C,\kappa>0$ independent of $R_{\rm QM}$, $R_{\rm MM}$ and $r_{\rm cut}$.
We observe immediately from this estimate that, to balance different contributions to the error and achieve (quasi) optimal computational costs,
one should take $R_{\rm MM}\approx R_{\rm QM}^3$ for sufficiently large $r_{\rm cut} \approx \log R_{\rm MM}$.
In our analysis and algorithms, the MM potentials need not be restricted to the construction \eqref{taylor} or those in \cite{chen15b};
it suffices to assume that the QM/MM approximation $\bar{u}^{\rm H}$ converges to the exact equilibrium in the sense of
\begin{eqnarray}\label{ass:convergence_QMMM}
\lim_{R_{\rm QM}\rightarrow\infty} \|\bar{u}^{\rm H}-\bar{u}\|_{\dot{\mathscr{U}}^{1,2}} = 0.
\end{eqnarray}
This systematic convergence \eqref{ass:convergence_QMMM} is a basic requirement for a reliable QM/MM scheme.
\section{A posteriori error estimates}
\label{sec:analysis}
\setcounter{equation}{0}
In this section, we derive an {\it a posteriori} error indicator for QM/MM approximations and show its reliability, i.e., that the true error is bounded from above by the error indicator.
Furthermore, we design certain sampling techniques to improve the efficiency of evaluating the indicator in practical calculations.
\subsection{Residual estimates}
\label{sec:res}
For any solution $\bar{u}^{\rm H}\in\dot{\mathscr{U}}^{1,2}$ of the QM/MM approximation \eqref{problem-e-mix}, we define the residual $\res{\bar{u}^{\rm H}}$ as a functional on $\dot{\mathscr{U}}^{1,2}$:
\begin{eqnarray}\label{eq:res} \nonumber
\res{\bar{u}^{\rm H}}(v) := \big\< \delta\mathscr{E}(\bar{u}^{\rm H}),v \big\>
= \sum_{\ell\in\Lambda}\big\< \delta V_{\ell}(D\bar{u}^{\rm H}), Dv(\ell) \big\>,
\qquad\forall~v\in\dot{\mathscr{U}}^{1,2} .
\end{eqnarray}
Let $\|\cdot\|_{-1}$ denote the dual norm of $\dot{\mathscr{U}}^{1,2}$. The following lemma indicates that $\|\res{\bar{u}^{\rm H}}\|_{-1}$
provides both lower and upper bounds for the approximation error.
\begin{lemma}\label{lemma:res}
Let $\bar{u}$ and $\bar{u}^{\rm H}$ be the solutions of \eqref{eq:variational-problem} and \eqref{problem-e-mix}, respectively.
If $\bar{u}$ is strongly stable in the sense of \eqref{eq:strong-stab}
and $R_{\rm QM}$ is sufficiently large, then there exist constants $c$ and $C$ such that
\begin{eqnarray}\label{res-bound}
c\|\bar{u}-\bar{u}^{\rm H}\|_{\dot{\mathscr{U}}^{1,2}} \leq \|\res{\bar{u}^{\rm H}}\|_{-1} \leq C\|\bar{u}-\bar{u}^{\rm H}\|_{\dot{\mathscr{U}}^{1,2}} .
\end{eqnarray}
\end{lemma}
\begin{proof}
Let $r>0$ be such that $B_r(\bar{u}) \subset {\rm Adm}_{\frak{m}}$ for some $\frak{m} > 0$.
%
Since we have assumed $\mathfrak{n}\geq 4$, it follows from Lemma \ref{lemma-thermodynamic-limit} that $\mathscr{E}\in C^3({\rm Adm}_0)$. Therefore $\delta\mathscr{E}$ and $\delta^2\mathscr{E}$ are Lipschitz continuous in $B_r(\bar{u})$ with
uniform Lipschitz constants $L_1$ and $L_2$, i.e., for any $w\in B_r(\bar{u})$
\begin{align}
\label{proof-4-1-2}
\|\delta\mathscr{E}(\bar{u})-\delta\mathscr{E}(w)\|
&\leq L_1\|D\bar{u}-Dw\|_{\ell^2_\gamma},
\\[1ex]
\label{proof-4-1-3}
\|\delta^2\mathscr{E}(\bar{u})-\delta^2\mathscr{E}(w)\|
&\leq L_2\|D\bar{u}-Dw\|_{\ell^2_\gamma} .
\end{align}
Using \eqref{ass:convergence_QMMM}, we can take $R_{\rm QM}$ sufficiently large such that $\bar{u}^{\rm H}\in B_r(\bar{u})$.
It follows from first order optimality \eqref{eq:optimality-1} and the Lipschitz continuity of $\delta\mathscr{E}$ \eqref{proof-4-1-2} that
\begin{eqnarray*}
\res{\bar{u}^{\rm H}}(v) = \big\< \delta\mathscr{E}(\bar{u}^{\rm H})-\delta\mathscr{E}(\bar{u}) , v \big\>
\leq L_1\|D\bar{u}-D\bar{u}^{\rm H}\|_{\ell^2_\gamma} \|Dv\|_{\ell^2_\gamma}
\qquad\forall~v\in\dot{\mathscr{U}}^{1,2} ,
\end{eqnarray*}
which leads to the lower bound estimate
\begin{eqnarray}\label{proof-4-1-4}
\|\res{\bar{u}^{\rm H}}\|_{-1} \leq C\|\bar{u}-\bar{u}^{\rm H}\|_{\dot{\mathscr{U}}^{1,2}}
\end{eqnarray}
with the constant $C$ depending on $\gamma$ and $L_1$.
For the upper bound estimate, the Lipschitz continuity of $\delta^2\mathscr{E}$ \eqref{proof-4-1-3} and the strong stability condition \eqref{eq:strong-stab} imply the existence of $\tilde{r}\in (0,r)$, such that for any $w\in B_{\tilde{r}}(\bar{u})$
\begin{eqnarray*}
\big\< \delta^2 \mathscr{E}(w) v, v\big\> \geq \frac{\bar{c}}{2}
\| Dv \|_{\ell^2_\gamma}^2 \qquad \forall v \in\dot{\mathscr{U}}^{1,2}.
\end{eqnarray*}
Note that \eqref{ass:convergence_QMMM} implies that for $R_{\rm QM}$ large enough, $\bar{u}^{\rm H}\in B_{\tilde{r}}(\bar{u})$.
%
Therefore,
\begin{multline*}
\quad
\|\res{\bar{u}^{\rm H}}\|_{-1}\|D\bar{u}-D\bar{u}^{\rm H}\|_{\ell^2_\gamma}
~\geq~ \res{\bar{u}^{\rm H}}(\bar{u}-\bar{u}^{\rm H}) = \big\< \delta\mathscr{E}(\bar{u}^{\rm H})-\delta\mathscr{E}(\bar{u}) , \bar{u}^{\rm H}-\bar{u} \big\>
\\[1ex]
~=~ \big\< \delta^2\mathscr{E}(w)(\bar{u}^{\rm H}-\bar{u}) , \bar{u}^{\rm H}-\bar{u} \big\>
~\geq~ \frac{\bar{c}}{2}\|D\bar{u}-D\bar{u}^{\rm H}\|^2_{\ell^2_\gamma},
\qquad
\end{multline*}
where $w=t\bar{u}+(1-t)\bar{u}^{\rm H}$ with some $t\in(0,1)$.
This leads to the estimate
\begin{eqnarray}\label{proof-4-1-5}
\|\res{\bar{u}^{\rm H}}\|_{-1} \geq c\|\bar{u}-\bar{u}^{\rm H}\|_{\dot{\mathscr{U}}^{1,2}}
\end{eqnarray}
with some constant $c$ depending on $\gamma$ and $\bar{c}$.
%
We complete the proof by combining \eqref{proof-4-1-4} and \eqref{proof-4-1-5}.
\end{proof}
\subsection{A practical a posteriori error indicator}
\label{sec:posteriori}
We observe from Lemma \ref{lemma:res} that an ideal {\it a posteriori} error indicator is
\begin{eqnarray}\label{eq:ideal_posteriori_indicator}
\eta^{\rm ideal}(\bar{u}^{\rm H}) := \|\res{\bar{u}^{\rm H}}\|_{-1}.
\end{eqnarray}
The upper and lower bound estimates for the residual dual norm ensure the reliability and efficiency of this error indicator.
However, the ideal error indicator \eqref{eq:ideal_posteriori_indicator} is not computable and cannot be used directly in practice,
since $\delta V_{\ell}(\bar{u}^{\rm H})$ is very complicated to compute for a QM model.
The aim of this section is to construct an {\it a posteriori} error indicator that can be computed from the QM/MM approximation $\bar{u}^{\rm H}$ with moderate
computational cost and meanwhile, can control the error $\|\bar{u}^{\rm H}-\bar{u}\|_{\dot{\mathscr{U}}^{1,2}}$ from above.
A natural idea is to use the force $f_{\ell}(\bar{u}^{\rm H})$ (defined by \eqref{eq:force-Du}) to construct the error indicator,
since
(a) the equilibrium state $\bar{u}$ satisfies the force balance equation $f_{\ell}(\bar{u})=0~(\forall~\ell\in\Lambda)$,
and hence $\big|f_{\ell}(\bar{u}^{\rm H})\big|=\big|f_{\ell}(\bar{u}^{\rm H})-f_{\ell}(\bar{u})\big|$ can be related to the error $\|\bar{u}-\bar{u}^{\rm H}\|$,
(b) compared with $\delta V_{\ell}(\bar{u}^{\rm H})$, the QM force $f_{\ell}(\bar{u}^{\rm H})$ is much easier to compute
by using the Hellmann-Feynman formula (see e.g. \cite{martin04}).
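More precisely, writing $g(\varepsilon):=f(\varepsilon)\varepsilon$, first order perturbation theory applied to the band energy \eqref{e-band} gives the standard identity (recorded here for the reader's convenience; stated for simple eigenvalues)
\begin{eqnarray*}
\big[f_{\ell}^{\Omega}(y)\big]_i
= -\sum_{s=1}^{N} g'(\varepsilon_s)\,
\psi_s^{\rm T}\,\frac{\partial \mathcal{H}(y)}{\partial [y(\ell)]_i}\,\psi_s ,
\end{eqnarray*}
so that only first derivatives of the Hamiltonian matrix elements \eqref{tb-H-elements} are required, and no derivatives of the eigenpairs.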
We show in the following theorem an {\it a posteriori} error indicator, which gives an upper bound of the error $\|\bar{u}-\bar{u}^{\rm H}\|_{\dot{\mathscr{U}}^{1,2}}$.
\begin{theorem}\label{theorem:upperbound}
Let $\bar{u}$ and $\bar{u}^{\rm H}$ be the solutions to \eqref{eq:variational-problem} and \eqref{problem-e-mix}, respectively.
If $\bar{u}$ is strongly stable in the sense of \eqref{eq:strong-stab}
and $R_{\rm QM}$ is sufficiently large, then there exists a constant $C$ such that
\begin{eqnarray}\label{eta-upperbound}
\|\bar{u}-\bar{u}^{\rm H}\|_{\dot{\mathscr{U}}^{1,2}} \leq C\eta(\bar{u}^{\rm H}) ,
\end{eqnarray}
where
\begin{eqnarray}\label{eq:error-indicator-fl}
\eta(\bar{u}^{\rm H}) := \left\{
\begin{array}{ll}
\displaystyle
\sum_{\ell\in\Lambda} \log(2+|\ell|)\cdot\big|f_{\ell}(\bar{u}^{\rm H})\big|, \quad & {\rm if} ~ d=2,
\\[1ex]
\displaystyle
\left(\sum_{\ell\in\Lambda} \big|f_{\ell}(\bar{u}^{\rm H})\big|^{\frac65}\right)^{\frac56}, & {\rm if}~ d=3.
\end{array} \right.
\end{eqnarray}
\end{theorem}
\begin{proof}
For a displacement $v\in\dot{\mathscr{U}}^{1,2}$, we define the equivalence classes
\begin{eqnarray*}
[v] := \left\{ v+t ~:~ t \in \mathbb{R}^d \right\} .
\end{eqnarray*}
Due to the translation invariance in Lemma \ref{lemma-thermodynamic-limit} (ii),
there is no need to make the distinction between $v$ and $[v]$.
More specifically, we have that for any $w\in[v]$, $Dv(\ell)=Dw(\ell)$
and $f_{\ell}(v)=f_{\ell}(w),~\forall~\ell\in\Lambda$.
%
It follows from \cite[Proposition 12]{ortner12} and \cite[Theorem 2.2]{OrtnerSuli12} that
for any $v\in\dot{\mathscr{U}}^{1,2}$,
\begin{eqnarray}
\label{estimateOS_d2}
&|v(\ell)-v(0)| \leq C\|v\|_{\dot{\mathscr{U}}^{1,2}}\log(2+|\ell|) & {\rm if}~d=2 ~~ {\rm and}
\\[1ex]
\label{estimateOS_d3}
& \text{there exists a } v_0\in[v] \text{ such that } v_0\in \ell^6 & {\rm if} ~d=3.
\end{eqnarray}
For $d=2$, \eqref{estimateOS_d2} and the fact $\sum_{\ell\in\Lambda}f_{\ell}(\bar{u}^{\rm H}) =0$
imply that, for any $v\in\dot{\mathscr{U}}^{1,2}$,
\begin{multline*}
\qquad
\res{\bar{u}^{\rm H}}(v) = \<\delta\mathscr{E}(\bar{u}^{\rm H}),v\> = \sum_{\ell\in\Lambda} f_{\ell}(\bar{u}^{\rm H})\cdot v(\ell)
= \sum_{\ell\in\Lambda} f_{\ell}(\bar{u}^{\rm H})\cdot\tilde{v}(\ell)
\\
\leq C \sum_{\ell\in\Lambda} \big|f_{\ell}(\bar{u}^{\rm H})\big|\,\log(2+|\ell|)\,\|\tilde{v}\|_{\dot{\mathscr{U}}^{1,2}} \leq C \eta(\bar{u}^{\rm H})\|v\|_{\dot{\mathscr{U}}^{1,2}},
\qquad\qquad\qquad
\end{multline*}
where $\tilde{v}=v-v(0)\in[v]$.
This inequality together with \eqref{res-bound} completes the proof of $d=2$ case.
For $d=3$, we can choose $v_0\in[v]$ as in \eqref{estimateOS_d3} to obtain that,
for any $v\in\dot{\mathscr{U}}^{1,2}$,
\begin{eqnarray*}
\res{\bar{u}^{\rm H}}(v) = \sum_{\ell\in\Lambda} f_{\ell}(\bar{u}^{\rm H})\cdot v_0(\ell)
\leq C \big\|f(\bar{u}^{\rm H})\big\|_{\ell^{\frac65}} \|v_0\|_{\ell^6} \leq C \eta(\bar{u}^{\rm H})\|v\|_{\dot{\mathscr{U}}^{1,2}},
\end{eqnarray*}
and a similar argument completes the proof.
\end{proof}
\begin{remark}
Although we have stated the {\it a posteriori} error indicators for both $d=2$ and $d=3$ in Theorem \ref{theorem:upperbound}, we will focus on the implementation for two dimensional systems in this paper.
Three dimensional systems will be investigated in future work.
\end{remark}
\begin{remark}\label{remark:lowerbound}
By approximating $\|\res{\bar{u}^{\rm H}}\|_{-1}$ with the error indicator $\eta(\bar{u}^{\rm H})$,
we keep only the upper bound estimate in \eqref{res-bound}, but may have lost the lower bound.
The design of reliable and efficient error indicator (with both upper and lower bound estimates) for QM/MM schemes may require more involved constructions and analysis, and will be investigated in our future work. See \cite{HW_SY_2018_Efficiency_A_Post_1D} for a recent advance in this direction.
\end{remark}
The error indicator \eqref{eq:error-indicator-fl} is still not computable, since
the sum over $\ell\in\Lambda$ is an infinite sum,
and the force $f_{\ell}(\bar{u}^{\rm H})$ is the tight binding (QM) force of the infinite body.
We use the cutoff radius $r_{\rm cut}$ to truncate the simulation domain and to compute the force; hence, the computable approximation of \eqref{eq:error-indicator-fl} can be written as
\begin{eqnarray}\label{eta_posteriori}
\eta_{r_{\rm cut}}(\bar{u}^{\rm H}) := \sum_{\ell\in\Omega_{\rm c}} \log(2+|\ell|)\cdot\big|f^{r_{\rm cut}}_{\ell}(\bar{u}^{\rm H})\big|
\qquad{\rm with}\qquad
\Omega_{\rm c} = \Lambda\bigcap\Big(\bigcup_{\ell\in\Lambda^{\rm QM}\cup\Lambda^{\rm MM}}B_{r_{\rm cut}}(\ell)\Big) ,
\end{eqnarray}
where $f^{r_{\rm cut}}_{\ell}(\bar{u}^{\rm H}) :=f_{\ell}^{B_{r_{\rm cut}}(\ell)}(\bar{u}^{\rm H})$ is the force computed from
a finite system in the ball $B_{r_{\rm cut}}(\ell)$, defined by \eqref{Fl-El}.
Thanks to the locality result in Lemma \ref{lemma:regularity}, the error of this approximated error indicator (compared with \eqref{eq:error-indicator-fl}) decays exponentially fast to 0 as $r_{\rm cut}$ increases.
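For illustration, the truncated indicator can be evaluated as in the following Python sketch (our own pseudocode; \texttt{force\_qm} is a hypothetical routine returning the tight binding force computed on the finite cluster $B_{r_{\rm cut}}(\ell)$, e.g. via the Hellmann--Feynman formula). The sampled variant of the next subsection is obtained by replacing the loop over sites with a loop over elements, weighting each repatom contribution by $w(T)$.
\begin{verbatim}
import numpy as np

def eta_rcut(sites, u, force_qm, rcut):
    # sites: the truncated domain Omega_c; u: QM/MM equilibrium displacement.
    eta = 0.0
    for l in sites:
        f = force_qm(l, u, rcut)       # QM force from the cluster B_rcut(l)
        eta += np.log(2.0 + np.linalg.norm(l)) * np.linalg.norm(f)
    return eta
\end{verbatim}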
\subsection{A sampling strategy for the evaluation of error indicator}
\label{sec:sample}
By Theorem \ref{theorem:upperbound}, the error indicator \eqref{eq:error-indicator-fl} bounds the error of the approximate equilibrium state $\bar{u}^{\rm H}$ from above.
It is more useful to have a local error indicator,
which can direct the automatic adjustment of the QM and MM regions.
Furthermore, the evaluation of the error indicator \eqref{eta_posteriori} is still expensive in real simulations,
since it requires the computation of the QM forces $f_{\ell}(\bar{u}^{\rm H})$ at all sites $\ell\in\Omega_{\rm c}$.
Therefore, we propose a strategy to partition the simulation domain into local elements and construct a local error indicator on each element.
A good partition can also help us compute the {\it a posteriori} error indicator efficiently.
We can sample one or a few sites in each element, and compute the force on the sampled sites to represent the error distribution in this element.
We will focus on the two dimensional systems in this paper. The three dimensional implementation is in principle similar but technically more involved and we will leave it to future work.
We decompose the simulation domain $\Omega_{\rm c}$ in \eqref{eta_posteriori} with a partition $\mathcal{T}:=\{T\}$, such that $\Omega_{\rm c}=\cup_{T\in\mathcal{T}} T$.
We can then approximate $\eta_{r_{\rm cut}}(\bar{u}^{\rm H})$ by
\begin{eqnarray}\label{eta_sample}
\nonumber
\eta_{r_{\rm cut}}(\bar{u}^{\rm H}) &=& \sum_{T\in\mathcal{T}}\sum_{\ell\in T} \log(2+|\ell|)\cdot\big|f^{r_{\rm cut}}_{\ell}(\bar{u}^{\rm H})\big|
\\[1ex]
&\approx& \sum_{T\in\mathcal{T}} w(T) \log(2+|\tilde{\ell}(T)|)\cdot\big|f_{\tilde{\ell}(T)}^{r_{\rm cut}}(\bar{u}^{\rm H})\big|
~=:~ \eta_{r_{\rm cut}}^{\mathcal{T}}(\bar{u}^{\rm H}),
\end{eqnarray}
where $\tilde{\ell}(T)$ denotes the repatom (representative atom) of $T$, and $w(T)$ gives the weight of the element $T\in\mathcal{T}$
(e.g., one can take $w(T)$ to be the number of sites in $T$, or the relative area of $T$).
We use
\begin{eqnarray}\label{eta_local}
\eta_{r_{\rm cut}}^{T}(\bar{u}^{\rm H}) = w(T) \log(2+|\tilde{\ell}(T)|)\cdot\big|f_{\tilde{\ell}(T)}^{r_{\rm cut}}(\bar{u}^{\rm H})\big|
\end{eqnarray}
to denote the local error indicator on the element $T$, which provides the information
for the model adjustments in the adaptive algorithm. Multiple repatoms within one simplex $T$ are also possible and can be chosen by, e.g., a Gauss--Lobatto quadrature rule.
\begin{remark}
The motivation behind the partition and sampling is that the force distribution is smooth in most of the domain,
for example, in the region away from the QM region (see e.g. Figure \ref{qmmmfrc}).
Therefore, the sampling can retain the accuracy of the error indicator while at the same time significantly reducing the computational cost.
\end{remark}
The choice of partition $\mathcal{T}$ is crucial for the accuracy and efficiency of the evaluation of the error indicator.
In this paper, we focus on local point defects, and partition the simulation domain in polar coordinates.
The following partition strategy generates a mesh graded along the radial direction. The efficiency of this strategy is shown by our numerical experiments for some prototypical problems (see Section \ref{sec:numerics}).
For two-dimensional, quasi-spherically symmetric defect configurations (for example, a single point defect, a microcrack, or multiple point defects), we use the following algorithm.
\vskip 0.2cm
\begin{algorithm}[H]
\caption{Graded mesh generation}
\label{alg:grademesh}
\begin{enumerate}
\item
Let $n_{\theta}\in\mathbb{Z}_+$ and $\tau=2\pi/n_{\theta}$.
Set $0=\theta_0 < \theta_1 < \cdots < \theta_{n_{\theta}} = 2\pi$ with $\theta_{j+1} = \theta_j + \tau$.
\item
Let $n_r\in\mathbb{Z}_+$ with $n_r = n_{\rm QM} + n^1_{\rm MM} + n^2_{\rm MM} + n_{\rm FF}$.
%
Let
\begin{displaymath}
\begin{array}{ll}
h_1\geq\cdots\geq h_{n_{\rm QM}}>0 , & \text{ from defect core to QM/MM interface }\\
0<h_{n_{\rm QM}+1}<\cdots<h_{n_{\rm QM}+n^1_{\rm MM}} , & \text{ from QM/MM interface to the coarsest $T\in \mathcal{T}$} \\
h_{n_{\rm QM}+n^1_{\rm MM}+1}>\cdots>h_{n_{\rm QM}+n^1_{\rm MM}+n^2_{\rm MM}} >0 , & \text{ from the coarsest $T\in\mathcal{T}$ to MM/FF interface}\\
0<h_{n_r-n_{\rm FF}+1}<\cdots<h_{n_r} , & \text{ far field}
\end{array}
\end{displaymath}
such that
$\displaystyle \sum_{k=1}^{n_{\rm QM}} h_k = R_{\rm QM}$,
$\displaystyle \sum_{k=1}^{n_r-n_{\rm FF}} h_k = R_{\rm MM}$
and $\displaystyle \sum_{k=1}^{n_r} h_k = R_{\rm MM} + r_{\rm cut}$.
Set $0=r_0<r_1<\cdots<r_{n_r}$ with $r_k = r_{k-1} + h_k$.
\item
Let $\mathcal{T}=\{T_{ij}\}$, $T_{ij} = (r_{i-1}, r_i]\times(\theta_{j-1}, \theta_j]$ in polar coordinate
with $i = 1,\cdots, n_r$ and $j = 1,\cdots, n_{\theta}$.
Let $\tilde{\ell}_{ij}\in T_{ij}$ be the site that is closest to the centre of $T_{ij}$,
and $w(T_{ij})$ be the number of atoms that lie in $T_{ij}$.
\end{enumerate}
\end{algorithm}
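As an illustration of Step 3, the following Python sketch bins lattice sites into the polar elements $T_{ij}$ and extracts the repatoms and weights. The ring widths $h_k$ are assumed given, and the tie-breaking conventions at element boundaries are a choice of this sketch rather than part of the algorithm.
\begin{verbatim}
import numpy as np

def assign_elements(sites, h, n_theta):
    """Assign 2D sites to polar elements; return per-element
    (repatom, weight), where the repatom is the site closest to the
    element centre and the weight is the number of sites in it."""
    r = np.concatenate(([0.0], np.cumsum(h)))  # 0 = r_0 < ... < r_{n_r}
    tau = 2.0 * np.pi / n_theta                # angular step
    buckets = {}
    for (x, y) in sites:
        rad = np.hypot(x, y)
        ang = np.arctan2(y, x) % (2.0 * np.pi)
        i = int(np.searchsorted(r, rad))       # ring index, (r_{i-1}, r_i]
        i = max(1, min(i, len(r) - 1))         # clamp boundary cases
        j = min(int(ang / tau), n_theta - 1)   # sector index
        buckets.setdefault((i, j), []).append((x, y))
    elements = {}
    for (i, j), pts in buckets.items():
        rc, tc = 0.5 * (r[i - 1] + r[i]), (j + 0.5) * tau
        cx, cy = rc * np.cos(tc), rc * np.sin(tc)
        rep = min(pts, key=lambda p: np.hypot(p[0] - cx, p[1] - cy))
        elements[(i, j)] = (rep, len(pts))
    return elements
\end{verbatim}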
\begin{remark}
In the numerical experiments in Section \ref{sec:numerics}, we use the following parameters in Algorithm \ref{alg:grademesh},
\begin{align*}
&h_1=\cdots= h_{n_{\rm QM}}=1 , \\
&h_{n_{\rm QM}+j} = \frac12(\frac{\sum_{k=1}^{n_{\rm QM}+j-1}h_k}{n_{\rm QM}})^{1.5}+\frac12(\frac{\sum_{k=1}^{n_{\rm QM}+j}h_k}{n_{\rm QM}})^{1.5} ,
\quad{\rm for}~~1\leq j\leq n^1_{\rm MM}, \\
&h_{n_{\rm QM}+n^1_{\rm MM}+j} = h_{n_{\rm QM}+n^1_{\rm MM}-j} ,
\quad{\rm for}~~1\leq j\leq n^2_{\rm MM},\\
&h_{n_r-n_{\rm FF}+j}=h_{n_{\rm QM}+j} ,
\quad{\rm for}~~1\leq j\leq n_{\rm FF}.
\end{align*}
\end{remark}
\begin{remark}
For more general defect configurations, one may generate an adaptive mesh according to the error indicator, for example by starting from a coarse partition. The technical details will appear in our forthcoming paper.
\end{remark}
\section{Adaptive QM/MM algorithms}
\label{sec:adaptive}
\setcounter{equation}{0}
\setcounter{figure}{0}
In this section, we design an adaptive QM/MM algorithm for crystalline defects based on the {\it a posteriori} error indicator \eqref{eta_local}.
The basic idea of the adaptive method is to repeat the following procedure until the required accuracy is reached:
$$
\mbox{Solve}~\rightarrow~\mbox{Estimate}~\rightarrow~
\mbox{Mark}~\rightarrow~\mbox{Refine}.
$$
Given a partition $\Lambda^{\rm QM}$ and $\Lambda^{\rm MM}$, the ``Solve" step computes the approximate equilibrium state $\bar{u}^{\rm H}$ by solving \eqref{problem-e-mix}.
The ``Estimate" step computes the {\it a posteriori} error indicators \eqref{eta_sample} and \eqref{eta_local}.
The ``Mark" step uses an adaptation strategy to choose a set $\mathcal{M}_{\rm r}$ for model refinement; here, refinement refers to moving the $\Lambda^{\rm QM}$/$\Lambda^{\rm MM}$ and $\Lambda^{\rm MM}$/$\Lambda^{\rm FF}$ interfaces.
We choose the following D\"{o}rfler strategy, which is a widely used marking strategy to enforce error reduction.
\begin{algorithm}[H]
\caption{D\"{o}rfler Strategy.}
\label{alg:dorfler}
\quad Prescribe $0<\tau<1$.
\begin{enumerate}
\item
Choose the minimum set $\mathcal{M}_{\rm r}\subset\Lambda$, given as a union of elements of $\mathcal{T}$, such that the following D\"{o}rfler property is satisfied
\begin{eqnarray}\label{dorfler-strategy}
\sum_{T\subset \mathcal{M}_{\rm r}}\eta^T_{r_{\rm cut}}(\bar{u}^{\rm H}) \geq \tau \sum_{T\in \mathcal{T}} \eta^T_{r_{\rm cut}}(\bar{u}^{\rm H}) .
\end{eqnarray}
\item
Mark all the sites in $\mathcal{M}_{\rm r}$ (for refinement).
\end{enumerate}
\end{algorithm}
\def\textrm{dist}{\textrm{dist}}
To select the minimum set $\mathcal{M}_{\rm r}$,
we first sort $\eta^T_{r_{\rm cut}}(\bar{u}^{\rm H})$ in descending order for $T\in\mathcal{T}$,
then compute the partial sums of the sorted sequence until \eqref{dorfler-strategy} is satisfied.
In the QM/MM coupling, we should further decompose the marked set into two nonintersecting subsets
\begin{eqnarray*}
\mathcal{M}_{\rm r} = \mathcal{M}_{\rm r}^{\rm QM} \cup \mathcal{M}_{\rm r}^{\rm MM}, \qquad \mathcal{M}_{\rm r}^{\rm QM} \cap \mathcal{M}_{\rm r}^{\rm MM} = \emptyset.
\end{eqnarray*}
We will see from the numerical tests (see Figure \ref{fig:side:a} and \ref{fig:qmmmgd} (b))
that the force distribution is mainly concentrated near the $\Lambda^{\rm QM}/ \Lambda^{\rm MM}$ interface and the $\Lambda^{\rm MM}/\Lambda^{\rm FF}$ interface.
Therefore, the simplest way to determine this decomposition is by the position of the atomic site $\ell\in\mathcal{M}_{\rm r}$:
if $\textrm{dist} (\ell, \Lambda^{\rm QM}) < \textrm{dist}(\ell, \Lambda^{\rm FF})$ , then $\ell$ goes into $\mathcal{M}_{\rm r}^{\rm QM}$; otherwise, $\ell$ goes into $\mathcal{M}_{\rm r}^{\rm MM}$.
Then, under the D\"{o}rfler adaptation strategy, $\mathcal{M}_{\rm r}$ lies in two (separated) regions around the two interfaces, and can therefore easily be decomposed into the QM and the MM parts.
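A minimal Python sketch of the marking and decomposition steps follows; the distance helpers \texttt{dist\_to\_QM} and \texttt{dist\_to\_FF} are assumed to be supplied by the caller.
\begin{verbatim}
def doerfler_mark(eta_local, tau):
    """Minimal set of elements whose local indicators sum to at
    least a fraction tau of the total indicator."""
    order = sorted(eta_local.items(), key=lambda kv: kv[1], reverse=True)
    total, running, marked = sum(eta_local.values()), 0.0, []
    for T, eta_T in order:
        marked.append(T)
        running += eta_T
        if running >= tau * total:
            break
    return marked

def decompose(marked_sites, dist_to_QM, dist_to_FF):
    """Split the marked sites into QM and MM refinement sets by
    comparing distances to the QM region and to the far field."""
    M_QM = [l for l in marked_sites if dist_to_QM(l) < dist_to_FF(l)]
    M_MM = [l for l in marked_sites if dist_to_QM(l) >= dist_to_FF(l)]
    return M_QM, M_MM
\end{verbatim}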
\begin{remark}
\label{rem:generalalg}
The decomposition of $\mathcal{M}_{\rm r}$ into QM and MM parts requires the assumption that the errors are concentrated around the $\Lambda^{\rm QM}/ \Lambda^{\rm MM}$ and $\Lambda^{\rm MM}/\Lambda^{\rm FF}$ interfaces. This may seem ad hoc, and may not hold for general defect configurations. However, the main purpose of this paper is to develop an analytical framework for adaptive QM/MM computations and to justify it numerically on prototypical examples, such as the single vacancy and the two separated vacancies in Section \ref{sec:numerics}, where this assumption holds. A more general numerical approach may need to combine further ideas, such as the stress-based error indicators from adaptive atomistic/continuum coupling methods \cite{Wang:2017,Liao2018}, and will be investigated in our future work.
\end{remark}
\begin{remark}
One can also use the so-called maximum marking strategy for the ``Mark" step.
The maximum strategy chooses a $T_{\rm max}\in \mathcal{T}$, such that
\begin{eqnarray*}\label{maximum-strategy}
\eta_{r_{\rm cut}}^{T_{\rm max}}(\bar{u}^{\rm H}) = \max_{T\in\mathcal{T}}\eta_{r_{\rm cut}}^T(\bar{u}^{\rm H}) .
\end{eqnarray*}
%
and $\mathcal{M}_{\rm r}$ contains all the sites in $T_{\rm max}$. In this paper we stick to the D\"{o}rfler strategy, which performed more efficiently in all our numerical examples.
\end{remark}
Once the marked sets $\mathcal{M}_{\rm r}^{\rm QM}$ and $\mathcal{M}_{\rm r}^{\rm MM}$ are determined,
the ``Refine" step adjusts the domain decomposition accordingly for the next ``Solve" step.
Note that usually more atomic sites than those in the marked set $\mathcal{M}_{\rm r}$ are refined, in order to keep the QM and MM regions regular.
The adaptive QM/MM algorithm is given as follows.
\begin{algorithm}[H]
\caption{Adaptive QM/MM algorithm}
\label{alg:main}
\quad
\begin{enumerate}
\item
Prescribe $\varepsilon_{\rm tol}>0$, the D\"{o}rfler parameter $\tau\in(0,1)$, $N_{\rm QM}^{\max}$, $N_{\rm MM}^{\max}$ and $R_{\rm BUF}$.
Initialize $\Lambda^{\rm QM}$ and $\Lambda^{\rm MM}$.
Construct $\Lambda^{\rm BUF}\subset\Lambda^{\rm MM}$ such that \eqref{buf} is satisfied.
\item
If $\#\Lambda^{\rm QM}>N_{\rm QM}^{\max}$ or $\#\Lambda^{\rm MM}>N_{\rm MM}^{\max}$, \textbf{STOP};
otherwise solve \eqref{problem-e-mix} to obtain $\bar{u}^{\rm H}$.
\item
Compute the error indicator $\eta_{r_{\rm cut}}^{\mathcal{T}}(\bar{u}^{\rm H})$ in \eqref{eta_sample} and $\eta_{r_{\rm cut}}^{T}(\bar{u}^{\rm H})$ in \eqref{eta_local} for each $T\in\mathcal{T}$.
If $\eta_{r_{\rm cut}}^{\mathcal{T}}(\bar{u}^{\rm H})<\varepsilon_{\rm tol}$, \textbf{STOP}; otherwise, go to Step 4.
\item
Use D\"{o}rfler Strategy to construct $\mathcal{M}_{\rm r}$, and decompose $\mathcal{M}_{\rm r}$ into $\mathcal{M}_{\rm r}^{\rm QM}$ and $\mathcal{M}_{\rm r}^{\rm MM}$.
\item
Construct new $\Lambda^{\rm QM}$, $\Lambda^{\rm MM}$ and $\Lambda^{\rm BUF}$ such that
$\Lambda^{\rm QM}\supset\mathcal{M}_{\rm r}^{\rm QM}$, $\Lambda^{\rm MM}\supset\mathcal{M}_{\rm r}^{\rm MM}$ and \eqref{buf} is satisfied, go to Step 2.
\end{enumerate}
\end{algorithm}
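The following Python skeleton summarizes the control flow of Algorithm \ref{alg:main}; the callables \texttt{solve}, \texttt{estimate}, \texttt{mark} and \texttt{refine} stand for the four steps and are placeholders for a concrete implementation.
\begin{verbatim}
def adaptive_qmmm(solve, estimate, mark, refine, regions,
                  eps_tol, N_QM_max, N_MM_max):
    """Solve -> Estimate -> Mark -> Refine loop, terminated when the
    indicator drops below eps_tol or the size budgets are exhausted."""
    while True:
        if len(regions.QM) > N_QM_max or len(regions.MM) > N_MM_max:
            return regions, None              # size budget exhausted
        u = solve(regions)                    # approximate equilibrium
        eta_total, eta_local = estimate(u, regions)
        if eta_total < eps_tol:
            return regions, u                 # required accuracy reached
        M_QM, M_MM = mark(eta_local, regions) # Doerfler + decomposition
        regions = refine(regions, M_QM, M_MM) # enlarge QM/MM and buffer
\end{verbatim}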
\section{Numerical experiments}
\label{sec:numerics}
\setcounter{equation}{0}
\setcounter{figure}{0}
In this section, we will complement our theoretical analysis with numerical experiments.
We consider the two-dimensional triangular lattice $\Lambda^{\textrm{hom}}:={\sf A}\mathbb{Z}^{2}$ with
embedded local point defects, where
\begin{equation}
{\sf A} = \mymat{1 & \cos(\pi/3) \\ 0 & \sin(\pi/3)}.
\label{eq:Atrilattice}
\end{equation}
For the tight-binding model, we use a simple toy model with the Hamiltonian given in \eqref{tb-H-elements}, where the onsite term is $h_{\rm ons} = 0$, and the hopping term is given by the Morse potential
\begin{eqnarray*}
h_{\rm hop} (r) = e^{-4(r-1)} \quad{\rm for}~~r>0.
\end{eqnarray*}
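For reference, the following Python sketch generates the lattice sites in a ball and assembles the toy Hamiltonian; the truncation radius \texttt{r\_hop} for the rapidly decaying hopping term is an assumption of this sketch, introduced only to keep the assembly finite.
\begin{verbatim}
import numpy as np

A = np.array([[1.0, np.cos(np.pi / 3)],
              [0.0, np.sin(np.pi / 3)]])      # triangular lattice basis

def lattice(R):
    """Sites of A Z^2 within a ball of radius R."""
    m = int(np.ceil(2 * R))
    pts = [A @ (i, j) for i in range(-m, m + 1) for j in range(-m, m + 1)]
    return [p for p in pts if np.linalg.norm(p) <= R]

def hamiltonian(sites, r_hop=3.0):
    """Toy tight-binding Hamiltonian: zero onsite term, Morse-type
    hopping h_hop(r) = exp(-4(r-1)), truncated at r_hop."""
    n = len(sites)
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            r = np.linalg.norm(sites[a] - sites[b])
            if r < r_hop:
                H[a, b] = H[b, a] = np.exp(-4.0 * (r - 1.0))
    return H
\end{verbatim}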
\noindent
{\bf Example 1.} (Single vacancy)
Consider a single vacancy located at the origin with $\Lambda = \Lambda^{\rm hom}\backslash\{\pmb 0\}$.
We first use pure QM (tight-binding) calculations to verify the convergence and reliability of our {\it a posteriori} error indicator,
where the QM subsystem is embedded directly in a bulk environment (without any MM subsystem). The geometry of the partition into QM and far-field regions is shown in Figure \ref{fig:qmgeom}, where the red atoms are simulated by the QM model and are surrounded by far-field atoms colored in green.
We still denote the approximate equilibrium solution by $\bar{u}^{\rm H}$.
We observe from Figure \ref{fig:qmape} that our error indicator decays at the same rate as
$\|\bar{u} - \bar{u}^{\rm H}\|_{\dot{\mathscr{U}}^{1,2}}$ as the QM region increases. This not only supports the reliability (upper bound estimate) in Theorem \ref{theorem:upperbound}, but also indicates the efficiency (lower bound estimate) of our error indicator.
We also compare the decay of error indicators with different cutoff $r_{\rm cut}$ in Figure \ref{fig:side:b}.
It is observed that the choice of $r_{\rm cut}$ does affect the reliability of our error indicator.
From our numerical simulations, we see that $r_{\rm cut}=5$ or 6 is good enough
and will be used throughout the following numerical experiments.
We present the force distribution (with respect to the radii) in Figure \ref{fig:side:a}.
Instead of using $f_{\ell}^{r_{\rm cut}}(\bar{u}^{\rm H})$ in \eqref{eta_posteriori}, we plot the force $f_{\ell}^{B_R(0)}(\bar{u}^{\rm H})$ computed on a very large simulation domain with $R\gg R_{\rm QM}+r_{\rm cut}$.
Note that $f_{\ell}^{B_R(0)}(\bar{u}^{\rm H})$ gives a very accurate approximation of the true QM force $f_{\ell}(\bar{u}^{\rm H})$ in the thermodynamic limit.
The figure shows that the forces are mainly concentrated around the interface between the QM and far-field regions.
\begin{figure}[!htb]
\centering
\subfigure[Partition of the QM and far field region.]{
\label{fig:qmgeom}
\includegraphics[height=6cm]{qmgeomtest.png}}
\hspace{0.2cm}
\subfigure[Verification of the {\it a posteriori} error indicator.]{
\label{fig:qmape}
\includegraphics[height=6cm]{qmape.png}}
\caption{Comparison of the {\it a posteriori} error indicator and $\|\bar{u} - \bar{u}^{\rm H}\|_{\dot{\mathscr{U}}^{1,2}}$ with pure QM calculations.}
\label{fig:pureQM}
\end{figure}
\begin{figure}[!htb]
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[scale=0.5]{qmrcutape.png}
\caption{Error indicators with different $r_{\rm cut}$.
}
\label{fig:side:b}
\end{minipage}
%
\begin{minipage}[t]{0.5\linewidth}
\centering
\includegraphics[scale=0.5]{qmfrctop.png}
\caption{Force distribution along the radii with a pure QM calculation.}
\label{fig:side:a}
\end{minipage}
\end{figure}
We show the QM/MM domain decomposition in Figure \ref{qmmmgeom}, and the corresponding force distribution (with fixed partition)
with respect to the radius direction in Figure \ref{qmmmfrc}. Similar to Figure \ref{fig:side:a}, we plot the force $f_{\ell}^{B_R(0)}$ on a very large system with $R\gg R_{\rm MM}+r_{\rm cut}$.
We observe that forces are mainly concentrated around the $\Lambda^{\rm QM}/ \Lambda^{\rm MM}$ interface and the $\Lambda^{\rm MM}/\Lambda^{\rm FF}$ interface.
This motivates the adaptive Algorithm \ref{alg:main}, which assigns more sample points near those two interfaces.
We show the sample points in Figure \ref{gmsamp} and the elapsed time for the evaluation of the error indicator with/without sampling algorithm in Figure \ref{fig:surf_single}.
It is clear that the evaluation time for the error indicator with sampling \eqref{eta_sample} is significantly reduced compared with that without sampling \eqref{eta_posteriori}.
Furthermore, the sampling algorithm does not affect the accuracy of the error indicators (see Figure \ref{singlegooderr}).
\begin{figure}[!htb]
\centering
\subfigure[Partition of the domain.]{
\label{qmmmgeom}
\includegraphics[scale=0.55]{qmmmgeom.png}}
\hspace{0.2cm}
\subfigure[Force distribution along the radii with a QM/MM coupling.]{
\label{qmmmfrc}
\includegraphics[scale=0.5]{qmmmforce.png}}
\caption{Domain decomposition in the QM/MM coupling scheme, and force distribution along the radii.}
\label{fig:qmmmgd}
\end{figure}
\begin{figure}[!htb]
\centering
\subfigure[Graded mesh sampling.]{
\label{gmsamp}
\includegraphics[scale=0.58]{GMsampling.png}}
%
\subfigure[Computing time for the {\it a posteriori} error indicator.]{
\label{fig:surf_single}
\includegraphics[scale=0.51]{qmmmtime.png}}
\caption{Sampling points for the {\it a posteriori} error indicator, and the scaling of computational time.}
\label{fig:samp}
\end{figure}
We then perform the adaptive algorithm (Algorithm \ref{alg:main}) to compute the single vacancy example. In each ``Solve" step, the computational cost is proportional to $N_{\rm QM}^3+N_{\rm MM}$,
as the cost to solve the tight binding model scales cubically and the cost to solve the MM model scales linearly with respect to the number of atoms.
The decay curves for the errors of QM/MM solutions and the {\it a posteriori} error indicators are shown in Figure \ref{singlegooderr}, as a function of $N_{\rm QM}^3+N_{\rm MM}$.
The relation between $N_{\rm QM}$ and $N_{\rm MM}$ during the adaptation process is shown in Figure \ref{singlegoodpath}, from which we observe that our adaptive algorithm can achieve optimal computational complexity.
\begin{figure}[!htb]
\centering
\subfigure[Decay of the error in the adaptive algorithm.]{
\label{singlegooderr}
\includegraphics[scale=0.5]{SingleGood.png}}
\hspace{0.2cm}
\subfigure[Relation between $R_{\rm QM}$ and $R_{\rm MM}$ in the adaptive algorithm.]{
\label{singlegoodpath}
\includegraphics[scale=0.5]{SingleGoodPath.png}}
\caption{Convergence of the adaptive algorithm, and the scaling of QM and MM radius.}
\label{singlegood}
\end{figure}
\noindent
{\bf Example 2.} (Two separated vacancies)
Consider two vacancies that are away from each other (see Figure \ref{tsvgeom}).
Since the system has quasi-spherical symmetry away from the defects, we are still able to apply our graded mesh algorithm. We show one QM/MM partition in Figure \ref{tsvgeom} and the distribution of the sample points in Figure \ref{tsvape}, which are selected with respect to each vacancy core.
Here we adapt the graded mesh generation Algorithm \ref{alg:grademesh} such that the sampling points in the left half plane are generated by the vacancy on the left, and the sampling points in the right half plane are generated by the vacancy on the right. See Remark \ref{rem:generalalg} for discussions on a more general approach.
In our adaptive simulations, the initial geometry contains two isolated QM regions.
The adaptive algorithm adjusts the QM and MM regions automatically according to the error indicators.
We show the evolution of the QM/MM partitions during the adaptation process in Figure \ref{fig:TSVsteps},
and observe the merging and growth of the QM subsystems as $N_{\rm QM}$ increases.
We plot the error indicators and true approximation errors in Figure \ref{tsvgooderr},
which shows the accuracy of our adaptive algorithm and the efficiency of the sampling techniques.
The relation between $N_{\rm QM}$ and $N_{\rm MM}$ is shown in Figure \ref{tsvgoodpath}, which implies that our adaptive algorithm can give optimal computational scaling.
\begin{figure}[htb]
\centering
\subfigure[Geometry of two separated vacancies and QM/MM decompositions.]{
\label{tsvgeom}
\includegraphics[height=6cm]{qmmmTSVgeom.png}}
\hspace{0.2cm}
\subfigure[Sample points for two separated vacancies.]{
\label{tsvape}
\includegraphics[height=6cm]{qmmmTSVsamp.png}}
\caption{Geometry of the QM/MM decompositions and corresponding sample points.}
\label{tsv2}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[height=4cm]{step1}
\includegraphics[height=4cm]{step2}
\includegraphics[height=4cm]{step3}
\includegraphics[height=4cm]{step4}
\caption{Evolution of the QM and MM partition in the adaptation process. $N_{\rm QM}$, the number of atomic sites in the QM region, is 36, 71, 95, and 133 from left to right.}
\label{fig:TSVsteps}
\end{figure}
\begin{figure}[htb]
\centering
\subfigure[Convergence curves of the error indicators and approximation errors.]{
\label{tsvgooderr}
\includegraphics[scale=0.53]{TSVGood.png}}
\hspace{0.2cm}
\subfigure[$N_{\rm QM}$ vs $N_{\rm MM}$]{
\label{tsvgoodpath}
\includegraphics[scale=0.53]{TSVGoodPath.png}}
\caption{Convergence of the adaptive algorithm, and the scaling of QM and MM radius.}
\label{tsvgood}
\end{figure}
\section{Conclusions}
\label{sec:conclusion}
\setcounter{equation}{0}
In this paper, we provide an {\it a posteriori} error indicator for QM/MM coupling approximations, and design an adaptive algorithm for crystalline solids with embedded defects.
The error indicator not only gives an upper bound for the approximation error of the geometry equilibration, but also allows us to adjust the QM/MM decomposition on the fly.
Moreover, the error indicator can be computed efficiently with a sampling algorithm.
We remark that
(a) more flexible sampling methods are required to compute the error indicator for more general defect configurations, and
(b) our method is potentially even more useful for dynamic problems (with moving defects), where a coarsening process should also be applied.
These issues will be investigated in our future work.
\subsubsection*{Acknowledgements}
We are grateful to Christoph Ortner and Julian Braun from University of Warwick for stimulating discussions regarding this work.
Nearest neighbor searching is the following problem: we are given a set
$S$ of $n$ {\em data points} in a metric space, $X$, and are asked to
preprocess these points so that, given any {\em query point} $q \in X$,
the data point nearest to $q$ can be reported quickly. Nearest neighbor
searching has applications in many areas, including knowledge discovery
and data mining \cite{fpsu-akddm-96}, pattern recognition and
classification \cite{ch-nnpc-67,dh-pcsa-73}, machine learning
\cite{cs-wnnalsf-93}, data compression \cite{gg-vqsc-92}, multimedia
databases \cite{fsna-qivcqs-95}, document retrieval
\cite{ddflh-ilsa-90}, and statistics \cite{dw-nnmd-82}.
There are many possible choices of the metric space. Throughout we will
assume that the space is $R^d$, real $d$-dimensional space, where
distances are measured using any Minkowski $L_m$ distance metric. For
any integer $m \ge 1$, the {\em $L_m$-distance} between points $p =
(p_1,p_2,\ldots,p_d)$ and $q=(q_1,q_2,\ldots,q_d)$ in $R^d$ is defined
to be the $m$-th root of $\sum_{1 \le i \le d} |p_i-q_i|^m$. The $L_1$,
$L_2$, and $L_{\infty}$ metrics are the well-known Manhattan, Euclidean
and max metrics, respectively.
Our primary focus is on data structures that are stored in main memory.
Since data sets can be large, we limit ourselves to consideration of
data structures whose total space grows linearly with $d$ and $n$.
Among the most popular methods are those based on hierarchical
decompositions of space. The seminal work in this area was by Friedman,
Bentley, and Finkel \cite{fbf-afbml-77} who showed that $O(n)$ space and
$O(\log n)$ query time are achievable for fixed dimensional spaces in
the expected case for data distributions of bounded density through the
use of kd-trees. There have been numerous variations on this theme.
However, all known methods suffer from the fact that as dimension
increases, either running time or space increase exponentially with
dimension.
The difficulty of obtaining algorithms that are efficient in the worst
case with respect to both space and query time suggests the alternative
problem of finding {\em approximate} nearest neighbors. Consider a set
$S$ of data points in $R^d$ and a query point $q \in R^d$. Given
$\epsilon > 0$, we say that a point $p \in S$ is a {\em
$(1+\epsilon)$-approximate nearest neighbor} of $q$ if
\[
\hbox{\it dist}(p,q) \le (1 + \epsilon)\hbox{\it dist}(p^*,q),
\]
where $p^*$ is the true nearest neighbor to $q$. In other words, $p$
is within relative error $\epsilon$ of the true nearest neighbor.
The approximate nearest neighbor problem has been heavily studied
recently. Examples include algorithms by Bern \cite{b-acpqhd-93},
Arya and Mount \cite{am-annqf-93}, Arya, et al. \cite{amnsw-oaann-94},
Clarkson \cite{c-aacpq-94}, Chan \cite{c-annqr-97}, Kleinberg
\cite{k-tanns-97}, Indyk and Motwani \cite{im-anntr-98}, and
Kushilevitz, Ostrovsky and Rabani \cite{kor-esann-98}.
In this study we restrict attention to data structures of size $O(dn)$
based on hierarchical spatial decompositions, and the kd-tree in
particular. In large part this is because of the simplicity and
widespread popularity of this data structure. A kd-tree is binary tree
based on a hierarchical subdivision of space by splitting hyperplanes
that are orthogonal to the coordinate axes \cite{fbf-afbml-77}. It is
described further in the next section. A key issue in the design of the
kd-tree is the choice of the splitting hyperplane. Friedman, Bentley,
and Finkel proposed a splitting method based on selecting the plane
orthogonal to the median coordinate along which the points have the
greatest spread. They called the resulting tree an optimized kd-tree,
and henceforth we call the resulting splitting method the {\em standard
splitting method}. Another common alternative uses the shape of the
cell, rather than the distribution of the data points. It splits
each cell through its midpoint by a hyperplane orthogonal to its longest
side. We call this the {\em midpoint split method}.
A number of other data structures for nearest neighbor searching based
on hierarchical spatial decompositions have been proposed. Yianilos
introduced the {\em vp-tree} \cite{y-dsann-93}. Rather than using an
axis-aligned plane to split a node as in the kd-tree, it uses a data point,
called the vantage point, as the center of a hypersphere that partitions
the space into two regions. There has also been quite a bit of
interest from the field of databases. There are several data structures
for database applications based on $R$-trees and their variants
\cite{bkss-rtera-90,srf-rtdim-87}. For example, the {\em X-tree}
\cite{bkk-xtish-96} improves the performance of the R$^*$-tree by
avoiding high overlap. Another example is the SR-tree
\cite{ks-stish-97}. The {\em TV-tree} \cite{ljf-ttish-94} uses a
different approach to deal with high dimensional spaces. It reduces
dimensionality by maintaining a number of active dimensions. When all
data points in a node share the same coordinate of an active dimension,
that dimension will be deactivated and the set of active dimensions
shifts.
In this paper we study the performance of two other splitting methods,
and compare them against the kd-tree splitting method. The first,
called {\em sliding-midpoint}, is a splitting method that was introduced
by Mount and Arya in the ANN library for approximate nearest neighbor
searching \cite{ma-alanns-97}. This method was introduced into the
library in order to better handle highly clustered data sets. We know
of no analysis (empirical or theoretical) of this method. This method
was designed as a simple technique for addressing one of the most
serious flaws in the standard kd-tree splitting method. The flaw is
that when the data points are highly clustered in low dimensional
subspaces, then the standard kd-tree splitting method may produce highly
elongated cells, and these can lead to slow query times. This splitting
method starts with a simple midpoint split of the longest side of the
cell, but if this split results in either subcell containing no data
points, it translates (or ``slides'') the splitting plane in the
direction of the points until hitting the first data point. In
Section~\ref{slmid.sec} we describe this splitting method and analyze
some of its properties.
The second splitting method, called {\em minimum-ambiguity}, is a
query-based technique. The tree is given not only the data points, but
also a collection of sample query points, called the {\em training
points}. The algorithm applies a greedy heuristic to build the tree in
an attempt to minimize the expected query time on the training points.
We model query processing as the problem of eliminating data points from
consideration as the possible candidates for the nearest neighbor.
Given a collection of query points, we can model any stage of the
nearest neighbor algorithm as a bipartite graph, called the {\em
candidate graph}, whose vertices correspond to the union of the data
points and the query points, and in which each query point is adjacent
to the subset of data points that might be its nearest neighbor. The
minimum-ambiguity selects the splitting plane at each stage that
eliminates the maximum number of remaining edges in the candidate graph.
In Section~\ref{minamb.sec} we describe this splitting method in greater
detail.
We implemented these two splitting methods, along with the standard
kd-tree splitting method. We compared them on a number of synthetically
generated point distributions, which were designed to model
low-dimensional clustering. We believe this type of clustering is not
uncommon in many application data sets \cite{jd-acd-88}. We used
synthetic data sets, as opposed to standard benchmarks, so that we could
adjust the strength and dimensionality of the clustering. Our results
show that these new splitting methods can provide significant
improvements over the standard kd-tree splitting method for data sets
with low-dimensional clustering. The rest of the paper is organized as
follows. In the next section we present background information on the
kd-tree and how to perform nearest neighbor searches in this tree. In
Section~\ref{split.sec} we present the two new splitting methods. In
Section~\ref{empir.sec} we describe our implementation and present our
empirical results.
\section{Background}\label{backgr.sec}
In this section we describe how kd-trees are used for performing exact
and approximate nearest neighbor searching. Bentley introduced the
kd-tree as a generalization of the binary search tree in higher
dimensions \cite{b-mbstu-75}. Each node of the tree is implicitly associated
with a $d$-dimensional rectangle, called its {\em cell}. The root node
is associated with the bounding rectangle, which encloses all of the
data points. Each node is also implicitly associated with the subset of
data points that lie within this rectangle. (Data points lying on the
boundary between two rectangles may be associated with either.) If the
number of points associated with a node falls below a given threshold,
called the {\em bucket size}, then this node is a leaf, and these points
are stored with the leaf. (In our experiments we used a bucket size of
one.) Otherwise, the construction algorithm selects a splitting
hyperplane, which is orthogonal to one of the coordinate axes and passes
through the cell. There are a number of {\em splitting methods} that may
be used for choosing this hyperplane. We will discuss these in greater
detail below. The hyperplane subdivides the associated cell into
two subrectangles, which are then associated with the children of this
node, and the points are subdivided among these children according to
which side of the hyperplane they lie. Each internal node of the tree
is associated with its splitting hyperplane (which may be given as the
index of the orthogonal axis and a cutting value along this axis).
Friedman, Bentley and Finkel \cite{fbf-afbml-77} present an algorithm to
find the nearest neighbor using the kd-trees. They introduce the
following splitting method, which we call the {\em standard splitting
method}. For each internal node, the splitting hyperplane is chosen to
be orthogonal to the axis along which the points have the greatest {\em
spread} (difference of maximum and minimum). The splitting point is
chosen at the median coordinate, so that the two subsets of data points
have nearly equal sizes. The resulting tree has $O(n)$ size and $O(\log
n)$ height. White and Jain \cite{wj-assr-96} proposed an alternative,
called the {\em VAM-split}, with the same basic idea, but the splitting
dimension is chosen to be the one with the maximum variance.
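To make the construction concrete, here is a Python sketch of kd-tree construction with the standard splitting method (our actual implementation, described in Section~\ref{empir.sec}, is in C++; the dictionary-based node layout here is purely for illustration).
\begin{verbatim}
import numpy as np

def build_kdtree(points, bucket_size=1):
    """Standard splitting method: split at the median coordinate of
    the dimension along which the points have the greatest spread."""
    points = np.asarray(points, dtype=float)
    if len(points) <= bucket_size:
        return {'leaf': True, 'points': points}
    spread = points.max(axis=0) - points.min(axis=0)
    dim = int(np.argmax(spread))              # axis of greatest spread
    cut = float(np.median(points[:, dim]))    # median coordinate
    left = points[points[:, dim] <= cut]
    right = points[points[:, dim] > cut]
    if len(left) == 0 or len(right) == 0:     # guard against tied medians
        order = np.argsort(points[:, dim])
        half = len(points) // 2
        left, right = points[order[:half]], points[order[half:]]
        cut = float(points[order[half - 1], dim])
    return {'leaf': False, 'dim': dim, 'cut': cut,
            'lo': build_kdtree(left, bucket_size),
            'hi': build_kdtree(right, bucket_size)}
\end{verbatim}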
Queries are answered by a simple recursive algorithm. In the basis
case, when the algorithm arrives at a leaf of the tree, it computes the
distance from the query point to each of the data points associated with
this node. The smallest such distance is saved. When arriving at an
internal node, it first determines the side of the associated hyperplane
on which the query point lies. The query point is necessarily closer to
this child's cell. The search recursively visits this child. On
returning from the search, it determines whether the cell associated
with the other child is closer to the query point than the closest point
seen so far. If so, then this child is also visited recursively. When
the search returns from the root, the closest point seen is returned.
An important observation is that for each query point, every leaf whose
distance from the query point is less than the nearest neighbor distance
will be visited by the algorithm.
It is an easy matter to generalize this search algorithm for answering
{\em approximate} nearest neighbor queries. Let $\epsilon$ denote the
allowed error bound. In the processing of an internal node, the further
child is visited only if its distance from the query point is less than
the distance to the closest point so far, divided by $(1+\epsilon)$.
Arya et al. \cite{amnsw-oaann-94} show the correctness of this
procedure. They also show how to generalize the search algorithm for
computing the $k$-closest neighbors, either exactly or approximately.
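The recursion can be sketched in a few lines of Python (using the node layout from the construction sketch above); for simplicity, the distance to the further child's cell is bounded below by the distance to the splitting plane rather than maintained exactly, which can only cause extra visits, never incorrect pruning.
\begin{verbatim}
import numpy as np

def ann_search(node, q, eps, best=None):
    """(1+eps)-approximate nearest neighbor search.  The further child
    is visited only if it might contain a point closer than the
    current best distance divided by (1+eps)."""
    if best is None:
        best, q = [None, np.inf], np.asarray(q, dtype=float)
    if node['leaf']:
        for p in node['points']:
            d = np.linalg.norm(p - q)
            if d < best[1]:
                best[0], best[1] = p, d
        return best
    dim, cut = node['dim'], node['cut']
    near, far = ('lo', 'hi') if q[dim] <= cut else ('hi', 'lo')
    ann_search(node[near], q, eps, best)
    if abs(q[dim] - cut) < best[1] / (1.0 + eps):   # prune far child?
        ann_search(node[far], q, eps, best)
    return best
\end{verbatim}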
Arya and Mount \cite{am-afvq-93} proposed a number of improvements to
this basic algorithm. The first is called {\em incremental distance
calculation}. This technique can be applied for any Minkowski metric.
In addition to storing the splitting hyperplane, each internal node of
the tree also stores the extents of the associated cell projected
orthogonally onto its splitting axis. The algorithm does not maintain
true distances, but instead (for the Euclidean metric) maintains squared
distances. When the algorithm arrives at an internal node, it knows the
squared distance from the query point to the associated cell. They show
that in constant time (independent of dimension) it is possible to use
this information to compute the squared distance to each of the
children's cell. They also presented a method called {\em priority
search}, which uses a heap to visit the leaves of the tree in increasing
order of distance from the query point, rather than in the recursive
order dictated by the structure of the tree. Yet another improvement is
a well-known technique from nearest neighbor searching, called {\em
partial distance calculation} \cite{bg-imdeavq-85,s-rnnsk-91}. When
computing the distance between the query point and a data point, if the
accumulated sum of squared components ever exceeds the squared distance
to the nearest point so far, then the distance computation is
terminated.
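As a small illustration, a partial squared-distance computation might look as follows (a sketch; the incremental cell-distance bookkeeping and the priority queue of the actual search are omitted for brevity).
\begin{verbatim}
def partial_sq_dist(p, q, best_sq):
    """Squared Euclidean distance with early termination: abandon the
    computation as soon as the running sum exceeds the squared
    distance to the closest point seen so far."""
    s = 0.0
    for pi, qi in zip(p, q):
        t = pi - qi
        s += t * t
        if s > best_sq:
            return None          # p cannot be the nearest neighbor
    return s
\end{verbatim}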
One of the important elements of approximate nearest neighbor searching,
which was observed by Arya et al. \cite{amnsw-oaann-94}, is that there
are two important properties of any data structure for approximate
nearest neighbor searching based on spatial decomposition.
\begin{description}
\item[Balance:] The height of the tree should be $O(\log n)$, where
$n$ is the number of data points.
\item[Bounded aspect ratio:] The leaf cells of the tree should have
bounded aspect ratio, meaning that the ratio of the longest
to shortest side of each leaf cell should be bounded above
by a constant.
\end{description}
Given these two constraints, they show that approximate nearest neighbor
searching (using priority search) can be performed in $O(\log n)$ time
from a data structure of size $O(dn)$. The hidden constant factors in
time grow as $O(d/\epsilon)^d$. Unfortunately, achieving both of these
properties does not always seem to be possible for kd-trees. This is
particularly true when the point distribution is highly clustered. Arya
et al. present a somewhat more complex data structure called a {\em
balanced box-decomposition tree}, which does satisfy these properties.
The extra complexity seems to be necessary in order to prove their
theoretical results, and they show empirically that it is important when
data sets are highly clustered in low-dimensional subspaces. An
interesting practical question is whether there exist methods that
retain the essential simplicity of the kd-tree, while providing
practical efficiency for clustered data distributions (at least in most
instances, if not in the worst case).
Bounded aspect ratio is a sufficient condition for efficiency, but it is
not necessary. The more precise condition in order for their results to
apply is called the {\em packing constraint} \cite{amnsw-oaann-94}.
Define a {\em ball} of radius $r$ to be the locus of points that are
within distance $r$ of some point in $R^d$ according to the chosen
metric. The packing constraint says that the number of large cells that
intersect any such ball is bounded.
\begin{description}
\item[Packing Constraint:] The number of leaf cells of size at least
$s$ that intersect an open ball of radius $r > 0$ is bounded
above by a function of $r/s$ and $d$, but independent of $n$.
\end{description}
If a tree has cells of bounded aspect ratio, then it satisfies the
packing constraint. Arya et al.\ show that if this assumption is
satisfied, then priority search runs in time that is proportional to the
depth of the tree, times the number of cells of maximum side length
$r\epsilon/d$ that intersect a ball of radius $r$. By the packing
constraint this number of cells depends only on the dimension and
$\epsilon$. The main shortcoming of the standard splitting method is
that it may result in cells of unbounded aspect ratio.
\section{Splitting Methods}\label{split.sec}
In this section we describe the splitting methods that are considered
in our experiments. As mentioned in the introduction, we implemented
two splitting methods, in addition to the standard kd-tree splitting
method. We describe each of them in the following subsections.
\subsection{Sliding-Midpoint}\label{slmid.sec}
The sliding-midpoint splitting method was first introduced in the ANN
library for approximate nearest neighbor searching \cite{ma-alanns-97}.
This method was motivated by the deficiencies of two other
splitting methods, the standard kd-tree splitting method and the
midpoint splitting method. To understand the problem, suppose that the
data points are highly clustered along a few dimensions but vary greatly
along some of the others (see Fig.~\ref{slmid.fig}). The standard kd-tree
splitting method will repeatedly split along the dimension in which the
data points have the greatest spread, leading to many cells with high
aspect ratio. A nearest neighbor query near the center of the bounding
square would visit a large number of these cells. In contrast, the
midpoint splitting method bisects the cell along its longest side,
irrespective of the point distribution. (If there are ties for the
longest side, then the tie is broken in favor of the dimension along
which the points have the highest spread.) This method produces cells of
aspect ratio at most 2, but it may produce leaf cells that contain no
data points. The size of the resulting tree may be very large when the
data distribution is highly clustered and the dimension is high.
\begin{figure}[htbp]
\centerline{\psfig{figure=Figs/slmid.eps,height=1.25in}}
\caption{Splitting methods with clustered point sets.}
\label{slmid.fig}
\end{figure}
The sliding midpoint method works as follows. It first attempts to
perform a midpoint split, by the same method described above. If data
points lie on both sides of the splitting plane then the algorithm acts
exactly as it would for the midpoint split. However, if a trivial split
were to result (in which all the points lie to one side of the splitting
plane), then it attempts to avoid this by ``sliding'' the splitting plane
towards the points until it encounters the first data point. More
formally, if the split is performed orthogonal to the $i$th coordinate,
and all the data points have $i$-coordinates that are larger than that
of the splitting plane, then the splitting plane is translated so that
its $i$th coordinate equals the minimum $i$th coordinate among all the
data points. Let this point be $p_1$. Then the points are partitioned
with $p_1$ in one part of the partition, and all the other data points
in the other part. A symmetrical rule is applied if the points all have
$i$th coordinates smaller than the splitting plane.
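A Python sketch of this splitting rule follows; how ties in the sliding coordinate are assigned is a convention of this sketch.
\begin{verbatim}
import numpy as np

def sliding_midpoint_split(points, lo, hi):
    """Split the cell [lo, hi] at the midpoint of its longest side;
    if all points fall on one side, slide the plane to the nearest
    data point so that no empty cell is produced."""
    points = np.asarray(points, dtype=float)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = int(np.argmax(hi - lo))             # longest side of the cell
    cut = 0.5 * (lo[dim] + hi[dim])           # try a plain midpoint split
    c = points[:, dim]
    if (c > cut).all():                       # left half empty:
        cut = c.min()                         #   slide right to 1st point
        left, right = points[c <= cut], points[c > cut]
    elif (c <= cut).all():                    # right half empty:
        cut = c.max()                         #   slide left to last point
        left, right = points[c < cut], points[c >= cut]
    else:                                     # ordinary midpoint split
        left, right = points[c <= cut], points[c > cut]
    return dim, cut, left, right
\end{verbatim}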
This method cannot result in any trivial splits, implying that the
resulting tree has size $O(n)$. Thus it avoids the problem of large
trees, which the midpoint splitting method is susceptible to. Because
there is no guarantee that the point partition is balanced, the
depth of the resulting tree may exceed $O(\log n)$. However, based on
our empirical observations, the height of this tree rarely exceeds the
height of the standard kd-tree by more than a small constant factor.
It is possible to generate a cell $C$ of very high aspect ratio, but it
can be shown that if this happens, then $C$ is necessarily adjacent to a
sibling cell $C'$ that is fat along the same dimension in which $C$ is
skinny. As a result, it is not possible to generate arbitrarily long
sequences of skinny cells, as the standard splitting method could.
The sliding midpoint method can be implemented with little more effort
than the standard kd-tree splitting method. But, because the depth of
the tree is not necessarily $O(\log n)$, the $O(n\log n)$ construction
time bound does not necessarily hold. There are more complex algorithms
for constructing the tree that run in $O(n\log n)$ time
\cite{amnsw-oaann-94}. However, in spite of these shortcomings, we will
see that the sliding-midpoint method can perform quite well for highly
clustered data sets.
\subsection{Minimum-Ambiguity}\label{minamb.sec}
All of the splitting methods described so far are based solely on
the data points. This may be quite reasonable in applications where
data points and query points come from the same distribution. However
this is not always the case. (For example, a common use of nearest
neighbor searching is in iterative clustering algorithms, such as the
{\em k-means algorithm} \cite{f-camdeic-65,gg-vqsc-92,m-smcamo-67}.
Depending on the starting conditions of the algorithm, the data points
and query points may be quite different from one another.) If the two
distributions are different, then it is reasonable that preprocessing
should be informed of the expected distribution of the query points, as
well as the data points. One way to do this is to provide the
preprocessing phase with the data points and a collection of sample
query points, called {\em training points}. The goal is to compute a
data structure which is efficient, assuming that the query distribution
is well-represented by the training points. The idea of presenting
a training set of query points is not new. For example, Clarkson
\cite{c-nnqms-97} described a nearest neighbor algorithm that uses
this concept.
The {\em minimum-ambiguity splitting method} is given a set $S$ of data
points and a training set $T$ of sample query points. For each query
point $q \in T$, we compute the nearest neighbor of $q$ in $S$ as part
of the preprocessing. For each such $q$, let $r(q)$ denote the distance
to the nearest point in $S$. Let $b(q)$ denote the {\em nearest
neighbor ball}, that is, the locus of points (in the current metric)
whose distance from $q$ is at most $r(q)$. As observed earlier, the
search algorithm visits every leaf cell that overlaps $b(q)$ (and
it may generally visit a large set of leaves).
Given any kd-tree, let $C(q)$ denote the set of leaf cells of the tree
that overlap $b(q)$. This suggests the following optimization problem,
given point sets $S$ and $T$, determine a hierarchical subdivision of
$S$ of size $O(n)$ such that the {\em total overlap}, $\sum_{q \in T}
|C(q)|$, is minimized. This is analogous to the packing constraint, but
applied only to the nearest neighbor balls of the training set. We do
not know how to solve this problem optimally, but we devised the
minimum-ambiguity splitting method as a greedy heuristic.
To motivate our method, we introduce a model for nearest neighbor
searching in terms of a pruning process on a bipartite graph. Given a
cell (i.e., a $d$-dimensional rectangle) $C$. Let $S_C$ denote the
subset of data points lying within this cell and let $T_C$ denote the
subset of training points whose such that the nearest neighbor balls
intersects $C$. Define the {\em candidate graph} for $C$ to be the
bipartite graph on the vertex set $S \cup T$, whose edge set is $S_C
\times T_C$. Intuitively, each edge $(p,q)$ in this graph reflects the
possibility that data point $p$ is a candidate to be the nearest neighbor
of training point $q$. Observe that if a cell $C$ intersects $b(q)$ and
contains $k$ data points, then $q$ has degree $k$ in the candidate graph
for $C$. Since it is our goal to minimize the number of leaf nodes that
overlap $C$, and assuming that each leaf node contains at least one
data point, then a reasonable heuristic for minimizing the number of
overlapping leaf cells is to minimize the average degree of vertices
in the candidate graph. This is equivalent to minimizing the total
number of edges in the graph. This method is similar to techniques
used in the design of linear classifiers based on impurity functions
\cite{bfos-crt-84}.
Here is how the minimum-ambiguity method selects the splitting
hyperplane. If $|S_C| \le 1$, then from our desire to generate a tree
of size $O(n)$, we will not subdivide this cell any further. Otherwise,
let $H$ be some orthogonal hyperplane that cuts $C$ into subcells $C_1$
and $C_2$. Let $S_1$ and $S_2$ be the resulting partition of data
points into these respective subcells, and let $T_1$ and $T_2$ denote
the subsets of training points whose nearest neighbor balls intersect
$C_1$ and $C_2$, respectively. Notice that these subsets are not
necessarily disjoint. We assign a {\em score} to each such hyperplane
$H$, which is equal to the total number of edges in the candidate
graphs of $C_1$ and $C_2$. In particular,
\[
\hbox{Score}(H) = |S_1| \cdot |T_1| + |S_2| \cdot |T_2|.
\]
Intuitively a small score is good, because it means that the average
ambiguity in the choice of nearest neighbors is small. The
minimum-ambiguity splitting method selects the orthogonal hyperplane $H$
that produces a nontrivial partition of the data points and has the
smallest score. For example, in Fig.~\ref{minamb.fig} on the left, we
show the score of the standard kd-tree splitting method. However,
because of the higher concentration of training points on the right side
of the cell, the splitting plane shown on the right actually has a lower
score, and hence is preferred by the minimum-ambiguity method. In this
way the minimum-ambiguity method tailors the structure of the tree to
the distribution of the training points.
\begin{figure}[htbp]
\centerline{\psfig{figure=Figs/minamb.eps,height=1.5in}}
\caption{Minimum ambiguity splitting method.}
\label{minamb.fig}
\end{figure}
The minimum-ambiguity split is computed as follows. At each stage it is
given the current cell $C$, and the subsets $S_C$ and $T_C$. For each
coordinate axis, it projects the points of $S_C$ and the extreme
coordinates of the balls $b(q)$ for each $q \in T_C$ orthogonally onto
this axis. It then sweeps through this set of projections, from the
leftmost to the rightmost data point projection, updating the score as
it goes. It selects the hyperplane with the minimum score. If there
are ties for the smallest score, then it selects the hyperplane that
most evenly partitions the data points.
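For one axis, the scoring sweep can be sketched as follows; for clarity, this version recounts the overlapping balls at each candidate cut (whereas an efficient sweep updates the counts incrementally), and ties among data projections are ignored.
\begin{verbatim}
import numpy as np

def min_ambiguity_cut(data, ball_lo, ball_hi, axis):
    """Return (best_cut, best_score) along one axis, where
    Score(H) = |S1|*|T1| + |S2|*|T2|.  ball_lo/ball_hi hold the
    per-axis extents of the (shrunken) nearest neighbor balls."""
    x = np.sort(data[:, axis])
    blo, bhi = ball_lo[:, axis], ball_hi[:, axis]
    best_cut, best_score = None, np.inf
    for k in range(len(x) - 1):               # nontrivial cuts only
        cut = 0.5 * (x[k] + x[k + 1])
        S1, S2 = k + 1, len(x) - k - 1        # data points per side
        T1 = int((blo < cut).sum())           # balls meeting left cell
        T2 = int((bhi > cut).sum())           # balls meeting right cell
        score = S1 * T1 + S2 * T2
        if score < best_score:
            best_cut, best_score = cut, score
    return best_cut, best_score
\end{verbatim}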
\section{Empirical Results}\label{empir.sec}
We implemented a kd-tree in C++ using the three splitting methods: the
standard method, sliding-midpoint, and minimum-ambiguity. For each
splitting method we generated a number of data point sets, query point
sets, and (for minimum-ambiguity) training point sets. The tree
structure was based on the basic tree structure used in ANN
\cite{ma-alanns-97}. The experiments were run on a Sparc Ultra,
running Solaris 5.5, and the program was compiled by the g++ compiler.
We measured a number of statistics for the tree, including its size,
depth, and the average aspect ratio of its cells.
Queries were answered using priority search. For each group of queries
we computed a number of statistics including CPU time, number of nodes
visited in the tree, number of floating-point operations, number of
distance calculations, and number of coordinate accesses. In our plots we
show only the number of nodes in the tree visited during the search. We
chose this parameter because it is a machine-independent quantity, and
was closely correlated with CPU time. In most of our experiments,
nearest neighbors were computed approximately.
For each experiment we fixed the number of data points, the
dimension, the data-point distribution, and the error bound
$\epsilon$. In the case of the minimum-ambiguity method, the query
distribution is also fixed, and some number of training points were
generated. Then a kd-tree was generated by applying the appropriate
splitting method. For the standard and sliding-midpoint methods the
tree construction does not depend on $\epsilon$, implying that the same
tree may be used for different error bounds. For the minimum-ambiguity
tree, the error bound was used in computing the tree. In particular,
the nearest neighbors of each of the training points were computed only
approximately. Furthermore, the nearest neighbor balls $b(q)$ for each
training point $q$ were shrunken in size by dividing their radius by the
factor $1+\epsilon$. This is because this is the size of the ball that
is used in the search algorithm.
For each tree generated, we generated some number of query points.
The query-point distribution was not always the same as the data
distribution, but it is always the same as the training point
distribution. Then the nearest neighbor search was performed on these
query points, and the results were averaged over all queries.
Although we ran a wide variety of experiments, for the sake of
conciseness we show only a few representative cases. For all of the
experiments described here, we used 4000 data points in dimension 20 for
each data set, and there were 12,000 queries run for each data set. For
the minimum-ambiguity method, the number of training points was 36,000.
The value of $\epsilon$ was either 1, 2, or 3 (allowing the reported
point to be a factor of 2, 3, or 4 further away than the true nearest
neighbor, respectively). We computed the exact nearest neighbors
off-line to guage the algorithm's actual performance. The reason for
allowing approximation errors is that in moderate to high dimensions,
the search times are typically smaller by orders of magnitude. Also the
errors that were observed are typically quite a bit smaller on average
than these bounds (see Fig.~\ref{avgerr.tab}). Note that the average error
committed was typically only about $1/30$ of the allowable error. The
maximum error was computed for each run of 12,000 query points, and then
averaged over all runs. Even this maximum error was only around $1/4$
of the allowed error. Some variation (on the order of a factor of 2)
was observed depending on the choice of search tree and point
distributions.
\begin{figure}
\begin{center}
\begin{tabular}{||c|c|c|c||}
\hline\hline
$\epsilon$ & Avg. error & Std. dev. & Max. Error \\
\hline
1.0 & 0.03643 & 0.0340 & 0.248 \\
2.0 & 0.06070 & 0.0541 & 0.500 \\
3.0 & 0.08422 & 0.0712 & 0.687 \\
\hline\hline
\end{tabular}
\end{center}
\caption{Average error committed, the standard deviation of the error,
and the maximum error versus the allowed error, $\epsilon$. Values
were averaged over all runs.}
\label{avgerr.tab}
\end{figure}
\subsection{Distributions Tested}
The distributions that were used in our experiments are listed below.
The clustered-gaussian distribution is designed to model point sets that
are clustered, but in which each cluster is full-dimensional. The
clustered-orthogonal-ellipsoid and clustered-ellipsoid distributions are
both explicitly designed to model point distributions which are
clustered, and the clusters themselves are flat in the sense that the
points lie close to a lower dimensional subspace. In the first case the
ellipsoids are aligned with the axes, and in the other case they are
more arbitrarily oriented.
\begin{description}
\item[Uniform:]
Each coordinate was chosen uniformly from the interval $[-1,1]$.
\item[Clustered-gaussian:]
The distribution is given a number of color classes $c$, and
a standard deviation $\sigma$. We generated $c$ points from
the uniform distribution, which form cluster centers. Each point
is generated from a gaussian distribution centered at a randomly
chosen cluster center with standard deviation $\sigma$.
\item[Clustered-orthogonal-ellipsoids:]
The distribution can be viewed as a degenerate clustered-gaussian
distribution where the standard deviation of each coordinate is
chosen from one of two classes of distributions, one with a large
standard deviation and the other with a small standard deviation.
The distribution is specified by the number of color classes $c$
and four additional parameters:
\begin{itemize}
\item $d_{\rm max}$ is the maximum number of fat dimensions.
\item $\sigma_{\rm lo}$ and $\sigma_{\rm hi}$ are the minimum and maximum bounds
on the large standard deviations, respectively (for the
fat sides of the ellipsoid).
\item $\sigma_{\rm thin}$ is the small standard deviation (for the thin
sides of the ellipsoid).
\end{itemize}
Cluster centers are chosen as in the clustered-gaussian
distribution. For each color class, a random number $d'$
between $1$ and $d_{\rm max}$ is generated, indicating the number of
fat dimensions. Then $d'$ dimensions are chosen at random to be
fat dimensions of the ellipse. For each fat dimension the
standard deviation for this coordinate is chosen uniformly from
$[\sigma_{\rm lo},\sigma_{\rm hi}]$, and for each thin dimension the standard
deviation is set to $\sigma_{\rm thin}$. The points are then generated
by the same process as clustered-gaussian, but using these
various standard deviations.
\item[Clustered-ellipsoids:]
This distribution is the result of applying $d$ random rotation
transformations to the points of each cluster about its center.
Each cluster is rotated by a different set of rotations. Each
rotation is through a uniformly distributed angle in the range
$[0,\pi/2]$ with respect to two randomly chosen dimensions.
\end{description}
In our experiments involving both clustered-orthogonal-ellipsoids and
clustered-ellipsoids, we set the number of clusters to 5, $d_{\rm max} = 10$,
$\sigma_{\rm lo} = \sigma_{\rm hi} = 0.3$, and $\sigma_{\rm thin}$ varied from $0.03$ to $0.3$.
Thus, for low values of $\sigma_{\rm thin}$ the ellipsoids are relatively flat,
and for high values this becomes equivalent to a clustered-gaussian
distribution with standard deviation of 0.3.
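For reproducibility, a Python sketch of the clustered-orthogonal-ellipsoids generator is given below (the clustered-ellipsoids variant additionally applies $d$ random plane rotations per cluster).
\begin{verbatim}
import numpy as np

def clustered_orthogonal_ellipsoids(n, d, c, d_max, s_lo, s_hi, s_thin,
                                    seed=0):
    """n points in R^d from c axis-aligned Gaussian ellipsoids, each
    with a random number of 'fat' dimensions."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(-1.0, 1.0, size=(c, d))  # cluster centers
    sigma = np.full((c, d), s_thin)                # thin by default
    for k in range(c):
        dp = int(rng.integers(1, d_max + 1))       # number of fat dims
        fat = rng.choice(d, size=dp, replace=False)
        sigma[k, fat] = rng.uniform(s_lo, s_hi, size=dp)
    labels = rng.integers(0, c, size=n)            # random cluster
    return centers[labels] + rng.normal(size=(n, d)) * sigma[labels]
\end{verbatim}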
\subsection{Data and Query Points from the Same Distribution}
For our first set of experiments, we considered data and query points
from the same clustered distributions. We considered both
clustered-orthogonal-ellipsoids and clustered-ellipsoid distributions in
Figs.~\ref{dce-qce.fig} and~\ref{dcr-qcr.fig}, respectively. The three
different graphs are for (a) $\epsilon = 1$, (b) $\epsilon = 2$, and (c)
$\epsilon = 3$. In all three cases the same clusters centers were
used. Note that the graphs do not share the same $y$-range,
and in particular the search algorithm performs significantly faster
as $\epsilon$ increases.
Observe that all of the splitting methods perform better when $\sigma_{\rm thin}$
is small, indicating that to some extent they exploit the fact that the
data points are clustered in lower dimensional subspaces. The relative
differences in running time were most noticeable for small values of
$\sigma_{\rm thin}$, and tended to diminish for larger values.
Although the minimum-ambiguity splitting method was designed for dealing
with data and query points from different distributions, we were
somewhat surprised that it actually performed the best of the three
methods in these cases. For small values of $\sigma_{\rm thin}$ (when
low-dimensional clustering is strongest) its average running time
(measured as the number of nodes visited in the tree) was typically from
30-50\% lower than the standard splitting method, and over 50\% lower than
the sliding-midpoint method. The standard splitting method typically
performed better than the sliding-midpoint method, but the difference
decreased to insignificance (with the standard method sometimes a bit
worse) as $\sigma_{\rm thin}$ increased.
\subsection{Data and Query Points from Different Distributions}
For our second set of experiments, we considered data points from a
clustered distribution and query points from a uniform distribution.
This particular choice was motivated by the situation shown in
Fig.~\ref{slmid.fig}, where the standard splitting method can produce
cells with high aspect ratios. For the data points we considered both
the clustered-orthogonal-ellipsoids and clustered-ellipsoid
distributions in Figs.~\ref{dce-qu.fig} and~\ref{dcr-qu.fig},
respectively. As before, the three different graphs are for (a)
$\epsilon = 1$, (b) $\epsilon = 2$, and (c) $\epsilon = 3$. Again,
note that the graphs do not share the same $y$-range.
Unlike the previous experiment, overall running times did not vary
greatly with $\sigma_{\rm thin}$. Sometimes running times increased moderately
and other times they decreased moderately as a function of $\sigma_{\rm thin}$.
However, there was a significant difference for the standard
splitting method, which consistently performed much worse than the other
two methods. For the smallest values of $\sigma_{\rm thin}$, there was around a
5-to-1 difference in running time between the standard method and
sliding-midpoint.
For larger values of $\epsilon$ (2 and 3) the performance of
sliding-midpoint and minimum-ambiguity were very similar, with
sliding-midpoint having the slight edge. It may seem somewhat
surprising that minimum-ambiguity performed significantly worse (a
factor of 2 to 3 times worse) than sliding-midpoint, since
minimum-ambiguity was designed exactly for the situation where
there is a difference between data and query distributions. This may be
due to limitations on the heuristic itself, or the limited size of the
training set. However, it should be kept in mind that sliding-midpoint
was specially designed to produce large empty cells in the uncluttered
regions outside the clusters (recall Fig.~\ref{slmid.fig}).
\subsection{Construction Times}
The results of the previous sections suggest that minimum-ambiguity
splitting produces trees that can answer queries efficiently for a
variety of data and query distributions. Its main drawback is the
amount of time that it takes to build the tree. Both the standard and
sliding-midpoint methods can be built quite efficiently in time $O(nh)$,
where $n$ is the number of data points, and $h$ is the height of the
tree. The standard kd-tree has $O(\log n)$ height, and while the
sliding-midpoint tree need not have $O(\log n)$ height, this seems to be
true for many point distributions. For the 4000 point data sets in
dimension 20, both of these trees could be constructed in under 10 CPU
seconds.
However, the construction time for the minimum-ambiguity tree is quite a
bit higher. It can be argued that the time to construct the tree is
roughly (within logarithmic factors) proportional to the time to compute
the (approximate) nearest neighbors for all the training points. In
order to construct the tree, the nearest neighbors of the training
points must first be computed. This is done in an auxiliary nearest
neighbor tree, e.g., one built using the standard or sliding-midpoint
method. Then, determining the splitting hyperplane for each cell of
the minimum-ambiguity tree requires considering all the nearest
neighbor balls that overlap the current cell. However, in the course
of computing the nearest neighbors of the training points, each point
whose nearest neighbor ball overlaps the cell would have to visit the
cell in any case.
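To make the construction concrete, the following sketch (our
illustrative Python, not the implementation used in the experiments)
outlines the recursion. It assumes the approximate nearest neighbor
balls of the training points have already been computed with the
auxiliary tree; the midpoint-based \texttt{choose\_split} is only a
heuristic stand-in for the actual ambiguity measure.
\begin{verbatim}
# Illustrative minimum-ambiguity construction (a sketch only).
# A cell is a list of (lo, hi) bounds per coordinate; a ball is
# (center, radius), one per training point, precomputed with an
# auxiliary kd-tree.

def overlaps(cell, ball):
    center, r = ball
    d2 = sum(max(lo - c, 0.0, c - hi) ** 2
             for (lo, hi), c in zip(cell, center))
    return d2 <= r * r

def choose_split(cell, balls):
    # Try the midpoint of each coordinate; cut as few
    # overlapping balls as possible ("ambiguity" proxy).
    best = None
    for dim, (lo, hi) in enumerate(cell):
        t = 0.5 * (lo + hi)
        cut = sum(abs(c[dim] - t) < r for c, r in balls)
        if best is None or cut < best[0]:
            best = (cut, dim, t)
    return best[1], best[2]

def build(points, cell, balls, leaf_size=4, depth=0):
    if len(points) <= leaf_size or depth > 30:
        return ('leaf', points)
    local = [b for b in balls if overlaps(cell, b)]
    dim, t = choose_split(cell, local)
    lo = [c if i != dim else (c[0], t) for i, c in enumerate(cell)]
    hi = [c if i != dim else (t, c[1]) for i, c in enumerate(cell)]
    return ('node', dim, t,
            build([p for p in points if p[dim] < t],
                  lo, local, leaf_size, depth + 1),
            build([p for p in points if p[dim] >= t],
                  hi, local, leaf_size, depth + 1))
\end{verbatim}
Only the balls overlapping the current cell are passed down each
branch, which is why the construction cost tracks the cost of
answering the training queries themselves.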
Since we used 9 times the number of data points as training points, it
is easy to see that the minimum-ambiguity tree will take much longer
to compute than the other two trees. Notice that when $\epsilon > 0$,
we compute nearest neighbors approximately, and so this can offer an
improvement in construction time. In Fig.~\ref{ma-const.fig} we
present the construction time for the minimum-ambiguity tree for
various combinations of data and training distributions. Observe
that the construction times are considerably greater than those for
the other two methods (which were under 10 CPU seconds), and that
the construction time is significantly faster for higher values
of $\epsilon$.
\section{Conclusions}
In this paper we have presented an empirical analysis of two new
splitting methods for kd-trees: sliding-midpoint and minimum-ambiguity.
Both of these methods were designed to remedy some of the deficiencies
of the standard kd-tree splitting method, with respect to data
distributions that are highly clustered in low-dimensional subspaces.
Both methods were shown to be considerably faster than the standard
splitting method in answering queries when data points were drawn from a
clustered distribution and query points were drawn from a uniform
distribution. The minimum-ambiguity method performed better when both
data and query points were drawn from a clustered distribution. But this
method has a considerably higher construction time. The
sliding-midpoint method, while easy to build, seems to perform sometimes
better and sometimes worse than the standard kd-tree splitting method.
The enhanced performance of the minimum-ambiguity method suggests that
even within the realm of kd-trees, there may be significant improvements
to be made by fine-tuning the structure of the tree to the data and
query distributions. However, because of its high construction cost, it
would be nice to determine whether there are other heuristics that would
lead to faster construction times. This suggests the intriguing
possibility of search trees whose structure adapts dynamically to the
structure of queries over time. The sliding-midpoint method raises the
hope that it may be possible to devise a simple and efficiently
computable splitting method that performs well across a wider variety
of distributions than the standard splitting method.
\section{Acknowledgements}
We would like to thank Sunil Arya for helpful discussions on the
performance of the sliding-midpoint method.
\section{Introduction}
Automated planning is a major topic of research in artificial intelligence, and enjoys a long and distinguished history \cite{strips:-a-new-approach-to-the-application-of-theorem}. The classical paradigm assumes a distinguished initial state, comprised of a set of facts, and is defined over a set of actions which change that state in one way or another. Actions are further characterised in terms of their applicability conditions, that is, things that must be true for the agent to be able to execute it, and effects, which procedurally amounts to adding new facts to a state while removing others. The scientific agenda is then to design algorithms that synthesise a sequence of actions that takes the agent from an initial state to a desired goal state.
From the early days, automated planning was motivated by robotics applications. But it was observed that the real world -- or more precisely, the robot's knowledge about the world -- is almost never simply a set of facts that are true, and actions that the agent intends to execute never operate the way they are supposed to. One way to make sense of this complication is to separate the ``high-level reasoning,'' in our case the planner's search space, from the low-level sensor-motor details. On the positive side, such a move allows the plan representation to be finite, discrete and simple. On the negative side, significant expert knowledge has to go into materialising this separation of concerns, possibly at the loss of clarity on the behaviour of the system as a whole.
Incidentally, by testing the robot's effectors repeatedly in a controlled environment, one can approximate the uncertain effects of an action in terms of a probability distribution. Similarly, based on minimalistic assumptions about the environment, expressed as a probabilistic prior, by repeated sampling, the robot can update its prior to converge on a reasonable posterior that approximates the environment \cite{probabilistic-robotics}. To that end, probabilistic planning attempts to incorporate such models directly into the planning process. There are to-date numerous languages and algorithmic frameworks for probabilistic planning, e.g., \cite{probabilistic-planning-via-heuristic-forward,decision-theoretic-planning:-structural-assumptions,planning-and-acting-in-partially-observable,planning-under-uncertainty-for-robotic}.
In this article, we briefly report on probabilistic planning through the lens of {\it probabilistic programming} \cite{probabilistic-programming}. Probabilistic programming is a programming paradigm that aims to ease the specification of structured probability distributions; these languages are developed so as to enable modularity and re-use in probabilistic machine learning applications. Their atomic building blocks incorporate stochastic primitives, and the formal representation also allows for compositionality. Here, we specifically provide an overview of the features of two kinds of systems, both of which have their roots in logic programming:
\begin{itemize}
\item HYPE \cite{planning-in-discrete-and-continuous-markov}: a planning framework based on {\it distributional clauses} \cite{the-magic-of-logical-inference-in-probabilistic}; and
\item {ALLEGRO} \cite{allegro:-belief-based-programming-in-stochastic}: a high-level control programming framework that extends GOLOG \cite{knowledge-in-action:-logical-foundations}.
\end{itemize}
These two systems emphasise different strengths of probabilistic programming, which we think are particularly useful for complex modelling issues raised in probabilistic planning. HYPE can easily describe growing and shrinking state spaces owing to uncertainty about the existence of objects, and thus is closely related to BLOG models \cite{blog:-probabilistic-models-with:book,first-order-open-universe-pomdps}. Since HYPE is an extension of PROBLOG \cite{problog:-a-probabilistic-prolog-and-its-application}, it stands to benefit from a wide range of applications and machine learning models explored with PROBLOG.\footnote{\small dtai.cs.kuleuven.be/problog} The dynamical aspects of the domain are instantiated by reifying time as an argument in the predicates, and so it is perhaps most appropriate for finite horizon planning problems.
ALLEGRO treats actions as first-class citizens and is built on a rich model of dynamics and subjective probabilities, which allows it to handle context-sensitive effect axioms, and non-unique probability measures placed on first-order formulas. GOLOG has also been widely used for a range of applications that apply structured knowledge (e.g., ontologies) in dynamical settings \cite{cognitive-robotics}, and ALLEGRO stands to inherit these developments. GOLOG has also been shown as a way to structure search in large plan spaces \cite{exploiting-procedural-domain-control}. Finally, since there are constructs for iteration and loops, such programs are most appropriate for modelling non-terminating behaviour \cite{a-logic-for-non-terminating-golog-programs}.
In the sequel, we describe the essential formal and algorithmic contributions of these systems before concluding with open computational issues.
\section{HYPE}
PROBLOG aims to unify logic programming and probabilistic specifications, in the sense of providing a language to specify distributions together with the means to query about the probabilities of events. As a very simple example, to express that the object $ c$ is on the table with a certain probability, and that all objects on the table are also in the room, we would write (free variables are assumed to be quantified from the outside):
\begin{align*}
.6::onTable(c). \\
inRoom(x) \leftarrow onTable(x).
\end{align*}
This then allows us to query the probability of atoms such as $inRoom(c)$.
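For readers who wish to experiment, PROBLOG ships with a Python
interface; a minimal version of the example above might be run as
follows (this assumes the \texttt{problog} package is installed, and
the reader should consult its documentation for the authoritative
API):
\begin{verbatim}
from problog.program import PrologString
from problog import get_evaluatable

model = PrologString("""
0.6::onTable(c).
inRoom(X) :- onTable(X).
query(inRoom(c)).
""")

# Compile the program and evaluate all declared queries; the
# result maps each query atom to its probability (here 0.6).
print(get_evaluatable().create_from(model).evaluate())
\end{verbatim}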
A more recent extension \cite{the-magic-of-logical-inference-in-probabilistic} geared for continuous distributions and other infinite event-space distributions allows the head atom of a logical rule to be drawn from a distribution directly, by means of the following syntax:
\[ h \sim D \leftarrow b_1, \ldots, b_n. \]
For example, suppose there is an urn with an unknown number of balls \cite{blog:-probabilistic-models-with:book}. Suppose we pull a ball at a time and put it back in the urn, and repeat these steps (say) 6 times. Suppose further we have no means of identifying if the balls drawn were distinct from each other. A probabilistic program for this situation might be as follows:
\begin{align*}
n \sim poisson(6). \\
pos(x) \sim uniform(1,10) \leftarrow between(1, \simeq\!(n),x).
\end{align*}
For simplicity, we assume here that the physical form of the urn is a straight line of length 10, and the position of a ball is assumed to be anywhere along this line.
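The generative reading of such a program is easy to mimic by forward
sampling. The following minimal sketch (ours, using \texttt{numpy};
the seed is arbitrary) draws one possible world of the urn program:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
# First sample the number of balls from Poisson(6), then a
# position on the line [1, 10] for each ball, as in the program.
n = rng.poisson(6)
positions = rng.uniform(1, 10, size=n)
print(n, positions)
\end{verbatim}
Note that the number of random variables is itself random, which is
exactly the open-universe flavour of BLOG-style models.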
HYPE is based on a dynamic extension that allows us to temporally index the truth of atoms, and so can be used to reason about actions. For example, the program:
\begin{align*}
numBehind(x,t+1) \sim poisson(1) \leftarrow removeObj(x,t).
\end{align*}
says that on removing the object $ x$ at $ t$, we may assume that there are objects -- typically one such object -- behind $ x$. Such programs can be used in object tracking applications to reason about occluded objects \cite{hybrid-probabilistic-logic-programming}.
A common declaration in many robotics applications \cite{probabilistic-robotics} is to define actions and sensors with an error profile, such as a Gaussian noise model. These can be instantiated in HYPE using:
\begin{align*}
pos(x, t+1) \sim gaussian(\simeq\!(pos(x,t)) + 1, var) \\ \quad \qquad \leftarrow move(x,t). \\
obs(x,t+1) \sim gaussian(\simeq \! (pos(x,t)), var).
\end{align*}
The first rule says that the position of $ x$ on doing a move action is drawn from a normal distribution whose mean is $ x$'s current position incremented by one. The second one says that observing the current position of $ x$ is subject to additive Gaussian noise.
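Taken together, the two rules define a simple linear-Gaussian
state-space model, which can be simulated forward directly. A minimal
sketch (ours; \texttt{var} and the horizon are arbitrary illustrative
values):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
var, pos = 0.05, 0.0
for t in range(10):
    # obs(x,t): noisy sensor reading of the current position
    obs = rng.normal(pos, np.sqrt(var))
    # move(x,t): next position ~ gaussian(current + 1, var)
    pos = rng.normal(pos + 1.0, np.sqrt(var))
    print(t, round(obs, 3), round(pos, 3))
\end{verbatim}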
As an automated planning system, HYPE instantiates a Markov decision process (MDP) \cite{markov-decision-processes:-discrete}. Recall that MDPs are defined in terms of states, actions, stochastic transitions and reward functions, which can be realised in the above syntax using rules such as:
\begin{align*}
poss(act, t) \leftarrow conditions(t). \\
reward(num, t) \leftarrow conditions(t).
\end{align*}
To compute a policy, which is a mapping from states and time points to actions, HYPE combines importance sampling and SLD resolution to effectively bridge the high-level symbolic specification and the probabilistic components of the programming model. HYPE allows states and actions to be discrete or continuous, yielding a very general planning system. Empirical evaluations are reported in \cite{planning-in-discrete-and-continuous-markov} and \cite{hybrid-relational-mdps}.
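HYPE's planner interleaves SLD resolution over such rules with
importance sampling; as a schematic stand-in for the sampling half,
the following sketch (ours; all names are hypothetical) estimates the
finite-horizon value of a fixed policy in a generic generative MDP by
plain Monte Carlo rollouts:
\begin{verbatim}
import numpy as np

def rollout_value(step, reward, policy, s0,
                  horizon, episodes, rng):
    # step(s, a, rng) -> next state (stochastic transition),
    # reward(s, a)    -> immediate reward,
    # policy(s, t)    -> action; all user-supplied callables.
    total = 0.0
    for _ in range(episodes):
        s = s0
        for t in range(horizon):
            a = policy(s, t)
            total += reward(s, a)
            s = step(s, a, rng)
    return total / episodes

# Example: 1D walk with drift actions and reward -|s|.
rng = np.random.default_rng(2)
print(rollout_value(
    lambda s, a, rng: s + a + rng.normal(0.0, 0.1),
    lambda s, a: -abs(s),
    lambda s, t: -0.5 * s,
    0.0, 20, 500, rng))
\end{verbatim}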
\section{ALLEGRO}
The GOLOG language has been successfully used in a wide range of applications involving control and planning \cite{cognitive-robotics},
and is based on a simple ontology that all changes are a result of named actions \cite{knowledge-in-action:-logical-foundations}.
An initial state describes the truth values of properties, and actions may affect these values in non-trivial context-sensitive ways.
In particular, GOLOG is a programming model where executing actions are the simplest instructions in the program, upon which more involved constructions for iteration and loops are defined.
For example, a program to clear a table containing an unknown number of blocks would be as follows:
\[([\pi x ~onTable(x)?; removeObj(x)] )^*; \neg \exists x ~ onTable(x)? \]
Here, $ \pi$ is the non-deterministic choice of argument, semi-colon denotes sequence, ? allows for test conditions, and $*$ is unbounded iteration. The program terminates successfully because the sub-program before the final test condition removes every object from the table.
As argued in \cite{cognitive-robotics}, the rich syntax of GOLOG allows us, on the one hand, to represent policies and plans in an obvious fashion; for example:
\[ a_1; \ldots; a_n; P?\]
ensures that the goal $P$ is true on executing the sequence of actions. However, the syntax also allows open-ended search; for example:
\[while~~ \neg P~~\pi a.~a \]
tries actions until $P$ is made true.
The benefit of GOLOG then is that it allows us to explore plan formulations between these two extremes, including partially specified programs that are completed by a meta-language planner.
ALLEGRO augments the underlying ontology to reason about probability distributions over state properties, and allows actions with uncertain (stochastic) effects. In logical terms, the semantic foundations rest on a rich logic of belief and actions. Consequently, it can handle partial probabilistic specifications. For example, one can say $c$ is on the table with a certain probability as before: \( pr(onTable(c)) = .6\), but it is also possible to express the probability that there is an object on the table without knowing which one: $ pr(\exists x~onTable(x)) = .6$. We can go further and simply say that there is a non-zero probability of that statement: $ pr(\exists x~onTable(x)) > 0$, which means that any distribution satisfying the formula is admissible. Such a feature can be very useful: for example, in \cite{integrated-task-and-motion-planning}, it is argued that when planning in highly stochastic environments, it is useful to allow a margin of error in the probability distributions defined over state properties.
To model the case of Gaussian error models, actions with uncertain effects are given a general treatment. For one thing, the effects of actions are axiomatised using the notion of successor state axioms which incorporate Reiter's solution to the frame problem \cite{knowledge-in-action:-logical-foundations}. So, for example, changing the position of an object using a move action can be expressed as:
\begin{align*} pos(x, do(a,s)) = u \equiv \\ \qquad (a = move(x,y) \land pos(x,s) = u+y) \\
\qquad \lor (a\neq move(x,y) \land pos(x,s) = u)\end{align*}
This says that if the action of moving $ x$ was executed, its position (along a straight line) is decremented by $ y$ units, and for all other actions, the position is unaffected. To deal with uncertain effects, we will distinguish between what the agent intends and what actually happens. That is, let $ move(x,y,z)$ be a new action type, where $ y$ is what the agent intends, and $ z$ is what happens. Then, the successor state axiom is rewritten as follows:
\begin{align*} pos(x, do(a,s)) = u \equiv \\ \qquad (a = move(x,y,z) \land pos(x,s) = u+z) \\
\qquad \lor (a\neq move(x,y,z) \land pos(x,s) = u)\end{align*}
The story remains essentially the same, except that $z$ determines the actual position in the successor state, but it is not under the agent's control. A Gaussian error profile can be accorded to this action by means of:
\[ l(move(x,y,z),s) = gaussian(z; y, var)\]
That is, the actual value is drawn from a Gaussian whose mean is the intended argument $ y$. Analogously, attributing additive Gaussian noise in a sensor for observing the position is defined using:
\[ l(obs(x,z),s) = gaussian(z; pos(x,s), var) \]
That is, the observation $ z$ is drawn from a Gaussian whose mean is the actual position of the object $ x$.
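Because both the effect and the observation models here are Gaussian,
the degrees of belief admit a closed-form update in this example. A
minimal sketch of the conjugate filter (ours; ALLEGRO itself
manipulates logical formulas and need not assume a parametric belief):
\begin{verbatim}
def move_then_observe(mu, s2, y, z_obs, var):
    # (mu, s2): Gaussian belief over pos(x); y: intended move;
    # z_obs: sensed position; var: actuator and sensor variance.
    # Predict: pos decreases by the actual z ~ N(y, var), so the
    # mean shifts by -y and the variance widens by var.
    mu_p, s2_p = mu - y, s2 + var
    # Update: condition on obs(x, z_obs) with likelihood
    # N(z_obs; pos, var), as in l(obs(x,z),s) above.
    k = s2_p / (s2_p + var)
    return mu_p + k * (z_obs - mu_p), (1.0 - k) * s2_p
\end{verbatim}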
As hinted above, as an extension to GOLOG, the syntax of ALLEGRO is designed to compactly represent full or partial plans and policies in a general way, and on termination, ALLEGRO programs can be tested for any probabilistic or expectation-based criteria.
The foundations of ALLEGRO were established in \cite{allegro:-belief-based-programming-in-stochastic},
together with a discussion of its empirical behaviour against a predecessor based on goal regression.
\section{Conclusions}
Automated planning is often deployed in an application context, and in highly stochastic and uncertain domains, the planning model may be derived from a complex learning and reasoning pipeline, or otherwise defined over non-trivial state spaces with unknowns. In this article, we reported on two probabilistic programming systems to realise such pipelines. Indeed, combining automated planning and probabilistic programming is receiving considerable attention recently, e.g., \cite{first-order-open-universe-pomdps}.
These languages are general purpose, and their first-order expressiveness can not only enable a compact codification of the domain but also provide computational leverage.
One of the key concerns with the use of probabilistic programming and stochastic specifications generally is that most systems perform inference by Monte Carlo sampling.
As is well-known, one is only able to obtain asymptotic guarantees with such methods, and moreover, handling low-probability observations can be challenging. In that regard, there have been recent logical approaches for inference in mixed discrete-continuous probability spaces with tight bounds on the computed answers \cite{hashing-based-approximate-probabilistic-inference,probabilistic-inference-in-hybrid-domains,approximate-counting-in-smt-and-value-estimation}. Since HYPE, ALLEGRO and many such systems use probabilistic inference as a fundamental computational backbone, the question then is whether the aforementioned approaches can enable robust planning and programming frameworks in stochastic domains.
\bibliographystyle{aaai}
\section{Results}
\tocless\subsection{Theory for constructing the intrinsic protein folding
landscape from measurements} In a dual beam optical tweezer setup
(Fig.~\ref{sys}) the protein is covalently connected to
double-stranded DNA handles that are attached to glass or polystyrene
beads in two optical traps. For small displacements of the beads from
the trap centers~\cite{Greenleaf05}, the trap potentials are harmonic,
with strengths $k_x = k_z \equiv k_\text{trap}$ along the lateral
plane, and a weaker axial strength $k_y = \alpha k_\text{trap}$, where
$\alpha < 1$~\cite{Neuman04}. For simplicity, we take both traps to
have equal strengths, though our method can be generalized to an
asymmetric setup. The trap centers are separated from each other
along the $\hat{\mb{z}}$ axis, with trap 1 at $z=0$ and trap 2 at
$z=z_\text{trap}$. As the bead-handle-protein (bhp) system fluctuates
in equilibrium, the positions of the bead centers, $\mb{r}_1(t)$ and
$\mb{r}_2(t)$, vary in time. We assume that the experimentalist can
collect a time series of the $z$ components of the bead positions,
$z_1(t)$ and $z_2(t)$. Denote the mean of each time series as $
\bar{z}_{1}$ and $\bar{z}_2$. We assume that the trap centers are
sufficiently far apart that the whole system is under tension, which
implies that the mean bead displacements are non-zero, $\bar{z}_1 =
z_\text{trap}- \bar{z}_2 = \bar{F}/k_\text{trap} > 0$, where $\bar{F}$
is the mean tension along $\hat{\mb{z}}$. We focus on the case where
there is no feedback mechanism to maintain a constant force, so the
instantaneous tension in the system changes as the total end-to-end
extension component $z_\text{tot}(t) \equiv z_2(t)-z_1(t)$
[Fig.~\ref{sys}] varies. Though we choose one particular passive
setup, the theory can be adapted to other types of passive optical
tweezer systems~\cite{Greenleaf05,Woodside06}, where the force is
approximately constant (in which case we could skip the transformation
into the constant-force ensemble described below). The mean tension
$\bar{F}$, a measure of the overall force scale, can be tuned at the
start of the experiment by making the trap separation $z_\text{trap}$
larger (leading to higher $\bar{F}$) or smaller (leading to lower
$\bar{F}$). Because $\bar{F} = k_\text{trap} (z_\text{trap} -
\bar{z}_\text{tot})/2$, the precise relationship between
$z_\text{trap}$ and $\bar{F}$ requires knowing the mean total
extension $\bar{z}_\text{tot}$, which depends among other things on
the details of the energy landscape. Hence, we cannot in general
calculate beforehand what $\bar{F}$ will be for a given
$z_\text{trap}$. However, one of the advantages of our approach is
that we can combine data from different experimental runs (each having
a different $z_\text{trap}$ and $\bar{F}$) to accurately construct the
protein free energy profile. This combination is carried out through
the weighted histogram analysis method (WHAM)~\cite{Ferrenberg89} (see
Supplementary Information (SI) for details), in a spirit similar
to earlier work in the context of optical
tweezers~\cite{Shirts08,Messieres11}. We first solve the problem of
obtaining the protein landscape based on a single observed
trajectory of bead-to-bead separations specified as $z_\text{tot}$
as a function of $t$.
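As an aside, the WHAM combination mentioned above takes the standard
self-consistent form. The following generic binned sketch (ours, in
Python; the SI specifies the exact variant we use) conveys the idea:
\begin{verbatim}
import numpy as np

def wham(hists, counts, bias, tol=1e-10, max_iter=10000):
    # hists[i, b]: histogram of run i over extension bins b;
    # counts[i]:   total samples in run i (numpy array);
    # bias[i, b]:  exp(-beta * U_i(z_b)) for run i's trap.
    f = np.ones(hists.shape[0])   # per-run normalization factors
    num = hists.sum(axis=0)       # total counts per bin
    for _ in range(max_iter):
        denom = (counts[:, None] * bias / f[:, None]).sum(axis=0)
        p = num / denom
        p /= p.sum()
        f_new = (bias * p[None, :]).sum(axis=1)
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return p   # unbiased bin probabilities
\end{verbatim}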
The key quantity in the construction procedure is ${\cal
P}_\text{tot}(z_\text{tot})$, the equilibrium probability
distribution of $z_\text{tot}$ within the external trap potential,
which can be directly derived from the experimental time series.
The imperfect nature of the measured data, due to noise and low-pass
filtering effects in the recording apparatus, will
distort ${\cal P}_\text{tot}(z_\text{tot})$, but we have developed a
technique to model and approximately correct for these issues (see
Finite Bandwidth Scaling (FBS) in the {\it Methods}). Once we have
an experimental estimate for ${\cal P}_\text{tot}(z_\text{tot})$,
the objective is to find $\tilde{\cal
P}_\text{p}(z_\text{p};F_0)$, the intrinsic distribution of the
protein end-to-end extension component $z_\text{p}$ at some {\it
constant} force $F_0$, whose value we are free to choose. (We will
use tilde notation to denote probabilities in the constant-force
ensemble.) The intrinsic protein free energy profile is
$\tilde{\cal F}_\text{p}( z_\text{p};F_0) = -k_B T \ln \tilde{\cal
P}_\text{p}(z_\text{p};F_0)$. The procedure, obtained from
rigorous theoretical underpinnings described in detail in the
SI, consists of two steps (sketched numerically after the list):
\vspace{1em}
\begin{enumerate}
\item {\it Transformation into the constant-force ensemble}. Given
${\cal P}_\text{tot}(z_\text{tot})$, we obtain the total system
end-to-end distribution at a constant $F_0$ using,
\begin{equation}\label{eq:0}
\begin{split}
&\tilde{\cal
P}_\text{tot}(z_\text{tot};F_0)\\
&\:= C^{-1} e^{\beta F_0 z_\text{tot} +
\frac{1}{4}\beta k_\text{trap} ( z_\text{trap} - z_\text{tot})^2} {\cal
P}_\text{tot}(z_\text{tot}),
\end{split}
\end{equation}
where $\beta = 1/k_B T$ and $C$ is a normalization constant. The
equation above applies in the case of a single experimental trajectory
at a particular trap separation $z_\text{trap}$.
\item {\it Extraction of the intrinsic protein distribution}. In the
constant-force ensemble, $\tilde{\cal P}_\text{tot}
= \tilde{\cal P}_\text{b} \ast \tilde{\cal P}_\text{h} \ast
\tilde{\cal P}_\text{p} \ast \tilde{\cal P}_\text{h} \ast
\tilde{\cal P}_\text{b}$ relates the total end-to-end fluctuations
$\tilde{\cal P}_\text{tot}(z_\text{tot};F_0)$ to the end-to-end
distributions for the individual components $\tilde{\cal
P}_\alpha(z_\alpha;F_0)$, where $\alpha$ denotes bead (b), handle
(h), or protein (p), and $\ast$ is a 1D convolution
operator. For the beads, ``end-to-end'' refers to the
extension between the bead center and the handle
attachment point, projected along $\hmb{z}$. In Fourier space the
convolution has the form:
\begin{equation}\label{eq:1}
\begin{split}
\tilde{\cal P}_\text{tot}(k;F_0) &= \tilde{\cal P}_\text{b}^2(k;F_0) \tilde{\cal P}^2_\text{h}(k;F_0) \tilde{\cal P}_\text{p}(k;F_0)\\
& \equiv \tilde{\cal P}_\text{bh}(k;F_0) \tilde{\cal P}_\text{p}(k;F_0),
\end{split}
\end{equation}
where $\tilde{\cal P}_\alpha(k;F_0)$ is the Fourier transform of
$\tilde{\cal P}_\alpha(z_\alpha;F_0)$. Here $\tilde{\cal P}_\text{bh}$,
which is the result of convolving all the bead and handle
distributions, acts as the main point spread function relating the
intrinsic protein distribution $\tilde{\cal P}_\text{p}$ to
$\tilde{\cal P}_\text{tot}$. Since $\tilde{\cal P}_\text{bh}$ can be
modeled from a theoretical description of the handles and
beads, we can solve for $\tilde{\cal P}_\text{p}$ using
Eq.~\eqref{eq:1} and hence find $\tilde{\cal F}_\text{p}$, the
intrinsic free energy profile of the protein.
\end{enumerate}
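A minimal numerical sketch of the two steps (ours, in Python with
\texttt{numpy}) is given below. Here \texttt{p\_bh\_k} denotes the
model point spread function evaluated on the FFT wavenumber grid, and
the bare division in step 2 is only schematic; in practice the
deconvolution must be regularized, as described in the SI.
\begin{verbatim}
import numpy as np

def reconstruct(z, p_tot, z_trap, k_trap, F0, p_bh_k,
                beta=1.0 / 4.114):   # 1/(k_B T), pN*nm, 298 K
    # Step 1: reweight into the constant-force ensemble at F0;
    # subtract the max exponent for numerical stability.
    expo = beta * F0 * z \
        + 0.25 * beta * k_trap * (z_trap - z) ** 2
    p_cf = p_tot * np.exp(expo - expo.max())
    p_cf /= np.trapz(p_cf, z)
    # Step 2: naive Fourier deconvolution by the bead/handle
    # point spread function.
    p_p = np.real(np.fft.ifft(np.fft.fft(p_cf) / p_bh_k))
    p_p = np.clip(p_p, 0.0, None)
    return p_p / np.trapz(p_p, z)
\end{verbatim}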
The derivation of the procedure (given in the SI, along with technical
aspects of its numerical implementation) shows the conditions under
which the two step method works. The mathematical approximation
underlying step 1 becomes exact if either of the following hold:
(i) $k_x = k_y = 0$; (ii) the full 3D total system end-to-end
probability is separable into a product of distributions for
longitudinal ($\hat{\mb{z}}$) and transverse ($\hat{\mb{x}}$,
$\hat{\mb{y}}$) components. In general, condition (ii) is not
physically sensible~\cite{Hyeon08}. However, if
$\bar\rho_\text{tot}$ is the typical length scale describing
transverse fluctuations, then condition (i) is approximately valid
when $\beta k_\text{trap}\bar{\rho}_\text{tot}^2 \ll 1$. If this
condition breaks down, accurate construction of the intrinsic energy
landscape cannot be performed without knowledge of the transverse
behavior. However, in the simulation and experimental results below,
the force scales are such that transverse fluctuations are small,
$\bar{\rho}_\text{tot} \sim {\cal O}(1\;\text{nm})$, so to ensure
condition (i) is met, we require that $k_\text{trap} \ll
k_BT/\bar{\rho}_\text{tot}^2 = 4.1$ pN/nm at $T= 298$ K. We use
the experimental value $k_\text{trap} = 0.25\;\text{pN}/\text{nm}$ in
our test cases~\cite{Gebhardt10}, which is well under the upper limit.
In principle, one can choose any $F_0$, the force value of the
constant force ensemble where we carry out the analysis. In practice,
$F_0$ should be chosen from among the range of forces that is sampled
in equilibrium during the actual experiment, since this will minimize
statistical errors in the final constructed landscape. For example,
setting $F_0 = \bar{F}$, the mean tension, is a reasonable choice.
Step 2 depends on knowledge of $\tilde{\cal P}_\text{bh}(k;F_0)$, and
thus the individual constant-force distributions of the beads and the
handles in Fourier space. The point spread function is characterized
by: the bead radius $R_b$, the handle contour length $L$, the handle
persistence length $l_p$, and the handle elastic stretching modulus
$\gamma$. In $\tilde{\cal P}_\text{h}$ we also include the covalent
linkers which attach the handles to the beads and protein. If we
model these linkers as short, stiff harmonic springs, we have two
additional parameters: the linker stiffness $\kappa$ and natural
length $\ell$. Using the extensible semiflexible chain as a model
for the handles, we exploit an exact mapping between this model and
the propagator for the motion of a quantum particle on the surface of
a unit sphere~\cite{Kierfeld04} to calculate the handle Fourier-space
distribution to arbitrary numerical precision. Together with
analytical results for the bead and linker distributions, we can thus
directly solve for $\tilde{\cal P}_\text{bh}(k;F_0)$. To verify
that the analytical model for the point-spread function can
accurately describe handle/bead fluctuations over a range of forces,
we have analyzed data from control experiments on a system involving
only dsDNA handles attached to beads, where ${\cal P}_\text{tot} =
{\cal P}_\text{bh}$ (SI). The theory simultaneously fits results
for several experimental quantities measured on the same system: the
distributions $\tilde{\cal P}_\text{bh}$ derived from three
different trap separations, corresponding to mean forces $F_0 =
9.4-12.7$ pN, and a force-extension curve. The accuracy of the
model $\tilde{\cal P}_\text{bh}$ is $\approx 1-3\%$, within the
experimental error margins.
\tocless\subsection{Robustness of the theory validated by application to an
exactly soluble model} We first apply the theory to a problem for
which the intrinsic free energy profiles at arbitrary force are known
exactly. The generalized Rouse model (GRM) hairpin (see SI for
details) is a two-state folder whose full 3D equilibrium end-to-end
distributions are analytically solvable. A representative GRM
distribution $\tilde{\cal P}_\text{GRM}$ at $F_0 = 11.9$ pN is plotted
in Fig.~\ref{grmA}(a). Since $\tilde{\cal P}_\text{GRM}$ is
cylindrically symmetric, the top panel shows a projection onto the
$(\rho = \sqrt{x^2+y^2},z)$ plane, while the bottom panel shows the
further projection onto the $z$ coordinate. The two peaks correspond
to the native (N) state at small $z$, and the unfolded (U) state at
large $z$. In order to model the optical tweezer system, we add
handles and beads to the GRM hairpin, whose probabilities $\tilde{\cal
P}_\text{h}$ and $\tilde{\cal P}_\text{b}$ (including transverse
fluctuations) are illustrated in Fig.~\ref{grmA}(b) and (c). The
full 3D behavior is derived in an analogous manner to the theory
mentioned above for the 1D Fourier-space distribution $\tilde{\cal
P}_\text{bh}(k;F_0)$ of the beads/handles; the only difference is
that the transverse degrees of freedom are not integrated out. The
3D convolution of the system components, plus the optical trap
contribution, gives the total distribution ${\cal P}_\text{tot}$ in
Fig.~\ref{grmA}(d). The bead, handle, linker, and trap parameters are
listed in SI Table~S1. From ${\cal P}_\text{tot}$ one can calculate
the mean total $z$ extension and the mean tension, which in this case
are $\bar{z}_\text{tot} = 1199$ nm, $\bar{F} =
k_\text{trap}(z_\text{trap}-\bar{z}_\text{tot})/2 = 11.9$ pN.
\begin{figure*}[t]
\centerline{\includegraphics*[width=0.98\textwidth]{Fig2.pdf}}
\caption{Generalized Rouse model (GRM) hairpin in an optical tweezer
setup. The first row shows the exact end-to-end distributions along
$\hat{\mb{z}}$ for each component type in the system: a) GRM, b)
dsDNA handle, c) polystyrene bead. The handle, bead and trap
parameters are listed in Table~S1 (GRM column). Upper panels show
the probabilities projected onto cylindrical coordinates
$(\rho=\sqrt{x^2+y^2},z)$, while the lower ones show the projection
onto $z$ alone. (d) The result for the total system end-to-end
distribution, ${\cal P}_\text{tot}$, derived by convolving the
component probabilities and accounting for the optical traps. (e-g)
The construction of the original GRM distribution $\tilde{\cal
P}_\text{GRM}$ starting from ${\cal P}_\text{tot}$. (e) ${\cal
P}_\text{tot}$ (purple) and $\tilde{\cal P}_\text{tot}$ (blue) as
a function of $z$ on the bottom axis, measured relative to
$\bar{z}$, the average extension for each distribution. For ${\cal
P}_\text{tot}$, the upper axis shows the $z$ range translated into
the corresponding trap forces $F$. After removing the trap effects,
$\tilde{\cal P}_\text{tot}$ is the distribution for constant force
$F_0=11.9$ pN. (f) $\tilde{\cal P}_\text{bh}$, describing the total
probability at $F_0$ of fluctuations resulting from both handles and
the rotation of the beads. (g) The constructed solution for
$\tilde{\cal P}_\text{GRM}$ (solid line), obtained by numerically
inverting the convolution $\tilde{\cal P}_\text{tot} = \tilde{\cal
P}_\text{bh} \ast \tilde{\cal P}_\text{GRM}$. The exact
analytical result for $\tilde{\cal P}_\text{GRM}$ is shown as a
dashed line. $z_\text{N}$ is the position of the native state (N)
peak.}\label{grmA}
\end{figure*}
\begin{figure*}[t]
\centerline{\includegraphics*[width=0.98\textwidth]{Fig3.pdf}}
\caption{Effects of handle characteristics on the free energy
profile of the GRM in an LOT setup.
(a) The total system free energy ${\cal F}_\text{tot} = -k_BT \ln
{\cal P}_\text{tot}$ for fixed $L=100$ nm, and varying ratios
$l_p/L$. All the other parameters are in Table~S1 (GRM
column). The exact analytical free energy at $F_0=11.9$ pN (dashed line) for the
GRM alone, $\tilde{\cal F}_\text{GRM} = -k_B T \ln \tilde{\cal
P}_\text{GRM}$, is shown for comparison. (b)
For each ${\cal F}_\text{tot}$ in (a), the construction of
$\tilde{\cal F}_\text{GRM}$ at $F_0$, together with the exact answer
(dashed line). (c) For system parameters matching the experiment
(Table~S1), the variance of the point spread function
$\tilde{\cal P}_\text{bh}$ broken down into the individual handle,
bead, and linker contributions. The fraction for each component is
shown as a function of varying handle elastic modulus
$\gamma$.}\label{grmB}
\end{figure*}
The $\mb{\hat{z}}$ probability projection in the bottom panel of (d)
is the information accessible in an experiment, and the computation of
the intrinsic distribution in the bottom panel of (a) is the ultimate
goal of the construction procedure. Comparing (a) and (d), two
effects of the apparatus are visible: the GRM peaks have been
partially blurred into each other, and the transverse ($\rho$)
fluctuations have been enhanced. The handles provide the dominant
contribution to both these effects.
Figs.~\ref{grmA}(e) through (g) illustrate the construction procedure
for the GRM optical tweezer system. Panel (e) corresponds to Step 1,
with a transformation of the distribution ${\cal P}_\text{tot}$ (whose
varying force scale is shown along the top axis) into $\tilde{\cal
P}_\text{tot}$ at constant force $F_0 = 11.9$ pN. Step 2 uses the
exact $\tilde{\cal P}_\text{bh}$, shown in real-space in panel (f),
and produces the intrinsic distribution $\tilde{\cal P}_\text{GRM}$,
drawn as a solid line in (g). The agreement with the exact analytical
result (dashed line) is extremely close, with a median error of $3\%$
over the range shown. This deviation is due to the approximation
in Step 1, discussed above, as well as the numerical implementation
of the deconvolution procedure.
As shown in our previous study~\cite{Hyeon08}, the smaller the ratio
$l_p/L$ for the handles, the more the features of the protein energy
landscape get blurred by the handle fluctuations. Since the
experimentally measured total distribution always distorts to some
extent the intrinsic protein free energy profile due to the finite
duration and sampling of the system trajectory, more flexible handles
will exacerbate the signal-to-noise problem. To illustrate this
effect, we performed Brownian dynamics simulations of the GRM in the
optical tweezer setup, with handles modeled as extensible,
semiflexible bead-spring chains (see SI for details). In
Fig.~\ref{grmB}(a) we compare the free energy ${\cal
F}_\text{tot} = -k_BT \ln {\cal P}_\text{tot}$ for a fixed $L = 100$
nm and a varying $l_p/L$, derived from the simulation trajectories,
and the exact intrinsic GRM result $\tilde{\cal F}_\text{GRM} = -k_BT
\ln \tilde{\cal P}_\text{GRM}$ at $F_0$. When the handles are very
flexible, with $l_p/L= 0.02$, the energy barrier between the native
and unfolded states almost entirely disappears in ${\cal
F}_\text{tot}$, with the noise making the precise barrier shape
difficult to resolve. Remarkably, even with this extreme level of
distortion, using our theory we still recover a reasonable estimate of
the intrinsic landscape [Fig.~\ref{grmB}(b)]. For each ${\cal
F}_\text{tot}$ in Fig.~\ref{grmB}(a), panel Fig.~\ref{grmB}(b)
compares the result of the construction procedure and the exact answer
for $\tilde{\cal F}_\text{GRM}$. Clearly some information is lost as
$l_p/L$ becomes smaller, since the $l_p/L = 0.02$ system does not
yield as accurate a result as the ones with stiffer handles. However
in all cases the basic features of the exact $\tilde{\cal
F}_\text{GRM}$ are reproduced. Thus, the theoretical-based method
works remarkably well over a wide range of handle parameters. This
conclusion is generally valid even when other parameters are varied
(see Fig.~S3 in the SI for tests at various $F_0$ and
$k_\text{trap}$). The excellent agreement
between the constructed and intrinsic free energy profiles for the
exactly solvable GRM hairpin over a wide range of handle and trap
experimental variables establishes the robustness of the theory.
\begin{figure*}[t]
\centerline{\includegraphics*[width=0.98\textwidth]{Fig4.pdf}}
\caption{Intrinsic characteristics of the LZ26 leucine zipper at
constant $F_0$, derived from SOP simulations in the absence of
handles/beads. (a) LZ26 free energy $\tilde{\cal F}_\text{p}$ at
$F_0 = 12.3$ pN vs. end-to-end extension $z$. Representative
protein configurations from the four wells (N, I1, I2, U) are
shown on the right, with asparagine residues colored blue. (b) The
average fraction of native contacts between the two alpha-helical
strands of LZ26 (the ``zipper bonds'') as a function of $z$.
Listed to the left of the curve are the $a$ and $d$ residues in
the heptads making up the amino acid sequence for each LZ26
strand, placed according to their position along the zipper.
Asparagines (N) are highlighted in blue. (c) For the residues
listed in (b), the residue contact energies used in the SOP
simulation (rescaled BT~\cite{Betancourt99} values).}\label{LZ26}
\end{figure*}
\begin{figure*}
\centerline{\includegraphics*[width=0.98\textwidth]{Fig5.pdf}}
\caption{(a,b) A trajectory fragment and the probability
distribution $\tilde{\cal P}_\text{p}$ from SOP simulations of the
LZ26 leucine zipper at constant force $F_0 = 12.3$ pN in the
absence of handles/beads. (c,d) A trajectory fragment and the
total system distribution ${\cal P}_\text{tot}$ at $z_\text{trap}
= 503$ nm. Panel (c) shows both the total extension
$z_\text{tot}(t)$ (purple) and the protein extension
$z_\text{p}(t)$ (gray). Triangles mark times when the protein
makes a transition between states, and the arrows point to two
enlarged portions of the trajectories. In all cases the $z$-axis
origin is $z_\text{I1}$, the peak location of the I1 intermediate
state. (e-g) Leucine zipper free energy profiles extracted from
time series (third row = simulation, fourth row = experiment). The
first column shows the total system end-to-end distribution ${\cal
P}_\text{tot}$, and the corresponding $\tilde{\cal
P}_\text{tot}$ at constant force $F_0 = 12.3$ pN. In the
experimental case $F_0 = 12.3 \pm 0.9$ pN is the mid-point force
at which the I1 and U states are equally likely. For ${\cal
P}_\text{tot}$, $z_\text{trap} = 503$ nm (simulation), $1553\pm
1$ nm (experiment). Force scales at the top are the range of trap
forces for ${\cal P}_\text{tot}$. The second column shows the
computed intrinsic protein free energy profiles $\tilde{\cal
F}_\text{p}$, compared to the total system profile, ${\cal
F}_\text{tot}$ (shifted upwards for clarity). (f) SOP
simulations for the protein alone at constant $F_0$ provide a
reference landscape, drawn as a dashed line. (h) The dashed curve
is the reconstructed $\tilde{\cal F}_\text{p}$ at the mid-point
force $F_0 = 12.1 \pm 0.9$ pN, from a second, independent
experimental trajectory, with $z_\text{trap} = 1547 \pm 1$ nm.
The $\tilde{\cal F}_\text{p}$ curves have a median uncertainty of
$0.4$ $k_BT$ over the plotted range (see SI for error analysis).
}\label{deconv}
\end{figure*}
\vspace{1em}
\tocless\subsection{Intrinsic folding landscape of a simulated leucine
zipper} To demonstrate that the theory can be used to produce
equilibrium intrinsic free energy profiles with multiple states from
mechanical folding trajectories, we performed simulations of a protein
in an optical tweezer setup. The simulations were designed to mirror
the single-molecule experiment reported in Ref.~\cite{Gebhardt10}, and
to this end we studied a coiled-coil, LZ26~\cite{Bornschlogl06}, based
on three repeats of the leucine zipper domain from the yeast
transcriptional factor GCN4~\cite{OShea91} (see {\it Methods}). The
simple linear unzipping of the two strands of LZ26 allows us to map
the end-to-end extension to the protein configuration. Furthermore,
the energy heterogeneity of the native bonds that form the ``teeth''
of the zipper leads to a non-trivial folding landscape with at least
two intermediate states~\cite{Bornschlogl06,Bornschlogl08,Gebhardt10}.
The more complex landscape of LZ26 thus provides an additional
stringent test of the proposed theory.
The native (N) structure of LZ26 is illustrated on the right in
Fig.~\ref{LZ26} (from a simulation snapshot), with the two
alpha-helical strands running from N-terminus at the bottom to
C-terminus at the top. In the experiment a handle is attached to the
N-terminus of each strand, and this is where the strands begin to
unzip under applied force. To prevent complete strand separation, the
C-termini are cross-linked through a disulfide bridge between two
cysteine residues. Each alpha-helix coil consists of a series of
seven-residue heptad repeats, with positions labeled $a$ through $g$. For
the leucine zipper the $a$ and $d$ positions are the ``teeth'', consisting
of mostly hydrophobic residues (valine and leucine) which have strong
non-covalent interactions with their counterparts on the other strand.
The exceptions to the hydrophobic pattern are the three hydrophilic
asparagine residues in $a$ positions on each strand (marked in blue in
the structure snapshots on the right of Fig.~\ref{LZ26}). As has been
seen experimentally~\cite{Bornschlogl06,Gebhardt10} (and shown below
through simulations), the weaker interaction of these asparagine pairs
is crucial in determining the properties of the intermediate folding
states, a point we will return to in more detail in the Discussion.
In analyzing the LZ26 leucine zipper system, we performed
coarse-grained simulations using the Self-Organized Polymer (SOP)
model~\cite{Hyeon06} (full details in the SI, with selected parameters
summarized in Table~S1). The intrinsic free energy profile
$\tilde{\cal F}_\text{p} = -k_B T \ln \tilde{\cal P}_\text{p}$ at $F_0
= 12.3$ pN is shown in Fig.~\ref{LZ26}(a). The four prominent wells
in $\tilde{\cal F}_\text{p}$ as a function of $z_\text{p}$ correspond
to four stages in the progressive unzipping of LZ26. At $F_0=12.3$ pN
all the states are populated, and the system fluctuates in equilibrium
between the wells. The transition barrier between N and I1 exhibits a
shallow dip that may correspond to an additional, very transiently
populated intermediate. Since this dip is much smaller than $k_B T$,
we do not count it as a distinct state.
Like in the GRM example, adding the optical tweezer apparatus to the
SOP simulation significantly distorts the measured probability
distributions. In the first row of Fig.~\ref{deconv} sample
simulation trajectory fragments are shown both for the protein-only
case [Fig.~\ref{deconv}(a)] at constant force $F_0 = 12.3$ pN, and
within the full optical tweezer system [Fig.~\ref{deconv}(c)] with
$z_\text{trap} = 503$ nm. For the
latter case we plot both $z_\text{tot}(t)$ (purple) and
$z_\text{p}(t)$ (gray), allowing us to see how the bead separation
tracks changes in the protein extension. The probability
distributions $\tilde{\cal P}_\text{p}$ and ${\cal P}_\text{tot}$ are
plotted in Fig.~\ref{deconv}(b) and (d) respectively. In
Fig.~\ref{deconv}(e), the distribution ${\cal P}_\text{tot}$ within
the optical tweezer system is plotted for $z_\text{trap} = 503$ nm.
Though we only illustrate this particular $z_\text{trap}$ value,
$\approx 260$ trajectories are generated at different $z_\text{trap}$
and combined together using WHAM~\cite{Ferrenberg89} (see SI) to
produce a single $\tilde{\cal P}_\text{tot}$ at a constant force $F_0
= 12.3$ pN [Fig.~\ref{deconv}(e)]. We can then use our theoretical
method to recover the protein free energy $\tilde{\cal F}_\text{p}$
[Fig.~\ref{deconv}(f)]. Despite numerical errors due to limited
statistical sampling (both in the protein-only and total system runs),
there is remarkable agreement between the constructed result and
$\tilde{\cal F}_\text{p}$ derived from protein-only simulations. This
is particularly striking given that the total system free energy
${\cal F}_\text{tot}(z_\text{tot})=-k_B T \ln {\cal
P}_\text{tot}(z_\text{tot})$, plotted for comparison in panel (f),
shows how severely the handles/beads blur the energy landscape,
reducing the energy barriers to a degree that the N state is difficult
to resolve. The signature of N in ${\cal F}_\text{tot}(z_\text{tot})$
is a slight change in the curvature at higher energies on the left of
the I1 well. However despite this, we still recover a basin of
attraction representing the N state in the constructed $\tilde{\cal
F}_\text{p}$. Overall, the results in (f) show that our theory can
accurately produce the intrinsic free energy profiles using only the
simulated folding trajectories as input, thus proving a
self-consistency check of the method for a system with multiple
intermediates.
\tocless\subsection{Folding landscape of the leucine zipper from experimental
trajectories} As a final test of the
efficacy of the theory we used the experimental time series
data~\cite{Gebhardt10} to obtain $\tilde{\cal F}_\text{p}$. The data
consists of two independent runs with the LZ26 leucine zipper, using
the same handle/bead parameters for each run (see Table~S1), but at
different trap separations $z_\text{trap}$. We project the
deconvolved landscape from each trajectory onto the mid-point force
$F_0$ where the two most populated states (I1 and U) have equal
probabilities in $\tilde{\cal P}_\text{p}$. The values of $F_0$
derived from the two runs are the same within error bounds: $12.3 \pm
0.9$ and $12.1 \pm 0.9$ pN. The detailed deconvolution steps are
shown for one run in the last row of Fig.~\ref{deconv}, and the final
result, the intrinsic free energy profile $\tilde{\cal F}_\text{p}$,
is shown for both runs in Fig.~\ref{deconv}(h) (solid and dotted blue
curves respectively). Accounting for error due to finite trajectory
length and uncertainties in the apparatus parameters, the median total
uncertainty in each of the reconstructed landscapes is about $0.4$
$k_BT$ in the $z$ range shown (see SI for full error analysis). The
landscapes from the two independent runs have a median difference of
$0.3$ $k_B T$, and hence the method gives consistent results between
runs, up to a small experimental uncertainty, an important test of its
practical utility. The reproducibility of $\tilde{\cal F}_\text{p}$
is a testament to the stability of the dual optical tweezer setup,
allowing us to sample extensively from the energy landscape: each
trajectory lasted for more than 100 s, and thus sampled $\sim {\cal
O}(10^2-10^5)$ transitions of the various types between protein
states (the slowest transition, $\text{U}\to\text{I2}$, occurred on
time scales of $0.4 - 0.6$ s).
Comparison between the experimental $\tilde{\cal F}_\text{p}$ in panel
(h) and the simulation result in (f) reveals a notable difference: the
landscape constructed using the experimental data does not have four
identifiable basins. The N state may not be discernible in the
experiment because of the limited resolution of the apparatus
(see below). The spacing between the I1 and I2 wells is similar
in the simulation and experiment ($\approx 9-13$ nm), but that
between I2 and U is $\approx 13$ nm in the simulation versus 25 nm
in the experiment. This is likely due to a larger helix content in
the unfolded state for the simulation case.
\vspace{2em}
\tocless\section{Discussion}
\tocless\subsection{Origins of the variance in the point spread function} Our
theory for the point spread function $\tilde{\cal P}_\text{bh}$ can be
used to understand the interplay of physical effects that relate the
intrinsic protein distribution to the total system. To quantify the
various contributions to $\tilde{\cal P}_\text{bh}$, we calculated its
variance. Since variances of probability distributions combine
additively upon convolution, we break down the variance of
$\tilde{\cal P}_\text{bh}$ into the individual bead, handle, and
linker contributions. Fig.~\ref{grmB}(c) shows the
fraction of the variance associated with each component as a function
of the handle elastic stretching modulus $\gamma$ at $F_0=12.3$ pN,
with $R_b = 500$ nm, $L = 188$ nm, $l_p = 20$ nm (the approximate
experimental parameters from Ref.~\cite{Gebhardt10}). For any given
value of $\gamma$, the heights of the four colored slices represent
the fractional contributions of the four components. Though not directly measured in
Ref.~\cite{Gebhardt10}, we have assumed $\kappa = 200$
kcal/mol$\cdot$nm$^2$, $\ell = 1.5$ nm for the linkers. The handle
contribution is itself broken down into the ``elastic''
part, defined as the extra variance due to the finite stretching
modulus $\gamma$, compared to an inextensible ($\gamma \to \infty$)
worm-like chain (WLC), and the remainder, which we call the WLC part.
For the case of Ref.~\cite{Gebhardt10}, $\gamma = 400$ pN. Since the
length extension relative to the WLC result is $\approx F_0/\gamma$,
we expect finite handle extensibility to play a small role.
However, the elastic contribution to the total $\tilde{\cal
P}_\text{bh}$ variance at this $\gamma$ is 43\%, comparable to the
WLC contribution of 48\%. Hence, in predicting
$\tilde{\cal P}_\text{bh}$ correctly it is important to account for both the
bending rigidity and elasticity of the handles, which are exactly
modeled in our approach.
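The additivity used above is elementary to check numerically; a
minimal demonstration (ours) that convolving independent components
adds their variances:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
# For independent components, the variance of the sum equals
# the sum of the variances, which is what lets us apportion
# Var[P_bh] among beads, handles, and linkers.
a = rng.normal(0.0, 2.0, 100000)   # e.g. handle fluctuations
b = rng.gamma(2.0, 1.5, 100000)    # e.g. a non-Gaussian part
print(np.var(a) + np.var(b), np.var(a + b))  # nearly equal
\end{verbatim}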
\vspace{1em}
\tocless\subsection{Nature and location of the intermediate states in the
leucine zipper energy landscape} The folding landscape of LZ26 is
apparently closely related to the pattern of residue-residue contact
energies between the two strands of the
zipper~\cite{Bornschlogl06,Bornschlogl08,Gebhardt10}. SOP
simulations give us a detailed picture of this relationship.
The average fraction of intact inter-strand (``zipper'') bonds
vs. extension $z$ in Fig.~\ref{LZ26}(b) is a
monotonic curve, starting with the fully closed structure on top (N
state, bond fraction near 1) to the fully open structure at the bottom
(U state, bond fraction near 0). Listed along this curve are the
individual residues at the $a$ and $d$ positions of the heptads in the
sequence. Several features stand out: the transition barriers between
the states show a steeper rate of zipper bond unraveling compared to
the well regions. The change of slope from steep to more gradual
descent occurs near the location of the asparagine residues in the
sequence, and the well minima of I1, I2, and U occur one or two
residues after the asparagines. The correlation between well minima
locations and asparagines agrees with the experimental
landscape~\cite{Gebhardt10}, underscoring the importance of the weak,
hydrophilic asparagine bonds that interrupt the hydrophobic
valine/leucine pattern at the a/d positions. The sequence of rescaled
BT~\cite{Betancourt99} energies used for the a/d native contacts is
plotted in Fig.~\ref{LZ26}(c). The a/d bonds are all $>2.8$ $k_BT$,
except for the asparagines, which are less stable at $1.7$ $k_BT$.
\vspace{1em}
\tocless\subsection{Instrumental noise filtering, and the limits
of the theoretical approach} The difference in the number of wells
in the simulation and experimental free energy landscapes of the
leucine zipper is related to finite time and spatial
resolution. The measured time series is subject to noise
(environmental vibrations of the optical elements, detector shot
noise), as well as low-pass filtering due to ``parasitic'' effects
in the photodiodes and the nature of the electronic amplification
circuits~\cite{vonHansen12}. The standard experimental protocol
often involves additional low-pass filtering as a way of removing
noise and smoothing trajectories: for the leucine zipper every five
data points (originally recorded at 10 $\mu$s intervals) are
averaged together during collection to give a time step of 50
$\mu$s~\cite{Gebhardt10}; in other cases similar effects are
achieved using Bessel filters~\cite{Yu12}. Noise broadens
the measured distribution of bead separations, while low-pass
filtering narrows it. We developed the FBS technique ({\em Methods}
and SI), based on the details of the specific apparatus used in the
experiment, to estimate and correct for the distortions. For our
system, the FBS theory provides an excellent description, as we have
verified in tests using both numerical simulations and experimental
data (with and without the additional filtering).
However the FBS theory can only apply corrections to peaks
(i.e. distinct protein states) that we observe in the measured
probability distributions. There is the possibility of protein
states leaving no discernible signature in the recorded
distribution. The N state in the leucine
zipper is only connected to the I1 state in the folding
pathway. In the simulations, where the N state is directly
observed, it has short mean lifetimes ($\lesssim 6$ $\mu$s in the
studied force range), and the $\text{N}\leftrightarrow\text{I1}$
change involves the shortest mean extension difference ($\approx 8$
nm) among all the transitions. If the N state in the actual protein
has similar properties, it could be impossible to resolve it in the
experimental data for two different reasons: (i) Regardless of any
additional filtering, the intrinsic low-pass characteristics of the
apparatus filter out states with very short lifetimes.
For our LOT setup, the
effective low-pass filter time-scale for the detectors/electronics
is $\tau_f \approx 7$ $\mu$s (SI), which is at the cutting
edge of current technology. Thus, states with
lifetimes $\lesssim \tau_f$ will not appear as distinct peaks in the
measured distribution. (ii) Independent of the filtering issues in
detection/recording, environmental background noise in the time
series also poses a problem, particularly since we measure bead
displacements, and these have signal amplitudes at high frequencies
that are generally attenuated compared to the intrinsic amplitudes
of the protein conformational changes. The reason for this is that
the beads have much larger hydrodynamic drag than dsDNA handles or
proteins, and their characteristic relaxation times $\tau_r$ in the
optical traps may be comparable to or larger than the lifetime of a
particular protein state. The bead cannot fully respond to force
changes on time scales shorter than its relaxation
time~\cite{Manosas07BJ}. For example, $\tau_r \approx 20$ $\mu$s in
the leucine zipper experiment. If the lifetime of the N state
at a particular force is much smaller than $\tau_r$, protein
transitions from I1$\to$N$\to$I1 will generally occur before the
bead can relax into the N state equilibrium position. If the bead
displacements associated with these transitions are smaller than the
noise amplitude in the time series, the entire excursion to the N
state will be lost to the noise.
We can illustrate the finite response time of the bead using
simulations, where resolution is not limited by
noise or apparatus filtering, by examining the
relationship between $z_\text{tot}(t)$ and $z_\text{p}(t)$, compared
in two different trajectory fragments in Fig.~\ref{deconv}(c).
Triangles in the figure indicate times where the protein makes a
transition between states. Changes in protein extension during these
transitions are very rapid, and the bead generally mirrors these
changes with a small time lag, as seen in the enlarged
trajectory interval at $t=36-42$ $\mu$s. When the protein makes
sharp, extremely brief excursions (like a visit to the N state from I1
in the enlarged interval $t=90-96$ $\mu$s), the corresponding changes
in bead separation are smaller and much less well-defined. In the
presence of noise, such tiny changes would be obscured.
Thus, we surmise that the N state is not observable due to some
combination of apparatus filtering, noise, and finite bead response
time. Hence, the theory applied to the experimental data produces a
landscape with only I1, I2, and U wells, as opposed to the four wells
produced from the simulation data. Our labeling of the basins in the
landscape agrees with the earlier state
identification~\cite{Gebhardt10}, and provides an explanation for why
the N state was not resolved.
\vspace{2em}
\tocless\section{Conclusions}
Extraction of the energy landscape of biomolecules using LOT data is
complicated because accurate analysis depends on correcting the
measured result for distortions introduced by the system components. We have
solved this problem completely by developing a theoretically-based
construction method that accounts for these factors. Through an array
of tests involving an analytically solvable hairpin model,
coarse-grained protein simulations, and experimental data, we have
demonstrated the robustness of the technique in a range of realistic
scenarios. The method works for arbitrarily complicated landscapes,
as demonstrated by the analysis of the leucine zipper experimental
data, producing consistent results when the same protein is studied
under different force scales.
\vspace{2em}
\tocless\section{Materials and Methods}
\tocless\subsection{Finite Bandwidth Scaling (FBS)} Probability
distributions derived from experimental time series of bead-bead
separations are corrupted by
noise, low-pass filtering due to the apparatus, and in some cases
additional filtering due to the data processing protocol. We
developed FBS theory to model and correct for these effects (see
SI for details), using information encoded in time series
autocorrelations, together with earlier spectral characterization
of the dual trap optical tweezer detector and electronic
systems~\cite{vonHansen12}. All the experimental distributions
${\cal P}_\text{tot}$ in the main text were first processed by
FBS.
\vspace{1em}
\tocless\subsection{Leucine zipper} We use a variant of the coarse-grained
self-organized polymer (SOP) model~\cite{Hyeon06,Mickler07PNAS},
where each of the 176 residues in LZ26 is represented by a bead
centered at the $C_\alpha$ position (see SI for details.) The
$\alpha$-helical secondary structure is stabilized by interactions
which mimic $(i,i+4)$ hydrogen bonding~\cite{Denesyuk11}. We use
residue-dependent energies for tertiary
interactions~\cite{Betancourt99}.
\vspace{1em}
\tocless\subsection{Simulations} We simulate (see SI for details)
trajectories for both the protein alone and the full optical tweezer
setup using an overdamped Brownian dynamics (BD)
algorithm~\cite{Ermak78}. The handles used in the LOT setup
[Fig.~\ref{sys}] are modeled as semiflexible chains.
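For orientation, the elementary update underlying an overdamped BD integrator of this type is the Euler-Maruyama step sketched below. This is a generic illustration with placeholder parameters (a single bead in a harmonic trap) rather than the SOP force field and handle geometry used in our production simulations.
\begin{verbatim}
import numpy as np

def bd_step(x, force, gamma, kT, dt, rng):
    """One overdamped Brownian dynamics (Euler-Maruyama) update:
    x <- x + F(x) dt/gamma + sqrt(2 kT dt/gamma) * N(0, 1)."""
    return (x + force(x) * dt / gamma
            + np.sqrt(2.0 * kT * dt / gamma) * rng.normal(size=x.shape))

# Demo: a single bead in a harmonic trap (placeholder parameters).
rng = np.random.default_rng(1)
k_trap, gamma, kT, dt = 1.0, 1.0, 4.1, 1e-2  # pN/nm, pN us/nm, pN nm, us
force = lambda y: -k_trap * y
x = np.zeros(3)
samples = np.empty(200_000)
for i in range(samples.size):
    x = bd_step(x, force, gamma, kT, dt, rng)
    samples[i] = x[0]
# Equipartition check: var(x) should approach kT/k_trap = 4.1 nm^2.
print("measured:", samples.var(), " expected:", kT / k_trap)
\end{verbatim}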
\vspace{1em}
\addtocontents{toc}{\gobblefour}\begin{acknowledgments}
M.H. was a Ruth L. Kirschstein National Research Service postdoctoral
fellow, supported by a grant from the National Institute of General
Medical Sciences (1 F32 GM 97756-1). D.T. was supported by a grant
from the National Institutes of Health (GM 089685). Part of the work
was done while D.T. was in TUM as a senior Humboldt Fellow.
\end{acknowledgments}
\SkipTocEntry
\section{Introduction}
In this article, we undertake the study of fundamental matrices associated to systems of generalized Schr\"odinger operators; we
establish the existence of such fundamental matrices and we prove sharp upper and lower exponential decay estimates for them.
Our work is strongly motivated by the papers \cite{She99} and \cite{MP19} in which similar exponential decay estimates were established for fundamental solutions associated to Schr\"odinger operators of the form
$$- \Delta + \mu,$$
and generalized magnetic Schr\"odinger operators of the form
$$- \pr{\nabla - i {\bf a}}^T A \pr{\nabla - i {\bf a}} + V,$$
respectively.
In \cite{She99}, $\mu$ is assumed to be a nonnegative Radon measure, whereas in \cite{MP19}, $A$ is bounded and uniformly elliptic, while ${\bf a}$ and $V$ satisfy a number of reverse H\"older conditions.
Here we consider systems of generalized electric Schr\"odinger operators of the form
\begin{equation}
\label{formalEPDE}
\mathcal{L}_V = -D_\alpha\pr{A^{\alpha \beta} D_\beta } + V,
\end{equation}
where $A^{\alpha \beta} = A^{\alpha \beta}\pr{x}$, for each $\alpha, \beta \in \set{ 1, \dots, n}$, is a $d \times d$ matrix with bounded measurable coefficients defined on $\ensuremath{\mathbb{R}}^n$ that satisfies boundedness and ellipticity conditions as described by \eqref{Abd} and \eqref{ellip}, respectively.
Moreover, the zeroth order potential function $V$ is assumed to be a matrix ${\MC{B}_p}$ function.
We say that $V$ is in the matrix ${\MC{B}_p}$ class if and only if $\innp{V \V{e}, \V{e}} := \V{e}^T V \V{e}$ is a scalar ${\text{B}_p}$ function, uniformly in $\V{e} \in \ensuremath{\mathbb{R}}^d$.
As such, the operators that we consider in this article fall in between the generality of those that appear in \cite{She99} and \cite{MP19}, but are far more general in the sense that they apply to elliptic \textit{systems} of equations.
Many of the ideas in Shen's prior work \cite{She94, She95, She96} have contributed to this article.
In particular, we have built on some of the framework used to prove power decay estimates for fundamental solutions to Schr\"odinger operators $-\Delta + V$, where $V$ belongs to the scalar reverse H\"older class ${\text{B}_p}$, for $p = \infty$ in \cite{She94} and $p \ge \frac n 2$ in \cite{She95}, along with the exponential decay estimates for eigenfunctions of more general magnetic operators as in \cite{She96}.
As in both \cite{She99} and \cite{MP19}, Fefferman-Phong inequalities (see \cite{Fef83}, for example) serve as one of the main tools used to establish both the upper and lower exponential bounds that are presented in this article.
However, since the Fefferman-Phong inequalities that we found in the literature only apply to scalar weights, we state and prove new matrix-weighted Fefferman-Phong inequalities (see Lemma \ref{FPml} and Corollary \ref{FPmlCor}) that are suited to our problem.
To establish our new Fefferman-Phong inequalities, we build upon the notion of an auxiliary function associated to a scalar ${\text{B}_p}$ function that was introduced by Shen in \cite{She94}.
More specifically, given a matrix function $V \in {\MC{B}_p}$, we introduce a pair of auxiliary functions: the upper and lower auxiliary functions.
(Section \ref{MaxFun} contains precise definitions of these functions and examines their properties.)
Roughly speaking, we can, in some settings, interpret these quantities as the auxiliary functions associated to the largest and smallest eigenvalues of $V$.
The upper and lower auxiliary functions are used to produce two versions of the Fefferman-Phong inequalities.
Using these auxiliary functions, we also define upper and lower Agmon distances (also defined in Section \ref{MaxFun}), which then appear in our lower and upper exponential bounds for the fundamental matrix, respectively.
We remark that the original Agmon distance appeared in \cite{Agm82}, where exponential upper bounds for $N$-body Schr\"odinger operators first appeared.
Given the elliptic operator $\mathcal{L}_V$ as in \eqref{formalEPDE} that satisfies a suitable set of conditions, there exists a fundamental matrix function associated to $\mathcal{L}_V$, which we denote by $\Gamma^V$.
The fundamental matrix generalizes the notion of a fundamental solution to the systems setting; see for example \cite{HK07}, where the authors generalized the results of \cite{GW82} to the systems setting.
To make precise the notion of the fundamental matrix for our systems setting, we rely upon the constructions presented in \cite{DHM18}.
In particular, we define our bilinear form associated to \eqref{formalEPDE}, and introduce a well-tailored Hilbert space that is used to establish the existence of weak solutions to PDEs of the form $\mathcal{L}_V \V{u} = \V{f}$.
We then assume that our operator $\mathcal{L}_V$ satisfies a natural collection of de Giorgi-Nash-Moser estimates.
This allows us to confirm that the framework from \cite{DHM18} holds for our setting, thereby verifying the existence of the fundamental matrix $\Gamma^V$.
Section \ref{FundMat} contains these details.
In Section \ref{ellipExamples}, assuming very mild conditions on $V$, we verify that the class of systems of ``weakly coupled" elliptic operators of the form
\begin{equation}
- \di \pr{A \nabla} + V \label{WC}
\end{equation}
satisfy the de Giorgi-Nash-Moser estimates that are mentioned in the previous paragraph (see the remark at the end of Section \ref{ellipExamples} for details).
Consequently, this implies that the fundamental matrices associated to weakly coupled elliptic systems exist and satisfy the required estimates.
In fact, this additionally shows that Green's functions associated to these elliptic systems exist and satisfy weaker interior estimates, though we will not need this fact.
Further, we establish local H\"{o}lder continuity of bounded weak solutions $\V{u}$ to
\begin{equation}
- \di \pr{A \nabla \V{u}} + V \V{u} = 0 \label{WCEq}
\end{equation}
under even weaker conditions on $V$.
Specifically, $V$ doesn't have to be positive semidefinite a.e. or even symmetric, see Proposition \ref{HolderContThm} and Remark \ref{HolderRem}.
Finally, although we will not pursue this line of thought in this paper, note that the combination of Proposition \ref{HolderContThm} and Remark \ref{HolderRem} likely leads to new Schauder estimates for bounded weak solutions $\V{u}$ to \eqref{WCEq}.
We remark that this section on elliptic theory for weakly coupled Schr\"odinger systems could be of independent interest beyond the theory of fundamental matrices.
Assuming the set-up outlined above, we now describe the decay results for the fundamental matrices.
We show that there exists a small constant $\varepsilon > 0$ so that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation}
\label{boundsSummary}
\frac{e^{-\varepsilon \overline{d}(x, y, V)}}{|x-y|^{n-2}} \lesssim \abs{\innp{\Gamma^V (x, y) \V{e}, \V{e}}} \lesssim \frac{ e^{-\varepsilon \underline{d}(x, y, V)}}{|x-y|^{n-2}},
\end{equation}
where $\overline{d}$ and $\underline{d}$ denote the upper and lower Agmon distances associated to the potential function $V \in {\MC{B}_p}$ (as defined in Section \ref{MaxFun}).
That is, we establish an exponential upper bound for the norm of the fundamental matrix in terms of the lower Agmon distance function, while the fundamental matrix is always exponentially bounded from below in terms of the upper Agmon distance function.
The precise statements of these bounds are described by Theorems \ref{UppBoundThm} and \ref{LowerBoundThm}.
For the upper bound, we assume that $V \in {\MC{B}_p}$ along with a noncommutativity condition $\MC{NC}$ that will be made precise in Subsection \ref{NCCondition}.
On the other hand, the lower bound requires the scalar condition $\abs{V} \in {\text{B}_p}$ and that the operator $\mathcal{L}_V$ satisfies some additional properties -- notably a scale-invariant Harnack-type condition.
In fact, the term $\overline{d}(x, y, V)$ in the lower bound of \eqref{boundsSummary} can be replaced by $d(x, y, \abs{V})$, see Remark \ref{differentDistance}.
Interestingly, \eqref{boundsSummary} can be used to provide a beautiful connection between our upper and lower auxiliary functions and Landscape functions that are similar to those defined in \cite{FM12}.
Note that this connection was previously found in \cite{Po21} for scalar elliptic operators with nonnegative potentials.
We will briefly discuss these ideas at the end of Section \ref{LowBds}, see Remark \ref{LandscapeRem}.
To further understand the structure of the bounds stated in \eqref{boundsSummary}, we consider a simple example.
For some scalar functions $0 < v_1 \le v_2 \in {\text{B}_p}$, define the matrix function
$$V = \begin{bmatrix} v_1 & 0 \\ 0 & v_2 \end{bmatrix}.$$
A straightforward check shows that $V \in {\MC{B}_p}$ and satisfies a nondegeneracy condition that will be introduced below.
Moreover, the upper and lower Agmon distances satisfy $\underline{d}\pr{\cdot, \cdot, V} = d\pr{\cdot, \cdot, v_1}$ and $\overline{d}\pr{\cdot, \cdot, V} = d\pr{\cdot, \cdot, v_2}$, where $d\pr{x, y, v}$ denotes the standard Agmon distance from $x$ to $y$ that is associated to a scalar function $v \in {\text{B}_p}$.
We then set
$$\mathcal{L}_V = - \Delta + V.$$
Since $\V{u}$ satisfies $\mathcal{L}_V \V{u} = \V{f}$ if and only if $u_i$ satisfies $- \Delta u_i + v_i u_i = f_i$ for $i = 1, 2$, then $\mathcal{L}_V$ satisfies the set of elliptic assumptions required for our operator.
Moreover, the fundamental matrix for $\mathcal{L}_V$ has a diagonal form given by
$$\Gamma^V = \begin{bmatrix} \Gamma^{v_1} & 0 \\ 0 & \Gamma^{v_2} \end{bmatrix},$$
where each $\Gamma^{v_i}$ is the fundamental solution for $-\Delta + v_i$.
The results of \cite{She99} and \cite{MP19} show that for $i = 1, 2$, there exists $\varepsilon_i > 0$ so that
$$\frac{e^{-\varepsilon_i d(x, y, v_i)}}{|x-y|^{n-2}} \lesssim \Gamma^{v_i}(x, y) \lesssim \frac{ e^{-\varepsilon_i d(x, y, v_i)}}{|x-y|^{n-2}}.$$
Restated, for $i = 1, 2$, we have
\begin{align*}
\frac{e^{-\varepsilon_i d(x, y, v_i)}}{|x-y|^{n-2}} \lesssim \innp{\Gamma^V\pr{x, y} \V{e}_i, \V{e}_i} \lesssim \frac{ e^{-\varepsilon_i d(x, y, v_i)}}{|x-y|^{n-2}},
\end{align*}
where $\set{\V{e}_1, \V{e}_2}$ denotes the standard basis for $\ensuremath{\mathbb{R}}^2$.
Since $v_1 \le v_2$ implies that $\underline{d}\pr{x, y, V} = d\pr{x, y, v_1} \le d\pr{x, y, v_2} = \overline{d}\pr{x, y, V}$, then we see that there exists $\varepsilon > 0$ so that for any $\V{e} \in \mathbb{S}^1$,
\begin{align*}
\frac{e^{-\varepsilon \overline{d}(x, y, V)}}{|x-y|^{n-2}} \lesssim \innp{\Gamma^V\pr{x, y} \V{e}, \V{e}} \lesssim \frac{ e^{-\varepsilon \underline{d}(x, y, V)}}{|x-y|^{n-2}}.
\end{align*}
Compared to estimate \eqref{boundsSummary} that holds for our general operators, this example shows that our results are sharp up to constants.
In particular, the best exponential upper bound we can hope for will involve the lower Agmon distance function, while the best exponential lower bound will involve the upper Agmon distance function.
As stated above, the Fefferman-Phong inequalities are crucial to proving the exponential upper and lower bounds of this article.
The classical Poincar\'e inequality is one of the main tools used to prove the original Fefferman-Phong inequalities.
Since we are working in a matrix setting, we use a new matrix-weighted Poincar\'e inequality.
Interestingly, a fairly straightforward argument based on the scalar Poincar\'e inequality from \cite{She99} can be used to prove this matrix version of the Poincar\'e inequality, which is precisely what is needed to prove the main results described above.
Although the main theorems in this article may be interpreted as vector versions of the results in \cite{She99} and \cite{MP19}, many new ideas (that go well beyond the technicalities of working with systems) were required and developed to establish our results.
We now describe these technical innovations.
First, the theory of matrix weights was not suitably developed for our needs.
For example, we had to appropriately define the matrix reverse H\"older classes, ${\MC{B}_p}$.
And while the scalar versions of ${\text{B}_p}$ and ${\text{A}_\iny}$ have a well-understood and very useful correspondence (namely, a scalar weight $v \in \text{B}_p$ iff $v^p \in \text{A}_\infty$), this relationship was not known in the matrix setting.
In order to arrive at a setting in which we could establish interesting results, we explored the connections between the matrix classes ${\MC{B}_p}$ that we develop, as well as ${\MC{A}_\iny}$ and ${\MC{A}_{p,\iny}}$ that were introduced in \cite{Dall15} and \cite{NT96}, \cite{Vol97}, respectively.
The matrix classes are introduced in Section \ref{MWeights}, and many additional relationships (including a matrix version of the previously mentioned correspondence between A${}_\infty$ and B${}_p$) are explored in Appendices \ref{Examples} and \ref{AiApp}.
Given that we are working in a matrix setting, there was no reason to expect to work with a single auxiliary function.
Instead, we anticipated that our auxiliary functions would either be matrix-valued, or that we would have multiple scalar-valued ``directional" Agmon functions.
We first tried to work with a matrix-valued auxiliary function based on the spectral decomposition of the matrix weight.
However, since this set-up assumed that all eigenvalues belong to ${\text{B}_p}$, and it is unclear when that assumption holds, we decided that this approach was overly restrictive.
As such, we decided to work with a pair of scalar-valued auxiliary functions that capture the upper and lower behaviors of the matrix weight.
Once these functions were defined and understood, we could associate Agmon distances to them in the usual manner.
These notions are introduced in Section \ref{MaxFun}.
Another virtue of this article is the verification of elliptic theory for a class of elliptic \textit{systems} of the form \eqref{WC}.
By following the ideas of Caffarelli from \cite{Caf82}, we prove that under standard assumptions on the potential matrix $V$, the solutions to these systems are bounded and H\"older continuous.
That is, instead of simply assuming that our operators are chosen to satisfy the de Giorgi-Nash-Moser estimates, we prove in Section \ref{ellipExamples} that these results hold for this class of examples.
In particular, we can then fully justify the existence of their corresponding fundamental matrices.
To the best of our knowledge, these ideas from \cite{Caf82} have not been used in the linear setting.
A final challenge that we overcame has to do with the fact there are two distinct and unrelated Agmon distance functions associated to the matrix weight $V$.
In particular, because these distance functions aren't related, we had to modify the scalar approach to proving exponential upper and lower bounds for the fundamental matrix associated to the operator $\mathcal{L}_V := \mathcal{L}_0 + V$.
The first bound that we prove for the fundamental matrix is an exponential upper bound in terms of the lower Agmon distance.
In the scalar setting, this upper bound is then used to prove the exponential lower bound.
But for us, the best exponential lower bound that we can expect is in terms of the \textit{upper} Agmon distance.
If we follow the scalar proof, we are led to a standstill since the upper and lower Agmon distances of $V$ aren't related.
We overcame this issue by introducing $\mathcal{L}_\La := \mathcal{L}_0 + \abs{V} I_d$, an elliptic operator whose upper and lower Agmon distances agree and are equal to the upper Agmon distance associated to $\mathcal{L}_V$.
In particular, the upper bound for the fundamental matrix of $\mathcal{L}_\La$ depends on the \textit{upper} Agmon distance of $V$.
This observation, along with a clever trick, allows us to prove the required exponential lower bound.
These ideas are described in Section \ref{LowBds}, using results from the end of Section \ref{UpBds}.
The motivating reasons for studying \textit{systems} of elliptic equations are threefold, as we now describe.
First, real-valued systems may be used to describe complex-valued equations and systems.
To illuminate this point, we consider a simple example.
Let $\Omega \subset \ensuremath{\mathbb{R}}^n$ be an open set and consider the complex-valued Schr\"odinger operator given by
$$L_x = - \di \pr{c \nabla } + x,$$
where $c = \pr{c^{\alpha \beta}}_{\alpha, \beta = 1}^n$ denotes the complex-valued coefficient matrix and $x$ denotes the complex-valued potential function.
That is, for each $\alpha, \beta = 1, \ldots, n$,
$$c^{\alpha \beta} = a^{\alpha \beta} + i b^{\alpha \beta},$$
where both $a^{\alpha \beta}$ and $b^{\alpha \beta}$ are $\ensuremath{\mathbb{R}}$-valued functions defined on $\Omega \subset \ensuremath{\mathbb{R}}^n$, while
$$x = v + i w,$$
where both $v$ and $w$ are $\ensuremath{\mathbb{R}}$-valued functions defined on $\Omega$.
To translate our complex operator into the language of systems, we define
$$A = \begin{bmatrix} a & - b \\ b & a \end{bmatrix}, \quad \quad V = \begin{bmatrix} v & -w \\ w & v \end{bmatrix}.$$
That is, each of the entries of $A$ is an $n \times n$ matrix function:
$$A_{11} = A_{22} = a, \quad A_{12} = -b, \quad A_{21} = b,$$
while each of the entries of $V$ is a scalar function:
$$V_{11} = V_{22} = v, \quad V_{12} = -w, \quad V_{21} = w.$$
Then we define the systems operator
\begin{equation}
\label{LVDef0}
\mathcal{L}_V = -D_\alpha\pr{A^{\alpha \beta} D_\beta } + V.
\end{equation}
If $u = u_1 + i u_2$ is a $\ensuremath{\mathbb{C}}$-valued solution to $L_x u = 0$, where both $u_1$ and $u_2$ are $\ensuremath{\mathbb{R}}$-valued, then $\V{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$ is an $\ensuremath{\mathbb{R}}^2$-valued vector solution to the elliptic system described by $\mathcal{L}_V \V{u} = \V{0}$.
This construction also works with complex systems.
Let $C = A + i B$, where each $A^{\alpha\beta}$ and $B^{\alpha\beta}$ is $\ensuremath{\mathbb{R}}^{d \times d}$-valued, for $\alpha, \beta \in \set{1, \ldots, n}$.
If we take $X = V + i W$, where $V$ and $W$ are $\ensuremath{\mathbb{R}}^{d \times d}$-valued, then the operator
$$L_X = - D_\alpha \pr{C^{\alpha \beta} D_\beta } + X$$
describes a complex-valued system of $d$ equations.
Following the construction above, we get a real-valued system of $2d$ equations of the form described by \eqref{LVDef0}, where now
$$A = \begin{bmatrix} A & - B \\ B & A \end{bmatrix}, \qquad
V = \begin{bmatrix} V & - W \\ W & V \end{bmatrix}.$$
In particular, if $X$ is assumed to be a $d \times d$ Hermitian matrix (meaning that $X = X^*$), then $V$ is a $2d \times 2d$ real, symmetric matrix.
This shows that studying systems of equations with Hermitian potential functions (as is often done in mathematical physics) is equivalent to studying real-valued systems with symmetric potentials, as we do in this article.
Moreover, $X$ is positive (semi)definite iff $V$ is positive (semi)definite.
In conclusion, because there is much interest in complex-valued elliptic operators, we believe that it is very meaningful to study real-valued elliptic systems of equations.
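This dimension-doubling correspondence is also easy to verify numerically: for Hermitian $X$, the real embedding is symmetric and carries the same eigenvalues as $X$, each with doubled multiplicity. The following sketch (purely illustrative; any linear algebra package would do) checks this for a random Hermitian positive semidefinite $X$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def realify(X):
    """Real 2d x 2d embedding of a complex d x d matrix X = V + iW."""
    V, W = X.real, X.imag
    return np.block([[V, -W], [W, V]])

d = 3
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
X = M @ M.conj().T                 # Hermitian positive semidefinite
R = realify(X)

print("R symmetric:", np.allclose(R, R.T))
# Eigenvalues of X, each repeated twice, coincide with those of R.
print(np.sort(np.repeat(np.linalg.eigvalsh(X), 2)))
print(np.sort(np.linalg.eigvalsh(R)))
\end{verbatim}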
Our second motivation comes from physics and molecular dynamics.
Schr\"{o}dinger operators with complex Hermitian matrix potentials $V$ naturally arise when one seeks to solve the Schr\"{o}dinger eigenvalue problem for a molecule with Coulomb interactions between electrons and nuclei.
More precisely, it is sometimes useful to convert the eigenvalue problem associated to the above (scalar) Schr\"{o}dinger operator into a simpler eigenvalue problem associated to a Schr\"{o}dinger operator with a matrix potential and Laplacian taken with respect to only the nuclear coordinates.
See the classical references \cite[p. 335 - 342]{Tan07} and \cite[p. 148 - 153]{WC04} for more details.
Note that this potential is self-adjoint and is often assumed to have eigenvalues that are bounded below, or even approaching infinity as the nuclear variable approaches infinity.
See for example \cite{BHKPSS15,KPSS18, KPSS18b, PSS19,HS20}, where various molecular dynamical approximation errors and asymptotics are computed utilizing the matrix Schr\"{o}dinger eigenvalue problem stated above as their starting point.
With this in mind, we are hopeful that the results in this paper might find applications to the mathematical theory of molecular dynamics.
Moreover, it would be interesting to know whether the results of Sections \ref{FundMat} and \ref{ellipExamples} are true for ``Schr\"{o}dinger operators" with a matrix potential and nonzero first order terms.
Note that such operators also appear naturally when one solves the same Schr\"{o}dinger eigenvalue problem for a molecule with Coulomb interactions between electrons and nuclei, but only partially performs the ``conversion" described in the previous paragraph.
We again refer the reader to \cite[p. 335 - 342]{Tan07} for additional details.
It would also be interesting to determine whether defining a landscape function in terms of a Green's function of a Schr\"{o}dinger operator with a matrix potential would provide useful pointwise eigenfunction bounds.
Third, studying elliptic systems of PDEs with a symmetric nonnegative matrix potential provides a beautiful connection between the theory of matrix-weighted norm inequalities and the theory of elliptic PDEs.
In particular, classical scalar reverse H\"{o}lder and Muckenhoupt A${}_\infty$ assumptions on the scalar potential of elliptic equations are very often assumed (see \cite{She94, She95, She96} for example).
On the other hand, while various matrix versions of these conditions have appeared in the literature (see for example \cite{NT96,Vol97,Gol03,Dall15,Ros16}), the connections between elliptic systems of PDEs with a symmetric nonnegative matrix potential and the theory of matrix-weighted norm inequalities is a mostly unexplored area (with the exception of \cite{Dall15}, which provides a Shubin-Maz'ya type sufficiency condition for the discreteness of the spectrum of a Schr\"{o}dinger operator with complex Hermitian positive-semidefinite matrix potential $V$ on $\ensuremath{\R^n}$).
This project led to the systematic development of the theory of matrix reverse H\"older classes, ${\MC{B}_p}$, as well as an examination of the connections between ${\MC{B}_p}$, ${\MC{A}_\iny}$, and ${\MC{A}_{2,\iny}}$.
By going beyond the ideas from \cite{Dall15}, \cite{NT96}, \cite{Vol97}, we carefully study ${\MC{A}_\iny}$ and prove that ${\MC{A}_\iny} = {\MC{A}_{2,\iny}}$.
Unless otherwise stated, we assume that our $d \times d$ matrix weights (which play the role of the potential in our operators) are real-valued, symmetric, and positive semidefinite.
As described above, real symmetric potentials are equivalent to complex Hermitian potentials through a ``dimension doubling" process.
In fact, because of this equivalence, our results can be compared with those in mathematical physics, where systems with complex, Hermitian matrix potentials are considered.
To reiterate, we assume throughout the body of the article that $V$ is real-valued and symmetric.
However, in Appendix \ref{AiApp}, we follow the matrix weights community convention and assume that our matrix weights are complex-valued and Hermitian.
\subsection{Organization of the article}
The next three sections are devoted to matrix weight preliminaries with the goal of stating and proving our matrix version of the Fefferman-Phong inequality, Lemma \ref{FPml}.
In Section \ref{MWeights}, we present the different classes of matrix weights that we work with throughout this article, including the aforementioned matrix reverse H\"{o}lder condition ${\MC{B}_p}$ for $p > 1$ and the (non-Muckenhoupt) noncommutativity condition $\MC{NC}$ which will be crucial to the proof of Lemma \ref{FPml}.
These two classes will be discussed in relationship to the existing matrix weight literature in Appendix \ref{AiApp}.
Section \ref{MaxFun} introduces the auxiliary functions and their associated Agmon distance functions.
The Fefferman-Phong inequalities are then stated and proved in Section \ref{FPI}.
Section \ref{FPI} also contains the Poincar\'e inequality that is used to prove one of our new Fefferman-Phong inequalities.
The following three sections are concerned with elliptic theory.
Section \ref{EllOp} introduces the elliptic systems of the form \eqref{formalEPDE} discussed earlier.
The fundamental matrices associated to these operators are discussed in Section \ref{FundMat}.
In Section \ref{ellipExamples}, we show that the elliptic systems of the form \eqref{WC} satisfy the assumptions from Section \ref{FundMat}.
The last two sections, Section \ref{UpBds} and Section \ref{LowBds}, are respectively concerned with the upper and lower exponential bounds for our fundamental matrices.
Further, we discuss the aforementioned connection between our upper and lower auxiliary functions and Landscape functions at the end of Section \ref{LowBds}.
Finally, in our first two appendices, we state and prove a number of results related to the theory of matrix weights that are interesting in their own right, but are not needed for the proofs of our main results.
In Appendix \ref{Examples}, we explore the noncommutativity class $\MC{NC}$ in depth, providing examples and comparing it to our other matrix classes.
In Appendix \ref{AiApp}, we systematically develop the theory of the various matrix classes that are introduced in Section \ref{MWeights}.
In particular, we provide a comprehensive discussion of the matrix ${\MC{B}_p}$ class and characterize this class of matrix weights in terms of the more classical matrix weight class ${\MC{A}_{p,\iny}}$ from \cite{NT96, Vol97}.
This discussion nicely complements a related matrix weight characterization from \cite[Corollary $3.8$]{Ros16}.
We also discuss how the matrix ${\MC{A}_\iny}$ class introduced in \cite{Dall15} relates to the other matrix weight conditions discussed in this paper.
In particular, we establish that ${\MC{A}_\iny} = {\MC{A}_{2,\iny}}$.
Further, we provide a new characterization of the matrix ${\MC{A}_{2,\iny}}$ condition in terms of a new reverse Brunn-Minkowski type inequality.
We hope that Appendix \ref{AiApp} will appeal to the reader who is interested in the theory of matrix-weighted norm inequalities in their own right.
The last appendix contains the proofs of technical results that we skipped in the body.
We have attempted to make this article as self-contained as possible, particularly for the reader who is not an expert in elliptic theory or matrix weights.
In Appendix \ref{AiApp}, we have not assumed any prior knowledge of matrix weights.
As such, we hope that this section can serve as a reference for the ${\MC{A}_\iny}$ theory of matrix weights.
\subsection{Notation}
As is standard, we use $C$, $c$, etc. to denote constants that may change from line to line.
We may use the notation $C(n, p)$ to indicate that the constant depends on $n$ and $p$, for example.
The notation $a \lesssim b$ means that there exists a constant $c > 0$ so that $a \le c b$.
If $c = c\pr{d, p}$, for example, then we may write $a \lesssim_{(d, p)} b$.
We say that $a \simeq b$ if both $a \lesssim b$ and $b \lesssim a$ with dependence denoted analogously.
Let $\innp{\cdot, \cdot}_d : \ensuremath{\mathbb{R}}^d \times \ensuremath{\mathbb{R}}^d \to \ensuremath{\mathbb{R}}$ denote the standard Euclidean inner product on $d$-dimensional space.
When the dimension of the underlying space is understood from the context, we may drop the subscript and simply write $\innp{\cdot, \cdot}$.
For a vector $\V{v} \in \ensuremath{\mathbb{R}}^d$, its scalar length is $\abs{\V{v}} = \innp{\V{v}, \V{v}}^{\frac 1 2}$.
The sphere in $d$-dimensional space is $\ensuremath{\mathbb{S}^{d-1}} = \set{\V{v} \in \ensuremath{\mathbb{R}}^d : \abs{\V{v}} = 1}$.
For a $d \times d$ real-valued matrix $A$, we use the $2$-norm, which is given by
$$\abs{A} = \abs{A}_2 = \sup \set{\abs{Ax} : x \in \ensuremath{\mathbb{S}^{d-1}}} = \sup\set{\innp{Ax, y} : x, y \in \ensuremath{\mathbb{S}^{d-1}}}.$$
Alternatively, $\abs{A}$ is equal to its largest singular value, the square root of the largest eigenvalue of $AA^T$.
For symmetric positive semidefinite $d \times d$ matrices $A$ and $B$, we say that $A \le B$ if $\innp{A \vec e, \vec e} \le \innp{B \vec e, \vec e}$ for every $\vec e \in \ensuremath{\mathbb{R}}^d$.
Note that both $\abs{\V{v}}$ and $\abs{A}$ are scalar quantities.
If $A$ is symmetric and positive semidefinite, then $\abs{A}$ is equal to $\lambda$, the largest eigenvalue of $A$.
Let $\V{v} \in \ensuremath{\mathbb{S}^{d-1}}$ denote the eigenvector associated to $\lambda$ and let $\set{\V{e}_i}_{i=1}^d$ denote the standard basis of $\ensuremath{\mathbb{R}}^d$.
Observe that $\displaystyle \V{v} = \sum_{i=1}^d \innp{\V{v}, \V{e}_i} \V{e}_i$ and for each $j$, $\displaystyle \innp{\V{v}, \V{e}_j}^2 \le \sum_{i=1}^d \innp{\V{v}, \V{e}_i}^2 = 1$.
Then, since $A^{\frac 1 2}$ is well-defined, an application of Cauchy-Schwarz shows that
\begin{equation}
\label{normProp}
\begin{aligned}
\abs{A}
&= \lambda
= \innp{A\V{v}, \V{v}}
= \abs{A^{\frac 1 2} \V{v}}^2
\le \pr{\sum_{i=1}^d \abs{\innp{\V{v}, \V{e}_i}} \abs{A^{\frac 1 2} \V{e}_i}}^2 \\
&\le d \sum_{i=1}^d \innp{\V{v}, \V{e}_i}^2 \innp{A \V{e}_i, \V{e}_i}
\le d \sum_{i=1}^d \innp{ A\V{e}_i,\V{e}_i}.
\end{aligned}
\end{equation}
Let $\Omega \subset \ensuremath{\mathbb{R}}^n$ and $p \in \brp{1,\infty}$.
For any $d$-vector $\V{v}$, we write $\displaystyle \norm{\V{v}}_{L^p(\Omega)} = \pr{\int_{\Omega} \abs{\V{v}\pr{x}}^p dx}^{\frac 1 p}$.
Similarly, for any $d \times d$ matrix $A$, we use the notation $\displaystyle \norm{A}_{L^p(\Omega)} = \pr{\int_{\Omega} \abs{A\pr{x}}^p dx}^{\frac 1 p}$.
When $p = \infty$, we write $\displaystyle \norm{\V{v}}_{L^\infty(\Omega)} = \ess \sup_{x \in \Omega} \abs{\V{v}\pr{x}}$ and $\displaystyle \norm{A}_{L^\infty(\Omega)} = \ess \sup_{x \in \Omega} \abs{A\pr{x}}$.
We say that a vector $\V{v}$ belongs to $L^p\pr{\Omega}$ iff the scalar function $\abs{\V{v}}$ belongs to $L^p\pr{\Omega}$.
Similarly, for a matrix function $A$ defined on $\Omega$, $A \in L^p\pr{\Omega}$ iff $\abs{A} \in L^p\pr{\Omega}$.
In summary, we use the notation $\abs{\cdot}$ to denote norms of vectors and matrices, while we use the notations $\norm{\cdot}_p$, $\norm{\cdot}_{L^p}$, or $\norm{\cdot}_{L^p\pr{\Omega}}$ to denote $L^p$-space norms.
We let $C^\infty_c(\Omega)$ denote the set of all infinitely differentiable functions with compact support in $\Omega$.
If $\V{\varphi} : \Omega \to \ensuremath{\mathbb{R}}^d$ is a vector-valued function for which each component function $\varphi_i \in C^\infty_c(\Omega)$, then $\V{\varphi} \in C^\infty_c(\Omega)$.
For $x \in \ensuremath{\mathbb{R}}^n$ and $r > 0$, we use the notation $B\pr{x, r}$ to denote a ball of radius $r > 0$ centered at $x \in \ensuremath{\mathbb{R}}^n$.
We let $Q(x, r)$ be the ball of radius $r$ and center $x \in \ensuremath{\mathbb{R}}^n$ in the $\ell^\infty$ norm on $\ensuremath{\mathbb{R}}^n$ (i.e. the cube with side length $2r$ and center $x$).
We write $Q$ to denote a generic cube.
We use the notation $\ell$ to denote the sidelength of a cube.
That is, $\ell\pr{Q\pr{x, r}} = 2r$.
We will assume throughout that $n \ge 3$ and $d \ge 2$.
In general, $1 < p < \infty$, but we may further specify the range of $p$ as we go.
\subsection*{Acknowledgements.}
The authors would like to thank Svitlana Mayboroda and Bruno Poggi for interesting discussions and useful feedback.
\section{Matrix Classes}
\label{MWeights}
Within this section, we define the classes of matrix functions that we work with, then we collect a number of observations about them.
\subsection{Reverse H\"older Matrices}
Recall that for a nonnegative scalar-valued function $v$, we say that $v$ belongs to the reverse H\"older class ${\text{B}_p}$ if $v \in L^p_{\loc}\pr{\ensuremath{\mathbb{R}}^n}$ and there exists a constant $C_v$ so that for every cube $Q$ in $\ensuremath{\mathbb{R}}^n$,
\begin{equation}
\label{BpDefOne}
\pr{\fint_Q \brac{v\pr{x}}^p dx}^{\frac 1 p} \le C_v \fint_Q v\pr{x} dx.
\end{equation}
Let $V$ be a $d \times d$ {\bf matrix weight} on $\ensuremath{\mathbb{R}}^n$.
That is, $V$ is a $d \times d$ real-valued, symmetric, positive semidefinite matrix.
For such matrices, we define ${\MC{B}_p}$, the class of reverse H\"older matrices, via quadratic forms as follows.
\begin{defn}[Matrix ${\MC{B}_p}$]
For matrix weights, we say that $V$ belongs to the class of {\bf reverse H\"older matrices}, $V \in {\MC{B}_p}$, if $V \in L^p_{\loc}\pr{\ensuremath{\mathbb{R}}^n}$ and there exists a constant $C_V$ so that for every cube $Q$ in $\ensuremath{\mathbb{R}}^n$ and every $\vec e \in \ensuremath{\mathbb{R}}^d$ (or $\ensuremath{\mathbb{S}^{d-1}}$),
\begin{equation}
\label{BpDefTwo}
\pr{\fint_Q \innp{V\pr{x} \vec e, \vec e}^p dx}^{\frac 1 p}
\le C_V \innp{\pr{ \fint_Q V\pr{x} dx } \vec e, \vec e}.
\end{equation}
This constant is independent of $Q$ and $\V{e}$, so that the inequality holds uniformly in $Q$ and $\V{e}$.
We call $C_V$ the (uniform) ${\MC{B}_p}$ constant of $V$.
Note that $C_V$ depends on $V$ as well as $p$; to indicate this, we may use the notation $C_{V,p}$.
\end{defn}
\begin{rem}
This definition may be generalized as follows.
For any $q, p > 1$, we say that $V \in \mathcal{B}_{q, p}$ if there exists a constant $C_V$ so that for every cube $Q$ in $\ensuremath{\mathbb{R}}^n$ and every $\vec e \in \ensuremath{\mathbb{R}}^d$, it holds that
\begin{equation*}
\pr{\fint_Q \abs{V\pr{x}^{\frac 1 q} \vec e}^{qp} dx}^{\frac 1 p}
\le C_V \fint_Q \abs{V\pr{x}^{\frac 1 q} \vec e}^q dx.
\end{equation*}
Notice that ${\MC{B}_p} = \mathcal{B}_{2, p}$.
This matrix class is unexplored, but its theory is probably similar to what we develop here for ${\MC{B}_p}$.
One might expect a relationship between $\mathcal{B}_{q, p}$ and the $q$-Laplacian.
\end{rem}
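Although our use of \eqref{BpDefTwo} is purely theoretical, the inequality can be probed numerically for a candidate weight by sampling cubes and directions and recording the worst ratio of the left-hand side to the right-hand side. The sketch below does this for a diagonal power weight; for simplicity of the quadrature it uses one-dimensional cubes (intervals), whereas the body of the paper assumes $n \ge 3$, and the grid and sampling parameters are arbitrary choices.
\begin{verbatim}
import numpy as np

def V(x):
    """Diagonal power weight diag(|x|, x^2); each entry is scalar B_p."""
    return np.array([[abs(x), 0.0], [0.0, x * x]])

def bp_ratio(c, r, e, p=2, m=801):
    """LHS/RHS of the matrix reverse Holder inequality on Q = (c-r, c+r)."""
    y = np.linspace(c - r, c + r, m)
    Vs = np.array([V(t) for t in y])          # samples of V on the cube
    q = np.einsum("i,mij,j->m", e, Vs, e)     # <V(y) e, e> on the grid
    lhs = np.mean(q ** p) ** (1.0 / p)
    rhs = e @ Vs.mean(axis=0) @ e             # <(avg_Q V) e, e>
    return lhs / rhs

rng = np.random.default_rng(3)
worst = 0.0
for _ in range(500):
    c, r = rng.uniform(-10, 10), 10 ** rng.uniform(-2, 1)
    e = rng.normal(size=2)
    e /= np.linalg.norm(e)
    worst = max(worst, bp_ratio(c, r, e))
print("largest sampled ratio (a lower bound for C_V):", worst)
\end{verbatim}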
Now we collect some observations about such matrix functions.
The first result regards the norm of a matrix ${\MC{B}_p}$ function.
\begin{lem}[Largest eigenvalue is scalar ${\text{B}_p}$]
\label{normVBp}
If $V \in {\MC{B}_p}$, then $\abs{V} \in {\text{B}_p}$ with $C_{\abs{V}} \lesssim_{(d, p)} C_V$.
\end{lem}
\begin{proof}
Let $\V{e}_1, \ldots, \V{e}_d$ denote the standard basis for $\ensuremath{\R^d}$.
Using that $\displaystyle \abs{V} \le d \sum_{i=1}^d \innp{V \V{e}_i, \V{e}_i}$ (as explained in the notation section, see \eqref{normProp}) combined with the H\"older and Minkowski inequalities shows that
\begin{align*}
\pr{\fint_Q \abs{V(x)}^p \, dx}^\frac{1}{p}
&\le \brac{\fint_Q \pr{ d \sum_{j=1}^d \innp{V\pr{x} \V{e}_j, \V{e}_j}}^p \, dx}^\frac{1}{p}
\le d^{2 - \frac 1 p} \sum_{j = 1}^d \pr{\fint_Q \innp{V(x) \V{e}_j, \V{e}_j}^p \, dx }^\frac{1}{p} \\
&\le d^{2 - \frac 1 p} C_V \sum_{j = 1}^d \innp{\pr{\fint_Q V(x) dx } \V{e}_j, \V{e}_j}
\le d^{3 - \frac 1 p} C_V \fint_Q \abs{V(x)} \, dx,
\end{align*}
where we have used the reverse H\"older inequality \eqref{BpDefTwo} in the third step.
\end{proof}
\begin{lem}[Gehring's Lemma]
\label{GehringLemma}
If $V \in {\MC{B}_p}$, then there exists $\varepsilon\pr{p, C_V} > 0$ so that $V \in \mathcal{B}_{p+\varepsilon}$.
In particular, $V \in \mathcal{B}_q$ for all $q \in \brac{1, p + \varepsilon}$.
Moreover, if $q \le s$, then $C_{V, q} \le C_{V, s}$.
\end{lem}
\begin{proof}
Since $\innp{V(x)\V{e}, \V{e}}$ is a scalar ${\text{B}_p}$ weight (with ${\text{B}_p}$ constant uniform in $\V{e} \in \ensuremath{\mathbb{R}}^d$), it follows from the proof of Gehring's Lemma (see for example the dyadic proof in \cite{Per01}) that there exists $\varepsilon > 0$ such that $V \in \MC{B}_{p + \varepsilon}$.
Let $q \le p + \varepsilon$, $\V{e} \in \ensuremath{\mathbb{R}}^d$.
Then by H\"older's inequality,
\begin{align*}
\pr{\fint_Q \innp{V\pr{x} \vec e, \vec e}^q dx}^{\frac 1 q}
&\le \frac{1}{\abs{Q}^{\frac 1 q}} \brac{\pr{\int_Q \innp{V\pr{x} \vec e, \vec e}^{p+\varepsilon} dx}^{\frac q {p+\varepsilon}} \abs{Q}^{1 - \frac q {p+\varepsilon}}}^{\frac 1 q} \\
&=\pr{\fint_Q \innp{V\pr{x} \vec e, \vec e}^{p+\varepsilon} dx}^{\frac 1 {p+\varepsilon}}
\le C_{V, p+\varepsilon} \innp{\pr{ \fint_Q V\pr{x} dx } \vec e, \vec e},
\end{align*}
showing that $V \in \mathcal{B}_q$.
If $q \le s \le p + \varepsilon$, then the same argument holds with $C_{V,s}$ in place of $C_{V, p+\varepsilon}$.
\end{proof}
Now we introduce an averaged version of $V$ that will be extensively used in our arguments.
\begin{defn}[Averaged matrix]
Let $V$ be a function, $x \in \ensuremath{\mathbb{R}}^n$, $r > 0$.
We define the {\bf averaged matrix} as
\begin{align}
\Psi\pr{x, r; V} = \frac{1}{r^{n-2}} \int_{Q\pr{x,r}} V\pr{y} dy.
\label{eqB.2}
\end{align}
\end{defn}
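For concreteness, \eqref{eqB.2} can be approximated by a midpoint rule on a tensor grid. The sketch below does this for $n = 3$ with the weight supplied as a callable; the test weight $V(y) = \abs{y}^2 I_2$, for which $\Psi\pr{0, r; V} = 8 r^4 I_2$ exactly, and the grid resolution are illustrative choices.
\begin{verbatim}
import numpy as np

def Psi(x, r, V, m=20):
    """Midpoint-rule approximation of Psi(x, r; V) = r^(2-n) int_Q V
    on cubes Q(x, r) in R^3 (n = 3), for a matrix-valued callable V."""
    n = 3
    g = x[:, None] + np.linspace(-r, r, m, endpoint=False) + r / m
    total = np.zeros_like(V(x))
    for a in g[0]:
        for b in g[1]:
            for c in g[2]:
                total += V(np.array([a, b, c]))
    return total * (2 * r / m) ** n / r ** (n - 2)

# Test weight V(y) = |y|^2 I_2, for which Psi(0, r; V) = 8 r^4 I_2.
V = lambda y: np.dot(y, y) * np.eye(2)
for r in [0.5, 1.0, 2.0]:
    print(r, np.diag(Psi(np.zeros(3), r, V)))  # diag -> ~0.5, ~8, ~128
\end{verbatim}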
These averages have a controlled growth.
\begin{lem}[Controlled growth, cf. Lemma 1.2 in \cite{She95}]
\label{BasicShenLem}
If $V \in {\MC{B}_p}$, then for any $0 < r < R < \infty$,
\begin{align*}
\Psi\pr{x, r; V} \le C_V \pr{\frac{r}{R}}^{2 - \frac{n}{p}} \Psi\pr{x, R; V},
\end{align*}
\label{lB.1}
where $C_V$ is the uniform ${\MC{B}_p}$ constant for $V$.
\end{lem}
\begin{proof}
Let $0 < r < R$.
Then for any $\vec e \in \ensuremath{\mathbb{R}}^d$, applications of the H\"older inequality and the reverse H\"older inequality described by \eqref{BpDefTwo} show that
\begin{align*}
\innp{\pr{\fint_{Q\pr{x, r}} V\pr{y} dy} \vec e, \vec e}
&\le \pr{\frac 1 {\abs{Q\pr{x, r}}} \int_{Q\pr{x, r}} \innp{V\pr{y} \vec e, \vec e}^p dy}^{\frac 1 p}
\le \pr{\frac {\abs{Q\pr{x, R}}} {\abs{Q\pr{x, r}}} \fint_{Q\pr{x, R}} \innp{V\pr{y} \vec e, \vec e}^p dy}^{\frac 1 p} \\
&= \pr{\frac R r}^{\frac n p} \pr{ \fint_{Q\pr{x, R}} \innp{V\pr{y} \vec e, \vec e}^p dy}^{\frac 1 p}
\le C_V \pr{\frac R r}^{\frac n p} \innp{\pr{ \fint_{Q\pr{x, R}} V\pr{y} dy } \vec e, \vec e}.
\end{align*}
As $\vec{e} \in \ensuremath{\mathbb{R}}^d$ was arbitrary, it follows that $\displaystyle \fint_{Q\pr{x, r}} V\pr{y} dy \le C_V \pr{\frac R r}^{\frac n p} \fint_{Q\pr{x, R}} V\pr{y} dy$, which leads to the conclusion of the lemma.
\end{proof}
Furthermore, the ${\MC{B}_p}$ matrices serve as doubling measures.
\begin{lem}[Doubling result]
\label{Vdbl}
If $V \in {\MC{B}_p}$, then $V$ is a doubling measure.
That is, there exists a doubling constant $\gamma = \gamma\pr{n, p, C_V} > 0$ so that for every $x \in \ensuremath{\mathbb{R}}^n$ and every $r > 0$,
\begin{align*}
\int_{Q\pr{x, 2r}} V\pr{y} dy \le \gamma \int_{Q\pr{x, r}} V\pr{y} dy.
\end{align*}
\end{lem}
\begin{proof}
Since each $\innp{V \V{e}, \V{e}}$ belongs to ${\text{B}_p}$, then by the scalar result, $\innp{V \V{e}, \V{e}}$ defines a doubling measure.
Moreover, since the ${\text{B}_p}$ constant is independent of $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$, then so too is the doubling constant associated to each measure defined by $\innp{V \V{e}, \V{e}}$.
It follows that $V$ defines a doubling measure.
\end{proof}
\subsection{Nondegenerate matrices}
Next, we define a very natural class of nondegenerate matrices.
\begin{defn}[Nondegeneracy class]
We say that $V$ belongs to the {\bf nondegeneracy class}, $V \in \MC{ND}$, if $V$ is a matrix weight that satisfies the following very mild nondegeneracy condition:
For any measurable $|E| > 0$, we have (in the usual sense of semidefinite matrices) that
\begin{equation}
V(E) := \int_E V(y) \, dy > 0.
\label{NDCond}
\end{equation}
\end{defn}
First we give an example of a matrix function in ${\MC{B}_p}$ but not in $\MC{ND}$.
\begin{ex}[${\MC{B}_p} \setminus \MC{ND}$ is nonempty]
\label{degenMatrix}
Take $v: \ensuremath{\mathbb{R}}^n \to \ensuremath{\mathbb{R}}$ in ${\text{B}_p}$ and define
$$V = \brac{\begin{array}{cccc} v & 0 & \ldots & 0 \\ 0 & 0 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 0 \end{array}}.$$
It is clear that $V \in {\MC{B}_p}$.
However, since $V$ and its averages all have zero eigenvalues, then $V \notin \MC{ND}$.
\end{ex}
Now we produce a number of examples in both ${\MC{B}_p}$ and $\MC{ND}$.
\begin{ex}[${\MC{B}_p} \cap \MC{ND}$ polynomial matrices]
\label{polyEx}
Let $V$ be a polynomial matrix.
\begin{itemize}
\item[(a)] If $V$ is symmetric, nontrivial along the diagonal, and positive semidefinite, then $V$ satisfies \eqref{NDCond}.
It follows from Corollary \ref{ExampleCor} that $V \in {\MC{B}_p}$ as well.
\item[(b)] If for every nonzero $\V{e} \in \ensuremath{\mathbb{R}}^d$, there exists $i \in \set{1, \ldots, d}$ so that $\displaystyle \sum_{j = 1}^d V_{ij}e_j \ne 0$, then $V^T V$ satisfies \eqref{NDCond}.
Since $V^T V$ is symmetric and polynomial, then $V^T V \in \MC{ND} \cap {\MC{B}_p}$.
A similar condition shows that $V V^T \in \MC{ND} \cap {\MC{B}_p}$ as well.
\end{itemize}
\end{ex}
As we will see below, the nondegeneracy condition described by \eqref{NDCond} facilitates the introduction of one of our key tools.
However, there are also practical reasons to avoid working with matrices that aren't nondegenerate.
For example, consider a matrix-valued Schr\"odinger operator of the form $- \Delta + V$, where $V$ is as given in Example \ref{degenMatrix}.
The fundamental matrix of this operator is diagonal with only the first entry exhibiting decay, while all other diagonal entries contain the fundamental solution for $-\Delta$.
In particular, since the norm of this fundamental matrix doesn't exhibit exponential decay, we believe that the assumption of nondegeneracy is very natural for our setting.
\subsection{Noncommutativity condition}
\label{NCCondition}
As we'll see below, the single assumption that $V \in {\MC{B}_p}$ will not suffice for our needs, and we'll impose additional conditions on $V$.
To define the noncommutativity condition that we use, we need to introduce the lower auxiliary function associated to $V \in {\MC{B}_p} \cap \MC{ND}$.
If $V \in \MC{ND}$, then by \eqref{eqB.2} and \eqref{NDCond}, for each $x \in \ensuremath{\mathbb{R}}^n$ and $r > 0$, we have $\Psi\pr{x, r; V} > 0$.
If $V \in {\MC{B}_p}$ and $p > \frac{n}{2}$, then the power $2 - \frac n p > 0$ and it follows from Lemma \ref{lB.1} that
\begin{equation}
\begin{aligned}
& \lim_{r \to 0^+} \innp{\Psi\pr{x, r; V}\V{e}, \V{e}} = 0 \; \text{ for any } \V{e} \in \ensuremath{\R^d}, \\
& \lim_{R \to \infty} \min_{\V{e} \in \ensuremath{\mathbb{S}^{d-1}}} \innp{\Psi\pr{x, R; V} \V{e}, \V{e}} = \infty.
\end{aligned}
\end{equation}
These observations allow us to make the following definition of $\underline{m}$, the lower auxiliary function.
\begin{defn}[Lower auxiliary function]
Let $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$.
We define the {\bf lower auxiliary function} $\underline{m}\pr{\cdot, V} : \ensuremath{\R^n} \rightarrow (0, \infty)$ as follows:
\begin{align*}
\frac{1}{\underline{m}\pr{x, V}} = \sup_{r > 0} \set{ r : \min_{\V{e} \in \ensuremath{\mathbb{S}^{d-1}}} \innp{\Psi\pr{x,r; V} \V{e}, \V{e}} \le 1}.
\end{align*}
\end{defn}
We will investigate this function and others in much more detail within Section \ref{MaxFun}.
For now, we use $\underline{m}$ to define our next matrix class.
\begin{defn}[Noncommutativity class]
If $V \in {\MC{B}_p} \cap \MC{ND}$, then we say that $V$ belongs to the {\bf noncommutativity class}, $V \in \MC{NC}$, if there exists $N_V > 0$ so that for every $x \in \ensuremath{\mathbb{R}}^n$ and every $\V{e} \in \ensuremath{\mathbb{R}}^d$,
\begin{equation}
N_V \abs{\V{e}}^2 \le \int_Q \innp{V^\frac12 (y) V(Q)^{-1} V^\frac12 (y) \V{e}, \V{e}} \, dy,
\label{NCCond}
\end{equation}
where $Q = Q\pr{x, \frac 1 {\underline{m}(x, V)}}$.
\end{defn}
For most of our applications, we consider $\MC{NC}$ as a subset of ${\MC{B}_p} \cap \MC{ND}$ in order to make sense of $\underline{m}(\cdot, V)$.
However, if $V \notin {\MC{B}_p} \cap \MC{ND}$ or $\underline{m}\pr{\cdot, V}$ is not well-defined, we say that $V \in \MC{NC}$ if \eqref{NCCond} holds for every cube $Q \subset \ensuremath{\mathbb{R}}^n$.
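As a sanity check on this definition, observe that for a diagonal weight with a.e. positive entries $v_1, \ldots, v_d$, the integrand in \eqref{NCCond} collapses to $\operatorname{diag}\pr{v_i(y) / \int_Q v_i}$, so the integral over any cube equals the identity and \eqref{NCCond} holds with $N_V = 1$. The sketch below confirms this numerically; the one-dimensional cube (interval), the specific weight, and the grid size are illustrative choices.
\begin{verbatim}
import numpy as np

def msqrt(M):
    """Symmetric positive semidefinite square root via eigh."""
    w, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.T

def nc_matrix(Vfun, a, b, m=2001):
    """Approximate int_Q V^(1/2)(y) V(Q)^(-1) V^(1/2)(y) dy, Q = (a, b)."""
    y = np.linspace(a, b, m)
    Vs = np.array([Vfun(t) for t in y])
    VQinv = np.linalg.inv(Vs.mean(axis=0) * (b - a))   # V(Q)^(-1)
    R = np.array([msqrt(M) for M in Vs])
    return (R @ VQinv @ R).mean(axis=0) * (b - a)

# Diagonal weight: integrand is diag(v_i(y) / int_Q v_i), so the
# result is the identity matrix.
Vdiag = lambda t: np.diag([abs(t), t * t + 1.0])
M = nc_matrix(Vdiag, -2.0, 5.0)
print("smallest eigenvalue:", np.linalg.eigvalsh(M).min())  # ~1.0
\end{verbatim}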
To show that this class of matrices is meaningful, we provide a non-example.
\begin{ex}[$\MC{NC}$ is a proper subset of ${\MC{B}_p} \cap \MC{ND}$]
\label{notNCEx}
Define $V : \ensuremath{\mathbb{R}}^n \to \ensuremath{\mathbb{R}}^{2 \times 2}$ by
$$V(x) = \begin{bmatrix}1 & \abs{x}^2 \\ \abs{x}^2 & \abs{x}^4 \end{bmatrix} = \begin{bmatrix}1 & x_1^2 + \ldots + x_n^2 \\ x_1^2 + \ldots + x_n^2 & \pr{x_1^2 + \ldots + x_n^2}^2 \end{bmatrix}.$$
By Example \ref{polyEx}, $V \in {\MC{B}_p} \cap \MC{ND}$.
However, as shown in Appendix \ref{Examples}, $V \notin \MC{NC}$.
\end{ex}
In Section \ref{UpBds}, we prove one of our main results: an upper exponential decay estimate for the fundamental matrix of the elliptic operator $\mathcal{L}_V$, where $V \in {\MC{B}_p} \cap\MC{ND} \cap\MC{NC}$.
A further discussion of these matrix classes and their relationships is available in Appendix \ref{Examples}.
\subsection{Stronger conditions}
To finish our discussion of matrix weights, we introduce some closely related and more well-known classes of matrices.
Note that these assumptions are stronger and more readily checkable.
\begin{defn}[${\MC{A}_\iny}$ matrices]
We say that $V$ belongs to the {\bf A-infinity class of matrices}, $V \in {\MC{A}_\iny}$, if for any $\epsilon > 0$, there exists $\delta > 0$ so that for every cube $Q$,
\begin{equation}
\label{Ainf}
\abs{\set{x \in Q: V\pr{x} \geq \delta \fint_Q V\pr{y} dy}} \geq (1-\epsilon) |Q|.
\end{equation}
\end{defn}
This class of matrix weights was first introduced in \cite{Dall15}, where the author proved a Shubin-Maz'ya type sufficiency condition for the discreteness of the spectrum of a Schr\"odinger operator $- \Delta + V$, where $V \in {\MC{A}_\iny}$.
Interestingly, and somewhat surprisingly, we show in Appendix \ref{AiApp} that the condition $V \in {\MC{A}_\iny}$ is equivalent to $V \in \MC{A}_{2, \infty}$.
The class $\MC{A}_{2, \infty}$ is the readily checkable class of matrix weights introduced in \cite{NT96,Vol97}, which we now define.
\begin{defn}[$\mathcal{A}_{2, \infty}$ matrices]
We say $V \in \mathcal{A}_{2, \infty}$, if there exists $A_V > 0$ so that for every cube $Q$, we have
\begin{equation}
\det \pr{\fint_Q V(x) \, dx } \le A_V \exp \pr{\fint_Q \ln \det V(x) \, dx }.
\label{AtwoInf}
\end{equation}
\end{defn}
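Condition \eqref{AtwoInf} is straightforward to test numerically for explicit weights. As a sanity check, when $d = 1$ the inequality reads $\fint_Q v \le A_V \exp\pr{\fint_Q \ln v}$, and for $v(x) = \abs{x}^{\gamma}$ on intervals $\pr{0, L}$ a direct computation gives the ratio $e^{\gamma}/\pr{\gamma + 1}$, independently of $L$. The sketch below (with an arbitrary grid size) reproduces this value.
\begin{verbatim}
import numpy as np

def a2inf_ratio(Vfun, a, b, m=20000):
    """det(avg_Q V) / exp(avg_Q log det V) on the interval Q = (a, b)."""
    h = (b - a) / m
    y = a + h * (np.arange(m) + 0.5)      # midpoints avoid the origin
    Vs = np.array([Vfun(t) for t in y])
    lhs = np.linalg.det(Vs.mean(axis=0))
    rhs = np.exp(np.mean(np.log(np.linalg.det(Vs))))
    return lhs / rhs

gamma = 2.0
v = lambda t: np.array([[abs(t) ** gamma]])  # scalar weight, 1x1 matrix
for L in [0.1, 1.0, 10.0]:
    print(L, a2inf_ratio(v, 0.0, L),
          "expected:", np.exp(gamma) / (gamma + 1))
\end{verbatim}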
We briefly discuss the relationship between ${\MC{B}_p}$ and ${\MC{A}_\iny}$.
First, we have the following application of Gehring's lemma.
\begin{lem}[${\MC{A}_\iny} \subset \mathcal{B}_q$]
If $V \in {\MC{A}_\iny}$, then $V \in \mathcal{B}_q$ for some $q > 1$.
\end{lem}
\begin{proof}
Since $\innp{V \V{e}, \V{e}} \in {\text{A}_\iny}$ uniformly in $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$, then by \cite[Lemma 7.2.2]{Gra14},
there exists $q > 1$ so that $\innp{V \V{e}, \V{e}} \in B_q$ uniformly in $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$.
In particular, $V \in \mathcal{B}_q$, as required.
\end{proof}
On the other hand, if $V \in {\MC{B}_p}$, there is no reason to expect that $V \in {\MC{A}_\iny}$.
Namely, if $V \in {\MC{B}_p}$, then for each $\V{e} \in \ensuremath{\mathbb{R}}^d$, $\innp{V(x) \V{e}, \V{e}}$ is a scalar ${\text{B}_p}$ function.
This means that $\innp{V(x) \V{e}, \V{e}}$ is a scalar ${\text{A}_\iny}$ function.
Therefore, if $V \in {\MC{B}_p}$, then for each \textit{fixed} $\V{e}$ and any $\epsilon > 0$, there exists $\delta = \delta\pr{\V{e}} > 0$ so that for every cube $Q$,
\begin{equation}
\label{WkAinf}
\abs{\set{x \in Q: \innp{V\pr{x} \V{e}, \V{e}} \geq \delta \fint_Q \innp{V\pr{y} \V{e}, \V{e}} \, dy}} \geq (1-\epsilon) |Q|.
\end{equation}
In particular, since there is no guarantee that $\displaystyle \inf\set{\delta\pr{\V{e}} : \V{e} \in \ensuremath{\mathbb{S}^{d-1}}} > 0$, the assumption that $V \in {\MC{A}_\iny}$ is not inherited from the assumption that $V \in {\MC{B}_p}$.
Example \ref{notNCEx} gives a matrix function that belongs to ${\MC{B}_p}$ for any $p$, but doesn't belong to $\MC{NC}$, and therefore by Lemma \ref{AIinNC}, also doesn't belong to ${\MC{A}_\iny}$.
Recall that $\lambda_d = \abs{V}$ denotes the largest eigenvalue of $V$.
As we saw in Lemma \ref{normVBp}, if $V \in {\MC{B}_p}$, then $\lambda_d$ belongs to ${\text{B}_p}$.
Let $\lambda_1$ denote the smallest eigenvalue of $V$.
That is, $\lambda_1 = \abs{V^{-1}}^{-1}$.
Under a stronger set of assumptions, we can also make the interesting conclusion that $\lambda_1$ is in ${\text{B}_p}$.
\begin{prop}[Smallest eigenvalue is scalar ${\text{B}_p}$]
\label{la1Prop}
If $V \in {\MC{B}_p} \cap {\MC{A}_\iny}$, then $\lambda_1 \in {\text{B}_p}$.
\end{prop}
The proof of this result appears in Appendix \ref{TechProofs}.
Although the assumption that $V \in {\MC{B}_p} \cap {\MC{A}_\iny}$ implies that the smallest and largest eigenvalues of $V$ belong to ${\text{B}_p}$, it is unclear what conditions would imply that the other eigenvalues belong to this reverse H\"older class.
The next result and its proof show that the $\MC{NC}$ condition can be thought of as a noncommutative, non-${\MC{A}_\iny}$ condition that is very naturally implied by the noncommutativity that is built into the ${\MC{A}_\iny}$ definition.
\begin{lem}[${\MC{A}_\iny} \subset \MC{NC}$]
\label{AIinNC}
If $V \in {\MC{A}_\iny}$, then $V \in \MC{NC}$.
\end{lem}
In the following proof, we establish that \eqref{NCCond} holds for all cubes $Q \subset \ensuremath{\mathbb{R}}^n$, not just those at the special scale which are defined by $Q = Q\pr{x, \frac 1 {\underline{m}(x, V)}}$.
\begin{proof}
Since $V \in {\MC{A}_\iny}$, we may choose $\delta > 0$ so that \eqref{Ainf} holds with $\varepsilon = \frac 1 2$.
That is, for any $Q \subset \ensuremath{\mathbb{R}}^n$, if we define $\displaystyle S = \set{x \in Q: V\pr{x} \geq \delta \fint_Q V\pr{y} dy}$, then $\abs{S} \ge \frac 1 2 \abs{Q}$.
Observe that since
$$S = \set{x \in Q: V(x)^{\frac 1 2} V\pr{Q}^{-1} V(x)^{\frac 1 2} \geq \frac{\delta}{\abs{Q}} I},$$
then
\begin{align*}
\int_Q V(x)^{\frac 1 2} V\pr{Q}^{-1} V(x)^{\frac 1 2} dx
&\ge \int_{S} V(x)^{\frac 1 2} V\pr{Q}^{-1} V(x)^{\frac 1 2} dx
\ge \int_{S} \frac{\delta}{\abs{Q}} I dx
\ge \frac \delta 2 I,
\end{align*}
showing that $V \in \MC{NC}$.
\end{proof}
Next we describe a collection of examples of matrix functions in ${\MC{B}_p} \cap {\MC{A}_{2,\iny}}$.
Let $A = \pr{a_{ij}}_{i, j = 1}^d$ be a $d \times d$ Hermitian, positive definite matrix and let $\Gamma = \pr{\gamma_{ij}}_{i, j = 1}^d$ be some constant matrix.
We use $A$ and $\Gamma$ to define the $d \times d$ matrix function $V : \ensuremath{\mathbb{R}}^n \to \ensuremath{\mathbb{R}}^{d \times d}$ by
\begin{equation}
\label{mpower}
V(x) = \begin{pmatrix}
a_{11} |x|^{\gamma_{11}} & \dots & a_{1d} |x|^{\gamma_{1d}} \\
\vdots &\ddots & \vdots\\
a_{d1} |x|^{\gamma_{d1}} & \dots & a_{dd} |x|^{\gamma_{dd}}
\end{pmatrix}.
\end{equation}
By \cite[Theorem 3.1]{BLM17}, a matrix of the form \eqref{mpower} is positive definite a.e. iff $\gamma_{ij} = \gamma_{ji} = \frac12 \pr{\gamma_{ii} + \gamma_{jj}}$ for $i, j = 1, \ldots, d$.
Moreover, in this setting, \cite[Lemma 3.4]{BLM17} shows that $V^{-1} : \ensuremath{\mathbb{R}}^n \to \ensuremath{\mathbb{R}}^{d \times d}$ is well-defined and given by
\begin{equation}
\label{mpowerIn}
V(x)^{-1} = \pr{a^{ij} \abs{x}^{- \gamma_{ji}}}_{i, j =1}^d,
\end{equation}
where $A^{-1} = \pr{a^{ij}}_{i, j = 1}^d$.
Under the assumption of positive definiteness, these matrices provide a full class of examples of matrix weights in ${\MC{B}_p} \cap {\MC{A}_{2,\iny}}$.
\begin{prop}
\label{BickelToProveProp}
Let $V$ be defined by \eqref{mpower} where $A = \pr{a_{ij}}_{i, j = 1}^d$ is a $d \times d$ Hermitian, positive definite matrix and $\gamma_{ij} = \frac 1 2 \pr{\gamma_i + \gamma_j}$ for some $\V{\gamma} \in \ensuremath{\mathbb{R}}^d$.
If $p \geq 1$ and $\gamma_{i} > - \frac{n}{p}$ for each $1 \leq i \leq d$, then $V \in {\MC{B}_p} \cap {\MC{A}_{2,\iny}}$.
\end{prop}
The proof of this result appears in Appendix \ref{TechProofs}.
The classical Brunn-Minkowski inequality implies that the map $A \mapsto (\det A)^{\frac{1}{d}}$, defined on the set of $d \times d$ symmetric positive semidefinite matrices $A$, is concave.
An application of Jensen's inequality shows that
$$\pr{\det \fint_Q V(x) dx}^\frac{1}{d} \geq \fint_Q \brac{ \det V(x)} ^\frac{1}{d} dx;$$
see \eqref{DetConvexIneq} in the proof of Lemma \ref{MatrixJensen}.
Accordingly, we make the following definition of an associated reverse class.
\begin{defn}[$\RBM$]
\label{RBMDef}
We say that a matrix weight $V$ belongs to the {\bf reverse Brunn-Minkowski class}, $V \in \RBM$, if there exists a constant $B_V > 0$ so that for any cube $Q \subset \ensuremath{\R^n}$, it holds that
\begin{equation}
\label{RBrunnMin}
\pr{\det \fint_Q V(x) dx}^\frac{1}{d} \leq B_V \fint_Q \brac{\det V(x)}^\frac{1}{d} dx.
\end{equation}
\end{defn}
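Both the forward (Jensen) direction and the ratio in \eqref{RBrunnMin} are easy to examine numerically. The sketch below checks that $\pr{\det \fint_Q V}^{1/d} \ge \fint_Q \pr{\det V}^{1/d}$ on random positive definite samples standing in for the values of a weight on a cube; the sampling scheme is an arbitrary illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
d, m = 3, 500

# Random positive definite matrices standing in for values of V on Q.
A = rng.normal(size=(m, d, d))
Vs = A @ A.transpose(0, 2, 1) + 0.1 * np.eye(d)

lhs = np.linalg.det(Vs.mean(axis=0)) ** (1.0 / d)  # det(avg V)^(1/d)
rhs = np.mean(np.linalg.det(Vs) ** (1.0 / d))      # avg of det(V)^(1/d)

print("det(avg)^(1/d):", lhs)
print("avg(det^(1/d)):", rhs)
print("Jensen direction holds:", lhs >= rhs)
print("RBM ratio B_V for this sample:", lhs / rhs)
\end{verbatim}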
In Appendix \ref{TechProofs}, we also provide the proof of the following ``non-${\MC{A}_\iny}$'' condition for $V \in \MC{NC}$.
\begin{prop}
\label{RBrunnMinProp}
If $V \in \MC{ND}$ and there exists a constant $B_V > 0$ so that \eqref{RBrunnMin} holds for every cube $Q \subset \ensuremath{\mathbb{R}}^n$, then $V \in \MC{NC}$.
\end{prop}
Even for a $d \times d$ diagonal matrix weight $V$ with a.e. positive entries $\lambda_1(x) , \ldots, \lambda_d(x)$, it is not clear when \eqref{RBrunnMin} holds.
If each $\lambda_j (x) \in {\text{A}_\iny}$ for $1 \leq j \leq d$, then \eqref{RBrunnMin} holds.
On the other hand, every diagonal matrix weight $V$ with a.e. positive entries belongs to $\MC{NC}$, whether or not its entries are ${\text{A}_\iny}$, so \eqref{RBrunnMin} is not a necessary condition for membership in $\MC{NC}$.
It would be interesting to find an easily checkable sufficient condition for $V \in \MC{NC}$ that is at least trivially true in the case of diagonal matrix weights.
For a much deeper discussion of the classes ${\MC{B}_p}, {\MC{A}_\iny}, {\MC{A}_{2,\iny}}$, and their relationships to each other, we refer the reader to Appendix \ref{AiApp}.
In fact, we hope that Appendix \ref{AiApp} will serve as a self-contained reference for the reader who is unfamiliar with the theory of matrix weights.
We don't discuss matrix $\mathcal{A}_p$ weights in Appendix \ref{AiApp} since they play no role in this paper.
However, \cite{Aa09} serves as an excellent reference for the theory of matrix $\mathcal{A}_p$ weights and the boundedness of singular integrals on these spaces.
\section{Auxiliary Functions and Agmon Distances}
\label{MaxFun}
Now that we have introduced the class of matrices that we work with, we develop the theory of their associated auxiliary functions.
In the scalar setting, these ideas appear in \cite{She94}, \cite{She95}, \cite{She99}, and \cite{MP19}, for example.
As we are working with matrices instead of scalar functions, there are many different ways to generalize these ideas.
We assume from now on that $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p \in \brac{\frac{n}{2}, \infty}$.
By Lemma \ref{GehringLemma}, there is no loss in assuming that $p > \frac n 2$.
Since $V \in \MC{ND}$, then by \eqref{eqB.2} and \eqref{NDCond}, for each $x \in \ensuremath{\mathbb{R}}^n$ and $r > 0$, we have $\Psi\pr{x, r; V} > 0$.
Since $p > \frac{n}{2}$, then the power $2 - \frac n p > 0$ and it follows from Lemma \ref{lB.1} that for any $\V{e} \in \ensuremath{\R^d}$,
\begin{equation}
\label{eqB.3}
\begin{aligned}
& \lim_{r \to 0^+} \innp{\Psi\pr{x, r; V} \V{e}, \V{e}} = 0 \\
& \lim_{R \to \infty} \innp{\Psi\pr{x, R; V} \V{e}, \V{e}} = \infty.
\end{aligned}
\end{equation}
This allows us to make the following definition.
If $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$, then for $x \in \ensuremath{\mathbb{R}}^n$ and $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$, the \textit{auxiliary function} $m\pr{x, \V{e}, V} \in (0, \infty)$ is defined by
\begin{align}
\frac{1}{m\pr{x, \V{e}, V}} = \sup \set{ r > 0 : \innp{\Psi\pr{x,r; V} \V{e}, \V{e}} \le 1}.
\label{eqB.5}
\end{align}
\begin{rem}
If $v$ is a scalar ${\text{B}_p}$ function, then we may eliminate the $\V{e}$-dependence and define
\begin{align}
\frac{1}{m\pr{x, v}} = \sup \set{ r > 0 : \Psi\pr{x,r; v} \le 1}.
\label{scalarmDef}
\end{align}
See \cite{She94}, \cite{She95}, \cite{She99}, for example.
\end{rem}
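As a concrete scalar illustration, if $v(x) = \abs{x}^2$ (the harmonic oscillator potential), then $\Psi\pr{x, r; v} = r^{2-n} \int_{Q\pr{x,r}} \abs{y}^2 dy \simeq r^2 \pr{\abs{x} + r}^2$, and a direct computation with \eqref{scalarmDef} shows that
\begin{equation*}
m\pr{x, v} \simeq 1 + \abs{x}.
\end{equation*}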
We recall the following lemma from \cite{She95}, for example, that applies to scalar functions.
\begin{lem}[cf. Lemma 1.4, \cite{She95}]
\label{lB.3}
Assume that $v \in {\text{B}_p}$ for some $p > \frac n 2$.
There exist constants $C, c, k_0 > 0$, depending on $n$, $p$, and $C_v$, so that for any $x, y \in \ensuremath{\mathbb{R}}^n$,
\begin{enumerate}
\item[(a)] If $\displaystyle \abs{x - y} \lesssim \frac{1}{m\pr{x, v}}$, then $\displaystyle m\pr{x, v} \simeq_{(n, p, C_v)} m\pr{y, v}$,
\item[(b)] $\displaystyle m\pr{y, v} \le C \brac{1 + \abs{x - y}m\pr{x, v}}^{k_0} m\pr{x, v}$,
\item[(c)] $\displaystyle m\pr{y, v} \ge \frac{c \, m\pr{x, v}}{\brac{1 + \abs{x -y}m\pr{x, v}}^{k_0/\pr{k_0+1}}}$.
\end{enumerate}
\end{lem}
As the properties described in this lemma will be very useful below, we seek auxiliary functions that also satisfy this set of results in the matrix setting.
We define two auxiliary functions as follows.
\begin{defn}[Lower and upper auxiliary functions]
Let $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$.
We define the \textbf{lower auxiliary function} as follows:
\begin{align}
\frac{1}{\underline{m}\pr{x, V}} = \sup \set{ r > 0 : \min_{\V{e} \in \ensuremath{\mathbb{S}^{d-1}}} \innp{\Psi\pr{x,r; V} \V{e}, \V{e}} \le 1}.
\label{umDef}
\end{align}
The \textbf{upper auxiliary function} is given by
\begin{align}
\frac{1}{\overline{m}\pr{x, V}}
= \sup \set{ r > 0 : \max_{\V{e} \in \ensuremath{\mathbb{S}^{d-1}}} \innp{\Psi\pr{x,r; V} \V{e}, \V{e}} \le 1}
= \sup \set{ r > 0 : \abs{\Psi\pr{x,r; V}} \le 1}.
\label{omDef}
\end{align}
\end{defn}
\begin{rem}
\label{noND}
Since $\abs{\Psi\pr{x, r; V}}$ satisfies Lemma \ref{lB.1} whenever $V \in {\MC{B}_p}$, then for the upper auxiliary function, $\overline{m}\pr{x, V}$, we do not need to assume that $V \in \MC{ND}$.
\end{rem}
For $V$ fixed, we define
\begin{equation*}
\begin{aligned}
\underline{\Psi}\pr{x} &= \Psi\pr{x, \frac 1 {\underline{m}\pr{x, V}}; V} \\
\overline{\Psi}\pr{x} &= \Psi\pr{x, \frac 1 {\overline{m}\pr{x, V}}; V},
\end{aligned}
\end{equation*}
and then observe that $\overline{\Psi}\pr{x} \le I \le \underline{\Psi}\pr{x}$, which follows from the definitions \eqref{umDef} and \eqref{omDef} together with the continuity of $r \mapsto \innp{\Psi\pr{x, r; V} \V{e}, \V{e}}$.
In particular, for every $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation}
\label{psi12Prop}
\innp{\overline{\Psi}\pr{x} \V{e}, \V{e}} \le 1 \le \innp{\underline{\Psi}\pr{x} \V{e}, \V{e}}.
\end{equation}
With this pair of functions in hand, we now seek to prove Lemma \ref{lB.3} for both $\underline{m}$ and $\overline{m}$.
The following pair of observations for each auxiliary function will allow us to prove the desired results.
\begin{lem}[Lower observation]
\label{compLem}
Let $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$.
If $\displaystyle c \ge \innp{\Psi\pr{x,r; V} \V{e}, \V{e}}$ for some $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$, then $\displaystyle r \leq \max\set{ 1, (C_Vc)^\frac{p}{2p - n}} \frac 1{\underline{m}(x, V)} $.
\end{lem}
\begin{proof}
If $r \le \frac{1}{\underline{m}\pr{x, V}}$, then we are done, so assume that $\frac 1 {\underline{m}\pr{x, V}} < r$.
Then it follows from \eqref{psi12Prop} and Lemma~\ref{lB.1} that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{align*}
1 &\le \innp{\underline{\Psi}\pr{x} \V{e}, \V{e}}
= \innp{\Psi\pr{x, \frac 1 {\underline{m}\pr{x, V}}; V} \V{e}, \V{e}}
\le C_V \pr{\frac 1 {\underline{m}\pr{x, V}r}}^{2 - \frac n p} \innp{\Psi\pr{x, r; V} \V{e}, \V{e}} \\
&\le C_V c \pr{\frac 1 {\underline{m}\pr{x, V}r}}^{2 - \frac n p} .
\end{align*}
Rearranging gives $\pr{\underline{m}\pr{x, V} r}^{2 - \frac n p} \le C_V c$, and since $2 - \frac n p = \frac{2p - n}{p} > 0$, this yields $r \le \pr{C_V c}^{\frac{p}{2p-n}} \frac{1}{\underline{m}\pr{x, V}}$; combined with the first case, the conclusion follows.
\end{proof}
As we observed in Lemma \ref{normVBp}, if $V \in {\MC{B}_p}$, then $\abs{V} \in {\text{B}_p}$.
Thus, it is meaningful to discuss the quantity $m\pr{x, \abs{V}}$.
For $\overline{m}\pr{x, V}$, we rely on the following relationship regarding norms.
Note that by Remark \ref{noND}, we do not need to assume that $V \in \MC{ND}$ for this result.
\begin{lem}[Upper auxiliary function relates to norm]
\label{omCompLem}
If $V \in {\MC{B}_p}$ for some $p > \frac n 2$, then
$$\overline{m}(x, V) \le m(x, \abs{V}) \le \pr{d^2 C_V}^{\frac{p}{2p-n}} \overline{m}(x, V).$$
\end{lem}
\begin{proof}
For any $r > 0$, choose $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$ so that
\begin{align*}
\abs{\Psi(x, r ;V)}
= \innp{\Psi(x, r ;V) \V{e}, \V{e}}
= \innp{\pr{\frac{1}{r^{n-2}} \int_{Q\pr{x,r}} V\pr{y}dy } \V{e}, \V{e}}
= \frac{1}{r^{n-2}} \int_{Q\pr{x,r}} \innp{V\pr{y} \V{e}, \V{e}} dy.
\end{align*}
Since $\innp{V\pr{y} \V{e}, \V{e}} \le \abs{V(y)}$ for a.e. $y$, then $\abs{\Psi(x, r ;V)} \leq \Psi(x, r ; \abs{V})$.
It follows that $\frac{1}{\overline{m}\pr{x, V}} \geq \frac{1}{m\pr{x, \abs{V}}}$ so that
$${m(x, \abs{V})} \geq \overline{m}(x, V).$$
Let $\set{\V{e}_i}_{i=1}^d$ denote the standard basis of $\ensuremath{\mathbb{R}}^d$.
For any $r > 0$, it follows from \eqref{normProp} that
\begin{equation}
\label{normRelationship}
\begin{aligned}
\Psi(x, r; \abs{V})
&= r^{2-n} \int_{Q(x, r)} \abs{V(y)} \, dy
\le r^{2-n} \int_{Q(x, r)} d \sum_{j = 1}^d \innp{V(y)\V{e}_j, \V{e}_j} \, dy \\
&= d \sum_{j = 1}^d \innp{\Psi(x, r; V) \V{e}_j, \V{e}_j}
\le d^2 \abs{\Psi(x, r;V)}.
\end{aligned}
\end{equation}
Combining the fact that $\Psi\pr{x, \frac{1}{m\pr{x, \abs{V}}}; \abs{V}} = 1$ with the previous observation, Lemma \ref{BasicShenLem}, and the definition of $\overline{m}$ shows that
\begin{align*}
1 & = \Psi\pr{x, \frac{1}{m\pr{x, \abs{V}}}; \abs{V}}
\le d^2 \abs{\Psi\pr{x, \frac{1}{m\pr{x, \abs{V}}}; V}} \\
&\le d^2 C_V \brac{\frac{\overline{m}(x, V)}{{m}(x, \abs{V})}}^{2-\frac{n}{p}} \abs{\Psi\pr{x, \frac{1}{\overline{m}\pr{x, {V}}}; V}}
= d^2 C_V \brac{\frac{\overline{m}(x, V)}{{m}(x, \abs{V})}}^{2-\frac{n}{p}},
\end{align*}
and the second part of the inequality follows.
\end{proof}
Since $\abs{V} = \lambda_d$, the largest eigenvalue of $V$, then this result shows that $\overline{m}\pr{x, V} \simeq m\pr{x, \lambda_d}$, indicating why we call $\overline{m}$ the upper auxiliary function.
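As an illustration of how different the two auxiliary functions can be, consider the diagonal weight
\begin{equation*}
V(x) = \begin{pmatrix} 1 & 0 \\ 0 & \abs{x}^2 \end{pmatrix}
\end{equation*}
with $d = 2$, for which $\innp{\Psi\pr{x, r; V} \V{e}, \V{e}} = e_1^2 \, \Psi\pr{x, r; 1} + e_2^2 \, \Psi\pr{x, r; \abs{\cdot}^2}$.
One can check directly from \eqref{umDef} and \eqref{omDef} that
\begin{equation*}
\underline{m}\pr{x, V} \simeq 1
\qquad \text{while} \qquad
\overline{m}\pr{x, V} \simeq 1 + \abs{x},
\end{equation*}
in agreement with the behavior of the smallest and largest eigenvalues of $V$.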
Now we use these lemmas to establish a number of important tools related to the functions $\underline{m}\pr{x, V}$ and $\overline{m}\pr{x, V}$.
From now on we will assume that $|\cdot|$ on $\ensuremath{\mathbb{R}}^n$ refers to the $\ell_\infty$ norm.
\begin{lem}[Auxiliary function properties]
\label{muoBounds}
If $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$, then both $\underline{m}(\cdot, V)$ and $\overline{m}(\cdot, V)$ satisfy the conclusions of Lemma \ref{lB.3} where all constants depend on $n$, $p$, and $C_V$.
For $\overline{m}(\cdot, V)$, the constants depend additionally on $d$ and we may eliminate the assumption that $V \in \MC{ND}$.
\end{lem}
\begin{proof}
First consider $\overline{m}(\cdot, V)$.
Lemma \ref{omCompLem} combined with Lemma \ref{normVBp} implies that all of these properties follow immediately from Lemma \ref{lB.3}.
Now consider $\underline{m}(\cdot, V)$.
Suppose $\abs{x - y} \le \frac{2^j -1}{\underline{m}\pr{x, V}}$ for some $j \in \ensuremath{\mathbb{N}}$.
Then $Q\pr{y, \frac 1 {\underline{m}\pr{x, V}}} \subset Q\pr{x, \frac {2^j} {\underline{m}\pr{x, V}}}$.
Choose $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$ so that $\innp{\Psi\pr{x, \frac 1 {\underline{m}\pr{x, V}}; V} \V{e}, \V{e}} = 1$.
Then
\begin{align*}
\innp{\Psi\pr{y, \frac 1 {\underline{m}\pr{x, V}}; V} \V{e}, \V{e}}
&= \underline{m}\pr{x, V}^{n-2} \int_{Q\pr{y, \frac 1 {\underline{m}\pr{x, V}}}} \innp{V\pr{z} \V{e}, \V{e}} dz
\le \underline{m}\pr{x, V}^{n-2}\int_{Q\pr{x, \frac {2^j} {\underline{m}\pr{x, V}}}} \innp{V\pr{z} \V{e}, \V{e}} dz \\
&\le \underline{m}\pr{x, V}^{n-2} \gamma^j \int_{Q\pr{x, \frac {1} {\underline{m}\pr{x, V}}}} \innp{V\pr{z} \V{e}, \V{e}} dz
= \gamma^j \innp{\underline{\Psi}\pr{x} \V{e}, \V{e}}
= \gamma^j,
\end{align*}
where we have used Lemma \ref{Vdbl} and $\gamma$ denotes the doubling constant.
It then follows from Lemma \ref{compLem} that $\displaystyle \frac 1 {\underline{m}\pr{x, V}} \le \frac{\max\set{1, \pr{C_V \gamma^j}^{p/(2p-n)}}} {\underline{m}\pr{y, V}}$ or
\begin{align}
\underline{m}\pr{y, V} \le \pr{C_V \gamma^j}^{p/(2p-n)} \underline{m}\pr{x, V}.
\label{yxBd}
\end{align}
Since $\abs{x - y} \le \frac{2^j -1}{\underline{m}\pr{x, V}}$ and $\frac 1 {\underline{m}\pr{x, V}} \le \frac {\pr{C_V \gamma^j}^{p/(2p-n)}} {\underline{m}\pr{y, V}}$, then $\abs{x - y} \le \frac{\pr{2^j - 1} \pr{C_V \gamma^j}^{p/(2p-n)}}{\underline{m}\pr{y, V}}$.
Thus, $Q\pr{x, \frac 1 {\underline{m}\pr{y, V}}} \subset Q\pr{y, \frac {\pr{2^j - 1}\pr{C_V \gamma^j}^{p/(2p-n)}+1} {\underline{m}\pr{y, V}}}$.
Setting $\displaystyle \tilde j = \ceil{\ln\brac{\pr{2^j - 1}\pr{C_V \gamma^j}^{p/(2p-n)}+1} / \ln 2}$, it can be shown, as above, that
\begin{align*}
\innp{\Psi\pr{x, \frac 1 {\underline{m}\pr{y, V}}; V} \V{e}, \V{e}}
\le \gamma^{\tilde j},
\end{align*}
where now $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$ is such that $\innp{\underline{\Psi}\pr{y} \V{e}, \V{e}} = 1$.
Arguing as above, we see that $\frac{1}{\underline{m}\pr{y, V}} \le \frac{ \max\set{1, \pr{C_V \gamma^{\tilde j}}^{p/(2p-n)}} }{\underline{m}\pr{x, V}}$, or
\begin{align}
\underline{m}\pr{x, V} \le \pr{C_V \gamma^{\tilde j}}^{p/(2p-n)} \underline{m}\pr{y, V}.
\label{xyBd}
\end{align}
When $\abs{x - y} \lesssim \frac{1}{\underline{m}\pr{x, V}}$, we have that $j \simeq 1$ and $\tilde j \simeq 1$.
Then statement (a) is a consequence of \eqref{yxBd} and \eqref{xyBd}.
If $\abs{x - y} \le \frac 1 {\underline{m}\pr{x,V}}$, then part (a) implies that $\underline{m}\pr{y, V} \lesssim \underline{m}\pr{x, V}$ and the conclusion of (b) follows.
Otherwise, choose $j \in \ensuremath{\mathbb{N}}$ so that $\frac{2^{j-1}-1}{\underline{m}\pr{x, V}} \le \abs{x - y} < \frac{2^j-1}{\underline{m}\pr{x, V}}$.
From \eqref{yxBd}, we see that
$$\underline{m}\pr{y, V}
\le \pr{C_V \gamma}^{\frac p {2p-n}} \pr{2^{j-1}}^{\frac{p \ln\gamma}{\pr{2p-n} \ln 2 } } \underline{m}\pr{x, V}
\le \pr{C_V \gamma}^{\frac p {2p-n}} \brac{1 + \abs{x - y}\underline{m}\pr{x, V}}^{\frac{p \ln\gamma}{\pr{2p-n} \ln 2 } } \underline{m}\pr{x, V}.$$
Setting $C = \pr{C_V \gamma}^{\frac p {2p-n}}$ and $k_0 = \frac{p \ln\gamma}{\pr{2p-n} \ln 2 }$ gives the conclusion of (b).
If $\abs{x - y} \le \frac{1}{\underline{m}\pr{x, V}}$ or $\abs{x - y} \le \frac{1}{\underline{m}\pr{y, V}}$, then part (a) implies that $\underline{m}\pr{x, V} \lesssim \underline{m}\pr{y, V}$, and the conclusion of (c) follows.
Thus, we consider when $\abs{x - y} > \frac{1}{\underline{m}\pr{x, V}}$ and $\abs{x - y} > \frac{1}{\underline{m}\pr{y, V}}$.
Repeating the arguments from the previous paragraph with $x$ and $y$ interchanged, we see that
$$\underline{m}\pr{x, V} \le C \brac{1 + \abs{x - y}\underline{m}\pr{y, V}}^{k_0} \underline{m}\pr{y, V} \le 2^{k_0}C \abs{x - y}^{k_0}\underline{m}\pr{y, V}^{k_0+1}.$$
Rearranging gives that
\begin{align*}
\underline{m}\pr{y, V}
\ge \frac{2^{-k_0/(k_0+1)}C^{-1/(k_0+1)} \underline{m}\pr{x, V}}{ \pr{\underline{m}\pr{x, V}\abs{x - y}}^{k_0/(k_0+1)}}
\ge \frac{2^{-k_0/(k_0+1)}C^{-1/(k_0+1)} \underline{m}\pr{x, V}}{ \pr{1 + \underline{m}\pr{x, V}\abs{x - y}}^{k_0/(k_0+1)}}.
\end{align*}
Taking $c = 2^{-k_0/(k_0+1)}C^{-1/(k_0+1)}$ leads to the conclusion of (c).
\end{proof}
Using these auxiliary functions, we now define the associated Agmon distance functions.
\begin{defn}[Agmon distances]
Let $\underline{m}(\cdot, V)$ and $\overline{m}(\cdot, V)$ be as in \eqref{umDef} and \eqref{omDef}, respectively.
We define the \textbf{lower Agmon distance function} as
\begin{equation*}
\underline{d}(x, y, V) = \inf_{\gamma} \int_0^1 \underline{m}(\gamma(t), V) |\gamma'(t)|\, dt ,
\end{equation*}
and the \textbf{upper Agmon distance function} as
\begin{equation*}
\overline{d}(x, y, V) = \inf_{\gamma} \int_0^1 \overline{m}(\gamma(t), V) |\gamma'(t)|\, dt ,
\end{equation*}
where in both cases, the infimum ranges over all absolutely continuous $\gamma:[0,1] \to \ensuremath{\mathbb{R}}^n$ with $\gamma(0) = x$ and $\gamma(1) = y$.
\end{defn}
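For example, if $\underline{m}\pr{\cdot, V} \equiv m_0$ is constant, then the infimum is attained along the straight line segment from $x$ to $y$ and $\underline{d}\pr{x, y, V} = m_0 \abs{x - y}$.
In general, the Agmon distances rescale Euclidean length according to the local size of the auxiliary functions.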
We make the following observation.
\begin{lem}[Property of Agmon distances]
\label{closeRemark}
If $|x-y| \leq \frac{C}{\underline{m}(x, V)}$, then $\underline{d}\pr{x, y, V} \lesssim_{(n, p, C_V)} C$.
If $|x-y| \leq \frac{C}{\overline{m}(x, V)}$, then $\overline{d}\pr{x, y, V} \lesssim_{(d, n, p , C_V)} C$.
\end{lem}
\begin{proof}
We only prove the first statement since the second one is analogous.
Let $x, y \in \ensuremath{\mathbb{R}}^n$ be as given.
Define $\gamma : \brac{0,1} \to \ensuremath{\mathbb{R}}^n$ by $\gamma\pr{t} = x + t\pr{y - x}$.
By Lemma \ref{muoBounds}(a), $\underline{m}\pr{\gamma\pr{t}, V} \lesssim_{(n, p, C_V)} \underline{m}\pr{x, V}$ for all $t \in \brac{0, 1}$.
It follows that
\begin{align*}
\underline{d}\pr{x, y, V}
&\le \int_0^1 \underline{m}(\gamma(t), V) \abs{\gamma'(t)} dt
\lesssim_{(n, p, C_V)} \int_0^1 \underline{m}\pr{x, V} \abs{x - y} dt
\lesssim_{(n, p, C_V)} C,
\end{align*}
as required.
\end{proof}
In future sections, the lower Agmon distance function will be an important tool for us once it has been suitably regularized.
We regularize this function $\underline{d}(\cdot, \cdot, V)$ by following the procedure from \cite{She99}.
Observe that by Lemma \ref{muoBounds}(c), $\underline{m}\pr{\cdot, V}$ is a slowly varying function; see \cite[Definition 1.4.7]{Hor03}, for example.
As such, we have the following.
\begin{lem}[cf. the proof of Lemma 3.3 in \cite{She99}]
\label{partofU}
There exist sequences $\set{x_j}_{j=1}^\infty \subset \ensuremath{\mathbb{R}}^n$ and $\set{\phi_j}_{j=1}^\infty \subset C^\infty_0\pr{\ensuremath{\mathbb{R}}^n}$ such that
\begin{itemize}
\item[(a)] $\displaystyle \ensuremath{\mathbb{R}}^n = \bigcup_{j=1}^\infty Q_j$, where $\displaystyle Q_j = Q\pr{x_j, \frac 1 {\underline{m}\pr{x_j, V}}}$,
\item[(b)] $\phi_j \in C^\infty_0\pr{Q_j}$, $0 \le \phi_j \le 1$, and $\displaystyle \sum_{j=1}^\infty \phi_j = 1$,
\item[(c)] $\abs{\nabla \phi_j\pr{x}} \lesssim_{(n, p, C_V)} \underline{m}\pr{x, V}$,
\item[(d)] $\displaystyle \sum_{j=1}^\infty \chi_{Q_j} \lesssim_{(n, p, C_V)} 1.$
\end{itemize}
\end{lem}
\begin{rem}
\label{partofURem}
Since $\overline{m}\pr{\cdot, V}$ is also a slowly varying function, the same result applies to $\overline{m}(\cdot, V)$ with constants that depend additionally on $d$.
\end{rem}
Using this lemma and \cite[Theorem 1.4.10]{Hor03}, we can follow the process from \cite[p. 542]{She99} to establish the following pair of results.
\begin{lem}[Lemma 3.3 in \cite{She99}]
\label{RegLem0}
For each $y \in \ensuremath{\mathbb{R}}^n$, there exists a nonnegative function $\varphi_V(\cdot, y) \in C^\infty(\ensuremath{\mathbb{R}}^n)$ such that for every $x \in \ensuremath{\mathbb{R}}^n$,
$$\abs{\varphi_V(x, y) - \underline{d}(x, y, V)} \lesssim_{(n, p, C_V)} 1$$
and
$$|\nabla_x \varphi_V(x, y)| \lesssim_{(n, p, C_V)} \underline{m}(x, V).$$
\end{lem}
\begin{lem}[Lemma 3.7 in \cite{She99}]
\label{RegLem1}
For each $y \in \ensuremath{\mathbb{R}}^n$, there exists a sequence of nonnegative, bounded functions $\set{\varphi_{V, j}\pr{\cdot, y}} \subset C^\infty(\ensuremath{\mathbb{R}}^n)$ such that for every $x \in \ensuremath{\mathbb{R}}^n$,
$$\varphi_{V, j} (x, y) \leq \varphi_V(x, y)$$
and
$$\varphi_{V, j} (x, y) \to \varphi_V(x, y) \text{ as } j \to \infty.$$
Moreover,
$$\abs{\nabla_x \varphi_{V,j}(x, y)} \lesssim_{(n, p, C_V)} \underline{m}(x, V).$$
\end{lem}
To conclude the section, we observe that under the stronger assumption that $V \in {\MC{B}_p} \cap {\MC{A}_\iny}$, we can prove a result analogous to Lemma \ref{omCompLem} for the smallest eigenvalue.
By Proposition \ref{la1Prop}, $\lambda_1 \in {\text{B}_p}$, so it is meaningful to discuss $m\pr{x, \lambda_1}$.
In subsequent sections, we will not assume that $V \in {\MC{A}_\iny}$, so this result should be treated as an interesting observation.
Its proof is provided at the end of Appendix \ref{TechProofs}.
\begin{prop}[Lower auxiliary function relates to $\lambda_1$]
\label{umCompLem}
If $V \in {\MC{B}_p} \cap \MC{ND} \cap {\MC{A}_\iny}$ for some $p > \frac n 2$, then
$$m(x, \lambda_1) \le \underline{m}(x, V) \lesssim m(x, \lambda_1),$$
where the implicit constant depends on $n$, $p$, $C_V$ and the ${\MC{A}_\iny}$ constants.
\end{prop}
This result leads to the following observation.
\begin{cor}
If $V \in {\MC{B}_p} \cap \MC{ND} \cap {\MC{A}_\iny}$ for some $p > \frac n 2$, then $m\pr{x, \lambda_1}$ satisfies the conclusions of Lemma \ref{lB.3} where the constants have additional dependence on the ${\MC{A}_\iny}$ constants.
\end{cor}
\begin{rem}
In fact, if we assume that $V \in {\MC{B}_p} \cap \MC{ND} \cap {\MC{A}_\iny}$, then we can show that Lemma \ref{muoBounds} holds for $\underline{m}\pr{\cdot, V}$ in the same way that we show it holds for $\overline{m}\pr{\cdot, V}$.
That is, we apply Lemma \ref{lB.3} to $\lambda_1$, then use Proposition \ref{umCompLem}.
\end{rem}
\section{Fefferman-Phong Inequalities}
\label{FPI}
In this section, we present and prove our matrix versions of the Fefferman-Phong inequalities.
The first result is a lower Fefferman-Phong inequality which holds with the lower auxiliary function from \eqref{umDef}.
This result will be applied in Section \ref{UpBds} where we establish upper bound estimates for the fundamental matrices.
A corollary to this lower Fefferman-Phong inequality, which is used in Section \ref{LowBds} to prove lower bound estimates for the fundamental matrices, is then provided.
In keeping with \cite{She99}, we also present the upper bound with the upper auxiliary function from \eqref{omDef}.
Before stating and proving the lower Fefferman-Phong inequality, we present the Poincar\'e inequality that will be used in its proof.
Additional and more complex matrix-valued Poincar\'e inequalities and related Chanillo-Wheeden type conditions appear in the forthcoming manuscript \cite{DI22}.
\begin{prop}[Poincar\'e inequality]
\label{PoincareIneqThm}
Let $V \in \mathcal{B}_{\frac{n}{2}}$.
For any open cube $Q \subset \ensuremath{\mathbb{R}}^n$ and any $\V{u} \in C^1(Q)$, we have
\begin{equation*}
{\int_Q \int_Q \abs{(V(Q))^{-\frac12} V^\frac12 (y) \pr{\V{u} (x) - \V{u} (y)}}^2 \, dx \, dy}
\lesssim_{\pr{d, n, C_V}} |Q| ^\frac{2}{n} {\int_Q \abs{ D \V{u} (x)}^2 \, dx}.
\end{equation*}
\end{prop}
We prove this result by following the arguments from the scalar version of the Poincar\'{e} inequality in Shen's article, \cite[Lemma 0.14]{She99}.
\begin{proof}
Fix a cube $Q$ and define the scalar weight $v_Q (y) = \abs{V(Q)^{-\frac12} V(y) V(Q)^{-\frac12}}$.
First we show that $v_Q \in B_{\frac n 2}$ with a constant depending only on $d$, $n$, and $C_V$.
For an arbitrary cube $P$, observe that by \eqref{normProp}
\begin{align*}
\pr{\fint_{P} \abs{v_Q (y)}^{\frac n 2} \, dy}^{\frac 2 n}
&\le \brac{ \fint_{P} \pr{d \sum_{j = 1}^d \innp{V(Q)^{-\frac12} V(y) V(Q)^{-\frac12} \V{e}_j, \V{e}_j}}^{\frac n 2} \, dy }^{\frac 2 n} \\
&\le d^{2 -\frac 2 n} \sum_{j = 1}^d \pr{ \fint_{P} \innp{V(Q)^{-\frac12} V(y) V(Q)^{-\frac12} \V{e}_j, \V{e}_j}^{\frac n 2} \, dy }^{\frac 2 n} \\
&\le d^{2 -\frac 2 n} C_V \fint_{P} \sum_{j = 1}^d \innp{V(Q)^{-\frac12} V(y) V(Q)^{-\frac12} \V{e}_j, \V{e}_j}\, dy
\le d^{3 -\frac 2 n} C_V \fint_{P} \abs{v_{Q} (y)} \, dy,
\end{align*}
where we have used that $V \in \mathcal{B}_{\frac n 2}$ to reach the third line.
This shows that $v_Q \in B_{\frac n 2}$.
Since $v_Q \in B_{\frac{n}{2}}$, then it follows from \cite[Lemma 0.14]{She99} that
\begin{align*}
\int_Q \int_Q |(V(Q))^{-\frac12} & V^\frac12 (y) (\V{u} (x) - \V{u} (y))|^2 \, dx \, dy
\le {\int_Q \int_Q \abs{(V(Q))^{-\frac12} V^\frac12 (y)}^2 \abs{\V{u} (x) - \V{u} (y)}^2 \, dx \, dy}
\\ & = {\int_Q \int_Q \abs{\V{u} (x) - \V{u} (y)}^2 \, v_Q (y) dy \, dx}
= {\int_Q \int_Q \sum_{j = 1}^d \abs{u_j (x) - u_j (y)}^2 \, v_Q (y) dy \, dx}
\\ & \lesssim_{\pr{d, n, C_V}} |Q|^{\frac{2}{n}} v_Q (Q) \int_Q \sum_{j = 1}^d \abs{\nabla u_j (x)}^2 \, dx
= |Q|^{\frac{2}{n}} v_Q (Q) \int_Q \abs{ D \V{u} (x)}^2 \, dx.
\end{align*}
Since
\begin{align*}
v_Q (Q)
& = \int_Q \abs{V(Q)^{-\frac12} V(y) V(Q)^{-\frac12}} \, dy
\le d \sum_{j = 1}^d \int_Q \innp{V(Q)^{-\frac12} V(y) V(Q)^{-\frac12} \V{e}_j, \V{e}_j} \, dy
\\ & = d \sum_{j = 1}^d \innp{V(Q)^{-\frac12} V(Q) V(Q)^{-\frac12} \V{e}_j, \V{e}_j}
= d^2,
\end{align*}
the conclusion follows.
\end{proof}
Now we present the lower Fefferman-Phong inequality.
This result will be applied in Section \ref{UpBds} to prove the exponential upper bound on the fundamental matrix.
Note that we assume here that $V$ belongs to the first three matrix classes that were introduced in Section \ref{MWeights}.
\begin{lem}[Lower Auxiliary Function Fefferman-Phong Inequality]
\label{FPml}
Assume that $V \in {\MC{B}_p} \cap \MC{ND} \cap \MC{NC}$ for some $p > \frac{n}{2}$.
Then for any $ \V{u} \in C^1 _0(\ensuremath{\mathbb{R}}^n)$, it holds that
$$\int_{\ensuremath{\mathbb{R}}^n} \abs{\underline{m}\pr{x, V} \V{u}(x)}^2 \, dx
\lesssim_{(d, n, p, C_V, N_V)} \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D\V{u} (x)}^2 \, dx
+ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{V^\frac12 (x) \V{u} (x)}^2 \, dx.$$
\end{lem}
\begin{proof}
For some $x_0 \in \ensuremath{\mathbb{R}}^n$, let $r_0 = \frac{1}{\underline{m}(x_0, V)}$ and set $Q = Q(x_0, r_0)$.
Property $\MC{NC}$ in \eqref{NCCond} shows that
\begin{align}
N_V \int_{Q} \abs{\V{u}(x)}^2 \, dx
&\le \int_{Q} \int_{Q} \innp{V^\frac12 (y) V(Q) ^{-1} V^\frac12 (y)\V{u}(x), \V{u}(x)} \, dy dx \nonumber \\
&= \int_{Q} \int_{Q} \abs{(V(Q)) ^{-\frac12} V^\frac12(y) \V{u}(x)}^2 \, dy dx \nonumber \\
& \lesssim \int_{Q} \int_{Q} \abs{(V(Q)) ^{-\frac12} V^\frac12(y)( \V{u}(x) - \V{u}(y))}^2 \, dy dx
+ \int_{Q} \int_{Q} \abs{(V(Q))^{-\frac12} V^\frac12(y) \V{u}(y)}^2 \, dy dx \nonumber \\
& \lesssim_{(d, n, C_V)} r_0^{2} \int_{Q} \abs{D\V{u}(x)}^2 \, dx
+ r_0^n \int_{Q} \abs{(V(Q))^{-\frac12} V^\frac12(y) \V{u}(y)}^2 \, dy,
\label{NCPoincIneq}
\end{align}
where the last line follows from an application of Proposition \ref{PoincareIneqThm}.
Now we multiply this inequality through by $r_0^{-2} = \underline{m}\pr{x_0, V}^{2}$, then apply Lemma \ref{muoBounds} to conclude that $\underline{m}\pr{x_0, V} \simeq_{(n, p, C_V)} \underline{m}\pr{x, V}$ on $Q$.
It follows that
\begin{align*}
\int_{Q} \abs{\underline{m}\pr{x, V} \V{u}(x)}^2 \, dx
&\lesssim_{(d, n, p, C_V, N_V)} \int_{Q} \abs{D\V{u}(x)}^2 \, dx + r_0^{n-2} \abs{(V(Q))^{-1}} \int_{Q} \abs{V^\frac12(y) \V{u}(y)}^2 \, dy.
\end{align*}
Since $r_0^{2-n}V(Q) = \Psi(x_0, r_0; V) \geq I$ implies that $r_0^{n-2} \abs{(V(Q))^{-1}} = \abs{\Psi(x_0, r_0; V)^{-1} } \leq 1$, then for any $Q = Q\pr{x_0, \frac 1 {\underline{m}\pr{x_0, V}}}$, we have shown that
\begin{align}
\label{loweronCubes}
\int_{Q} \abs{\underline{m}\pr{x, V} \V{u}(x)}^2 \, dx
&\lesssim_{(d, n, p, C_V, N_V)} \int_{Q} \abs{D\V{u}(x)}^2 \, dx + \int_{Q} \abs{V^\frac12(x) \V{u}(x)}^2 \, dx.
\end{align}
According to Lemma \ref{partofU}, there exists a sequence $\set{x_j}_{j=1}^\infty \subset \ensuremath{\mathbb{R}}^n$ such that if we define $\displaystyle Q_j = Q\pr{x_j, \frac 1 {\underline{m}\pr{x_j, V}}}$, then $\displaystyle \ensuremath{\mathbb{R}}^n = \bigcup_{j=1}^\infty Q_j$ and $\displaystyle \sum_{j=1}^\infty \chi_{Q_j} \lesssim_{(n, p, C_V)} 1.$
Therefore, it follows from \eqref{loweronCubes} that
\begin{align*}
\int_{\ensuremath{\mathbb{R}}^n}\abs{\underline{m}\pr{x, V} \V{u}(x)}^2 \, dx
&\le \sum_{j=1}^\infty \int_{Q_j} \abs{\underline{m}\pr{x, V} \V{u}(x)}^2 \, dx \\
&\lesssim_{(d, n, p, C_V, N_V)} \sum_{j=1}^\infty \pr{\int_{Q_j} \abs{D\V{u}(x)}^2 \, dx + \int_{Q_j} \abs{V^\frac12(x) \V{u}(x)}^2 \, dx} \\
&\lesssim_{(n, p, C_V)} \int_{\ensuremath{\mathbb{R}}^n} \abs{ D\V{u}(x)}^2 \, dx + \int_{\ensuremath{\mathbb{R}}^n} \abs{V^\frac12(x) \V{u}(x)}^2 \, dx,
\end{align*}
as required.
\end{proof}
\begin{rem}
If we take $\V{u}$ to be an arbitrary constant vector on $Q$, then the condition $\MC{NC}$ is necessary for \eqref{NCPoincIneq} to hold on all such cubes.
As such, the condition $\MC{NC}$ is a very natural assumption to impose.
In fact, as we show in Appendix \ref{Examples}, there are matrix weights $V \in \pr{{\MC{B}_p} \cap \MC{ND}} \setminus \MC{NC}$ for which this Fefferman-Phong estimate fails to hold.
\end{rem}
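To see what Lemma \ref{FPml} says in a familiar special case, take $V(x) = \abs{x}^2 I$; one can check that this weight belongs to ${\MC{B}_p} \cap \MC{ND} \cap \MC{NC}$ (for scalar-type weights $v I$, condition $\MC{NC}$ holds with $N_V = 1$), and that, as in the scalar computation following \eqref{scalarmDef}, $\underline{m}\pr{x, V} \simeq 1 + \abs{x}$.
The lemma then reduces, up to constants, to an uncertainty-type inequality for the harmonic oscillator:
\begin{equation*}
\int_{\ensuremath{\mathbb{R}}^n} \pr{1 + \abs{x}}^2 \abs{\V{u}(x)}^2 \, dx
\lesssim \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D\V{u} (x)}^2 \, dx
+ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{x}^2 \abs{\V{u} (x)}^2 \, dx.
\end{equation*}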
Finally, if we replace $V$ by $\abs{V} I$, we are essentially reduced to the scalar setting and we only need that $\abs{V} \in {\text{B}_p}$.
In particular, we don't need to assume that $V \in \MC{NC}$.
As shown in Sections \ref{UpBds} and \ref{LowBds}, this result will be applied to prove the exponential lower bound on the fundamental matrix.
\begin{cor}[Norm Fefferman-Phong Inequality]
\label{FPmlCor}
Assume that $\abs{V} \in {\text{B}_p}$ for some $p > \frac{n}{2}$.
Then for any $ \V{u} \in C^1 _0(\ensuremath{\mathbb{R}}^n)$, it holds that
$$\int_{\ensuremath{\mathbb{R}}^n} \abs{m\pr{x, \abs{V}} \V{u}(x)}^2 \, dx
\lesssim_{(d, n, p, C_{\abs{V}})} \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D\V{u} (x)}^2 \, dx
+ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{V} \abs{\V{u} (x)}^2 \, dx.$$
\end{cor}
To conclude the section, although we will not use it, we present the straightforward upper bound, which is similar to \cite[Theorem 1.13(b)]{She99}.
Notice that for this result, we only assume that $V \in {\MC{B}_p}$; by Remark \ref{noND}, the nondegeneracy assumption is not needed for $\overline{m}$.
\begin{prop}[Upper Auxiliary Function Fefferman-Phong Inequality]
\label{FPmu}
Assume that $V \in {\MC{B}_p}$ for some $p > \frac n 2$.
For any $ \V{u} \in C_0^1(\ensuremath{\mathbb{R}}^n)$, it holds that
$$\int_{\ensuremath{\mathbb{R}}^n} \abs{V^\frac12(x) \V{u}(x)}^2 \, dx \lesssim_{(d, n, p, C_V)} \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D\V{u} (x)}^2 \, dx + \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{\overline{m}(x, V) \V{u} (x)}^2 \, dx.$$
\end{prop}
\begin{proof}
Note that $\displaystyle \abs{V^\frac12(x) \V{u}(x)}^2 = \innp{V(x) \V{u}(x), \V{u}(x)} \le \abs{V\pr{x}} \abs{\V{u}\pr{x} }^2$.
For some $x_0 \in \ensuremath{\mathbb{R}}^n$, let $r_0 = \frac{1}{\overline{m}(x_0, V)}$ and set $Q = Q(x_0, r_0)$.
Then using the classical Poincar\'{e} inequality, we have that
\begin{align*}
\int_{Q} \abs{V\pr{x}} \abs{\V{u}\pr{x} }^2 \, dx
&= \frac{1}{\abs{Q}} \int_{Q} \int_{Q} \abs{V\pr{x}} \abs{\V{u}\pr{x} }^2 \, dx \, dy \\
&\lesssim \frac{1}{\abs{Q}} \int_{Q} \int_{Q} \abs{V\pr{x}} \abs{\V{u}\pr{x} - \V{u}\pr{y}}^2\, dx \, dy
+ \frac{1}{\abs{Q}} \int_{Q} \int_{Q} \abs{V\pr{x}} \abs{\V{u}\pr{y} }^2 \, dx \, dy \\
&\lesssim_{(n)} r_0^{2-n} \int_{Q} \abs{V\pr{x}} dx \int_{Q} \abs{D\V{u}\pr{y}}^2\, dy
+ r_0^{-n} \int_{Q} \abs{V\pr{x}} dx \int_{Q} \abs{\V{u}\pr{y} }^2 \, dy \\
&= \Psi\pr{x_0, r_0; \abs{V}}\pr{ \int_{Q} \abs{D\V{u}\pr{y}}^2\, dy
+ r_0^{-2} \int_{Q} \abs{\V{u}\pr{y} }^2 \, dy} \\
&\lesssim_{(d)} \int_{Q} \abs{D\V{u}\pr{y}}^2\, dy
+ r_0^{-2} \int_{Q} \abs{\V{u}\pr{y} }^2 \, dy,
\end{align*}
since by \eqref{normRelationship}, $d^{-2} \Psi\pr{x_0, r_0; \abs{V}} \le \abs{\Psi\pr{x_0, r_0; V}} = 1$.
We apply Lemma \ref{muoBounds} to conclude that for $x \in Q$, $r_0^{-1} = \overline{m}\pr{x_0, V} \simeq_{(d, n, p, C_V)} \overline{m}\pr{x, V}$.
In particular, for any $Q = Q\pr{x_0, \frac{1}{\overline{m}(x_0, V)}}$,
\begin{equation}
\label{upperonCubes}
\int_{Q} \abs{V^\frac12(x) \V{u}(x)}^2 \, dx
\lesssim_{(d, n, p, C_V)} \int_{Q} \abs{ D\V{u}(x)}^2\, dx
+ \int_{Q} \abs{\V{u}(x)}^2 \overline{m}(x, V)^{2}\, dx.
\end{equation}
To pass from cubes to $\ensuremath{\mathbb{R}}^n$, we follow the arguments from the proof of Lemma \ref{FPml} and use Remark \ref{partofURem}.
\end{proof}
\section{The Elliptic Operator}
\label{EllOp}
In this section, we introduce the generalized Schr\"odinger operators.
For this section and the subsequent two, we do not need to assume that the matrix weight $V$ belongs to ${\MC{B}_p}$ and therefore work in a more general setting.
In particular, to define the operator, the fundamental matrices, and discuss a class of systems of operators that satisfy a set of elliptic theory results, we only require nondegeneracy and local $p$-integrability of the zeroth order potential terms.
The stronger assumption that $V \in {\MC{B}_p}$ is not required until we establish more refined bounds for the fundamental matrices; namely the exponential upper and lower bounds.
As such, the next three sections are presented for $V$ in this more general setting.
For the leading operator, let $A^{\alpha \beta} = A^{\alpha \beta}\pr{x}$, for each $\alpha, \beta \in \set{ 1, \dots, n}$, be a $d \times d$ matrix with bounded measurable coefficients defined on $\ensuremath{\mathbb{R}}^n$.
We assume that there exist constants $0 < \lambda, \Lambda < \infty$ so that $A^{\alpha \beta}$ satisfies an ellipticity condition of the form
\begin{align}
\sum_{i, j = 1}^d \sum_{\alpha, \beta = 1}^n A^{\alpha \beta}_{ij}\pr{x} \xi_\beta^{j} \xi_\alpha^{i}
&\ge \lambda \sum_{i = 1}^d \sum_{\alpha = 1}^n \abs{\xi_\alpha^i}^2 = \lambda \abs{\xi}^2
\quad \text{ for all } \, x \in \ensuremath{\mathbb{R}}^n, \xi \in \ensuremath{\mathbb{R}}^{d \times n}
\label{ellip}
\end{align}
and a boundedness assumption of the form
\begin{align}
& \abs{\sum_{i, j = 1}^d \sum_{\alpha, \beta = 1}^n A_{ij}^{\alpha \beta}\pr{x} \xi_\beta^{j} \zeta_\alpha^{i}}
\le \Lambda \abs{\xi} \abs{\zeta}
\quad \text{ for all } x \in \ensuremath{\mathbb{R}}^n, \xi, \zeta \in \ensuremath{\mathbb{R}}^{d \times n}.
\label{Abd}
\end{align}
For the zeroth order term, we assume that
\begin{align}
V \in L^{\frac n 2}_{\loc}\pr{\ensuremath{\mathbb{R}}^n} \cap \MC{ND}.
\label{VAssump}
\end{align}
In particular, since $V$ is a matrix weight, then $V$ is a $d \times d$, a.e. positive semidefinite, symmetric, $\ensuremath{\mathbb{R}}$-valued matrix function.
\begin{rem}
\label{VAssumpRem}
Note that if $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p \in \brac{\frac n 2, \infty}$, then since $V \in {\MC{B}_p}$ implies that $V \in L^p_{\loc}\pr{\ensuremath{\mathbb{R}}^n}$ with $p \ge \frac n 2$, such a choice of $V$ satisfies \eqref{VAssump}.
This more specific assumption on the potential functions will appear in Sections \ref{UpBds} and \ref{LowBds}.
\end{rem}
The equations that we study are formally given by
\begin{align}
\label{elEqDef}
\mathcal{L}_V \V{u} &
= -D_\alpha\pr{A^{\alpha \beta} D_\beta \V{u} } + V \V{u}.
\end{align}
To make sense of what it means for some function $\V{u}$ to satisfy \eqref{elEqDef}, we introduce new Hilbert spaces.
But first we recall some familiar and related Hilbert spaces.
For any open $\Omega \subset \ensuremath{\mathbb{R}}^n$, $W^{1,2}(\Omega)$ denotes the family of all weakly differentiable functions $u \in L^{2}(\Omega)$ whose weak derivatives are functions in $L^2(\Omega)$, equipped with the norm that is given by
$$\norm{u}_{W^{1,2}(\Omega)}^2 = \norm{u}_{L^{2}(\Omega)}^2 + \norm{D u}_{L^2(\Omega)}^2.$$
The space $W^{1,2}_0(\Omega)$ is defined to be the closure of $C^\infty_c(\Omega)$ with respect to $\norm{\cdot}_{W^{1,2}(\Omega)}$.
Recall that $C^\infty_c(\Omega)$ denotes the set of all infinitely differentiable functions with compact support in $\Omega$.
Another related class of functions will be used also.
For any open $\Omega \subset \ensuremath{\mathbb{R}}^n$, the space $Y^{1,2}(\Omega)$ is the family of all weakly differentiable functions $u \in L^{2^*}(\Omega)$ whose weak derivatives are functions in $L^2(\Omega)$, where $2^*=\frac{2n}{n-2}$.
We equip $Y^{1,2}(\Omega)$ with the norm
\begin{align*}
\norm{u}^2_{Y^{1,2}(\Omega)} := \norm{u}^2_{L^{2^*}(\Omega)} + \norm{D u}^2_{L^2(\Omega)}.
\end{align*}
Define $Y^{1,2}_0(\Omega)$ as the closure of $C^\infty_c(\Omega)$ in $Y^{1,2}(\Omega)$.
When $\Omega = \ensuremath{\mathbb{R}}^n$, $Y^{1,2}\pr{\ensuremath{\mathbb{R}}^n} = Y^{1,2}_0\pr{\ensuremath{\mathbb{R}}^n}$ (see, e.g., Appendix A in \cite{DHM18}).
By the Sobolev inequality,
\begin{equation*}
\norm{u}_{L^{2^*}(\Omega)}
\le c_n \norm{D u}_{L^2(\Omega)} \quad \text{for all $u \in Y^{1,2}_0(\Omega)$.}
\end{equation*}
It follows that $W^{1,2}_0(\Omega) \subset Y^{1,2}_0(\Omega)$ with set equality when $\Omega$ has finite measure.
The bilinear form on $Y_0^{1,2}(\Omega)$ that is given by
\begin{align*}
\innp{u, v}_{Y_0^{1,2}(\Omega)} := \int_{\Omega} D_\alpha u D_\alpha v
\end{align*}
defines an inner product on $Y_0^{1,2}(\Omega)$.
With this inner product, $Y_0^{1,2}(\Omega)$ is a Hilbert space with norm
\begin{align*}
\norm{u}_{Y_0^{1,2}(\Omega)} := \innp{u, u}_{Y_0^{1,2}(\Omega)}^{1/2} = \norm{Du}_{L^2(\Omega)}.
\end{align*}
We refer the reader to \cite[Appendix A]{DHM18} for further properties of $Y^{1,2}(\Omega)$, and some relationships between $Y^{1,2}(\Omega)$ and $W^{1,2}(\Omega)$.
These spaces can be generalized to vector-valued functions in the usual way.
Towards the development of our new function spaces, we start with the associated inner products.
For any $V$ as in \eqref{VAssump} and any $\Omega \subset \ensuremath{\mathbb{R}}^n$ open and connected, let $\innp{\cdot, \cdot}_{W_V^{1,2}(\Omega)} : C_c^\infty(\Omega) \times C_c^\infty(\Omega) \to \ensuremath{\mathbb{R}}$ be given by
\begin{align*}
\innp{\V{u}, \V{v}}_{W_V^{1,2}(\Omega)}
= \int_{\Omega} \innp{V \V{u}, \V{v}} + \innp{D \V{u}, D \V{v}}.
\end{align*}
This inner product induces a norm $\norm{\cdot}_{W_V^{1,2}(\Omega)}$ on $C_c^\infty(\Omega)$, whose square is given by
\begin{align*}
\norm{\V{u}}^2_{W_V^{1,2}(\Omega)}
:= \norm{V^{1/2} \V{u}}^2_{L^{2}(\Omega)} + \norm{D \V{u}}_{L^2(\Omega)}^2
= \int_{\Omega} \innp{V \V{u}, \V{u}} + \innp{D \V{u}, D \V{u}}.
\end{align*}
The nondegeneracy condition described by \eqref{NDCond} ensures that this is indeed a norm and not just a semi-norm.
In particular, if $\norm{D \V{u}}_{L^2(\Omega)} = 0$, then $\V{u} = \V{c}$ a.e., where $\V{c}$ is a constant vector.
But then by \eqref{NDCond}, $\norm{V^{1/2} \V{u}}^2_{L^{2}(\Omega)} = \norm{V^{1/2} \V{c}}^2_{L^{2}(\Omega)} = 0$ iff $\V{c} = \V{0}$.
For any $\Omega \subset \ensuremath{\mathbb{R}}^n$ open and connected, we use the notation $L_V^2(\Omega)$ to denote the space of $V$-weighted square integrable functions.
That is,
$$L_V^2(\Omega) = \set{\V{u} : \Omega \to \ensuremath{\mathbb{R}}^d : \norm{V^{1/2} \V{u}}_{L^2\pr{\Omega}} < \infty}.$$
For any $V$ as in \eqref{VAssump} and any $\Omega \subset \ensuremath{\mathbb{R}}^n$ open and connected, define each space $W_{V,0}^{1,2}(\Omega)$ as the completion of $C_c^\infty(\Omega)$ with respect to the norm $\norm{\cdot}_{W_V^{1,2}(\Omega)}$.
That is,
\begin{equation}
\label{WV012Def}
W_{V,0}^{1,2}(\Omega) = \overline{C_c^\infty(\Omega)}^{\norm{\cdot}_{W_V^{1,2}(\Omega)}}.
\end{equation}
The following proposition clarifies the meaning of our trace zero Sobolev space.
\begin{prop}
\label{W12V0Properties}
Let $V$ be as in \eqref{VAssump} and let $\Omega \subset \ensuremath{\mathbb{R}}^n$ be open and connected.
For every sequence $\{\V{u}_k\}_{k=1}^\infty \subset C_c ^\infty (\Omega)$ that is Cauchy with respect to the $W_{V}^{1,2}(\Omega)$-norm, there exists a $\V{u} \in L_V ^2(\Omega) \cap Y^{1,2}_0(\Omega)$ for which
$$\lim_{k \to \infty} \norm{\V{u}_k - \V{u}}_{W^{1,2}_V(\Omega)}^2 = \lim_{k \to \infty} \pr{\ensuremath{\int_{\Om}} \abs{V^\frac12 \pr{\V{u}_k - \V{u}}}^2 + \ensuremath{\int_{\Om}} \abs{D\V{u}_k - D\V{u}}^2} = 0.$$
\end{prop}
\begin{proof}
Since $\{\V{u}_k\} \subset C_c ^\infty (\Omega)$ is Cauchy in the $W_{V}^{1,2}(\Omega)$ norm, $\set{V^{1/2} \V{u}_k}$ is Cauchy in $L^2(\Omega)$, and thus there exists $\V{h} \in L^2(\Omega)$ so that
\begin{equation}
\label{L2Lim}
V^{1/2} \V{u}_k \to \V{h} \quad \text{ in } \quad L^2(\Omega).
\end{equation}
Similarly, since $\set{D\V{u}_k}$ is Cauchy in $L^2(\Omega)$, there exists $U \in L^2(\Omega)$ so that
\begin{equation}
\label{L2LimD}
D\V{u}_k \to U \quad \text{ in } \quad L^2(\Omega).
\end{equation}
By the Sobolev inequality applied to $\V{u}_k-\V{u}_j$, we have
$$\norm{\V{u}_k-\V{u}_j}_{L^{2^*}(\Omega)} \leq c_n \norm{D\pr{\V{u}_k - \V{u}_j}}_{L^{2}(\Omega)} \leq c_n \norm{\V{u}_k-\V{u}_j}_{W_V^{1,2}(\Omega)}.$$
In particular, $\set{\V{u}_k}$ is also Cauchy in $L^{2^*}(\Omega)$ and then there exists $\V{u} \in L^{2^*}(\Omega)$ so that
\begin{equation}
\label{L2*Lim}
\V{u}_k \to \V{u} \quad \text{ in } \quad L^{2^*}(\Omega).
\end{equation}
For any $\Omega' \Subset \ensuremath{\mathbb{R}}^n$, observe that another application of H\"older's inequality shows that
\begin{align*}
\norm{V^{1/2} \V{u}_k - V^{1/2} \V{u} }_{L^{2}\pr{\Omega \cap \Omega'}}
&\le \pr{\int_{\Omega \cap \Omega'} \abs{V} \abs{ \V{u}_k - \V{u}}^2 }^{\frac 1 2}
\le \norm{V}_{L^{\frac n 2}\pr{\Omega'}}^{\frac 1 2} \norm{\V{u}_k - \V{u} }_{L^{2^*}\pr{\Omega}}.
\end{align*}
Since $V \in L^{\frac n 2}_{\loc}(\ensuremath{\mathbb{R}}^n)$, then $\norm{V}_{L^{\frac n 2}\pr{\Omega'}} < \infty$ and we conclude that $V^{1/2} \V{u}_k \to V^{1/2} \V{u}$ in $L^2\pr{\Omega \cap \Omega'}$ for any $\Omega' \Subset \ensuremath{\mathbb{R}}^n$.
By comparing this statement with \eqref{L2Lim}, we deduce that $V^{1/2} \V{u} = \V{h}$ in $L^2(\Omega)$ and therefore a.e., so that \eqref{L2Lim} holds with $\V{h} = V^{1/2} \V{u}$.
Moreover, $\V{u} \in L^2_V\pr{\Omega}$.
Next we show that $D\V{u} = U$ weakly in $\Omega$.
Let $\V{\xi} \in C^\infty_c(\Omega)$.
Then for $j = 1, \ldots, n$, we get from \eqref{L2*Lim} and \eqref{L2LimD} that
\begin{align*}
\int_{\Omega} \innp{\V{u} (x), D_j \V{\xi} (x)} \, dx
&= \lim_{k \to \infty} \int_{\Omega} \innp{\V{u}_k (x), D_j \V{\xi} (x)} \, dx
= -\lim_{k \to \infty} \int_{\Omega} \innp{D_j \V{u}_k (x), \V{\xi} (x)} \, dx \\
&= - \int_{\Omega} \innp{U_j(x), \V{\xi} (x)} \, dx,
\end{align*}
where $U_j$ denotes the $j^{\textrm{th}}$ column of $U$.
That is, $D\V{u} = U \in L^2(\Omega)$.
In particular, \eqref{L2LimD} holds with $U = D\V{u}$.
Finally, in combination with \eqref{L2LimD} and \eqref{L2*Lim}, this shows that $\V{u} \in Y^{1,2}_0(\Omega)$.
\end{proof}
By Proposition \ref{W12V0Properties}, associated to each equivalence class of Cauchy sequences $[\{\V{u}_k\}] \in W_{V, 0} ^{1, 2} (\Omega)$ is a function $\V{u} \in L_V ^2(\Omega) \cap Y^{1,2}_0(\Omega)$ with
$$\lim_{k \to \infty} \norm{\V{u}_k - \V{u}}_{W^{1,2}_V(\Omega)} = 0$$
so that
$$ \norm{[\{\V{u}_k\}]}_{W_{V}^{1, 2} (\Omega)} := \lim_{k \to \infty} \norm{\V{u}_k }_{W^{1,2}_V(\Omega)} = \norm{ \V{u}}_{W^{1,2}_V(\Omega)}.$$
In fact, this defines a norm on weakly-differentiable vector-valued functions $\V{u}$ for which $\norm{ \V{u}}_{W^{1,2}_V(\Omega)} < \infty$.
It follows that the function $\V{u}$ is unique, and this shows that $W_{ V, 0} ^{1, 2} (\Omega)$ isometrically embeds into the space $L_V^2(\Omega) \cap Y^{1,2}_0(\Omega)$ equipped with the norm $\norm{\cdot}_{W^{1,2}_V(\Omega)}$.
Going forward, we will slightly abuse notation and denote each element in $W_{V, 0}^{1, 2} (\Omega)$ by its unique associated function $\V{u} \in L_V ^2(\Omega) \cap Y^{1,2}_0(\Omega)$.
To define the nonzero trace spaces that we need below to prove the existence of fundamental matrices, we use restriction.
That is, define the space
\begin{equation}
\label{WV12Def}
W_{V}^{1,2}(\Omega)
= \set{\V{u}\rvert_\Omega : \V{u} \in W_{V,0}^{1,2}(\ensuremath{\mathbb{R}}^n)}
\end{equation}
and equip it with the $W_{V}^{1,2}(\Omega)$-norm.
Note that $W_{V}^{1,2}(\ensuremath{\mathbb{R}}^n) = W_{V, 0}^{1,2}(\ensuremath{\mathbb{R}}^n)$.
Moreover, when $\Omega \ne \ensuremath{\mathbb{R}}^n$, $W_{V}^{1,2}(\Omega)$ may not be complete, so we simply treat it as an inner product space.
We stress that in general, $W_{V}^{1,2}(\Omega)$ should \textit{not} be thought of as a kind of ``Sobolev space,'' but should instead be viewed as a convenient collection of functions used in the construction from \cite{DHM18}.
Specifically, the construction of fundamental matrices from \cite{DHM18} uses the restrictions of elements from appropriate ``trace zero Hilbert-Sobolev spaces'' defined on $\ensuremath{\R^n}$.
For us, $W_{V,0}^{1,2}(\ensuremath{\R^n})$ plays the role of the trace zero Hilbert-Sobolev space.
Also, as an immediate consequence of Proposition \ref{W12V0Properties} we have the following.
\begin{cor}
\label{W12VProperties}
Let $V$ be as in \eqref{VAssump} and let $\Omega \subset \ensuremath{\mathbb{R}}^n$ be open and connected.
If $\V{u} \in W_V ^{1, 2} (\Omega)$, then $\V{u} \in L_V ^2(\Omega) \cap Y^{1,2}(\Omega)$ and there exists $\{\V{u}_k \}_{k=1}^\infty \subset C_c ^\infty(\ensuremath{\R^n})$ for which
$$\lim_{k \to \infty} \norm{\V{u}_k - \V{u}}_{W^{1,2}_V(\Omega)}^2 = 0.$$
\end{cor}
We now formally fix the notation and then we will discuss the proper meaning of the operators at hand.
For every $\V{u} = \pr{u^1,\ldots, u^d }^T$ in $W^{1,2}_{V}(\Omega)$ (and hence in $Y^{1,2}\pr{\Omega}$), we define $\mathcal{L}_0 \V{u} = - D_\alpha \pr{A^{\alpha \beta} D_\beta \V{u}}$.
Component-wise, we have $\pr{\mathcal{L}_0 \V{u}}^i = - D_\alpha \pr{A_{ij}^{\alpha \beta} D_\beta u^j}$ for each $i = 1, \ldots, d$.
The second-order operator is written as
$$\mathcal{L}_V = \mathcal{L}_0 + V,$$
see \eqref{elEqDef}.
Component-wise, $\pr{ \mathcal{L}_V \V{u}}^i = -D_\alpha\pr{A_{ij}^{\alpha \beta} D_\beta u^j} + V_{ij} u^j$ for each $i = 1, \ldots, d$.
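For example, in the simplest case where $A^{\alpha \beta}_{ij} = \delta_{\alpha \beta} \delta_{ij}$, the operator reduces to the vector-valued Schr\"odinger operator $\mathcal{L}_V \V{u} = -\Delta \V{u} + V \V{u}$, in which the matrix potential $V$ couples the equations for the components $u^1, \ldots, u^d$.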
The transpose operator of $\mathcal{L}_0$, denoted by $\mathcal{L}_0^*$, is defined by $\mathcal{L}_0^* \V{u} = - D_\alpha \brac{\pr{A^{\alpha\beta}}^* D_\beta \V{u}}$, where $\pr{A^{\alpha \beta}}^* = \pr{A^{\beta\alpha}}^T$, or rather $\pr{A_{ij}^{\alpha\beta}}^* = A_{ji}^{\beta\alpha}$.
Note that the adjoint coefficients, $\pr{A_{ij}^{\alpha\beta}}^*$ satisfy the same ellipticity assumptions as $A_{ij}^{\alpha\beta}$ given by \eqref{ellip} and \eqref{Abd}.
Take $V^* = V^T (= V$, since $V$ is assumed to be symmetric).
The adjoint operator to $\mathcal{L}_V$ is given by
\begin{align}
\label{el*OpDef}
\mathcal{L}_V^* \V{u}
&:=\, \mathcal{L}_0^* \V{u} + V^* \V{u}
= -D_\alpha\brac{\pr{A^{\beta\alpha}}^T D_\beta \V{u}} + V^T \V{u}.
\end{align}
All operators, $\mathcal{L}_0, \mathcal{L}_0^*, \mathcal{L}_V, \mathcal{L}_V^*$ are understood in the sense of distributions on $\Omega$.
Specifically, for every $\V{u} \in W^{1,2}_{V}(\Omega)$ and $\V{v}\in C_c^\infty(\Omega)$, we use the naturally associated bilinear form and write the action of the functional $\mathcal{L}_V\V{u}$ on $\V{v}$ as
\begin{align*}
({\mathcal{L}_V}\V{u}, \V{v})
=\mathcal{B}_V\brac{\V{u}, \V{v}}
&= \int_\Omega \innp{A^{\alpha \beta} D_\beta \V{u}, D_\alpha \V{v}} + \innp{V \, \V{u}, \V{v}}
= \int_\Omega A_{ij}^{\alpha \beta} D_\beta u^j D_\alpha v^i + V_{ij} u^j v^i.
\end{align*}
It is straightforward to check that for such $\V{v}, \V{u}$ and for the coefficients satisfying \eqref{Abd}, the bilinear form above is well-defined and finite since $V \in L^\frac{n}{2}_{\loc}$.
We explore these details in the next section.
Similarly, $\mathcal{B}_V^*\brac{\cdot, \cdot}$ denotes the bilinear operator associated to $\mathcal{L}_V^*$, given by
\begin{align*}
(\mathcal{L}_V^*\V{u}, \V{v})
=\mathcal{B}_V^*\brac{\V{u}, \V{v}}
&= \int \innp{ \pr{A^{\beta \alpha}}^T D_\beta \V{u}, D_\alpha \V{v}} + \innp{V^T \, \V{u}, \V{v}}
= \int A_{ji}^{\beta \alpha} D_\beta u^j D_\alpha v^i + V_{ji} u^j v^i .
\end{align*}
Clearly, $\mathcal{B}_V\brac{\V{v},\V{u}}=\, \mathcal{B}_V^*\brac{\V{u},\V{v}}$.
For any vector distribution $\V{f}$ on $\Omega$ and $\V{u}$ as above, we always understand ${\mathcal L}\V{u}= \V{f} $ on $\Omega$ in the sense of distributions; that is, as $\mathcal{B}\brac{\V{u},\V{v}}= \V{f}(\V{v})$ for all $\V{v}\in C_c^\infty(\Omega)$.
Typically $\V{f}$ will be an element of some $L^\ell(\Omega)$ space and so the action of $\V{f}$ on $\V{v}$ is then simply $\displaystyle \int \V{f}\cdot \V{v}.$
The identity ${\mathcal L}^*\V{u}= \V{f}$ is interpreted similarly.
We define the associated local spaces as
$$\WT{W}^{1,2}_{V, \loc}(\Omega) = \{\V{u} \text{ weakly differentiable on } \Omega : \|\V{u}\|_{W^{1, 2} _V (\Omega')} < \infty \text{ for every } \Omega' \Subset \Omega\},$$
where the tilde notation here is meant to emphasize that this notion of local differs from the standard one.
Note that ${W}^{1,2}_{V}(\Omega) \subset \WT{W}^{1,2}_{V, \loc}(\Omega)$.
Moreover, the operators and bilinear forms described above may all be defined in the sense of distributions for any $\V{u} \in \WT{W}^{1,2}_{V, \loc}(\Omega)$.
\begin{rem}
Given open connected sets $U \subset \Omega \subset \ensuremath{\R^n}$, we can define $W_{V}^{1,2}(U)$ via restriction from $W_{V, 0}^{1,2}(\Omega)$.
That is, the space is given by
\begin{equation}
\label{WV12DefU}
W_{V}^{1,2}(U)
= \set{\V{u}\rvert_U : \V{u} \in W_{V,0}^{1,2}(\Omega)}
\end{equation}
and is equipped with the $W_{V}^{1,2}(U)$-norm.
This viewpoint may be useful when $V$ is only locally integrable or positive definite on proper subsets of $\ensuremath{\mathbb{R}}^n$.
In particular, this approach could be used in the study of Green's functions (which we do not focus on in this paper).
\end{rem}
\section{Fundamental Matrix Constructions}
\label{FundMat}
We maintain the assumptions from the previous section.
That is, $A^{\alpha \beta}$ is a coefficient matrix that satisfies boundedness \eqref{Abd} and ellipticity \eqref{ellip}, and $V$ is a locally integrable matrix weight that satisfies \eqref{VAssump}.
The elliptic operator $\mathcal{L}_V$ is defined formally by \eqref{elEqDef}.
For any open, connected $\Omega \subset \ensuremath{\mathbb{R}}^n$, $V$ is used to define the Hilbert spaces $W^{1,2}_{V,0}\pr{\Omega}$ and the inner product spaces $W^{1,2}_{V}\pr{\Omega} := W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}\rvert_\Omega$.
To justify the existence of fundamental matrices associated to our generalized Schr\"odinger operators, we use the constructions and results presented in \cite{DHM18}.
By the fundamental matrix, we mean the following.
\begin{defn}[Fundamental Matrix]
\label{d3.3}
We say that a matrix function $\Gamma^V\pr{x,y}= \pr{\Gamma^V_{ij}\pr{x,y}}_{i,j=1}^d$ defined on $\set{\pr{x,y} \in \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n : x \ne y}$ is the \textbf{fundamental matrix} of $\mathcal{L}_V$ if it satisfies the following properties:
\begin{enumerate}
\item[a)] $\Gamma^V\pr{\cdot, y}$ is locally integrable and $\mathcal{L}_V \Gamma^V\pr{\cdot, y} = \delta_y I$ for all $y \in \ensuremath{\mathbb{R}}^n$ in the sense that for every $\V{\phi} = \pr{\phi^1, \ldots, \phi^d}^T \in C^\infty_c\pr{\ensuremath{\mathbb{R}}^n}^{d}$,
\begin{align*}
&\int_{\ensuremath{\mathbb{R}}^n} A_{ij}^{\alpha \beta} D_\beta \Gamma^V_{jk}\pr{\cdot, y} D_\alpha \phi^i + V_{ij} \Gamma^V_{jk}\pr{\cdot, y} \phi^i
= \phi^k\pr{y}.
\end{align*}
\item[b)] For all $y \in \ensuremath{\mathbb{R}}^n$ and $r > 0$, $\Gamma^V(\cdot, y) \in Y^{1,2}\pr{\ensuremath{\mathbb{R}}^n \setminus B\pr{y, r}}$. \\
\item[c)] For any $\V{f} = \pr{f^1, \ldots, f^d}^T \in L^\infty_c\pr{\ensuremath{\mathbb{R}}^n}$, the
function $\V{u} = \pr{u^1, \ldots, u^d}^T$ given by
$$u^k\pr{y} = \int_{\ensuremath{\mathbb{R}}^n} \Gamma^V_{jk}\pr{x,y} f^j\pr{x} \,dx$$
belongs to $W^{1,2}_{V,0}(\ensuremath{\mathbb{R}}^n)$ and satisfies $\mathcal{L}_V^* \V{u} = \V{f}$ in the sense that for every $\V{\phi} = \pr{\phi^1, \ldots, \phi^d}^T \in C^\infty_c\pr{\ensuremath{\mathbb{R}}^n}^{d}$,
\begin{align*}
&\int_{\ensuremath{\mathbb{R}}^n} A_{ij}^{\alpha \beta} D_\alpha u^i D_\beta \phi^j + V_{ij} u^i\phi^j
= \int_{\ensuremath{\mathbb{R}}^n} f^j \phi^j.
\end{align*}
\end{enumerate}
We say that the matrix function $\Gamma^V\pr{x,y}$ is the \textbf{continuous fundamental matrix} if it satisfies the conditions above and is also continuous.
\end{defn}
We restate the following theorem from \cite{DHM18}.
The stated assumptions and properties will be described below.
\begin{thm}[Theorem 3.6 in \cite{DHM18}]
\label{t3.6}
Assume that \rm{\ref{A1} - \ref{A7}} as well as properties {\rm{(IB)}} and {\rm{(H)}} hold.
Then there exists a unique continuous fundamental matrix, $\Gamma^{V}(x,y)=\pr{\Gamma^{V}_{ij}(x,y)}_{i,j=1}^d, \,\set{x\ne y}$, that satisfies Definition \ref{d3.3}.
We have $\Gamma^V(x,y)= \Gamma^{V*}(y,x)^T$, where $\Gamma^{V*}$ is the unique continuous fundamental matrix associated to $\mathcal{L}_V^*$ as defined in \eqref{el*OpDef}.
Furthermore, $\Gamma^V(x,y)$ satisfies the following estimates:
\begin{align}
&\norm{\Gamma^V(\cdot, y)}_{Y^{1,2}\pr{\ensuremath{\mathbb{R}}^n\setminus B(y,r)}}
+ \norm{\Gamma^V(x, \cdot)}_{Y^{1,2}\pr{\ensuremath{\mathbb{R}}^n\setminus B(x,r)}}
\le C r^{1-\frac{n}{2}}, \quad \forall r>0,
\label{eq3.55} \\
&\norm{\Gamma^V(\cdot, y)}_{L^q\pr{B(y,r)}}
+ \norm{\Gamma^V(x, \cdot)}_{L^q\pr{B(x,r)}}
\le C_q r^{2-n+\frac{n}{q}}, \quad \forall q\in \left[1, \tfrac{n}{n-2}\right), \quad \forall r>0,
\label{eq3.56} \\
& \norm{D \Gamma^V\pr{\cdot, y}}_{L^{q}\pr{B(y,r)}}
+ \norm{D \Gamma^V\pr{x, \cdot}}_{L^{q}\pr{B(x,r)}}
\le C_q r^{1-n +\frac{n}{q}}, \qquad \forall q \in \left[ 1, \tfrac{n}{n-1}\right), \quad \forall r>0,
\label{eq3.57} \\
& \abs{\set{x \in \ensuremath{\mathbb{R}}^n : \abs{\Gamma^V\pr{x,y}} > \tau}}
+ \abs{\set{y \in \ensuremath{\mathbb{R}}^n : \abs{\Gamma^V\pr{x,y}} > \tau}}
\le C \tau^{- \frac{n}{n-2}}, \quad \forall \tau > 0,
\label{eq3.58} \\
& \abs{\set{x \in \ensuremath{\mathbb{R}}^n : \abs{D_x \Gamma^V\pr{x,y}} > \tau}}
+ \abs{\set{y \in \ensuremath{\mathbb{R}}^n : \abs{D_y \Gamma^V\pr{x,y}} > \tau}}
\le C \tau^{- \frac{n}{n-1}}, \quad \forall \tau >0,
\label{eq3.59} \\
& \abs{\Gamma^V\pr{x,y}} \le C \abs{x - y}^{2 - n}, \qquad \forall x \ne y ,
\label{eq3.60}
\end{align}
where each constant depends on $d, n, \Lambda, \lambda$, and $C_{\rm{IB}}$, and each $C_q$ depends additionally on $q$.
Moreover, for any $0<R\le R_0<|x-y|$,
\begin{align}
&\abs{\Gamma^V\pr{x,y} - \Gamma^V\pr{z,y}}
\le C_{R_0} C \pr{\frac{|x-z|}{R}}^\eta R^{2-n}
\label{eq3.61}
\end{align}
whenever $|x-z|<\frac{R}{2}$ and
\begin{align}
&\abs{\Gamma^V\pr{x,y} - \Gamma^V\pr{x,z}}
\le C_{R_0} C \pr{\frac{|y-z|}{R}}^\eta R^{2-n}
\label{eq3.62}
\end{align}
whenever $|y-z|<\frac{R}{2}$, where $C_{R_0}$ and $\eta=\eta(R_0)$ are the same as in assumption {\rm{(H)}}.
\end{thm}
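These estimates are consistent with the model case of the Laplacian: when $d = 1$ and $A = I$, the fundamental solution of the Laplacian, $c_n \abs{x-y}^{2-n}$, serves as a heuristic benchmark (the potential $V \equiv 0$ is excluded by \eqref{VAssump}, but the comparison explains the exponents), as it exhibits exactly the pointwise bound \eqref{eq3.60} along with the decay and integrability rates in \eqref{eq3.55}-\eqref{eq3.59}.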
To justify the existence of $\Gamma^V$ satisfying Definition \ref{d3.3} and the results in Theorem \ref{t3.6}, it suffices to show that for our Hilbert space $W^{1,2}_{V, 0}\pr{\ensuremath{\mathbb{R}}^n}$ (and the associated inner product spaces $W^{1,2}_V(\Omega)$ where $\Omega \subset \ensuremath{\mathbb{R}}^n$), operators $\mathcal{L}_V$, $\mathcal{L}_V^*$, and bilinear forms $\mathcal{B}_V$, $\mathcal{B}_V^*$ that were introduced in the previous section, the assumptions \rm{\ref{A1} - \ref{A7}} from \cite{DHM18} hold.
In addition to properties \rm{\ref{A1} - \ref{A7}}, we must also {\em assume} that we are in a setting where De Giorgi-Nash-Moser theory holds.
Therefore, we assume the following interior boundedness (IB) and H\"older continuity (H) conditions:
\begin{itemize}
\item[(IB)]
\label{IB}
We say that \rm{(IB)} holds if whenever $\V{u} \in W^{1,2}_V(B\pr{0, 4R})$ is a weak solution to $\mathcal{L} \V{u} = \V{f}$ or $\mathcal{L}^* \V{u} = \V{f}$ in $B(0,2R)$, for some $R>0$, where $\V{f} \in L^\ell\pr{B(0,2R)}$
for some $\ell \in \pb{ \frac{n}{2}, \infty}$, then for any $q \ge 1$,
\begin{equation}
\norm{\V{u}}_{L^\infty\pr{B(0,R)}}
\le C_{\rm{IB}} \brac{ R^{- \frac n q}\norm{\V{u}}_{L^q\pr{B(0,2R)}} + R^{2 - \frac{n}{\ell}} \|\V{f}\|_{L^\ell\pr{B(0,2R)}}},
\label{eq3.47}
\end{equation}
where the constant $C_{\rm{IB}}>0$ is independent of $R>0$.
\item[(H)]
\label{H}
We say that \rm{(H)} holds if whenever $\V{u} \in W^{1,2}_V(B(0, 2R_0))$ is a weak solution to $\mathcal{L} \V{u} = \V{0}$ or $\mathcal{L}^* \V{u} = \V{0}$ in $B(0,R_0)$ for some $R_0>0$, then there exists $\eta \in \pr{0, 1}$ and $C_{R_0}>0$, both depending on $R_0$, so that whenever $0 < R \le R_0$,
\begin{align}
\sup_{x, y \in B(0,R/2), x \ne y} \frac{\abs{\V{u}\pr{x} - \V{u}\pr{y}}}{\abs{x - y}^\eta}
&\le C_{R_0} R^{-\eta} \pr{\fint_{B(0,R)} \abs{\V{u}}^{2^*}}^{\frac 1 {2^*}}.
\label{eq3.48}
\end{align}
\end{itemize}
For systems of equations, the assumptions (IB) and (H) may actually fail.
However, for the class of weakly coupled Schr\"odinger systems that are introduced in the next section, we prove that these assumptions are valid.
To establish (IB) in that setting, it suffices to consider $V \in L^{\frac n 2}_{\loc}\pr{\ensuremath{\mathbb{R}}^n} \cap \MC{ND}$, while our validation of (H) requires the stronger assumption that $V \in L^{\frac n 2+}_{\loc}\pr{\ensuremath{\mathbb{R}}^n} \cap \MC{ND}$.
For the simpler, scalar setting, we refer the reader to \cite[Section 5]{DHM18} for such a discussion of validity.
For many of the scalar settings discussed in \cite[Section 5]{DHM18}, one must assume, as is standard, that $V \in L^{\frac n 2 +}_{\loc}\pr{\ensuremath{\mathbb{R}}^n}$.
Now we proceed to recall and check that \rm{\ref{A1} - \ref{A7}} from \cite{DHM18} hold for our setting.
Since we are working with fundamental matrices, we only need the following conditions to hold when $\Omega = \ensuremath{\mathbb{R}}^n$.
However, we'll show that the assumptions actually hold in the full generality from \cite{DHM18}.
Recall that $V \in L^{\frac n 2}_{\loc}\pr{\ensuremath{\mathbb{R}}^n} \cap \MC{ND}$ and for any $\Omega \subset \ensuremath{\mathbb{R}}^n$ open and connected, $W^{1,2}_{V,0}(\Omega) = \overline{C^\infty_c(\Omega)}^{\norm{\cdot}_{W^{1,2}_V}}$ and
$W^{1,2}_{V}(\Omega)$ is defined via restriction as $W^{1,2}_{V}(\Omega) = W^{1,2}_{V}(\ensuremath{\mathbb{R}}^n) \rvert_\Omega$.
Moreover, by Proposition \ref{W12V0Properties} and Corollary \ref{W12VProperties}, these spaces consist of weakly-differentiable, vector-valued $L^1_{\loc}$ functions.
\begin{enumerate}[label=A\arabic*)]
\item\label{A1}
\textit{Restriction property: For any $U\subset \Omega$, if $\V{u} \in W^{1,2}_V\pr{\Omega}$, then $\V{u}|_U \in W^{1,2}_V(U)$ with $\left\|\V{u}|_U\right\|_{W^{1,2}_V(U)} \le \left\|\V{u}\right\|_{W^{1,2}_V\pr{\Omega}}$.}
The restriction property holds by definition.
That is, for any $U \subset \Omega \subset \ensuremath{\mathbb{R}}^n$, if $\V{u} \in W_V^{1,2}(\Omega)$, then there exists $\V{v} \in W^{1,2}_V(\ensuremath{\mathbb{R}}^n)$ for which $\V{v} \rvert_\Omega = \V{u}$.
Since $\V{v} \rvert_U = \V{u} \rvert_U$, then $\V{u} \rvert_U \in W^{1,2}_V(U)$ and $\norm{\V{u}\rvert_U}_{W^{1,2}_V(U)} \le \norm{\V{u}}_{W^{1,2}_V(\Omega)}$.
\item\label{A2}
\textit{Containment of smooth compactly supported functions: $C_c^\infty\pr{\Omega}$ functions belong to $W^{1,2}_{V}\pr{\Omega}$.
The space $W^{1,2}_{V,0}\pr{\Omega}$, defined as the closure of $C_c^\infty\pr{\Omega}$ with respect to the $W^{1,2}_{V}\pr{\Omega}$-norm, is a Hilbert space with respect to some $\|\cdot\|_{W^{1,2}_{V,0}(\Omega)}$ such that $\displaystyle \|\V{u}\|_{W^{1,2}_{V,0}(\Omega)} \simeq \|\V{u}\|_{W^{1,2}_{V}(\Omega)}$ for all $\V{u} \in W^{1,2}_{V,0}(\Omega)$.}
To establish that $C^\infty_c(\Omega) \subset W^{1,2}_V(\Omega)$, we will show that $C^\infty_c(\Omega) \subset W^{1,2}_{V, 0}(\Omega)$ and $W^{1,2}_{V,0}(\Omega) \subset W^{1,2}_V(\Omega)$.
The first containment follows from the definition: $W_{V,0}^{1,2}(\Omega)$ is defined as the closure of $C_c^\infty(\Omega)$ with respect to the $W_V^{1,2}(\Omega)$-norm and is a Hilbert space with respect to that same norm.
To establish the second containment, let $\V{u} \in W^{1,2}_{V, 0}\pr{\Omega}$.
Then there exists $\set{\V{u}_k}_{k = 1}^\infty \subset C^\infty_c\pr{\Omega}$ such that $\V{u}_k \to \V{u}$ in $W^{1,2}_V(\Omega)$.
It follows that $\V{u} \in W^{1,2}_V(\ensuremath{\mathbb{R}}^n)$ since $\set{\V{u}_k}_{k = 1}^\infty \subset C^\infty_c\pr{\ensuremath{\mathbb{R}}^n}$ and $\V{u}_k \to \V{u}$ in $W^{1,2}_V(\ensuremath{\mathbb{R}}^n)$.
However, $\V{u} \rvert_\Omega = \V{u}$, so we conclude that $\V{u} \in W^{1,2}_V(\Omega)$, as required.
\item\label{A3}
\textit{Embedding in $Y^{1,2}_0\pr{\ensuremath{\mathbb{R}}^n}$: The space $W^{1,2}_{V,0}\pr{\Omega}$ is continuously embedded into $Y^{1,2}_0\pr{\Omega}$ and respectively, there exists $c_0>0$ such that for any $\V{u} \in W^{1,2}_{V,0}\pr{\Omega}$, $\norm{\V{u}}_{Y^{1,2}_0{\pr{\Omega}}} \le c_0 \norm{\V{u}}_{W^{1,2}_{V}\pr{\Omega}}$}.
Proposition \ref{W12V0Properties} shows that $W^{1,2}_{V,0}(\Omega)$ is contained in $Y^{1,2}_0(\Omega)$.
In fact, for any $\V{u} \in W^{1,2}_{V,0}(\Omega)$, since
\begin{align}
\norm{\V{u}}_{Y^{1,2}_0{(\Omega)}} \lesssim \norm{\V{u}}_{W^{1,2}_{V}(\Omega)},
\label{A3Check}
\end{align}
then $W^{1,2}_{V,0}(\Omega)$ is continuously embedded into $Y^{1,2}_0(\Omega)$.
Moreover, a Sobolev embedding implies that for any $\V{u} \in W^{1,2}_{V,0}(\Omega)$,
\begin{align*}
\norm{\V{u}}_{L^{2^*}{(\Omega)}} \le c_n \norm{D\V{u}}_{L^{2}(\Omega)}.
\end{align*}
\item\label{A4}
\textit{Cutoff properties: For any $U\subset \ensuremath{\mathbb{R}}^n$ open and connected,
\begin{equation}
\label{eq2.7}
\begin{array}{c}
\mbox{ $\V{u}\in W^{1,2}_V(\Omega)$ and $\xi\in C_c^\infty(U) \quad \Longrightarrow \quad \V{u} \xi\in W^{1,2}_V(\Omega \cap U)$,} \\
\mbox{ $\V{u}\in W^{1,2}_V(\Omega)$ and $\xi\in C_c^\infty(\Omega \cap U) \quad \Longrightarrow \quad \V{u} \xi\in W^{1,2}_{V,0}(\Omega \cap U)$,} \\
\mbox{ $\V{u}\in W^{1,2}_{V,0}(\Omega)$ and $\xi\in C_c^\infty(\ensuremath{\mathbb{R}}^n) \quad \Longrightarrow \quad \V{u} \xi\in W^{1,2}_{V,0}(\Omega)$.}
\end{array}
\end{equation}
with $\|\V{u} \xi\|_{W^{1,2}_V(\Omega\cap U)}\leq C_\xi \, \|\V{u}\|_{W^{1,2}_V(\Omega)}$ in the first two cases.}
To establish \eqref{eq2.7}, first let $\V{u} \in W_V^{1,2}(\Omega)$.
Since $\V{u} \in W_V^{1,2}(\Omega)$, then there exists $\V{v} \in W_{V,0}^{1,2}\pr{\ensuremath{\mathbb{R}}^n}$ such that $\V{v}|_\Omega = \V{u}$.
Moreover, there exists $\set{\V{v}_k}_{k=1}^\infty \subset C^\infty_c\pr{\ensuremath{\mathbb{R}}^n}$ such that $\norm{\V{v}_k - \V{v}}_{W^{1,2}_{V}\pr{\ensuremath{\mathbb{R}}^n}} \to 0$.
If $\xi \in C^\infty_c\pr{U}$, then $\set{\V{v}_k \xi} \subset C^\infty_c\pr{\ensuremath{\mathbb{R}}^n}$.
We first show that
$$\lim_{k \to \infty }\norm{\V{v}_k \xi - \V{v} \xi}_{W^{1,2}_{V}\pr{\ensuremath{\mathbb{R}}^n}} = 0.$$
Observe that
\begin{align*}
\norm{\V{v}_k \xi - \V{v} \xi}_{W^{1,2}_{V}\pr{\ensuremath{\mathbb{R}}^n}}^2
&= \norm{V^{1/2}\pr{\V{v}_k \xi - \V{v} \xi}}_{L^{2}\pr{\ensuremath{\mathbb{R}}^n}}^2
+ \norm{D\pr{\V{v}_k \xi - \V{v} \xi}}_{L^{2}\pr{\ensuremath{\mathbb{R}}^n}}^2 \\
&\lesssim \norm{V^{1/2}\pr{\V{v}_k - \V{v}} \xi }_{L^{2}\pr{\ensuremath{\mathbb{R}}^n}}^2
+ \norm{D\pr{\V{v}_k - \V{v}} \xi }_{L^{2}\pr{\ensuremath{\mathbb{R}}^n}}^2
+ \norm{\pr{\V{v}_k - \V{v}} D \xi}_{L^{2}\pr{\ensuremath{\mathbb{R}}^n}}^2 \\
&\lesssim \norm{\V{v}_k - \V{v}}_{W^{1,2}_{V}\pr{\ensuremath{\mathbb{R}}^n}}^2
+ \norm{\V{v}_k - \V{v}}_{L^{2}\pr{U}}^2,
\end{align*}
where the constants depend on $\xi$.
There is no loss in assuming that $U$ is bounded, so an application of H\"older's inequality shows that
\begin{align*}
\norm{\V{v}_k - \V{v}}_{L^{2}\pr{U}}
&\le \norm{\V{v}_k - \V{v}}_{L^{2^*}\pr{\ensuremath{\mathbb{R}}^n}} \abs{U}^{\frac{2^*-2}{2^*2}}
\lesssim \norm{\V{v}_k - \V{v}}_{Y^{1,2}_0\pr{\ensuremath{\mathbb{R}}^n}}.
\end{align*}
Combining the previous two inequalities, then applying \eqref{A3Check}, we see that
\begin{align*}
\norm{\V{v}_k \xi - \V{v} \xi}_{W^{1,2}_{V}\pr{\ensuremath{\mathbb{R}}^n}}^2
&\lesssim \norm{\V{v}_k - \V{v}}_{W^{1,2}_{V}\pr{\ensuremath{\mathbb{R}}^n}}^2 \to 0.
\end{align*}
In particular, $\V{v} \xi \in W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$.
Since $\xi$ is compactly supported on $U$, then $\pr{\V{v} \xi}\rvert_{\Omega \cap U} = \V{v}|_\Omega \xi = \V{u} \xi$ and we conclude that $\V{u} \xi \in W^{1,2}_V\pr{\Omega \cap U}$.
Now assume that $\xi \in C^\infty_c\pr{\Omega \cap U}$.
For each $k \in \ensuremath{\mathbb{N}}$, define $\V{u}_k = \V{v}_k|_{\Omega} \in C^\infty(\Omega)$.
Then $\set{\V{u}_k \xi} \subset C^\infty_c\pr{\Omega \cap U}$.
Since $\norm{\V{v}_k \xi - \V{v} \xi}_{W^{1,2}_{V}\pr{\ensuremath{\mathbb{R}}^n}} \to 0$, as shown above, then by the restriction property {\rm\ref{A1}}, $\norm{\V{u}_k \xi - \V{u} \xi}_{W^{1,2}_{V}\pr{\Omega \cap U}} \to 0$ as well.
It follows that $\V{u} \xi \in W^{1,2}_{V,0}\pr{\Omega \cap U}$, as required.
The third line of \eqref{eq2.7} follows immediately from the arguments above.
\end{enumerate}
We require that $\mathcal{B}$ and $\mathcal{B}^*$ can be extended to bounded and accretive bilinear forms on $W^{1,2}_{V,0}(\Omega) \times W^{1,2}_{V,0}(\Omega)$ so that the Lax-Milgram theorem may be applied in $W^{1,2}_{V,0}(\Omega)$.
The next two assumptions capture this requirement.
\begin{enumerate}[label=A\arabic*), resume]
\item\label{A5} {\it Boundedness hypotheses:
There exists a constant $\Gamma > 0$ so that $\mathcal{B}\brac{\V{u}, \V{v}} \le \Gamma \norm{\V{u}}_{W^{1,2}_{V}} \norm{\V{v}}_{W^{1,2}_{V}}$ for all $\V{u}, \V{v} \in W^{1,2}_{V,0}\pr{\Omega}$.}
For any $\V{u}, \V{v} \in W^{1,2}_{V,0}(\Omega)$, it follows from \eqref{Abd} that
\begin{align*}
\mathcal{B}\brac{\V{u}, \V{v}}
\le \Lambda \int \abs{D\V{u}} \abs{D\V{v}}
+ \int \abs{V^{1/2}\V{u}} \abs{V^{1/2}\V{v}}
\le \pr{\Lambda + 1} \norm{\V{u}}_{W^{1,2}_{V}} \norm{\V{v}}_{W^{1,2}_{V}},
\end{align*}
so we may set $\Gamma = \Lambda + 1$.
\item\label{A6} {\it Coercivity hypotheses:
There exists a $\gamma > 0$ so that $\gamma \norm{\V{u}}_{W^{1,2}_V}^2 \le \mathcal{B}\brac{\V{u}, \V{u}}$ for any $\V{u} \in W^{1,2}_{V,0}\pr{\Omega}$.
}
For any $\V{u} \in W^{1,2}_{V,0}(\Omega)$, it follows from \eqref{ellip} that
\begin{align*}
\lambda \norm{\V{u}}_{W^{1,2}_V}^2 \le \mathcal{B}\brac{\V{u}, \V{u}},
\end{align*}
so we may take $\gamma = \lambda$.
\end{enumerate}
Using \rm{\ref{A1} - \ref{A6}}, we prove the following.
\begin{lem}[Caccioppoli inequality]
\label{l4.1}
Let $\Omega \subset \ensuremath{\mathbb{R}}^n$ be open and connected.
Let $\V{u} \in W^{1,2}_V\pr{\Omega}$ and $\zeta \in C^\infty(\ensuremath{\mathbb{R}}^n)$ with $D\zeta \in C_c^\infty(\ensuremath{\mathbb{R}}^n)$
be such that $\V{u} \zeta \in W^{1,2}_{V,0}(\Omega)$, $ \partial^i\zeta \,\V{u}\in L^2(\Omega)$, $i=1,...,n$,
and $\displaystyle \mathcal{B}\brac{\V{u}, \V{u} \zeta^2} \le \int \V{f} \, \cdot \V{u}\, \zeta^2$ for some $\V{f} \in L^{\ell}\pr{\Omega}$, $\ell \in \pb{ \frac n 2, \infty}$.
Then
\begin{equation}
\label{eq4.1}
\int \abs{D \V{u}}^2 \zeta^2 \le C \int \abs{\V{u}}^2 \abs{D \zeta}^2 + c \abs{\int \V{f}\cdot \V{u} \,\zeta^2 },
\end{equation}
where $C = C\pr{n, \lambda, \Lambda}$, $c = c\pr{n, \lambda}$.
\end{lem}
\begin{proof}
Let $\V{u}$, $\zeta$ be as in the statement.
Since $D\zeta \in C_c^\infty(\ensuremath{\mathbb{R}}^n)$, $\zeta$ is equal to some constant, call it $C_\zeta$, outside of a large ball, so that $C_\zeta-\zeta \in C_c^\infty(\ensuremath{\mathbb{R}}^n)$.
Then, by \rm{\ref{A4}}, $\V{u}\zeta^2=C_\zeta \V{u}\zeta-(C_\zeta-\zeta) \V{u}\zeta \in W^{1,2}_{V, 0}\pr{\Omega}$ as well.
A computation shows that
\begin{align*}
\mathcal{B}\brac{ \V{u} \zeta, \V{u} \zeta}
=& \mathcal{B}\brac{\V{u}, \V{u} \zeta^2}
+ \int - \innp{A^{\alpha \beta} D_\beta \V{u}, \V{u}} \zeta D_\alpha \zeta + \innp{A^{\alpha \beta} \V{u}, D_\alpha \V{u}} \zeta D_\beta \zeta + \innp{A^{\alpha \beta} \V{u}, \V{u}} D_\beta \zeta \, D_\alpha \zeta,
\end{align*}
where \rm{\ref{A5}} and \rm{\ref{A6}} ensure that $\mathcal{B}[\cdot, \cdot]$ is well-defined at all of its inputs.
By hypothesis, we have $\displaystyle \mathcal{B}\brac{\V{u}, \V{u} \zeta^2} \le \abs{ \int \V{f} \cdot \V{u}\, \zeta^2}$.
By the boundedness assumption in \eqref{Abd},
\begin{align*}
& \int - \innp{A^{\alpha \beta} D_\beta \V{u}, \V{u}} \zeta D_\alpha \zeta + \innp{A^{\alpha \beta} \V{u}, D_\alpha \V{u}} \zeta D_\beta \zeta + \innp{A^{\alpha \beta} \V{u}, \V{u}} D_\beta \zeta \, D_\alpha \zeta \\
\le& 2 \Lambda \int \abs{D \V{u}} \abs{D \zeta} \abs{\V{u}} \zeta + \Lambda \int \abs{D \zeta}^2 \abs{\V{u}}^2
\le \pr{ \frac{4 \Lambda^2}{\lambda} + \Lambda} \int \abs{\V{u}}^2 \abs{D \zeta}^2
+ \frac{\lambda}{4} \int \abs{D \V{u}}^2 \zeta^2.
\end{align*}
It follows from the inequalities above and the coercivity assumption on $\mathcal{B}$ described by \rm{\ref{A6}} combined with \rm{\ref{A3}} that
\begin{align*}
&\frac{\lambda}{2} \int \abs{D \V{u}}^2 \zeta^2
- \lambda \int \abs{\V{u}}^2 \abs{D \zeta}^2
\le \lambda \int \abs{D\pr{\V{u} \zeta}}^2
\le \lambda \norm{\V{u} \zeta}_{W^{1,2}_V}^2
\le \mathcal{B}\brac{ \V{u} \zeta, \V{u} \zeta} \\
\le& \pr{ \frac{4 \Lambda^2}{\lambda} + \Lambda } \int \abs{\V{u}}^2 \abs{D \zeta}^2
+ \frac{\lambda}{4} \int \abs{D \V{u}}^2 \zeta^2
+ \abs{ \int \V{f} \cdot \V{u}\, \zeta^2}.
\end{align*}
The assumptions that $\V{u} \zeta \in W^{1,2}_{V,0}(\Omega)$, $ \partial^i\zeta \,\V{u}\in L^2(\Omega)$, $i=1,...,n$, and $D\zeta \in C_c^\infty(\ensuremath{\mathbb{R}}^n)$ ensure that the first and the second integrals above are finite.
Therefore, we can rearrange to reach the conclusion.
\end{proof}
This gives the final assumption from \cite{DHM18}:
\begin{enumerate}[label=A\arabic*), resume]
\item\label{A7} {\it The Caccioppoli inequality:
If $\V{u} \in W^{1,2}_V\pr{\Omega}$ is a weak solution to $\mathcal{L} \V{u} = \V{0}$ in $\Omega$ and $\zeta \in C^\infty(\ensuremath{\mathbb{R}}^n)$ is such that $D\zeta \in C_c^\infty (\Omega)$, $\zeta \V{u} \in W^{1,2}_{V,0}\pr{ \Omega}$, and $\partial^i\zeta \,\V{u}\in L^2(\Omega)$, $i=1, ..., n$, then there exists $C = C\pr{n, \lambda, \Lambda}$ so that
\begin{align*}
\int \abs{D \V{u}}^2 \zeta^2 \le C \int \abs{\V{u}}^2 \abs{D \zeta}^2.
\end{align*}
Note that $C$ is independent of the set on which $\zeta$ and $D\zeta$ are supported.
}
\end{enumerate}
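To illustrate how \ref{A7} is typically used, suppose that $B_{2R} \subset \Omega$ and let $\zeta \in C^\infty_c\pr{B_{2R}}$ be a standard cutoff with $\zeta \equiv 1$ on $B_R$ and $\abs{D\zeta} \lesssim R^{-1}$; one checks via \ref{A3} and \ref{A4} that the hypotheses above are then satisfied.
In this case, the Caccioppoli inequality delivers the familiar interior gradient estimate
\begin{align*}
\int_{B_R} \abs{D \V{u}}^2
\le \int \abs{D \V{u}}^2 \zeta^2
\le C \int \abs{\V{u}}^2 \abs{D \zeta}^2
\lesssim \frac{C}{R^2} \int_{B_{2R}} \abs{\V{u}}^2.
\end{align*}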
In conclusion, the fundamental solution for the operator $\mathcal{L}_V$ defined in \eqref{elEqDef}, denoted by $\Gamma^V$, exists and satisfies Definition \ref{d3.3} as well as the properties listed in Theorem \ref{t3.6} whenever we assume that assumptions (IB) and (H) hold for $\mathcal{L}_V$.
In fact, given that assumptions \rm{A1)} through \rm{A7)} hold for general $\Omega \subset \ensuremath{\mathbb{R}}^n$, not just $\Omega = \ensuremath{\mathbb{R}}^n$, the framework here allows us to also discuss Green's matrices as defined in \cite[Definition 3.9]{DHM18}, for example.
That is, whenever we assume that assumptions (IB) and (H) hold for $\mathcal{L}_V$, the results of \cite[Theorem 3.10]{DHM18} hold for the Green's matrix.
As we show in the next section, there are many examples of vector-valued Schr\"odinger operators that satisfy assumptions (IB) and (H).
However, for the boundary boundedness assumption (BB) introduced in \cite[Section 3.4]{DHM18}, it is not clear to us whether any vector-valued Schr\"odinger operators satisfy this assumption.
As such, determining whether the global estimates for Green's matrices as described in \cite[Corollary 3.12]{DHM18} hold for operators $\mathcal{L}_V$ is an interesting question, but is beyond the scope of this current investigation.
\section{Elliptic Theory for Weakly Coupled Systems}
\label{ellipExamples}
In this section, we introduce a class of elliptic systems called weakly coupled Schr\"odinger operators and show that they satisfy the elliptic theory assumptions from Section \ref{FundMat}.
In particular, these are elliptic systems for which the fundamental matrices that were described in the previous section may be directly proven to exist without having to \textit{assume} that (\rm{IB}) and (\rm{H}) hold.
That is, for the class of weakly coupled Schr\"odinger operators that we introduce in the next paragraph, we prove here that local boundedness and H\"older continuity actually hold.
We introduce the class of {\bf weakly coupled Schr\"odinger operators}.
As above, let the leading coefficients be given by $A^{\alpha \beta} = A^{\alpha \beta}(x)$, where for each $\alpha, \beta \in \set{ 1, \dots, n}$, $A^{\alpha \beta}$ is a $d \times d$ matrix with bounded measurable coefficients.
Here we impose the condition that $A^{\alpha \beta}(x) = a^{\alpha \beta}(x) I_d$, where each $a^{\alpha \beta}$ is scalar-valued and $I_d$ is the $d \times d$ identity matrix.
That is, $A^{\alpha \beta}_{ij}(x) = a^{\alpha \beta}(x) \delta_{ij}$.
As usual, we assume that there exist constants $0 < \lambda, \Lambda < \infty$ so that $A^{\alpha \beta}$ satisfies the ellipticity condition described by \eqref{ellip} and the boundedness condition \eqref{Abd}.
For the zeroth-order term, let $V$ satisfy \eqref{VAssump}.
That is, $V$ is a nondegenerate, symmetric, positive semidefinite $d \times d$ matrix function in $L^{\frac n 2}_{\loc}\pr{\ensuremath{\mathbb{R}}^n}$.
The equations that we study are formally given by \eqref{elEqDef}.
With our additional conditions on the leading coefficients, the operator takes the component-wise form
\begin{equation}
\label{LVWDefn}
\pr{ \mathcal{L}_V \V{u}}^i = -D_\alpha\pr{A_{ij}^{\alpha \beta} D_\beta u^j} + V_{ij} u^j = -D_\alpha\pr{a^{\alpha \beta} D_\beta u^i} + V_{ij} u^j
\end{equation}
for each $i = 1, \ldots, d$.
As discussed in Sections \ref{EllOp} and \ref{FundMat}, weak solutions exist and belong to the space $W_V^{1,2}(\Omega)$.
Given the specific structure of the leading coefficient matrix, we refer to these elliptic systems as ``weakly coupled''.
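For instance, when $d = 2$ and $a^{\alpha \beta} = \delta^{\alpha \beta}$, the system described by \eqref{LVWDefn} reads
\begin{align*}
\mathcal{L}_V \V{u}
= \begin{pmatrix} - \Delta u^1 + V_{11} u^1 + V_{12} u^2 \\ - \Delta u^2 + V_{21} u^1 + V_{22} u^2 \end{pmatrix},
\end{align*}
so that the components of $\V{u}$ interact only through the zeroth-order potential $V$.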
We begin with a lemma that will be applied in the Moser boundedness arguments below.
\begin{lem}[Local boundedness lemma]
\label{winW12V}
If $\V{u} \in W^{1,2}_V(B_2)$, then for any $k > 0$, it holds that $\displaystyle\V{w} = \V{w}\pr{k} := \frac{\V{u}}{\sqrt{\abs{\V{u}}^2 + k^2}} \in W^{1,2}_V(B_2)$ as well.
\end{lem}
\begin{proof}
Since $\V{u} \in W^{1,2}_V(B_2)$, then $\V{u}$ is the restriction to $B_2$ of an element of $W^{1,2}_V(\ensuremath{\mathbb{R}}^n)$, which we again denote by $\V{u}$, so there exists $\set{\V{u}_j}_{j=1}^\infty \subset C^\infty_c(\ensuremath{\mathbb{R}}^n)$ so that $\V{u}_j \to \V{u}$ in $W^{1,2}_V(\ensuremath{\mathbb{R}}^n)$.
For each $j \in \ensuremath{\mathbb{N}}$, define $\V{w}_j := \V{u}_j \pr{\abs{\V{u}_j}^2 + k^2}^{-\frac 1 2} \in C^\infty_c(\ensuremath{\mathbb{R}}^n)$.
We will show that $\V{w}_j \to \V{w}$ in $W^{1,2}_V(B_2)$.
To simplify notation, let $v_j = \pr{\abs{\V{u}_j}^2 + k^2}^{\frac 1 2}$ and $v = \pr{\abs{\V{u}}^2 + k^2}^{\frac 1 2}$.
Observe that
\begin{equation}
\label{wnwDiff}
\begin{aligned}
\V{w}_j - \V{w}
&= \frac{\V{u}_j}{ v_j} - \frac{\V{u}}{v}
= \frac{\V{u}_j - \V{u}}{ v_j}
+ \frac{\V{u}\pr{\abs{v }^2 -\abs{v_j}^2}}{ v_j v\pr{v_j + v}}
= v_j^{-1} \brac{\V{u}_j - \V{u}
+ \V{w} \innp{\V{u} - \V{u}_j, \frac{\V{u} + \V{u}_j}{v + v_j}}}.
\end{aligned}
\end{equation}
Since $\displaystyle \abs{\V{w}}, \abs{\frac{\V{u} + \V{u}_j}{v + v_j}} \le 1$ and $v_j \ge k$ for all $j \in \ensuremath{\mathbb{N}}$, then $\displaystyle \abs{\V{w}_j - \V{w}} \le 2 k^{-1} \abs{\V{u}_j - \V{u}}$ and it follows from a Sobolev embedding that
\begin{align*}
\pr{\int_{B_2} \abs{ \V{w}_j - \V{w}}^{2^*}}^{\frac{n-2}{n}}
&\lesssim k^{-2} \pr{ \int_{B_2} \abs{ \V{u}_j - \V{u}}^{2^*}}^{\frac{n-2}{n}}
\le k^{-2} \pr{ \int_{\ensuremath{\mathbb{R}}^n} \abs{ \V{u}_j - \V{u}}^{2^*}}^{\frac{n-2}{n}}
\le k^{-2} c_n \int_{\ensuremath{\mathbb{R}}^n} \abs{D\V{u}_j - D\V{u}}^2.
\end{align*}
Since $\V{u}_j \to \V{u}$ in $W^{1,2}_V(\ensuremath{\mathbb{R}}^n)$, then $D\V{u}_j \to D\V{u}$ in $L^2(\ensuremath{\mathbb{R}}^n)$, so we deduce that both $\V{w}_j \to \V{w}$ in $L^{2^*}(B_2)$ and $\V{u}_j \to \V{u}$ in $L^{2^*}(B_2)$.
Therefore, there exist subsequences $\set{\V{w}_{j_i}}_{i = 1}^\infty$ and $\set{\V{u}_{j_i}}_{i = 1}^\infty$ so that $\V{w}_{j_i} \to \V{w}$ a.e. and $\V{u}_{j_i} \to \V{u}$ a.e.
In particular, we relabel so that $\V{w}_{j} \to \V{w}$ a.e. and $\V{u}_{j} \to \V{u}$ a.e.
From \eqref{wnwDiff} and that $k \le v_j$, we have
\begin{align*}
k^2 \abs{V^{\frac 1 2}\pr{\V{w}_j - \V{w}}}^2
&\le \innp{V\pr{\V{u}_j - \V{u}
+ \V{w} \innp{\V{u} - \V{u}_j, \frac{\V{u} + \V{u}_j}{v + v_j}}},
\V{u}_j - \V{u}
+ \V{w} \innp{\V{u} - \V{u}_j, \frac{\V{u} + \V{u}_j}{v + v_j}}} \\
&= \abs{V^{\frac 1 2}\pr{\V{u}_j - \V{u}}}^2
+ \abs{V^{\frac 1 2} \V{w}}^2 \innp{\V{u} - \V{u}_j, \frac{\V{u} + \V{u}_j}{v + v_j}}^2
+ 2\innp{V\pr{\V{u}_j - \V{u}} , \V{w}} \innp{\V{u} - \V{u}_j, \frac{\V{u} + \V{u}_j}{v + v_j}} \\
&\le 2 \abs{V^{\frac 1 2}\pr{\V{u}_j - \V{u}}}^2
+ 2 \abs{V} \abs{\V{u} - \V{u}_j}^2 \abs{\V{w}}^2 \abs{\frac{\V{u} + \V{u}_j}{v + v_j}}^2 \\
&\le 2 \abs{V^{\frac 1 2}\pr{\V{u}_j - \V{u}}}^2
+ 2 \abs{V} \abs{\V{u}_j - \V{u}}^2 ,
\end{align*}
where we have applied Cauchy-Schwarz and that $\displaystyle \abs{\V{w}}, \abs{\frac{\V{u} + \V{u}_j}{v + v_j}} \le 1$ for all $j \in \ensuremath{\mathbb{N}}$.
It follows from H\"older and Sobolev inequalities that
\begin{align*}
\int_{B_2} \innp{V\pr{\V{w}_j - \V{w}}, \V{w}_j - \V{w}}
&\le 2 k^{-2} \int_{B_2} \abs{V^{\frac 1 2}\pr{\V{u}_j - \V{u}}}^2
+ 2 k^{-2} \int_{B_2} \abs{V} \abs{\V{u}_j - \V{u}}^2 \\
&\le 2 k^{-2} \int_{B_2} \abs{V^{\frac 1 2}\pr{\V{u}_j - \V{u}}}^2
+ 2 k^{-2} \pr{\int_{B_2} \abs{V}^{\frac n 2}}^{\frac{2}{n}} \pr{\int_{\ensuremath{\mathbb{R}}^n} \abs{\V{u}_j - \V{u}}^{2^*}}^{\frac{n-2}{n}} \\
&\le 2 k^{-2} \int_{\ensuremath{\mathbb{R}}^n} \abs{V^{\frac 1 2}\pr{\V{u}_j - \V{u}}}^2
+ 2 k^{-2} c_n \norm{V}_{L^{\frac n 2}(B_2)} \int_{\ensuremath{\mathbb{R}}^n} \abs{D\V{u}_j - D\V{u}}^2.
\end{align*}
Since $\V{u}_j \to \V{u}$ in $W^{1,2}_V(\ensuremath{\mathbb{R}}^n)$, this shows that $V^{\frac 1 2}\V{w}_j \to V^{\frac 1 2}\V{w}$ in $L^2(B_2)$, or $\V{w}_j \to \V{w}$ in $L^2_V\pr{B_2}$.
Now we consider the gradient terms.
Since
\begin{align*}
D\V{w}_j
&= \frac{D\V{u}_j}{ v_j} - \frac{\V{u}_j \innp{D \V{u}_j, \V{u}_j}}{ v_j^3}
= v_j^{-1}\brac{ D\V{u}_j - \V{w}_j \innp{D \V{u}_j, \V{w}_j}}
\end{align*}
and analogously for $D\V{w}$, then
\begin{align*}
D\V{w}_j - D\V{w}
&= \frac{D\V{u}_j}{ v_j} - \frac{D\V{u}}{v} + \frac{\V{w} \innp{D \V{u}, \V{w}}}{v} - \frac{\V{w}_j \innp{D \V{u}_j, \V{w}_j}}{ v_j}
= A_j + B_j,
\end{align*}
where
\begin{align*}
A_j &= v_j^{-1}\brac{ D\V{u}_j- D\V{u}
- \V{w}_j \innp{D \V{u}_j - D \V{u}, \V{w}_j}} \\
B_j &= \pr{\V{w}_j \innp{D \V{u}, \V{w}_j} - D\V{u}} \innp{\frac{\V{u}_j - \V{u}}{v_j v}, \frac{\V{u}_j + \V{u}}{v_j + v}}
+ v^{-1} \brac{ \pr{\V{w} - \V{w}_j} \innp{D \V{u}, \V{w}}
+ \V{w}_j \innp{D \V{u}, \V{w} - \V{w}_j} }.
\end{align*}
This shows that
\begin{align*}
\lim_{j \to \infty} \int_{B_2} \abs{D\V{w}_j - D\V{w} }^2
&\lesssim \lim_{j \to \infty} \int_{B_2} \abs{A_j}^2
+ \lim_{j \to \infty} \int_{B_2} \abs{B_j}^2 .
\end{align*}
Since $v_j, v \ge k$, $\abs{\V{w}_j}, \abs{\V{w}} \le 1$ for all $j \in \ensuremath{\mathbb{N}}$, and $D\V{u}_j \to D\V{u}$ in $L^2(B_2)$, then
\begin{align*}
\lim_{j \to \infty }\int_{B_2} \abs{A_j}^2
&\lesssim k^{-2} \lim_{j \to \infty} \int_{B_2} \abs{D\V{u}_j- D\V{u}}^2
= 0.
\end{align*}
On the other hand, since $\abs{B_j} \le 8 k^{-1} \abs{D\V{u}}$ and $\abs{D\V{u}}^2 \in L^1(\ensuremath{\mathbb{R}}^n)$, then the Lebesgue Dominated Convergence Theorem shows that
\begin{align*}
\lim_{j \to \infty }\int_{B_2} \abs{B_{j}}^2
&= \int_{B_2} \lim_{j \to \infty } \abs{B_{j}}^2 .
\end{align*}
Because $v_j, v \ge k$, $\abs{\V{w}_j}, \abs{\V{w}} \le 1$ for all $j \in \ensuremath{\mathbb{N}}$, $\abs{D\V{u}} < \infty$ a.e., $\V{w}_{j} \to \V{w}$ a.e., and $\V{u}_{j} \to \V{u}$ a.e., then $\displaystyle \lim_{j \to \infty} B_{j} = 0$ a.e. and we deduce that
\begin{align*}
\lim_{j \to \infty }\int_{B_2} \abs{B_{j}}^2
&= 0.
\end{align*}
Thus, we conclude that $D\V{w}_j \to D\V{w}$ in $L^2(B_2)$.
In combination with the fact that $\V{w}_j \to \V{w}$ in $L^2_V(B_2)$, we have shown that $\V{w}_j \to \V{w}$ in $W^{1,2}_{V}(B_2)$, as required.
\end{proof}
With the above lemma, we prove local boundedness of solutions to weakly coupled systems.
\begin{prop}[Local boundedness of vector solutions]
\label{MoserBounded}
With $B_r = B(0, r)$, assume that $B_{4R} \subset \Omega$.
Let $\mathcal{L}_V$ be as given in \eqref{LVWDefn}, where $A$ is bounded and uniformly elliptic as in \eqref{Abd} and \eqref{ellip}, and $V$ satisfies \eqref{VAssump}.
Assume that $\V{f} \in L^\ell(B_{2R})$ for some $\ell > \frac n 2$.
Let $\V{u} \in W_{V} ^{1, 2} (B_{4R})$ satisfy $\mathcal{L}_V \V{u} = \V{f}$ in the weak sense on $B_{2R}$.
That is, for any $\V{\phi} \in W^{1,2}_{V,0}(B_{2R})$, it holds that
\begin{equation}
\label{WeakSolDef}
\int_{B_{2R}} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha\V{\phi}} + \int_{B_{2R}} \innp{V \V{u}, \V{\phi}}
= \int_{B_{2R}} \innp{\V{f}, \V{\phi}}.
\end{equation}
Then, for any $q \geq 1$, we have
\begin{equation}
\label{IB1}
\norm{\V{u}}_{L^\infty(B_{R})} \leq C \brac{R^{-\frac{n}{q}} \norm{\V{u}}_{L^q(B_{2R})} + R^{2 - \frac n \ell} \|\V{f}\|_{L^\ell(B_{2R})}},
\end{equation}
where the constant $C$ depends on $n$, $d$, $\lambda$, $\Lambda$, $q$ and $\ell$.
\end{prop}
\begin{rem}
\label{MoserBoundCorRem}
Note that the constant $C$ in Proposition \ref{MoserBounded} is independent of $V$ and $R$.
Therefore, this result establishes that estimate \eqref{eq3.47} in (\rm{IB}) holds for our weakly coupled systems.
\end{rem}
\begin{rem}
\label{positiveVExplanation}
This statement assumes that $V \in L^{\frac n 2}_{\loc}\pr{\ensuremath{\mathbb{R}}^n} \cap \MC{ND}$, but the proof only uses that $V$ is positive semidefinite.
As described in previous sections, the additional conditions on $V$ ensure that each $W^{1,2}_V\pr{\Omega}$ is a well-defined inner product space over which we can talk about weak solutions.
As such, we maintain that $V$ satisfies \eqref{VAssump}.
However, if a different class of solution functions were considered, this proof would carry through assuming only that $V \ge 0$ a.e.
\end{rem}
\begin{rem}
\label{radiiRemark}
There is nothing special about the choice of $R$, $2R$ and $4R$ here except for the ordering $R < 2R < 4R$, the scale of differences $4R - 2R, 2R - R \simeq R$, and that this statement matches that of \eqref{eq3.47} in (IB).
In applications of this result, we may modify the choices of radii while maintaining the ordering and difference properties, keeping in mind that the associated constants will change in turn.
\end{rem}
\begin{proof}
We assume first that $R = 1$.
For some $k > 0$ to be specified below, define the scalar function
$$v = v\pr{k} := \pr{\abs{\V{u}}^2 + k^2}^{\frac 1 2}$$
and the associated vector function
$$\V{w} = \V{w}\pr{k} := \V{u} \, v^{-1}.$$
Observe that $v \ge k > 0$ and since $v > \abs{\V{u}}$, then $\abs{\V{w}} \le 1$.
In fact, since $v \le \abs{\V{u}} + k$ and $\V{u} \in W^{1,2}_V(B_2)$ implies that $\V{u} \in L^{2}(B_2)$, then $v \in L^2(B_2)$.
Moreover, since $\displaystyle D_\beta v = \innp{D_\beta \V{u}, \V{w}}$, then $\abs{D_\beta v} \le \abs{D_\beta \V{u}}$ and we deduce that each $D_\beta v \in L^2\pr{B_2}$.
In particular, $v \in W^{1,2}(B_2)$.
An application of Lemma \ref{winW12V} implies that $\V{w} \in W^{1,2}_V(B_2)$.
In particular, since $\V{w}$ and $v^{-1}$ are bounded, then $\displaystyle D_\alpha \V{w} = \brac{ D_\alpha\V{u} - \V{w} \innp{D_\alpha \V{u}, \V{w}}} v^{-1} \in L^2(B_2)$.
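For the reader's convenience, we record the computations behind these identities, cf. the proof of Lemma \ref{winW12V}:
\begin{align*}
D_\beta v = \frac{\innp{D_\beta \V{u}, \V{u}}}{v} = \innp{D_\beta \V{u}, \V{w}},
\qquad
D_\alpha \V{w} = \frac{D_\alpha \V{u}}{v} - \frac{\V{u} \, D_\alpha v}{v^2}
= \brac{ D_\alpha\V{u} - \V{w} \innp{D_\alpha \V{u}, \V{w}}} v^{-1}.
\end{align*}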
Let $\varphi \in C^\infty_c(B_2)$ satisfy $\varphi \ge 0$ in $B_2$ and note that $D\pr{\V{w} \,\varphi} \in L^{2}(B_2)$.
Then
\begin{equation}
\label{vkPDE1}
\begin{aligned}
\int_{B_2} a^{\alpha \beta} D_\beta v \, D_\alpha \varphi
&= \int_{B_2} a^{\alpha \beta} \innp{D_\beta \V{u}, \V{w}} \, D_\alpha \varphi \\
&= \int_{B_2} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha \pr{\V{w} \varphi} }
- \int_{B_2} a^{\alpha \beta} \innp{D_\beta \V{u}, v \, D_\alpha \V{w}} \, v^{-1} \varphi.
\end{aligned}
\end{equation}
To simplify the last term, observe that
\begin{align*}
\innp{D_\beta \V{u}, v \, D_\alpha \V{w} }
&= \innp{D_\beta \V{u}, D_\alpha\V{u} - \V{w} \innp{D_\alpha \V{u}, \V{w}}}
= \innp{D_\beta \V{u}, D_\alpha\V{u}}
- \innp{D_\beta \V{u}, \V{w}} \innp{D_\alpha \V{u}, \V{w}},
\end{align*}
while
\begin{align*}
\innp{v \, D_\beta \V{w}, v \, D_\alpha \V{w}}
&= \innp{D_\beta\V{u} - \V{w} \innp{D_\beta \V{u}, \V{w}}, D_\alpha\V{u} - \V{w} \innp{D_\alpha \V{u}, \V{w}}} \\
&= \innp{D_\beta\V{u}, D_\alpha\V{u}}
- \pr{1+ k^2v^{-2}}\innp{D_\beta\V{u}, \V{w}} \innp{D_\alpha \V{u}, \V{w}}.
\end{align*}
By combining the previous two expressions, we see that
\begin{align*}
a^{\alpha \beta} \innp{D_\beta \V{u}, v \, D_\alpha \V{w}} v^{-1} \varphi
&= a^{\alpha \beta} \innp{v \, D_\beta \V{w}, v\, D_\alpha \V{w}} v^{-1} \varphi
+ a^{\alpha \beta} \innp{D_\beta\V{u}, \V{w}} \innp{D_\alpha \V{u}, \V{w}} k^2v^{-3} \varphi,
\end{align*}
where all of the terms in these expressions are integrable since $D \V{u}, v \, D\V{w} \in L^2(B_2)$ and $\V{w}, v^{-1}, \varphi, a^{\alpha \beta} \in L^\infty(B_2)$.
Then \eqref{vkPDE1} becomes
\begin{equation*}
\begin{aligned}
\int_{B_2} a^{\alpha \beta} D_\beta v \, D_\alpha \varphi
&= \int_{B_2} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha \pr{\V{w} \varphi} } \\
&- \int_{B_2} a^{\alpha \beta} \innp{D_\beta \V{w}, D_\alpha \V{w}} \, v \varphi
- \int_{B_2} a^{\alpha \beta} \innp{D_\beta\V{u}, \V{w}} \innp{D_\alpha \V{u}, \V{w}} \, k^2 v^{-3} \varphi \\
&\le \int_{B_2} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha \pr{\V{w} \varphi} },
\end{aligned}
\end{equation*}
where we have used that $a^{\alpha \beta}$ is elliptic to eliminate the last two terms.
Since $\varphi \in C^\infty_c(B_2)$ and $\V{w} \in W^{1,2}_V(B_2)$, then A4) implies that
$\displaystyle \V{\phi} := \V{w} \varphi= \frac{\V{u}\varphi}{v} \in W^{1,2}_{V,0}(B_2)$, so we may use \eqref{WeakSolDef} with $\displaystyle \V{\phi}$ to get
\begin{align*}
\int_{B_2} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha \pr{\V{w} \varphi}}
&= \int_{B_2} \innp{\V{f}, \V{w}} \varphi
- \int_{B_2} \innp{V \V{u}, \V{u}} \frac{\varphi}{v}
\le \int_{B_2} \innp{\V{f}, \V{w}} \varphi,
\end{align*}
since $V$ is positive semidefinite.
By setting $F = \innp{\V{f}, \V{w}} \in L^\ell(B_2)$ and combining the previous two inequalities, we see that for any $\varphi \in C^\infty_c(B_2)$ with $\varphi \ge 0$, it holds that
\begin{align}
\label{WeakSubHarmonIneq}
\int_{B_2} a^{\alpha \beta} D_\beta v \, D_\alpha \varphi
&\le \int_{B_2} F \varphi.
\end{align}
An application of Lemma \ref{densityLemma} implies that \eqref{WeakSubHarmonIneq} holds for any $\varphi \in W^{1,2}_0\pr{B_2}$ with $\varphi \ge 0$ a.e., and we then have that $- \di\pr{A \nabla v} \le F$ in the standard weak sense on $B_2$.
Then \cite[Theorem 4.1]{HL11}, for example, shows that for any $q \ge 1$,
\begin{align*}
\norm{\V{u}}_{L^\infty(B_1)}
\le \norm{v}_{L^\infty(B_1)}
\le C \brac{ \norm{v}_{L^q(B_2)} + \norm{F}_{L^\ell(B_2)}}
\le C \brac{ \norm{\V{u}}_{L^q(B_2)} + \norm{k}_{L^q(B_2)} + \norm{F}_{L^\ell(B_2)}},
\end{align*}
where $C$ depends on $n, d, \lambda, \Lambda, q, \ell$.
With $q = 2$, the right-hand side is finite and therefore $\V{u} \in L^\infty(B_1)$.
Setting $\displaystyle k = \frac{\norm{\V{u}}_{L^\infty(B_1)}}{2C \abs{B_2}^{\frac 1 q}}$, so that $C \norm{k}_{L^q(B_2)} = \frac 1 2 \norm{\V{u}}_{L^\infty(B_1)}$ may be absorbed into the left-hand side, and noting that $\displaystyle \norm{F}_{L^\ell(B_2)} \le \|\V{f}\|_{L^\ell(B_2)}$ since $\abs{\V{w}} \le 1$, we get
\begin{align*}
\norm{\V{u}}_{L^\infty(B_1)}
\le 2 C \brac{ \norm{\V{u}}_{L^q(B_2)} + \|\V{f}\|_{L^\ell(B_2)}},
\end{align*}
as required.
For the general case of $R \ne 1$, we rescale.
That is, we apply the result above with $\V{u}_R(x) = \V{u}(Rx)$, $A_R(x) = A(Rx)$, $V_R (x) = R^2 V(Rx),$ and $\V{f}_R (x) = R^2 \V{f} (Rx)$ to get \eqref{IB1} in general.
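Indeed, a change of variables shows that $\V{u}_R \in W^{1,2}_{V_R}(B_4)$ is a weak solution to $\mathcal{L}_{V_R} \V{u}_R = \V{f}_R$ in $B_2$, where $\mathcal{L}_{V_R}$ has leading coefficients $A_R$, so the case $R = 1$ gives
\begin{align*}
\norm{\V{u}}_{L^\infty(B_R)}
= \norm{\V{u}_R}_{L^\infty(B_1)}
\le 2C \brac{ \norm{\V{u}_R}_{L^q(B_2)} + \|\V{f}_R\|_{L^\ell(B_2)}}
= 2C \brac{ R^{-\frac n q} \norm{\V{u}}_{L^q(B_{2R})} + R^{2 - \frac n \ell} \|\V{f}\|_{L^\ell(B_{2R})}}.
\end{align*}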
\end{proof}
\begin{lem}
\label{densityLemma}
Let $C^\infty_c(\Omega)^+ = C^\infty_c(\Omega) \cap \set{\varphi : \varphi \ge 0 \text{ in } \Omega}$ and let $W^{1,2}_0(\Omega)^+ = W^{1,2}_0(\Omega) \cap \set{u : u \ge 0 \text{ a.e. in } \Omega}$.
For any ball $B \subset \ensuremath{\mathbb{R}}^n$, $C^\infty_c(B)^+$ is dense in $W^{1,2}_0(B)^+$.
\end{lem}
\begin{proof}
Assume that $B = B_1$ and let $u \in W^{1,2}_0(B_1)^+$.
Let $\phi \in C_c^\infty(B_1)$ be a standard mollifier and set $\phi_t = t^{-n} \phi\pr{\frac{\cdot}{t}} \in C_c^\infty(B_t)$.
For every $k \in \ensuremath{\mathbb{N}}$, define $v_k = \phi_{k^{-1}} \ast u \in C^\infty_c\pr{B_{1 + k^{-1}}}$, then set $u_k = v_k\pr{\pr{1 + k^{-1}}\cdot} \in C^\infty_c\pr{B_1}$.
Since $u \in W^{1,2}_0(B_1)^+$, then $u_k \ge 0$ in $B_1$ so that $\set{u_k}_{k = 1}^\infty \subset C^\infty_c\pr{B_1}^+$.
The aim is to show that $u_k \to u$ in $W^{1,2}(B_1)$.
Let $\varepsilon > 0$ be given.
Since $u$ may be extended by zero to a function defined on all of $\ensuremath{\mathbb{R}}^n$, then regarding $u \in L^2\pr{\ensuremath{\mathbb{R}}^n}$, there exists $g = g_\varepsilon \in C_c\pr{\ensuremath{\mathbb{R}}^n}$ such that $\norm{u - g}_{L^2\pr{\ensuremath{\mathbb{R}}^n}} < \varepsilon$.
As $g$ is uniformly continuous, then there exists $K \in \ensuremath{\mathbb{N}}$ so that $\norm{g\pr{\pr{1 + k^{-1}} \cdot} - g}_{L^2\pr{\ensuremath{\mathbb{R}}^n}} < \varepsilon$ whenever $k \ge K$.
By extending all functions to $\ensuremath{\mathbb{R}}^n$, we see that
\begin{align*}
\norm{u_k - u}_{L^2(B_1)}
&\le \norm{v_k\pr{\pr{1 + k^{-1}} \cdot} - u\pr{\pr{1 + k^{-1}} \cdot}}_{L^2(\ensuremath{\mathbb{R}}^n)}
+ \norm{u\pr{\pr{1 + k^{-1}} \cdot} - g\pr{\pr{1 + k^{-1}} \cdot}}_{L^2(\ensuremath{\mathbb{R}}^n)} \\
&+ \norm{g\pr{\pr{1 + k^{-1}} \cdot} - g}_{L^2(\ensuremath{\mathbb{R}}^n)}
+ \norm{g - u}_{L^2(\ensuremath{\mathbb{R}}^n)} \\
&= \frac{\norm{v_k- u}_{L^2(\ensuremath{\mathbb{R}}^n)} + \norm{u- g}_{L^2(\ensuremath{\mathbb{R}}^n)}}{\pr{1 + k^{-1}}^{\frac n 2} } + \norm{g\pr{\pr{1 + k^{-1}} \cdot} - g}_{L^2(\ensuremath{\mathbb{R}}^n)}
+ \norm{g - u}_{L^2(\ensuremath{\mathbb{R}}^n)} .
\end{align*}
Since $v_k \to u$ in $L^2(\ensuremath{\mathbb{R}}^n)$, then there exists $M \in \ensuremath{\mathbb{N}}$ so that $\norm{v_k - u}_{L^2(\ensuremath{\mathbb{R}}^n)} < \varepsilon$ whenever $k \ge M$.
In particular, if $k \ge \max\set{K, M}$, then $\norm{u_k - u}_{L^2(B_1)} < 4 \varepsilon$, so we deduce that $u_k \to u$ in $L^2\pr{\ensuremath{\mathbb{R}}^n}$ and hence in $L^2\pr{B_1}$.
Since $\nabla v_k = \phi_{k^{-1}} \ast \nabla u$ in $B_{1 + k^{-1}}$, then an analogous argument shows that $\nabla u_k \to \nabla u$ in $L^2\pr{B_1}$, completing the proof.
\end{proof}
The next main result of this section is the following H\"older continuity result.
\begin{prop}[H\"older continuity]
\label{HolderContThm}
With $B_r = B(0, r)$, assume that $B_{2R_0} \subset \Omega$.
Let $\mathcal{L}_V$ be as given in \eqref{LVWDefn}, where $A$ is bounded and uniformly elliptic as in \eqref{Abd} and \eqref{ellip}, and $V \in L^{\frac n 2 +}_{\loc}\pr{\ensuremath{\mathbb{R}}^n} \cap \MC{ND}$.
Assume that $\V{u} \in W_{V} ^{1, 2}(B_{2R_0})$ is a weak solution to $\mathcal{L}_V \V{u} = 0$ in $B_{3R_0/2}$.
Then there exist constants $c_1, c_2, c_3 > 0$, all depending on $n$, $p$, $\lambda$, and $\Lambda$, such that if
$$\eta := -c_1 \brac{\log \pr{\min\set{ \frac {c_2} {R_0}\|V\|_{L^p(B_{R_0})} ^{-\frac{1}{2 - \frac{n}{p}}}, c_3, \frac 1 2 }}}^{-1} \in \pr{ 0, 1},$$
then for any $R \le R_0$,
\begin{equation}
\label{HolderCont}
\sup_{\substack{x, y \in B_{R/2} \\ x \neq y}} \frac{\abs{\V{u} (x) - \V{u}(y)}}{|x - y|^\eta}
\leq 4 R^{-\eta} \|\V{u}\|_{L^\infty(B_R)}.
\end{equation}
In fact, for any $q \ge 1$, there exists $c_4\pr{n, q}$ so that
\begin{equation}
\label{HolderContp}
\sup_{\substack{x, y \in B_{R/2} \\ x \neq y}} \frac{\abs{\V{u} (x) - \V{u}(y)}}{|x - y|^\eta}
\leq c_4 R^{-\eta - \frac n q} \|\V{u}\|_{L^q(B_{3R/2})}.
\end{equation}
\end{prop}
\begin{rem}
\label{HolderRem}
We point out that the assumption on $V$ in this proposition is stronger than in previous statements.
First, we now need $V \in L^{\frac n 2 +}_{\loc}$, as opposed to $V \in L^{\frac n 2}_{\loc}$, in order to apply the Harnack inequality -- a crucial step in the proof.
Second, the assumption that $V$ is positive semidefinite is used in the application of Proposition \ref{MoserBounded}.
Finally, the full power of $V \in \MC{ND}$ is used to ensure that the spaces $W_{V} ^{1, 2}$ are well-defined.
However, if we were to use the spaces $W^{1,2}$ or $Y^{1,2}$ in place of $W_{V} ^{1, 2}$ to define our weak solutions, it may be possible to drop the requirement that $V \in \MC{ND}$ and establish \eqref{HolderContp} by only assuming that $V$ is positive semidefinite.
In fact, if we knew a priori that the weak solution is bounded (and therefore did not need to resort to Proposition \ref{MoserBounded}), we could prove \eqref{HolderCont} by only assuming that $V \in L^{\frac n 2 +}_{\loc}$ without imposing that $V$ is positive semidefinite anywhere.
\end{rem}
\begin{rem}
\label{HolderRemark}
Although the choice of radii in this statement does not match those in the statement of the H\"older continuity assumption \eqref{eq3.48} from (H), this presentation suits the proof well.
As usual, the radii may be modified to give precisely \eqref{eq3.48} from (H), but we will not do that here.
\end{rem}
The proof was inspired by the arguments in \cite{Caf82} (see also \cite{Pin06} for a more detailed account of this method), which proves H\"{o}lder continuity for very general nonlinear elliptic systems.
To prove this result, we will carefully iterate the following lemma.
\begin{lem}[Iteration lemma]
\label{itLemma}
Let $B_r = B(0, r)$.
Let $\rho \le 1$ and $\V{\nu}_* \in \ensuremath{\mathbb{R}}^d$ with $|\V{\nu}_*| \leq 2$.
Let $\mathcal{L}_V$ be as given in \eqref{LVWDefn}, where $A$ is bounded and uniformly elliptic as in \eqref{Abd} and \eqref{ellip}, and $V \in L^{\frac n 2 +}_{\loc}\pr{\ensuremath{\mathbb{R}}^n} \cap \MC{ND}$.
Assume that $\V{u} \in W_{V} ^{1, 2}(B_{3\rho})$ is a weak solution to $\mathcal{L}_V \V{u} = -V \V{\nu}_*$ in $B_{2\rho}$.
If $\|\V{u}\|_{L^\infty(B_\rho)} \leq M \leq 1$, then there exists $\delta = \delta(n, p, \lambda, \Lambda) \in (0, 1)$ and a universal constant $c_0 > 0$ so that for any $0 < \theta \le 1$, it holds that
\begin{equation}
\label{CaffIneq1}
\sup_{x \in B_{\theta\rho/2}}\abs{\V{u}(x) - \delta a_{B_{3\theta\rho/4}} \V{u}} \leq M (1 - \delta) + c_0 M^{\frac 1 2} \pr{\theta\rho}^{1-\frac{n}{2p}} \|V\|_{L^p(B_1)}^{\frac 1 2}.
\end{equation}
\end{lem}
Before we prove this lemma, let us briefly discuss its application.
To run the arguments in \cite{Caf82}, we look at functions of the form $\V{u}_k = \V{u} - \V{\nu}_k$ for constant vectors $\V{\nu}_k$ to be inductively selected, where $\V{u}$ itself satisfies $\mathcal{L}_V\V{u} = 0$.
We then have
\begin{align*}
\mathcal{L}_V\V{u}_k
&= - \di \pr{A \nabla \V{u}_k} + V \V{u}_k
= - \di \pr{A \nabla \V{u}} + V \pr{\V{u} - \V{\nu}_k}
= \mathcal{L}_V\V{u} - V \V{\nu}_k
= - V \V{\nu}_k.
\end{align*}
\begin{proof}
For some constant vector $\hat{\nu} \in \mathbb{S}^{d-1} \cup \set{\V{0}}$ to be determined later, set
$$h(x) = \frac12 M^2 + M - \frac12 \abs{\V{u}(x)}^2 - \innp{\V{u}(x), \hat{\nu}} \ge 0.$$
Since $\V{u} \in W^{1,2}_V(B_{2\rho}) \cap L^\infty(B_\rho)$ by assumption, then $h \in W^{1,2}(B_{\rho}) \cap L^\infty(B_\rho)$.
For any $\varphi \in C^\infty_c\pr{B_{\rho}}^+$, it holds that
\begin{align*}
\int_{B_{\rho}} a^{\alpha \beta} D_\beta h \, D_\alpha \varphi
&= - \int_{B_{\rho}} a^{\alpha \beta} \innp{D_\beta \V{u}, \V{u}} \, D_\alpha \varphi
- \int_{B_{\rho}} a^{\alpha \beta} \innp{D_\beta\V{u}, \hat{\nu}} \, D_\alpha \varphi \\
&= - \int_{B_{\rho}} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha \brac{\varphi\pr{\V{u} + \hat{\nu}}} }
+ \int_{B_{\rho}} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha \V{u}} \, \varphi.
\end{align*}
Since $\varphi\pr{\V{u} + \hat{\nu}} \in W^{1,2}_{V,0}(B_{\rho})$ by \eqref{eq2.7} in A4) and $\mathcal{L}_V \V{u} = -V \V{\nu}_*$ weakly in $B_\rho$, then
\begin{align*}
\int_{B_{\rho}} a^{\alpha \beta} D_\beta h \, D_\alpha \varphi
&= \int_{B_{\rho}} \innp{V \V{u}, \varphi\pr{\V{u} + \hat{\nu}}}
+ \int_{B_{\rho}} \innp{V \V{\nu}_*, \varphi\pr{\V{u} + \hat{\nu}}}
+ \int_{B_{\rho}} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha \V{u}} \, \varphi \\
&= \int_{B_{\rho}} \brac{ \innp{V \V{u}, \V{u} } + \innp{V \V{u}, \hat{\nu}} + \innp{V \V{\nu}_*, \V{u} }+ \innp{V \V{\nu}_*, \hat{\nu}}} \varphi
+ \int_{B_{\rho}} a^{\alpha \beta} \innp{D_\beta \V{u}, D_\alpha \V{u}} \, \varphi \\
&\ge - 5 \int_{B_{\rho}} \abs{V} \varphi,
\end{align*}
since $\abs{\V{u}}, \abs{\hat{\nu}} \le 1$, $\abs{\V{\nu}_*} \le 2$, and $A$ is elliptic.
An application of Lemma \ref{densityLemma} implies that
$-\di\pr{A \nabla h} \geq - 5 |V| $ weakly on $B_{\rho}$.
By the weak Harnack inequality \cite[Theorem $4.15$]{HL11}, since $\abs{V} \in L^p\pr{B_{\rho}}$ for some $p > \frac n 2$, then there exists $\delta_1(n, p, \lambda, \Lambda) > 0$ so that
$$\delta_1 a_{B_{3\rho/4}} h
\le \inf_{B_{\rho/2}} h + 5 \rho^{2-\frac{n}{p}} \|V\|_{L^p(B_\rho)}
\le \inf_{B_{\rho/2}} h + 5 \rho^{2-\frac{n}{p}} \|V\|_{L^p(B_1)} .$$
Since $\frac12 (M^2 - \abs{\V{u}}^2) \geq 0$, then for any $x \in B_{\rho/2}$,
$$\delta_1\brac{M - \innp{ a_{B_{3\rho/4}} \V{u}, \hat{\nu}} }
\le \delta_1 a_{B_{3\rho/4}} h
\le \frac12 M^2 + M - \frac12 \abs{\V{u}(x)}^2 - \innp{\V{u}(x), \hat{\nu}} + 5 \rho^{2-\frac{n}{p}} \|V\|_{L^p(B_1)}.$$
Now we fix $x \in B_{\rho/2}$ and set
$\hat{\nu} = \left\{ \begin{array}{ll} \V{u}(x)/\abs{\V{u}(x)} & \text{ if } \V{u}(x) \neq 0 \\ \V{0} & \text{ otherwise} \end{array}\right.$, $r(x) = \abs{\V{u}(x)}/M$, and define the angle $\omega = \omega(x)$ so that $\innp{a_{B_{3\rho/4}} \V{u}, \V{u}(x)} = \abs{\V{u}(x)} \abs{ a_{B_{3\rho/4}} \V{u}} \cos \omega$.
Then the previous inequality may be written as
$$M \delta_1 \pr{1 - \frac{\abs{ a_{B_{3\rho/4}} \V{u}} }{M}\cos \omega }
\le M \pr{1 - r(x)}\brac{\frac M2 \pr{1 + r(x)} + 1} + 5 \rho^{2-\frac{n}{p}} \|V\|_{L^p(B_1)}.$$
Since $M\brac{\frac12 M \pr{1 + r(x)} + 1} \le M(M + 1)$ and $M \le 1$, then
$$\frac{\delta_1}{2} \pr{1 - \frac{\abs{ a_{B_{3\rho/4}} \V{u}} }{M}\cos \omega }
\le \frac{\delta_1}{M + 1} \pr{1 - \frac{\abs{ a_{B_{3\rho/4}} \V{u}} }{M}\cos \omega }
\le 1 - r(x) + \frac{5 \rho^{2-\frac{n}{p}}}{M\pr{M + 1}} \|V\|_{L^p(B_1)}.$$
Since $r(x) \in \brac{0, 1}$ and $\delta_1 \le 2$, then $\frac{\delta_1}{2}\pr{1 - r(x)} \le 1 - r(x)$ or, equivalently, $\frac{\delta_1}{2} \le 1 - r(x) + r(x) \frac{\delta_1}{2}$.
Therefore, it follows from the previous inequality that
\begin{align*}
\frac{\delta_1}{2} \pr{1 - r(x) \frac{\abs{ a_{B_{3\rho/4}} \V{u}} }{M}\cos \omega }
&\le 1 - r(x) + r(x) \frac{\delta_1}{2}\pr{1 - \frac{\abs{ a_{B_{3\rho/4}} \V{u}} }{M}\cos \omega } \\
&\le 1 - r(x) + r(x) \brac{1 - r(x) + \frac{5 \rho^{2-\frac{n}{p}}}{M\pr{M + 1}} \|V\|_{L^p(B_1)}}.
\end{align*}
Rearranging this inequality and using that $r(x) \le 1$, we see that
\begin{align*}
r(x)^2 - \frac{\delta_1}{2} r(x) \frac{\abs{ a_{B_{3\rho/4}} \V{u}} }{M} \cos \omega
&\le 1 - \frac{\delta_1}{2} + \frac{5 \rho^{2-\frac{n}{p}}}{M} \|V\|_{L^p(B_1)}.
\end{align*}
Since $ \frac{\abs{ a_{B_{3\rho/4}} \V{u}} }{M} \leq 1$, with $\delta := \frac{\delta_1}{4}$, this implies that
\begin{align*}
\frac{1}{M^2} \abs{\V{u}(x) - \delta a_{B_{3\rho/4}} \V{u}} ^2
&= r(x)^2 - 2 \delta r(x) \frac{\abs{a_{B_{3\rho/4}} \V{u} }}{M} \cos \omega + \pr{\delta \frac{\abs{a_{B_{3\rho/4}} \V{u}}}{M}} ^2 \\
&\leq 1 - 2 \delta + \delta^2 + \frac{5 \rho^{2-\frac{n}{p}}}{M} \|V\|_{L^p(B_1)}
= \pr{1 - \delta}^2 + \frac{5 \rho^{2-\frac{n}{p}}}{M} \|V\|_{L^p(B_1)}.
\end{align*}
As this inequality holds for any $x \in B_{\rho/2}$, it follows that
\begin{equation}
\label{CaffIneq2}
\sup_{x \in B_{\rho/2}}\abs{\V{u}(x) - \delta a_{B_{3\rho/4}} \V{u}}^2 \leq M^2 (1 - \delta)^2 + 5M \rho^{2-\frac{n}{p}} \|V\|_{L^p(B_1)}.
\end{equation}
Taking a square root and using that $\sqrt{a + b} \le \sqrt{a} + \sqrt{b}$ for $a, b \ge 0$ gives \eqref{CaffIneq1} with $c_0 = \sqrt 5$ when $\theta = 1$.
As all of the above arguments still hold with $\rho$ replaced by $\theta \rho$ for any $0 < \theta < 1$, we get \eqref{CaffIneq1} in general.
\end{proof}
This lemma is used to recursively define a sequence of functions and constant vectors and establish bounds for them.
\begin{lem}[Sequence lemma]
\label{seqLemma}
Let $B_r = B(0, r)$.
Let $\mathcal{L}_V$ be as given in \eqref{LVWDefn}, where $A$ is bounded and uniformly elliptic as in \eqref{Abd} and \eqref{ellip}, and $V\in L^{\frac n 2 +}_{\loc}\pr{\ensuremath{\mathbb{R}}^n} \cap \MC{ND}$.
Assume that $\V{u} \in W_{V} ^{1, 2}(B_{3})$ is a weak solution to $\mathcal{L}_V \V{u} = 0$ in $B_{2}$.
Assume further that $\|\V{u}\|_{L^\infty(B_{1})} \leq 1$.
Let $\delta = \delta\pr{n, p, \lambda, \Lambda} \in \pr{0, 1}$ and $c_0 > 0$ be as in Lemma \ref{itLemma}.
Recursively define the sequences $\set{\V{\nu}_k}_{k=0}^\infty \subset \ensuremath{\mathbb{R}}^d$ and $\set{\V{u}_k}_{k=0}^\infty \subset W^{1,2}_V(B_1)$ as follows:
Let $\V{\nu}_0 = \V{0}$, $\V{u}_0(x) = \V{u}(x)$, and for all $k \in \ensuremath{\mathbb{Z}}_{\ge 0}$, set
\begin{align*}
&\V{\nu}_{k+1} = \V{\nu}_{k} + \delta a_{B_{3/2 \pr{\theta/ 2}^{k+1}}} \V{u}_k \\
&\V{u}_{k+1}(x) = \V{u}(x) - \V{\nu}_{k+1}.
\end{align*}
If $\theta \le \min\set{\pr{\frac{ \delta}{2 c_0 \norm{V}_{L^p(B_1)}^{1/2}} }^\frac{1}{1-\frac{n}{2p}}, 2\pr{1-\frac{\delta}{2}}^\frac{1}{2-\frac{n}{p}}, 1}$, then for all $k \in \ensuremath{\mathbb{N}}$, it holds that
\begin{equation} \label{CaffInd1}
\begin{aligned}
& |\V{\nu}_k| \leq \delta \sum_{i = 0}^{k - 1} \pr{1 - \frac{\delta}{2}}^i \\
& \sup_{x \in B_{\pr{\theta/2}^k}}\abs{\V{u}_k (x)} \leq \pr{1 - \frac{\delta}{2}}^k.
\end{aligned}
\end{equation}
\end{lem}
\begin{proof}
Since $\theta \le 1$, then Lemma \ref{itLemma} is applicable with $\rho = 1$, $M = 1$, and $\V{\nu}_* = \V{0}$, so from \eqref{CaffIneq1} we get that
$$\sup_{x \in B_{\theta/2}}\abs{\V{u}(x) - \delta a_{B_{3\theta/4}} \V{u}}
\leq (1 - \delta) + c_0 {\theta}^{1-\frac{n}{2p}} \|V\|_{L^p(B_1)}^{\frac 1 2}.$$
Since $\displaystyle \theta \le \pr{\frac{ \delta}{2 c_0 \norm{V}_{L^p(B_1)}^{1/2}} }^\frac{1}{1-\frac{n}{2p}}$ implies that $c_0 \theta ^{1-\frac{n}{2p}} \|V\|_{L^p (B_1)} ^\frac12 \leq \frac{\delta}{2}$, then
$$\sup_{x \in B_{\theta/2}}\abs{\V{u}(x) - \delta a_{B_{3\theta/4}} \V{u}}
\leq \pr{1 - \frac \delta 2}.$$
By defining $\V{\nu}_1 = \delta a_{B_{3\theta/4}} \V{u}$ and $\V{u}_1(x) = \V{u}(x) - \V{\nu}_1$, we get that $\|\V{u}_1\|_{L^\infty (B_{\theta/2})} \leq \pr{1 - \frac{\delta}{2}}$, $\abs{\V{\nu}_1} \leq \delta $, and $\mathcal{L}_V\V{u}_1 = -V\V{\nu}_1$.
Thus, we may apply Lemma \ref{itLemma} again, this time with $\V{u} = \V{u}_1$, $\rho = \theta/2$, $M = 1 - \frac \delta 2$, and $\V{\nu}_* = \V{\nu}_1$.
An application of \eqref{CaffIneq1} gives
$$\sup_{x \in B_{\pr{\theta/2}^2}}\abs{\V{u}_1(x) - \delta a_{B_{3\theta^2/8}} \V{u}_1}
\leq \pr{1 - \frac \delta 2} (1 - \delta) + c_0 \pr{1 - \frac \delta 2}^{\frac 1 2} \pr{\frac{\theta}2}^{1-\frac{n}{2p}} \theta^{1-\frac{n}{2p}} \|V\|_{L^p(B_1)} ^\frac12.$$
Since $\theta \le 2\pr{1-\frac{\delta}{2}}^\frac{1}{2-\frac{n}{p}}$, we have
\begin{equation}
\label{Theta}
\pr{\frac{\theta}{2}} ^{1 - \frac{n}{2p}} \leq \pr{1 - \frac{\delta}{2} }^{\frac 1 2}
\; \text{ and } \;
c_0 \theta ^{1-\frac{n}{2p}} \|V\|_{L^p (B_1)} ^\frac12 \leq \frac{\delta}{2},
\end{equation}
where the second bound is as above.
It follows that
\begin{equation*}
\sup_{x \in B_{\pr{\theta/2}^2}}\abs{\V{u}_1(x) - \delta a_{B_{3\theta^2/8}} \V{u}_1}
\leq (1 - \delta)\pr{1 - \frac{\delta}{2}} + \frac{ \delta}{2} \pr{1 - \frac{\delta}{2}}
= \pr{1 - \frac{\delta}{2}}^2.
\end{equation*}
We now define
\begin{align*}
&\V{\nu}_2
= \V{\nu}_{1} + \delta a_{B_{3\theta^2/8}} \V{u}_1
\\
&\V{u}_2(x)
= \V{u}(x) - \V{\nu}_2
= \V{u}_1(x) - \delta a_{B_{3\theta^2/8}} \V{u}_1
\end{align*}
and we have $\|\V{u}_2\|_{L^\infty \pr{B_{\pr{\theta/2}^2}}} \leq \pr{1 - \frac{\delta}{2}}^2$ and $\abs{\V{\nu}_2} \le \abs{\V{\nu}_{1}} + \delta \abs{a_{B_{3\theta^2/8}} \V{u}_1} \leq \delta + \delta \pr{1 - \frac{\delta}{2}}$ since $\frac{3\theta^2}8 \le \frac \theta 2$ and $\|\V{u}_1\|_{L^\infty (B_{\theta/2})} \leq \pr{1 - \frac{\delta}{2}}$.
Notice that we have already proved \eqref{CaffInd1} for $k = 0, 1, 2$, so we now prove it for $k > 2$ via induction.
Assume that \eqref{CaffInd1} holds for some $k \ge 2$; then
\begin{align*}
|\V{\nu}_{k+1}|
&\leq |\V{\nu}_{k}| + \delta \abs{a_{B_{3/2 \pr{\theta/ 2}^{k+1}}} \V{u}_k}
\leq \delta \sum_{i = 0}^{k - 1} \pr{1 - \frac{\delta}{2}}^i + \delta \norm{\V{u}_k}_{L^\infty\pr{B_{3/2 \pr{\theta/ 2}^{k+1}}}} \\
&\leq \delta \sum_{i = 0}^{k - 1} \pr{1 - \frac{\delta}{2}}^i + \delta \norm{\V{u}_k}_{L^\infty\pr{B_{\pr{\theta/ 2}^{k}}}}
\leq \delta \sum_{i = 0}^{k - 1} \pr{1 - \frac{\delta}{2}}^i + \delta \pr{1 - \frac \delta 2}^k
= \delta \sum_{i = 0}^{k } \pr{1 - \frac{\delta}{2}}^i.
\end{align*}
Since
$$\abs{\V{\nu}_k} < \delta \sum_{i = 0}^{\infty} \pr{1 - \frac{\delta}{2}}^i = 2,$$
then an application of Lemma \ref{itLemma} with $\V{u} = \V{u}_k$, $\rho = \pr{\theta/2}^k$, $M = \pr{1 - \frac \delta 2}^k$, and $\V{\nu}_* = \V{\nu}_k$ gives us
\begin{align*}
\sup_{x \in B_{\pr{\theta/2}^{k+1}}}\abs{\V{u}_k(x) - \delta a_{B_{3/2\pr{\theta/2}^{k+1}}} \V{u}_k}
&\leq \pr{1 - \frac \delta 2}^k (1 - \delta) + c_0 \pr{1 - \frac \delta 2}^{\frac k 2} \theta^{1-\frac{n}{2p}} \pr{\frac \theta 2}^{k-\frac{k n}{2p}} \|V\|_{L^p(B_1)} ^\frac12.
\end{align*}
Combining this with \eqref{Theta} and that $\V{u}_{k+1} = \V{u}_k - \delta a_{B_{3/2 \pr{\theta/ 2}^{k+1}}} \V{u}_k$ shows that
\begin{equation*}
\sup_{x \in B_{\pr{\theta/2}^{k+1}}}\abs{\V{u}_{k+1}(x)}
\leq \pr{1 - \frac{\delta}{2}}^k (1 - \delta) + \pr{1 - \frac{\delta}{2}}^k \pr{\frac{\delta}{2}} = \pr{1 - \frac{\delta}{2}}^{k+1},
\end{equation*}
which completes the proof of \eqref{CaffInd1}.
\end{proof}
Using Lemma \ref{seqLemma}, we give the proof of Proposition \ref{HolderContThm}.
\begin{proof}[Proof of Proposition \ref{HolderContThm}]
Assume first that $R_0 = 2$.
Then $\V{u} \in W_{V} ^{1, 2}(B_{4})$ is a weak solution to $\mathcal{L}_V \V{u} = 0$ in $B_{3}$.
An application of Proposition \ref{MoserBounded} with modified radii (see Remark \ref{radiiRemark}) implies that $\V{u} \in L^\infty(B_2)$.
For any $R \le 2$ and $x_0 \in B\pr{0, \frac R 2}$, since $B\pr{x_0, \frac R 2} \subset B(0, 2)$, then $\V{u} \in L^\infty\pr{B\pr{x_0, \frac R 2}}$.
Define
$$\V{u}_R(x) = \V{u}\pr{x_0 + \frac R 2 x} / \norm{\V{u}}_{L^\infty\pr{B\pr{x_0, \frac R 2}}}$$
and note that $\|\V{u}_R\|_{L^\infty(B_1)} = 1$.
Since $\V{u} \in W_{V} ^{1, 2}(B_4)$ is a weak solution to $\mathcal{L}_V \V{u} = 0$ in $B_3$ by hypothesis, then it holds that $\V{u}_R \in W_{V} ^{1, 2}(B_3)$ is a weak solution to $\mathcal{L}_{V_R} \V{u}_R = 0$ in $B_2$, where $V_R(x) = \pr{\frac R 2}^2 V\pr{x_0 + \frac R 2 x}$.
Because $\norm{V_R}_{L^p(B_1)} = \pr{\frac R 2}^{2 - \frac n p} \norm{V}_{L^p\pr{B\pr{x_0, \frac R 2}}} \le \norm{V}_{L^p(B_2)}$, then with $\delta > 0$ as in Lemma \ref{itLemma}, we set
\begin{equation}
\label{thetaDefn}
\theta = \min\set{\pr{\frac{ \delta}{2 c_0 }}^\frac{1}{1-\frac{n}{2p}} \norm{V}_{L^p(B_2)} ^{-\frac{1}{2-\frac{n}{p}}}, 2\pr{1-\frac{\delta}{2}}^\frac{1}{2-\frac{n}{p}}, 1}.
\end{equation}
It follows that the hypotheses of Lemma \ref{seqLemma} are satisfied for any such $\V{u}_R$.
Define $\eta = \log \pr{1 - \frac{\delta}{2}} \brac{\log \pr{\frac \theta 2}}^{-1} > 0$ so that $\pr{1 - \frac{\delta}{2}} = \pr{\frac{\theta}{2}} ^\eta$.
Since $\delta \in \pr{0, 1}$, then $\theta \le 1 < 2\pr{1-\frac{\delta}{2}}$, so that $\eta \le 1$.
Observe that since $\delta \in (0, 1)$, then $\pr{\frac 2 \theta}^\eta = \pr{1 - \frac{\delta}{2}} ^{-1}< 2$.
For $0 < r \leq 1$, choose $k \in \ensuremath{\mathbb{Z}}_{\ge 0}$ so that
$$\pr{\frac{\theta}{2}} ^{k+1} < r \leq \pr{\frac{\theta}{2}}^k.$$
With $x_0 \in B\pr{0, \frac R 2}$ as above, let $\displaystyle \underset{r}{\osc} \, \V{u}_R = \sup_{x, y \in B_r } \abs{\V{u}_R (x) - \V{u}_R(y)}$ and observe that with the notation from Lemma \ref{seqLemma}, we have
\begin{align*}
\underset{r}{\osc} \, \V{u}_R
&\le \sup_{x, y \in B_{\pr{\theta/2}^k} } \abs{\V{u}_R(x) - \V{u}_R(y)}
= \sup_{x, y \in B_{\pr{\theta/2}^k} } \abs{\pr{\V{u}_R(x) - \V{\nu}_k} - \pr{\V{u}_R(y) - \V{\nu}_k} } \\
&= \sup_{x, y \in B_{\pr{\theta/2}^k} } \abs{\V{u}_{R,k} (x) - \V{u}_{R,k}(y)}.
\end{align*}
It then follows from an application of \eqref{CaffInd1} in Lemma \ref{seqLemma} that
\begin{equation*}
\underset{r}{\osc} \, \V{u}_R
\le 2 \pr{1 - \frac \delta 2}^{k}
= 2 \pr{\frac2{\theta}}^\eta \pr{\frac{\theta}{2}}^{\eta(k+1)}
\le 4 r ^\eta.
\end{equation*}
Take $x, y \in B\pr{0, \frac R 2}$ and set $\tilde{r} = \frac{|x-y|}{2R} < \frac 1 2$.
For any $c > 1$, it holds that $\pm \frac{x - y}{2R} \in B(0, c\tilde{r})$, so we choose $c \in (1, 2]$ for which $c \tilde{r} \le 1$.
Then we have
\begin{align*}
\abs{\V{u}_R\pr{\frac{x - y}{2R}} - \V{u}_R\pr{\frac{y - x}{2R}}}
\leq \underset{c \tilde r}{\osc} \, \V{u}_R
\leq 4 (c\tilde{r}) ^\eta
\le 4 \pr{\frac{|x-y|}{R}}^\eta.
\end{align*}
With $x_0 = \frac 1 2 (x + y) \in B\pr{0, \frac R 2}$, we have $\displaystyle \V{u}_R\pr{\frac{x - y}{2R}} = {\V{u}\pr{x_0 + R\frac{x - y}{2R}}}/{\norm{\V{u}}_{L^\infty(B(x_0, \frac R 2))}} = \V{u}(x)/\norm{\V{u}}_{L^\infty(B(x_0, \frac R 2))}$ and $\displaystyle \V{u}_R\pr{\frac{y - x}{2R}} = \V{u}(y)/\norm{\V{u}}_{L^\infty(B(x_0, \frac R 2))}$.
Therefore,
\begin{align*}
\abs{\V{u}\pr{x} - \V{u}(y)}
&= \abs{\V{u}_R\pr{\frac{x - y}{2R}} - \V{u}_R\pr{\frac{y - x}{2R}}} \norm{\V{u}}_{L^\infty(B(x_0, \frac R 2))}
\le 4 \pr{\frac{|x-y|}{R}}^\eta \norm{\V{u}}_{L^\infty(B(0, R))}.
\end{align*}
Since $x, y \in B\pr{0, \frac R 2}$ were arbitrary, the proof of \eqref{HolderCont} is complete for any $R \le 2 = R_0$.
Estimate \eqref{HolderContp} follows from \eqref{HolderCont} and an application of Proposition \ref{MoserBounded} with a modified choice of radii (again).
As usual, the case of $R_0 \ne 2$ follows from a scaling argument.
With $V_{R_0}(x) = \pr{\frac {R_0}2}^2 V\pr{\frac{R_0}2 x}$, we have
$$\norm{V_{R_0}}_{L^p(B_2)}^{-\frac{1}{2 - \frac{n}{p}}}
= \brac{\pr{\frac{R_0}2}^{2 - \frac n p} \norm{V}_{L^p(B_{R_0})}}^{-\frac{1}{2 - \frac{n}{p}}}
= \frac 2 {R_0} \norm{V}_{L^p(B_{R_0})}^{-\frac{1}{2 - \frac{n}{p}}},$$
so the definition of $\theta$ in \eqref{thetaDefn} changes accordingly.
\end{proof}
Propositions \ref{MoserBounded} and \ref{HolderContThm} (after modifying the choice of radii) show that assumptions (\rm{IB}) and (\rm{H}) hold for any operator in the class of weakly coupled Schr\"odinger systems.
Accordingly, the results of Section \ref{FundMat} hold for all such elliptic systems.
That is, the fundamental matrices associated to these systems exist and satisfy Definition \ref{d3.3} as well as the statements in Theorem \ref{t3.6}.
Finally, we point out that many of these results may be extended to weakly coupled elliptic systems with nontrivial first-order terms.
Since we do not consider such operators in this article, we do not include those details.
\section{Upper Bounds}
\label{UpBds}
We now prove an exponential decay upper bound for the fundamental matrix associated to our elliptic operator.
Going forward, the elliptic operator $\mathcal{L}_V$ is given by \eqref{elEqDef}, where the matrix $A$ satisfies ellipticity and boundedness as described by \eqref{ellip} and \eqref{Abd}, respectively.
For the zeroth order term, we assume now that $V \in {\MC{B}_p} \cap \MC{ND} \cap \MC{NC}$ for some $p \ge \frac n 2$.
As pointed out in Remark \ref{VAssumpRem}, the assumption that $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p \ge \frac n 2$ implies that \eqref{VAssump} holds.
Therefore, this current setting is more restrictive than that of the last three sections.
Since $V \in {\MC{B}_p}$ for some $p \ge \frac n 2$, then Lemma \ref{GehringLemma} implies that $V \in L^{\frac n 2+}_{\loc}$.
This is meaningful since the H\"older continuity results for weakly coupled systems given in Proposition \ref{HolderContThm} hold in this setting.
As such, there is no loss in assuming that $p > \frac n 2$ and we will do that throughout.
We impose the additional condition that $V \in {\MC{B}_p} \cap \MC{NC}$ so that we may apply the Fefferman-Phong inequality described by Lemma \ref{FPml}.
We also require that assumptions {\rm{(IB)}} and {\rm{(H)}} hold so that we can meaningfully discuss our fundamental matrices and draw conclusions about them.
We follow the general arguments of \cite{She99}.
Our first lemma is as follows.
\begin{lem}[Upper bound lemma]
\label{MainUpperBoundLem}
Let $\mathcal{L}_V$ be given by \eqref{elEqDef}, where $A$ satisfies \eqref{ellip} and \eqref{Abd}, and $V \in {\MC{B}_p} \cap \MC{ND} \cap \MC{NC}$ for some $p > \frac n 2$.
Let $B \subset \ensuremath{\mathbb{R}}^n$ be a ball.
Assume that $\V{u} \in W_{V}^{1, 2}(\ensuremath{\mathbb{R}}^n \backslash B)$ is a weak solution to $\mathcal{L}_V \V{u} = 0$ in $\ensuremath{\mathbb{R}}^n \backslash B$.
Let $\phi \in C_c ^\infty (\ensuremath{\mathbb{R}}^n)$ satisfy $\phi = 0$ on $2B$ and let $g \in C^1(\ensuremath{\mathbb{R}}^n)$ be a nonnegative function satisfying $|\nabla g(x)| \lesssim_{(n, p, C_V)} \underline{m}(x, V)$ for every $x \in \ensuremath{\mathbb{R}}^n$.
Then there exists $\varepsilon_0$, $C_0$, both depending on $d, n, p, C_V, N_V, \lambda, \Lambda$, such that whenever $\varepsilon \in \pr{0, \varepsilon_0}$, it holds that
\begin{equation*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \underline{m}(\cdot, V)^2 \abs{\phi \V{u}}^2 e^{2 \varepsilon g} \le C_0 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{u}|^2 |\nabla \phi|^2 e^{2\varepsilon g}.
\end{equation*}
\end{lem}
The proof is a modification of the proof of Proposition $6.5$ in \cite{MP19}.
\begin{proof}
Since $\V{u} \in W_{V}^{1, 2}(\ensuremath{\mathbb{R}}^n \backslash B)$, then by the definition of $W^{1,2}_V(\Omega)$, there exists $\V{v} \in W_{V,0}^{1, 2}(\ensuremath{\mathbb{R}}^n)$ such that $\V{v}\rvert_{\ensuremath{\mathbb{R}}^n \setminus B} = \V{u}$.
Let $f = \phi e^{\varepsilon g}$ and define the function $\V{\psi} = f \V{v}$.
Since $\phi$ vanishes on $2B$ and $\V{v} = \V{u}$ in $\ensuremath{\mathbb{R}}^n \setminus B$, then $\V{\psi} = f \V{u}$.
By a modification to the arguments in {\ref{A4}}, since $\phi \in C_c^\infty(\ensuremath{\mathbb{R}}^n)$ and $g \in C^1\pr{\ensuremath{\mathbb{R}}^n}$, it holds that $\V{\psi} \in W_{V,0}^{1, 2}(\ensuremath{\mathbb{R}}^n)$.
A similar argument shows that $f^2 \V{u} \in W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$ as well.
We adopt notation used in \cite{Amb15}: For $\ensuremath{\mathbb{R}}^d$-valued functions $\V{u}$ and $\V{v}$, and for a scalar function $f$, we write
$$A \, D\V{u} \, D\V{v} = A_{ij} ^{\alpha \beta} D_\beta u^i D_\alpha v^j, \qquad (\V{u} \otimes \nabla f )_{i\beta} = u^i D_\beta f.$$
By uniform ellipticity \eqref{ellip}, we have
\begin{equation*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \lambda \abs{D \V{\psi}}^2 + \innp{V \V{\psi}, \V{\psi}}
\le \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} A D\V{\psi} D\V{\psi} + \innp{V \V{\psi}, \V{\psi}}.
\end{equation*}
Using that
$$D\V{\psi} = D(f \V{u}) = \V{u} \otimes \nabla f + f D\V{u},$$
we get
\begin{align}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \lambda \abs{D \V{\psi}}^2 + \innp{V \V{\psi}, \V{\psi}}
\lesssim& \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} A (\V{u} \otimes \nabla f)(\V{u} \otimes \nabla f)
+ A (\V{u} \otimes \nabla f) (f D \V{u})
+ A (f D\V{u}) (\V{u} \otimes \nabla f) \nonumber \\
&+ \int_{\ensuremath{\mathbb{R}}^n} A (f D\V{u}) (f D\V{u})
+ \innp{V\V{u}, f^2 \V{u}}.
\label{MainUpperBoundLemEst1}
\end{align}
Since $\V{v} \in W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$, $\V{v}\rvert_{\ensuremath{\mathbb{R}}^n \setminus B} = \V{u} \in W^{1,2}_V\pr{\ensuremath{\mathbb{R}}^n \setminus B}$ is a weak solution away from $B$, and $f^2 \V{u} \in W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$ is supported away from $B$, then
$$\mathcal{B}\brac{\V{u}, f^2 \V{u}} = 0.$$
That is,
\begin{align*}
0 & = \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} A \, D\V{u} \, D(f^2\V{u})
+ \innp{V\V{u}, f^2 \V{u}}
=\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} 2\, A\, D\V{u} (f \V{u} \otimes \nabla f )
+ A \, D\V{u} \, f^2 D \V{u}
+ \innp{V\V{u}, f^2 \V{u}}.
\end{align*}
Plugging this into \eqref{MainUpperBoundLemEst1} gives
\begin{align}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D \V{\psi}}^2 + \innp{V \V{\psi}, \V{\psi}}
&\lesssim_{(\lambda)} \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} A (\V{u} \otimes \nabla f)(\V{u} \otimes \nabla f)
+ A (\V{u} \otimes \nabla f) (f D \V{u})
- A (f D\V{u}) (\V{u} \otimes \nabla f).
\label{MainUpperBoundLemEst2}
\end{align}
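For clarity, we record the algebra behind \eqref{MainUpperBoundLemEst2}: since $f$ is scalar-valued, $A \, D\V{u} \pr{f \V{u} \otimes \nabla f} = A \pr{f D\V{u}} \pr{\V{u} \otimes \nabla f}$ and $A \, D\V{u} \, f^2 D\V{u} = A \pr{f D\V{u}} \pr{f D\V{u}}$, so the identity $\mathcal{B}\brac{\V{u}, f^2 \V{u}} = 0$ may be rewritten as
\begin{equation*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} A (f D\V{u}) (f D\V{u}) + \innp{V\V{u}, f^2 \V{u}}
= -2 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} A (f D\V{u}) (\V{u} \otimes \nabla f).
\end{equation*}
Substituting this expression for the last two terms in \eqref{MainUpperBoundLemEst1} leaves precisely the right-hand side of \eqref{MainUpperBoundLemEst2}.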
Now we obtain an upper bound for the right hand side of \eqref{MainUpperBoundLemEst2}.
Using the boundedness of $A$ from \eqref{Abd} and Cauchy's inequality, we get that for any $\delta' > 0$,
$$\abs{A (\V{u} \otimes \nabla f)(f D\V{u})} \le \Lambda |\V{u}| |f| |D\V{u}| |\nabla f|
\leq \delta' |f|^2 |D\V{u}|^2 + \frac{C(\Lambda)}{\delta'} |\V{u}|^2 |\nabla f|^2 $$
and similarly
$$\abs{A (f D\V{u}) (\V{u} \otimes \nabla f)} \le \delta' |f|^2 |D\V{u}|^2 + \frac{C(\Lambda)}{\delta'} |\V{u}|^2 |\nabla f|^2,$$
while
$$|A (\V{u} \otimes \nabla f)(\V{u} \otimes \nabla f)| \le \Lambda |\V{u}|^2 |\nabla f|^2.$$
Then with $\delta \simeq_{(\lambda)} \delta'$, we see that
\begin{equation}
\label{MainUpperBoundLemEst3}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D \V{\psi}}^2 + \innp{V \V{\psi}, \V{\psi}}
\le \delta \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |f|^2 |D\V{u}|^2 + C\pr{\delta, \lambda, \Lambda} \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{u}|^2 |\nabla f|^2.
\end{equation}
Since $\V{\psi} = f \V{u}$, then
\begin{align*}
|D\V{\psi}| ^2
&= \innp{\V{u} \otimes \nabla f + f D\V{u}, \V{u} \otimes \nabla f + f D\V{u}}_{\text{tr}} \\
&= |f|^2 |D\V{u}|^2 + 2 f \innp{D\V{u}, \V{u} \otimes \nabla f}_{\text{tr}} + |\V{u}|^2 | \nabla f|^2.
\end{align*}
The Cauchy inequality implies that
\begin{align*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |f|^2 |D\V{u}|^2
&\leq \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |D\V{\psi}|^2 + |\V{u}|^2 | \nabla f|^2 + 2 |f| |D\V{u}| | \V{u} \otimes \nabla f| \\
&\leq \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |D\V{\psi}|^2 + |\V{u}|^2 | \nabla f|^2 + \frac12 |f|^2 |D\V{u}|^2 + 2 |\V{u}|^2 |\nabla f|^2.
\end{align*}
Since $\displaystyle \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |f|^2 |D\V{u}|^2 < \infty$, then we can absorb the third term into the left to get
\begin{align*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}}|f|^2 |D\V{u}|^2
& \leq \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} 2 |D\V{\psi}|^2 + 6 |\V{u}|^2 | \nabla f|^2.
\end{align*}
Plugging this expression into \eqref{MainUpperBoundLemEst3} shows that
\begin{equation*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D \V{\psi}}^2 + \innp{V \V{\psi}, \V{\psi}}
\le 2 \delta \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |D\V{\psi}|^2
+ C\pr{\delta, \lambda, \Lambda} \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{u}|^2 | \nabla f|^2.
\end{equation*}
Setting $\delta = \frac 1 3$, we see that
\begin{equation}
\label{psiNormBound}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D \V{\psi}}^2 + \innp{V \V{\psi}, \V{\psi}}
\le C(\lambda, \Lambda) \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{u}|^2 | \nabla f|^2.
\end{equation}
To apply Lemma \ref{FPml} to $\V{\psi}$, we require that $\V{\psi} \in C^1_0\pr{\ensuremath{\mathbb{R}}^n}^d$, so we use a limiting argument.
Since $\V{\psi} \in W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$, then {\ref{A2}} gives that there exists $\set{\V{\psi}_k}_{k = 1}^\infty \subset C^\infty_c\pr{\ensuremath{\mathbb{R}}^n}^d$ for which $\V{\psi}_k \to \V{\psi}$ in $W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$.
Moreover, since $\V{\psi}_k \to \V{\psi}$ in $L^{2^*}\pr{\ensuremath{\mathbb{R}}^n}$ (as shown in the proof of Proposition \ref{W12V0Properties}), there exists a subsequence that converges a.e. to $\V{\psi}$.
After relabeling the indices, we may assume that $\V{\psi}_k \to \V{\psi}$ a.e. and in $W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$.
Fatou's Lemma followed by Lemma \ref{FPml} applied to $\V{\psi}_k \in C^\infty_c\pr{\ensuremath{\mathbb{R}}^n}$ gives
\begin{align*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{\psi}|^2 \underline{m}(\cdot, V)^2
&\le \liminf_{k \to \infty} \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{\psi_k}|^2 \underline{m}(\cdot, V)^2 \\
&\le \liminf_{k \to \infty} c_1 \pr{ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D\V{\psi_k}}^2 + \innp{V \V{\psi}_k, \V{\psi}_k} }
= c_1 \pr{ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D\V{\psi}}^2 + \innp{V \V{\psi}, \V{\psi}} },
\end{align*}
where the last line uses convergence in $W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$ and $c_1 = c_1\pr{d, n, p, C_V, N_V}$.
Combining this inequality with \eqref{psiNormBound}, then using that $\nabla f = e^{\epsilon g} \pr{\nabla \phi + \epsilon \phi \nabla g}$ so that $\abs{\nabla f}^2 \le 2 e^{2\epsilon g} \pr{\abs{\nabla \phi}^2 + \epsilon^2 \abs{\phi}^2 \abs{\nabla g}^2}$, along with the assumption that $\abs{\nabla g} \lesssim_{(n, p, C_V)} \underline{m}\pr{\cdot, V}$, shows that
\begin{align*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\phi \V{u}|^2 \underline{m}(\cdot, V)^2 e^{2 \epsilon g}
&\le c_1 \pr{ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{D\V{\psi}}^2 + \innp{V \V{\psi}, \V{\psi}} }
\le c_2 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{u}|^2 | \nabla f|^2 \\
& \le 2 c_2 \epsilon^2 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\nabla g|^2 e^{2 \epsilon g} |\phi \V{u}|^2
+ 2 c_2 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{u}|^2 |\nabla \phi|^2 e^{2\epsilon g} \\
& \le 2 c_3 \epsilon^2 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \underline{m}(\cdot, V)^2 |\phi \V{u}|^2 e^{2 \epsilon g}
+ 2 c_2 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{u}|^2 |\nabla \phi|^2 e^{2\epsilon g},
\end{align*}
where $c_2 = c_2\pr{d, n, p, C_V, N_V, \lambda, \Lambda}$ and $c_3 = c_3\pr{d, n, p, C_V, N_V, \lambda, \Lambda}$.
Setting $\varepsilon_0 = \pr{2 c_3}^{-1/2}$, we see that whenever $\varepsilon \in \pr{0, \varepsilon_0}$, then $2 c_3 \epsilon^2 < 1$, so we may absorb the first term on the right into the left, completing the proof.
\end{proof}
\begin{rem}
\label{lemHypChanges}
This proof uses that $\V{u}, f^2 \V{u} \in W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$ in order to make sense of the expression $\mathcal{B}\brac{\V{u}, f^2 \V{u}}$.
It also uses that $f \in C^1_0\pr{\ensuremath{\mathbb{R}}^n}$ and $\V{u} \in W^{1,2}_{V,0}\pr{\ensuremath{\mathbb{R}}^n}$ to say that $f D\V{u} \in L^2\pr{\ensuremath{\mathbb{R}}^n}$.
Other assumptions on $\V{u}$ would still allow these arguments to carry through.
More specifically, we can apply Lemma \ref{MainUpperBoundLem} with $\V{u} \in Y^{1,2}_{\loc}\pr{\ensuremath{\mathbb{R}}^n}$ and $f \in C^1_0\pr{\ensuremath{\mathbb{R}}^n}$.
To see this, let $\supp f \subset \Omega$ where $\overline{\Omega} \Subset \ensuremath{\mathbb{R}}^n$ and observe that since
\begin{align*}
\mathcal{B}\brac{\V{u}, f^2 \V{u}}
&= \int_{\ensuremath{\mathbb{R}}^n} \innp{A^{\alpha \beta} D_\beta \V{u}, D_\alpha \pr{f^2 \V{u}}}
+ \innp{V \, \V{u}, f^2 \V{u}} \\
&= \int_\Omega f^2 \innp{A^{\alpha \beta} D_\beta \V{u}, D_\alpha \V{u}}
+ 2 f \innp{A^{\alpha \beta} D_\beta \V{u}, \V{u} \, D_\alpha f }
+ f^2 \innp{V \, \V{u}, \V{u}}
\end{align*}
then by applications of H\"older's inequality,
\begin{align*}
\abs{\mathcal{B}\brac{\V{u}, f^2 \V{u}}}
&\le \Lambda \norm{f}_{L^\infty\pr{\Omega}}^2 \norm{D \V{u}}_{L^2\pr{\Omega}}^2
+ 2 \Lambda \norm{f}_{L^\infty\pr{\Omega}} \norm{D \V{u}}_{L^2\pr{\Omega}} \norm{\V{u}}_{L^{2^*}\pr{\Omega}} \norm{D f}_{L^n\pr{\Omega}} \\
&+ \norm{f}_{L^\infty\pr{\Omega}}^2 \norm{V}_{L^{\frac n 2}\pr{\Omega}} \norm{\V{u}}_{L^{2^*}\pr{\Omega}}^2 \\
&\le 2 \Lambda \norm{f}_{L^\infty\pr{\Omega}}^2 \norm{D \V{u}}_{L^2\pr{\Omega}}^2
+ \pr{ \Lambda \norm{D f}_{L^n\pr{\Omega}}^2+ \norm{f}_{L^\infty\pr{\Omega}}^2 \norm{V}_{L^{\frac n 2}\pr{\Omega}}} \norm{\V{u}}_{L^{2^*}\pr{\Omega}}^2 \\
&\lesssim_{\pr{\Lambda, \norm{f}}} \norm{\V{u}}_{Y^{1,2}\pr{\Omega}}^2.
\end{align*}
In particular, this shows that $\mathcal{B}\brac{\V{u}, f^2 \V{u}}$ is well-defined and finite.
Moreover, since $D \V{u} \in L^2_{\loc}\pr{\ensuremath{\mathbb{R}}^n}$ and $\overline{\supp f }\Subset \ensuremath{\mathbb{R}}^n$, then $f D\V{u} \in L^2\pr{\ensuremath{\mathbb{R}}^n}$.
\end{rem}
\begin{rem}
\label{constantLVRem}
Going forward, we say that a constant $C$ depends on $\mathcal{L}_V$ to mean that $C$ has the same dependence as the constants in Theorem \ref{t3.6}.
That is, $C = C\pr{\mathcal{L}_V}$ means that $C$ depends on the parameters $d, n, \lambda, \Lambda$, and $C_{\rm{IB}}$.
\end{rem}
We now prove our upper bound.
\begin{thm}[Exponential upper bound]
\label{UppBoundThm}
Let $\mathcal{L}_V$ be given by \eqref{elEqDef}, where $A$ satisfies \eqref{ellip} and \eqref{Abd}, and $V \in {\MC{B}_p} \cap \MC{ND} \cap \MC{NC}$ for some $p > \frac n 2$.
Assume that {\rm{(IB)}} and {\rm{(H)}} hold.
Let $\Gamma^V(x, y)$ denote the fundamental matrix of $\mathcal{L}_V$ and let $\varepsilon_0$ be as given in Lemma \ref{MainUpperBoundLem}.
For any $\varepsilon < \varepsilon_0$, there exists $C = C(\mathcal{L}_V, p, C_V, N_V, \varepsilon)$ so that for all $x, y \in \ensuremath{\mathbb{R}}^n$,
\begin{equation*}
\abs{\Gamma^V(x, y)} \leq \frac{C e^{-\epsilon \underline{d}(x, y, V)}}{|x-y|^{n-2}}.
\end{equation*}
\end{thm}
The following proof is similar to that of \cite[Theorem 6.7]{MP19}.
\begin{proof}
Fix $x, y \in \ensuremath{\mathbb{R}}^n$ with $x \neq y$.
If $\underline{d}\pr{x, y, V} \lesssim_{(n, p, C_V)} 1$, then $e^{-C\varepsilon} \le e^{-\varepsilon \underline{d}\pr{x, y, V}}$, so the result follows from \eqref{eq3.60} in Theorem \ref{t3.6}.
Therefore, we focus on $x, y \in \ensuremath{\mathbb{R}}^n$ for which $\underline{d}\pr{x, y, V} \gtrsim_{(n, p, C_V)} 1$.
By Lemma \ref{closeRemark}, we can assume $|x - y| > \frac{C}{\underline{m}(x, V)}$ since otherwise $\underline{d}(x, y, V) \lesssim_{(n, p, C_V)} 1$.
Likewise, we can assume $|x - y| > \frac{C}{\underline{m}(y, V)}$.
Finally, we can assume
\begin{equation}
\label{MPEq6.10}
B\pr{x, \frac{4}{\underline{m}(x, V)}} \cap B\pr{y, \frac{4}{\underline{m}(y, V)}} = \emptyset
\end{equation}
for if not, then the triangle inequality shows that
$$|x - y| \leq 8 \max \set{ \frac{1}{\underline{m}(x, V)}, \frac{1}{\underline{m}(y, V)}} $$
so that again $\underline{d}(x, y, V) \lesssim_{(n, p, C_V)} 1$.
Let $r = \frac{1}{\underline{m}(y, V)}$ and pick $M > 0$ large enough so that $B(y, 4r) \subseteq B(0, M)$.
Let $\phi \in C_c ^\infty(\ensuremath{\mathbb{R}}^n)$ be such that $\phi \equiv 0$ on $B(y, 2r), \phi \equiv 1$ on $B(0, M) \backslash B(y, 4r), \phi \equiv 0$ on $\ensuremath{\mathbb{R}}^n \backslash B(0, 2M)$,
\begin{equation*}
|\nabla \phi| \lesssim \frac{1}{r} \text{ on } B(y, 4r) \backslash B(y, 2r)
\text{ and }
|\nabla \phi| \leq \frac{2}{M} \text{ on } B(0, 2M) \backslash B(0, M).
\end{equation*}
The next step is to apply Lemma \ref{MainUpperBoundLem}.
We take $B = B(y, r)$, $\V{u}$ to be each of the individual columns of $\Gamma^V(\cdot, y)$, and $g = \varphi_{V, j}\pr{\cdot, y}$, where $\varphi_{V, j} \in C^\infty(\ensuremath{\mathbb{R}}^n)$ is as in Lemma \ref{RegLem1}.
Since $\Gamma^V\pr{\cdot, y} \in Y^{1,2}\pr{\ensuremath{\mathbb{R}}^n \setminus B}$, then it can be shown that $\phi^2 \Gamma^V\pr{\cdot, y} e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}} \in Y^{1,2}_0\pr{\ensuremath{\mathbb{R}}^n \setminus B}$.
Since $C^\infty_c\pr{\ensuremath{\mathbb{R}}^n \setminus B}$ is dense in $Y^{1,2}_0\pr{\ensuremath{\mathbb{R}}^n \setminus B}$, then the expression $\mathcal{B}\brac{\Gamma^V\pr{\cdot, y} , \phi^2 \Gamma^V\pr{\cdot, y} e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}}}$ is meaningful and equals zero, see Definition \ref{d3.3}(a).
Moreover, since $\Gamma^V\pr{\cdot, y} \in Y^{1,2}\pr{\ensuremath{\mathbb{R}}^n \setminus B}$, then $\phi D \Gamma^V\pr{\cdot, y} e^{\epsilon \varphi_{V, j}\pr{\cdot, y}} \in L^2\pr{\ensuremath{\mathbb{R}}^n}$.
In particular, according to Remark \ref{lemHypChanges}, we can apply Lemma \ref{MainUpperBoundLem}.
Doing so, we see that for any $\varepsilon < \varepsilon_0$,
\begin{align*}
\int_{B(0, M) \backslash B(y, 4r)} & \underline{m}(\cdot, V) ^2 | {\Gamma^V}(\cdot, y)|^2 e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}}
\leq \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \underline{m}(\cdot, V) ^2 |\phi \Gamma^V(\cdot, y)|^2 e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}} \\
&\le C_0 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} | {\Gamma^V}(\cdot, y)|^2 |\nabla \phi |^2 e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}} \\
&\le \frac{C_0}{r^2} \int_{B(y, 4r) \backslash B(y, 2r)} | {\Gamma^V}(\cdot, y)|^2 e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}}
+ \frac{C_0}{M^2} \int_{B(0, 2M) \backslash B(0, M)} | {\Gamma^V}(\cdot, y)| ^2 e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}}.
\end{align*}
For each fixed $j$, $\varphi_{V, j}\pr{\cdot, y}$ is bounded on $\ensuremath{\mathbb{R}}^n$.
Applying \eqref{eq3.60} then shows that
\begin{equation*}
\frac{1}{M^2} \int_{B(0, 2M) \backslash B(0, M)} | {\Gamma^V}(\cdot, y)| ^2 e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}}
\lesssim M^{n-2} (M - |y|)^{4 - 2n} \rightarrow 0 \text{ as } M \rightarrow \infty,
\end{equation*}
and so
\begin{equation}
\label{MPEq6.12}
\int_{\ensuremath{\mathbb{R}}^n \backslash B(y, 4r)} \underline{m}(\cdot, V) ^2 | {\Gamma^V}(\cdot, y)|^2 e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}}
\le \frac{C_0}{r^2} \int_{B(y, 4r) \backslash B(y, 2r)} | {\Gamma^V}(\cdot, y)|^2 e^{2\epsilon \varphi_{V, j}\pr{\cdot, y}}.
\end{equation}
By Lemma \ref{closeRemark} and our choice of $r$, if $z \in B(y, 4r) \backslash B(y, 2r)$, then $\underline{d}(z, y, V) \lesssim_{(n, p, C_V)} 1$.
It follows from Lemmas \ref{RegLem1} and \ref{RegLem0} that $\varphi_{V, j}(z, y) \leq \varphi_{V}(z, y) \lesssim_{(n, p, C_V)} 1$.
Combining this observation with \eqref{eq3.60}, \eqref{MPEq6.12}, Fatou's Lemma, and Lemma \ref{RegLem1} shows that
\begin{equation*}
\int_{\ensuremath{\mathbb{R}}^n \backslash B(y, 4r)} \underline{m}(z, V)^2 \abs{\Gamma^V(z, y)}^2 e^{2\epsilon \varphi_{V}(z, y)} dz
\lesssim_{(\mathcal{L}_V, p, C_V, N_V)} r^{2-n}.
\end{equation*}
If we set $R = \frac{1}{\underline{m}(x, V)}$ then \eqref{MPEq6.10} shows that $B(x, R) \subseteq \ensuremath{\mathbb{R}}^n \backslash B(y, 4r)$.
Consequently,
\begin{equation*}
\int_{B(x, R)} \underline{m}(z, V) ^2 | {\Gamma^V}(z, y)|^2 e^{2\epsilon \underline{d}(z, y, V)} \, dz
\lesssim_{(\mathcal{L}_V, p, C_V, N_V)} r^{2-n}.
\end{equation*}
An application of the triangle inequality and Lemma \ref{closeRemark} shows that for $z \in B(x, R)$,
\begin{align*}
\underline{d}\pr{x, y, V}
&\le \underline{d}\pr{x, z, V} + \underline{d}\pr{z, y, V}
\le L + \underline{d}\pr{z, y, V},
\end{align*}
where $L = L\pr{n, p, C_V}$, so that $\underline{d}\pr{z, y, V} \ge \underline{d}\pr{x, y, V} - L$ and therefore
$$ e^{2\epsilon \underline{d}(z, y, V)} \geq e^{-2\epsilon L} e^{2 \epsilon \underline{d}(x, y, V)} = C\pr{n, p, C_V, \varepsilon} e^{2 \epsilon \underline{d}(x, y, V)}.$$
Furthermore, Lemma \ref{muoBounds}(a) shows that $R^{-1} = \underline{m}(x, V) \simeq_{(n, p, C_V)} \underline{m}(z, V)$ so that
\begin{equation}
\label{MPEq6.14}
\pr{\fint_{B(x, R)} |{\Gamma^V}(z, y)|^2 \, dz }^\frac12
\lesssim_{(\mathcal{L}_V, p, C_V, N_V, \varepsilon)} [\underline{m}(x, V) \underline{m}(y, V)]^{(n-2)/2}e^{-\epsilon \underline{d}(x, y, V)}.
\end{equation}
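In more detail, on $B(x, R)$ we have $\underline{m}(z, V)^2 \, e^{2\epsilon \underline{d}(z, y, V)} \gtrsim R^{-2} \, e^{2 \epsilon \underline{d}(x, y, V)}$, so the integral bound above gives
\begin{equation*}
\fint_{B(x, R)} |{\Gamma^V}(z, y)|^2 \, dz
\lesssim R^{-n} \cdot R^{2} \, r^{2-n} \, e^{-2\epsilon \underline{d}(x, y, V)}
= \brac{\underline{m}(x, V) \, \underline{m}(y, V)}^{n-2} e^{-2 \epsilon \underline{d}(x, y, V)},
\end{equation*}
and \eqref{MPEq6.14} follows by taking square roots.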
Choose $\gamma : \brac{0,1} \to \ensuremath{\mathbb{R}}^n$ so that $\gamma\pr{0} = x$, $\gamma\pr{1} = y$ and
\begin{align*}
2 \underline{d}\pr{x, y, V} \ge \int_0^1 \underline{m}\pr{\gamma\pr{t}, V} \abs{\gamma'\pr{t}} dt.
\end{align*}
It follows from Lemma \ref{muoBounds}(c) that
\begin{align*}
\underline{d}\pr{x, y, V}
\ge \frac c 2 \int_0^1 \frac{\underline{m}\pr{x, V}\abs{\gamma'\pr{t}} dt}{\brac{1 + \abs{\gamma\pr{t} - x} \underline{m}\pr{x, V}}^{k_0/(k_0+1)}}
= \frac c 2 \int_0^1 \frac{\abs{\widetilde \gamma \,'\pr{t}} dt}{\brac{1 + \abs{\widetilde \gamma\pr{t}}}^{k_0/(k_0+1)}},
\end{align*}
where $\widetilde \gamma : \brac{0,1} \to \ensuremath{\mathbb{R}}^n$ is a shifted, rescaled version of $\gamma$.
That is, $\widetilde \gamma(0) = 0$ and $\widetilde \gamma(1) = \underline{m}\pr{x, V} \pr{y - x}$.
This integral is bounded from below by the geodesic distance from $0$ to $\underline{m}\pr{x, V} \pr{y - x}$ in the metric
$$\frac{dz}{\pr{1 + \abs{z}}^{k_0/(k_0+1)}}.$$
A computation shows that the straight line path achieves this minimum.
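To justify this claim, note that the weight depends only on $\abs{z}$, and that $\abs{\frac{d}{dt}\abs{\widetilde \gamma\pr{t}}} \le \abs{\widetilde \gamma\,'\pr{t}}$ for any Lipschitz path $\widetilde \gamma$ with $\widetilde \gamma(0) = 0$ and $\widetilde \gamma(1) = w$; consequently,
\begin{equation*}
\int_0^1 \frac{\abs{\widetilde \gamma \,'\pr{t}} dt}{\brac{1 + \abs{\widetilde \gamma\pr{t}}}^{k_0/(k_0+1)}}
\ge \abs{\int_0^1 \frac{d}{dt} \brac{\pr{k_0+1}\pr{1 + \abs{\widetilde \gamma\pr{t}}}^{1/(k_0+1)}} dt}
= \pr{k_0+1}\brac{\pr{1 + \abs{w}}^{1/(k_0+1)} - 1},
\end{equation*}
and the straight line path from $0$ to $w$ attains exactly this value.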
Therefore,
\begin{align*}
\underline{d}\pr{x, y, V}
&\ge \frac c 2 \int_0^1 \frac{\underline{m}\pr{x, V} \abs{y - x} dt}{\brac{1 + \underline{m}\pr{x, V} t \abs{y - x}}^{k_0/(k_0+1)}}
= \frac {c(k_0+1)} 2 \brac{ \pr{1+\underline{m}\pr{x, V} \abs{y - x}}^{1/(k_0+1)} - 1} \\
&\ge C \pr{\underline{m}\pr{x, V} \abs{y - x}}^{1/(k_0+1)},
\end{align*}
where we have used that $\underline{m}\pr{x, V} \abs{x - y} \ge 4$, so that $\pr{1+\underline{m}\pr{x, V} \abs{y - x}}^{1/(k_0+1)} - 1 \ge \pr{1 - 4^{-1/(k_0+1)}} \brac{\underline{m}\pr{x, V} \abs{y - x}}^{1/(k_0+1)}$, to reach the final line.
In particular, for any $\varepsilon' > 0$, it holds that
\begin{align}
\label{distlowBd}
\underline{m}\pr{x, V} \abs{y - x}
\leq \frac{1}{C^{k_0 + 1}} \underline{d}(x, y, V)^{k_0 + 1}
\leq \frac{1}{C^{k_0 + 1}} C_{\epsilon'} e^{\epsilon' \underline{d}(x, y, V)/2},
\end{align}
where $C_{\epsilon'} > 0$ depends on $\epsilon'$ and $k_0$; here we have used the elementary bound $t^{k_0 + 1} \le C_{\epsilon'} e^{\epsilon' t/2}$ for all $t \ge 0$.
A similar argument shows that
\begin{align*}
\underline{m}\pr{y, V} \abs{y - x}
\leq \frac{1}{C^{k_0 + 1}} C_{\epsilon'} e^{\epsilon' \underline{d}(x, y, V)/2}.
\end{align*}
Multiplying these two bounds gives
\begin{equation*}
\underline{m}\pr{x, V} \underline{m}\pr{y, V} \leq C^{-2(k_0 + 1)} C_{\epsilon'}^2 e^{\epsilon' \underline{d}(x, y, V)} \abs{y - x}^{-2}.
\end{equation*}
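To prepare for the substitution into \eqref{MPEq6.14}, we raise this bound to the power $\frac{n-2}{2}$:
\begin{equation*}
\brac{\underline{m}\pr{x, V} \, \underline{m}\pr{y, V}}^{(n-2)/2}
\le \pr{C^{-2(k_0+1)} C_{\epsilon'}^2}^{(n-2)/2} \, e^{\epsilon' (n-2) \underline{d}(x, y, V)/2} \abs{y - x}^{2-n}.
\end{equation*}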
Define $\epsilon' = \frac{\varepsilon}{n-2}$, so that the exponential factor above becomes $e^{\epsilon \underline{d}(x, y, V)/2}$.
We then substitute this upper bound into \eqref{MPEq6.14} and simplify to get
\begin{equation}
\label{averagedBound}
\pr{\fint_{B(x, R)} |{\Gamma^V}(z, y)|^2 \, dz }^\frac12
\lesssim_{(\mathcal{L}_V, p, C_V, N_V, \varepsilon)} \frac{e^{-\epsilon \underline{d}(x, y, V)/2}}{\abs{x - y}^{n-2}}.
\end{equation}
Finally, since we assume that $y \not \in B(x, R)$, then $\mathcal{L}_V \Gamma^V\pr{\cdot, y} = 0$ in $B(x, R)$.
In particular, \eqref{eq3.47} from assumption {\rm{(IB)}} is applicable, so that
\begin{align*}
\abs{\Gamma^V\pr{x, y}}
&\le \norm{\Gamma^V\pr{\cdot, y}}_{L^\infty\pr{B(x, R/2)}}
\le C_{\rm{IB}} \pr{\fint_{B(x, R)} |{\Gamma^V}(z, y)|^2 \, dz }^\frac12,
\end{align*}
and the conclusion follows by combining the previous two inequalities.
\end{proof}
\begin{rem}
As in \cite{MP19}, if instead of assuming {\rm{(IB)}} and {\rm{(H)}}, we assume that $\Gamma^V$ exists and satisfies the pointwise bound described by \eqref{eq3.60}, then \eqref{averagedBound} holds.
\end{rem}
Define the diagonal matrix $\Lambda = \abs{V} I$, where $\abs{V} = \lambda_d \in {\text{B}_p}$ is the largest eigenvalue of $V$ and $I$ denotes the $d \times d$ identity matrix.
Then set $\mathcal{L}_\La = -D_\alpha\pr{A^{\alpha \beta} D_\beta } + \Lambda$ to be the associated Schr\"odinger operator.
We let $\Gamma^\Lambda$ denote the fundamental matrix for $\mathcal{L}_\La$.
Since the assumptions imposed to make sense of $\Gamma^V$ are inherited for $\Gamma^\Lambda$, then $\Gamma^\Lambda$ exists and satisfies the conclusions of Theorem \ref{t3.6} as well.
Because $\Lambda$ is diagonal, then its upper and lower auxiliary functions coincide and are equal to $\overline{m}\pr{x, V}$.
That is, $\underline{m}\pr{x, \Lambda} = \overline{m}\pr{x, \Lambda} = \overline{m}\pr{x, V}$ so that $\underline{d}\pr{x, y, \Lambda} = \overline{d}\pr{x, y, \Lambda} = \overline{d}\pr{x, y, V}$.
As such, we can obtain an upper bound for $\Gamma^\Lambda$ without having to assume that $V \in \MC{NC}$ or even that $V \in \MC{ND}$, see Remark \ref{noND}.
We accomplish this by applying the following lemma in place of Lemma \ref{MainUpperBoundLem}.
\begin{lem}[Upper bound lemma for $V = \Lambda$]
\label{MainUpperBoundCor}
Let $\mathcal{L}_\La$ be as defined above, where $A$ satisfies \eqref{ellip} and \eqref{Abd}, and $\abs{V} \in {\text{B}_p}$ for some $p > \frac n 2$.
Let $B \subset \ensuremath{\mathbb{R}}^n$ be a ball.
Assume that $\V{u} \in W_{\abs{V}I}^{1, 2}(\ensuremath{\mathbb{R}}^n \backslash B)$ is a weak solution to $\mathcal{L}_\La \V{u} = 0$ in $\ensuremath{\mathbb{R}}^n \backslash B$.
Let $\phi \in C_c ^\infty (\ensuremath{\mathbb{R}}^n)$ satisfy $\phi = 0$ on $2B$ and let $g \in C^1(\ensuremath{\mathbb{R}}^n)$ be a nonnegative function satisfying $|\nabla g(x)| \lesssim_{(n, p, C_{\abs{V}})} m(x, \abs{V})$ for every $x \in \ensuremath{\mathbb{R}}^n$.
Then there exist $\varepsilon_1$, $C_1$, both depending on $d, n, p, C_{\abs{V}}, \lambda, \Lambda$, such that whenever $\varepsilon \in \pr{0, \varepsilon_1}$, it holds that
\begin{equation*}
\ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} m(\cdot, \abs{V})^2 \abs{\phi \V{u}}^2 e^{2 \epsilon g} \, \le C_1 \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} |\V{u}|^2 |\nabla \phi|^2 e^{2\epsilon g} .
\end{equation*}
\end{lem}
The proof of this result exactly follows that of Lemma \ref{MainUpperBoundLem} except that the Fefferman-Phong inequality described by Corollary \ref{FPmlCor} is used in place of Lemma \ref{FPml}.
We arrive at the following corollary to Theorem \ref{UppBoundThm}.
\begin{cor}[Exponential upper bound for $V = \Lambda$]
\label{UppBoundCor}
Let $\mathcal{L}_\La = -D_\alpha\pr{A^{\alpha \beta} D_\beta} + \abs{V} I$, where $A$ satisfies \eqref{ellip} and \eqref{Abd}, and $\abs{V} \in {\text{B}_p}$ for some $p > \frac n 2$.
Assume that {\rm{(IB)}} and {\rm{(H)}} hold.
Let $\Gamma^\Lambda(x, y)$ denote the fundamental matrix of $\MC{L}_\Lambda$ and let $\varepsilon_1$ be as given in Lemma \ref{MainUpperBoundCor}.
For any $\varepsilon < \varepsilon_1$, there exists $C = C(\mathcal{L}_\La, p, C_{\abs{V}}, \varepsilon)$ so that for all $x, y \in \ensuremath{\mathbb{R}}^n$,
\begin{equation*}
\abs{\Gamma^\Lambda(x, y)} \leq \frac{C e^{-\epsilon \overline{d}(x, y, V)}}{|x-y|^{n-2}}.
\end{equation*}
\end{cor}
This result will be used to obtain lower bounds in the next section.
\section{Lower Bounds}
\label{LowBds}
Here we prove an exponential decay lower bound for the fundamental matrix associated to our elliptic operator.
As before, the elliptic operator $\mathcal{L}_V$ is given by \eqref{elEqDef}, where the matrix $A$ satisfies ellipticity and boundedness as described by \eqref{ellip} and \eqref{Abd}, respectively.
For the zeroth order term, we assume that $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$.
In contrast to the upper bound section, we will not require that $V \in \MC{NC}$.
In fact, many of the results in this section hold when we assume that $\abs{V} \in {\text{B}_p}$ (instead of $V \in {\MC{B}_p}$) and accordingly replace all occurrences of $\overline{m}\pr{\cdot, V}$ with $m\pr{\cdot, \abs{V}}$.
The assumption that $V \in \MC{ND}$ ensures that the spaces $W^{1,2}_{V, 0}\pr{\ensuremath{\mathbb{R}}^n}$ are Hilbert spaces and we require this for Lemma \ref{UniquenessLem}, for example.
Moreover, the Hilbert spaces are crucial to the fundamental matrix constructions in Section \ref{FundMat}.
We also assume that conditions {\rm{(IB)}} and {\rm{(H)}} hold so that we can meaningfully discuss our fundamental matrices and draw conclusions about them.
Further on, we will impose a pair of additional assumptions for fundamental matrices.
As with {\rm{(IB)}} and {\rm{(H)}}, these assumptions are known to hold in the scalar setting.
Let ${\Gamma}^0(x, y)$ denote the fundamental matrix for the homogeneous operator $\mathcal{L}_0$ that we get when $V \equiv 0$.
That is, $\mathcal{L}_0 := -D_\alpha\pr{A^{\alpha \beta} D_\beta}$.
Since the assumptions imposed to make sense of $\Gamma^V$ are inherited for $\Gamma^0$, the conclusions of Theorem \ref{t3.6} hold for $\Gamma^0$.
Recall that $\mathcal{L}_\La = \mathcal{L}_0 + \Lambda$, where $\Lambda = \abs{V} I$ and $\Gamma^\Lambda$ denotes the associated fundamental matrix.
In \cite{She99}, a clever representation of $\Gamma^0 - \Gamma^V$ is used to prove bounds for that difference function.
Here, we take a slightly different approach and look at both $\Gamma^0 - \Gamma^\Lambda$ and $\Gamma^\Lambda - \Gamma^V$, then combine the bounds.
Using the fundamental matrix associated to the operator with a diagonal matrix as an intermediary allows us to prove the bounds that we require for the lower bound estimates without having to assume that $V \in \MC{NC}$ or impose other conditions.
We begin with the representation formula.
To establish this result, we follow the ideas from \cite{MP19}.
\begin{lem}[Representation formula]
\label{UniquenessLem}
Assume that the coefficient matrix $A$ satisfies boundedness \eqref{Abd} and ellipticity \eqref{ellip}, and that $V$ is a locally integrable matrix weight that satisfies \eqref{VAssump}.
Assume also that conditions {\rm{(IB)}} and {\rm{(H)}} hold.
Let $\Gamma^0$, $\Gamma^\Lambda$, and $\Gamma^V$ denote the fundamental matrices of $\mathcal{L}_0$, $\mathcal{L}_\La$, and $\mathcal{L}_V$, respectively.
Then
\begin{align*}
\Gamma^0(x, y) - \Gamma^V (x, y)
&= \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \Gamma^0(x, z) \Lambda(z) \Gamma^\Lambda(z, y) \, dz
+ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \Gamma^\Lambda (x, z)\brac{V(z) - \Lambda(z)} \Gamma^V(z, y) \, dz.
\end{align*}
\end{lem}
\begin{proof}
Let $\pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$ denote the dual space to $W_{V, 0}^{1, 2} (\ensuremath{\R^n})$.
Given $\V{f} \in \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$, an application of the Lax-Milgram theorem shows that there exists a unique $\V{u} \in W_{V, 0}^{1, 2} (\ensuremath{\R^n})$ for which
$$\mathcal{B}_V\brac{\V{u}, \V{v}} = \V{f}(\V{v}) \quad \text{for every } \V{v} \in W_{V, 0}^{1, 2} (\ensuremath{\R^n}).$$
We denote $\V{u}$ by $\mathcal{L}_V ^{-1} \V{f}$, so that $\mathcal{L}_V^{-1} : \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}' \rightarrow W_{V, 0}^{1, 2} (\ensuremath{\R^n})$ and
\begin{equation}
\label{LaxEq}
\mathcal{B}_V\brac{\mathcal{L}_V^{-1} \V{f}, \V{v}} = \V{f}(\V{v}) \quad \text{for every } \V{v} \in W_{V, 0}^{1, 2} (\ensuremath{\R^n}).
\end{equation}
Note that the mapping $\mathcal{L}_V : W_{V, 0}^{1, 2} (\ensuremath{\R^n}) \rightarrow \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$ defined by $\pr{\mathcal{L}_V\V{u}}(\V{v}) = \mathcal{B}_V\brac{\V{u}, \V{v}}$ for every $\V{v} \in W_{V, 0}^{1, 2} (\ensuremath{\R^n})$ is inverse to $\mathcal{L}_V^{-1}$, as we now check.
In particular,
$$(\mathcal{L}_V \mathcal{L}_V^{-1} \V{f}) (\V{v}) = \mathcal{B}_V\brac{\mathcal{L}_V^{-1} \V{f}, \V{v}} = \V{f}(\V{v}) \quad \text{for every } \V{v} \in W_{V, 0}^{1, 2} (\ensuremath{\R^n})$$
showing that $\mathcal{L}_V \mathcal{L}_V^{-1}$ acts as the identity on $\pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$.
On the other hand, if $\V{f} = \mathcal{L}_V \V{u} $, then $\V{f}(\V{v}) = \mathcal{B}_V\brac{\V{u}, \V{v}}$ for any $\V{v} \in W_{V, 0}^{1, 2} (\ensuremath{\R^n})$.
It follows that
$$\V{u} = \mathcal{L}_V^{-1} \V{f} = \mathcal{L}_V^{-1} \mathcal{L}_V \V{u}$$
and we conclude that $\mathcal{L}_V^{-1} \mathcal{L}_V$ is the identity on $W_{V, 0}^{1, 2} (\ensuremath{\R^n})$.
Since $\mathcal{B}_V\brac{\V{v}, \V{u}} = \mathcal{B}_V^*\brac{\V{u}, \V{v}}$ for every $\V{u}, \V{v} \in W_{V, 0}^{1, 2} (\ensuremath{\R^n})$, then analogous statements may be made for $\mathcal{L}_V^{*}$ and $\pr{\mathcal{L}_V^*}^{-1}$.
Since $\norm{\V{u}}_{W_{V, 0}^{1, 2} (\ensuremath{\R^n})} \leq \norm{\V{u}}_{W_{\Lambda, 0}^{1, 2} (\ensuremath{\R^n})}$, then $W_{\Lambda, 0}^{1, 2} (\ensuremath{\R^n}) \subseteq W_{V, 0}^{1, 2} (\ensuremath{\R^n})$ so that $\pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}' \subseteq \pr{W_{\Lambda, 0}^{1, 2} (\ensuremath{\R^n})}'$.
It follows that for any $\V{f} \in \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$, $\mathcal{L}_V \mathcal{L}_\La ^{-1} \V{f} \in \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$.
Observe that if $\V{u}, \V{v} \in W_{\Lambda, 0}^{1, 2} (\ensuremath{\R^n})$, then
$$\brac{\pr{\mathcal{L}_\La - \mathcal{L}_V}\V{u}}(\V{v}) = \mathcal{B}_{\Lambda}\brac{\V{u}, \V{v}} - \mathcal{B}_V\brac{\V{u}, \V{v}} = \innp{(\Lambda - V) \V{u}, \V{v}}_{L^2(\ensuremath{\R^n})}.$$
Thus, with $\V{f} \in \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$, we deduce that
\begin{align}
\label{MainRepEq}
\V{f} - \mathcal{L}_V \mathcal{L}_\La ^{-1} \V{f}
= \pr{\mathcal{L}_\La - \mathcal{L}_V}\mathcal{L}_\La^{-1} \V{f}
= (\Lambda - V) \mathcal{L}_\La^{-1} \V{f}.
\end{align}
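In more detail, for every $\V{v} \in W_{\Lambda, 0}^{1, 2} (\ensuremath{\R^n})$, the analogue of \eqref{LaxEq} for $\mathcal{L}_\La$ together with the observation above gives
\begin{equation*}
\pr{\V{f} - \mathcal{L}_V \mathcal{L}_\La ^{-1} \V{f}}(\V{v})
= \mathcal{B}_{\Lambda}\brac{\mathcal{L}_\La^{-1} \V{f}, \V{v}} - \mathcal{B}_V\brac{\mathcal{L}_\La^{-1} \V{f}, \V{v}}
= \innp{(\Lambda - V) \mathcal{L}_\La^{-1} \V{f}, \V{v}}_{L^2(\ensuremath{\R^n})}.
\end{equation*}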
Since $\V{f}, \mathcal{L}_V \mathcal{L}_\La ^{-1} \V{f} \in \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$ as noted above, then $(\Lambda - V) \mathcal{L}_\La^{-1} \V{f} \in \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$ as well.
It follows that
$$\mathcal{L}_V ^{-1} (\Lambda - V) \mathcal{L}_\La^{-1} \V{f} \in W_{V, 0}^{1, 2} (\ensuremath{\R^n}).$$
By applying $\mathcal{L}_V^{-1}$ to both sides of \eqref{MainRepEq}, we see that
\begin{equation}
\label{MainRepEq2}
\mathcal{L}_V^{-1} \V{f} = \mathcal{L}_\La ^{-1} \V{f} + \mathcal{L}_V ^{-1} (\Lambda - V) \mathcal{L}_\La^{-1} \V{f}.
\end{equation}
For $\V{\phi} \in C_c^\infty (\ensuremath{\R^n}) \subseteq \pr{W_{V, 0}^{1, 2} (\ensuremath{\R^n})}'$ that acts via $\displaystyle \V{\phi}(\V{u}) = \innp {\V{u}, \V{\phi}}_{L^2(\ensuremath{\R^n})}$, we see from \eqref{LaxEq} that
\begin{align*}
\mathcal{B}_V\brac{\mathcal{L}_V ^{-1} (\Lambda - V) \mathcal{L}_\La^{-1} \V{f} , \pr{\mathcal{L}_V^*}^{-1} \V{\phi}}
& = \pr{(\Lambda - V) \mathcal{L}_\La^{-1} \V{f}}\pr{\pr{\mathcal{L}_V^*}^{-1} \V{\phi}}
= \innp{(\Lambda - V) \mathcal{L}_\La^{-1} \V{f}, \pr{\mathcal{L}_V^*}^{-1} \V{\phi}}_{L^2(\ensuremath{\R^n})},
\end{align*}
and
\begin{align*}
\mathcal{B}_V\brac{\mathcal{L}_V ^{-1} (\Lambda - V) \mathcal{L}_\La^{-1} \V{f} , \pr{\mathcal{L}_V^*}^{-1} \V{\phi}}
&= \mathcal{B}_V^*\brac{\pr{\mathcal{L}_V^*}^{-1} \V{\phi}, \mathcal{L}_V ^{-1} (\Lambda - V) \mathcal{L}_\La^{-1} \V{f} }
= \V{\phi} \pr{\mathcal{L}_V ^{-1} (\Lambda - V) \mathcal{L}_\La^{-1} \V{f}} \\
& = \innp{\mathcal{L}_V ^{-1} (\Lambda - V) \mathcal{L}_\La^{-1} \V{f}, \V{\phi}}_{L^2(\ensuremath{\R^n})}.
\end{align*}
Combining these observations shows that
\begin{equation}
\label{DualIdent}
\innp{\mathcal{L}_V ^{-1} (\Lambda - V) \mathcal{L}_\La^{-1} \V{f}, \V{\phi}}_{L^2(\ensuremath{\R^n})} = \innp{(\Lambda - V) \mathcal{L}_\La^{-1} \V{f}, \pr{\mathcal{L}_V^*}^{-1} \V{\phi}}_{L^2(\ensuremath{\R^n})} .
\end{equation}
Pairing \eqref{MainRepEq2} with $\V{\phi}$ in an inner product, integrating over $\ensuremath{\mathbb{R}}^n$, and using \eqref{DualIdent} then gives
\begin{equation}
\label{equalityToExpand}
\innp{\mathcal{L}_V^{-1} \V{f}, \V{\phi}}_{L^2(\ensuremath{\R^n})} = \innp{\mathcal{L}_\La ^{-1} \V{f}, \V{\phi}}_{L^2(\ensuremath{\R^n})} + \innp{(\Lambda - V) \mathcal{L}_\La^{-1} \V{f}, \pr{\mathcal{L}_V^*}^{-1} \V{\phi}}_{L^2(\ensuremath{\R^n})}.
\end{equation}
Recall from Definition \ref{d3.3} that $\displaystyle \mathcal{L}_V^{-1} \V{f}(x) = \int_{\ensuremath{\mathbb{R}}^n} \Gamma^V\pr{x,y} \V{f}(y) dy$ for any $\V{f} \in L^\infty_c\pr{\ensuremath{\mathbb{R}}^n}^d$.
By taking $\V{f}, \V{\phi} \in C_c ^\infty(\ensuremath{\R^n})$ with disjoint supports, it follows that
\begin{align*}
\innp{\mathcal{L}_V^{-1} \V{f}, \V{\phi}}_{L^2(\ensuremath{\R^n})}
&= \int_{\ensuremath{\mathbb{R}}^n} \innp{\int_{\ensuremath{\mathbb{R}}^n} \Gamma^V\pr{x,y} \V{f}(y) dy, \V{\phi}(x)} dx
= \int_{\ensuremath{\mathbb{R}}^n}\int_{\ensuremath{\mathbb{R}}^n} \innp{ \Gamma^V\pr{x,y} \V{f}(y), \V{\phi}(x)} dy dx,
\end{align*}
where the application of Fubini is justified by the fact that $\Gamma^V$ is locally bounded away from the diagonal.
A similar equality holds for the second term in \eqref{equalityToExpand}.
For the last term in \eqref{equalityToExpand}, observe that
\begin{align*}
\innp{(\Lambda - V) \mathcal{L}_\La^{-1} \V{f}, \pr{\mathcal{L}_V^*}^{-1} \V{\phi}}_{L^2(\ensuremath{\R^n})}
&=\int_{\ensuremath{\mathbb{R}}^n} \innp{(\Lambda(z) - V(z)) \int_{\ensuremath{\mathbb{R}}^n} \Gamma^\Lambda\pr{z,y} \V{f}(y) dy, \int_{\ensuremath{\mathbb{R}}^n} \Gamma^{V*}\pr{z,x} \V{\phi}(x) dx} dz \\
&=\int_{\ensuremath{\mathbb{R}}^n}\int_{\ensuremath{\mathbb{R}}^n}\int_{\ensuremath{\mathbb{R}}^n} \innp{\Gamma^{V*}\pr{z,x}^T (\Lambda(z) - V(z)) \Gamma^\Lambda\pr{z,y} \V{f}(y) , \V{\phi}(x) } dz dy dx \\
&=\int_{\ensuremath{\mathbb{R}}^n}\int_{\ensuremath{\mathbb{R}}^n} \innp{\brac{\int_{\ensuremath{\mathbb{R}}^n} \Gamma^{V}\pr{x,z} (\Lambda(z) - V(z)) \Gamma^\Lambda\pr{z,y} dz} \V{f}(y) , \V{\phi}(x) } dy dx,
\end{align*}
where we have used the property that $\Gamma^V(x, z) = \Gamma^{V*}(z, x)^T$.
Putting it all together gives
\begin{equation}
\label{FLCoVRes}
0 = \int_{\ensuremath{\R^n}} \int_{\ensuremath{\R^n}}\innp{\brac{\Gamma^V (x, y) - \Gamma^{\Lambda} (x, y) - \int_{\ensuremath{\R^n}}\Gamma^V (x, z)(\Lambda(z) -V(z)) \Gamma^\Lambda (z, y) \, dz} \V{f}(y), \V{\phi}(x)} \, dy \, dx.
\end{equation}
By \eqref{eq3.59} in Theorem \ref{t3.6}, the functions $\displaystyle \Gamma^V (x, y)$ and $\displaystyle \Gamma^{\Lambda}(x, y)$ are locally bounded on $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta$.
As shown in Lemma \ref{localBoundedIntegrals} below (see the function $H$ there; the same argument applies to this integral because $\abs{\Lambda - V} \lesssim_{(d)} \abs{V}$), $\displaystyle\int_{\ensuremath{\R^n}}\Gamma^V (x, z)(\Lambda(z) -V(z)) \Gamma^\Lambda (z, y) \, dz$ is also locally bounded on $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta$.
It follows that $\displaystyle \Gamma^V (x, y) - \Gamma^{\Lambda} (x, y) - \int_{\ensuremath{\R^n}}\Gamma^V (x, z)(\Lambda(z) -V(z)) \Gamma^\Lambda (z, y) \, dz \in L^1_{\loc}\pr{\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta}$.
As \eqref{FLCoVRes} holds for all $\V{f}, \V{\phi} \in C_c ^\infty(\ensuremath{\R^n})$ with disjoint supports, then an application of Lemma \ref{fundCoV} shows that for a.e. $(x, y) \in \ensuremath{\R^n} \times \ensuremath{\R^n}$,
\begin{equation}
\label{VLambdaRep}
\Gamma^V (x, y)= \Gamma^{\Lambda} (x, y) - \int_{\ensuremath{\R^n}}\Gamma^V (x, z)\brac{V(z) - \Lambda(z)} \Gamma^\Lambda (z, y) \, dz.
\end{equation}
Since $\norm{\V{u}}_{Y_{0}^{1, 2} (\ensuremath{\R^n})} \leq \norm{\V{u}}_{W_{\Lambda, 0}^{1, 2} (\ensuremath{\R^n})}$ implies that $W_{\Lambda, 0}^{1, 2} (\ensuremath{\R^n}) \subseteq Y_{0}^{1, 2} (\ensuremath{\R^n})$, then $\pr{Y_{0}^{1, 2} (\ensuremath{\R^n})}' \subseteq \pr{W_{\Lambda, 0}^{1, 2} (\ensuremath{\R^n})}'$.
In particular, all of the arguments from above hold with $V$ replaced by $0$, so we get that for a.e. $(x, y) \in \ensuremath{\R^n} \times \ensuremath{\R^n}$,
\begin{equation}
\label{0LambdaRep}
\Gamma^0 (x, y)= \Gamma^{\Lambda} (x, y) + \int_{\ensuremath{\R^n}}\Gamma^0 (x, z)\Lambda(z) \Gamma^\Lambda (z, y) \, dz.
\end{equation}
Subtracting \eqref{VLambdaRep} from \eqref{0LambdaRep} shows that
\begin{equation*}
\Gamma^0(x, y) - \Gamma^V (x, y)
= \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \Gamma^0(x, z) \Lambda(z) \Gamma^\Lambda(z, y) \, dz
+ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \Gamma^V (x, z)\brac{V(z) - \Lambda(z)} \Gamma^\Lambda(z, y) \, dz.
\end{equation*}
Finally, repeating the arguments above with the adjoint operators $\mathcal{L}_V^*$ and $\mathcal{L}_\La^*$ in place of $\mathcal{L}_V$ and $\mathcal{L}_\La$, then transposing the resulting identity and using that $\Gamma^V\pr{x, z} = \Gamma^{V*}\pr{z, x}^T$ while $V$ and $\Lambda$ are symmetric, shows that the second integral above may be rewritten as $\displaystyle \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \Gamma^\Lambda (x, z)\brac{V(z) - \Lambda(z)} \Gamma^V(z, y) \, dz$, which leads to the conclusion of the lemma.
\end{proof}
For completeness, we prove the following version of the fundamental lemma of calculus of variations.
\begin{lem}[Fundamental lemma of calculus of variations for matrix-valued functions]
\label{fundCoV}
Let $G\pr{x,y}$ be a $d \times d$ matrix function defined on $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta$ that is locally integrable.
If
$$\int_{\ensuremath{\R^n}} \int_{\ensuremath{\R^n}}\innp{G(x,y) \V{f}(y), \V{\phi}(x)} \, dy \, dx = 0$$
for every $\V{f}, \V{\phi} \in C_c ^\infty(\ensuremath{\R^n})$ with disjoint supports, then $G(x, y) = 0$ a.e. on $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta$.
\end{lem}
\begin{proof}
Assume first that $G(x,y)$ is continuous.
For the sake of contradiction, assume that $G(x,y)$ is not identically zero on $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta$.
Then there exists some $\pr{x_0, y_0} \in \ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n$, $x_0 \ne y_0$, for which $G(x_0, y_0) \ne 0$.
This means that there exist vectors $\V{e}_1, \V{e}_2 \in \ensuremath{\mathbb{S}^{d-1}}$ for which $\innp{G(x_0, y_0) \V{e}_1, \V{e}_2} \ne 0$; for instance, if the $\pr{i, j}$ entry of $G(x_0, y_0)$ is nonzero, we may take $\V{e}_1$ and $\V{e}_2$ to be the corresponding standard basis vectors.
Without loss of generality (replacing $\V{e}_2$ by $-\V{e}_2$ if necessary), $\innp{G(x_0, y_0) \V{e}_1, \V{e}_2} = 2\varepsilon > 0$.
Since $G(x, y)$ is continuous, then there exists $\delta < \frac 1 5 \abs{x_0 - y_0}$ for which $\innp{G(x, y) \V{e}_1, \V{e}_2} \ge \varepsilon$ whenever $\pr{x,y} \in B_{2\delta}\pr{x_0} \times B_{2\delta}\pr{y_0}$.
Let $\eta \in C^\infty_c\pr{B_2}$, where $B_2 \subset \ensuremath{\mathbb{R}}^n$, be a non-negative cutoff function for which $\eta \equiv 1$ on $B_{1}$ and $0 \le \eta \le 1$.
Now we let $\V{f}(y) = \eta\pr{\frac{y - y_0}\delta} \V{e}_1$ and $\V{\phi}\pr{x} = \eta\pr{\frac{x - x_0}\delta} \V{e}_2$ and observe that $\supp \V{f} \subset B_{2\delta}\pr{y_0}$, $\supp \V{\phi} \subset B_{2\delta}\pr{x_0}$ so that the supports of $\V{f}$ and $\V{\phi}$ are disjoint.
Moreover, since the integrand below is nonnegative and supported in $B_{2\delta}\pr{x_0} \times B_{2\delta}\pr{y_0}$,
\begin{align*}
\int_{\ensuremath{\R^n}} \int_{\ensuremath{\R^n}}\innp{G(x,y) \V{f}(y), \V{\phi}(x)} \, dy \, dx
&= \int_{\ensuremath{\R^n}} \int_{\ensuremath{\R^n}}\innp{G(x,y) \V{e}_1, \V{e}_2} \eta\pr{\frac{y - y_0}\delta} \eta\pr{\frac{x - x_0}\delta} \, dy \, dx \\
&\ge \int_{B_\delta\pr{x_0}} \int_{B_\delta\pr{y_0}}\innp{G(x,y) \V{e}_1, \V{e}_2} \, dy \, dx
\ge \varepsilon \abs{B_\delta}^2
> 0,
\end{align*}
which gives a contradiction.
If $G\pr{x,y}$ is not continuous, we may instead argue with approximate identities: applying the hypothesis to mollified bump functions concentrating at $y_0$ and $x_0$ and invoking the Lebesgue differentiation theorem shows that every entry of $G$ vanishes at a.e. point of $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta$.
\end{proof}
Next, we establish that the integrals appearing in Lemma \ref{UniquenessLem} are locally integrable away from the diagonal.
\begin{lem}[Local integrability on $\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta$]
\label{localBoundedIntegrals}
Assume that $A$ satisfies boundedness \eqref{Abd} and ellipticity \eqref{ellip}, and that $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$.
Assume also that {\rm{(IB)}} and {\rm{(H)}} hold so that $\Gamma^0$, $\Gamma^\Lambda$, and $\Gamma^V$, the fundamental matrices of $\mathcal{L}_0$ $\mathcal{L}_\La$, and $\mathcal{L}_V$, respectively, exist and satisfy the conclusions of Theorem \ref{t3.6}.
Define $\displaystyle G(x,y) = \int_{\ensuremath{\R^n}}\Gamma^\Lambda (x, z)\brac{V(z) - \Lambda(z)} \Gamma^V (z, y) \, dz$ and $\displaystyle H(x,y) = \int_{\ensuremath{\R^n}}\Gamma^0 (x, z)\Lambda(z)\Gamma^\Lambda (z, y) \, dz$.
Then $G, H \in L^1_{\loc}\pr{\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta}$.
\end{lem}
\begin{proof}
We show that $G \in L^1_{\loc}\pr{\ensuremath{\mathbb{R}}^n \times \ensuremath{\mathbb{R}}^n \setminus \Delta}$ and note that the argument for $H$ is analogous.
Set $r = \abs{x - y}$ and let $\varepsilon = \frac{\varepsilon_1}2$, where $\varepsilon_1 > 0$ is as in Lemma \ref{MainUpperBoundCor}.
An application of the triangle inequality followed by Corollary \ref{UppBoundCor} applied to $\Gamma^\Lambda$, along with the bound \eqref{eq3.60} from Theorem \ref{t3.6} applied to $\Gamma^V$, shows that
\begin{equation}
\label{GBound}
\begin{aligned}
\abs{G\pr{x, y}}
&\le \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{\Gamma^\Lambda (x, z)} \abs{V(z) - \Lambda(z)} \abs{\Gamma^V(z, y)} \, dz
\lesssim_{(\mathcal{L}_V, p, C_V)} \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \frac{e^{-\epsilon \overline{d}(x, z, V)}\, |V(z) - \Lambda(z)| }{|z-x|^{n-2} |z-y|^{n-2}}dz \\
\lesssim& \int_{B(x, \frac r 2)} \frac{ |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz
+ \int_{B(y, \frac r 2)} \frac{ |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz \\
+& \int_{\ensuremath{\mathbb{R}}^n \setminus \pr{B(x, \frac r 2) \cup B(y, \frac r 2)}} \frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz.
\end{aligned}
\end{equation}
For the first term, an application of H\"older's inequality shows that
\begin{equation}
\label{xBall}
\begin{aligned}
\int_{B(x, \frac r 2)} & \frac{ |V(z)| \, dz}{|z-x|^{n-2} |z-y|^{n-2}}
\lesssim_{(n)} r^{2-n}\int_{B(x, \frac r 2)} \frac{|V(z)| \, dz}{|z-x|^{n-2}} \\
&\le r^{2-n} \norm{V}_{L^p\pr{B(x, \frac r 2)}} \pr{\int_{0}^{r/2} \rho^{n-1 + \frac {p\pr{2-n}} {p-1}} d\rho}^{\frac {p-1} p}
\lesssim_{\pr{n, p}} \norm{V}_{L^p\pr{B(x, \frac r 2)}} r^{4 - n - \frac n p}.
\end{aligned}
\end{equation}
An analogous argument shows that for the second term in \eqref{GBound}, we get
\begin{align}
\label{yBall}
\int_{B(y, \frac r 2)} \frac{|V(z)| \, dz}{|z-x|^{n-2} |z-y|^{n-2}}
&\lesssim_{\pr{n, p}} \norm{V}_{L^p\pr{B(y, \frac r 2)}} r^{4 - n - \frac n p}.
\end{align}
We now turn to the third integral in \eqref{GBound}.
Observe that with $R = \frac 1 {\overline{m}\pr{x, V}}$,
\begin{equation}
\label{thirdIntegral}
\begin{aligned}
\int_{\ensuremath{\mathbb{R}}^n \setminus \pr{B(x, \frac r 2) \cup B(y, \frac r 2)}} &\frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz
\lesssim \int_{\ensuremath{\mathbb{R}}^n \setminus B(x, \frac r 2)} \frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)|}{|z-x|^{2n-4}} dz \\
&\le \int_{B(x,R) \setminus B(x, \frac r 2)} \frac{|V(z)|}{|z-x|^{2n-4}} dz
+ \int_{\ensuremath{\mathbb{R}}^n \setminus B(x, R)} \frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)|}{|z-x|^{2n-4}} dz.
\end{aligned}
\end{equation}
Assuming that $R \ge \frac r 2$, choose $J \in \ensuremath{\mathbb{Z}}_{\ge 0}$ so that $2^{J-1} r \le R \le 2^J r$.
Let $q = p$ if $n \ge 4$ and $q \in \pr{\frac 3 2, \min\set{p, 3}}$ if $n = 3$.
Since $q \le p$, then by Lemma \ref{GehringLemma}, $V \in \MC{B}_{q}$ as well with the same uniform ${\MC{B}_p}$ constant.
Let $q'$ denote the H\"older conjugate of $q$.
An application of H\"older's inequality shows that
\begin{equation*}
\begin{aligned}
\int_{B(x, R) \setminus B(x, \frac r 2)} \frac{|V(z)|}{|z-x|^{2n-4}} dz
&\le \pr{\int_{B(x, R)} \abs{V(z)}^q dz}^{\frac 1 q} \pr{\int_{B(x, R) \setminus B(x, \frac r 2)} \frac{1}{|z-x|^{q'\pr{2n-4}}} dz}^{\frac 1 {q'}}.
\end{aligned}
\end{equation*}
Now
\begin{align*}
\pr{\int_{B(x, R)} \abs{V(z)}^q dz}^{\frac 1 q}
&= \abs{B(x, R)}^{\frac 1 q} \pr{\fint_{B(x, R)} \abs{V(z)}^q dz}^{\frac 1 q}
\lesssim_{\pr{d, n, q, C_V}} R^{\frac n q -2} \pr{\frac 1 {R^{n-2}}\int_{B(x, R)} \abs{V(z)} dz} \\
&= R^{\frac n q -2} \Psi\pr{x, \frac 1 {\overline{m}\pr{x, V}}; \abs{V}}
\lesssim_{(d, n, p, C_V)} R^{\frac n q -2},
\end{align*}
where we have used \eqref{normRelationship}.
On the other hand,
\begin{align*}
\pr{\int_{B(x, R) \setminus B(x, \frac r 2)} \frac{1}{|z-x|^{q'\pr{2n-4}}} dz}^{\frac 1 {q'}}
&= \pr{\int_{\frac r 2}^R \rho^{n- 2q'\pr{n-2}-1} d\rho}^{\frac 1 {q'}}
\lesssim_{\pr{n, q}} r^{4 - n - \frac n q},
\end{align*}
where we have used that $n- 2q'\pr{n-2} < 0$, which follows from the definition of $q$.
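To verify that $n - 2q'\pr{n-2} < 0$: when $n = 3$, this reads $3 - 2q' < 0$, which holds because $q < 3$ forces $q' > \frac 3 2$; when $n \ge 4$, we have $q' > 1 \ge \frac{n}{2\pr{n-2}}$, so that $2 q' \pr{n-2} > n$.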
Combining the previous two inequalities shows that
\begin{equation}
\label{I31}
\begin{aligned}
\int_{B(x, R) \setminus B(x, \frac r 2)} \frac{|V(z)|}{|z-x|^{2n-4}} dz
&\lesssim_{(d, n, p, C_V)} r^{2 - n} \brac{r \overline{m}\pr{x, V}}^{2 - \frac n q}.
\end{aligned}
\end{equation}
For the exterior integral, we have
\begin{equation}
\label{I32}
\begin{aligned}
&\int_{\ensuremath{\mathbb{R}}^n \setminus B(x, R)} \frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)|}{|z-x|^{2n-4}} dz
= \sum_{j=1}^\infty \int_{B(x, 2^{j} R) \setminus B(x, 2^{j-1} R)} \frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)|}{|z-x|^{2n-4}} dz \\
\le& \sum_{j=1}^\infty \pr{\int_{B(x, 2^{j} R) \setminus B(x, 2^{j-1} R)} \frac{1}{|z-x|^{q' \pr{2n-4}}} dz}^{\frac 1 {q'}} \pr{\int_{B(x, 2^{j} R) \setminus B(x, 2^{j-1} R)} e^{-q \epsilon \overline{d}(z, x, V)}\, |V(z)|^{q} dz}^{\frac 1 {q}} \\
\lesssim_{(n)}& \sum_{j=1}^\infty \pr{2^j R}^{4 - 2n + \frac n {q'} + \frac n {q}} \pr{\fint_{B(x, 2^{j} R) \setminus B(x, 2^{j-1} R)} e^{-q \epsilon \overline{d}(z, x, V)}\, |V(z)|^{q} dz}^{\frac 1 {q}}.
\end{aligned}
\end{equation}
We may repeat the arguments used to reach \eqref{distlowBd} and conclude that if $\abs{x -z} \ge 2^{j-1}R = \frac{2^{j-1}}{\overline{m}\pr{x,V}}$, then for any $\varepsilon' > 0$,
\begin{equation}
\label{expDistBound}
e^{\varepsilon' \overline{d}(z, x, V)} \gtrsim_{(d, n, p, C_V, \varepsilon')} \overline{m}\pr{x, V} \abs{x - z} \gtrsim 2^{j}.
\end{equation}
For $\varepsilon'$ to be specified below, it follows that with $c = \frac \varepsilon {\varepsilon'} \ln 2$,
\begin{align*}
\pr{\fint_{B(x, 2^{j} R) \setminus B(x, 2^{j-1} R)} e^{-q \epsilon \overline{d}(z, x, V)}\, |V(z)|^{q} dz}^{\frac 1 {q}}
&\lesssim_{(\mathcal{L}_V, p, C_V, \varepsilon', c)} e^{- cj} \pr{\fint_{B(x, 2^{j} R)} |V(z)|^{q} dz}^{\frac 1 {q}} \\
&\le e^{- cj} C_V \fint_{B(x, 2^{j} R)} |V(z)| dz
\lesssim_{(C_V)} e^{- cj} \gamma^{j} \pr{2^j R}^{-n} \int_{B(x, R)} |V(z)| dz \\
&= R^{-2}\pr{\frac{\gamma}{e^c 2^n}}^j \Psi\pr{x, R; \abs{V}}
\lesssim_{(d)} R^{-2}\pr{\frac{\gamma}{e^c 2^n}}^j ,
\end{align*}
where we have used that $V \in \MC{B}_{q}$ and $\Psi\pr{x, R; \abs{V}} \le d^2$.
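We also record why the exponential factor produces the decay $e^{-cj}$ in the first inequality above: by \eqref{expDistBound}, $e^{\epsilon' \overline{d}(z, x, V)} \gtrsim_{(d, n, p, C_V, \varepsilon')} 2^j$ for $z \in B(x, 2^{j} R) \setminus B(x, 2^{j-1} R)$, so that
\begin{equation*}
e^{-q \epsilon \overline{d}(z, x, V)} = \pr{e^{-\epsilon' \overline{d}(z, x, V)}}^{q \epsilon/\epsilon'} \lesssim \pr{2^{-j}}^{q \epsilon/\epsilon'} = e^{-q c j}, \quad \text{where } c = \frac{\epsilon}{\epsilon'} \ln 2.
\end{equation*}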
Substituting this expression into \eqref{I32} shows that
\begin{align*}
\int_{\ensuremath{\mathbb{R}}^n \setminus B(x, R)} \frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)|}{|z-x|^{2n-4}} dz
&\lesssim_{(\mathcal{L}_V, p, C_V, \varepsilon', c)} R^{2-n}\sum_{j=1}^\infty \pr{\frac{\gamma}{e^c 4^{n-2}}}^j.
\end{align*}
By choosing $\varepsilon' \simeq_{(\gamma,n)} \varepsilon$ sufficiently small, we can ensure that $c = c\pr{\gamma, n}$ is large enough for the series to converge and then
\begin{align}
\label{outerBallBd}
\int_{\ensuremath{\mathbb{R}}^n \setminus B(x, R)} \frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)|}{|z-x|^{2n-4}} dz
&\lesssim_{(\mathcal{L}_V, p, C_V)} \overline{m}\pr{x, V}^{n-2}.
\end{align}
Combining \eqref{GBound} with \eqref{xBall}, \eqref{yBall}, \eqref{thirdIntegral}, \eqref{I31} and \eqref{outerBallBd} then shows that
\begin{align*}
\abs{G\pr{x,y}}
&\lesssim_{(\mathcal{L}_V, p, C_V)} \pr{\norm{V}_{L^p\pr{B(x, \frac r 2)}} + \norm{V}_{L^p\pr{B(y, \frac r 2)}}} r^{4 - n - \frac n p}
+ r^{2 - n} \brac{r \overline{m}\pr{x, V}}^{2 - \frac n q}
+ \overline{m}\pr{x, V}^{n-2}.
\end{align*}
Let $K, L \subset \ensuremath{\mathbb{R}}^n$ be disjoint compact sets.
Set $r_0 = \diam \pr{K} + \diam\pr{L} + \dist\pr{K, L}$ and define $M$ to be the closed $r_0$-neighborhood of $K \cup L$, another compact set.
Note that for any $x \in K$ and any $y \in L$, $\displaystyle \norm{V}_{L^p\pr{B(x, \frac r 2)}} + \norm{V}_{L^p\pr{B(y, \frac r 2)}} \le 2 \norm{V}_{L^p\pr{M}}$.
As $r = \abs{x - y}$, then $r \le r_0$.
Finally, since $\overline{m}\pr{x, V}$ is bounded on compact sets, then we may conclude that $G\pr{x,y}$ is bounded on $K \times L$.
It follows that $G(x,y)$ is locally integrable away from the diagonal, as required.
\end{proof}
Using the representation formula from Lemma \ref{UniquenessLem} and many arguments from the proof of Lemma \ref{localBoundedIntegrals}, we can now bound the difference between $\Gamma^V$ and $\Gamma^0$.
We use $\Gamma^\Lambda$ as an intermediary because this allows us to use the upper bound described by Corollary \ref{UppBoundCor} instead of the one for $\Gamma^V$ given in Theorem \ref{UppBoundThm}.
The advantage of this approach is that we do not need to assume that $V \in \MC{NC}$.
\begin{lem}[Lower bound lemma]
\label{SmallScaleLowerLem}
Let $\mathcal{L}_V$ be given by \eqref{elEqDef}, where $A$ satisfies \eqref{ellip} and \eqref{Abd}, and $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$.
Assume that {\rm{(IB)}} and {\rm{(H)}} hold.
Let $\Gamma^V(x, y)$ denote the fundamental matrix of $\mathcal{L}_V$.
Let $x, y \in \ensuremath{\mathbb{R}}^n$ be such that $|x-y| \le \frac{1}{\overline{m}(x, V)}$.
Set $\alpha = 2 - \frac n {q}$, where $q = p$ if $n \ge 4$ and $q \in \pr{\frac 3 2, \min\set{p, 3}}$ if $n = 3$.
Then there exists a constant $C_2 = C_2\pr{\mathcal{L}_V, p, C_V}$ for which
\begin{equation*}
|\Gamma^V (x, y) - \Gamma^0(x, y)| \le C_2 \frac{\brac{|x-y| \overline{m}(x, V)}^{\alpha} }{|x-y|^{n-2}}.
\end{equation*}
\end{lem}
\begin{proof}
Set $r = \abs{x - y}$ and let $\varepsilon = \frac{\varepsilon_1}2$, where $\varepsilon_1 > 0$ is as in Lemma \ref{MainUpperBoundCor}.
An application of Lemma \ref{UniquenessLem} followed by Corollary \ref{UppBoundCor} along with the bound \eqref{eq3.60} from Theorem \ref{t3.6} applied to $\Gamma^0$ and $\Gamma^V$ shows that
\begin{align}
|\Gamma^V (x, y) - \Gamma^0(x, y)|
\le& \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{\Gamma^0(x, z)} \abs{\Lambda(z)} \abs{\Gamma^\Lambda(z, y)} \, dz
+ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \abs{\Gamma^\Lambda (x, z)} \abs{V(z) - \Lambda(z)} \abs{\Gamma^V(z, y)} \, dz
\nonumber \\
\lesssim_{(\mathcal{L}_V, p, C_V)}& \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \frac{e^{-\epsilon \overline{d}(z, y, V)}\, |\Lambda(z)| }{|z-x|^{n-2} |z-y|^{n-2}}dz
+ \ensuremath{\int_{\ensuremath{\mathbb{R}}^n}} \frac{e^{-\epsilon \overline{d}(x, z, V)}\, |V(z) - \Lambda(z)| }{|z-x|^{n-2} |z-y|^{n-2}}dz
\nonumber \\
\lesssim& \int_{B(x, \frac r 2)} \frac{ |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz
+ \int_{B(y, \frac r 2)} \frac{ |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz
\nonumber \\
+& \int_{\ensuremath{\mathbb{R}}^n \setminus \pr{B(x, \frac r 2) \cup B(y, \frac r 2)}} \frac{e^{-\epsilon \overline{d}(z, y, V)}\, |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz
+ \int_{\ensuremath{\mathbb{R}}^n \setminus \pr{B(x, \frac r 2) \cup B(y, \frac r 2)}} \frac{e^{-\epsilon \overline{d}(x, z, V)}\, |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz.
\label{FSdiffBound}
\end{align}
For the first term in \eqref{FSdiffBound}, we take a different approach from the previous proof and we get
\begin{align*}
\int_{B(x, \frac r 2)} & \frac{|V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz
\lesssim_{(n)} r^{2-n}\int_{B(x, \frac r 2)} \frac{|V(z)|}{|z-x|^{n-2}} dz \\
&= r^{2-n} \sum_{j=1}^\infty \int_{B(x, \frac r {2^{j}}) \setminus B(x, \frac r {2^{j+1}})} \frac{|V(z)|}{|z-x|^{n-2}} dz
\lesssim r^{2-n} \sum_{j=1}^\infty \pr{\frac r {2^{j}}}^{2-n} \int_{B(x, \frac r {2^{j}})} |V(z)| dz \\
&= r^{2-n} \sum_{j=1}^\infty \Psi\pr{x, \frac r {2^{j}}; \abs{V}}
\le r^{2-n} \sum_{j=1}^\infty C_V \brac{\frac{r \, \overline{m}\pr{x, V}}{2^j}}^{2 - \frac n p} \Psi\pr{x, \frac 1 {\overline{m}\pr{x, V}}; \abs{V}},
\end{align*}
where we have applied Lemma \ref{BasicShenLem} to reach the last line.
(We remark that a version of this inequality was established in \cite[Remark 0.13]{She99} using a different argument.)
By \eqref{normRelationship} in the proof of Lemma \ref{omCompLem}, $\Psi\pr{x, \frac 1 {\overline{m}\pr{x, V}}; \abs{V}} \le d^2 \abs{\Psi\pr{x, \frac 1 {\overline{m}\pr{x, V}}; V}} = d^2$.
Since $p > \frac n 2$, the series converges and we see that
\begin{align}
\label{xBall1}
\int_{B(x, \frac r 2)} \frac{|V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz
&\lesssim_{(d, n, p, C_V)} r^{2-n} \brac{r \, \overline{m}\pr{x, V}}^{2 - \frac n p}
\le r^{2-n} \brac{r \, \overline{m}\pr{x, V}}^{2 - \frac n q},
\end{align}
since $q \le p$.
An analogous argument shows that the second term in \eqref{FSdiffBound} satisfies
\begin{align}
\label{yBall1}
\int_{B(y, \frac r 2)} \frac{ |V(z)|}{|z-x|^{n-2} |z-y|^{n-2}} dz
&\lesssim_{(d, n, p, C_V)} r^{2-n} \brac{r \, \overline{m}\pr{y, V}}^{2 - \frac n q}
\lesssim_{(d, n, p, C_V)} r^{2-n} \brac{r \, \overline{m}\pr{x, V}}^{2 - \frac n q},
\end{align}
since Lemma \ref{muoBounds} and the assumption that $|x-y| \le \frac{1}{\overline{m}(x, V)}$ imply that $\overline{m}\pr{x, V} \simeq_{(d, n, p, C_V)} \overline{m}\pr{y, V}$.
We now turn to the fourth integral in \eqref{FSdiffBound}.
By the arguments in the proof of Lemma \ref{localBoundedIntegrals}, we combine \eqref{thirdIntegral} with \eqref{I31} and \eqref{outerBallBd} to get
\begin{equation}
\label{fourthIntegral}
\begin{aligned}
\int_{\ensuremath{\mathbb{R}}^n \setminus \pr{B(x, \frac r 2) \cup B(y, \frac r 2)}} \frac{e^{-\epsilon \overline{d}(x, z, V)}\, |V(z)| \, dz}{|z-x|^{n-2} |z-y|^{n-2}}
&\lesssim \int_{B(x,R) \setminus B(x, \frac r 2)} \frac{|V(z)| \, dz}{|z-x|^{2n-4}}
+ \int_{\ensuremath{\mathbb{R}}^n \setminus B(x, R)} \frac{e^{-\epsilon \overline{d}(z, x, V)}\, |V(z)| \, dz}{|z-x|^{2n-4}} \\
&\lesssim_{(d, n, p, C_V)} r^{2 - n} \brac{r \overline{m}\pr{x, V}}^{2 - \frac n q}
+ \overline{m}\pr{x, V}^{n-2}.
\end{aligned}
\end{equation}
An analogous argument applied to the third integral in \eqref{FSdiffBound} shows that
\begin{equation}
\label{thirdIntegral1}
\begin{aligned}
\int_{\ensuremath{\mathbb{R}}^n \setminus \pr{B(x, \frac r 2) \cup B(y, \frac r 2)}} \frac{e^{-\epsilon \overline{d}(z, y, V)}\, |V(z)| \, dz}{|z-x|^{n-2} |z-y|^{n-2}}
&\lesssim_{(d, n, p, C_V)} r^{2 - n} \brac{r \overline{m}\pr{y, V}}^{2 - \frac n q} + \overline{m}\pr{y, V}^{n-2} \\
&\lesssim_{(d, n, p, C_V)} r^{2 - n} \brac{r \overline{m}\pr{x, V}}^{2 - \frac n q} + \overline{m}\pr{x, V}^{n-2},
\end{aligned}
\end{equation}
where we have again applied Lemma \ref{muoBounds} to conclude that $\overline{m}\pr{x, V} \simeq_{(d, n, p, C_V)} \overline{m}\pr{y, V}$.
Substituting \eqref{xBall1} -- \eqref{thirdIntegral1} into \eqref{FSdiffBound} shows that
\begin{equation*}
\label{differenceBound}
\begin{aligned}
|\Gamma^V (x, y) - \Gamma^0(x, y)|
&\lesssim_{\pr{\mathcal{L}_V, p, C_V}} r^{2 - n} \brac{r \overline{m}\pr{x, V}}^{2 - \frac n q} + \overline{m}\pr{x, V}^{n-2}
\lesssim r^{2 - n} \brac{r \overline{m}\pr{x, V}}^{2 - \frac n q},
\end{aligned}
\end{equation*}
where we have used that
\begin{align*}
\overline{m}\pr{x, V}^{n-2}
&= r^{2 - n} \brac{r \overline{m}\pr{x, V}}^{n - 2}
= r^{2-n} \brac{r \overline{m}\pr{x, V}}^{2 - \frac n q} \brac{r \overline{m}\pr{x, V}}^{n - 2 - \pr{2 - \frac n q}}
\le r^{2-n} \brac{r \overline{m}\pr{x, V}}^{2 - \frac n q},
\end{align*}
since $r \, \overline{m}\pr{x, V} \le 1$ and $n\pr{1 + \frac 1 {q}} \ge 4$ by definition.
The conclusion of the lemma follows.
\end{proof}
We now prove our lower bound.
To do this, we assume that the following scale-invariant Harnack inequality holds for matrix solutions to our equation.
\begin{itemize}
\item[(SIH)]
We say that {\rm{(SIH)}} holds if there exists a small constant $c_S$ so that whenever $x_0 \in \ensuremath{\mathbb{R}}^n$ and $r \le \frac {c_S} {\overline{m}(x_0, V)}$, with $B = B(x_0, r)$, the following holds:
If $U$ is a $d \times d$ matrix whose columns $\V{u}_i \in W^{1,2}_V(2B)$ are weak solutions to $\mathcal{L}_V \V{u}_i = \V{0}$ in $2B$ for each $i = 1, 2, \ldots, d$, then for every $\V{e} \in \ensuremath{\mathbb{R}}^d$, it holds that
\begin{equation}
\label{siHi}
\sup_{x \in B} \abs{\innp{U(x) \V{e}, \V{e}}} \le C_{\text{H}} \inf_{x \in B} \abs{\innp{U(x) \V{e},\V{e}}},
\end{equation}
where the constant $C_{\text{H}}$ depends only on $d, n, \lambda, \Lambda$, and $V$.
\end{itemize}
The standard Harnack inequality has a constant that typically grows with the size of the domain and the norm of $V$.
Since the constant here is independent of $r$, we refer to this as the ``scale-invariant" version of the inequality.
Of course, since we are working in a systems setting, there is no guarantee that this estimate, or any of the standard de Giorgi-Nash-Moser results, necessarily hold.
As such, we assume that $\mathcal{L}_V$ is chosen so that {\rm{(IB)}}, {\rm{(H)}}, and {\rm{(SIH)}} all hold.
To convince ourselves that these are reasonable assumptions to make, we refer the reader to \cite{MP19} and \cite{DHM18}, where the validity of these assumptions in the scalar setting is shown.
Finally, we also need to assume the following lower bound on the fundamental matrix of the homogeneous operator.
\begin{itemize}
\item[(LB)]
We say that {\rm{(LB)}} holds if there exists a constant $c_0$ so that for every $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation}
\label{LB}
\abs{\innp{\Gamma^0\pr{x,y} \V{e}, \V{e}}} \ge \frac{c_0}{\abs{x-y}^{n-2}}.
\end{equation}
\end{itemize}
In \cite{HK07}, the fundamental and Green's matrices for homogeneous elliptic systems are extensively studied.
Although such a bound does not necessarily follow from the collection of results presented in \cite{HK07}, this result is known to hold in the scalar setting; see \cite[Theorem 1.1]{GW82}.
\begin{thm}[Exponential lower bound]
\label{LowerBoundThm}
Let $\mathcal{L}_V$ be given by \eqref{elEqDef}, where $A$ satisfies \eqref{ellip} and \eqref{Abd}, and $V \in {\MC{B}_p} \cap \MC{ND}$ for some $p > \frac n 2$.
Assume that {\rm{(IB)}}, {\rm{(H)}}, {\rm{(SIH)}}, and {\rm{(LB)}} hold.
Let $\Gamma^V(x, y)$ denote the fundamental matrix of $\mathcal{L}_V$.
Then there exist constants $C = C\pr{\mathcal{L}_V, p, C_V, C_{\text{H}}, c_S, c_0}$, $\varepsilon_2 = \varepsilon_2\pr{d, n, p, C_V, C_{\text{H}}, c_S}$ so that for every $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation}
\label{upperExpBound}
\abs{\innp{\Gamma^V (x, y) \V{e}, \V{e}}} \geq C \frac{e^{-\epsilon_2 \overline{d}(x, y, V)}}{|x-y|^{n-2}}.
\end{equation}
\end{thm}
\begin{rem}
\label{differentDistance}
If we make the weaker assumption that $\abs{V} \in {\text{B}_p}$ (instead of assuming that $V \in {\MC{B}_p}$), then all of the statements in this section still hold with $\overline{m}\pr{\cdot, V}$ replaced by $m\pr{\cdot, \abs{V}}$.
Accordingly, the conclusion described by \eqref{upperExpBound} still holds with $\overline{d}(x, y, V)$ replaced by $d(x, y, \abs{V})$.
\end{rem}
\begin{rem}\label{differentLowerBounds}
Versions of this result still hold with either $\abs{\Gamma^V (x, y)}$ or $\abs{\Gamma^V (x, y) \V{e}}$ on the left side of \eqref{upperExpBound} in place of $\abs{\innp{\Gamma^V (x, y) \V{e}, \V{e}}}$ if we replace the assumptions {\rm{(SIH)}} and {\rm{(LB)}} accordingly.
\end{rem}
We follow the arguments from \cite[Theorem 4.15]{She99} and \cite[Theorem 7.27]{MP19}, with appropriate modifications for our systems setting.
\begin{proof}
By Lemma \ref{muoBounds} and the proof of \cite[Proposition 3.25]{MP19}, there exists $A = A\pr{d, n, p, C_V}$ large enough so that
\begin{equation}
\label{Adefn}
x \not \in B\pr{y, \frac{2}{\overline{m}(y, V)}} \text{ whenever } |x-y|\geq \frac{A}{\overline{m}(x, V)}.
\end{equation}
Similarly, with $c_1 = \min\set{\pr{\frac {c_0}{2C_2}}^{1/\alpha}, 1}$, where $c_0$ is from \rm{(LB)} and $C_2\pr{\mathcal{L}_V, p, C_V}$ and $\alpha\pr{n, p}$ are from Lemma \ref{SmallScaleLowerLem}, an analogous argument shows that there exists $c_2 = c_2\pr{d, n, p, C_V, c_1}$ sufficiently small so that
\begin{equation}
\label{c2defn}
y \not \in B\pr{z, \frac{2 c_2}{\overline{m}(z, V)}} \text{ whenever } |z-y|\geq \frac{c_1}{\overline{m}(y, V)}.
\end{equation}
Since $c_1 = c_1\pr{\mathcal{L}_V, p, C_V, c_0}$, then $c_2 = c_2\pr{\mathcal{L}_V, p, C_V, c_0}$ as well.
We prove our bound in three settings: when $\abs{x - y}$ is small, medium, and large.
The constant $A$ is used to distinguish between the medium and the large settings, while $c_1$ is used to distinguish between the small and medium settings.
The small setting is used as a tool to prove the medium setting, so we start there.
Assume that we are in the small-scale setting where $ |z-y| \le \frac{c_1}{\overline{m}(z, V)}$.
By \rm{(LB)}, the triangle inequality, and Lemma \ref{SmallScaleLowerLem}, since $|z-y| \le \frac 1{\overline{m}(z, V)}$, then for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{align*}
\frac{c_0}{\abs{z-y}^{n-2}}
&\le \abs{\innp{\Gamma^0\pr{z,y} \V{e}, \V{e}}}
\le \abs{\innp{\pr{\Gamma^0\pr{z,y} - \Gamma^V\pr{z,y}} \V{e}, \V{e}}} + \abs{\innp{\Gamma^V\pr{z,y} \V{e}, \V{e}}} \\
&\le \abs{\Gamma^0\pr{z,y} - \Gamma^V\pr{z,y}} + \abs{\innp{\Gamma^V\pr{z,y} \V{e}, \V{e}}}
\le C_2 \frac{\brac{|z-y| \overline{m}(z, V)}^{\alpha}}{|z-y|^{n-2}} + \abs{\innp{\Gamma^V\pr{z,y} \V{e}, \V{e}}}.
\end{align*}
Since $c_1$ is defined so that we may absorb the first term into the left-hand side, it follows that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation}
\label{smallLowBd}
\abs{\innp{\Gamma^V\pr{z,y} \V{e}, \V{e}}} \ge \frac{c_0}{2\abs{z-y}^{n-2}} \quad \text{ whenever } \, |z-y| \le \frac{c_1}{\overline{m}(z, V)}.
\end{equation}
Lemma \ref{muoBounds} implies that $\overline{m}(z, V) \simeq_{(d, n, p, C_V)} \overline{m}(y, V)$, so after redefining $c_1\pr{\mathcal{L}_V, p, C_V, c_0}$ if necessary, we also have that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation}
\label{smallLowBdFlipped}
\abs{\innp{\Gamma^V\pr{z,y} \V{e}, \V{e}}} \ge \frac{c_0}{2\abs{z-y}^{n-2}} \quad \text{ whenever } \, |z-y| \le \frac{c_1}{\overline{m}(y, V)}.
\end{equation}
We now consider the midrange setting where $|x-y| \in \brac{\frac{c_1}{\overline{m}(x, V)}, \frac A {\overline{m}(x, V)}}$.
There is no loss in assuming that $c_2 \le c_S$, where $c_S$ is the small constant from \rm{(SIH)}.
Construct a chain $\set{z_i}_{i=1}^N$ of $N$ elements along the straight line connecting $x$ and $y$ so that $\abs{y - z_1} = \frac{c_1}{\overline{m}(y, V)}$, $\abs{z_{i+1} - z_i} = \frac{c_2}{\overline{m}(z_i, V)}$ for $i = 1, \ldots, N-1$, and $\abs{x - z_N} \le \frac{c_2}{\overline{m}(z_N, V)}$.
Since Lemma \ref{muoBounds} implies that $\overline{m}(z, V) \simeq_{(d, n, p, C_V)} \overline{m}(x, V)$ for any point $z$ along the line between $x$ and $y$, then $N \lesssim_{(d, n, p, C_V, c_1, c_2)} A$.
Since $\abs{z_j - y} \ge \abs{y - z_1} = \frac{c_1}{\overline{m}(y, V)}$ for all $j = 1, \ldots, N$, then \eqref{c2defn} shows that $y \not \in B\pr{z_j, \frac{2 c_2}{\overline{m}(z_j, V)}}$.
In particular, if $U = \Gamma^V(\cdot, y)$, then each column $\V{u}_i$ of $U$ satisfies $\mathcal{L}_V \V{u}_i = 0$ weakly on each $B\pr{z_j, \frac{2 c_2}{\overline{m}(z_j, V)}}$.
Then we see by repeatedly applying the scale-invariant Harnack inequality \rm{(SIH)} that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation*}
\begin{aligned}
\abs{\innp{\Gamma^V (x, y) \V{e}, \V{e}}}
&\ge \inf_{B(z_N, \frac{c_2}{\overline{m}(z_N, V)})} \abs{\innp{\Gamma^V (\cdot, y) \V{e}, \V{e}}}
\ge C_{\text{H}}^{-1} \sup_{B(z_N, \frac{c_2}{\overline{m}(z_N, V)})} \abs{\innp{\Gamma^V (\cdot, y) \V{e}, \V{e}}}
\ge C_{\text{H}}^{-1} \abs{\innp{\Gamma^V (z_N, y) \V{e}, \V{e}}} \\
&\ge C_{\text{H}}^{-1} \inf_{B(z_{N-1}, \frac{c_2}{\overline{m}(z_{N-1}, V)})} \abs{\innp{\Gamma^V (\cdot, y) \V{e}, \V{e}}}
\ge C_{\text{H}}^{-2} \sup_{B(z_{N-1}, \frac{c_2}{\overline{m}(z_{N-1}, V)})} \abs{\innp{\Gamma^V (\cdot, y) \V{e}, \V{e}}} \\
&\ge C_{\text{H}}^{-2} \abs{\innp{\Gamma^V (z_{N-1}, y) \V{e}, \V{e}}}
\ge \ldots
\ge C_{\text{H}}^{-N} \abs{\innp{\Gamma^V (z_{1}, y) \V{e}, \V{e}}}
\ge \frac{C_{\text{H}}^{-N} c_0}{2 \abs{z_1-y}^{n-2}},
\end{aligned}
\end{equation*}
where the last bound follows from \eqref{smallLowBdFlipped}.
However, $\abs{z_{1} - y} = \frac{c_1}{\overline{m}(y, V)} \le C_A \frac{c_1}{\overline{m}(x, V)} \le C_A \abs{x-y}$.
Therefore, for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
$$\abs{\innp{\Gamma^V (x, y) \V{e}, \V{e}}} \ge \frac{C_{\text{H}}^{-N} c_0}{2 \pr{C_A \abs{x-y}}^{n-2}} \quad \text{whenever} \, |x-y| \in \brac{\frac{c_1}{\overline{m}(x, V)}, \frac A {\overline{m}(x, V)}}.$$
Combining this observation with \eqref{smallLowBd} shows that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation}
\label{medLowerBd}
\abs{\innp{\Gamma^V (x, y) \V{e}, \V{e}}} \ge \frac{C_3}{\abs{x-y}^{n-2}} \quad \text{ whenever } \, |x-y| \le \frac{A}{\overline{m}(x, V)}.
\end{equation}
An application of Lemma \ref{muoBounds} implies that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{equation}
\label{medLowerBdFlipped}
\abs{\innp{\Gamma^V (x, y) \V{e}, \V{e}}} \ge \frac{C_3}{\abs{x-y}^{n-2}} \quad \text{ whenever } \, |x-y| \le \frac{A}{\overline{m}(y, V)},
\end{equation}
where $C_3 = C_3\pr{\mathcal{L}_V, p, C_V, C_{\text{H}}, c_0}$ is possibly redefined.
By the proof of Lemma \ref{closeRemark}, if $|x-y| \le \frac A{\overline{m}(x, V)}$, then $\overline{d}(x, y, V) \lesssim_{(d, n, p, C_V)} 1$.
In particular, this observation combined with \eqref{medLowerBd} gives the result \eqref{upperExpBound} in the setting where $|x-y| \le \frac A{\overline{m}(x, V)}$.
Now consider the final (large-scale) setting where $|x-y| > \frac A{\overline{m}(x, V)}$.
Choose $\gamma : \brac{0, 1} \to \ensuremath{\mathbb{R}}^n$ with $\gamma(0) = x, \gamma(1) = y$, and
$$\int_0^1 \overline{m}(\gamma(t), V) |\gamma'(t)| \, dt \leq 2 \overline{d}(x, y, V).$$
Such a curve exists because $\overline{d}(x, y, V)$ is defined as an infimum over all such paths.
Let
$$t_0 = \sup\set{ t \in [0, 1] : |x - \gamma(t)|\leq \frac{A}{\overline{m}(x, V)}} < 1.$$
If $|\gamma(t_0) - y| \leq \frac 1{\overline{m}(\gamma(t_0), V)}$, then
$$\abs{x - y} \le \abs{x - \gamma(t_0)} + \abs{\gamma(t_0) - y} \le \frac{A}{\overline{m}(x, V)} + \frac 1{\overline{m}(\gamma(t_0), V)} \le \frac{\tilde A}{\overline{m}(x, V)},$$
since Lemma \ref{muoBounds} implies that $\overline{m}(\gamma(t_0), V) \simeq_{(d, n, p, C_V)} \overline{m}(x, V)$.
In this case, we may repeat the arguments from the previous paragraph to reach the conclusion of the theorem.
To proceed, we assume that $|x-y| > \frac A{\overline{m}(x, V)}$ and $|\gamma(t_0) - y| > \frac 1{\overline{m}(\gamma(t_0), V)}$.
Since $\overline{m}(\cdot, V)$ is locally bounded above and below, we can recursively define a finite sequence $0 < t_0 < t_1 < \ldots < t_\ell \le 1$ as follows.
For $j = 1, \ldots, \ell$, let
$$t_j = \inf \set{ t \in [t_{j-1}, 1] : |\gamma(t) - \gamma(t_{j-1}) | \geq \frac{1}{\overline{m}(\gamma(t_{j-1}), V)} }.$$
Then set $B_{j} = B\pr{\gamma(t_{j}), \frac{1}{\overline{m}(\gamma(t_{j}), V)} }$.
Define $I_{j} = [t_{j}, t_{j+1})$ for $j = 0, 1, \ldots, \ell-1$, and set $I_{\ell} = \brac{t_\ell, 1}$.
Observe that for $j = 0, 1, \ldots, \ell$,
$$\gamma(t) \in B_j \text{ for all } t \in I_{j}.$$
In particular, Lemma \ref{muoBounds} implies that $\overline{m}\pr{\gamma\pr{t}, V} \simeq_{(d, n, p, C_V)} \overline{m}\pr{\gamma\pr{t_{j}}, V}$ whenever $t \in I_{j}$.
Moreover, for $j = 0, 1, \ldots, \ell-1$,
$$|\gamma(t_{j+1}) - \gamma(t_{j})| = \frac{1}{\overline{m}(\gamma(t_{j}), V)}.$$
Thus,
\begin{align*}
\int_0^1 \overline{m}(\gamma(t), V) |\gamma'(t)| \, dt
&\ge \sum_{j = 0}^{\ell-1} \int_{I_j} \overline{m}(\gamma(t), V) |\gamma'(t)|\, dt
\gtrsim_{(d, n, p, C_V)} \sum_{j = 0}^{\ell-1} \overline{m}(\gamma(t_{j}), V) \int_{I_j} |\gamma'(t)|\, dt \\
&\geq \sum_{j = 0}^{\ell-1} \overline{m}(\gamma(t_{j}), V) |\gamma(t_{j+1}) - \gamma(t_{j})|
= \ell.
\end{align*}
Recalling how we defined $\gamma$, this shows that
\begin{equation}
\label{ellBound}
\ell \le C_4 \, \overline{d}(x, y, V),
\end{equation}
where $C_4 = C_4\pr{d, n, p, C_V}$.
We defined $t_0$ so that whenever $t \ge t_0$, $|x - \gamma(t)| \ge \frac{A}{\overline{m}(x, V)}$.
Therefore, by the choice of $A$ from \eqref{Adefn}, for each $j = 0, \ldots, \ell$, $x \not \in 2B_j$.
This means that if $U = \Gamma^V(\cdot, x)$, then each column $\V{u}_i$ of $U$ satisfies $\mathcal{L}_V \V{u}_i = 0$ weakly on each $2B_j$.
Thus, repeated applications of the scale-invariant Harnack inequality from {\rm(SIH)} show that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{align*}
\abs{\innp{U\pr{\gamma\pr{t_0}} \V{e}, \V{e}}}
&\le \widetilde C_{\text{H}} \abs{\innp{U\pr{\gamma\pr{t_1}} \V{e}, \V{e}}}
\le \ldots
\le \widetilde C_{\text{H}}^\ell \abs{\innp{U\pr{\gamma\pr{t_\ell}} \V{e}, \V{e}}}
\le \widetilde C_{\text{H}}^{\ell+1} \abs{\innp{U\pr{\gamma\pr{1}} \V{e}, \V{e}}},
\end{align*}
where $\widetilde C_{\text{H}} = C_{\text{H}}^\beta$ and $\beta$ depends on $c_S$ from {\rm(SIH)}.
Since $\gamma(1) = y$, then
\begin{align*}
\abs{\innp{\Gamma^V\pr{y, x} \V{e}, \V{e}}}
&\ge \widetilde C_{\text{H}}^{-\pr{\ell+1}} \abs{\innp{\Gamma^V\pr{\gamma(t_0), x} \V{e}, \V{e}}}
\ge \widetilde C_{\text{H}}^{-\pr{\ell+1}} \frac{C_3}{\abs{\gamma(t_0) -x}^{n-2}},
\end{align*}
where \eqref{medLowerBdFlipped} was applicable since $|\gamma(t_0) - x| \le \frac{A}{\overline{m}(x, V)}$.
Continuing on, since $|\gamma(t_0) - x| < \abs{x-y}$, we get that for any $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$,
\begin{align*}
\abs{\innp{\Gamma^V\pr{y, x} \V{e}, \V{e}}}
&\ge \frac{C_3 \exp\pr{-\ell \log \widetilde C_{\text{H}}}}{\widetilde C_{\text{H}} \abs{x-y}^{n-2}}
\ge \frac{C_3}{\widetilde C_{\text{H}} } \frac{\exp\pr{- C_4 \log \widetilde C_{\text{H}} \, \overline{d}(x, y, V) }}{\abs{x-y}^{n-2}},
\end{align*}
where we have applied \eqref{ellBound} in the final step.
As this bound is symmetric in $x$ and $y$, the conclusion \eqref{upperExpBound} follows.
\end{proof}
Finally, let us briefly discuss the connection between our upper and lower auxiliary functions and the Landscape functions that were mentioned in the introduction.
\begin{rem}
\label{LandscapeRem}
For all $x \in \ensuremath{\R^n}$, define
$$u(x) = \int_{\ensuremath{\R^n}} \abs{\Gamma^V(x, y)} \, dy.$$
We decompose $\ensuremath{\R^n}$ into the disjoint union of the ball $B\pr{x, \frac{1}{\underline{m}\pr{x,V}}}$ and the annuli $B\pr{x, \frac{2^{j}}{\underline{m}\pr{x,V}}} \backslash B\pr{x, \frac{2^{j-1}}{\underline{m}\pr{x,V}}}$ for $j \in \ensuremath{\mathbb{N}}$.
Then, assuming the conditions of Theorem \ref{UppBoundThm}, we may argue as in Lemma \ref{localBoundedIntegrals} to show that $u(x) \lesssim \underline{m}\pr{x,V}^{-2}$ for all $x \in \ensuremath{\R^n}$.
On the other hand, for all $x \in \ensuremath{\R^n}$, Remark \ref{differentLowerBounds} tells us that (under appropriate conditions)
$$u(x) \geq \int_{B\pr{x, \frac{1}{\overline{m}\pr{x,V}}} } \abs{\Gamma^V(x, y)} \, dy \gtrsim \int_{B\pr{x, \frac{1}{\overline{m}\pr{x,V}}} } \frac{e^{-\varepsilon \overline{d}(x, y, V)}}{|x-y|^{n-2}} \, dy \gtrsim \overline{m}\pr{x,V}^{-2}.$$
As mentioned in the introduction, this connection was previously found in \cite{Po21} for scalar elliptic operators $\MC{L}_v$ with a nonnegative scalar potential $v$ on $\ensuremath{\R^n}$.
In the scalar setting, it holds that $\overline{m}\pr{x,v} = \underline{m}\pr{x,v}$ for all $x \in \ensuremath{\R^n}$.
If we denote this common function by $m(\cdot, v)$, it follows that $u(x) \simeq m(x, v)^{-2}$ for all $x \in \ensuremath{\R^n}$.
Moreover, since the fundamental solution of such an operator is positive, we see that $u$ satisfies $\MC{L}_v u = 1$, which means that $u$ inherits desirable qualities that are not satisfied by $m(\cdot, v)$.
We refer the reader to Theorems $1.18$ and $ 1.31$ in \cite{Po21} for additional details.
\end{rem}
\begin{appendix}
\section{The Noncommutativity Condition}
\label{Examples}
In this section, we further motivate the $\MC{NC}$ condition that was introduced in Section \ref{MWeights}.
To do this, we show that the set of matrix weights $\pr{{\MC{B}_p} \cap \MC{ND}} \setminus \MC{NC}$ is nonempty and that there is a matrix function in this space that fails to satisfy the Fefferman-Phong inequality described by Lemma \ref{FPml}.
As such, we hope to convince our reader that the additional assumption $V \in \MC{NC}$ is justified for our purposes.
We explore the properties of the matrix function that was introduced in Example \ref{notNCEx}.
With $x = \pr{x_1, \ldots, x_n} \in \ensuremath{\mathbb{R}}^n$, and $\abs{x} = \sqrt{x_1^2 + \ldots + x_n^2} \ge 0$, recall that $V : \ensuremath{\mathbb{R}}^n \to \ensuremath{\mathbb{R}}^{2 \times 2}$ is defined as
\begin{equation}
\label{VExDef}
V(x) = \begin{bmatrix}1 & \abs{x}^2 \\ \abs{x}^2 & \abs{x}^4 \end{bmatrix} = \begin{bmatrix}1 & x_1^2 + \ldots + x_n^2 \\ x_1^2 + \ldots + x_n^2 & \pr{x_1^2 + \ldots + x_n^2}^2 \end{bmatrix}.
\end{equation}
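For later reference, observe that $V$ admits the rank-one factorization
$$V(x) = \V{v}(x) \V{v}(x)^T, \quad \text{where } \V{v}(x) = \brac{1, \abs{x}^2}^T,$$
so that for each $x \in \ensuremath{\mathbb{R}}^n$, the matrix $V(x)$ has eigenvalues $0$ and $1 + \abs{x}^4$.
This observation streamlines several of the computations below.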
We begin with a result regarding polynomial matrices.
\begin{prop}
\label{ExampleProp}
Let $V:\ensuremath{\mathbb{R}}^n \rightarrow \ensuremath{\mathbb{R}}^{d \times d}$ be any $d \times d$ matrix with polynomial entries.
Then $V^*V \in {\MC{B}_p}$ for every $p > 1$.
\end{prop}
\begin{proof}
First, if $P : \ensuremath{\mathbb{R}}^n \rightarrow \ensuremath{\mathbb{R}}_{\ge 0}$ is any nonnegative polynomial, then there exists $C > 0$, depending only on $n$ and the degree of $P$, so that for any cube $Q$
\begin{equation*}
\fint_Q |P(x)| \, dx \leq \sup_{x \in Q} |P(x)| \leq C \fint_Q |P(x)| \, dx.
\end{equation*}
The proof is from \cite{Fef83}: by the equivalence of norms on any finite-dimensional vector space, the above estimate is trivial when $Q$ is the unit cube centered at $0$.
The general case follows from dilation and translation.
Therefore, for any $p > 1$, $P$ is a scalar ${\text{B}_p}$ function with a ${\text{B}_p}$ constant that depends only on $n$ and the degree of $P$.
Let $k = \max\set{\deg \pr{V_{ij}}}_{i, j=1}^d$ and observe that for any $\V{e} \in \ensuremath{\R^d}$, $P(x; \V{e}) = \innp{V^* (x) V(x) \V{e}, \V{e}} $ is a nonnegative polynomial of degree at most $2k$.
By the conclusion of the previous paragraph, for any $p > 1$, $P(x; \V{e})$ belongs to ${\text{B}_p}$ with a constant depending only on $n$ and the degree of $P(x; \V{e})$.
In particular, for any $p > 1$ and any $\V{e} \in \ensuremath{\mathbb{R}}^d$, $P(x; \V{e})$ belongs to ${\text{B}_p}$ with a constant that is independent of $\V{e}$.
Since $V^*V$ is symmetric and positive semidefinite, then we conclude that $V^* V$ belongs to ${\MC{B}_p}$.
\end{proof}
We immediately have the following.
\begin{cor}
\label{ExampleCor}
Let $V:\ensuremath{\mathbb{R}}^n \rightarrow \ensuremath{\mathbb{R}}^{d \times d}$ be any $d \times d$, symmetric positive semidefinite matrix with polynomial entries.
Then $V \in {\MC{B}_p}$ for every $p > 1$.
\end{cor}
It follows that for $V$ as defined in \eqref{VExDef}, $V \in {\MC{B}_p}$.
Since $V$ also satisfies \eqref{NDCond}, then $V \in \MC{ND}$.
Now we show that $V$ is an example of a polynomial matrix weight in ${\MC{B}_p} \cap \MC{ND}$ that does not satisfy our noncommutativity condition.
\begin{lem}[$\MC{NC}$ is a proper subset of ${\MC{B}_p} \cap \MC{ND}$]
\label{notNC}
For $V$ as defined in \eqref{VExDef}, $V \notin \MC{NC}$.
\end{lem}
\begin{proof}
A computation shows that
$$V^{\frac 1 2}(x) = \frac 1 {\sqrt{1 + \abs{x}^4}}\brac{\begin{array}{ll} 1 & \abs{x}^2 \\ \abs{x}^2 & \abs{x}^4 \end{array}}.$$
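One can check this directly: since $V(x)$ is rank one with trace $1 + \abs{x}^4$, it satisfies $V(x)^2 = \pr{1 + \abs{x}^4} V(x)$, so that
$$\pr{\frac{V(x)}{\sqrt{1 + \abs{x}^4}}}^2 = \frac{V(x)^2}{1 + \abs{x}^4} = V(x).$$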
For any $x \in \ensuremath{\mathbb{R}}^n$,
\begin{align*}
\Psi\pr{x, r; V}
&= \frac 1 {r^{n-2}} \int_{Q(x, r)} V(y) dy
= \frac {1} {r^{n-2}} \int_{x_n - r}^{x_n+r} \ldots \int_{x_1 - r}^{x_1+r} \brac{\begin{array}{cc} 1 & y_1^2 + \ldots + y_n^2 \\ y_1^2 + \ldots + y_n^2 & \pr{y_1^2 + \ldots + y_n^2}^2 \end{array}} dy_1 \ldots dy_n.
\end{align*}
Computing, we have
\begin{align*}
\int_{Q(x, r)} \pr{y_1^2 + \ldots + y_n^2} dy
=& \sum_{j=1}^n \int_{x_n - r}^{x_n+r} \ldots \int_{x_1 - r}^{x_1+r} y_j^2 \, dy_1 \ldots dy_n
= \sum_{j=1}^n \pr{2r}^{n-1} \int_{x_j - r}^{x_j+r} y_j^2 \, dy_j \\
=& \sum_{j=1}^n \pr{2r}^{n-1} \pr{2 x_j^2 r + \frac 2 3 r^3}
= \pr{2r}^{n} \pr{\abs{x}^2 + \frac {n}3 r^2}.
\end{align*}
Define $\hat y_j = \left\{ \begin{array}{ll} \pr{y_2, \ldots, y_n} & j = 1 \\ \pr{y_1, \ldots, y_{j-1}, y_{j+1}, \ldots, y_n} & j = 2, \ldots, n-1 \\ \pr{y_1, \ldots, y_{n-1}} & j = n \end{array}\right. \in \ensuremath{\mathbb{R}}^{n-1}$ and let $Q_j = Q\pr{\hat x_j, r} \subset \ensuremath{\mathbb{R}}^{n-1}$.
Then we have
\begin{align*}
\int_{Q(x, r)} \pr{y_1^2 + \ldots + y_n^2}^2 dy
=& \sum_{j=1}^n \int_{x_n - r}^{x_n+r} \cdots \int_{x_1 - r}^{x_1+r} y_j^2 \pr{y_1^2 + \ldots + y_n^2} dy_1 \ldots dy_n \\
=& \sum_{j=1}^n \int_{x_n - r}^{x_n+r} \cdots \int_{x_1 - r}^{x_1+r} y_j^4 dy_1 \ldots dy_n
+ \sum_{j=1}^n \int_{x_n - r}^{x_n+r} \cdots \int_{x_1 - r}^{x_1+r} y_j^2 \abs{\hat y_j}^2 dy_1 \ldots dy_n \\
=& \sum_{j=1}^n \pr{2r}^{n-1} \int_{x_j - r}^{x_j+r} y_j^4 \, dy_j
+ \sum_{j=1}^n \pr{\int_{x_j - r}^{x_j+r} y_j^2 \, dy_j} \brac{\int_{Q_j} \abs{\hat y_j}^2 d\hat y_j} \\
=& \sum_{j=1}^n \pr{2r}^{n} \pr{x_j^4 + 2 x_j^2 r^2 + \frac 1 5 r^4}
+ \sum_{j=1}^n \pr{2r}^{n} \pr{x_j^2 + \frac 1 3 r^2} \pr{\abs{\hat x_j}^2 + \frac {n-1}3 r^2} \\
=& 2^n r^n \abs{x}^4
+ 2^{n+1} r^{n+2} \frac {n+ 2}3 \abs{x}^2
+ 2^n r^{n+4} \frac{\pr{5n+4}n}{45}.
\end{align*}
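In collecting terms for the final equality, we have used that $\abs{\hat x_j}^2 = \abs{x}^2 - x_j^2$, so that
$$\sum_{j=1}^n \pr{x_j^4 + x_j^2 \abs{\hat x_j}^2} = \abs{x}^4
\quad \text{and} \quad
\sum_{j=1}^n \abs{\hat x_j}^2 = \pr{n-1} \abs{x}^2.$$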
Therefore,
\begin{align*}
\Psi\pr{x, r; V}
&= \frac 1 {r^{n-2}} \int_{Q(x, r)} V(y) dy
= 2^n r^2 \brac{\begin{array}{cc} 1 & \abs{x}^2 + \frac {n}3 r^2 \\ \abs{x}^2 + \frac {n}3 r^2 & \abs{x}^4 + 2 \frac {n+ 2}3 \abs{x}^2 r^{2} + \frac{\pr{5n+4}n}{45}r^{4} \end{array}}.
\end{align*}
The characteristic polynomial of the inner matrix is
\begin{align*}
& \lambda^2 - \lambda \pr{1 + \abs{x}^4 + \frac {2n+ 4}3 \abs{x}^2 r^{2} + \frac{5n^2+4n}{45}r^{4}}
+ \frac {4}3 \abs{x}^2 r^{2} + \frac{4n}{45}r^{4}
\end{align*}
so its eigenvalues are
\begin{align*}
\frac{\pr{1 + \abs{x}^4 + \frac {2n+ 4}3 \abs{x}^2 r^{2} + \frac{5n^2+4n}{45}r^{4}} \pm \sqrt{\pr{1 + \abs{x}^4 + \frac {2n+ 4}3 \abs{x}^2 r^{2} + \frac{5n^2+4n}{45}r^{4}}^2 - 4\pr{\frac {4}3 \abs{x}^2 r^{2} + \frac{4n}{45}r^{4}} }}{2}.
\end{align*}
We choose $\underline{r} = \underline{r}(x)$ so that the smallest eigenvalue of $\Psi\pr{x, \underline{r}; V}$ equals $1$, and hence $\Psi\pr{x, \underline{r}; V} \ge I$.
That is,
$$2^{n-1} \underline{r}^2 \pr{1 + \abs{x}^4 + \frac {2n+ 4}3 \abs{x}^2 \underline{r}^{2} + \frac{5n^2+4n}{45}\underline{r}^{4}} \brac{1 - \sqrt{1 - 4\frac{\frac {4}3 \abs{x}^2 \underline{r}^{2} + \frac{4n}{45}\underline{r}^{4}}{\pr{1 + \abs{x}^4 + \frac {2n+ 4}3 \abs{x}^2 \underline{r}^{2} + \frac{5n^2+4n}{45}\underline{r}^{4}}^2} }} = 1.$$
When $\abs{x} \gg 1$, we can perform a Taylor expansion of the root term, and we then have to solve
$$2^{n} \underline{r}^2 \pr{\frac {4}3 \abs{x}^2 \underline{r}^{2} + \frac{4n}{45}\underline{r}^{4}} \approx \pr{1 + \abs{x}^4 + \frac {2n+ 4}3 \abs{x}^2 \underline{r}^{2} + \frac{5n^2+4n}{45}\underline{r}^{4}} $$
or
$$\abs{x}^4 - \pr{\frac {2^{n+2}}3 \underline{r}^2 - \frac {2n+ 4}3} \underline{r}^{2} \abs{x}^2 + \frac{5n^2+4n}{45}\underline{r}^{4} - \frac{2^{n+2}n}{45}\underline{r}^{6} + 1 \approx 0.$$
We see that a solution is given by
\begin{align*}
\abs{x}^2 &\approx \frac {2^{n+1}}3 \underline{r}^4 - \frac {n+ 2}3 \underline{r}^{2} + \sqrt{\pr{\frac {2^{n+1}}3 \underline{r}^4 - \frac {n+ 2}3 \underline{r}^{2}}^2 + \pr{\frac{2^{n+2}n}{45}\underline{r}^{6} - \frac{5n^2+4n}{45}\underline{r}^{4} - 1}}.
\end{align*}
Since $\abs{x}^2 \approx \frac{2^{n+2}}{3} \underline{r}^4$ to leading order, we see that $\underline{r}(x) \simeq \abs{x}^{1/2}$ when $\abs{x} \gg 1$; in particular, $\underline{r} \to \infty$ as $\abs{x} \to \infty$.
Therefore, we may construct an increasing sequence $\set{R_m}_{m=1}^\infty$ such that whenever $\abs{x_m} = R_m$, $\underline{r}(x_m)^2 = m$.
That is,
\begin{align*}
& 2^{n-1} m \pr{1 + \abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 m + \frac{5n^2+4n}{45}m^2} \brac{1 - \sqrt{1 - \tfrac{4 \pr{\frac {4}3 \abs{x_m}^2 m + \frac{4n}{45}m^2}}{\pr{1 + \abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 m + \frac{5n^2+4n}{45}m^2}^2} }} = 1
\end{align*}
so that
\begin{align*}
\abs{x_m}^4 - \pr{ \frac {2^{n+2}}3 m^2 - \frac {2n+ 4}3 m}\abs{x_m}^2 - \pr{\frac{2^{n+2}n}{45}m^3 - \frac{5n^2+4n}{45}m^2 - 1}
&\approx 0
\end{align*}
and then
\begin{align*}
\abs{x_m}^2 & \approx \pr{\frac {2^{n+1}}3 m^2 - \frac {n+ 2}3 m} + \sqrt{\pr{\frac {2^{n+1}}3 m^2 - \frac {n+ 2}3 m}^2 + \pr{\frac{2^{n+2}n}{45}m^3 - \frac{5n^2+4n}{45}m^2 - 1}}.
\end{align*}
Therefore, $\abs{x_m} = c_m m$, where $\set{c_m}_{m=1}^\infty$ is bounded.
With $r_m = \underline{r}(x_m)$, we have $r_m = \sqrt{m}$.
In particular, $\frac{r_m}{\abs{x_m}} \to 0$ as $m \to \infty$.
Now we set $Q_m = Q(x_m, r_m)$ and calculate
\begin{align*}
V(Q_m)
&= \int_{Q_m} V(y) dy
= \pr{2 r_m}^n \brac{\begin{array}{cc} 1 & \abs{x_m}^2 + \frac {n}3 r_m^2 \\ \abs{x_m}^2 + \frac {n}3 r_m^2 & \abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 r_m^{2} + \frac{\pr{5n+4}n}{45} r_m^{4} \end{array}}
\end{align*}
so that
\begin{align*}
V(Q_m)^{-1}
&= \frac{3 r_m^{-n-2}}{2^{n+2}\pr{\abs{x_m}^2 + \frac{n}{15} r_m^2}} \brac{\begin{array}{cc} \abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 r_m^{2} + \frac{\pr{5n+4}n}{45} r_m^{4} & -\pr{\abs{x_m}^2 + \frac {n}3 r_m^2} \\ -\pr{\abs{x_m}^2 + \frac {n}3 r_m^2} & 1 \end{array}}.
\end{align*}
Then
\begin{align*}
& \frac{2^{n+2}}{3} r_m^{n+2} \pr{\abs{x_m}^2 + \frac{n}{15} r_m^2}\pr{1 + \abs{y}^4} V(y)^{\frac 1 2} V(Q_m)^{-1} V(y)^{\frac 1 2} \\
=& \brac{\begin{array}{ll} 1 & \abs{y}^2 \\ \abs{y}^2 & \abs{y}^4 \end{array}}
\brac{\begin{array}{cc} \abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 r_m^{2} + \frac{\pr{5n+4}n}{45} r_m^{4} & -\pr{\abs{x_m}^2 + \frac {n}3 r_m^2} \\ -\pr{\abs{x_m}^2 + \frac {n}3 r_m^2} & 1 \end{array}}
\brac{\begin{array}{ll} 1 & \abs{y}^2 \\ \abs{y}^2 & \abs{y}^4 \end{array}}
\\
=& \pr{\abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 r_m^{2} + \frac{\pr{5n+4}n}{45} r_m^{4} - 2 \pr{\abs{x_m}^2 + \frac {n}3 r_m^2} \abs{y}^2 + \abs{y}^4} \brac{\begin{array}{ll} 1 & \abs{y}^2 \\ \abs{y}^2 & \abs{y}^4 \end{array}}
\end{align*}
where we have used the rank-one factorization $V(y) = \V{v}(y) \V{v}(y)^T$ noted above, so that $V(y) N V(y) = \innp{N \V{v}(y), \V{v}(y)} V(y)$ for any $2 \times 2$ matrix $N$.
We then see that
\begin{align*}
& \frac{2^{n+2}}{3} r_m^{n+2} \pr{\abs{x_m}^2 + \frac{n}{15} r_m^2} \innp{V(y)^{\frac 1 2} V(Q_m)^{-1} V(y)^{\frac 1 2} \V{e}_1, \V{e}_1} \\
=& \frac{\abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 r_m^{2} + \frac{\pr{5n+4}n}{45} r_m^{4} - 2 \pr{\abs{x_m}^2 + \frac {n}3 r_m^2} \abs{y}^2 + \abs{y}^4}{1 + \abs{y}^4}.
\end{align*}
Therefore,
\begin{align*}
&\frac{4 r_m^{2}}{3} \pr{\abs{x_m}^2 + \frac{n}{15} r_m^2} \int_{Q_m} \innp{V(y)^{\frac 1 2} V(Q_m)^{-1} V(y)^{\frac 1 2} \V{e}_1, \V{e}_1} dy \\
=& \pr{\abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 r_m^{2} + \frac{\pr{5n+4}n}{45} r_m^{4} - 1}\fint_{Q_m} \frac{1}{1 + \abs{y}^4} dy
- 2 \pr{\abs{x_m}^2 + \frac {n}3 r_m^2} \fint_{Q_m} \frac{\abs{y}^2}{1 + \abs{y}^4} dy
+ 1
\\
\le& \pr{\abs{x_m}^4 + \frac {2n+ 4}3 \abs{x_m}^2 r_m^{2} + \frac{\pr{5n+4}n}{45} r_m^{4} - 1} \frac{1}{1 + \pr{\abs{x_m} - r_m}^4}
+ 2 \pr{\abs{x_m}^2 + \frac {n}3 r_m^2} \frac{\pr{\abs{x_m} + r_m}^2}{1 + \pr{\abs{x_m} - r_m}^4}
+ 1 \\
=& \frac{4 + \pr{\frac {4n+ 28}3} \pr{\frac{r_m}{\abs{x_m}}}^{2} + \pr{\frac{4n-12}{3}} \pr{\frac{r_m}{\abs{x_m}}}^3 + \brac{\frac{\pr{5n+34}n}{45}+ 1} \pr{\frac{r_m}{\abs{x_m}}}^{4} }{1 - 4 \pr{\frac{r_m}{\abs{x_m}}} + 6 \pr{\frac{r_m}{\abs{x_m}}}^2 - 4 \pr{\frac{r_m}{\abs{x_m}}}^3 + \pr{\frac{r_m}{\abs{x_m}}}^4 + \abs{x_m}^{-4}}.
\end{align*}
It follows that
\begin{align*}
&\lim_{m \to \infty}\int_{Q_m} \innp{V(y)^{\frac 1 2} V(Q_m)^{-1} V(y)^{\frac 1 2} \V{e}_1, \V{e}_1} dy \\
\le& \lim_{m \to \infty} \frac{3}{4 r_m^{2}\pr{\abs{x_m}^2 + \frac{n}{15} r_m^2}} \pr{ \frac{4 + \pr{\frac {4n+ 28}3} \pr{\frac{r_m}{\abs{x_m}}}^{2} + \pr{\frac{4n-12}{3}} \pr{\frac{r_m}{\abs{x_m}}}^3 + \brac{\frac{\pr{5n+34}n}{45}+ 1} \pr{\frac{r_m}{\abs{x_m}}}^{4} }{1 - 4 \pr{\frac{r_m}{\abs{x_m}}} + 6 \pr{\frac{r_m}{\abs{x_m}}}^2 - 4 \pr{\frac{r_m}{\abs{x_m}}}^3 + \pr{\frac{r_m}{\abs{x_m}}}^4 + \abs{x_m}^{-4}}} \\
=& 0.
\end{align*}
In particular, there is no constant $c > 0$ so that for all cubes $Q = Q(x, \frac 1 {\underline{m}(x, V)})$ and all $\V{e} \in \ensuremath{\mathbb{R}}^d$,
$$\int_{Q} \innp{V(y)^{\frac 1 2} V(Q)^{-1} V(y)^{\frac 1 2} \V{e}, \V{e}} dy \ge c \abs{\V{e}}^2,$$
showing that this choice of $V \in {\MC{B}_p} \cap \MC{ND}$ does not belong to $\MC{NC}$.
\end{proof}
Next, we show that this choice of $V$ violates the Fefferman-Phong inequality described by Lemma \ref{FPml}.
\begin{lem}[Failure of the Fefferman-Phong inequality]
For $V$ as defined in \eqref{VExDef}, there is no choice of constant $C$ so that for every $\V{u} \in C^1_0\pr{\ensuremath{\mathbb{R}}^n}$, it holds that
\begin{align}
\label{FPInq}
\int_{\ensuremath{\mathbb{R}}^n} \underline{m}\pr{x, V}^2 \abs{\vec{u}}^2
\le C \pr{\int_{\ensuremath{\mathbb{R}}^n} \abs{D\vec{u}}^2 + \int_{\ensuremath{\mathbb{R}}^n} \innp{V \vec{u}, \vec{u}}}.
\end{align}
\end{lem}
\begin{proof}
We will construct a sequence $\set{\V{u}_R} \subset C^1_0\pr{\ensuremath{\mathbb{R}}^n}$ that violates this inequality as $R \to \infty$.
With $\displaystyle \vec{u} = \brac{- \abs{x}^2, 1}^T$, we see that $V \vec{u} = \vec{0}$.
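Indeed,
$$V \vec{u} = \brac{\begin{array}{ll} 1 & \abs{x}^2 \\ \abs{x}^2 & \abs{x}^4 \end{array}} \brac{\begin{array}{c} - \abs{x}^2 \\ 1 \end{array}}
= \brac{\begin{array}{c} - \abs{x}^2 + \abs{x}^2 \\ - \abs{x}^4 + \abs{x}^4 \end{array}} = \vec{0}.$$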
For any $R \gg 1$, define $\xi_R \in C^\infty_0\pr{\ensuremath{\mathbb{R}}^n}$ so that $\xi_R \equiv 1$ when $x \in B_{2R} \setminus B_R$ and $\supp \xi_R \subset B_{3R} \setminus B_{R/2}$.
In particular, $\supp \nabla \xi_R \subset \pr{B_{3R} \setminus B_{2R}} \cup \pr{B_R \setminus B_{R/2}}$ with $\abs{\nabla \xi_R} \lesssim \frac 1 R$.
Then if we define $\vec{u}_R = \vec{u} \xi_R$, we see that $\vec{u}_R \in C^1_0\pr{\ensuremath{\mathbb{R}}^n}$ and for any choice of $R > 0$, $V \vec{u}_R = \vec{0}$.
In particular,
$$\int_{\ensuremath{\mathbb{R}}^n} \innp{V \vec{u}_R, \vec{u}_R} = 0.$$
Now
\begin{align*}
D\vec{u}_R
= \brac{\begin{array}{c} - 2 \vec{x} \xi_R \\ 0\end{array}}
+ \brac{\begin{array}{c} - \abs{x}^2 \nabla \xi_R \\ \nabla \xi_R \end{array}}
\end{align*}
so that
\begin{align*}
\int_{\ensuremath{\mathbb{R}}^n} \abs{D\vec{u}_R}^2
&\lesssim \int_{B_{3R} \setminus B_{R/2}} 4 \abs{x}^2
+ \int_{\pr{B_{3R} \setminus B_{2R}} \cup \pr{B_R \setminus B_{R/2}}} \pr{\abs{x}^4 +1}\abs{\nabla \xi_R}^2
\lesssim \abs{B_{3R}} R^2 \simeq R^{n+2}
\end{align*}
and then
\begin{equation}
\label{RHS}
\int_{\ensuremath{\mathbb{R}}^n} \abs{D\vec{u}_R}^2 + \int_{\ensuremath{\mathbb{R}}^n} \innp{V \vec{u}_R, \vec{u}_R}
\lesssim_{(d, n)} R^{n+2}.
\end{equation}
Recall from the proof of Lemma \ref{notNC} that there is a bounded sequence $\set{c_m}_{m=1}^\infty \subset \ensuremath{\mathbb{R}}$ so that if $\abs{x_m} = c_m m$ for all $m \in \ensuremath{\mathbb{N}}$, then $\underline{r}(x_m) = \sqrt{m}$.
In other words, $\underline{r}(x_m) = \sqrt{\frac{\abs{x_m}}{c_m}}$.
Since $\underline{r}(x) = \frac{1}{\underline{m}(x)}$, then we conclude that $\underline{m}(x) \simeq \sqrt{ \frac{1}{\abs{x}}}$ whenever $\abs{x} \gg 1$.
Thus, we see that
\begin{align}
\label{LHS}
\int_{\ensuremath{\mathbb{R}}^n} \underline{m}\pr{x, V}^2 \abs{\vec{u}_R}^2
\ge \int_{B_{2R} \setminus B_R} \underline{m}\pr{x, V}^2 \abs{\vec{u}_R}^2
\gtrsim \int_{B_{2R} \setminus B_R} \underline{m}\pr{R, V}^2 R^4
\simeq R^{n+3}.
\end{align}
If \eqref{FPInq} were to hold, then there would be a $C > 0$ so that $R^{n+3} \le C R^{n+2}$ for all $R \gg 1$.
As this is impossible, the proof is complete.
\end{proof}
\section{The ${\MC{A}_{2,\iny}}$, ${\MC{A}_\iny}$, $\RBM$, and ${\MC{B}_p}$ Classes of Matrices.}
\label{AiApp}
The goal of this appendix is to provide precise and concrete connections between the classes of matrix weights that were introduced in Section \ref{MWeights}: ${\MC{A}_{2,\iny}}$ (and more generally ${\MC{A}_{p,\iny}}$, which will be defined momentarily), $\RBM$, ${\MC{A}_\iny},$ and ${\MC{B}_p}$.
Further, we will make our presentation almost entirely self-contained for the reader who is unfamiliar with the theory of ${\MC{A}_{p,\iny}}$ matrix weights.
Throughout this section, unless otherwise stated, we assume that $1 < p < \infty$.
Let $V$ be a complex-valued matrix weight defined on $\ensuremath{\R^n}$; that is, $V$ is a Hermitian positive semidefinite $d \times d$ matrix function with $\abs{V} \in L_{\T{loc}}^1 (\ensuremath{\R^n})$.
Note that in the body of this paper, we assume that $V$ is real-valued and symmetric.
As pointed out in the introduction, for our purposes, there is no loss of generality in replacing ``complex Hermitian" with ``real symmetric".
However, here within the appendix, we follow the standard convention in matrix weight theory and work with complex Hermitian matrix weights.
It should be noted that a matrix weight $V$ is (unless otherwise stated) not necessarily positive definite a.e. on $\ensuremath{\R^n}$.
We begin by proving some useful lemmas regarding matrix weights.
First we have what is known as the ``matrix Jensen's inequality" from \cite{NT96, Vol97}.
For the sake of completeness, we include the proof.
\begin{lem}[Matrix Jensen inequality]
\label{MatrixJensen}
If $V$ is a positive definite matrix weight, then for any cube $Q \subset \ensuremath{\mathbb{R}}^n$,
\begin{equation*}
\det \fint_Q V(x) \, dx \geq \exp \pr{\fint_Q \ln \det V (x) \, dx}.
\end{equation*}
\end{lem}
\begin{proof}
Let $A$ be a matrix with $|\det A| = 1$.
If $W$ is a Hermitian, positive definite $d \times d$ matrix, then the classical arithmetic mean-geometric mean inequality shows that
\begin{equation*}
(\det W)^\frac{1}{d} = \brac{\det (A ^* W A)}^\frac{1}{d} \leq \frac{1}{d} \tr (A ^* W A),
\end{equation*}
with equality when $A = (\det W)^\frac{1}{2d} W^{-\frac12}$.
In particular, when $W$ is positive definite,
\begin{equation*}
(\det W)^\frac{1}{d} = \inf \set{\frac{1}{d} \tr (A ^* W A) : |\det A| = 1}.
\end{equation*}
Thus, we have
\begin{equation}
\label{DetConvexIneq}
\begin{aligned}
\pr{\det \fint_Q V(x) \, dx }^\frac{1}{d}
&= \inf_{|\det A| = 1} \frac{1}{d} \tr A ^* \pr{\fint_Q V(x) \, dx} A
= \inf_{|\det A| = 1} \frac{1}{d} \tr \fint_Q A ^* V(x) A \, dx \\
&\geq \fint_Q \pr{ \inf_{|\det A| = 1} \frac{1}{d} \tr A ^* V(x) A } \, dx
= \fint_Q \brac{\det V(x)}^\frac{1}{d} \, dx.
\end{aligned}
\end{equation}
Combining \eqref{DetConvexIneq} with Jensen's inequality then gives us
$$\pr{\det \fint_Q V(x) \, dx }^\frac{1}{d} \geq \exp \pr{\fint_Q \ln \brac{\det V(x)}^\frac{1}{d} \, dx},$$
which completes the proof.
\end{proof}
Next we state and prove a well-known result that is sometimes called the ``Hadamard determinant inequality".
\begin{lem}[Determinant lemma]
\label{DetProp}
If $A$ is a positive semidefinite Hermitian matrix and $\{\V{e}_j\}_{j=1}^d$ is any orthonormal basis of $\ensuremath{\mathbb{C}}^d$, then $\displaystyle \det A \leq \prod_{j = 1}^d \innp{A \V{e}_j, \V{e}_j} \leq \prod_{j = 1}^d \abs{A \V{e}_j}$.
\end{lem}
\begin{proof}
The proof that we present is from \cite{Bow01}.
The second inequality follows immediately from the first, which we now prove.
Let $A$ have eigenvalues $\{\lambda_j\}_{j=1}^d$ with a corresponding orthonormal basis of eigenvectors $\{\V{f}_j\}_{j=1}^d$ so that
$$\innp{A \V{e}_i, \V{e}_i} = \sum_{ j = 1}^d \lambda_j \abs{\innp{\V{e}_i, \V{f}_j}}^2 = \sum_{ j = 1}^d \lambda_j (f_j ^i)^2,$$ where we have set $f_j ^i = \abs{\innp{\V{e}_i, \V{f}_j}}$.
Using the weighted arithmetic-geometric mean inequality, we have that
\begin{align*}
\det A & = \prod_{j = 1}^d \lambda_j
= \prod_{j = 1}^d \lambda_j ^{\sum_{i = 1}^d (f_j ^i)^2}
= \prod_{i = 1}^d \pr{ \prod_{j = 1}^d \lambda_j ^{(f_j ^i)^2}}
\leq \prod_{i = 1}^d \pr {\sum_{j = 1}^d \lambda_j (f_j ^i)^2 }
= \prod_{i = 1}^d \innp{A \V{e}_i, \V{e}_i} ,
\end{align*}
as required.
\end{proof}
\begin{defn}[$p$-nondegenerate]
We say that a matrix weight $V$ is {\bf $p$-nondegenerate} if for every $\V{e} \in \ensuremath{\C^d}$, it holds that
\begin{equation*}
\label{pNonDeg}
\abs{V^\frac{1}{p} (x) \V{e}} > 0 \quad \text{a.e. on} \;\; \ensuremath{\R^n}.
\end{equation*}
\end{defn}
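For example, any matrix weight that is positive definite a.e. on $\ensuremath{\R^n}$ is $p$-nondegenerate for every $p$.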
In the setting where $V$ is $p$-nondegenerate, for any cube $Q$, the map $\displaystyle \V{e} \mapsto \pr{\fint_Q \abs{V^\frac{1}{p} (x) \V{e}} ^p \, dx}^\frac{1}{p}$ defines a norm on $\ensuremath{\C^d}$.
Thus, the John ellipsoid theorem implies the existence of a ``reducing matrix", defined as follows.
\begin{defn}[Reducing matrix]
If $V$ is a $p$-nondegenerate matrix weight, then for every cube $Q \subset \ensuremath{\mathbb{R}}^n$, there exists a positive definite, Hermitian $d \times d$ matrix $R_Q ^p (V)$, called a {\bf reducing matrix}.
This matrix $R_Q ^p (V)$ has the property that for any $\vec e \in \ensuremath{\C^d}$,
\begin{equation}
\label{reducingDef}
\pr{\fint_Q \abs{V^{\frac 1 p}(x) \vec e}^p dx}^{\frac 1 p} \leq \abs{R_Q^p(V) \vec e}
\leq \sqrt{d} \pr{\fint_Q \abs{V^{\frac 1 p}(x) \vec e}^p dx}^{\frac 1 p}.
\end{equation}
\end{defn}
\noindent
See \cite[p.~79]{NT96} for a proof with the same lower bound and a slightly worse upper bound of $d$.
The reducing matrix need not be unique, but the choice of $R_Q ^p (V)$ is insignificant.
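For instance, in the scalar setting $d = 1$, we may simply take $R_Q ^p (v) = \pr{\fint_Q v(x) \, dx}^{\frac 1 p}$, since for any $e \in \ensuremath{\mathbb{C}}$,
$$\pr{\fint_Q \abs{v^{\frac 1 p}(x) e}^p \, dx}^{\frac 1 p} = \abs{e} \pr{\fint_Q v(x) \, dx}^{\frac 1 p}.$$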
Note that if $p = 2$, then a computation shows that
\begin{equation*}
\abs{\pr{\fint_Q V(y) dy}^\frac12 \V{e}}^2
= \fint_Q \abs{V^\frac12 (x) \V{e}}^2 \, dx.
\end{equation*}
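Indeed, both sides are equal to $\innp{\pr{\fint_Q V(x) \, dx} \V{e}, \V{e}}$, since $\abs{V^{\frac 1 2}(x) \V{e}}^2 = \innp{V(x) \V{e}, \V{e}}$ and averaging commutes with the inner product.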
That is, if $p = 2$, then $\displaystyle \pr{\fint_Q V}^\frac12$ is a reducing matrix for $V$.
Also, observe that
\begin{equation}
\label{reducingpCons}
\begin{aligned}
\pr{\fint_Q \innp{V(x) \vec e, \V{e}}^{p} dx}^{\frac 1{2p}}
& = \pr{\fint_Q \abs{V^{\frac 1 2}(x) \vec e}^{2p} dx}^{\frac 1{2p}}
\leq \abs{R_Q^{2p} (V^p) \vec e} \\
& \leq \sqrt{d} \pr{\fint_Q \abs{V^{\frac 1 2}(x) \vec e}^{2p} dx}^{\frac 1{2p}}
= \sqrt d \pr{\fint_Q \innp{V(x)\vec e, \V{e}}^{p} dx}^{\frac 1{2p}},
\end{aligned}
\end{equation}
showing that $R_Q^{2p} (V^p)$ is a reducing matrix for the norm
$\displaystyle \V{e} \mapsto \pr{\fint_Q \innp{V(x)\vec e, \V{e}}^{p} dx}^{\frac 1{2p}}$.
We now introduce the ${\MC{A}_{p,\iny}}$ class from \cite{NT96, Vol97}.
In contrast to the scalar setting where there is a single class of ${\text{A}_\iny}$ weights, in the matrix setting, there is such a class for each $p$.
\begin{defn}[${\MC{A}_{p,\iny}}$]
We say that $V \in {\MC{A}_{p,\iny}}$\, if $V$ is a $p$-nondegenerate matrix weight and there exists a constant $A_{V} = A_{V,p} > 0$ so that for every cube $Q \subset \ensuremath{\mathbb{R}}^n$, it holds that
\begin{equation}
\label{Apinfone}
\det R_Q ^p (V) \le A_{V} \exp \pr{ \fint_Q \ln \det V^\frac{1}{p} (x) \, dx}.
\end{equation}
\end{defn}
For example, when $p = 2$, it follows from the observation above that $V \in {\MC{A}_{2,\iny}}$ if there exists a constant $A_V > 0$ so that for every cube $Q \subset \ensuremath{\mathbb{R}}^n$, we have
\begin{equation}
\label{Apinfone2}
\det \fint_Q V \le A_V \exp \pr{ \fint_Q \ln \det V (x) \, dx}.
\end{equation}
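When $d = 1$, the condition \eqref{Apinfone2} is precisely the reverse Jensen characterization of scalar ${\text{A}_\iny}$ weights (see \cite[Theorem 7.3.3]{Gra14}), so ${\MC{A}_{2,\iny}}$ may be viewed as a determinant-based matrix analogue of the scalar class.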
We now prove that a matrix weight $V \in {\MC{A}_{p,\iny}}$ is in fact positive definite a.e., which appears to be previously unknown.
\begin{prop}
Let $V \in {\MC{A}_{p,\iny}}$ be a matrix weight.
If $p \geq 2$, then $\pr{\det V}^{\frac{2}{dp}} \in \text{A}_\infty$, while if $1 < p < 2$, then $\pr{\det V}^\frac{1}{dp} \in \text{A}_\infty$.
In particular, $V$ is positive definite a.e.
\end{prop}
\begin{proof}
For some $Q \subset \ensuremath{\mathbb{R}}^n$, let $R_Q ^p(V)$ be a reducing matrix for $V$ and take $\set{\V{e}_j}_{j=1}^d$ to be an orthonormal basis of eigenvectors of $R_Q ^p(V)$.
If $p \geq 2$, then applying \eqref{DetConvexIneq} to $V^\frac{2}{p}$ and using Lemma \ref{DetProp} shows that
\begin{align*}
\fint_Q \brac{\det V(x)}^{\frac{2}{dp}} dx
&\leq \pr{\det \fint_Q V^\frac{2}{p}(x) dx}^\frac{1}{d}
\leq \brac{\prod_{j = 1}^d \innp{\pr{\fint_Q V^\frac{2}{p}(x) dx} \V{e}_j, \V{e}_j}}^\frac{1}{d}
= \brac{\prod_{j = 1}^d {\fint_Q \innp{V^\frac{2}{p}(x) \V{e}_j, \V{e}_j} dx} }^\frac{1}{d} \\
&= \brac{\prod_{j = 1}^d {\fint_Q \abs{V^{\frac{1}{p}}(x) \V{e}_j}^2 dx}}^\frac{1}{d}
\leq \brac{\prod_{j = 1}^d {\fint_Q \abs{V^{\frac{1}{p}}(x) \V{e}_j}^p dx}}^\frac{2}{d p},
\end{align*}
where the last inequality follows from H\"{o}lder's inequality.
Applications of \eqref{reducingDef}, that $\{\V{e}_j\}$ is an orthonormal basis of eigenvectors of $R_Q ^p(V)$, and \eqref{Apinfone} then give us that
\begin{align*}
\fint_Q \brac{\det V(x)}^{\frac{2}{dp}} dx
&\leq \brac{\prod_{j = 1}^d \pr{\fint_Q \abs{V^{\frac{1}{p}}(x) \V{e}_j}^p dx}^{\frac 1 p}}^\frac{2}{d}
\le \brac{\prod_{j = 1}^d \abs{R_Q ^p(V) \V{e}_j}}^\frac{2}{d}
= \brac{\det R_Q ^p(V)} ^\frac{2}{d} \\
&\lesssim \exp \pr{\fint_Q \ln \brac{\det V(x)}^\frac{2}{dp} \, dx }.
\end{align*}
By the classical reverse Jensen characterization of scalar ${\text{A}_\iny}$ weights (see \cite[Theorem 7.3.3]{Gra14}), we conclude that $\pr{\det V}^{\frac{2}{dp}} \in {\text{A}_\iny}$.
If $p \in \pr{1, 2}$, then applying the argument above to $V^{\frac 1 p}$ shows that
\begin{align*}
\fint_Q \brac{\det V(x)}^{\frac{1}{dp}} dx
&\leq \brac{\prod_{j = 1}^d {\fint_Q \innp{V^\frac{1}{p}(x)\V{e}_j, \V{e}_j} dx} }^\frac{1}{d}
\leq \brac{\prod_{j = 1}^d {\fint_Q \abs{V^{\frac{1}{p}}(x) \V{e}_j} dx}}^\frac{1}{d}
\leq \brac{\prod_{j = 1}^d {\fint_Q \abs{V^{\frac{1}{p}}(x) \V{e}_j}^p dx}}^\frac{1}{d p} \\
&\le \brac{\prod_{j = 1}^d \abs{R_Q ^p(V) \V{e}_j}}^\frac{1}{d}
= \brac{\det R_Q ^p(V)} ^\frac{1}{d}
\lesssim \exp \pr{\fint_Q \ln \brac{\det V(x)}^\frac{1}{dp} \, dx },
\end{align*}
from which it follows that $\pr{\det V}^\frac{1}{dp} \in \T{A}_\infty$.
\end{proof}
Now we give another characterization of the ${\MC{A}_{p,\iny}}$ class of matrices from \cite{Vol97}.
\begin{lem}[${\MC{A}_{p,\iny}}$ characterization]
\label{ApiProperty}
Let $V$ be a $p$-nondegenerate matrix weight.
Then $V \in {\MC{A}_{p,\iny}}$ iff $V$ is positive definite and there exists a constant $C > 0$ so that for every $Q \subset \ensuremath{\mathbb{R}}^n$ and every $\V{e} \in \ensuremath{\mathbb{C}}^d$, it holds that
\begin{equation}
\label{Apinftwo}
\exp\pr{\fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e} | \, dx} \le C \abs{ (R_Q ^p(V))^{-1} \V{e} }.
\end{equation}
\end{lem}
This result was originally proved in \cite[p. 451]{Vol97}.
\begin{proof}
Let $Q \subset \ensuremath{\mathbb{R}}^n$ be arbitrary and assume that \eqref{Apinftwo} holds for every $\V{e} \in \ensuremath{\mathbb{C}}^d$.
Let $\set{\V{e}_i}_{i = 1}^d$ be an orthonormal basis of eigenvectors for $R_Q ^p(V)$, and consequently for $R_Q ^p(V)^{-1}$.
For each $i = 1, \ldots, d$, taking the logarithm of the assumption \eqref{Apinftwo} shows that
\begin{equation*}
\fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e}_i | \, dx
\leq \ln C + \ln \abs{(R_Q ^p(V))^{-1} \V{e}_i }.
\end{equation*}
Applying Lemma \ref{DetProp}, the inequality on the previous line, and then summing gives
\begin{align*}
\fint_Q \ln \det V^{-\frac{1}{p}} (x) \, dx
&\leq \sum_{i = 1} ^d \fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e}_i | dx
\leq d\ln C + \sum_{i = 1} ^d \ln \abs{ (R_Q ^p(V))^{-1} \V{e}_i } \\
&= d\ln C + \ln \pr{\prod_{i = 1}^d \abs{ (R_Q ^p(V))^{-1} \V{e}_i } }
= d \ln C + \ln \det (R_Q ^p(V))^{-1}.
\end{align*}
After rearrangement, this is equivalent to \eqref{Apinfone}.
Since $Q$ was arbitrary, it follows that $V \in {\MC{A}_{p,\iny}}$.
Now assume that $V \in {\MC{A}_{p,\iny}}$.
It follows from \eqref{Apinfone} that for any $Q \subset \ensuremath{\mathbb{R}}^n$,
\begin{equation}
\label{ApinfoneA}
\fint_Q \ln \det \brac{V^{-\frac{1}{p}} (x) R_Q ^p(V)} \, dx \leq c.
\end{equation}
Define the matrix $B(x) = R_Q ^p(V) V^{-\frac{2}{p}} (x) R_Q ^p(V)$ and let $0 < \lambda_1(x) \leq \cdots \leq \lambda_d(x)$ be the eigenvalues of $B(x)$ with corresponding normalized eigenvectors $\V{e}_1 (x), \ldots, \V{e}_d (x)$.
Thus, for any fixed unit vector $\V{e}$, we have
\begin{equation}
\label{AinfEstOne}
\begin{aligned}
\det \brac{V^{-\frac{1}{p}} (x) R_Q ^p(V)}
& = \brac{\det B(x)} ^\frac12
= \pr{\prod_{i = 1}^d \innp{B(x) \V{e}_i (x), \V{e}_i (x)}}^\frac12 \\
& \geq \pr{\innp{B(x) \V{e}, \V{e}} \prod_{i = 1}^{d-1} \innp{B(x) \V{e}_i (x), \V{e}_i (x)}}^\frac12
= \prod_{i = 1}^{d} \abs{V^{-\frac{1}{p}} (x) R_Q ^p(V) \V{f}_i (x)},
\end{aligned}
\end{equation}
where we have set $\V{f}_i (x) = \V{e}_i (x)$ for $i = 1, \ldots, d-1$, and $\V{f}_d (x)= \V{e}$.
However, for any (constant) orthonormal basis $\set{\V{g}_j}_{j=1}^d$ of $\ensuremath{\C^d}$, we have
\begin{align*}
\fint_Q \abs{R_Q ^p(V) ^{-1} V^{\frac{1}{p}} (x) \V{f}_i (x)}^p \, dx
&\lesssim_{(p)} \sum_{j = 1}^d \fint_Q \abs{ \innp{ \V{f}_i (x), V^{\frac{1}{p}} (x) R_Q ^p(V) ^{-1} \V{g}_j }} ^p \, dx\\
&\leq \sum_{j = 1}^d \fint_Q \abs{ V^{\frac{1}{p}} (x) R_Q ^p(V) ^{-1} \V{g}_j } ^p \, dx
\leq \sum_{j = 1}^d \abs{ R_Q ^p(V) R_Q ^p(V) ^{-1} \V{g}_j } ^p
= d.
\end{align*}
Therefore,
\begin{equation}
\label{AinfEstTwo}
\begin{aligned}
\sum_{i = 1}^d \fint_Q \ln ^+ \abs{R_Q ^p(V) ^{-1} V^{\frac{1}{p}} (x) \V{f}_i (x)} \, dx
&= \frac{1}{p} \sum_{i = 1}^d \fint_Q \ln ^+ \abs{R_Q ^p(V) ^{-1} V^{\frac{1}{p}} (x) \V{f}_i (x)}^p \, dx \\
&\leq \frac{1}{p} \sum_{i = 1}^d \fint_Q \abs{R_Q ^p(V) ^{-1} V^{\frac{1}{p}} (x) \V{f}_i (x)}^p \, dx \leq c(d, p).
\end{aligned}
\end{equation}
Note that for any invertible matrix $A$ and any unit vector $\V{c}$ we have $1 = \innp{A \V{c}, A^{-1} \V{c}} \leq \abs{A \V{c}} \abs{A^{-1} \V{c}}$.
In particular,
\begin{equation}
\label{InvIneq}
\abs{A \V{c}}^{-1} \leq \abs{A^{-1} \V{c}} .
\end{equation}
Using that $\V{f}_d = \V{e}$ and $\ln \le \ln^+$, the identity $\ln x = \ln ^+ x - \ln ^+ x^{-1}$, followed by applications of \eqref{AinfEstOne}, \eqref{InvIneq}, \eqref{ApinfoneA}, and \eqref{AinfEstTwo}, shows that
\begin{align*}
\fint_Q \ln \abs{V^{-\frac{1}{p}} (x) R_Q ^p(V) \V{e} } \, dx
&\leq \sum_{i = 1}^d \fint_Q \ln^+ \abs{V^{-\frac{1}{p}} (x) R_Q ^p(V) \V{f}_i (x)} \, dx \\
&= \sum_{i = 1}^d \fint_Q \ln \abs{V^{-\frac{1}{p}} (x) R_Q ^p(V) \V{f}_i (x)} \, dx
+ \sum_{i = 1}^d \fint_Q \ln^+ \abs{V^{-\frac{1}{p}} (x) R_Q ^p(V) \V{f}_i (x)}^{-1} \, dx \\
&\leq \fint_Q \ln \det \brac{V^{-\frac{1}{p}} (x) R_Q ^p(V)} \, dx
+ \sum_{i = 1}^d \fint_Q \ln^+ \abs{R_Q ^p(V)^{-1} V^{\frac{1}{p}} (x) \V{f}_i (x)} \, dx \\
& \leq c + c(d, p) = C'.
\end{align*}
Now we replace $\V{e}$ with $\frac{R_Q ^p(V)^{-1} \V{e}}{\abs{R_Q ^p(V)^{-1} \V{e}}}$ for an arbitrary vector $\V{e} \in \ensuremath{\mathbb{C}}^d$ to get that
$$\fint_Q \ln \abs{V^{-\frac{1}{p}} (x) \V{e} } \, dx \leq C' + \ln \abs{R_Q ^p(V)^{-1} \V{e}}.$$
After exponentiating both sides, this gives \eqref{Apinftwo}, as required.
\end{proof}
As was shown in \cite{NT96,Vol97}, for any matrix weight, the reverse inequalities to both \eqref{Apinfone} and \eqref{Apinftwo} hold with constant $C = 1$.
\begin{lem}[Reverse inequalities]
\label{OtherMatrixJensen}
Let $V$ be a $p$-nondegenerate matrix weight.
Then for any cube $Q \subset \ensuremath{\mathbb{R}}^n$ and any $\V{e} \in \ensuremath{\mathbb{C}}^d$, it holds that
\begin{equation*}
\det R_Q ^p (V) \geq \exp \fint_Q \ln \det V^\frac{1}{p} (x) \, dx
\end{equation*}
and
\begin{equation*}
\exp\pr{ \fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e} | \, dx} \geq \abs{ (R_Q ^p (V))^{-1} \V{e} }.
\end{equation*}
\end{lem}
\begin{proof}
To prove the first inequality, let $\set{\V{e}_i}_{i=1}^d$ be an orthonormal basis of eigenvectors for $R_Q ^p(V)$.
Applications of Lemma \ref{MatrixJensen}, Lemma \ref{DetProp}, H\"{o}lder's inequality, and \eqref{reducingDef} show that
\begin{align*}
\exp \pr{\fint_Q \ln \det V^\frac{1}{p} (x) \, dx}
&\leq \det \pr{\fint_Q V^\frac{1}{p}(x) dx}
\leq \prod_{i = 1}^d \abs{\pr{\fint_Q V^\frac{1}{p}(x) \, dx} \V{e}_i}
\le \prod_{i = 1}^d \fint_Q \abs{V^\frac{1}{p} (x) \V{e}_i} \, dx \\
&\leq \prod_{i = 1}^d \pr{\fint_Q \abs{V^\frac{1}{p} (x) \V{e}_i}^p \, dx}^\frac{1}{p}
\leq \prod_{i = 1}^d \abs{R_Q ^p (V) \V{e}_i} = \det R_Q ^p (V),
\end{align*}
as required.
For the second inequality, observe that for any $\V{e}, \V{f} \in \ensuremath{\C^d}$
\begin{equation*}
\abs{\innp{\V{e}, \V{f}}} \leq |V^{-\frac{1}{p}}(x) \V{e}| |V^\frac{1}{p} (x)\V{f}|.
\end{equation*}
Taking logarithms and averages then shows that
\begin{equation*}
\ln \abs{\innp{\V{e}, \V{f}}}
\leq \fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e} | \, dx
+ \fint_Q \ln |V^\frac{1}{p} (x) \V{f}| \, dx
\leq \fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e} | \, dx
+ \pr{\fint_Q \ln |V^\frac{1}{p} (x) \V{f}|^p \, dx}^{\frac 1 p},
\end{equation*}
where we have applied H\"older's inequality to the second term on the right.
Thus, Jensen's inequality implies that
\begin{align*}
\abs{\innp{\V{e}, \V{f}}}
&\leq \exp\pr{\fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e} | \, dx } \brac{ \exp\pr{ \fint_Q \ln |V^\frac{1}{p} (x) \V{f} |^p \, dx }}^\frac{1}{p} \\
&\leq \exp\pr{\fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e} | \, dx } \pr{ \fint_Q |V^\frac{1}{p} (x) \V{f} |^p \, dx }^\frac{1}{p}
\leq \exp\pr{\fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e} | \, dx } \abs{R_Q ^p(V) \V{f}},
\end{align*}
where we used \eqref{reducingDef} to reach the last line.
Replacing $\V{f}$ with $(R_Q ^p(V))^{-1} \V{f}$ and using duality shows that
\begin{equation*}
\abs{(R_Q ^p(V))^{-1} \V{e}}
\leq \exp\pr{\fint_Q \ln |V^{-\frac{1}{p}} (x) \V{e} | \, dx},
\end{equation*}
as desired.
\end{proof}
We also need the following elementary lemma from \cite{NT96}.
\begin{lem}[Determinant to norm lemma]
\label{DetToNormLem}
Let $A$ be a $d \times d$ matrix for which $|\det A| \leq C < \infty$ and $|A\V{e} | \geq |\V{e}|$ for any $\V{e} \in \ensuremath{\C^d}$.
Then $\|A\| \leq C$.
\end{lem}
\begin{proof}
Let $\lambda_1, \ldots, \lambda_d$ denote the eigenvalues of $A^*A$.
The second condition implies that $\innp{A^*A \V{e}, \V{e}} \geq |\V{e}|^2$ for any $\V{e} \in \ensuremath{\C^d}$, so it holds that $\displaystyle \min_j |\lambda_j| \geq 1$.
Since the first condition implies that
$$ \prod_j |\lambda_j| = \det A^* A = \abs{\det A}^2 \leq C^2,$$
we must have that
$$\norm{A}^2 = \norm{A ^* A} = \max_j \lambda_j \leq C^2$$
and the conclusion follows.
\end{proof}
If $V \in \MC{ND}$, where $\MC{ND}$ is the ``nondegenerate" class of matrix weights introduced and discussed in Section \ref{MWeights}, then for any measurable set $E$ with $|E| > 0$, it holds that $\displaystyle \int_E V > 0$.
That is, for any $\V{e} \in \ensuremath{\C^d}$, we have
\begin{equation*}
0 < \innp{\pr{\int_E V(x) \, dx} \V{e}, \V{e}} = \int_E \innp {V(x) \V{e}, \V{e}} \, dx = \int_E \abs{V^\frac12 (x) \V{e}}^2 \, dx.
\end{equation*}
It follows that $V^p$ is $2p$-nondegenerate.
In particular, for each cube $Q \subset \ensuremath{\mathbb{R}}^n$, there exists a reducing matrix $R_Q ^{2p} (V^p)$.
We now state and prove a determinant characterization of the matrix class ${\MC{B}_p}$.
\begin{lem}[${\MC{B}_p}$ determinant characterization]
\label{BpDef}
If $V \in \MC{ND}$, then the following are equivalent:
\begin{itemize}
\item[(i)] There exists a constant $C > 0$ so that for every cube $Q \subset \ensuremath{\mathbb{R}}^n$,
\begin{equation}
\label{BpDefOne}
\det \brac{R_Q^{2p} (V^p)} \le C \det \left(\fint_Q V\pr{x} dx \right)^\frac12.
\end{equation}
\item[(ii)] There exists a constant $C > 0$ so that for every cube $Q \subset \ensuremath{\mathbb{R}}^n$ and every $\vec e \in \ensuremath{\mathbb{C}}^d$,
\begin{equation}
\label{BpDefTwo2}
\pr{\fint_Q \innp{V\pr{x} \vec e, \vec e}^p dx}^{1/p} \le C \innp{\pr{ \fint_Q V\pr{x} dx } \vec e, \vec e} .
\end{equation}
\end{itemize}
\end{lem}
\begin{rem}
Notice that the condition described by \eqref{BpDefTwo2} is our classical definition of $V \in {\MC{B}_p}$ as presented in Section \ref{MWeights}.
Therefore, this proposition gives an alternative definition in terms of determinants and reducing matrices.
\end{rem}
\begin{proof}
We first prove that \eqref{BpDefOne} implies \eqref{BpDefTwo2}.
Observe that for any $\V{e} \in \ensuremath{\C^d}$, we have by H\"older's inequality and the property of the reducing matrix in \eqref{reducingDef} that
\begin{equation*}
\abs{\pr{\fint_Q V(x) \, dx }^\frac12 \V{e}}^2
= \fint_Q |V^\frac12(x) \V{e}|^2 \, dx
\leq \pr{\fint_Q \abs{\brac{V^p(x)}^\frac{1}{2p} \V{e}}^{2p} \, dx }^\frac{1}{p}
\leq \abs{\brac{R_Q^{2p} (V^p)}\V{e} }^2.
\end{equation*}
Thus, for any $\V{e} \in \ensuremath{\mathbb{C}}^d$,
\begin{equation*}
\abs{\brac{R_Q^{2p} (V^p)} \pr{\fint_Q V(x) \, dx }^{-\frac12} \V{e} } \geq |\V{e}|,
\end{equation*}
while the assumption of \eqref{BpDefOne} implies that
\begin{equation*}
\det \set{\brac{R_Q^{2p} (V^p)} \pr{\fint_Q V(x) \, dx }^{-\frac12}} \leq C.
\end{equation*}
An application of Lemma \ref{DetToNormLem} shows that
$$\norm{\brac{R_Q^{2p} (V^p)} \pr{\fint_Q V(x) \, dx }^{-\frac12} } \leq C.$$
Therefore, it follows from \eqref{reducingpCons} that
\begin{equation*}
\pr{\fint_Q \innp{V(x) \V{e}, \V{e}}^p \, dx} ^\frac{1}{p}
\leq \abs{R_Q ^{2p} (V^p) \V{e}}^2
\leq C^2 \abs{\pr{\fint_Q V(x) dx} ^\frac12 \V{e}}^2
= C^2 \innp{\pr{\fint_Q V(x) \, dx} \V{e}, \V{e}},
\end{equation*}
showing that \eqref{BpDefTwo2} holds.
For the converse, assume that \eqref{BpDefTwo2} holds.
As demonstrated above, this assumption is equivalent to
\begin{align*}
& \abs{R_Q ^{2p} (V^p) \V{e}} ^2 \leq C \abs{\pr{\fint_Q V(x) dx}^\frac12 \V{e}}^2 \qquad \forall \V{e} \in \ensuremath{\C^d} \\
\Leftrightarrow & \abs{R_Q ^{2p} (V^p) \pr{\fint_Q V(x) dx} ^{-\frac12} \V{e}} ^2 \leq C \abs{ \V{e}}^2 \qquad \forall \V{e} \in \ensuremath{\C^d} \\
\Leftrightarrow & \norm{\pr{\fint_Q V(x) dx} ^{-\frac12} \brac{R_Q^{2p} (V^p)} ^2 \pr{\fint_Q V(x) dx} ^{-\frac12} } = \norm{ R_Q ^{2p} (V^p) \pr{\fint_Q V(x) dx} ^{-\frac12}}^2 \leq C^2.
\end{align*}
It follows that
\begin{equation*}
\det \brac{ \pr{\fint_Q V(x) dx} ^{-\frac12} \brac{R_Q^{2p} (V^p)} ^2 \pr{\fint_Q V(x) dx} ^{-\frac12}}
= \frac{\brac{\det R_Q^{2p} (V^p)}^2}{\det \fint_Q V(x) \, dx}
\leq C^{2d},
\end{equation*}
which implies \eqref{BpDefOne}, as required.
\end{proof}
Recall the classical assertion that for $p > 1$, a scalar weight $v \in {\text{B}_p}$ iff $v^p \in {\text{A}_\iny}$.
The following proposition connects the classes ${\MC{B}_p}$ with ${\MC{A}_{p,\iny}}$ and provides a matrix analogue to the aforementioned scalar result.
See also \cite[Corollary 3.8]{Ros16} for a related result.
\begin{prop}
\label{AinfBpLem}
If $V \in \MC{ND}$, then $V^p \in \mathcal{A}_{2p, \infty}$ iff $V \in {\MC{A}_{2,\iny}} \cap {\MC{B}_p}$.
\end{prop}
\begin{proof}
Assume that $V \in \MC{ND} \cap {\MC{A}_{2,\iny}} \cap {\MC{B}_p}$.
Since $V \in \MC{ND} \cap {\MC{B}_p}$, then the conclusions from Lemma \ref{BpDef} hold.
Since $\displaystyle \pr{\fint_Q V(x) dx}^\frac12$ is a reducing matrix for $p = 2$ and $V \in {\MC{A}_{2,\iny}}$, the inequality \eqref{Apinfone} holds with $\displaystyle R_Q^2(V) = \pr{\fint_Q V(x) dx}^\frac12$.
Combining \eqref{BpDefOne} and \eqref{Apinfone} shows that for any cube $Q \subset \ensuremath{\mathbb{R}}^n$, we have
\begin{equation*}
\det \brac{R_Q ^{2p} (V^p)}
\le C \det \pr{\fint_Q V(x) dx} ^\frac12
\le C \exp \pr{\fint_Q \ln \det V^\frac12 (x) \, dx}.
\end{equation*}
Comparing with \eqref{Apinfone}, this shows that $V^p \in \mathcal{A}_{2p, \infty}$.
Conversely, assume that $V^p \in \mathcal{A}_{2p, \infty}$ and $V \in \MC{ND}$.
By the definition of $\mathcal{A}_{2p, \infty}$ as in \eqref{Apinfone}, followed by an application of Lemma \ref{MatrixJensen}, we see that for any cube $Q \subset \ensuremath{\mathbb{R}}^n$,
\begin{align*}
\det \brac{R_Q ^{2p} (V^p)}
&\le C \exp \pr{\fint_Q \ln \det V^\frac12(x) \, dx}
= C \brac{\exp \pr{\fint_Q \ln \det V(x) \, dx}}^\frac12
\leq C \det \left( \fint_Q V (x) \, dx\right)^\frac12.
\end{align*}
Since $V \in \MC{ND}$, Lemma \ref{BpDef} implies that $V \in {\MC{B}_p}$.
For any $\V{e} \in \ensuremath{\C^d}$, we have by H\"older's inequality and \eqref{reducingDef} that
$$\abs{\pr{\fint_Q V(x) dx}^\frac12 \V{e}}
= \pr{\fint_Q \abs{V^\frac12 (x) \V{e}} ^2 \, dx}^\frac12
\leq \pr{\fint_Q \abs{V^\frac12 (x) \V{e}} ^{2p} \, dx}^\frac{1}{2p}
\leq \abs{R_Q ^{2p} \pr{V^p} \V{e}} $$
so that
\begin{align*}
\abs{\pr{R_Q ^{2p} (V^p) }^{-1} \pr{\fint_Q V(x) dx} \pr{R_Q ^{2p} (V^p)} ^{-1}}
&= \abs{\pr{R_Q ^{2p} (V^p)} ^{-1} \pr{\fint_Q V(x) dx}^\frac12}^2 \\
&= \abs{\pr{\fint_Q V(x) dx}^\frac12 \pr{R_Q ^{2p} (V^p)}^{-1} }^2
\leq 1.
\end{align*}
Therefore, for any cube $Q \subset \ensuremath{\mathbb{R}}^n$,
$$\det \pr{\fint_Q V(x) dx}^\frac12
\le \det R_Q ^{2p} \pr{V^p}
\le C \exp \pr{\fint_Q \ln \det V^\frac12 (x) \, dx },$$
where the second inequality uses \eqref{Apinfone} since $V^p \in \mathcal{A}_{2p, \infty}$.
In particular, since $\displaystyle \pr{\fint_Q V(x) dx}^\frac12 $ is a reducing matrix for $p = 2$, it follows from \eqref{Apinfone} that $V \in {\MC{A}_{2,\iny}}$.
\end{proof}
Now that we have discussed the ${\MC{A}_{p,\iny}}$ classes of matrices, we seek the connections between the classes ${\MC{A}_{p,\iny}}$ and ${\MC{A}_\iny}$.
We first recall the definition of ${\MC{A}_\iny}$ from \cite{Dall15}.
\begin{defn}[${\MC{A}_\iny}$]
We say that $V \in {\MC{A}_\iny}$ if for any $\epsilon > 0$, there exists $\delta > 0$ so that for any cube $Q \subset \ensuremath{\mathbb{R}}^n$, it holds that
\begin{equation}
\label{AinfIneq}
\abs{\set{x \in Q: V\pr{x} \geq \delta \fint_Q V(y) dy}} \geq (1-\epsilon) |Q|.
\end{equation}
\end{defn}
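When $d = 1$, this condition recovers one of the standard characterizations of scalar ${\text{A}_\iny}$ weights.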
For the proofs below, we will use an alternative version of this definition which appears in \cite{Dall15}, described by the next lemma.
\begin{lem}[${\MC{A}_\iny}$ characterization]
\label{AiChar}
$V \in {\MC{A}_\iny}$ iff for any $\epsilon > 0$ there exists $\gamma > 0$ such that for any cube $Q \subset \ensuremath{\mathbb{R}}^n$, it holds that
\begin{equation}
\label{AinfIneq2}
\abs{\set{x \in Q : \norm{\pr{\fint_Q V\pr{y} \, dy}^\frac12 V^{-\frac12} (x)} > \gamma}} < \epsilon |Q|.
\end{equation}
\end{lem}
\begin{proof}
Examining the ${\MC{A}_\iny}$ condition described by \eqref{AinfIneq}, we see that
\begin{align*}
V(x) \geq \delta \fint_Q V(y) dy
& \Leftrightarrow \innp{V(x) \V{e}, \V{e}} \geq \delta \innp{\pr{\fint_Q V(y) \, dy}\V{e}, \V{e}}, \quad \forall \V{e} \in \ensuremath{\C^d} \\
& \Leftrightarrow \delta^{-1} |\V{e}|^2 \geq \innp{\pr{\fint_Q V(y) \, dy} V^{-\frac12} (x)\V{e}, V^{-\frac12} (x)\V{e}}, \quad \forall \V{e} \in \ensuremath{\C^d} \\
& \Leftrightarrow \delta^{-1} |\V{e}|^2 \geq \abs{\pr{\fint_Q V(y) \, dy}^\frac12 V^{-\frac12} (x) \V{e} }^2, \quad \forall \V{e} \in \ensuremath{\C^d} \\
& \Leftrightarrow \delta^{-1}\geq \norm{\pr{\fint_Q V(y) \, dy}^\frac12 V^{-\frac12} (x)}^2.
\end{align*}
The conclusion follows from setting $\gamma = \delta^{-1/2}$.
\end{proof}
Our next pair of results examine the relationship between ${\MC{A}_{2,\iny}}$, $\RBM$, and ${\MC{A}_\iny}$.
We first prove the following more general result which implies the first inclusion.
\begin{prop}
\label{AinfLem}
If $p \geq 1$ and $V^p \in \mathcal{A}_{2p, \infty}$, then $V \in {\MC{A}_\iny}$.
In particular, ${\MC{A}_{2,\iny}} \subseteq {\MC{A}_\iny}$.
\end{prop}
\begin{proof}
Let $p \geq 1$ and assume that $V^p \in \mathcal{A}_{2p, \infty}$.
Note that $V^p$ is $2p$-nondegenerate by definition.
Let $Q \subset \ensuremath{\mathbb{R}}^n$ and let $\set{\V{e}_i}_{i = 1}^d$ be the standard orthonormal basis of $\ensuremath{\C^d}$.
For any $\lambda > 0$, let $\MC{J}_i (Q)$ denote the collection of maximal dyadic subcubes $J$ of $Q$ satisfying
\begin{equation}
\label{subcubeDefEqn}
\abs{\brac{R_J ^{2p} (V^p)}^{-1} \brac{R_Q ^{2p} (V^p)} \V{e}_i} > e^{C\lambda},
\end{equation}
where $C > 0$ is independent of $Q$ and $J$ (and will be determined below).
Then it is enough to prove the following claim:
\begin{clm}
\label{SubcubeClaim}
If $C> 0$ is sufficiently large (and independent of $Q$ and $J$), then
\begin{equation}
\label{DecayingStopEst}
\sum_{i = 1}^d \sum_{J \in \MC{J}_i (Q)} \abs{J} < \frac{1}{\lambda} |Q| .
\end{equation}
\end{clm}
Before proving the claim, we show how it leads to the conclusion that $V \in {\MC{A}_\iny}$.
If $\displaystyle x \in Q \backslash \pr{\bigcup_{i = 1}^d \bigcup_{J \in \MC{J}_i (Q)} J}$, then for any dyadic subcube $L$ of $Q$ containing $x$, we must have that
\begin{equation*}
\norm{R_Q ^{2p} (V^p)\brac{R_L ^{2p} (V^p)} ^{-1} }
= \norm{\brac{R_L ^{2p} (V^p)} ^{-1} R_Q ^{2p} (V^p)}
\leq e^{C \lambda}.
\end{equation*}
It follows that for any $\V{e} \in \ensuremath{\C^d}$,
\begin{equation}
\label{LcubeObs}
\abs{R_Q ^{2p} (V^p) \V{e}}
\leq e^{C \lambda} \abs{R_L ^{2p} (V^p) \V{e}}
\leq \sqrt{d} e^{C \lambda} \pr{\fint_L \abs{V^\frac12 (y) \V{e} }^{2p} \, dy}^\frac{1}{2p},
\end{equation}
where we have applied \eqref{reducingpCons} in the last inequality.
Applications of H\"older's inequality, \eqref{reducingpCons}, then \eqref{LcubeObs} combined with the Lebesgue differentiation theorem show that for any $\V{e} \in \ensuremath{\C^d}$,
\begin{equation*}
\abs{\pr{\fint_Q V(y) \, dy }^\frac12 \V{e}}
= \pr{\fint_Q \abs{V^\frac12(y) \V{e}}^2 \, dy}^\frac12 \leq \pr{\fint_Q \abs{V^\frac12(y) \V{e}}^{2p} \, dy}^\frac{1}{2p}
\leq \abs{R_Q ^{2p} (V^p) \V{e}}
\leq \sqrt{d} e^{C \lambda} \abs{V^\frac12 (x) \V{e}}.
\end{equation*}
However, this implies that, modulo a set of measure zero,
\begin{equation*}
Q \backslash \pr{\bigcup_{i = 1}^d \bigcup_{J \in \MC{J}_i (Q)} J}
\subseteq \set{x \in Q : \norm{\pr{\fint_Q V\pr{y} \, dy}^\frac12 V^{-\frac12} (x)} \leq \sqrt{d} e^{C\lambda}}.
\end{equation*}
In particular, an application of \eqref{DecayingStopEst} shows that
\begin{equation*}
\abs{\set{x \in Q : \norm{\pr{\fint_Q V\pr{y} \, dy}^\frac12 V^{-\frac12} (x)} > \sqrt{d} e^{C\lambda }}}
\leq \sum_{i = 1}^d \sum_{J \in \MC{J}_i (Q)} \abs{J} < \frac{1}{\lambda} |Q|.
\end{equation*}
Since \eqref{AinfIneq2} holds with $\lambda = \frac{1}{\epsilon}$ and $\gamma = \sqrt{d} e^{C \lambda}$, then it follows from Lemma \ref{AiChar} that $V \in {\MC{A}_\iny}$.
To complete the proof, we now establish Claim \ref{SubcubeClaim}.
Note that this claim was implicitly proved in the proof of Lemma $3.1$ in \cite{Vol97}, but we include the details for the sake of completeness.
Let $J \in \MC{J}_i (Q)$.
Taking logarithms in \eqref{subcubeDefEqn}, then applying Lemma \ref{OtherMatrixJensen} shows that
\begin{equation*}
C \lambda
< \ln \abs{\brac{R_J ^{2p} (V^p)}^{-1} R_Q ^{2p} (V^p) \V{e}_i}
\leq \fint_J \ln \abs{V^{-\frac12} (x) R_Q ^{2p} (V^p) \V{e}_i} \, dx
\leq \fint_J \ln^+ \abs{V^{-\frac12} (x) R_Q ^{2p} (V^p) \V{e}_i} \, dx,
\end{equation*}
where the last inequality ensures that the integrand is nonnegative.
Since each collection $\MC{J}_i (Q)$ consists of pairwise disjoint cubes by maximality, we may sum to get
\begin{align*}
\sum_{i = 1}^d \sum_{J \in \MC{J}_i (Q)} \abs{J}
&\leq \frac{1}{C\lambda} \sum_{i = 1}^d \sum_{J \in \MC{J}_i (Q)} \int_J \ln^+ \abs{V^{-\frac12} (x) R_Q ^{2p} (V^p) \V{e}_i} \, dx
\leq \frac{1}{C\lambda} \sum_{i = 1}^d \int_Q \ln^+ \abs{V^{-\frac12} (x) R_Q ^{2p} (V^p) \V{e}_i} \, dx \\
&= \frac{1}{C\lambda} \sum_{i = 1}^d \int_Q \ln \abs{V^{-\frac12} (x) R_Q ^{2p} (V^p) \V{e}_i} \, dx
+ \frac{1}{C\lambda} \sum_{i = 1}^d \int_Q \ln^+ \abs{V^{-\frac12} (x) R_Q ^{2p} (V^p) \V{e}_i}^{-1} \, dx,
\end{align*}
since $\ln ^+ x = \ln x + \ln ^+ x^{-1}$.
As $V^p \in \mathcal{A}_{2p, \infty}$, then it follows from Lemma \ref{ApiProperty} that for some $C' > 0$ independent of $Q$,
\begin{equation*}
\int_Q \ln \abs{V^{-\frac12} (x) R_Q ^{2p} (V^p) \V{e}_i} \, dx
\leq C' |Q|.
\end{equation*}
An application of \eqref{InvIneq} with $A = V^{-\frac12} (x) R_{Q}^{2p} (V^p)$ and $\V{c} = \V{e}_i$ gives
\begin{align*}
\int_Q \ln^+ \abs{V^{-\frac12} (x) R_Q ^{2p} (V^p) \V{e}_i}^{-1} \, dx
&\leq \int_Q \ln^+ \abs{\brac{R_Q ^{2p} (V^p)}^{-1} V^{\frac12} (x) \V{e}_i} \, dx
\leq |Q| \fint_Q \abs{\brac{R_Q ^{2p} (V^p)}^{-1} V^{\frac12} (x) \V{e}_i} \, dx \\
&\leq |Q| \pr{\fint_Q \abs{\brac{R_Q ^{2p} (V^p)}^{-1} V^{\frac12} (x) \V{e}_i}^{2p} \, dx}^\frac{1}{2p}
\le C(d, p) |Q|,
\end{align*}
where we have applied H\"older's inequality and the same argument used to prove \eqref{AinfEstTwo} for the last two inequalities, respectively.
Combining the previous three observations shows that
\begin{align*}
\sum_{i = 1}^d \sum_{J \in \MC{J}_i (Q)} \abs{J}
\leq \frac{d\pr{C' + C(d, p)}}{C\lambda} |Q|.
\end{align*}
In particular, Claim \ref{SubcubeClaim} holds whenever $C > d\brac{C' + C(d, p)}$.
\end{proof}
Next we show that the inclusion may be reversed.
In fact, for the final result of this appendix, we prove three equivalent conditions for nondegenerate matrices.
But first, we recall the following definition of the reverse Brunn-Minkowski class of matrices.
\begin{defn}[$\RBM$]
We say that a matrix weight $V$ belongs to the {\bf reverse Brunn-Minkowski class}, $V \in \RBM$, if there exists a constant $B_V > 0$ so that for any cube $Q \subset \ensuremath{\R^n}$, it holds that
$$\pr{\det \fint_Q V(x) dx}^\frac{1}{d} \leq B_V \fint_Q \brac{\det V(x)}^\frac{1}{d} dx.$$
\end{defn}
\begin{prop}
\label{AtwoinfAinfProp}
If $V \in \MC{ND}$, then the following are equivalent:
\begin{itemize}
\item[a)] $V \in {\MC{A}_{2,\iny}}$,
\item[b)] $V \in {\MC{A}_\iny}$,
\item[c)] $V \in \RBM$ and $(\det V)^\frac{1}{d} \in \T{A}_\infty$.
\end{itemize}
\end{prop}
\begin{proof}
That $a) \Rightarrow b)$ was proved in Proposition \ref{AinfLem}.
We now prove $b) \Rightarrow c)$.
Let $V \in \MC{ND} \cap {\MC{A}_\iny}$.
For any $\varepsilon > 0$, let $\delta = \delta(\varepsilon)$ be as given in the definition of ${\MC{A}_\iny}$.
Let $\displaystyle \set{\V{e}_k(x)}_{k = 1}^d$ be an orthonormal basis of eigenvectors for $V(x)$ and let $S\subseteq Q$ be the set on the left-hand side of \eqref{AinfIneq} so that $|S| \geq (1 - \varepsilon) |Q|$. Then for any $x \in S$, we have that
\begin{equation}
\label{DetAinfone}
\begin{aligned}
\det V(x)
&= \prod_{k = 1}^d \innp{V(x) \V{e}_k(x), \V{e}_k(x)}
\geq \delta \prod_{k = 1}^d \innp{\pr{\fint_Q V(y) dy} \V{e}_k(x), \V{e}_k(x)}
\geq \delta \det \pr{\fint_Q V(y) dy} \\
&\geq \delta \pr{ \fint_Q \brac{\det V(y)}^\frac{1}{d} \, dy}^d,
\end{aligned}
\end{equation}
where here we have used both Lemma \ref{DetProp} and \eqref{DetConvexIneq}.
In other words,
\begin{equation*}
\abs{\set{ x \in Q : \brac{\det V (x) }^\frac{1}{d} \ge \delta^\frac{1}{d} \fint_Q \brac{\det V(y)}^\frac{1}{d} dy }}
\geq |S|
\geq (1 - \varepsilon) |Q| ,
\end{equation*}
which shows that $\pr{\det V}^\frac{1}{d} \in \T{A}_\infty$.
Next, to show that $V \in \RBM$, we use the first line of \eqref{DetAinfone} with $\varepsilon = \frac12$ and $\delta = \delta\pr{\frac12}$ to get
\begin{align*}
\fint_Q \brac{\det V(y)}^\frac{1}{d} \, dy
&\ge \frac 1 {|Q|} \int_S \brac{\det V(x)} ^\frac{1}{d} \, dx
\ge \frac 1 {|Q|} \int_S \delta^{\frac 1 d} \brac{\det \pr{\fint_Q V(y) dy}}^{\frac 1 d} \, dx
\ge \frac{\delta^{\frac 1 d}}{2} \brac{\det \pr{\fint_Q V(y) dy}}^{\frac 1 d},
\end{align*}
as required.
Finally we prove that $c) \Rightarrow a)$.
If $(\det V) ^\frac{1}{d} \in \T{A}_\infty$, then by the classical reverse Jensen characterization of scalar ${\text{A}_\iny}$ weights (again see \cite[Theorem 7.3.3]{Gra14}), there exists $C > 0$ so that for any $Q \subset \ensuremath{\mathbb{R}}^n$, we have
\begin{equation*}
\fint_Q \brac{\det V(x)} ^\frac{1}{d} dx
\le C \exp \pr{\fint_Q \ln \brac{\det V(x)}^\frac{1}{d} dx }
= C \brac{\exp \pr{\fint_Q \ln \det V(x) dx}}^\frac{1}{d}.
\end{equation*}
However, combining this bound with $V \in \RBM$ gives us
$$\pr{\det \fint_Q V(x) dx}^\frac{1}{d} \leq B_V \fint_Q \brac{\det V(x)}^\frac{1}{d} dx \leq B_V C \brac{\exp \pr{\fint_Q \ln \det V(x) dx}}^\frac{1}{d},$$
which by \eqref{Apinfone2} shows that $V \in {\MC{A}_{2,\iny}}$ and completes the proof.
\end{proof}
\section{Technical Proofs}
\label{TechProofs}
This final appendix provides the technical proofs that were skipped in the body of the paper.
We first prove Proposition \ref{la1Prop}, which states that if $V \in {\MC{B}_p} \cap {\MC{A}_\iny}$, then $\lambda_1$, the smallest eigenvalue of $V$, belongs to ${\text{B}_p}$.
\begin{proof}[Proof of Proposition \ref{la1Prop}]
Let $\varepsilon > 0$.
Since $V \in {\MC{A}_\iny}$, we may choose $\delta > 0$ so that \eqref{Ainf} holds.
Fix a cube $Q \subset \ensuremath{\mathbb{R}}^n$ and define
$$S = \set{x \in Q: V\pr{x} \geq \delta \fint_Q V\pr{y} dy}.$$
Let $\lambda_1 \le \lambda_2 \le \ldots \le \lambda_d$ denote the eigenvalue functions of $V$ with associated orthonormal eigenvectors $\set{\V{v}_i}_{i=1}^d$.
That is, for each $i = 1, \ldots, d$, $V \V{v}_i = \lambda_i \V{v}_i$.
Observe that for any $x \in S$,
\begin{align*}
\lambda_1^p\pr{x}
&= \innp{V\pr{x} \V{v}_1\pr{x}, \V{v}_1\pr{x}}^p
\ge \brac{ \innp{ \pr{ \delta \fint_Q V\pr{y} dy} \V{v}_1\pr{x}, \V{v}_1\pr{x}}}^p \\
&= \delta^p \brac{ \fint_Q \innp{ V\pr{y} \V{v}_1\pr{x}, \V{v}_1\pr{x}} dy}^p
\ge \pr{\frac \delta {C_V}}^p \fint_Q \innp{ V\pr{y} \V{v}_1\pr{x}, \V{v}_1\pr{x}}^p dy,
\end{align*}
where the last inequality follows from the assumption that $V \in {\MC{B}_p}$.
Now by diagonalization,
\begin{align*}
\innp{V\pr{y} \V{v}_1\pr{x}, \V{v}_1\pr{x}}
&= \sum_{j=1}^d \lambda_j\pr{y} \abs{\innp{\V{v}_1\pr{x}, \V{v}_j\pr{y}}}^2
\ge \lambda_1\pr{y} \sum_{j=1}^d\abs{\innp{\V{v}_1\pr{x}, \V{v}_j\pr{y}}}^2
= \lambda_1\pr{y} \abs{\V{v}_1\pr{x}}^2 \\
&= \lambda_1\pr{y}.
\end{align*}
Combining these inequalities shows that $\displaystyle \lambda_1^p\pr{x} \ge \pr{\frac \delta {C_V}}^p \fint_Q \lambda_1^p\pr{y} dy$.
If we define $\delta' = \pr{\frac \delta {C_V}}^p$ and $\displaystyle S' = \set{x \in Q: \lambda_1^p\pr{x} \geq \delta' \fint_Q \lambda_1^p\pr{y} dy}$, then we see that $S \subset S'$.
In particular, $\abs{S'} \ge \abs{S} \ge \pr{1 - \varepsilon} \abs{Q}$.
Therefore, $\lambda_1^p \in {\text{A}_\iny}$.
It then follows from the scalar fact that $w \in {\text{B}_p}$ whenever $w^p \in {\text{A}_\iny}$ that $\lambda_1 \in {\text{B}_p}$, as required.
\end{proof}
Now we prove Proposition \ref{BickelToProveProp} which states that a matrix weight of the form $V = \pr{a_{ij} \abs{x}^{\gamma_{ij}}}_{i, j = 1}^d$ belongs to ${\MC{A}_{2,\iny}} \cap {\MC{B}_p}$ if $A = \pr{a_{ij}}_{i, j = 1}^d$ is a Hermitian, positive definite matrix and $\gamma_{ij} = \frac 1 2 \pr{\gamma_i + \gamma_j}$ for some $\V{\gamma} \in \ensuremath{\mathbb{R}}^d$ with $\gamma_{i} > - \frac{n}{p}$.
\begin{proof}[Proof of Proposition \ref{BickelToProveProp}]
First observe that since $V$ is positive definite, then $V \in \MC{ND}$.
By Proposition \ref{AinfBpLem}, $V \in {\MC{A}_{2,\iny}} \cap {\MC{B}_p}$ iff $V^p \in \mathcal{A}_{2p, \infty}$.
Therefore, we will show that $V^p \in \mathcal{A}_{2p, \infty}$.
By Lemma \ref{ApiProperty}, $V^p \in \mathcal{A}_{2p, \infty}$ iff there exists a constant $C > 0$ so that for every $\V{e} \in \ensuremath{\C^d}$ and every cube $Q \subset \ensuremath{\mathbb{R}}^n$, it holds that
\begin{equation*}
\exp\pr{\fint_Q \ln |V^{-\frac{1}{2}} (x) \V{e} | \, dx} \le C \abs{ \brac{R_Q ^{2p}(V^p)}^{-1} \V{e} },
\end{equation*}
where $R_Q^{2p}$ is a reducing matrix of $V^p$; see \eqref{reducingDef}.
This condition is equivalent to the existence of $C > 0$ so that for every unit vector $\V{e} \in \ensuremath{\C^d}$ and every cube $Q \subset \ensuremath{\mathbb{R}}^n$, it holds that
\begin{equation}
\label{BickelToProve}
\exp\pr{\fint_Q \ln |V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} | \, dx} \le C .
\end{equation}
Therefore, to prove this proposition, we will show that there exists a constant $C > 0$ so that \eqref{BickelToProve} holds for every unit vector $\V{e} \in \ensuremath{\C^d}$ and every cube $Q \subset \ensuremath{\mathbb{R}}^n$.
First, using the facts that $\ln x \leq \abs{\ln x} = \abs{ \ln ^+ x - \ln ^+ x^{-1}} \leq \ln ^+ x + \ln ^+ x^{-1}$ and $\abs{A\V{e}}^{-1} \leq \abs{A^{-1} \V{e}}$ for any invertible Hermitian matrix $A$ and any unit vector $\V{e}$, we get
\begin{equation}
\label{BickelEst0}
\begin{aligned}
&\brac{\exp\pr{\fint_Q \ln |V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} | \, dx} }^2
= \exp\brac{\fint_Q \ln \pr{|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^2} \, dx} \\
\le& \exp\brac{\fint_Q \set{\ln^+ \pr{|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^2} + \ln^+ \pr{|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^{-2}}} \, dx} \\
=& \exp\brac{\fint_Q \ln^+ \pr{|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^2} \, dx}
\exp \brac{\fint_Q \ln^+ \pr{|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^{-2}} \, dx} \\
\le& \exp\brac{\fint_Q \ln^+ \pr{|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^2} \, dx}
\exp\brac{\fint_Q \ln ^+ \pr{\abs{\brac{R_Q ^{2p}(V^p)}^{-1} V^{\frac{1}{2}} (x) \V{e} }^2} \, dx} \\
=:& E_1 \times E_2.
\end{aligned}
\end{equation}
We estimate $E_2$.
If $\set{\V{e}_j}_{j = 1}^d$ is any orthonormal basis of $\ensuremath{\C^d}$, then since $\ln ^+x \leq x$ for $x > 0$, we get
\begin{equation}
\label{BickelEst1}
\begin{aligned}
E_2
&= \exp\brac{\fint_Q \ln ^+ \pr{\abs{\brac{R_Q ^{2p}(V^p)}^{-1} V^{\frac{1}{2}} (x) \V{e} }^2} \, dx}
\leq \exp\pr{\fint_Q \abs{V^{\frac{1}{2}} (x) \brac{R_Q ^{2p}(V^p)}^{-1} }^{2} \, dx} \\
&\leq \prod_{j = 1}^d \exp\pr{\fint_Q \abs{V^{\frac{1}{2}} (x) \brac{R_Q ^{2p}(V^p)}^{-1} \V{e}_j }^{2} \, dx} \\
&\leq \prod_{j = 1}^d\brac{\exp\pr{\fint_Q \abs{V^{\frac{1}{2}} (x) \brac{R_Q ^{2p}(V^p)}^{-1} \V{e}_j }^{2p} \, dx}}^\frac{1}{p} \\
&\leq \prod_{j = 1}^d\brac{\exp \pr{\abs{R_Q^{2p} (V^p) \brac{R_Q ^{2p}(V^p)}^{-1} \V{e}_j }^{2p}} }^\frac{1}{p}
\leq \exp \pr{\frac{d}{p}},
\end{aligned}
\end{equation}
where we have applied H\"older's inequality followed by the reducing matrix property from \eqref{reducingDef}.
Next, we estimate $E_1$.
We start with some preliminary estimates.
For any unit vector $\V{e} \in \ensuremath{\mathbb{C}}^d$ and any $x \in Q$, observe that by another application of \eqref{reducingDef},
\begin{equation}
\label{BickelEst2}
\begin{aligned}
|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^2
&\leq |R_Q ^{2p}(V^p) V^{-\frac{1}{2}} (x) |^2
\leq \sum_{j = 1}^d |R_Q ^{2p}(V^p) V^{-\frac{1}{2}} (x) \V{e}_j |^2 \\
&\leq d \sum_{j = 1}^d \pr{\fint_Q \abs{V^\frac12 (y) V^{-\frac{1}{2}} (x) \V{e}_j }^{2p} dy}^{\frac 1{p}}
\lesssim_{(d)} \pr{\fint_Q \abs{V^\frac12 (y) V^{-\frac{1}{2}} (x) }^{2p} dy}^{\frac 1{p}} \\
&\simeq_{(d)} \pr{\fint_Q \abs{\tr \pr{V (y) V^{-1} (x) }}^{p} \, dy}^{\frac 1{p}},
\end{aligned}
\end{equation}
where in the final line we have used the fact that whenever $B$ and $C$ are Hermitian, positive semidefinite matrices, it holds that $\displaystyle \abs{B^\frac12 C^\frac 12}^2 = \abs{B ^\frac12 C B^\frac12}
\simeq_{(d)} \tr \pr{B^\frac12 C B^\frac12} = \tr \pr{BC}$.
Using the explicit presentation of $V$ and $V^{-1}$ from \eqref{mpower} and \eqref{mpowerIn}, respectively, we see that
\begin{equation}
\label{BickelEst3}
\begin{aligned}
\pr{\fint_Q \abs{ \tr \brac{V (y) V^{-1} (x) }}^{p} \, dy}^{\frac 1{p}}
&= \pr{\fint_Q \abs{ \sum_{i, j=1}^d a_{ij} a^{ji} |y|^{\gamma_{ij}} |x|^{-\gamma_{ji}} }^{p} \, dy}^{\frac 1{p}}
\\
& \lesssim_{(A)} \sum_{i, j=1}^d |x|^{-\gamma_{ij}} \pr{ \fint_Q \abs{f_{ij}(y)}^{p} dy}^\frac{1}{p} ,
\end{aligned}
\end{equation}
where we have introduced the notation $f_{ij}(y) = \abs{y}^{\gamma_{ij}}$.
Since $p \gamma_{ij} = \frac{p}{2} (\gamma_{i} + \gamma_{j}) > - n$, then $\displaystyle f_{ij}^{p} \in \T{A}_{\infty} = \bigcup_{q \geq 1} \T{A}_q$ (see \cite[p.~506]{Gra14}), which implies that $f_{ij} \in \T{B}_p$.
In particular, it holds that
$$\fint_Q \abs{f_{ij}(y)}^{p} dy \lesssim_{(n, p, \gamma_{ij})} \pr{\fint_Q {f_{ij}(y)} dy}^p.$$
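To illustrate this scalar estimate in the model case, consider a ball $B_R$ centred at the origin, where the singularity of $f_{ij}$ makes the comparison tightest (this is only a sketch of the model computation; the general case is part of the cited scalar theory). In polar coordinates, for any $s > -n$,
\begin{equation*}
\fint_{B_R} \abs{y}^{s} \, dy = \frac{n}{n+s} \, R^{s},
\end{equation*}
so that
\begin{equation*}
\fint_{B_R} \abs{f_{ij}(y)}^{p} \, dy = \frac{n}{n + p\gamma_{ij}} \, R^{p\gamma_{ij}}
\qquad \text{and} \qquad
\pr{\fint_{B_R} f_{ij}(y) \, dy}^{p} = \pr{\frac{n}{n + \gamma_{ij}}}^{p} R^{p\gamma_{ij}},
\end{equation*}
and the two quantities agree up to a constant depending only on $n$, $p$, and $\gamma_{ij}$, as asserted.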
Thus, combining this scalar estimate with \eqref{BickelEst2} and \eqref{BickelEst3} shows that there exists $C = C(d, n, p, A, \V{\gamma})$ so that
\begin{align}
\label{E1EstTool}
|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^2
&\le C \sum_{i, j=1}^d |x|^{-\gamma_{ij}} \pr{\fint_Q f_{ij}(y) dy}.
\end{align}
To estimate $E_1$, we use that for any $C, x_1, x_2, \ldots, x_d > 0$, it holds that $\displaystyle \ln^+\pr{\sum_{i =1}^d x_i} \leq d + \sum_{i = 1}^d \ln ^+x_i$ and $\ln^+(C x) \le \ln^+ C + \ln^+ x$.
(The proofs of these results follow from induction and case analysis.)
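To indicate the case analysis for the second of these: if $Cx \le 1$, then $\ln^+(Cx) = 0$ and the inequality is immediate, while if $Cx > 1$, then
\begin{equation*}
\ln^+(Cx) = \ln C + \ln x \le \ln^+ C + \ln^+ x.
\end{equation*}
The first follows similarly, since $\displaystyle \ln^+ \pr{\sum_{i=1}^d x_i} \le \ln^+ \pr{d \max_i x_i} \le \ln d + \max_i \ln^+ x_i \le d + \sum_{i=1}^d \ln^+ x_i$.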
Therefore, from \eqref{E1EstTool} we see that
\begin{equation*}
\begin{aligned}
E_1
&= \exp\brac{\fint_Q \ln^+ \pr{|V^{-\frac{1}{2}} (x) R_Q ^{2p}(V^p) \V{e} |^2} \, dx}
\le \exp\set{\fint_Q \ln^+ \brac{C \sum_{i, j=1}^d |x|^{-\gamma_{ij}} \pr{\fint_Q {f_{ij}(y)} dy}} \, dx} \\
&\le \exp\brac{\fint_Q \set{d\pr{1 + \ln^+ C} + \sum_{i, j=1}^d \ln^+ \brac{ |x|^{-\gamma_{ij}} \pr{\fint_Q {f_{ij}(y)} dy}}} \, dx} \\
&\lesssim_{(d, n, p, A, \V{\gamma})} \prod_{i, j=1}^d \exp\set{\fint_Q \ln^+ \brac{|x|^{-\gamma_{ij}} \pr{\fint_Q {f_{ij}(y)} dy}} \, dx} \\
&\le \prod_{i, j =1}^d \exp\set{\fint_Q \ln \brac{|x|^{-\gamma_{ij}} \pr{\fint_Q {f_{ij}(y)} dy}} \, dx} \exp\set{\fint_Q \ln^+ \brac{|x|^{\gamma_{ij}} \pr{\fint_Q {f_{ij}(y)} dy}^{-1}} \, dx},
\end{aligned}
\end{equation*}
where in the last line we used that $\ln ^+ x = \ln x + \ln^+ x^{-1}$.
Since $\ln^+ x \le x$, then
\begin{align*}
\fint_Q \ln^+ \brac{|x|^{\gamma_{ij}} \pr{\fint_Q {f_{ij}(y)} dy}^{-1}} \, dx
&\leq \fint_Q |x|^{\gamma_{ij}} \pr{\fint_Q {f_{ij}(y)} dy}^{-1} \, dx
= \fint_Q {f_{ij}(x)} dx \pr{\fint_Q {f_{ij}(y)} dy}^{-1}
= 1.
\end{align*}
On the other hand, since $\gamma_{ij} = \frac 1 2 \pr{\gamma_i + \gamma_j} > -\frac{n}{p} \ge -n$, then $f_{ij} \in \T{A}_\infty$.
An application of the reverse Jensen inequality (see \cite[p. 525]{Gra14}) shows that there exists $C(n, p, \gamma_{ij}) > 0$ so that for any $Q \subset \ensuremath{\mathbb{R}}^n$, it holds that
\begin{align*}
\exp\set{\fint_Q \ln \brac{|x|^{-\gamma_{ij}} \pr{\fint_Q {f_{ij}(y)} dy}} \, dx}
&= \pr{\fint_Q {f_{ij}(y)} dy} \exp\brac{\fint_Q \ln \pr{|x|^{-\gamma_{ij}}} \, dx}
\leq C .
\end{align*}
It follows that $E_1 \lesssim_{(d, n, p, A, \V{\gamma})} 1$, which, when combined with \eqref{BickelEst0} and \eqref{BickelEst1}, shows that \eqref{BickelToProve} holds, as required.
\end{proof}
Next we prove Proposition \ref{RBrunnMinProp}, which states that if $V \in \MC{ND}$ and there exists a constant $B_V > 0$ so that $\displaystyle \pr{\det \fint_Q V}^\frac{1}{d} \leq B_V \fint_Q \pr{ \det V} ^\frac{1}{d}$ for every cube $Q \subset \ensuremath{\mathbb{R}}^n$, then $V \in \MC{NC}$.
\begin{proof}[Proof of Proposition \ref{RBrunnMinProp}]
Observe that if $\set{\V{e}_j}_{j=1}^d$ is any orthonormal basis of $\ensuremath{\C^d}$, then
\begin{align*}
\abs{V^\frac12 (x) \pr{\fint_Q V}^{-1} V^\frac12 (x)}
& = \abs{V^\frac12 (x) \pr{\fint_Q V}^{-\frac12 } \brac{V^\frac12 (x) \pr{\fint_Q V}^{-\frac12 }}^*} \\
&= \abs{\brac{V^\frac12 (x) \pr{\fint_Q V}^{-\frac12 }}^* V^\frac12 (x) \pr{\fint_Q V}^{-\frac12 }}
= \abs{\pr{\fint_Q V}^{-\frac12 } V(x) \pr{\fint_Q V}^{-\frac12 }} \\
&\leq \sum_{j = 1}^d \innp {V(x) \pr{\fint_Q V}^{-\frac12 } \V{e}_j, \pr{\fint_Q V}^{-\frac12 } \V{e}_j}.
\end{align*}
Thus, we have
$$\abs{\fint_Q V^\frac12 (x) \pr{\fint_Q V}^{-1} V^\frac12 (x) dx}
\leq \sum_{j = 1}^d \innp {\pr{\fint_Q V(x) dx} \pr{\fint_Q V}^{-\frac12 } \V{e}_j, \pr{\fint_Q V}^{-\frac12 } \V{e}_j}
\leq d,$$
which implies that the largest eigenvalue of $\displaystyle \fint_Q {V^\frac12 (x) \pr{\fint_Q V}^{-1} V^\frac12 (x)} dx$ is bounded above by $d$ for every cube $Q \subset \ensuremath{\mathbb{R}}^n$.
Assume that $V \notin \MC{NC}$.
Looking at \eqref{NCCond}, this means that there exists a sequence of cubes $\set{Q_k}_{k=1}^\infty \subset \ensuremath{\mathbb{R}}^n$ so that if we define $\displaystyle V_k := \fint_{Q_k} {V^\frac12 (x) \pr{\fint_{Q_k} V}^{-1} V^\frac12 (x)} dx$, then each $V_k$ has a smallest eigenvalue $\lambda_{k,1} := \lambda_1(V_k)$ with the property that $\lambda_{k,1} \to 0$ as $k \to \infty$.
For $j = 1, \ldots, d$, let $\lambda_{k,j} := \lambda_j(V_k)$, the $j^{\text{th}}$ eigenvalue of $V_k$, and note that $\lambda_{k, j} \le d$ for $j = 2, \ldots, d$.
Then
\begin{equation}
\label{notND}
\begin{aligned}
&\inf \set{\det \brac{\fint_Q {V^\frac12(x) \pr{\fint_Q V}^{-1} V^\frac12(x)} dx} : Q \subset \ensuremath{\mathbb{R}}^n} \\
\le& \inf\set{\det V_k : k \in \ensuremath{\mathbb{N}}}
= \inf\set{\prod_{j = 1}^d \lambda_{k, j} : k \in \ensuremath{\mathbb{N}}}
\le \inf\set{ \lambda_{k, 1} d^{d-1} : k \in \ensuremath{\mathbb{N}}}
= 0.
\end{aligned}
\end{equation}
However, for any $Q \subset \ensuremath{\mathbb{R}}^n$, an application of \eqref{DetConvexIneq} to $\displaystyle V^\frac12 \pr{\fint_Q V}^{-1} V^\frac12$ shows that
\begin{align*}
\det \brac{\fint_Q {V^\frac12(x) \pr{\fint_Q V}^{-1} V^\frac12(x)} dx}
&\ge \set{\fint_Q\brac{\det \pr{V^\frac12(x) \pr{\fint_Q V}^{-1} V^\frac12(x)}}^{\frac 1 d} dx}^d \\
&= \pr{\fint_Q\brac{\det V(x)}^{\frac 1 d} dx}^d \det\pr{\fint_Q V}^{-1}
\ge B_V^{-d},
\end{align*}
where we have applied the assumption in the last inequality.
This contradicts \eqref{notND}, and therefore gives the desired conclusion.
\end{proof}
Finally, we provide the proof of Proposition \ref{umCompLem}.
Recall that Proposition \ref{umCompLem} states that if $V \in {\MC{B}_p} \cap \MC{ND} \cap {\MC{A}_\iny}$ for some $p > \frac n 2$, then $m(x, \lambda_1) \le \underline{m}(x, V) \lesssim m(x, \lambda_1)$.
\begin{proof}[Proof of Proposition \ref{umCompLem}]
Let $r = \frac 1 {\underline{m}\pr{x, V}}$.
Choose $\V{e} \in \ensuremath{\mathbb{S}^{d-1}}$ so that
\begin{align*}
1 &= \innp{\Psi(x, r;V) \V{e}, \V{e}}
= \innp{\pr{\frac{1}{r^{n-2}} \int_{Q\pr{x,r}} V\pr{y}dy } \V{e}, \V{e}}
= \frac{1}{r^{n-2}} \int_{Q\pr{x,r}} \innp{V\pr{y} \V{e}, \V{e}} dy.
\end{align*}
Since $\innp{V\pr{y} \V{e}, \V{e}} \ge \lambda_1\pr{y}$, then it follows that $r \le \frac{1}{m\pr{x, \lambda_1}}$ so that
$$m(x, \lambda_1) \le \underline{m}(x, V).$$
Since $V \in {\MC{A}_\iny}$, then there exists $\delta > 0$ so that if we define
$$S\pr{x, r} = \set{y \in Q\pr{x, r} : V\pr{y} \ge \delta \fint_{Q\pr{x, r}} V\pr{z} dz},$$
then $\abs{S\pr{x, r}} \ge \frac 1 2 \abs{Q\pr{x,r}}$.
Then with $r = \frac 1 {\underline{m}\pr{x, V}}$ as above,
\begin{align*}
\Psi\pr{x, r; \lambda_1}
&= r^{2-n} \int_{Q\pr{x, r}} \innp{V\pr{y} \V{v}_1\pr{y}, \V{v}_1\pr{y}} dy
\ge r^{2-n} \int_{S\pr{x, r}} \innp{V\pr{y} \V{v}_1\pr{y}, \V{v}_1\pr{y}} dy \\
&\ge \delta r^{2-n} \int_{S\pr{x, r}} \innp{\pr{\fint_{Q\pr{x, r}} V\pr{z} dz } \, \V{v}_1\pr{y}, \V{v}_1\pr{y}} dy \\
&\ge \frac \delta 2 \fint_{S\pr{x, r}} \innp{\pr{r^{2-n} \int_{Q\pr{x, r}} V\pr{z} dz} \, \V{v}_1\pr{y}, \V{v}_1\pr{y}} dy
= \frac \delta 2 \fint_{S\pr{x, r}} \innp{ \underline{\Psi}\pr{x} \V{v}_1\pr{y}, \V{v}_1\pr{y}} dy \\
&\ge \frac \delta 2 \fint_{S\pr{x, r}} 1 dy
= \frac \delta 2.
\end{align*}
Applying the previous observation, then Lemma \ref{BasicShenLem} with the fact that $r \le \frac 1 {m\pr{x, \lambda_1}}$, we see that
\begin{align*}
\frac \delta 2
&\le \Psi\pr{x, r; \lambda_1}
\le C_V \pr{r m\pr{x, \lambda_1}}^{2 - \frac n p} \Psi\pr{x, \frac 1 {m\pr{x, \lambda_1}}; \lambda_1}
=C_V \brac{\frac{ m\pr{x, \lambda_1}}{\underline{m}\pr{x, V}}}^{2 - \frac n p},
\end{align*}
where the last equality uses that $\Psi\pr{x, \frac{1}{m\pr{x, \lambda_1}}; \lambda_1} = 1$.
After rearranging, we see that $m\pr{x, \lambda_1} \gtrsim \underline{m}\pr{x, V}$, completing the proof.
\end{proof}
\end{appendix}
\begin{bibdiv}
\begin{biblist}
\bib{Aa09}{thesis}{
author={Aaen, Anders},
title={Singular integral operators on matrix-weighted {$L^p$} spaces},
type={Master's Thesis},
date={2009},
}
\bib{Agm82}{book}{
author={Agmon, Shmuel},
title={Lectures on exponential decay of solutions of second-order
elliptic equations: bounds on eigenfunctions of {$N$}-body {S}chr\"{o}dinger
operators},
series={Mathematical Notes},
publisher={Princeton University Press, Princeton, NJ; University of Tokyo
Press, Tokyo},
date={1982},
volume={29},
ISBN={0-691-08318-5},
review={\MR{745286}},
}
\bib{Amb15}{article}{
author={Ambrosio, Luigi},
title={Lecture notes on elliptic partial differential equations},
date={2015},
journal={Unpublished lecture notes. Scuola Normale Superiore di Pisa},
volume={30},
}
\bib{BHKPSS15}{article}{
author={Bayer, Christian},
author={Hoel, H{\aa}kon},
author={Kadir, Ashraful},
author={Plech\'{a}\v{c}, Petr},
author={Sandberg, Mattias},
author={Szepessy, Anders},
title={Computational error estimates for {B}orn-{O}ppenheimer molecular
dynamics with nearly crossing potential surfaces},
date={2015},
ISSN={1687-1200},
journal={Appl. Math. Res. Express. AMRX},
number={2},
pages={329\ndash 417},
url={https://doi.org/10.1093/amrx/abv007},
review={\MR{3394270}},
}
\bib{BLM17}{article}{
author={Bickel, Kelly},
author={Lunceford, Katherine},
author={Mukhtar, Naba},
title={Characterizations of {$A_2$} matrix power weights},
date={2017},
ISSN={0022-247X},
journal={J. Math. Anal. Appl.},
volume={453},
number={2},
pages={985\ndash 999},
url={https://doi.org/10.1016/j.jmaa.2017.04.035},
review={\MR{3648270}},
}
\bib{Bow01}{article}{
author={Bownik, Marcin},
title={Inverse volume inequalities for matrix weights},
date={2001},
ISSN={0022-2518},
journal={Indiana Univ. Math. J.},
volume={50},
number={1},
pages={383\ndash 410},
url={https://doi.org/10.1512/iumj.2001.50.1672},
review={\MR{1857041}},
}
\bib{Caf82}{article}{
author={Caffarelli, L.~A.},
title={Regularity theorems for weak solutions of some nonlinear
systems},
date={1982},
ISSN={0010-3640},
journal={Comm. Pure Appl. Math.},
volume={35},
number={6},
pages={833\ndash 838},
url={https://doi.org/10.1002/cpa.3160350605},
review={\MR{673831}},
}
\bib{Dall15}{article}{
author={Dall'Ara, Gian~Maria},
title={Discreteness of the spectrum of {S}chr\"{o}dinger operators with
non-negative matrix-valued potentials},
date={2015},
ISSN={0022-1236},
journal={J. Funct. Anal.},
volume={268},
number={12},
pages={3649\ndash 3679},
url={https://doi.org/10.1016/j.jfa.2014.10.007},
review={\MR{3341961}},
}
\bib{DHM18}{article}{
author={Davey, Blair},
author={Hill, Jonathan},
author={Mayboroda, Svitlana},
title={Fundamental matrices and {G}reen matrices for non-homogeneous
elliptic systems},
date={2018},
ISSN={0214-1493},
journal={Publ. Mat.},
volume={62},
number={2},
pages={537\ndash 614},
url={https://doi.org/10.5565/PUBLMAT6221807},
review={\MR{3815288}},
}
\bib{DI22}{article}{
author={Davey, Blair},
author={Isralowitz, Joshua},
title={Matrix {P}oincar\'e inequalities and applications to degenerate
elliptic {PDE}s},
date={2022},
note={In progress},
}
\bib{Fef83}{article}{
author={Fefferman, Charles~L.},
title={The uncertainty principle},
date={1983},
ISSN={0273-0979},
journal={Bull. Amer. Math. Soc. (N.S.)},
volume={9},
number={2},
pages={129\ndash 206},
url={https://doi.org/10.1090/S0273-0979-1983-15154-6},
review={\MR{707957}},
}
\bib{FM12}{article}{
author={Filoche, Marcel},
author={Mayboroda, Svitlana},
title={Universal mechanism for {A}nderson and weak localization},
date={2012},
journal={Proceedings of the National Academy of Sciences},
volume={109},
number={37},
pages={14761\ndash 14766},
}
\bib{Gol03}{article}{
author={Goldberg, Michael},
title={Matrix {$A_p$} weights via maximal functions},
date={2003},
ISSN={0030-8730},
journal={Pacific J. Math.},
volume={211},
number={2},
pages={201\ndash 220},
url={https://doi.org/10.2140/pjm.2003.211.201},
review={\MR{2015733}},
}
\bib{Gra14}{book}{
author={Grafakos, Loukas},
title={Classical {F}ourier analysis},
edition={Third},
series={Graduate Texts in Mathematics},
publisher={Springer, New York},
date={2014},
volume={249},
ISBN={978-1-4939-1193-6; 978-1-4939-1194-3},
url={https://doi.org/10.1007/978-1-4939-1194-3},
review={\MR{3243734}},
}
\bib{GW82}{article}{
author={Gr\"{u}ter, Michael},
author={Widman, Kjell-Ove},
title={The {G}reen function for uniformly elliptic equations},
date={1982},
ISSN={0025-2611},
journal={Manuscripta Math.},
volume={37},
number={3},
pages={303\ndash 342},
url={https://doi.org/10.1007/BF01166225},
review={\MR{657523}},
}
\bib{HL11}{book}{
author={Han, Qing},
author={Lin, Fanghua},
title={Elliptic partial differential equations},
edition={Second},
series={Courant Lecture Notes in Mathematics},
publisher={Courant Institute of Mathematical Sciences, New York; American
Mathematical Society, Providence, RI},
date={2011},
volume={1},
ISBN={978-0-8218-5313-9},
review={\MR{2777537}},
}
\bib{HS20}{article}{
author={Hoel, H{\aa}kon},
author={Szepessy, Anders},
title={Classical {L}angevin dynamics derived from quantum mechanics},
date={2020},
ISSN={1531-3492},
journal={Discrete Contin. Dyn. Syst. Ser. B},
volume={25},
number={10},
pages={4001\ndash 4038},
url={https://doi.org/10.3934/dcdsb.2020135},
review={\MR{4147373}},
}
\bib{HK07}{article}{
author={Hofmann, Steve},
author={Kim, Seick},
title={The {G}reen function estimates for strongly elliptic systems of
second order},
date={2007},
ISSN={0025-2611},
journal={Manuscripta Math.},
volume={124},
number={2},
pages={139\ndash 172},
url={https://doi.org/10.1007/s00229-007-0107-1},
review={\MR{2341783}},
}
\bib{Hor03}{book}{
author={H\"{o}rmander, Lars},
title={The analysis of linear partial differential operators. {I}},
series={Classics in Mathematics},
publisher={Springer-Verlag, Berlin},
date={2003},
ISBN={3-540-00662-1},
url={https://doi.org/10.1007/978-3-642-61497-2},
note={Distribution theory and Fourier analysis, Reprint of the second
(1990) edition [Springer, Berlin; MR1065993 (91m:35001a)]},
review={\MR{1996773}},
}
\bib{KPSS18}{article}{
author={Kammonen, Aku},
author={Plech\'{a}\v{c}, Petr},
author={Sandberg, Mattias},
author={Szepessy, Anders},
title={Canonical quantum observables for molecular systems approximated
by ab initio molecular dynamics},
date={2018},
ISSN={1424-0637},
journal={Ann. Henri Poincar\'{e}},
volume={19},
number={9},
pages={2727\ndash 2781},
url={https://doi.org/10.1007/s00023-018-0699-x},
review={\MR{3844476}},
}
\bib{KPSS18b}{article}{
author={Kammonen, Aku},
author={Plech\'{a}\v{c}, Petr},
author={Sandberg, Mattias},
author={Szepessy, Anders},
title={Correction to: {C}anonical quantum observables for molecular
systems approximated by ab initio molecular dynamics},
date={2019},
ISSN={1424-0637},
journal={Ann. Henri Poincar\'{e}},
volume={20},
number={8},
pages={2873\ndash 2875},
url={https://doi.org/10.1007/s00023-019-00819-x},
review={\MR{3979627}},
}
\bib{MP19}{article}{
author={Mayboroda, Svitlana},
author={Poggi, Bruno},
title={Exponential decay estimates for fundamental solutions of
{S}chr\"{o}dinger-type operators},
date={2019},
ISSN={0002-9947},
journal={Trans. Amer. Math. Soc.},
volume={372},
number={6},
pages={4313\ndash 4357},
url={https://doi.org/10.1090/tran/7817},
review={\MR{4009431}},
}
\bib{NT96}{article}{
author={Nazarov, F.~L.},
author={Tre\u{\i}l\cprime, S.~R.},
title={The hunt for a {B}ellman function: applications to estimates for
singular integral operators and to other classical problems of harmonic
analysis},
date={1996},
ISSN={0234-0852},
journal={Algebra i Analiz},
volume={8},
number={5},
pages={32\ndash 162},
review={\MR{1428988}},
}
\bib{Per01}{incollection}{
author={Pereyra, Mar\'{\i}a~Cristina},
title={Lecture notes on dyadic harmonic analysis},
date={2001},
booktitle={Second {S}ummer {S}chool in {A}nalysis and {M}athematical
{P}hysics ({C}uernavaca, 2000)},
series={Contemp. Math.},
volume={289},
publisher={Amer. Math. Soc., Providence, RI},
pages={1\ndash 60},
url={https://doi.org/10.1090/conm/289/04874},
review={\MR{1864538}},
}
\bib{Pin06}{thesis}{
author={Pingen, Michael},
title={Zur regularit\"atstheorie elliptischer systeme und harmonischer
abbildungen},
type={Ph.D. Thesis},
date={2006},
}
\bib{PSS19}{article}{
author={Plech\'{a}\v{c}, Petr},
author={Sandberg, Mattias},
author={Szepessy, Anders},
title={The classical limit of quantum observables in the conservation
laws of fluid dynamics},
date={2019},
ISSN={1539-6746},
journal={Commun. Math. Sci.},
volume={17},
number={8},
pages={2191\ndash 2221},
url={https://doi.org/10.4310/CMS.2019.v17.n8.a5},
review={\MR{4069618}},
}
\bib{Po21}{article}{
author={Poggi, Bruno},
title={Applications of the landscape function for {S}chr\"odinger
operators with singular potentials and irregular magnetic fields},
date={2021},
journal={arXiv preprint arXiv:2107.14103},
}
\bib{Ros16}{article}{
author={Ros\'{e}n, Andreas},
title={A local {$Tb$} theorem for matrix weighted paraproducts},
date={2016},
ISSN={0213-2230},
journal={Rev. Mat. Iberoam.},
volume={32},
number={4},
pages={1259\ndash 1276},
url={https://doi.org/10.4171/RMI/915},
review={\MR{3593522}},
}
\bib{She94}{article}{
author={Shen, Zhong~Wei},
title={On the {N}eumann problem for {S}chr\"{o}dinger operators in
{L}ipschitz domains},
date={1994},
ISSN={0022-2518},
journal={Indiana Univ. Math. J.},
volume={43},
number={1},
pages={143\ndash 176},
url={https://doi.org/10.1512/iumj.1994.43.43007},
review={\MR{1275456}},
}
\bib{She95}{article}{
author={Shen, Zhong~Wei},
title={{$L^p$} estimates for {S}chr\"{o}dinger operators with certain
potentials},
date={1995},
ISSN={0373-0956},
journal={Ann. Inst. Fourier (Grenoble)},
volume={45},
number={2},
pages={513\ndash 546},
url={http://www.numdam.org/item?id=AIF_1995__45_2_513_0},
review={\MR{1343560}},
}
\bib{She96}{article}{
author={Shen, Zhongwei},
title={Eigenvalue asymptotics and exponential decay of eigenfunctions
for {S}chr\"{o}dinger operators with magnetic fields},
date={1996},
ISSN={0002-9947},
journal={Trans. Amer. Math. Soc.},
volume={348},
number={11},
pages={4465\ndash 4488},
url={https://doi.org/10.1090/S0002-9947-96-01709-6},
review={\MR{1370650}},
}
\bib{She99}{article}{
author={Shen, Zhongwei},
title={On fundamental solutions of generalized {S}chr\"{o}dinger
operators},
date={1999},
ISSN={0022-1236},
journal={J. Funct. Anal.},
volume={167},
number={2},
pages={521\ndash 564},
url={https://doi.org/10.1006/jfan.1999.3455},
review={\MR{1716207}},
}
\bib{Tan07}{book}{
author={Tanner, David},
title={Introduction to quantum mechanics, a time-dependent perspective},
publisher={University Science Books},
date={2007},
ISBN={978-1891389238},
}
\bib{Vol97}{article}{
author={Volberg, A.},
title={Matrix {$A_p$} weights via {$S$}-functions},
date={1997},
ISSN={0894-0347},
journal={J. Amer. Math. Soc.},
volume={10},
number={2},
pages={445\ndash 466},
url={https://doi.org/10.1090/S0894-0347-97-00233-6},
review={\MR{1423034}},
}
\bib{WC04}{article}{
author={Worth, Graham~A},
author={Cederbaum, Lorenz~S},
title={Beyond {B}orn-{O}ppenheimer: molecular dynamics through a conical
intersection},
date={2004},
journal={Annu. Rev. Phys. Chem.},
volume={55},
pages={127\ndash 158},
}
\end{biblist}
\end{bibdiv}
\end{document}
YY Geminorum, (BD +32 1582, SAO 60199, Gliese 278c), is a short period (19.54
hours) eclipsing binary with two almost identical dM1e (flare star)
components. The close binary is a subsystem of the nearby Castor multiple star
(YY Gem = Castor C), at a distance of $\sim$14.9 pc. The binary nature was
discovered in 1916 (Adams \& Joy, 1917) and the first spectroscopic orbits were
given by Joy \& Sanford (1926). As the brightest known eclipsing binary of the
dMe type, YY Gem is an important fundamental standard for defining the low-mass
Main Sequence mass-luminosity and mass-radius relationships (Torres \& Ribas,
2002). However, it was clear already from Kron's (1952) pioneer study that there
are significant surface inhomogeneities (starspots) affecting the observed
brightness of both components, likely to complicate data analysis. YY Gem was
the first star, after the Sun, in which such maculation effects were
demonstrated. Before we can accurately define the intrinsic luminosities of
such stars we need to clarify the scale of these effects. This is also
significant for comparing the photometric parallax with direct measurements,
such as that from HIPPARCOS (Budding et al., 2005).
The system was reviewed by Torres \& Ribas (2002) and Qian et al.\ (2002), the
latter concentrating mainly on apparent variations of the orbital period.
Torres \& Ribas (2002) gave revised values for the mean mass and radius of the
very similar components as (solar units) $M$ = 0.5992$\pm$0.0047, $R$ =
0.6191$\pm$0.0057, with mean effective temperature $T$ = 3820$\pm$100 K, as
well as an improved parallax for the system of 66.90$\pm$0.63 mas. From such
results, Torres \& Ribas argued that there had been a tendency to adopt
systematically erroneous parameters for dwarf stars comparable to YY Gem, with
wider implications for low-mass stars in general.
Determination of the precise structure of these stars, in view of the absence of
definitive information on their intrinsic, spot-free, luminosities, is still
rather an open question. Torres \& Ribas (2002) and Qian et al.\ (2002) revised
the work of Chabrier \& Baraffe (1995), giving radiative core radii of about
70\%, leaving the outer 30\% to account for the convective zone. The strong
subsurface convective motions give rise to large-scale magnetic fields that
produce large starspots (cf.\ Bopp \& Evans, 1973). Moffett first reported large flare
events, and, in subsequent studies, YY Gem has been shown to be very active
(Moffett \& Barnes, 1979; Lacy et al., 1976; Doyle \& Butler, 1985; Doyle \& Mathioudakis,
1990; Doyle et al. 1990).
Doyle et al.\ (1990) have previously described
photometric observations of repetitive, apparently periodic, flares on YY Gem
which were observed during this programme. More recently, Gao et al.\ (2008)
modelled such periodicity effects on the basis of magnetic reconnection
between loops on the two stars generating interbinary flares. Fast
magneto-acoustic waves in plasma trapped in the space between the two
components are thought to modulate the magnetic reconnection, producing a
periodic behaviour of the flaring rate. Doyle et al.\ (1990) had previously
suggested filament oscillations. Several authors (see Vrsnak et al., 2007) have
subsequently reported solar
filament oscillations of similar duration to those suggested on YY Gem.
Multi-wavelength observations of flare activity on YY Gem were initiated by
Jackson, Kundu \& White (1989) using radio data from the VLA (see also, Gary,
1986). Stelzer et al.\ (2002) used the Chandra and XMM-Newton satellites in
simultaneous observations of the X-ray spectrum, while Saar \& Bookbinder
(2003) carried out far ultraviolet observations. Impulsive UV and X-ray
phenomena, taken to be essentially flare-like, were shown to be orders of
magnitude stronger than those occurring on the Sun (Haisch et al., 1990).
Tsikoudi \& Kellett (2000), reviewing X-ray and UV observations of the Castor
system, reported a large (EXOSAT) flare event with total X-ray emission
estimated as $\sim (7 \pm 1) \times 10^{33}$ ergs. Their comparison of X-ray
and bolometric heating rates pointed to strong magnetic activity within hot
coronal components.
\begin{figure*}
\centering
\vspace{9cm}
\vspace*{-8cm}
\includegraphics[width=13cm,clip,angle=0]{FIGURE-1.eps}
\vspace*{1cm}
\caption{Time-line of
various facilities used in the multiwavelength campaign of 1988.}
\end{figure*}
\begin{table}
\begin{center}
\caption{Multiwavelength Observations of YY Gem, March 1988}
\begin{tabular}{lccc}
Institute & Observer & Facility & Range\\
\hline
ISAS-Tokyo & Bromage & GINGA & ME X-rays \\
VILSPA-ESA& Foing & IUE & UV \\
Mauna Kea & Butler & UKIRT & IR \\
Mauna Kea & Doyle/Butler & 0.6m & UBVRI \\
McDonald Obs.& Frueh & 0.9m MCD & UBVRI \\
JILA Boulder& Brown & VLA & 5 \& 1.4 GHz \\
Crimea Obs.& Tuominen & 2.6m Shajn & UBVRI + H~${\alpha}$ \\
\hline \\
\end{tabular}
\end{center}
\end{table}
In this article, we concentrate on the multiwavelength campaign initiated from
the Armagh Observatory in 1988 (Butler, 1988). Our general aim is to bring
together results of some previously reported work (e.g.\ Doyle et al.\ 1990; Butler et al.\ 1994, 1995;
Budding et al.\ 1996; Tuominen et al.\ 1989) with contemporaneous satellite and radio observations,
thereby allowing an overview of
the campaign. One specific intention concerns the various light curves and their
analyses in terms of standard eclipsing binary models that include photospheric
inhomogeneities. In addition, we present hitherto unpublished, ultraviolet
(IUE), radio (VLA) and X-ray (Ginga) data, which should be relevant to
subsequent studies. A number of optical flares were observed but only two of
these were seen at other wavelengths, one in X-rays by Ginga and the other in
the microwave region by the VLA.
\section[]{The 1988 multi-wavelength campaign on YY Geminorum} In late February
to early March 1988, YY Gem was the object of a coordinated multiwavelength
campaign to observe the star simultaneously in radio, near infra-red, X-rays, UV
and optical radiation (Butler, 1988). The principal objectives of this programme
were: (i) To provide multicolour photometry of the light curve in order to
establish (a) the distribution of surface inhomogeneities (starspots), and (b)
the temperature difference of these inhomogeneous regions from the normal
photosphere. (ii) To provide high time-resolution photometry in V and K during
the eclipses in order to check on possible surface inhomogeneities by `eclipse
imaging' --- i.e.\ examining any small disturbances observed in the light
curve during eclipses. (iii) To use optical spectroscopy, X-ray and radio
monitoring to probe the outer atmospheres of the components and assess any
topographical connection between photospheric spots and bright chromospheric or
coronal regions. (iv) To monitor flares on YY Gem in as many separate wavebands
as possible in order to check their energy distribution and constrain
models.
The programme involved the facilities and observations given in Table 1. Several
other organisations offered support to the campaign, but unfortunately a
number of these were unable to provide useful data due to poor observing
conditions. In Figure 1, we show the overlap between observing facilities that
were successful in obtaining data. Seven major facilities provided the most
relevant data and six of these were operative on Mar 5 and 6, with a few hours
of overlap on those two days, and to a lesser extent on Mar 4.
\begin{figure*}
\centering
\includegraphics[width=13cm,clip,angle=0]{FIGURE-2.eps}
\caption{Hawaii 0.6 m UBVRI light curves of YY Gem.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=14cm,clip,angle=0]{FIGURE-3.eps}
\caption{UKIRT-BVK light curves of YY Gem.}
\end{figure*}
\section{UBVRIK photometry}
\subsection{Photometric Techniques}
To achieve the photometric aims we required broad-band photometry covering as
much of the optical and infra-red regions as possible. We therefore operated two
telescopes simultaneously: the University of Hawaii 0.6$^m$ telescope on Mauna
Kea and the neighbouring 3.8$^m$ United Kingdom Infra-Red Telescope (UKIRT).
Some additional observations were contributed by
Marion Frueh of McDonald Observatory, Texas.
All observers were alerted to a particular problem associated with photometry of YY Gem,
namely that the close proximity (separation $\sim$71 arcsec) to
YY Gem of the bright star Castor (A2 type, V $\sim$1.6)
makes it difficult to obtain repeatable and
consistent sky background measurements, particularly in the U and B bands,
where YY Gem is weak and Castor bright. Kron (1952) commented that, in the
vicinity of YY Gem, 30\% of the monitored blue light originated with Castor and
only 70\% with YY Gem itself (Budding \& Kitamura 1974). For this campaign, in order to
reduce the errors associated with scattered light, observers
were requested to take the mean of two adjacent sky areas, one to the east and another
to the west of YY Gem. Frequent reference to three nearby comparison stars: BD
32\degr 1577, BD 31\degr 1611 and BD 31\degr 1627, together with
standard transformation equations
and mean extinction coefficients allowed a photometric accuracy $\sim$ 0.01 magnitudes
to be achieved. Lists of the standards used, the colour equations
derived and the reduced photometric observations are given in the supplementary electronic tables
(http://star.arm.ac.uk/preprints/2014/654/).
\subsection{UBVRIK photometry from the 0.6m telescope on Mauna Kea}
The UBVRI photometry, from the 0.6$^m$ telescope and Tinsley Photometer, was
standardised to the Johnson UBV and Cape/Kron RI systems using equatorial and
southern hemisphere standards from Cousins (1980, 1984). The following mean
extinction coefficients were adopted: $\kappa_U = 0.22$, $\kappa_B = 0.16$,
$\kappa_V = 0.12$, $\kappa_R = 0.10$ and $\kappa_I = 0.07$. Due to the manual
operation of the Tinsley photoelectric photometer, time-resolution for a single complete
UBVRI set of measurements was restricted to several minutes. This was
satisfactory for the slower variations associated with eclipse effects and the
rotational modulation of spots, but unsuitable for flare monitoring. Therefore,
two modes of observation were used on this telescope: (1) UBVRI photometry, with
low time-resolution ($\Delta{\it t}\sim$ 2$^{\rm m}$) during eclipses and
approximately once per hour at other phases, and (2) continuous U-band
monitoring at (mainly) out-of-eclipse phases. Some of the latter data was
reported on by Doyle et al.\ (1990).
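The essentials of this differential reduction can be sketched in a few lines of Python. This is only an illustrative stand-in for the reduction programs actually used, with a hypothetical function name and input arrays (the full colour equations are given in the supplementary electronic tables):
\begin{verbatim}
import numpy as np

def differential_mag(n_var, n_comp, x_var, x_comp, kappa):
    # n_var, n_comp: sky-subtracted count rates of YY Gem and the
    #   comparison star; for YY Gem the sky is the mean of the two
    #   offset positions east and west of the star (Section 3.1)
    # x_var, x_comp: airmasses of the two pointings
    # kappa: mean extinction coefficient for the band (mag/airmass),
    #   e.g. kappa_V = 0.12 for Mauna Kea as adopted above
    dm_raw = -2.5 * np.log10(n_var / n_comp)   # instrumental difference
    return dm_raw - kappa * (x_var - x_comp)   # first-order extinction
\end{verbatim}
Standardisation to the Johnson and Kron/Cousins systems then follows by applying the derived colour equations to these differential magnitudes.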
Because the 0.6$^m$
telescope was set manually it seems likely that small errors in positioning of
the background comparison region could be responsible for some of the scatter in
the U and B light curves, which increases at shorter wavelengths. However, small,
unrecognised, flares would also contribute to the scatter. In Figure 2, we show
the UBVRI light curves for YY Gem from the combined data obtained on 2-7 March
1988 with the 0.6$^m$ telescope.
\subsection{BVK photometry with UKIRT on Mauna Kea}
The United Kingdom Infrared Telescope (UKIRT) was scheduled to observe on four
half-nights, during which two primary and two secondary eclipses occurred.
Continuous monitoring in the K-band simultaneously with V or B was made possible
with a dichroic filter and VISPHOT, a photoelectric photometer set up to
monitor the reflected optical beam. A nodding secondary mirror provided rapid
and repeatable background correction. As spot modulation effects are relatively
more prominent in V and flare effects in B, it was decided to monitor in K and
V during eclipses and in K and B at out-of-eclipse phases. Useful coverage of
the out-of-eclipse phases by UKIRT turned out to be quite limited, however.
The auto-guider was not functional at this time, resulting in occasional guiding
errors. We used the mean atmospheric extinction coefficients given above in
carrying out the differential reductions (see also Krisciunas et al., 1987).
A selection of standards suitable for both optical and infra-red photometry
was made for the determination of the colour equations. One of these (Gliese
699, Barnard's Star) was believed to be in the declining stage of a flare during
observation on 4 March 1988. Further details are given in the supplementary electronic tables.
In Figure 2, discrepancies can be seen at some phases in the B light curves,
but there is generally good agreement in V. This is consistent with the greater
influence of background irregularities and small flares at shorter wavelengths.
In Figure 3 we show the UKIRT B, V and K observations. The two broadband
photometric data-sets (Hawaii 0.6$^m$ and UKIRT) are comparable over common
phase intervals, although the less-scattered UKIRT data has poorer phase
coverage.
The cool-spot hypothesis receives support from the smaller amplitude of the
out-of-eclipse variation at the longer wavelengths. This is quite noticeable in
the UKIRT K-band, but less so in the 0.6$^m$ I-band data.
\subsection{UBVRI photometry from McDonald Observatory}
In order to increase the
probability of obtaining simultaneous optical photometry with radio, X-ray or
ultraviolet observations of flares, YY Gem was placed on the schedule for the
0.91$^{m}$ telescope at McDonald Observatory, Texas on 4, 5 and 6th March 1988. The
photoelectric McD photometer was equipped with a cooled EMI 9658A photomultiplier.
With sequential exposures through U,B,V,R and I filters of the Johnson system, a
time resolution in each waveband of approximately 20 seconds was obtained.
Unfortunately, a computer crash caused the loss of the electronically recorded
data and it was necessary to manually type in the raw photon counts from the
printed output (as had also been necessary for the UKIRT data for similar
reasons). Transformations to the Johnson UBVRI system relied on observations of
17 stars listed by Moffett \& Barnes (1979) and the three local standards listed in Section
3.1 (Butler,
1988). The following mean extinction coefficients were employed: $\kappa_U =
0.57$, $\kappa_B = 0.29$, $\kappa_V = 0.17$, $\kappa_R = 0.12$ and $\kappa_I =
0.09$. Since at Mauna Kea we had transformed to the Kron/Cousins R, I system
rather than Johnson's, the McDonald data were further transformed to the
Kron/Cousins system using equations formulated by Bessell (1979). The UBVRI
observations of YY Gem on the three nights 4/5/6 March 1988 are listed in the
supplementary electronic tables. Though no very large optical flares were
recorded at McDonald during this campaign, a flare of approximately 0.6
magnitudes in U was observed simultaneously with a substantial increase in the
6-cm microwave flux recorded by the VLA.
\section{Modelling the Mauna Kea V Light Curves}
The idea of large-scale inhomogeneities in the local surface brightness of stars
is not new, and, after a period of dormancy, was revived
in the mid-twentieth century, particularly after discussion
of possible causes of stellar brightness variation by the careful photometrist
G.\ Kron (1947, 1950, 1952). Subsequently, evidence has accumulated from across
the electromagnetic spectrum of magneto-dynamic activity effects on cool stars
of a few orders of magnitude greater scale than that known for the Sun.
These effects include large areas of the photosphere (spots) with cooler than
average temperature.
This subject formed the theme of IAU Symposium 176 (Strassmeier \& Linsky, 1996),
and was reviewed in Chapter 10 of Budding \& Demircan (2007), which outlines the
methodology pursued in this paper. Of course, the use of uniform circular areas
to model maculation effects is a physical over-simplification, but it is a
computational device that allows
an easily formulated fitting function to match the data to the available
photometric resolution. Even with the highest S/N data currently available,
a macula less than about 5 deg in angular mean radius produces light curve losses only at
the milli-magnitude level. Whether a given maculation region is circular in shape
or uniform in intensity is unfortunately not recoverable. Other indications of surface structure, however,
such as those from the more detailed Zeeman Doppler Imaging techniques
(Donati et al., 2003), tend to support rather simple and uniform structures
for maculae, and there are supporting theoretical arguments related to magnetic loop parameters.
But it is also true that different data sources
(e.g.\ spectroscopy and photometry) and analysis techniques (e.g.\ maximum entropy or information limit)
do not always lead to one clear and consistent picture (Strassmeier, 1992; Radick et al. 1998; Petit et
al., 2004; Baliunas, 2006).
Even if real maculae are neither circular nor uniform, there will be certain
mean values that can represent their (differential) effect to the available accuracy.
Such mean values, as used in sunspot statistical studies, have validity in tracking and
relating data to other activity indicators. So while the surface structure of
active cool stars may well be more complicated than we can presently discern,
the approximations available can summarise observational
findings and stimulate efforts towards more detailed future studies.
Note that the differential maculation variation, which historically caught the attention of
observers, should not cause the steady background component to be disregarded.
The latter, coming from a simultaneously extant, uniform, distribution of maculae,
can have quite a significant effect, as noted by Popper (1998) and Semeniuk (2000),
who derived systematic differences between the distance estimates of certain cool
close binaries, obtained photometrically, with those from the Hipparcos satellite.
They found that the mean surface flux of such cool binaries was too low to allow them
to fit with the normal correlation from their $B - V$ colour indices and concluded
that a uniform distribution of dark spots could account for the difference.
Budding et al.\ (2005) confirmed these results and estimated that the mean
surface flux could be underestimated up to a level of about 30\% in cases of
close binaries similar to YY Gem (see also Torres \& Ribas, 2002).
Computer programs that model the light curves of eclipsing variables with
surface inhomogeneities were discussed by Budding \& Zeilik (1987). This
software was developed into a user-friendly format by M.\ Rhodes, available as
{\sc WINFITTER} from http://home.comcast.net/$\sim$michael.rhodes/.
The adopted technique is an iterative one that progressively defines
parameters affecting light curves, beginning with those relating to the binary
orbit, and subsequently including those controlling the extent and position of
surface spots. The procedure involves a Marquardt-Levenberg
strategy for reducing $\chi^2$ values corresponding to the given
fitting function with an assigned trial set of parameters.
If an advantageous direction for simultaneous optimization
of a group of parameters is located then that direction
can be followed in the iteration sequence (`vector search'),
otherwise the search proceeds by optimizing each parameter in turn.
For a linear problem, $\chi^2$ minimization is equivalent to the
familiar least-squares method (cf.\ Bevington, 1969), but
the parameters in our fitting function are not
in a linear arrangement, preventing an immediate inversion
to the optimal parameter set. However, the $\chi^2$ Hessian
is calculated numerically for a location in parameter space
corresponding to the found minimum. If the
location is a true minimum with all the Hessian's eigenvalues
positive, useful light is thrown on the determinacy
of each individual parameter.
An important issue is the specification of errors.
Photometric data sets usually permit data errors to be
assigned from the spread of differences between comparison and check
stars. We have adopted representative errors based on the observation that
the great majority of data points for YY Gem are within 20\% of the mean.
Uniform error assignment weights the fitting at the bottom of the minima more
highly, which is beneficial in fixing the main parameters, as these regions of
the light curve have relatively high information content.
A check on
the validity of such error estimates comes from the corresponding optimal
$\chi^2$ values.
The ratios $\chi^2/\nu$, where $\nu$ is the number of degrees of
freedom of the fitting, can be compared with those in standard tables of the
$\chi^2$ variate (e.g.\ Pearson \& Hartley, 1954) and a confidence
level for the given model with the adopted error estimates obtained.
If $\chi^2/\nu$ is quite different from unity, we can be confident
that either the data errors are seriously incorrect, or (more often),
the derived model is producing an inadequate representation for
the available precision.
This relates to another well-known aspect of optimization problems,
i.e.\ that while a given model can be adequate to account for
a given data-set, we cannot be sure that it is the only such model.
This is sometimes called the `uniqueness' problem, and, in its most
general form, is insoluble. However, if we confine ourselves to
modelling with a limited set of parameters and the Hessian
at the located $\chi^2$ minimum remains positive definite for that
set with the $\chi^2/\nu$ ratio also within acceptable
confidence limits, then the results are significant within the context.
If either of these two conditions fail
then there are reasonable grounds for doubting
the representation. Provided the conditions are met, the Hessian can be inverted to yield
the error matrix for the parameter-set. The errors listed in Tables 2 and 3
were estimated in this way.
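The optimisation and error-estimation scheme just described can be summarised schematically as follows. This is a minimal sketch built on a generic Levenberg-Marquardt driver rather than the {\sc WINFITTER} code itself, and \texttt{model\_flux} (the eclipse-plus-spot fitting function) together with its trial parameter set \texttt{p0} is a placeholder:
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import chi2

def fit_light_curve(phase, flux, sigma, model_flux, p0):
    # Weighted residuals; their sum of squares is the chi^2 statistic
    resid = lambda p: (flux - model_flux(phase, p)) / sigma
    fit = least_squares(resid, p0, method='lm')  # Marquardt-Levenberg

    nu = len(flux) - len(p0)                     # degrees of freedom
    chi2_min = np.sum(resid(fit.x) ** 2)

    # Near the minimum the chi^2 Hessian is approximately 2 J^T J
    hess = 2.0 * fit.jac.T @ fit.jac
    if np.all(np.linalg.eigvalsh(hess) > 0):
        # true minimum: invert the Hessian for the error matrix
        perr = np.sqrt(np.diag(2.0 * np.linalg.inv(hess)))
    else:
        perr = None                              # determinacy problem

    # adequacy of the model together with the adopted data errors
    confidence = chi2.sf(chi2_min, nu)
    return fit.x, perr, chi2_min / nu, confidence
\end{verbatim}
A $\chi^2/\nu$ ratio far from unity, or a Hessian with non-positive eigenvalues, flags exactly the two failure modes discussed above.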
To speed up a full examination of parameter
space, the data can be binned to form normal points with phase intervals
typically 0.5\% of the period. The residuals from the eclipse model
were first fitted with a simple two-spot model (for procedural details see
Zeilik et al.\ 1988), but this was later revised in a fitting that included a
bright plage visible near primary minimum, on the basis of additional evidence.
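The phase-folding and binning of the data into normal points admits an equally short sketch (\texttt{t0} and \texttt{period} stand for the adopted ephemeris; 200 bins give the 0.5\% phase intervals mentioned above):
\begin{verbatim}
import numpy as np

def normal_points(t, flux, t0, period, nbins=200):
    phase = ((t - t0) / period) % 1.0            # fold on the ephemeris
    idx = np.digitize(phase, np.linspace(0.0, 1.0, nbins + 1)) - 1
    centres = (np.arange(nbins) + 0.5) / nbins
    means = np.array([flux[idx == k].mean() if np.any(idx == k)
                      else np.nan for k in range(nbins)])
    return centres, means                        # normal points
\end{verbatim}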
The high orbital inclination of YY Gem ($\sim$86\degr) results in poor accuracy
for the spot latitude determination. Spots of a given size at the same
longitudes but in opposite latitude hemispheres would generally show similar
light curve effects. Attempts to derive a full spot parameter specification
simultaneously tend to run into determinacy problems: a low-latitude spot might
be moved towards the pole in the modelling, but a quite similar pattern of
variation could then be reproduced by a corresponding decrease in size at the
same latitude. On the other hand, spot longitudes were always fairly well
defined.
We adopted the following procedure:
(1) Fit the eclipses for the 0.6$^m$ V light curve by adjusting the main
geometrical parameters, using the photospheric temperatures
listed by Budding \& Demircan (2007: Table 3.2).
A normalization constant also appears as a free parameter for any
given light curve. An initial value is usually adopted from
setting the highest measured flux to a nominal value
of unity. Subsequent optimization will yield
a better representation for this.
(2) Specify initial values for the longitudes,
latitudes and radii of spots, as in Zeilik et al.\ (1988).
(3) Estimate the relative intensity of spots {$\kappa$} in the V band (compared
to the unspotted photosphere). We assigned a preliminary value of $\kappa$
$\sim$0.2, assuming black-body emission and an approximate mean temperature
difference of $\sim$500 K between spots and photosphere $T_{\rm p} - T_{\rm
s}$.
The low value of $\kappa$ entails that the spot size is not
so sensitive to the adopted temperature decrement for the V light curve.
Since the V spectral region lies some way to the short-wavelength side
of the Planckian peak at the adopted temperature (3770 K), only in the infra-red
will light curves start to show a noticeably decreased maculation amplitude.
This could be simulated by a smaller spot, but that would not be consistent,
of course, with radii of the same feature obtained in V.
In other words, the weight of information in the shorter wavelength photometry
goes towards fixing the spot size: at the longer wavelengths it goes towards
determining the temperature (this wavelength dependence is illustrated in the
sketch following this procedure).
(4) Optimize first spot longitudes, then radii and (possibly) latitudes, using
{\sc CURVEFIT}.
(5) Retrofit the eclipse curve for the stellar parameters with the spot
modulation removed.
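The wavelength dependence noted in step (3) can be illustrated with a
black-body sketch (Python assumed; the decrement used is the mean value
of $\sim$650 K derived below, and the calculation is illustrative only):
\begin{verbatim}
# Black-body spot/photosphere contrast at the four band centres of
# Table 3; the contrast tends to unity in the infra-red, i.e. the
# maculation amplitude decreases at long wavelengths.
import numpy as np

C2 = 1.4388e-2    # m K, second radiation constant
planck = lambda T, lam: 1.0 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

T_p, T_s = 3770.0, 3120.0    # photosphere and spot temperatures (K)
for band, lam in [("V", 5.55e-7), ("R", 6.80e-7),
                  ("I", 8.25e-7), ("K", 2.20e-6)]:
    print(band, "%.2f" % (planck(T_s, lam) / planck(T_p, lam)))
\end{verbatim}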
\begin{table}
\begin{center}
\caption{Parameters used in or derived from the solution for the V light curve}
\begin{tabular}{lcc}
\\
\multicolumn{3}{c}{{\bf 0.6m V Light Curve model}} \\
\hline
Ratio of Luminosities & $L_{1}/L_{2}$ & 1.02$\pm$.005 \\
Ratio of Masses & $M_{1}/M_{2}$ & 1.0 \\
Ratio of Radii & $R_{1}/R_{2}$ & 1.0$\pm$.008 \\
Coeff. Limb Dark. & $u_{1,2}$ & 0.88 \\
Radius of Primary & $R_{1}/A$ & 0.154$\pm$.001 \\
Orbital Inclination (\degr) & $i$ & 86.0 $\pm$0.11 \\
\hline
\\
\end{tabular}
\begin{tabular}{cccc}
\multicolumn{4}{c}{{\bf Three-spot model for V light curve}} \\
\multicolumn{1}{c}{Long.} & \multicolumn{1}{c}{Lat.} & \multicolumn{1}{c}{Radius} & \multicolumn{1}{c}{Temp.\ decr.} \\
\hline
94.8$^\circ$ & -16$^\circ$ & 16.4$^\circ$ & 0.84 \\
250.0$^\circ$ & 45$^\circ$ & 10.0$^\circ$ & 0.84 \\
342.7$^\circ$ & 21$^\circ$ & 12.3$^\circ$ & 1.13 \\
\hline
\multicolumn{2}{c}{Datum error $\Delta l$} & \multicolumn{2}{c}{0.01}\\
\multicolumn{2}{c}{Goodness of fit $\chi^2/\nu$} & \multicolumn{2}{c}{1.26} \\
\hline
\end{tabular}
\end{center}
\end{table}
Final parameters from this procedure are given in Table 2. Adopting the radial
velocity analysis of Torres \& Ribas (2002) and the standard use of Kepler's
third law leads to a separation of the two
mass-centres of 3.898 R$_{\odot}$, so that the radius of either star is some
0.601 R$_{\odot}$.
This is slightly less than the value Torres \& Ribas
calculated due to the difference in the two light-curve fitting results.
Our masses
(0.600 M$_{\odot}$), however, are in almost exact agreement with those of Torres
\& Ribas, with our own (slightly lower) value for the orbital
inclination, i.e.\ the two sets of results are within their error limits of each other.
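A short numerical sketch of the Kepler step (the orbital period is a
literature value for YY Gem, assumed here rather than quoted above):
\begin{verbatim}
# Semi-major axis from Kepler's third law for the adopted total mass.
import numpy as np

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_SUN, R_SUN = 1.989e30, 6.957e8   # kg, m

P = 0.8143 * 86400.0               # orbital period (s), literature value
M_tot = 1.2 * M_SUN                # 2 x 0.6 M_sun, this paper
a = (G * M_tot * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0)
print("a = %.3f R_sun" % (a / R_SUN))   # ~3.90, cf. 3.898 in the text
\end{verbatim}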
The inclination listed in Table 2 derives from the fit to the binary light curve, however,
in the separate fitting that allows spot parameters to be estimated, a mean
value for the inclination has been adopted. This allows the full weight of the
difference curve data to go into the determination of the
geometrical parameters of the starspots. The final value of $\chi^2/\nu$ given at the bottom of Table 2
is a little high for the adopted accuracy of the data, as mentioned above.
The photometric modelling of these V data, taken in isolation,
should then be regarded as a feasible or coarse representation of reality.
Nonetheless it is in keeping with the other results discussed in the
following sections, and the combination of evidence
gives added significance to the model.
Note that this modelling alone cannot distinguish between spots on the primary
and secondary components, particularly in the present case with an essentially
identical pair. A given spot can be situated on the primary at the
longitude indicated in Table 2, or on the secondary at that longitude
$\pm$180\degr. The longitudes of the darkened regions are about 5 and 20 deg from
quadrature, i.e.\ they reach their maximum visibility when the two stars are not
too far from greatest elongation. This recalls Doyle \& Mathioudakis' (1990) finding
that flares tend to occur close to quadrature phases, which, in turn, suggests a
topological connection between flaring regions and cool photospheric spots.
\section{Models for the B, R, I and K light curves and temperatures of spots}
\begin{table*}
\begin{center}
\caption{\bf Relative Intensities of Dark Spots in V,R,I,K and the derived temperature difference between the
spots and the photosphere}
\begin{tabular}{ccccccc}
\multicolumn{1}{c}{{\bf Filter}} &
\multicolumn{1}{c}{{\bf $\lambda_{\rm eff}$(\AA)}} &
\multicolumn{1}{c}{{\bf Limb darkening}} &
\multicolumn{1}{c}{{\bf Mean intensity}} &
\multicolumn{2}{c}{{\bf Spot Temp. Diff. (K) T$_{P}$ - T$_{S}$}} & \\
& & \multicolumn{1}{c}{{\bf Coefficient}} & \multicolumn{1}{c}{{\bf $\kappa$}} &\multicolumn{1}{c}{{\bf Method 1}} &
\multicolumn{1}{c}{{\bf Method 2}} & \\
\hline
V & 5550 & 0.88 & 0.20 & \\
$R_{C}$ & 6800 & 0.73 & 0.24$\pm$.04 & 630 & 420 \\
$I_{C}$ & 8250 & 0.60 & 0.73$\pm$.04 & 200 & 280 \\
K & 22000 & 0.33 & 0.30$\pm$.08 & 1000 & 1320 \\
\end{tabular}
\end{center}
\end{table*}
Following the determination of basic parameters for the V light curve we
processed the light curves from the other filters, assuming the same geometry. We
verified that the B, R, I and K light curves could all be fitted by eclipses
having closely similar numerical values of the main parameters to those of the V.
The large scatter of the U
band (0.6$^m$) data prevented their detailed analysis in this way. In our final
spot models for the B, R, I and K data we adopted longitudes, radii and
latitudes of the spots which were the same as for the V, and assumed that only the
limb-darkening and mean surface brightness of the spots, relative to the
unspotted photosphere, differed.
At a given wavelength the optimized
value of $\kappa$ corresponds to a spot mean temperature through the
implicit relation (1). The photometric information content
thus directs us towards the temperature estimate.
With limb-darkening coefficients at the mean
wavelength of the Cape/Kron R, I and Johnson B and K bands taken from van Hamme
(1993), we determined the relative surface brightness of the spots in the
different photometric bands using {\sc CURVEFIT}. We could then estimate the
difference in temperature of the spotted regions from the unspotted
photosphere.
The mean surface brightness becomes adjustable in the fitting of the infra-red
light curves. The geometrical parameters are held constant to allow the fitting to
concentrate only on the flux ratio for the infra-red data-sets. The increase in this
flux ratio can definitely be seen for the infra-red light curves, though we cannot get
away from the relatively high noise level which detracts from the temperature
estimation. The light curves are normalised in steps: initially an approximate value
is used to scale the input magnitude differences so that the out-of-eclipse flux level
is approximately unity. The finally adopted fractional luminosities are then given
in terms of that corrected reference light level.
Eker (1994), discussing the determination of spot temperatures from broad-band
photometry, suggested two alternative approaches: (a) Assume that the spots and the
normal photosphere both radiate as black bodies at set temperatures. (b) Assume
that the radiation from a spot is the same as that arising from a normal stellar
photosphere of the given temperature, and then use the Barnes-Evans (Barnes \& Evans, 1976)
relationship between colour (V--R) and surface flux $f_V$. Both methods can be
criticised; for example, black-body radiation is unlikely to provide a
very accurate result for localised spectral regions, considering the strong
influence of molecular bands on the flux distribution of dMe stars. On the
other hand, spectrophotometric fluxes, predicted by current models, are not
always sufficiently close to real stellar spectra to give accurate colours over
the temperature range required. Given such issues, we applied the first
procedure to the B, R, I and K light curves, and checked the result with the
second one.
In Table 3, column 4 we give the relative intensities of the spots derived from
the models fitted to the R, I and K light curves, adopting the positions and
radii of the spots from the V light curve, specifying, initially, the dark spot
intensity as 0.2 of that of the normal photosphere. We used the following
identity, where the left side refers to mean fluxes
$f$ in spot ($s$) and photospheric ($p$) regions, and the right adopts an
appropriate flux formula (e.g.\ black body) $\phi(T, \lambda)$:
\begin{center}
\begin{equation}
\frac{(f_{\lambda}/f_{V})_{s}}{(f_{\lambda}/f_V)_{p}}
= \frac{\phi(T_{\rm s},\lambda)}{\phi(T_{\rm p},\lambda)}\frac{\phi(T_{\rm p},V)}{\phi(T_{\rm s},V)} \,\,\, .
\end{equation}
\end{center}
\begin{figure*}
\centering
\includegraphics[width=12cm,clip,angle=0]{FIGURE-4.eps}
\caption{Integrated IUE fluxes (squares) in the ultraviolet emission lines of Mg~{\sc ii} h and k (top)
and the Ly~${\alpha}$ (bottom)
against phase with the scaled V-band model eclipse light curves for YY Gem. The IUE fluxes are in units of 10$^{-12}$ ergs
cm$^{-2}$ s$^{-1}$. Note the reasonable fit of the Mg~{\sc ii} fluxes to the V secondary eclipse curve and the much
broader eclipse in Ly$\alpha$.}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=14cm,clip,angle=0]{FIGURE-5A.eps}
\includegraphics[width=14cm,clip,angle=0]{FIGURE-5B.eps}
\caption{H$\alpha$ line profiles of YY Gem obtained on the 5 and 6 March 1988
at the Crimean Astrophysical Observatory (see Tuominen et al. 1989).
Fluxes are normalised to the continuum.}
\end{figure*}
Here, $\lambda$ is the effective wavelength of the R, I or K filters being
compared with V. The left side concerns empirically determined ratios of the
maculation amplitudes, while the right implicitly yields corresponding spot
temperatures for given wavelengths if we have some value of the mean unspotted
photospheric temperature $T_{\rm p}$. We have adopted the value 3770 K given by
Budding \& Demircan (2007). This was determined using an absolute flux
calibration, with an adopted flux of $8.82\times10^{-12}$ W m$^{-2}$. This
temperature is higher than many values for M1 stars in the literature, as
noted by Torres and Ribas (2002) who preferred a yet higher value of 3820 K.
The bolometric correction required to match the V flux is --1.18 mag. This is
somewhat less than the value --1.25 mag that recent sources would give for
this type of star (cf.\ di Benedetto, 1998; Bessell et al., 1998), but a higher
assigned temperature would increase this discrepancy and our adopted 3770 K
appears a reasonable compromise. In Table 3, column 5, we list the spot temperature
differences which satisfy the aforegoing identity.
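The numerical inversion of relation (1) can be sketched as follows
(illustrative only: the input amplitude ratio is hypothetical, and the
actual Table 3 entries derive from the full light-curve fits):
\begin{verbatim}
# Solve relation (1) for T_s, assuming black-body fluxes phi(T, lam).
import numpy as np
from scipy.optimize import brentq

C2 = 1.4388e-2    # m K
def phi(T, lam):  # Planck law, up to factors that cancel in the ratios
    return 1.0 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

T_p = 3770.0                     # adopted photospheric temperature (K)
lam_V, lam_R = 5.55e-7, 6.80e-7  # V and R_C effective wavelengths (m)
ratio_obs = 1.2                  # hypothetical maculation amplitude ratio

def resid(T_s):
    lhs = (phi(T_s, lam_R) / phi(T_p, lam_R)) \
        * (phi(T_p, lam_V) / phi(T_s, lam_V))
    return lhs - ratio_obs

T_s = brentq(resid, 1500.0, T_p - 1.0)
print("T_p - T_s ~ %.0f K" % (T_p - T_s))   # ~500 K for this input
\end{verbatim}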
Eker's alternative approach seems less direct, given insufficient transmission
details for the three R, I and K wavebands. Here, we assume the (V--R), (V--I)
and (V--K) colours of spotted regions are the same as those of a (very cool)
star of the same spectral type or temperature. For the R and I bands, we first
interpolated the values in Table 4 of Th\'{e} et al. (1984) to determine
spectral types of the relevant spotted regions, and thence corresponding
temperatures (Cox, 2000). For the K band we can find a temperature directly
from the relation for log~$T_e$ to $(V-2.2\mu)$ of Veeder (1974). Results are
given in Table 3, column 6. In both methods, the mean representative
temperatures of all dark spots affecting a given light curve are taken to be the
same.
Empirically derived values for the difference in temperature of spots and the
normal photosphere on YY Gem were found to vary from $\sim$200 K to
$\sim$1200 K, with the difference from the K band larger than that from
the R and I bands. The average results from Table 3 give a temperature difference
of 650 $\pm$ 300 K, which appears in good agreement with the
photosphere -- spot temperature difference of 600 $\pm$ 450 K found
by Vogt (1981) and Eker (1994) for the prototype star BY Dra (M0 V).
The disparity in the results for the mean temperatures of the spotted areas shown
in Table 3 renders infeasible an accurate resolution into distinct penumbral and umbral
regions. While the solar case suggests a significant role for penumbra in large spots (Bray \& Loughhead (1964)),
the fact that derived temperature differentials for the maculation
of cool active stars are generally less than that of large sunspots may well be an indication
that these active regions are heterogeneous in detail, either because of complex shapes and
groupings of spots, the presence of white-light faculae, penumbral components or other,
perhaps temporal, irregularities.
\begin{figure*}
\centering
\vspace*{-2cm}
\includegraphics[width=14cm,clip,angle=-90]{FIGURE-6.eps}
\vspace*{-2cm}
\caption{H$\alpha$ line profile for YY Gem with a profile model fit.}
\end{figure*}
Several calculations of mean spot temperatures for RS CVn stars have been
reported, giving values that differ from the normal photospheric temperature by
typically $\sim$1000 K (Vogt, 1981; Eker, 1994; Neff et al., 1995; Olah \& Strassmeier, 2001;
Berdyugina, 2004).
This difference is lower for M type stars than for the solar type (Vogt, 1981;
Rodono, 1986). At first sight, this is not surprising, as, from their relative
areas, one can expect penumbrae to dominate the flux. However, Dorren
(1987) argued that this is unlikely, as only spots with umbral areas $<$10\% of
the total spot area show penumbral flux domination. Dorren found, for
practically all cases, that the increased contrast of the umbra weights the
result towards the umbral component's effect.
Berdyugina (2005) has written a comprehensive review of current techniques
for the determination of starspot temperatures. In Figure 7 of that publication,
starspot temperature decrements (T$_P$-T$_S$) are shown to correlate with the
photospheric temperature (T$_P$), with both dwarfs and giants scattered about
a single mean curve. The mean starspot temperature decrement we derive here for
YY Gem (650$\pm$300 K) falls close to the mean line in the lower part of
Berdyugina's figure.
\section{Spectroscopic Data}
\subsection{Ultraviolet Spectra from IUE}
Observations of ultraviolet spectra from the International Ultraviolet Explorer Satellite (IUE)
of YY Gem were scheduled for 5 and 6 March 1988 from 03:00 to 11:00 UT. A total of 30 spectra were
obtained; 9 with the short wavelength (1000--2000 \AA) SWP camera and 21 with the long wavelength (2000--3000 \AA)
LWP camera. To improve the time resolution, two exposures of 10 minutes duration were taken with the
LWP camera before the image was downloaded, whereas exposures in the SWP camera were single and longer
(circa 25 min). The spectra obtained covered the secondary eclipse and some
contiguous phases. The Starlink reduction package {\sc IUEDR} was used to
extract the emission line fluxes in Mg~{\sc ii} (2800 \AA), Ly~${\alpha}$,
C~{\sc iv} (1550 \AA) and various other lines. Only the Mg~{\sc ii} and
Ly${\alpha}$ results will be detailed here.
The most noticeable feature of the Mg~{\sc ii} emission during secondary
eclipse is that it is fitted reasonably well by the scaled V light curve (see
Figure 4 - top). This implies that the surface is approximately uniformly
covered by Mg~{\sc ii} emitting plages. At first sight, the existence
of an approximately uniform chromosphere above an evidently non-uniform photosphere
might be unexpected. However, this is not the first
time that such a situation has been suggested by observations.
That the very active dMe stars may be
totally covered (`saturated') with chromospheric regions was proposed by Linsky
\& Gary (1983) as an explanation for the high integrated Mg~{\sc ii} flux on BY Dra like
stars. Mathioudakis \& Doyle (1989) reached a similar conclusion, taking into
account the integrated Mg~{\sc ii} and soft X-ray fluxes of dM-dMe stars.
There is some suggestion of a Mg~{\sc ii} flux increase towards the end of our IUE observation run.
This could be associated with a lower latitude active region at longitude about
230\degr, but there are insufficient data to confirm this suggestion.
In Figure 4 (bottom) we show the flux in the Ly$\alpha$ line with the
geocoronal emission subtracted. The extraction was made using the method of
Byrne \& Doyle (1988). The eclipse light curve in Ly$\alpha$ appears broader
than the V-band curve. We should note, however, that there are only a few data
points at relevant phases, and the out-of-eclipse scatter shows deviations that
are comparable. There are two feasible explanations; one of them intrinsic to
the star and the other not. The first alternative would be that the broad
eclipse arises from a larger volume of Ly${\alpha}$ emitting material than the
photosphere of the secondary star, roughly centred on that star. In effect,
this extended region would be optically thick in Ly${\alpha}$. Another
explanation for the broad decline at secondary minimum in Ly${\alpha}$ could
be variable absorption by interstellar H as the emission lines of the orbiting
stars are Doppler shifted across the rest wavelength of the interstellar
absorption line. This is unlikely to be significant, however, due to the small
range in radial velocity $\sim$15\% at eclipses when the stellar motion is
perpendicular to the line of sight.
We conclude, therefore, that the Ly${\alpha}$ light curve reflects the
scale of the stellar outer atmosphere, i.e.\ optically thick conditions at Ly${\alpha}$
out to heights of the order of twice the photospheric radii (cf. Aschwanden,
2004). This is quite different to the situation for the H${\alpha}$ data that
we examine next. For H${\alpha}$, the predominant formation layer appears
to be located at a relatively low height above the photosphere.
\subsection{Optical Spectra}
Optical spectra covering the H${\alpha}$ region were obtained with the coud\'{e}
spectrograph on the 2.6$^m$ Shajn telescope at the Crimean Astrophysical Observatory on
5 and 6 March 1988 with ten spectra obtained on the first night and nine on the
second (see Tuominen et al.\ 1989). The H${\alpha}$ emission at various phases
close to and out of eclipse is shown in Figures 5 and 6. These profiles were modelled
using the program {\sc PROF} (Olah et al., 1992), where the assumption is made
that the emission flux at a particular wavelength is proportional to the
number of atoms in the line of sight emitting at that wavelength. The program
numerically integrates Doppler and Gaussian broadening contributions for each
of the ten slices across the projected stellar surface.
The results are listed in Table 4. In this table the parameters
$I_{0}$, $\lambda_{0}$, $r$ and $s$ give the peak intensity, mean
wavelength, rotational and Gaussian broadening coefficients respectively for
those emission line profiles which could be separated into two components. We
note a high degree of self-consistency in the rotation parameter, implying mean
equatorial rotational velocities of 38.6 and 38.5 km s$^{-1}$ ($\pm$0.1 km
s$^{-1}$) for the primary and secondary respectively. If we use the orbital
velocity sum of Torres \& Ribas (2002) and assume co-rotation of the
components, we find a pair of essentially equal-sized stars, but with radii
some 4\% bigger than those derived from the broadband light curves. In other
words, there is evidence for a small but significant scale of chromospheric
enhancement from the H${\alpha}$ profiles. The scatter of the Gaussian
component from profile to profile is significantly bigger than that of the
rotation parameter, indicating a detectable variability of local surface
turbulence that could be associated with local inhomogeneities of velocity.
This picture is consistent with that of the Sun where various
H$\alpha$ emission features such as spicules and prominences etc regularly
appear above the surface, particularly
when active regions are close to the limb. A dMe star, such as YY Gem, with
its much higher level of activity would be expected to show extensive
off-disk structures.
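As a consistency sketch (the orbital period is again the literature
value, assumed rather than quoted above):
\begin{verbatim}
# Radius implied by co-rotation, R = v_eq P / (2 pi).
import numpy as np

v_eq = 38.6             # km/s, mean equatorial velocity from the fits
P = 0.8143 * 86400.0    # s, orbital period (literature value)
R_SUN = 6.957e5         # km

R = v_eq * P / (2.0 * np.pi)
print("R ~ %.3f R_sun" % (R / R_SUN))  # ~0.62, vs 0.601 from photometry
\end{verbatim}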
\begin{table}
\caption{Parametrisation of {\sc PROF} fittings to the
H$\alpha$ emission lines shown in Fig 5.}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
HJD & Comp.\ & $I_{0}$ & $\lambda_{0}$(\AA) & r(\AA) & s(\AA) \\
\hline
47226.3736 & 2 & 1.401 & 6562.848 & 0.840 & 0.854 \\ \cline{3-6}
$\phi$ = 0.036 & & $\pm0.015$ & $\pm0.012$ & $\pm0.010$ & $\pm0.013$ \\
\hline
\hline
& 1 & 0.768 & 6561.400 & 0.838 & 0.710 \\ \cline{3-6}
47226.4444 & & $\pm0.013$ & $\pm0.017$ & $\pm0.017$ & $\pm0.021$ \\ \cline{2-6}
$\phi$ = 0.123 & 2 & 0.629 & 6564.411 & 0.838 & 0.705 \\ \cline{3-6}
& & $\pm0.013$ & $\pm0.023$ & $\pm0.025$ & $\pm0.027$ \\
\hline
\hline
& 1 & 0.935 & 6561.054 & 0.833 & 0.827 \\ \cline{3-6}
47226.4681 & & $\pm0.015$ & $\pm0.017$ & $\pm0.015$ & $\pm0.020$ \\ \cline{2-6}
$\phi$ = 0.152 & 2 & 0.749 & 6564.745 & 0.843 & 0.762 \\ \cline{3-6}
& & $\pm0.013$ & $\pm0.020$ & $\pm0.019$ & $\pm0.023$ \\
\hline
\hline
& 1 & 0.884 & 6560.765 & 0.841 & 0.789 \\ \cline{3-6}
47226.4792 & & $\pm0.014$ & $\pm0.017$ & $\pm0.015$ & $\pm0.020$ \\ \cline{2-6}
$\phi$ = 0.166 & 2 & 0.909 & 6565.089 & 0.841 & 0.777 \\ \cline{3-6}
& & $\pm0.014$ & $\pm0.016$ & $\pm0.015$ & $\pm0.019$ \\
\hline
\hline
& 1 & 0.816 & 6560.560 & 0.844 & 0.768 \\ \cline{3-6}
47226.5125 & & $\pm0.013$ & $\pm0.018$ & $\pm0.016$ & $\pm0.020$ \\ \cline{2-6}
$\phi$ = 0.206 & 2 & 0.837 & 6565.290 & 0.841 & 0.751 \\ \cline{3-6}
& & $\pm0.013$ & $\pm0.017$ & $\pm0.016$ & $\pm0.020$ \\
\hline
\hline
& 1 & 0.813 & 6560.562 & 0.847 & 0.763 \\ \cline{3-6}
47227.3132 & & $\pm0.013$ & $\pm0.018$ & $\pm0.017$ & $\pm0.020$ \\ \cline{2-6}
$\phi$ = 0.190 & 2 & 0.682 & 6565.043 & 0.845 & 0.742 \\ \cline{3-6}
& & $\pm0.013$ & $\pm0.021$ & $\pm0.019$ & $\pm0.024$ \\
\hline
\hline
& 1 & 0.927 & 6560.335 & 0.844 & 0.791 \\ \cline{3-6}
47227.3361 & & $\pm0.014$ & $\pm0.016$ & $\pm0.015$ & $\pm0.019$ \\ \cline{2-6}
$\phi$ = 0.218 & 2 & 0.748 & 6565.265 & 0.830 & 0.793 \\ \cline{3-6}
& & $\pm0.014$ & $\pm0.020$ & $\pm0.018$ & $\pm0.024$ \\
\hline
\hline
& 1 & 0.822 & 6560.223 & 0.840 & 0.684 \\ \cline{3-6}
47227.3597 & & $\pm0.013$ & $\pm0.016$ & $\pm0.015$ & $\pm0.019$ \\ \cline{2-6}
$\phi$ = 0.247 & 2 & 0.677 & 6565.435 & 0.835 & 0.704 \\ \cline{3-6}
& & $\pm0.014$ & $\pm0.019$ & $\pm0.019$ & $\pm0.024$ \\
\hline
\hline
& 1 & 0.815 & 6560.218 & 0.843 & 0.719 \\ \cline{3-6}
47227.3875 & & $\pm0.013$ & $\pm0.017$ & $\pm0.016$ & $\pm0.020$ \\ \cline{2-6}
$\phi$ = 0.281 & 2 & 0.705 & 6565.411 & 0.839 & 0.754 \\ \cline{3-6}
& & $\pm0.013$ & $\pm0.020$ & $\pm0.019$ & $\pm0.024$ \\
\hline
\hline
& 1 & 0.911 & 6560.319 & 0.843 & 0.720 \\ \cline{3-6}
47227.4111 & & $\pm0.013$ & $\pm0.015$ & $\pm0.141$ & $\pm0.017$ \\ \cline{2-6}
$\phi$ = 0.310 & 2 & 0.720 & 6565.366 & 0.838 & 0.725 \\ \cline{3-6}
& & $\pm0.013$ & $\pm0.019$ & $\pm0.019$ & $\pm0.023$ \\
\hline
\hline
& 1 & 1.159 & 6560.520 & 0.845 & 0.843 \\ \cline{3-6}
47227.4410 & & $\pm0.014$ & $\pm0.014$ & $\pm0.012$ & $\pm0.015$ \\ \cline{2-6}
$\phi$ = 0.347 & 2 & 0.770 & 6565.224 & 0.845 & 0.748 \\ \cline{3-6}
& & $\pm0.013$ & $\pm0.018$ & $\pm0.017$ & $\pm0.021$ \\
\hline
\hline
& 1 & 1.042 & 6560.738 & 0.838 & 0.824 \\ \cline{3-6}
47227.4646 & & $\pm0.015$ & $\pm0.015$ & $\pm0.013$ & $\pm0.017$ \\ \cline{2-6}
$\phi$ = 0.376 & 2 & 0.661 & 6564.982 & 0.847 & 0.730 \\ \cline{3-6}
& & $\pm0.013$ & $\pm0.021$ & $\pm0.020$ & $\pm0.025$ \\
\hline
\end{tabular}
\end{table}
From Figure 5, we note that at quadrature (phase $\sim$ 0.25),
when the primary component is approaching the observer (blue-shifted spectrum), the
H${\alpha}$ line from the primary is approximately 20\% brighter than that from the secondary.
Likewise, in Table 4, we see that the ratio of the central intensities of the
two components (primary/secondary) is around 1.2 for the three spectra near phase
0.25 (0.218, 0.247, 0.281) rising to 1.5 for the two spectra with phase
0.347 and 0.376.
At this phase the largest spot listed in Table 2, with longitude 94.8$^\circ$ and
latitude $-$16$^\circ$, would be on the facing hemisphere of the primary component.
A reasonable explanation for this would be that a bright H${\alpha}$ emission region on the
primary was topologically associated with a dark spot on the same component.
\section{X-ray and Radio observations}
\subsection{X-ray data from GINGA}
YY Gem was observed with the Ginga (ASTRO-C) satellite, using its Large
Area Proportional Counter (LAC)
from 09:00 UT on March 4 until 11:00 UT on March 6. Turner et al.
(1989) give details of Ginga, and a full description of the LAC instrument including
its design, construction, calibration, operation, energy range, resolution,
and sensitivity.
Primarily because of Ginga's low-Earth orbit, good quality
reduced data on YY Gem was obtained only intermittently during the overall period
listed above, in between Earth occultations, passages through the South Atlantic
Anomaly, and times when background subtraction was otherwise relatively uncertain.
Reduction of the data was performed at Rutherford Appleton Laboratory using the
University of Leicester's Ginga software. The time periods with best coverage and
data quality were approximately 10:00 to 15:00 UT on March 4 (see Figure 7 below),
11:00 to 19:00 UT on March 5, and 06:00 to 11:00 UT on March 6. Sparse data were also
obtained at other intervening times, e.g. immediately after the period shown in
Figure 7, namely from 16:00 UT on March 4 to 05:00 UT on March 5. Further details of the observations
are given with the supplementary electronic tables.
The quiescent X-ray emission during these observations indicates two components with
temperatures of 3--4 MK (soft) and 40--50 MK (hard).
A flare which occurred shortly
after the beginning of observations on 4 March had a
two-component flare emission of 3 MK and 30-35 MK. In both the quiescent and the
flare situations the
low-temperature (soft) component makes up about 10\% of the total flux over
the 2--10 keV range. Both flare and quiescent spectra showed only weak Fe~{\sc
xxv} emission at 6.7 keV. Fits to the X-ray spectra were improved if the
Fe-abundance was reduced to 0.5 solar.
In Figure 7, we see that the flare begins to rise above quiescent level
around 10:12 to 10:15 UT (HJD $\sim$ 2447224.92) and reaches a maximum at
around 10:45 UT (HJD $\sim$ 2447224.94). The estimated integrated X-ray luminosity
is about 3--4 $\times$ 10$^{33}$ ergs (2--10 keV). The flare light curve can be very
well represented by the magnetic reconnection model of Kopp \& Poletto (1984)
and Poletto, Pallavicini \& Kopp (1988). For example, using a time constant of
around 2.5
hours (i.e. the time constant of the underlying reconnection process) and a
start time of 10:15 UT, the predicted light curve fits the rise and peak very
well, and is a reasonable fit to the late decay phase. The long duration of
the X-ray event and the agreement with this model tend to favour an
interpretation as a solar-like, two-ribbon flare event. In solar terms this
equates to an X30-X40 class flare, equivalent to the largest solar flare ever
observed. As with the
increased H$\alpha$ emission mentioned in the previous section, we note that
the flare detected in X-rays on 4 March occurred at binary phase $\sim$0.27,
i.e. close to quadrature and the phase of maximum visibility of the large spot
identified in Section 4 and Table 2.
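The shape of such a light curve can be caricatured by a fast rise
followed by an exponential decay with the quoted time constant (a toy
stand-in only; it does not reproduce the Kopp \& Poletto model itself):
\begin{verbatim}
# Toy flare profile: fast rise times exponential decay (illustrative).
import numpy as np

t0, tau = 10.25, 2.5                   # start time (UT hours), hours
t = np.array([10.75, 12.0, 14.0])      # sample times (UT hours)
rise = 1.0 - np.exp(-(t - t0) / 0.25)  # hypothetical rise timescale
decay = np.exp(-(t - t0) / tau)
print(np.round(rise * decay, 2))       # relative flare flux
\end{verbatim}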
The X-ray data suggest a fairly continuous, relatively active state of the
components of YY Gem, that can probably be associated with micro-flaring, since
only a single large flare was observed during the campaign. The
background level in X-rays has a significant hard component that is even harder
than during the flares. This background may well be associated with the same
electrons that give rise to the more or less continuous microwave emission.
The flare seen in X-rays by Ginga was also observed optically with the 0.6$^m$
reflector on Mauna Kea. For most of the duration of the flare, observations
with this telescope were sporadic, intended for spot modulation rather than continuous
monitoring for flare activity. The U and B observations over this period are
shown as filled black squares in the lower two panels of Figure 7. With only
three or four U/B observations during the gradual phase of the flare, the precise
shape of the light curve at this time is uncertain. Nevertheless, we believe
that a reasonable estimate of the integrated optical flux during the gradual
phase of the flare can be derived from the data. We estimate the total
integrated flare energy for the gradual phase of the flare to be $8 \times 10^{33}$ ergs
and $27 \times 10^{33}$ ergs in the U and B bands, respectively.
For a period of around 18
minutes the mode of observation was changed to continuous U-band monitoring with a time
resolution of approximately 6 seconds. These data, which have been corrected for extinction and
converted to magnitudes, are shown in the middle panel of Figure 7 with filled circles. By
chance, the observations captured the impulsive phase of the optical flare which sits on
top of the gradual flare described above. The integrated U-band energy of the impulsive phase
is $\sim 6 \times 10^{32}$ ergs, i.e. about an order of magnitude less than the U-band
energy of the gradual phase. Assuming the ratio of the U-band to B-band fluxes to be the same
for the impulsive stage of the flare as for the gradual phase, we derive a total optical
energy in the near ultraviolet/blue region of the spectrum for both stages of the flare of $37
\times 10^{33}$ ergs and a
ratio of X-ray to optical flux $\sim$ 0.09. This is
close to the values for flares on UV Ceti
($\sim$ 0.2), YZ CMi ($\sim$ 0.1) and AD Leo ($\sim$ 0.1-0.2), listed by Katsova (1981). In
her summary, Katsova (1981) notes that values L$_X$/L$_{opt}$ $\sim$ 0.1 - 0.2 are consistent
with expectations for a flare eruption from a closed magnetic field configuration. Table 5
summarises the flare energetics from this campaign.
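The bookkeeping behind Table 5 amounts to simple arithmetic
(illustrative sketch):
\begin{verbatim}
# Flare energy budget for the 4 March event (units of 1e33 ergs).
U_grad, B_grad = 8.0, 27.0          # gradual phase, from the photometry
U_imp = 0.6                         # impulsive phase, U band
B_imp = U_imp * (B_grad / U_grad)   # assumes the gradual-phase B/U ratio
E_opt = U_grad + B_grad + U_imp + B_imp
E_x = 3.5                           # Ginga 2-10 keV estimate (mid-range)
print("E_opt ~ %.1f, L_X/L_opt ~ %.2f" % (E_opt, E_x / E_opt))
\end{verbatim}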
\begin{figure*}
\includegraphics[width=12cm,clip,angle=0]{FIGURE-7.eps}
\caption{The flare on YY Gem observed optically and in X-rays by the
Ginga Satellite on 4 March 1988. Top - X-ray count rate in the Large Area Proportional Counter (LAC); Middle and Bottom - U
and B band photometry from 60cm telescope on Mauna Kea}
\end{figure*}
\subsection{Microwave observations from the VLA}
YY Gem was observed in the microwave region with the VLA on two
successive days, March 4/5 and 6, 1988. The VLA was operated
in two sub-arrays: one observing continuously at C band (6 cm; 4.8 GHz), and the other alternating
between X band (3.6 cm; 8.4 GHz) and L band (20 cm; 1.46 GHz). The system was detected at
both of the shorter wavelengths but not at 20 cm. Figure 8 shows the 6 cm flux
curve of the system from the observations made on 4/5 March. Both the primary
(phase = 1.0)
and secondary (phase = 1.5) eclipses appear to be visible in the flux curves with a slight
shift of about 0.03 in the phase of minima compared to the V-band light curve.
Both radio eclipses appear to be significantly narrower (by a factor $\sim$2)
than the optical eclipses. The
6 cm flux at primary minimum falls close to zero which indicates that most if
not all the 6 cm emission lies on the side of the primary facing the secondary
or in the space between the two components. This is similar to what was found for
the RS CVn binary CF Tuc by Gunn et al.\ (1997). At secondary eclipse (phase $\sim$1.5),
the 6 cm flux falls to about 50\% of the normal
non-eclipsed level. The secondary eclipse does not appear in observations made
on 6 March. These apparent eclipses cannot be considered definitive, however,
as comparable variations are seen elsewhere in the microwave data at times
that definitely cannot be attributed to eclipses. If the feature
at phase 0.98 is an eclipse it would suggest a radio emitting region
somewhat displaced towards the advancing hemisphere.
\begin{figure*}
\includegraphics[width=12cm,clip,angle=0]{FIGURE-8.eps}
\caption{The 6-cm microwave flux (in mJy) from the VLA for YY Gem on 4/5 March with one sigma error
bars}
\end{figure*}
On the 5th March at approximately 03:00 UT a large flare was seen in 6 cm
emission that lasted for one and a half to two hours. This
flare was also observed in the optical at McDonald Observatory. In Figure 9
we show the 6-cm flux curve together
with the U and B band photometry for the impulsive
part of the flare followed by its prolonged decline. We note that this flare
occurs at binary phase $\sim$0.15, i.e. not far from quadrature.
\begin{figure*}
\includegraphics[width=12cm,clip,angle=0]{FIGURE-9.eps}
\caption{The flare observed in 6-cm emission on 5 March by the VLA and
simultaneously in the optical at the
McDonald Observatory. Top - 6-cm microwave flux in mJy; Middle and Bottom - U
and B band light curves}
\end{figure*}
From the optical light curves of the flare we derive a peak flux of
approximately 7.4 $\times$ 10$^{30}$ ergs s$^{-1}$ and 5.9 $\times$ 10$^{30}$ ergs s$^{-1}$ in U and B
respectively. Total integrated flare energies were found to be almost identical
in U and B at 2.1 $\times$ 10$^{33}$ ergs, giving a total optical energy of the order
of 5 $\times$ 10$^{33}$ ergs. One third of this energy comes from the long tail of the
flare, which lasted approximately one hour in the optical and 1.7 hours in the
radio.
The peak flux at 6 cm during the flare reaches 0.9 mJy, equivalent
to a total emitted flux at flare maximum of 2.4 $\times$ 10$^{14}$ ergs s$^{-1}$ Hz$^{-1}$.
The measured time-averaged radio flux, on the other hand, is 1.6 $\times$
10$^{14}$ ergs s$^{-1}$ Hz$^{-1}$. Using Figure 6 from Benz \& Gudel (1994),
which relates the time-averaged X-ray and radio fluxes for
active stars, we can estimate the soft X-ray flux from the measured VLA flux
for the above flare on YY Gem. We obtain a value for the time-averaged soft X-ray
flux of $3 \times 10^{29}$ ergs s$^{-1}$. Assuming the flare lasted as long in X-rays
as at 6cm radio wavelengths (namely, $\sim$ 1.7 hours), we derive an integrated
flare energy in soft X-rays of $1.8 \times 10^{33}$ ergs. With an observed optical
energy for the same flare of $\sim 5 \times 10^{33}$ ergs from above, we obtain a
value of L$_{X}$/L$_{opt} \sim 0.36$, in reasonable agreement with that
observed directly by Ginga described in Section 7.1 and on flares on other stars
given by Katsova (1981).
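A sketch of this estimate (the canonical ratio
$L_X/L_R \sim 10^{15.5}$ Hz is assumed here as a stand-in for reading
Figure 6 of Benz \& Gudel, so the intermediate number differs slightly
from the value adopted above):
\begin{verbatim}
# Indirect soft X-ray energy from the time-averaged 6-cm luminosity.
L_R = 1.6e14              # erg/s/Hz, time-averaged radio luminosity
L_X = L_R * 10**15.5      # erg/s; ~5e29 (the text reads off ~3e29)
duration = 1.7 * 3600.0   # s, flare duration at 6 cm
print("E_X ~ %.1e ergs" % (L_X * duration))
\end{verbatim}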
Regrettably, because the periods with successful Ginga data were so sporadic,
there is limited overlap between the VLA and Ginga observation windows.
Nevertheless, it appears that Ginga did just catch the tail end of the VLA/optical
flare on 4 March. However, the single Ginga data point obtained at this time is
quite inadequate to give quantitative information on the strength of the coincident
X-ray flare.
\begin{table*}
\begin{center}
\caption{Integrated optical and X-ray energies of flares on YY Gem in units of 10$^{33}$ ergs}
\begin{tabular}{ccccccc}
{\bf UT (max)} & {\bf HJD (max)} & {\bf U-band} & {\bf B-band} & {\bf L$_{opt}$} & {\bf X-ray (2--10 keV)} & {\bf L$_{X}$/L$_{opt}$} \\ \hline
March 4 10:45 & 2447224.94 & 8.6 & 29 & 37 & 3--4 & 0.09\\
March 5 03:00 & 2447225.62 & 2.1 & 2.1 & 5 & $\sim$ 1.8 & 0.36 \\
\end{tabular}
\end{center}
\end{table*}
\section{Conclusions}
The March 1988 coordinated multiwavelength campaign on YY Geminorum resulted
in less extensive ground-based coverage than originally planned. This was
principally due to the exceptionally bad weather which prevailed in the Canary
Islands at that time. Nevertheless, useful data were obtained from
ground-based facilities in the western hemisphere, notably the VLA and Mauna
Kea in Hawaii, as well as space-based facilities on board the IUE and Ginga
satellites.
With the aid of computer modelling of the V-band eclipse light curve, we
obtain almost identical luminosities and radii for the two components of YY
Gem and, when combined with previously published radial velocity curves,
masses of $0.6\,M_{\odot}$ and radii of $0.601\,R_{\odot}$ are derived. Fits to the
out-of-eclipse light curves with spot models give two cool spots roughly $165^{\circ}$
apart in longitude and at latitudes of $-16^{\circ}$ and $45^{\circ}$ and one
bright spot at $21^{\circ}$ latitude. Due to the high
orbital inclination, there is significant ambiguity in the latitudes derived
and it is not possible to determine on which component the maculation occurs.
Combining these models with the additional broad-band colours B, R, I and K,
allows estimates to be made of the temperature differential between the cool spots
and the normal photosphere. We estimate a value of 650 $\pm$ 300 K, closely
similar to previous results by Vogt (1981) on BY Draconis. Analysis of the line
profiles of H$\alpha$ in contemporaneous spectra of YY Gem indicates that the
radii of the H$\alpha$ emission volumes are
about 4\% larger than the optical radii.
We have observed the secondary eclipse of YY Gem in the ultraviolet emission
lines of Mg~{\sc ii} and Ly$\alpha$ with the IUE satellite. When scaled, the Mg~{\sc ii}
eclipse light curve is fitted well by the V-band light curve, indicating the
presence of a chromosphere on YY Gem with Mg~{\sc ii} emitted uniformly from
contiguous plage regions. The existence of a widespread network of
emission in the chromospheric lines of Ca~{\sc ii} and Mg~{\sc ii} on the Sun has been known
for many years (see Phillips, 1992). A similar picture emerges from studies of
many spotted stars where high levels of Mg~{\sc ii} line flux have been observed along
with a relatively small degree of rotational modulation (see Butler et al.
1987; Butler, 1996). The Ly$\alpha$ light curve of YY Gem, on the other hand, has a much
broader secondary eclipse suggesting an extended outer atmosphere with a height
of the order of twice the radius.
YY Gem was detectable with the VLA at both 3.6 and 6 cm wavelengths but
not at 20 cm. Both the primary and secondary eclipses appear in the 6 cm light
curve on 5 March, but on the following day the secondary eclipse is no longer
evident. A poor signal-to-noise ratio prevents us from drawing any definitive
conclusions about either eclipse at radio wavelengths, however there are
indications that both the primary and secondary eclipses are narrower and
slightly offset in phase in 6 cm emission compared to the optical V-band which
would be indicative of relatively compact radio emitting regions. The almost
total primary eclipse compared to a drop to only half the uneclipsed flux at
the secondary eclipse would suggest that the emitting region on the primary is
more compact than that on the secondary.
Four flares which were detected optically on YY Gem during this campaign, and
which have been reported earlier, showed evidence of periodicity
(see Doyle et al., 1990).
Two possible mechanisms which could have given rise to the periodicity are: (1)
oscillations in magnetic filaments associated with the flares, or (2) fast
magneto-acoustic waves between the binary components as described by Gao et al.
(2008).
In this paper we show optical observations of two further flares on YY Gem,
one of which was also seen in soft X-rays by the Ginga satellite and
the other in 6-cm radiation by the VLA. Estimates of the integrated optical
energy, based on the photometry are possible for both flares. For the flare
seen by Ginga, we can also derive the integrated soft X-ray flux, leading to
an estimate for the ratio of the integrated X-ray to optical energy,
L$_X$/L$_{opt}$ $\sim$ 0.1, closely similar to the ratio observed in flares
on other dMe stars.
The well established correspondence between the soft X-ray and radio fluxes
by Benz \& Gudel (1994) allows us to make an indirect estimate of the soft X-ray
energy of the flare seen by the VLA and thereby estimate L$_X$/L$_{opt}$
for this additional flare.
The ratio L$_X$/L$_{opt}$ for both YY Gem flares lies close to the value
predicted by Katsova's gas-dynamic model of a stellar flare in a constrained
magnetic loop. If this ratio was $\sim$ 1000, as found for some solar flares,
it would indicate, according to Katsova, an origin in an open magnetic
structure where evaporation precluded excessive heating
of the affected plasma.
Lastly, we note the coincidence in binary phase of several indicators of
magnetic activity on YY Gem. The flares observed in X-rays and the radio region
occurred at phases 0.27 and 0.15 respectively; i.e. within the phase interval
during which the largest spot at longitude 94.8$^{\circ}$ would be visible.
Spectroscopic data obtained as part of this campaign also provides evidence
of an H$\alpha$ emission region on the primary component with increasing
visibility from phase 0.25 to 0.35. A reasonable interpretation of these observations
would be that all are manifestations of magnetic activity
from a single large active region on the primary component of YY Gem. This is not
an unexpected conclusion as similar associations between flares, H$\alpha$ emission
regions and spots are common on the Sun and have been seen previously on other stellar
systems (see Rodono et al. 1987, Olah et al. 1992, Butler, 1996).
\section*{Acknowledgements}
NE wishes to acknowledge support of the European Student Exchange Programme (ERASMUS)
for a 3 month period of research and study at
Armagh Observatory. EB acknowledges stimulative input and hospitality from Armagh Observatory. We wish to thank:
the National Radio Astronomy Observatories of the USA for time on the VLA, the University
of Hawaii for access to the 60cm telescope on Mauna Kea, the European Space Agency for observations with the
IUE satellite, the Science and Engineering Research Council of the United Kingdom for access to UKIRT and the
Institute of Space and Aeronautical Science of Japan (ISAS) for hospitality
and technical assistance during observations with the Japan-UK LAC instrument on Ginga. Research at
Armagh Observatory is grant-aided by the
Department for Culture, Arts and Leisure of Northern Ireland.
\section{Introduction}
The transition form factor (TFF) of the pion has attracted great
interest in recent years. In particular, it is related to the pion radius
at small virtualities of the photon.
It is used in the description of hadron--photon interactions, and
through it one can match the predictions of pQCD with those of
non-perturbative methods.
The space-like region of the transition form factor of the $\pi^0$
is quite well investigated. The available experimental data cover a
fairly wide range of $Q^2$. The CELLO\cite{cello} and
CLEO\cite{cleo} collaborations measured it in the intervals $0.7-2.2
\ GeV^2$ and $1.6-8.0 \ GeV^2$, respectively. The BaBar\cite{babar}
and Belle\cite{belle} measurements extended the range of $Q^2$ up to 40 $GeV^2$. At
$1.0\leq Q^2\leq 9 \ GeV^2$ all the experiments show the same behaviour and
agree with theoretical expectations\cite{lepage}. On the other hand, there is a well-known
contradiction between the BaBar\cite{babar} and Belle\cite{belle}
measurements of the $\pi^0$ TFF in the region of
large space-like photon virtualities. At the same time,
in the region of low $Q^2$ and in the time-like region the number of
direct measurements of the pion TFF is quite small. More precise
data are expected in the future from the BES-III\cite{bes3} and
KLOE-2\cite{kloe2} collaborations.
Our paper is dedicated to the pion transition form factor in the
time-like region. In this region the pion TFF can be studied in the
process $e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow \pi^{0}\gamma$.
The SND\cite{datasnd} (the most accurate data on the process
$e^{+}e^{-}\rightarrow\pi^{0}\gamma$ to date) and CMD2\cite{datacmd}
experiments have collected data that cover the range $0.6-1.38 \
GeV$ in $\sqrt{s}$. Theoretically, the process was studied in the
work \cite{HKLNS} using the methods of dispersion theory.
In the future, CLAS\cite{clas} will provide more precise direct
measurements of the time-like pion TFF.
Recently a new method was proposed to describe the pion form factor,
based on the dispersive representation of the axial anomaly
\cite{hor-ter},\cite{kot2011} (see also the review \cite{IOFFE2}). This method has the advantage of being
model-independent and of not relying on QCD factorization. Also, in
this approach the pion TFF can be described in the whole region of
photon virtualities $Q^2=-q^2$. In the paper \cite{kot2011} the
pion TFF was obtained in the whole space-like region $Q^2>0$. Later
it was analytically continued to the time-like region
\cite{KOT2013}. Since this method is based on an exact
relation implied by the axial anomaly, it provides a very powerful
tool to study the pion (and other pseudoscalar meson) TFFs both in
the space-like and time-like regions.
The goal of this work is to check predictions of the ASR method in
the time-like region at low $q^2$ using the SND and CMD2
experimental data.
The paper is organized as follows: section \ref{tcsc} contains a
brief description of the main steps of the pion TFF calculation and
the calculation of the total cross section and angular distribution of
the process $e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow
\pi^{0}\gamma$; in the next section, \ref{comp}, the obtained
expressions are compared with the experimental data: subsection
\ref{general} analyses the situation far from the poles,
while subsections \ref{peak1} and \ref{peak2} are
dedicated to modifications of the expression for
the pion TFF that describe the data peaks within the ASR
approach; in the last section, \ref{last}, we summarize the obtained
results.
\section{Total cross section calculation} \label{tcsc}
Let us consider a process
$e^{+}e^{-}\rightarrow\gamma^{*}\rightarrow\pi^{0}\gamma$. It can be
expressed in terms of the pion TFF $F(q^2)$, which is defined as:
\begin{equation}\label{TFF}
\int d^{4}x e^{ikx} \langle \pi^0(p)|T\{J_\mu (x) J_\nu(0)
\}|0\rangle = \epsilon_{\mu\nu\rho\sigma}k^\rho q^\sigma F(q^2) \;,
\end{equation}
where $k,q$ are the momenta of the photons, $p=q-k$, and the two electromagnetic
currents are $J_{\mu}=\sum\limits_{i=u,d,s} e_i \bar{q_i}\gamma_\mu
q_i$. In what follows, all expressions are written in units of the electron
charge $|e|=1$, while in the equations for cross sections and amplitudes the
explicit dependence on the electron charge is restored. One photon is real
$(k^2=0)$ and the other one is virtual. The
Feynman diagram is shown in Fig.\ref{fig:1}.
\begin{figure}[h!]
\centerline{
\includegraphics[width=0.6\textwidth]{diagram.eps}}
\caption{Feynman diagram of the process.} \label{fig:1}
\end{figure}
The expression for the pion TFF was obtained in approach based on
the dispersive representation of the axial anomaly in the works
\cite{kot2011},\cite{KOT2013}. Here we briefly recall the main steps
of the method.
The vector-vector-axial triangle graph amplitude, where the axial
anomaly occurs, contains an axial current $J_{\alpha 5}$ and two
electromagnetic currents $J_{\mu}=\sum\limits_{i=u,d,s} e_i
\bar{q_i}\gamma_\mu q_i$,
\begin{equation} \label{VVA}
T_{\alpha \mu\nu}(k,q)=\int d^4 x d^4 y e^{(ikx+iqy)} \langle 0|T\{
J_{\alpha 5}(0) J_\mu (x) J_\nu(y) \}|0\rangle,
\end{equation}
where $k$ and $q$ are the photon momenta. In what follows, we limit
ourselves to the case when one of the photons is on-shell ($k^2=0$).
As was shown in the paper \cite{hor-ter}, the imaginary part $A_3$
of the invariant amplitude at the tensor structure $k_{\nu}
\varepsilon_{\alpha \mu \rho \sigma}k^{\rho} q^{\sigma}$ in the
variable $ (k+q)^2= s > 0$ satisfies the following relation:
\begin{equation}\label{asr}
\int_{0}^{\infty} A_{3}(s,q^{2}; m_i^{2}) ds =
\frac{1}{2\pi}\frac{1}{\sqrt{2}}.
\end{equation}
The relation \eqref{asr} is exact: $\alpha_s$ corrections are zero
and it is expected that all nonperturbative corrections are absent
as well (due to 't Hooft's principle \cite{hor-ter,tHooft}). Note,
in the original paper \cite{hor-ter} the eq.\eqref{asr} was
obtained for the space-like photon $(q^2<0)$. Later in the paper
\cite{KOT2013} analytical continuation to the time-like region was
developed.
Supposing that $A_3$ decreases fast enough at $\lvert q^2 \rvert \to
\infty$ and is analytical everywhere except the cut
$q^2\in(0,+\infty)$, it was found
\begin{align} \label{asr-re}
p.v.\int_{0}^{\infty}ds\int_{0}^{\infty}dy
\frac{\rho(s,y)}{y-q^2}=\frac{1}{\sqrt{2}},\\ \label{asr-im}
\int_{0}^{\infty}ds\, \rho(s,q^2)=0 \, .
\end{align}
where $\rho=2\,{\rm Im}_{q^2} A_3$. Saturating the lhs of the three-point
correlation function \eqref{VVA} with resonances in the axial
channel, singling out the first (pion) contribution and replacing
the contributions of the higher resonances with the integral of the
spectral density, the ASR in the time-like region \eqref{asr-re}
leads to
\begin{equation} \label{qhd3}
\pi f_{\pi}Re F_{\pi\gamma}(q^2)+ \int_{s_3}^{\infty} A_{3}(s,q^{2})
ds =\frac{1}{2\pi}\frac{1}{\sqrt{2}},
\end{equation}
where $s_3$ is the duality interval of the pion in the isovector channel,
the definition of the TFFs $F_{\pi\gamma}$ is \eqref{TFF}, and the
meson decay constant $f_{\pi}$ is,
\begin{align} \label{def_f}
\langle& 0|J_{\alpha 5}(0) |\pi^0(p)\rangle= i p_\alpha f_{\pi},
\end{align}
where $J_{\alpha
5}(0)=\frac{1}{\sqrt{2}}(\bar{u}\gamma_{\alpha}\gamma_5u-\bar{d}\gamma_{\alpha}\gamma_5d)$.
As the integral of $A_3$ in eq.\eqref{qhd3} is over the region
$s>s_3$, we expect that nonperturbative corrections to $A_3$ in this
region are small enough and we can use the one-loop expression for
it.
Then the ASR leads to the pion TFF:
\begin{equation}
F(q^2)=\frac{1}{2\sqrt{2}\pi^2f_{\pi}}\frac{s_3}{s_3-q^2}.
\label{ff}
\end{equation}
As was discussed in the papers \cite{kot2011},\cite{KOT2013}, this
result is valid in both the time-like and space-like regions (except at the
pole $q^2=s_3$).
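As a simple numerical check (an illustrative Python sketch; the value
of $f_\pi$ is an assumed PDG-like value in the normalization of
\eqref{def_f}), eq.\eqref{ff} at $q^2=0$ reproduces the axial-anomaly
normalization:
\begin{verbatim}
# F(0) = 1/(2*sqrt(2)*pi^2*f_pi) and s_3 = 4*pi^2*f_pi^2.
import numpy as np

f_pi = 0.1304   # GeV, assumed value in this normalization
F0 = 1.0 / (2.0 * np.sqrt(2.0) * np.pi**2 * f_pi)
print("F(0) = %.3f GeV^-1" % F0)     # the anomaly value, ~0.27 GeV^-1
print("s_3  = %.2f GeV^2" % (4.0 * np.pi**2 * f_pi**2))   # ~0.67
\end{verbatim}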
The numerical value of $s_3$ was obtained in the limit
$-q^2\rightarrow\infty$ of the space-like ASR \cite{kot2011}, $s_3 =
4\pi^2f^2_{\pi}=0.67 \ GeV^2 \pm 10\% $. This expression coincides
with the one obtained earlier from the two-point correlator
analysis\cite{rad} and is close to the numerical value obtained from
two-point sum rules\cite{SVZ}. In the recent analysis light cone and
anomaly sum rules predictions were compared \cite{OPST}, and it was
shown that $s_3$ is indeed approximately constant with the accuracy
about $\pm10\%$. At same time it was noted that in the region of
small $q^2$ the value $s_3=0.61 \ GeV^2$ is more preferable.
As we can see from \eqref{ff} the pion time-like TFF has a pole at
$q^2=s_3$, which is numerically close to $m^2_{\rho}\approx0.6 \ GeV^2$
and to $m_\omega^2\approx0.61 \ GeV^2$. The pole behavior (which
corresponds to zero width of the ρ meson) appeared since we used
the one-loop approximation for $A_3$, neglecting the possible
dependence of $s$ on $Q^2$ and final-state interactions. Therefore,
the eq. (\ref{ff}) can be used not too close to the pole $q^2 =
s_3$. The effect of the finite width can be estimated if one takes
into account small corrections: as perturbative($\alpha_s^2$ and
higher) as non-perturbative; and also the small effects of mixing.
One can write down the amplitude $M_{\pi\gamma}$ as:
\begin{equation}
M_{\pi\gamma} =
ie^3\bar{u}\gamma^{\alpha}u\frac{g_{\alpha}^{\mu}}{q^2}\epsilon_{\mu\nu\rho\sigma}k^{\rho}q^{\sigma}F(q^2)(e^{\nu})^{*}
\label{amp}
\end{equation}
Neglecting masses and summing over polarisations, the square of the
amplitude takes the form:
\begin{equation}
|M_{\pi\gamma}|^2 =
\frac{e^6}{4}Sp[\not{p_{1}}\gamma^{\mu}\epsilon_{\mu\nu\rho\sigma}k^{\rho}q^{\sigma}\not{p_{2}}\gamma^{\mu'}\epsilon_{\mu'\nu'\rho'\sigma'}k^{\rho'}q^{\sigma'}g^{\nu\nu'}]\frac{|F(q^2)|^2}{q^4}.
\label{amp_sq}
\end{equation}
The total cross section has the form:
\begin{equation}
\sigma=\frac{2}{3}\pi^2\alpha_{QED}^3|F(q^2)|^2.
\label{asr_cs}
\end{equation}
Finally, substituting eq.\eqref{ff} into \eqref{asr_cs} we obtain the expression for the total cross section:
\begin{equation}
\sigma_{theor}=\frac{\alpha_{QED}^3}{12\pi^{2}f_{\pi}^2}\frac{s_{3}^2}{(s_{3}-q^2)^2}.
\label{asr_cs_theor}
\end{equation}
The expression for the angular distribution has the familiar form
implied by angular momentum conservation:
\begin{equation}
\label{angular}
\frac{d\sigma}{d\cos\theta}\Big|_{\pi^{0}\gamma}=\frac{\alpha_{QED}^3}{32\pi^2f_{\pi}^2}\frac{s_{3}^2}{(s_3-q^2)^2}(1+\cos^2\theta).
\end{equation}
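For orientation, the following sketch (illustrative only) evaluates
eq.\eqref{asr_cs_theor} away from the pole and converts to nanobarns:
\begin{verbatim}
# ASR total cross section, in nb, as a function of sqrt(s).
import numpy as np

alpha, f_pi, s3 = 1.0 / 137.036, 0.1304, 0.61   # GeV units
GEV2_TO_NB = 3.894e5                            # 1 GeV^-2 in nb

def sigma_nb(sqrt_s):
    q2 = sqrt_s**2
    return (alpha**3 / (12.0 * np.pi**2 * f_pi**2)
            * (s3 / (s3 - q2))**2 * GEV2_TO_NB)

for rs in (0.60, 0.95, 1.20):
    print("sqrt(s) = %.2f GeV: sigma ~ %.3f nb" % (rs, sigma_nb(rs)))
\end{verbatim}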
\section{Comparison with data}\label{comp}
\subsection{Comparison with experiment out of poles}\label{general}
In this section we compare the theoretical results of the ASR
approach \eqref{asr_cs_theor} with the SND2016 and CMD2 experimental data
\cite{datasnd},\cite{datacmd}. The curves corresponding to
eq.\eqref{asr_cs_theor} with the lower and upper limits $s_3 = 0.61, 0.67
\ GeV^2$ are shown (by the dashed and solid lines) in Fig.\ref{fig:2}.
\begin{figure}[h!]
\centerline{
\includegraphics[width=1.0\textwidth]{alldata.eps}}
\caption{Total cross section $\sigma_{theor}$ vs. SND2016 and CMD2
\cite{datasnd,datacmd} data.} \label{fig:2}
\end{figure}
Let us emphasize that eq.\eqref{ff} indicates the position of the
$\rho$--$\omega$ resonance (in the zero-width approximation), but does
not indicate the existence of the second resonance, corresponding to
the $\phi$ meson mass. The reason for this is that the equation for
the pion TFF was obtained for the isovector channel of the axial
current and it does not take into account the possible effects of
mixing. So hadrons containing s-quarks cannot be accounted for in such an
approximation. A discussion and possible treatment of this will be
given in the next sections.
Note that from Fig.\ref{fig:2} one can see that the curve with $s_3=0.67
\ GeV^2$ (solid line) describes the data much worse than the one with $s_3=0.61
\ GeV^2$ (dashed line). This agrees with the results of
matching the Light-Cone Sum Rule (LCSR) and ASR approaches \cite{OPST} in
the space-like region, where, as mentioned earlier, it was
shown that in the region of small $q^2$ the value $s_3=0.61 \ GeV^2$
is preferable, while within the $\pm10\%$ error of the $s_3$
calculation the value $s_3 = 0.67 \ GeV^2$ also agrees with the
experiment. But in the time-like region of $q^2$, as can be seen
from Fig.\ref{fig:2}, the equation for the pion TFF \eqref{ff}, and
the corresponding total cross section \eqref{asr_cs_theor}, are much more
sensitive to the value of $s_3$, and the value $s_3 = 0.67 \ GeV^2$ is in
much worse agreement with the experiment than $s_3=0.61 \ GeV^2$.
Thus, the time-like region is a kind of microscope for the analysis of
the parameters of the axial channel in the space-like region. So we
can expect that the analysis of the pion TFF in the time-like region of
$q^2$ allows one to clarify the values of the small corrections, which
are hard to determine in the space-like region.
The fact that the pole of eq.\eqref{asr_cs_theor}, $q^2=s_3=0.61 \ GeV^2$, is
close to $m_{\rho}^2\approx0.60 \ GeV^2, m_{\omega}^2\approx0.61 \ GeV^2$ is
quite interesting. It actually means that from the parameters of the
axial channel one can obtain the spectrum of masses in the vector
channel. The tendency of the $s_3$ variation in the space-like channel
may be attributed to the effect of the pole in the time-like
channel, requiring that
$$s_3(m_V^2)=m_V^2.$$
Clearly, at the present accuracy one cannot distinguish the $\rho$ and
$\omega$ masses.
It seems that a more accurate analysis of the anomaly sum rule
\eqref{asr-re}, which takes into account the effects of
$\pi,\eta,\eta'$ mixing (as well as perturbative and
non-perturbative small corrections), can provide a better
description of the experimental data, in particular of the second
peak in Fig. \ref{fig:2}. Theoretical estimates show that if
one includes the effects of mixing in the calculation of the
pion TFF, then one obtains a more complicated expression than
eq.\eqref{ff}. It should be a linear combination of terms of the
type of eq.\eqref{ff}, each of them with its own value
of $s_3$. This work is now in progress.
In the next section we discuss a modification of eq.\eqref{ff} in
order to describe the peaks and estimate how well this approximation
is able to describe the experimental data. First, we consider the
first peak and then generalize the result to the second one.
\subsection{$\rho$-$\omega$ peak}\label{peak1}
Let us consider the first experimental peak, which
corresponds to the $\rho$--$\omega$ resonance. Here we perform fits of
the data below $1.0 \ GeV$. The equation for the pion
TFF \eqref{ff} was obtained for zero width of the $\rho$ meson, so that
$Im F(q^2)\sim\delta(s_3-q^2)$. To obtain the correct description of the data,
one should add to the denominator of \eqref{ff} a term which
produces a resonance corresponding to the data peak. This means that the
$\rho$ meson should have a finite width, so that $Im F(q^2)$ no longer
has such a trivial form.
We modify \eqref{ff} by adding to the denominator a term of the form
$im_v\Gamma_v$, so that the equation for the pion time-like TFF takes a form
similar to the relativistic Breit--Wigner amplitude:
\begin{equation}
F(q^2)=\frac{1}{2\sqrt{2}\pi^2f_{\pi}}\frac{s_3}{s_3-q^2+im_v\Gamma_v}.
\label{mtff}
\end{equation}
Substituting into \eqref{amp_sq} the modified pion TFF equation
\eqref{mtff}, with the values of $m_v, \Gamma_v, s_3=m_v^2$
corresponding to the $\rho$ and $\omega$ mesons (we take the
average values of the masses and widths of the $\rho$ and $\omega$
mesons from the PDG \cite{PDG}: $m_{\rho}=0.77526 \ GeV,
\Gamma_{\rho}=0.149 \ GeV, m_{\omega}=0.78265 \ GeV,
\Gamma_{\omega}=0.00849 \ GeV$), and doing simple calculations, we
obtain two fits for the total cross section:
\begin{equation}
\sigma_{fit-\rho,\omega}=\frac{\alpha_{QED}^3}{12\pi^{2}f_{\pi}^2}\frac{s_{3\rho,\omega}^2}{(s_{3\rho,\omega}-q^2)^2+m_{\rho,\omega}^2\Gamma_{\rho,\omega}^2}.
\label{fit_cs1}
\end{equation}
The result is shown in Fig.~\ref{fig:3} for the $\rho$ and $\omega$ cases
(dotted and dot-dashed lines, respectively). It is clearly seen
that both cases give a poor description of the experimental
data. The description does not improve when more complicated
single-resonance models are applied.
\begin{figure}[h!]
\includegraphics[width=1.0\textwidth]{rhoomg1res50ka.eps}
\caption{Fits of the $\rho$-$\omega$ peak with a single resonance.}
\label{fig:3}
\end{figure}
Thus we may assume that a reasonable description of the first
experimental peak can be achieved with a two-resonance
parametrisation. Let us take a linear combination of amplitudes of the
type of \eqref{mtff}, so that the pion TFF is a sum of two
terms, each with the values of $m_v, \Gamma_v,
s_{3v}=m_v^2$ corresponding to the $\rho$ and $\omega$ mesons:
\begin{equation}
\label{2resff}
F(q^2)=\frac{1}{2\sqrt{2}\pi^2f_{\pi}}\left(\alpha\frac{s_{3\rho}}{s_{3\rho}-q^2+im_{\rho}\Gamma_{\rho}}+\beta\frac{s_{3\omega}}{s_{3\omega}-q^2+im_{\omega}\Gamma_{\omega}}\right).
\end{equation}
Then the total cross section takes the form:
\begin{eqnarray}\label{2rescs}
\sigma_{fit-2resonances} & = & \frac{\alpha_{QED}^3}{12\pi^{2}f_{\pi}^2}\Big(\alpha^2\frac{s^2_{3\rho}}{(s_{3\rho}-q^2)^2+m_{\rho}^2\Gamma_{\rho}^2}+\beta^2\frac{s^2_{3\omega}}{(s_{3\omega}-q^2)^2+m_{\omega}^2\Gamma_{\omega}^2}+ {}\nonumber \\
& & +2\alpha\beta s_{3\rho}s_{3\omega}\frac{(s_{3\rho}-q^2)(s_{3\omega}-q^2)+m_{\rho}\Gamma_{\rho}m_{\omega}\Gamma_{\omega}}{((s_{3\rho}-q^2)^2+m_{\rho}^2\Gamma_{\rho}^2)((s_{3\omega}-q^2)^2+m_{\omega}^2\Gamma^2_{\omega})}\Big).
\end{eqnarray}
Note that far from the poles one should recover formula \eqref{ff},
so $\alpha+\beta\approx1$.
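As a quick numerical cross-check (not part of the original derivation), the following minimal Python sketch verifies that the coherent sum of two amplitudes of the type \eqref{mtff} reproduces the expanded form \eqref{2rescs}; the masses and widths are the PDG values quoted above, the coefficients $\alpha,\beta$ are the fit values quoted below, and the evaluation point $q^2=0.55 \ GeV^2$ is an arbitrary choice:
\begin{verbatim}
import numpy as np

mr, gr = 0.77526, 0.149                 # rho mass and width (GeV)
mw, gw = 0.78265, 0.00849               # omega mass and width (GeV)
s3r, s3w = mr**2, mw**2
al, be = 0.52, 0.49                     # fit coefficients quoted below

def bw(s3, m, g, q2):                   # one term of (2resff), prefactor stripped
    return s3 / (s3 - q2 + 1j * m * g)

q2 = 0.55                               # arbitrary evaluation point (GeV^2)
direct = abs(al * bw(s3r, mr, gr, q2) + be * bw(s3w, mw, gw, q2))**2
expanded = (al**2 * s3r**2 / ((s3r - q2)**2 + mr**2 * gr**2)
          + be**2 * s3w**2 / ((s3w - q2)**2 + mw**2 * gw**2)
          + 2 * al * be * s3r * s3w
            * ((s3r - q2) * (s3w - q2) + mr * gr * mw * gw)
            / (((s3r - q2)**2 + mr**2 * gr**2)
               * ((s3w - q2)**2 + mw**2 * gw**2)))
print(np.isclose(direct, expanded))     # True
\end{verbatim}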
The result is shown in Fig.~\ref{fig:4} (solid line); the values of the fit parameters are $\alpha = 0.52,\ \beta = 0.49$, and the values of $\chi^2$ are given in Table~\ref{tbl1}.
\begin{figure}[h!]
\includegraphics[width=1.0\textwidth]{rhoomg2res.eps}
\caption{Fits of the $\rho$-$\omega$ peak with two resonances. The inset is a zoom-in of the peak.} \label{fig:4}
\end{figure}
\begin{table}[h!]
\caption{Values of $\chi^2$ for fit with 2 resonances.}
\label{tbl1}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& \textbf{CMD2} & \textbf{SND2016} & \textbf{CMD2+SND2016} \\
\hline
$\chi^2/d.o.f.$ & 2.29 & 1.46 & 1.71 \\
\hline
\end{tabular}
\end{center}
\end{table}
On the other hand, a value as small as the difference between the $\rho$ and $\omega$ meson masses is clearly beyond the accuracy of the ASR approach. That is why we perform one more fit using \eqref{2resff}, supposing that $m_{\rho} = m_{\omega}$ and $s_{3\rho}=s_{3\omega}=m_{\rho}^2=m_{\omega}^2\approx0.61 \ GeV^2$, but with the PDG~\cite{PDG} values for the widths: $\Gamma_{\rho}=0.149 \ GeV, \Gamma_{\omega}=0.00849 \ GeV.$ The result is shown in Fig.~\ref{fig:4} (dashed line); the values of the fit parameters are the same, $\alpha = 0.52,\ \beta = 0.49$, and the values of $\chi^2$ are given in Table~\ref{tbl2}.
\begin{table}[h!]
\caption{Values of $\chi^2$ for fit with 2 resonances with $m_{\rho} = m_{\omega}$.}
\label{tbl2}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& \textbf{CMD2} & \textbf{SND2016} & \textbf{CMD2+SND2016} \\
\hline
$\chi^2/d.o.f.$ & 2.12 & 1.07 & 1.42 \\
\hline
\end{tabular}
\end{center}
\end{table}
As one can see from Fig.~\ref{fig:4} and Tables~\ref{tbl1}, \ref{tbl2}, both variants are in very good agreement with the data. Note that the second one describes the data even better than the first one.
In the limit of a real photon, $q^2\rightarrow0$, and neglecting $\Gamma_{\rho},\Gamma_{\omega}$, the pion TFF takes the form
$$F(0)=\frac{1}{2\sqrt{2}\pi^2f_{\pi}}(\alpha+\beta).$$
Correspondingly, one finds
$$\Gamma(\pi^0\rightarrow2\gamma)\approx7.9 \ eV,$$
which is in perfect agreement with the experimental data \cite{JAGER}.
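This number can be reproduced with a short calculation. Note that the two-photon width formula $\Gamma=\frac{\pi}{4}\alpha_{QED}^2 m_{\pi^0}^3|F(0)|^2$ and the value $f_\pi\approx 0.1307 \ GeV$ used below are our assumptions about the conventions, since neither is stated explicitly in the text:
\begin{verbatim}
import numpy as np

alpha_qed = 1 / 137.036
m_pi0, f_pi = 0.13498, 0.1307               # GeV; f_pi value is assumed
al, be = 0.52, 0.49                         # fit coefficients
F0 = (al + be) / (2 * np.sqrt(2) * np.pi**2 * f_pi)
gamma = np.pi / 4 * alpha_qed**2 * m_pi0**3 * F0**2   # assumed width formula
print(gamma * 1e9)                          # ~7.9 (GeV -> eV)
\end{verbatim}
With $\alpha+\beta=1$ the same formula gives the anomaly value $\approx 7.7$ eV, so the quoted $7.9$ eV simply reflects the $(\alpha+\beta)^2\approx 1.02$ enhancement of the fit.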
\subsection{$\phi$ peak}\label{peak2}
Let us now consider the second peak. Theoretical estimates show
(this work is now in progress) that such a peak can appear
if one takes into account $\pi-\eta-\eta'$ mixing and the s-quark mass
in the anomaly sum rules. In this case the pion TFF will
have three terms: in addition to the terms corresponding to the $\rho$ and $\omega$ mesons, one should add a term corresponding to the $\phi$ meson (the values $m_{\phi}=1.0194 \ GeV$ and $\Gamma_{\phi}=0.00426 \ GeV$ were taken from the PDG \cite{PDGphi}). Let us suppose that $s_{3\phi}$ is close to $m_{\phi}^2$ and $m_{\rho}=m_{\omega},s_{3\rho}=s_{3\omega}=m_{\rho}^2=m_{\omega}^2=s_3\approx0.61 \ GeV^2.$ Thus we obtain:
\begin{equation}
\label{3resff}
F(q^2)=\frac{1}{2\sqrt{2}\pi^2f_{\pi^0}}\left(\alpha\frac{s_{3}}{s_{3}-q^2+im_{\rho}\Gamma_{\rho}}+\beta\frac{s_{3}}{s_{3}-q^2+im_{\omega}\Gamma_{\omega}}+\gamma\frac{s_{3\phi}}{s_{3\phi}-q^2+im_{\phi}\Gamma_{\phi}}\right).
\end{equation}
Substituting eq.~\eqref{3resff} into \eqref{asr_cs}, one obtains the equation for the total cross section.
The result of the fit is shown in Fig.~\ref{fig:5}; the values of the fit coefficients are $\alpha = 0.556, \ \beta = 0.49, \ \gamma=-0.036$. The $\chi^2$ values are given in Table~\ref{tbl3}.
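For reproducibility, a minimal sketch of how the fitted cross section can be assembled from \eqref{3resff} is given below. It assumes the same overall normalization as in \eqref{fit_cs1}, i.e. $\sigma=\frac{\alpha_{QED}^3}{12\pi^2 f_\pi^2}\,|2\sqrt{2}\pi^2 f_\pi F(q^2)|^2$, and the value $f_\pi\approx 0.1307 \ GeV$ is an assumed input:
\begin{verbatim}
import numpy as np

alpha_qed, f_pi = 1 / 137.036, 0.1307        # f_pi is an assumed input (GeV)
s3, s3phi = 0.61, 1.0194**2
params = [(0.556,  s3,    np.sqrt(s3), 0.149),    # (coef, s3, m, Gamma): rho
          (0.490,  s3,    np.sqrt(s3), 0.00849),  # omega
          (-0.036, s3phi, 1.0194,      0.00426)]  # phi

def sigma(q2):
    """sigma(q2) = alpha^3/(12 pi^2 f_pi^2) |2 sqrt(2) pi^2 f_pi F(q2)|^2."""
    amp = sum(c * s / (s - q2 + 1j * m * g) for c, s, m, g in params)
    return alpha_qed**3 / (12 * np.pi**2 * f_pi**2) * abs(amp)**2

# the narrow phi term produces a sharp structure on the rho-omega tail:
print(sigma(1.0194**2) / sigma(0.9**2))
\end{verbatim}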
\begin{table}[h!]
\caption{Values of $\chi^2$ for fit with 3 resonances.}
\label{tbl3}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& \textbf{CMD2} & \textbf{SND2016} & \textbf{CMD2+SND2016} \\
\hline
$\chi^2/d.o.f.$ & 2.53 & 1.52 & 1.87 \\
\hline
\end{tabular}
\end{center}
\end{table}
As one can see from Fig.~\ref{fig:5}, the fit with three resonances gives a good description of the experimental data. In the same way as was discussed at the end of the previous section, using \eqref{3resff} we can obtain in the limit $q^2\rightarrow0$ the value
$$\Gamma(\pi^0\rightarrow2\gamma)\approx7.9 \ eV$$
in good agreement with experiment \cite{JAGER}. We find that the coefficient $\gamma$ is negative, which is dictated by the interference term in the vicinity of the $\phi$ peak. Moreover, this allows the phase shift between the $\phi$ and the $\rho,\omega$ contributions to be close to $\pi$, supporting the negative $\gamma$.
\begin{figure}[h!]
\includegraphics[width=1.0\textwidth]{3res50k.eps}
\caption{Fit of the whole data range. The insets are zoom-ins of the $\rho$-$\omega$ and $\phi$ peaks.}
\label{fig:5}
\end{figure}
Let us pay attention to the coefficient $\gamma$: it is found to be
small. As mentioned above, the third term in
\eqref{3resff} can appear due to the effects of $\pi-\eta-\eta'$ mixing,
so the coefficient $\gamma$ corresponds to the mixing angle
$\theta_{\pi-\eta}$. A theoretical prediction of the mixing angle
$\theta_{\pi-\eta}$ was made in the works \cite{IOFFE,GTW} (see also \cite{IO}):
\begin{equation}\label{angle}
\theta_{\pi-\eta}
=\frac{1}{\sqrt{3}}\frac{m_u-m_d}{m_u+m_d}\cdot\frac{m^2_{\pi}}{m^2_{\eta}}
=- 0.0150 \pm0.020.
\end{equation}
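For orientation, evaluating \eqref{angle} numerically is a one-liner; the current-quark masses $m_u, m_d$ below are our assumed PDG-like inputs, not values quoted in the text:
\begin{verbatim}
import numpy as np

m_u, m_d = 2.16e-3, 4.67e-3                 # GeV, assumed PDG-like inputs
m_pi, m_eta = 0.13498, 0.54786              # GeV
theta = (1 / np.sqrt(3)) * (m_u - m_d) / (m_u + m_d) * m_pi**2 / m_eta**2
print(theta)                                # ~ -0.013
\end{verbatim}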
Note that the values of the coefficient $\gamma$ and the
mixing angle $\theta_{\pi-\eta}$ are consistent in order of
magnitude. This consistency provides additional support for our
assumption about the origin of the third term of \eqref{3resff}.
Since the $\phi$ meson has a very narrow width, the
contribution of the third term is non-negligible only in the region
of the $\phi$ resonance, and thus it can be safely neglected everywhere except in the resonance region. Note that the fit of the total cross section with three resonances (dashed line) differs from the total cross section with a single pole \eqref{asr_cs} (solid line) by $\approx10\%$ at $\sqrt{s}=1.3 \ GeV$, and the difference between them decreases with increasing $q^2$; thus, as mentioned earlier in the text, in the region far from the poles the total cross section with resonances coincides with the single-pole total cross section.
As the result of the analysis of the fits, one can conclude that the
modified equation for the pion TFF \eqref{3resff} and the
corresponding total cross section show rather good
agreement with the experimental data. The whole spectrum of the data
can probably be described if one takes into account the mixing in the
axial channel. Thus, in the modified ASR approach, which takes
into account the effects of the $\pi-\eta-\eta'$ mixing and
small corrections (both perturbative and non-perturbative), the
equation for the pion TFF will contain three terms. Theoretical estimates show that the coefficients $\alpha,\beta$ and $\gamma$ of eq.~\eqref{3resff} can be expressed in terms of the mixing angles $\theta_{\eta-\eta'}$, $\theta_{\pi-\eta}$, $\theta_{\pi-\eta'}$. The results of the fits provide strong restrictions on the expected values of the $\pi-\eta-\eta'$ mixing angles and the parameters of the modified ASR approach. This work is now in progress.
\section{Conclusions and Outlook}\label{last}
1. We show that the result obtained in the paper \cite{KOT2013} for
the pion time-like TFF, eq.~\eqref{ff}, can be used to describe the data in the
regions far from the pole, and that the position of the pole coincides with
the experimental peak.
It has been shown that eq.~\eqref{ff}, and the corresponding equation for the total cross section, eq.~\eqref{asr_cs_theor}, agree better with the data if $s_3\approx0.61 \ GeV^2$.
This coincides with the result of matching ASR and LCSR \cite{OPST}, namely that $s_3$ should vary between $0.67 \ GeV^2$ at
large $q^2$ and $0.61 \ GeV^2$ at low $q^2$.
2. We propose a modification of the equation for the pion TFF in
order to describe the data in the resonance regions. As a result of
the analysis, we may conclude that in order to describe the whole
spectrum of data, one needs a formula for the pion TFF
containing three terms. Using the modified equation for the pion TFF,
eq.~\eqref{3resff}, we perform fits of the experimental data and obtain
the values of the fit coefficients. The obtained result provides a good description of the experimental data, including the correct limit for the case of real photons ($q^2\rightarrow0$).
3. To obtain three terms in the pion TFF within the ASR approach one should include the effects of the $\pi-\eta-\eta'$ mixing. Let us stress that the obtained value of the fit coefficient $\gamma$ corresponding to the third term of \eqref{3resff} has the same order of magnitude as the $\theta_{\pi-\eta}$ mixing angle,
which confirms our assumptions.
The obtained values of the fit coefficients lead to strong restrictions
on the values of the $\pi-\eta-\eta'$ mixing angles and can be used
for matching with the theoretical values calculated, in particular, in the
modified ASR approach.
The work was supported in part by RFBR grant 14-01-00647.
\newpage
\section{Introduction}
Chiral symmetry is an intrinsic symmetry of quantum chromodynamics (QCD) in the massless limit of quarks and is spontaneously broken due to the nonzero quark condensate. The spontaneous symmetry breaking (SSB) of the $SU_L(3)\times SU_R(3)$ into the flavor $SU(3)$ generates eight Goldstone particles sorted into the pseudoscalar octet composed of $\{\pi, K, \eta_8\}$, which are slightly massive due to the small $u,d,s$ quark masses. If SSB also applied to the $U_L(1)\times U_R(1)$ sector of the chiral symmetry, its breaking into the $U_V(1)$ that corresponds to baryon number conservation would imply the existence of an additional Goldstone particle, namely, a light flavor singlet pseudoscalar meson. The $\eta'$ meson is predominantly a flavor singlet but is too massive to be taken as a candidate for this Goldstone particle. The $\eta'$ mass puzzle has a direct connection with the QCD $U_A(1)$ anomaly, in that the anomalous gluonic term, the topological charge density, breaks the conservation of the flavor singlet axial current even in the chiral limit. Even though the anomalous axial vector relation can be written as the divergence of a gauge variant axial vector, which is zero and implies a $U_A(1)$ symmetry, Kogut and Susskind~\cite{Kogut:1974kt} pointed out that its spontaneous breaking generates a massless mode that violates the Gell-Mann-Okubo relation and thereby renders $\eta'$ more massive. With respect to the nontrivial topology of the QCD vacuum, Witten~\cite{Witten:1978bc} and Veneziano~\cite{Veneziano:1979ec} proposed a mechanism for the origin of the $\eta'$ mass in which the nonperturbative coupling of the topological charge density and the flavor singlet pseudoscalar induces a self-energy correction $m_0^2$, which is proportional to the topological susceptibility $\chi$ of gauge fields. Using the physical mass of $\eta'$, the value of $\chi$ is estimated to be around $(180~\mathrm{MeV})^4$, which is supported by lattice QCD calculations~\cite{Cichy:2015jra}. Another interesting property of $\eta'$ is its large production rate in the $J/\psi$ radiative decays~\cite{Zyla:2020zbs}, which are abundant in gluons and are expected to favor the production of glueballs. This observation, along with the mass generation mechanism of the $\eta'$, manifests the strong coupling of $\eta'$ to gluons and thereby prompts the conjecture that $\eta'$ may mix substantially with pseudoscalar glueballs, since they have the same quantum numbers. The KLOE Collaboration analyzed the processes $\phi\to \gamma \eta$ and $\phi\to \gamma\eta'$ and found that the $\eta'$-glueball mixing might be required, with a mixing angle as large as $(22\pm 3)^\circ$~\cite{KLOE:2006guu}. In contrast, another phenomenological analysis of the KLOE result gave the mixing angle $(12\pm13)^\circ$, which is consistent with zero within the large error~\cite{Escribano:2007cd}. A phenomenological analysis of the processes $J/\psi(\psi')$ decaying into a pseudoscalar and a vector final state obtained the $\eta'$-glueball mixing angle to be around $9^\circ$ by considering the $\eta-\eta'$-glueball mixing model~\cite{Li:2007ky}. Obviously, the determined mixing angle varies in a fairly large range. Based on the $\eta'$-glueball mixing picture, there have been theoretical discussions on the possibility of $\eta(1405)$ as a pseudoscalar glueball candidate~\cite{Li:2007ky,Cheng:2008ss,He:2009sb,Li:2009rk,Tsai:2011dp}.
However, the quenched lattice QCD studies~\cite{Morningstar:1997ff,Morningstar:1999rf,Chen:2005mg} predict that the mass of the pseudoscalar glueball is around 2.4-2.6 GeV, which is confirmed by lattice simulations with dynamical quarks~\cite{Chowdhury:2014mra,Richards:2010ck,Gregory:2012hu,Sun:2017ipk}. This raises a question about $\eta(1405)$ as a glueball candidate because of its much lighter mass. On the other hand, the strong hint for $\eta(1405)$ to be a glueball candidate is the observation that there exist three isoscalar pseudoscalar mesons $\eta(1295)$, $\eta(1405)$ and $\eta(1475)$, so that one of them is surplus according to the quark model. If $\eta(1405)$ and $\eta(1475)$ are the same state~\cite{Wu:2011yx}, there is no need for a pseudoscalar glueball state in this mass region. Some mixing model studies also favor the pseudoscalar glueball to have a mass heavier than 2 GeV~\cite{Mathieu:2009sg,Qin:2017qes}.
In this work, we will investigate the possible mixing of the isoscalar pseudoscalar meson and the pseudoscalar glueball in $N_f=2$ lattice QCD. The isoscalar pseudoscalar meson is named $\eta$ throughout this work, which is the $SU(2)$ counterpart of the $SU(3)$ flavor singlet (approximately $\eta'$) in the $N_f=3$ case. We have generated a large ensemble of gauge configurations with $N_f=2$ degenerate $u,d$ quarks at a pion mass $m_\pi\approx 350$ MeV, so we can make a precise determination of the $\eta$ mass. The calculation of the $\eta'$ mass (and the $\eta'-\eta$ mixing) has been performed in several $N_f=2+1$ lattice QCD studies, whose results are in agreement with the physical value after the chiral extrapolation~\cite{Christ:2010dd,Michael:2013gka,Fukaya:2015ara}. There are also many studies of the $\eta$ mass from $N_f=2$ lattice QCD~\cite{CP-PACS:2002exu,Hashimoto:2008xg,Sun:2017ipk,Dimopoulos:2018xkm}. According to the Witten-Veneziano mechanism (WV), in the $N_f=2$ case the pion mass and the $\eta$ mass are related as $m_\eta^2=m_\pi^2+m_0^2$, where $m_0^2$ is the correction from the topology-induced interaction and is proportional to $N_f$. As a check of WV, we would like to take a look at this relation and use the obtained $m_0^2$ to predict the $\eta'$ mass in the physical case (a pioneering work along this line in the quenched approximation can be found in Ref.~\cite{Kuramashi:1994aj}). After that, we calculate the correlation functions of the $\eta$ operator and the glueball operator, from which the mixing angle can be extracted. The strategy of the study is similar to that used in the $\eta_c$-glueball mixing~\cite{Zhang:2021xvl}. As an exploratory study, we tentatively treat the pseudoscalar glueball as a stable particle and ignore its resonance nature in the presence of light sea quarks. Obviously, the numerical task involves the calculation of the annihilation diagrams of the $u,d$ quarks, so we adopt the distillation method~\cite{Peardon:2009gh}, which enables gauge-covariant smearing of quark fields and all-to-all quark propagators (perambulators) simultaneously. Since we also have the perambulators of the valence charm quark, we calculate the $\eta-\eta_c$ correlation functions as well and explore their properties.
This paper is organized as follows: In Section~\ref{sec:numerical} we describe the lattice setup, the operator construction and the formulation of the correlation functions. Section~\ref{sec:II} gives the theoretical formalism of the meson-glueball mixing, where the data analyses and the results can be found. The preliminary results from the $\eta-\eta_c$ correlation functions are presented in Section~\ref{sec:etac-eta}. The discussion and summary are given in Section~\ref{sec:summary}.
\section{Numerical Details}\label{sec:numerical}
\subsection{Lattice Setup}
We generate gauge configurations with $N_f=2$ degenerate $u,d$ quarks on an $L^3\times T=16^3\times 128$ anisotropic lattice. We use the tadpole-improved Symanzik gauge action for anisotropic lattices~\cite{Morningstar:1997ff,Chen:2005mg} and the tadpole-improved anisotropic clover fermion action~\cite{Edwards:2008ja,Sun:2017ipk}. The parameters in the action are tuned to give the aspect ratio $\xi=a_s/a_t\approx 5.3$, where $a_t$ and $a_s$ are the temporal and spatial lattice spacings, respectively. The aspect ratio $\xi\approx 5.3$ is checked through the dispersion relation of the pseudoscalar $\pi$ (along with that of $\eta$ calculated using the distillation method, see Sec.~\ref{sec:II}), which takes the continuum form
\begin{equation}~\label{eq:disp}
E_X^2(\vec{p})a_t^2=m_X^2 a_t^2+\frac{1}{\xi^2} |\vec{p}|^2a_s^2,
\end{equation}
where $X$ refers to a specific hadron state, $\vec{p}$ is the spatial momentum $\vec{p}=\frac{2\pi}{La_s}\vec{n}$ on the lattice, and $\vec{n}$ is the mode of the spatial momentum. Figure~\ref{fig:dispVPS} shows the energies obtained from the correlation functions of $\pi$, $\eta$ and $\rho$ at spatial momentum modes up to $\vec{n}=(1,2,2)$. The data points of $\pi$ and $\eta$ fall on straight lines perfectly and can be well described by Eq.~(\ref{eq:disp}) with $\xi=5.365(5)$ and $5.34(3)$, respectively (illustrated as shaded lines in the figure). The fit to the energies in the $\rho$ channel using Eq.~(\ref{eq:disp}) gives $\xi=5.58(1)$, which deviates from $5.3$ drastically. This can be attributed to the influence of nearby $P$-wave $\pi\pi$ scattering states that have the same center-of-mass momentum as $\rho$. So we do not use the $\rho$ meson to check the $\xi$ value.
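The $\xi$ determination described above amounts to a linear fit of $E^2a_t^2$ versus $|\vec{p}|^2a_s^2$. A minimal sketch of this fit (with synthetic, noise-free energies standing in for the measured ones) reads:
\begin{verbatim}
import numpy as np

L = 16
n2 = np.array([0, 1, 2, 3, 4, 5, 6, 8, 9])      # momentum modes |n|^2
p2 = (2 * np.pi / L)**2 * n2                    # |p|^2 a_s^2
E2 = 0.05055**2 + p2 / 5.3**2                   # synthetic pion energies squared
slope, intercept = np.polyfit(p2, E2, 1)
print(1 / np.sqrt(slope), np.sqrt(intercept))   # xi ~ 5.3 and m_pi a_t
\end{verbatim}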
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/E_light.pdf}
\caption{\label{fig:dispVPS} The dispersion relations of $\pi$ (blue) and $\eta$ (red). The data points are lattice results and the shaded lines illustrate the fitted results using Eq.~(\ref{eq:disp}). The best fit values of $\xi$ are 5.365(5) and 5.34(3) for $\pi$ and $\eta$, respectively. The cyan points are ground state energies obtained from the $\rho$ correlators at different spatial momentum $\vec{p}$, and the cyan line is the fitted result with $\xi=5.58(1)$ which deviates from $5.3$ drastically due to the interference with nearby $\pi\pi$ states.}
\end{figure}
\begin{table}[t]
\centering\caption{\label{tab:v2-ps2} Experimental values of the masses of the pseudoscalar (P) and vector mesons (V) of quark configurations $n\bar{q}$, $n\bar{s}$, $n\bar{c}$, $s\bar{c}$, $n\bar{b}$ and $s\bar{b}$~\cite{Zyla:2020zbs}. Here $n$ refers to the light $u,d$ quarks. The right most column lists the $m_V^2-m_{PS}^2~(\mathrm{GeV}^2)$. In the row of $s\bar{s}$ states, the mass of the $s\bar{s}$ pseudoscalar $\eta_s$ is determined by the HPQCD collaboration from lattice QCD calculations~\cite{Davies:2009tsa}.}
\begin{ruledtabular}
\begin{tabular}{cccc}
$q_l\bar{q}$ & $m_V$ (GeV) & $m_{PS}$ (GeV) & $m_V^2-m_{PS}^2~(\mathrm{GeV}^2)$ \\
\hline
$n\bar{n}$ & 0.775 & 0.140 & 0.581 \\
$n\bar{s}$ & 0.896 & 0.494 & 0.559 \\
$s\bar{s}$ & 1.020 & 0.686~\cite{Davies:2009tsa} & 0.570 \\
$n\bar{c}$ & 2.010 & 1.870 & 0.543 \\
$s\bar{c}$ & 2.112 & 1.968 & 0.588 \\
$n\bar{b}$ & 5.325 & 5.279 & 0.481 \\
$s\bar{b}$ & 5.415 & 5.367 & 0.523
\end{tabular}
\end{ruledtabular}
\end{table}
There are subtleties in the determination of the lattice spacing for our lattice setup. As the first step, we use the Sommer scale parameter $r_0=0.491$ fm to estimate the lattice spacing $a_s$ by calculating the static potential. After that, we tune the bare quark mass parameter to give a pion mass of around 300 MeV. However, we find that the mass of the vector meson $\rho$ is around 750 MeV, which is obviously lower than expected (note that $m_\rho$ is 770 MeV at the physical pion mass $m_\pi\sim 139$ MeV). This may be attributed to the uncertainty of $r_0$, since its value varies in the range $0.46-0.50$ fm as determined by different lattice groups~\cite{FlavourLatticeAveragingGroup:2019iem}. Since hadron masses calculated on the lattice depend on both the quark mass parameter and the lattice spacing, the reasonable procedure is that one first sets the lattice spacing and then tunes the quark mass parameter to an expected value. So it is desirable to choose physical quantities that are insensitive to quark masses. Experimentally, there is an interesting relation between the pseudoscalar meson masses $m_{PS}$ and the vector meson masses $m_V$ of the quark configuration $q_l\bar{q}$,
\begin{equation}\label{eq:dmsq}
\Delta m^2 \equiv m_V^2 -m_{PS}^2\approx 0.56-0.58~~\mathrm{GeV}^2
\end{equation}
where $q_l$ stands for the $u,d,s$ quarks and $q$ stands for the $u,d,s,c$ quarks. The PDG results for the masses of these vector and pseudoscalar mesons~\cite{Zyla:2020zbs} are collected in Table~\ref{tab:v2-ps2} along with their mass-squared differences. Even though the reason for this relation is still unknown, empirically these values are insensitive to quark masses. On the other hand, the mass of $\eta_s$, the $s\bar{s}$ counterpart of $\pi$ (not considering the $s\bar{s}$ annihilation effects in the calculation), has been determined to be $m_{\eta_s}=0.686(4)$ GeV in lattice QCD by the HPQCD collaboration~\cite{Davies:2009tsa}. Even though $\eta_s$ is not a physical state, the mass-squared difference $m_\phi^2-m_{\eta_s}^2\approx 0.570 ~\mathrm{GeV}^2$ also satisfies the empirical relation in Eq.~(\ref{eq:dmsq}). In this study, the dimensionless masses of $\pi$ and $\rho$ are determined to be $m_\pi a_t=0.05055(13)$ and $m_\rho a_t=0.12046(20)$. In this sense, we assume the relation of Eq.~(\ref{eq:dmsq}) is somewhat general for these $q_l\bar{q}$ systems and use it to set the scale parameter $a_t$. Of course, one should use this relation with caution, since $\rho$ is experimentally a wide resonance and decays into $P$-wave $\pi\pi$ states with a branching fraction of 99\%. On our lattice the lowest $P$-wave $\pi\pi$ energy threshold in the rest frame of $\pi\pi$ is $2E_\pi(p)a_t\approx 0.1795$ with $\xi\approx 5.3$, which is substantially higher than $m_\rho a_t$. This means that $\rho$ in its rest frame is a stable particle, such that the $m_\rho$ value is reliable. In practice, we average the above-mentioned mass-squared differences over the $n\bar{n}$, $s\bar{n}$, $c\bar{n}$ and $c\bar{s}$ systems, where $n$ refers to the $u,d$ quarks, and get an average value $\overline{\Delta m^2}=0.5682(80)$ $\mathrm{GeV}^2$, which serves as an input to give the lattice scale parameter $a_t^{-1}=6.894(51)$ GeV and the corresponding spatial lattice spacing $a_s\approx 0.152(1)$ fm. Accordingly, the $u,d$ mass parameter in this study gives $m_\pi=348.5(1.0)$ MeV and $m_\rho=830.5(6.3)$ MeV. Thus we have $m_\pi L_s \approx 3.9$ for this lattice setup, which warrants small finite-volume effects. The number of configurations in our gauge ensemble is 6991, which is crucial for glueball-relevant studies. The details of the gauge ensemble are given in Table~\ref{tab:config}.
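The scale-setting arithmetic can be condensed into a few lines; this is a sketch using the dimensionless masses and averaged mass-squared difference quoted above:
\begin{verbatim}
import numpy as np

m_pi_at, m_rho_at = 0.05055, 0.12046     # dimensionless lattice masses
dm2 = 0.5682                             # averaged m_V^2 - m_PS^2 in GeV^2
at_inv = np.sqrt(dm2 / (m_rho_at**2 - m_pi_at**2))
print(at_inv)                            # ~6.894 GeV
print(m_pi_at * at_inv * 1e3)            # ~348.5 MeV
\end{verbatim}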
\begin{table}[t]
\renewcommand\arraystretch{1.5}
\caption{Parameters of the gauge ensemble.}
\label{tab:config}
\begin{ruledtabular}
\begin{tabular}{lllllc}
$L^3 \times T$ & $\beta$ & $a_t^{-1}$(GeV) & $\xi$ & $m_\pi$(MeV) & $N_\mathrm{cfg}$ \\\hline
$16^3 \times 128$ & 2.0 & $6.894(51)$ & $\sim 5.3$ & $348.5(1.0)$ & $6991$ \\
\end{tabular}
\end{ruledtabular}
\end{table}
As a cross check of this scheme, we also calculate the heavy quark static potential through Wilson loops, from which the relation of the Sommer scale parameter $r_0$ to the spatial lattice spacing $a_s$ is expressed as $\frac{r_0}{a_s}=\sqrt{\frac{1.65-e_c}{\sigma a_s^2}}$, where $e_c$ and $\sigma a_s^2$ are the parameters of the Cornell-type parametrization of the static potential. Using the obtained $a_t$ and $\xi$, we estimate $r_0$ to be $0.455(3)$ fm.
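For illustration, the $r_0$ extraction amounts to the following one-liner; the Cornell fit parameters $e_c$ and $\sigma a_s^2$ below are hypothetical placeholders chosen only to reproduce the quoted number, since the text does not list them:
\begin{verbatim}
import numpy as np

e_c, sigma_as2 = 0.55, 0.1228     # hypothetical Cornell fit parameters
a_s = 0.152                       # fm, from a_t and xi
r0 = a_s * np.sqrt((1.65 - e_c) / sigma_as2)
print(r0)                         # ~0.455 fm for these placeholder inputs
\end{verbatim}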
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/E_charm.pdf}
\caption{\label{fig:dispcharm}The dispersion relations of $\eta_c$ and $J/\psi$. The data points are lattice results and the shaded lines illustrate the fit results using Eq.~(\ref{eq:disp}). The fitted results of $\xi$ are 5.341(2), 5.307(5) for $\eta_c$ and $J/\psi$, respectively.}
\end{figure}
For the valence charm quark, we adopt the clover fermion action in Ref.~\cite{CLQCD:2009nvn}, and the charm quark mass parameter is tuned to give $(m_{\eta_c}+3m_{J/\psi})/4=3.069$ GeV. With the tuned charm quark mass parameter, we generate the perambulators of the charm quark on our ensemble, from which the masses of $\eta_c$ and $J/\psi$ are derived precisely to be $m_{\eta_c}=2.9750(3)$ GeV and $m_{J/\psi}=3.0988(4)$ GeV. The corresponding $1S$ hyperfine splitting is $\Delta_\mathrm{HFS}=m_{J/\psi}-m_{\eta_c}=123.8(5)$ MeV. We also check the dispersion relation in Eq.~(\ref{eq:disp}) for $\eta_c$ and $J/\psi$ up to the momentum mode $\vec{n}=(1,2,2)$. As shown in Fig.~\ref{fig:dispcharm}, the dispersion relation is almost perfectly satisfied, with $\xi=5.341(2)$ and $5.307(5)$ for $\eta_c$ and $J/\psi$, respectively.
As a further check of our scale setting scheme, we also calculate the masses of $D$ and $D^*$ on a fraction of the configurations of our ensemble and get $m_D=1.882(1)$ GeV and $m_{D^*}=2.023(1)$ GeV. It is interesting to see that the hyperfine splitting $\Delta_\mathrm{HFS}(D)=m_{D^*}-m_D=0.141(2)$ GeV almost reproduces the experimental values $m_{D^{*0}}-m_{D^0}=0.14201(7)$ GeV and $m_{D^{*+}}-m_{D^+}=0.14060(7)$ GeV~\cite{Zyla:2020zbs}. This manifests that our tuning of the charm quark mass and our scale setting scheme are reasonable. The dispersion relation Eq.~(\ref{eq:disp}) is also checked to hold for $D$ and $D^*$, with $\xi=5.32(2)$ and $5.31(3)$, respectively. The figure is similar to Fig.~\ref{fig:dispVPS} and Fig.~\ref{fig:dispcharm} and is omitted here to save space.
Table~\ref{tab:charm} collects the results for the $\eta_c$, $J/\psi$, $D$ and $D^*$ mesons. Together with the values of $\xi$ derived from the dispersion relations of $\pi$ and $\eta$, we can see that the values of $\xi$ for different mesons agree with the value $\xi=5.30$ used in the parameter tuning and are consistent with each other within 1\%.
\begin{table}[t]
\renewcommand\arraystretch{1.5}
\caption{The masses of $J/\psi$, $\eta_c$, $D$ and $D^*$. The PDG value of $m_{D^{(*)}}$ is the average of masses of $D^{(*)0}$ and $D^{(*)+}$.}
\label{tab:charm}
\begin{ruledtabular}
\begin{tabular}{llllc}
$X$ & $\eta_c$ & $J/\psi$ & $D$ & ${D^*}$ \\\hline
$m_X$(GeV) & 2.9750(3) & 3.0988(4) & 1.882(1) & 2.023(1) \\
PDG~\cite{Zyla:2020zbs} & 2.983 & 3.097 & $\sim 1.867$ & $\sim 2.008$ \\
$\xi$ & 5.341(2) & 5.307(5) & 5.32(2) & 5.31(3)
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{Operator Construction and Distillation Method}
The principal goal of this work is to investigate the possible mixing of the pseudoscalar glueball and the pseudoscalar $q\bar{q}$ meson; therefore, the quark annihilation diagrams should be taken care of. To this end, we adopt the distillation method~\cite{Peardon:2009gh}, which provides a smearing scheme for quark fields (Laplacian Heaviside smearing) and a calculation strategy for the all-to-all propagators of the smeared quark fields, which are distilled into perambulators in the Laplacian Heaviside subspace of the spatial Laplacian operator $-\nabla^2$. Since we plan to investigate the $\eta-\eta_c$ correlation functions as well, we calculate the perambulators of the $u,d$ and $c$ quarks on our large gauge ensemble in the Laplacian Heaviside space spanned by the $N=70$ eigenvectors of the $-\nabla^2$ operator with the lowest eigenvalues. For the pseudoscalar glueball operator, we adopt the strategy in Refs.~\cite{Morningstar:1999rf,Chen:2005mg} to obtain the optimized hermitian operator $\mathcal{O}_G(t)=\mathcal{O}^\dagger_G(t)$ coupling mainly to the ground state glueball, based on different prototypes of Wilson loops and gauge link smearing schemes (see the Appendix for details).
For the isoscalar $\eta$, the interpolation field can be defined as
\begin{equation}
\mathcal{O}_\Gamma = \frac{1}{\sqrt{2}}\left[\bar{u}^{(s)}\Gamma u^{(s)} + \bar{d}^{(s)}\Gamma d^{(s)}\right],
\end{equation}
where $\Gamma$ refers to $\gamma_5$ or $\gamma_4\gamma_5$, $u^{(s)}$ and $d^{(s)}$ are Laplacian Heaviside smeared $u,d$ quark fields. Thus the correlation function of $\mathcal{O}_\Gamma$ can be expressed as
\begin{eqnarray}\label{eq:ccc}
C_{\Gamma\Gamma}(t)&=&\frac{1}{T}\sum\limits_{t_s=1}^{T}\sum\limits_{\mathbf{xy}}\langle \mathcal{O}_\Gamma (\mathbf{x},t+t_s)\mathcal{O}_\Gamma ^\dagger(\mathbf{y},t_s)\rangle\nonumber\\
&\equiv& \mathcal{C}_\Gamma(t)+2\mathcal{D}_\Gamma(t)
\end{eqnarray}
with $\mathcal{C}_\Gamma(t)$ and $\mathcal{D}_\Gamma(t)$ being the contributions from the connected and disconnected diagrams, respectively. We also consider the following correlation functions
\begin{eqnarray}\label{eq:corrs}
C_{GG}(t)&=&\frac{1}{T}\sum\limits_{t_s=1}^{T}\langle \mathcal{O}_G(t+t_s)\mathcal{O}_G(t_s)\rangle\nonumber\\
C_{G\Gamma}(t)&=&\frac{1}{T}\sum\limits_{t_s=1}^{T}\sum\limits_{\mathbf{x}}\langle \mathcal{O}_G(t+t_s)\mathcal{O}_\Gamma^\dagger(\mathbf{x},t_s)\rangle\nonumber\\
C_{\Gamma G}(t)&=&\frac{1}{T}\sum\limits_{t_s=1}^{T}\sum\limits_{\mathbf{x}}\langle \mathcal{O}_\Gamma(\mathbf{x},t+t_s)\mathcal{O}_G(t_s)\rangle\nonumber\\
&=&\mp C_{G\Gamma}(t)\nonumber\\
C_{\Gamma\Gamma_c}(t)&=&\frac{1}{T}\sum\limits_{t_s=1}^{T}\sum\limits_{\mathbf{xy}}\langle \mathcal{O}_{\Gamma}(\mathbf{x},t+t_s)\mathcal{O}_{\Gamma_c}^\dagger(\mathbf{y},t_s)\rangle
\end{eqnarray}
where the $\mp$ sign comes from the hermiticity of $\mathcal{O}_{\Gamma}$: it takes the minus sign for $\Gamma=\gamma_5$ (anti-hermitian) and the plus sign for $\gamma_4\gamma_5$ (hermitian). The operator $\mathcal{O}_{\Gamma_c}=\bar{c}^{(s)}\Gamma_c c^{(s)}$, where $\Gamma_c=\gamma_5,\gamma_4\gamma_5$, is also defined in terms of the Laplacian Heaviside smeared charm quark field $c^{(s)}$. Obviously, all of these correlation functions except for $C_{GG}(t)$ receive contributions from quark annihilation diagrams and can be dealt with conveniently in the framework of the distillation method.
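Schematically, the disconnected piece $\mathcal{D}_\Gamma(t)$ of Eq.~(\ref{eq:ccc}) is assembled from per-timeslice quark loops. A minimal Python sketch, with random placeholder loops standing in for the actual distillation output and the gauge average omitted, is:
\begin{verbatim}
import numpy as np

T = 128
rng = np.random.default_rng(0)
loops = rng.normal(size=T) + 1j * rng.normal(size=T)   # placeholder L(t)

def disconnected(loops):
    """D(t) = (1/T) sum_ts L(ts+t) L(ts)^*, with periodic wrapping in time."""
    T = len(loops)
    return np.array([np.mean(np.roll(loops, -t) * np.conj(loops))
                     for t in range(T)])

D = disconnected(loops)   # to be gauge-averaged; then C(t) = C_conn(t) + 2 D(t)
\end{verbatim}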
\subsection{$\eta$ mass as a further calibration}\label{sec:etamass}
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/C_55.pdf}
\caption{\label{fig:corr-gamma5}The correlation function of $\eta$ with $\Gamma=\gamma_5$. We can see the function contains a nonzero constant with large error near $t=T/2$.}
\end{figure}
We calculate two types of correlation functions for $\eta$, namely $C_{\gamma_5\gamma_5}(t)$ and $C_{(\gamma_4\gamma_5)(\gamma_4\gamma_5)}(t)$. We do observe the finite volume artefact that $C_{\gamma_5\gamma_5}(t)$ approaches a nonzero constant when $t$ is large, as shown in Fig.~\ref{fig:corr-gamma5}. It has been argued that this constant term comes from the topology of the QCD vacuum and can be approximately expressed as $a^5(\chi_\mathrm{top} + Q^2/V)/T$, where $a$ is the lattice spacing (in the isotropic case), $\chi_\mathrm{top}$ is the topological susceptibility, $Q$ is the topological charge, $V$ is the spatial volume and $T$ is the temporal extension of the lattice~\cite{Aoki:2007ka,Bali:2014pva,Dimopoulos:2018xkm}. This can be understood from the anomalous axial vector current relation, through which $\mathcal{O}_{\gamma_5}$ has a direct connection with the topological charge density operator.
It is interesting to observe that $C_{(\gamma_4\gamma_5)(\gamma_4\gamma_5)}(t)$ has the normal large-$t$ behavior, damping to zero at large $t$. As usual, the $t$-behavior of these correlation functions can be seen more clearly through their effective mass functions defined as
\begin{equation}\label{eq:geffm}
m_\mathrm{eff}(t)=\ln \frac{C_{\Gamma\Gamma}(t)}{C_{\Gamma\Gamma}(t+1)}
\end{equation}
where $\Gamma$ refers to $\gamma_5$ or $\gamma_4\gamma_5$. Figure~\ref{fig:effm-5} shows these effective mass functions. Benefiting from the large statistics of our gauge ensemble, the effective mass plateau starts from $t\sim 10$, and the signal-to-noise ratio remains good for $t$ beyond 20 in the case $\Gamma=\gamma_4\gamma_5$.
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/m_eta.pdf}
\caption{\label{fig:effm-5}The effective masses of $\pi$ and $\eta$ with different $\Gamma$ insertions. For $\eta$, the signal-to-noise ratio is better in the case $\Gamma=\gamma_4\gamma_5$ because the corresponding correlation function does not contain the constant associated with the topology of the QCD vacuum.}
\end{figure}
By a combined fit of $C_{\gamma_5\gamma_5}(t)$ and $C_{(\gamma_4\gamma_5)(\gamma_4\gamma_5)}(t)$ using two mass terms, we obtain the best-fit mass of $\eta$ to be
\begin{equation}\label{eq:meta}
m_\eta=714.1(5.4)~~\mathrm{MeV}.
\end{equation}
where the error is obtained through a jackknife analysis. This value is lower than previous lattice results~\cite{CP-PACS:2002exu,Hashimoto:2008xg,Dimopoulos:2018xkm} by roughly 10\%, but appears more reasonable with regard to the Witten-Veneziano mechanism (WV)~\cite{Witten:1978bc,Veneziano:1979ec} for the mass of the light flavor singlet pseudoscalar meson. According to WV, the large mass of the flavor singlet pseudoscalar meson has a direct connection with the topology of the QCD vacuum. In the $N_f=2$ case, to leading order in chiral perturbation theory, $m_\eta$ is related to $m_\pi$ by
\begin{equation}
m_\eta^2 = m_\pi^2 +m_0^2,
\end{equation}
where the parameter $m_0^2$ is defined in terms of the topological susceptibility $\chi_\mathrm{top}$ and the pion decay constant $f_\pi$, namely
\begin{equation}
m_0^2 = m_\eta^2-m_\pi^2\approx\frac{4N_f}{f_\pi^2} \chi_\mathrm{top}.
\end{equation}
Usually $\chi_\mathrm{top}$ refers to the pure gauge case and is expected to be independent of the flavor number $N_f$. Using the values of $m_\pi=348.5(1.0)$ MeV and $m_\eta$ in Eq.~(\ref{eq:meta}), $m_0^2$ is derived to be $m_0^2=0.3885(77)$ $\mathrm{GeV}^2$. If $f_\pi$ at our pion mass is assumed to be $f_\pi \approx 1.1 f_\pi^\mathrm{exp}$ (we have not yet calculated $f_\pi$ at $m_\pi\sim 350$ MeV), then $\chi_\mathrm{top}^{1/4}$ is estimated to be $\chi_\mathrm{top}^{1/4}\approx 177$ MeV, which is very close to the phenomenological value $180$ MeV and the lattice value $185.3(5.6)$ MeV~\cite{Cichy:2015jra}. In the physical $N_f=3$ case at the physical pion mass, the GMOR relation implies that the mass of the singlet counterpart of the pseudoscalar octet should be $m_1^2=(2m_K^2 + m_\pi^2)/3=0.170~\mathrm{GeV}^2$. Thus, using the topological susceptibility we obtained, we can estimate the mass of the flavor singlet pseudoscalar meson to be
\begin{equation}
m_{\eta_1}^2=m_1^2 +\frac{12 \chi_\mathrm{top}}{f_\pi^2}\approx 0.875~\mathrm{GeV}^2,
\end{equation}
which corresponds to $m_{\eta_1}\approx 0.936$ GeV and is not far from the experimental value $m_{\eta'} =0.958~{\mathrm{GeV}}$. Note that $\eta'$ is not a pure flavor singlet and has a fraction of the flavor octet. The GMOR relation also implies $m_{\eta_8}^2=(4m_K^2-m_\pi^2)/3\approx 0.321~\mathrm{GeV}^2$. One can check that the relation $m_{\eta_1}^2 +m_{\eta_8}^2 = m_\eta^{\mathrm{exp.},2} +m_{\eta'}^2$ is satisfied within 2\%. These discussions confirm again that the Witten-Veneziano mechanism for the flavor singlet pseudoscalar mass works fairly well for both the $SU(2)$ and $SU(3)$ flavor symmetries. On the other hand, these results also reinforce the soundness of our scale setting in Sec.~\ref{sec:numerical}.
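The numerical chain of the Witten-Veneziano estimates above can be summarized in a few lines. The inputs $f_\pi(350\,\mathrm{MeV})\approx 1.1\times 0.1304$ GeV, physical $f_\pi=0.1304$ GeV (the $F_\pi\sim 130$ MeV convention), $m_K=0.4957$ GeV and $m_\pi^\mathrm{phys}=0.138$ GeV are our assumptions consistent with the text:
\begin{verbatim}
import numpy as np

m_eta, m_pi = 0.7141, 0.3485                  # this work, GeV
m0sq = m_eta**2 - m_pi**2                     # ~0.3885 GeV^2
f_pi = 1.1 * 0.1304                           # assumed f_pi at m_pi ~ 350 MeV
chi = m0sq * f_pi**2 / (4 * 2)                # from m0^2 = 4 N_f chi/f_pi^2, N_f=2
print(chi**0.25 * 1e3)                        # ~177 MeV
mK, mpi_phys, fpi_phys = 0.4957, 0.138, 0.1304
m1sq = (2 * mK**2 + mpi_phys**2) / 3          # ~0.170 GeV^2 (GMOR)
meta1sq = m1sq + 12 * chi / fpi_phys**2       # ~0.875 GeV^2
print(np.sqrt(meta1sq))                       # ~0.936 GeV, close to m_eta'
\end{verbatim}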
\section{Towards the $\eta$-glueball mixing}\label{sec:II}
\subsection{Theoretical consideration}\label{sec:formalism}
In order to investigate the possible mixing between the pseudoscalar glueball and $\eta$, we must parameterize the correlation functions in Eq.~(\ref{eq:corrs}). We adopt the following theoretical logic. As usual in the lattice study, the correlation function $C_{XY}(t)$ of operator $\mathcal{O}_X$ and $\mathcal{O}_Y$ can be parameterized as
\begin{equation}\label{eq:gc}
C_{XY}\approx \sum\limits_{n\neq 0} \left[ \langle 0|\mathcal{O}_X|n\rangle \langle n|\mathcal{O}^\dagger_{Y}|0\rangle \left(e^{-E_nt}\pm e^{-E_n(T-t)}\right)\right]
\end{equation}
where the $\pm$ sign is for the same and opposite hermiticities of $\mathcal{O}_X$ and $\mathcal{O}_Y$, respectively, and $|n\rangle$ are the eigenstates of the lattice Hamiltonian $\hat{H}$ defined as $\hat{H}|n\rangle=E_n|n\rangle$, with $E_n$ being the corresponding eigenenergies. For a given quantum number, the $|n\rangle$'s establish an orthogonal and complete set, namely, $\sum\limits_n |n\rangle\langle n|=1$ with the normalization condition $\langle m|n\rangle=\delta_{mn}$. In principle, $\hat{H}$ only exists heuristically, so we do not know the exact particle configurations of these eigenstates simply from the correlation function in Eq.~(\ref{eq:gc}). As far as the flavor singlet pseudoscalar channel is concerned in the $N_f=2$ QCD theory, each of the states $|n\rangle$ should be a specific admixture of bare $\eta$ states and bare glueballs, if the latter exist theoretically (here we temporarily ignore multi-hadron states); these bare states can be taken as the eigenstate set $\{|\alpha_n\rangle,n=1,2,\ldots\}$ of the free Hamiltonian $\hat{H}_0$. Since we are working in a unitary lattice framework for the $u,d$ quarks, in principle this state set is orthogonal and complete, with the normalization condition $\langle \alpha_m|\alpha_n\rangle=\delta_{mn}$. Now we introduce the interaction Hamiltonian $\hat{H}_I$ to account for the dynamics of the possible mixing, such that the eigenstates $|n\rangle$ of $\hat{H}=\hat{H}_0+\hat{H}_I$ can be expanded in terms of $|\alpha_m\rangle$ as
\begin{equation}
|n\rangle =\sum\limits_m C_{nm}|\alpha_m\rangle
\end{equation}
with $\sum\limits_m |C_{nm}|^2=1$. In this sense, one can say that $|n\rangle$ is an admixture of states $|\alpha_m\rangle$ whose fractions are $|C_{nm}|^2$, respectively. Furthermore, if $\hat{H}_I$ is small relative to $\hat{H}_0$, then to the lowest order of the perturbation theory, one has
\begin{eqnarray}
|n\rangle&=&|\alpha_n\rangle+\sum\limits_{m\ne n}\frac{\langle \alpha_m|\hat{H}_I|\alpha_n\rangle}{E_n^{(0)}-E_m^{(0)}}|\alpha_m\rangle\nonumber\\
E_n&=& E_n^{(0)}+\sum\limits_{m\ne n}\frac{|\langle \alpha_m|\hat{H}_I|\alpha_n\rangle|^2}{E_n^{(0)}-E_m^{(0)}}
\end{eqnarray}
where $E^{(0)}_n$ is the eigenenergy of $|\alpha_n\rangle$ and is ordered from low to high.
The experimentally observed isoscalar pseudoscalars are $\eta$, $\eta'$, $\eta(1295)$, $\eta(1405/1475)$, etc.~\cite{Zyla:2020zbs}, which are identified as $I=0$ members of different $\bar{q}q$ $SU(3)$ nonets of different radial quantum numbers. In the $SU(2)$ case of this study, since there is only one isoscalar in each isospin quartet, the spectrum is simplified substantially. Because the smeared quark field suppresses the contribution of excited states in the correlation functions of $\eta$, as a simple approximation we can truncate the spectrum of $\eta$ states to $\eta$ and $\eta^*$, with $\eta^*$ accounting for all the excited $\eta$ states. On the other hand, quenched lattice QCD predicted that the mass of the lowest pseudoscalar glueball is around 2.4-2.6 GeV. This seems to be confirmed by the correlation function $C_{GG}(t)$ of the optimized operator $\mathcal{O}_G$, which is expected to couple mostly to the ground state. So we include the ground state pseudoscalar glueball $|G\rangle$ and another state $|G^*\rangle$ in the state basis $\{|\alpha_i\rangle, i=1,2,\cdots\}$, with $|G^*\rangle$ standing for all the excited states of the pseudoscalar glueball. Finally, we have the following state basis
\begin{equation}
|\alpha_i\rangle=|\eta\rangle, |\eta^*\rangle, \ldots, |G\rangle, |G^*\rangle,\ldots,
\end{equation}
With this state basis, the free Hamiltonian $\hat{H}_0=\mathrm{diag}\{m_\eta, m_{\eta^*},m_G, m_{G^*}\}$ is diagonal, with the matrix elements being the bare masses of the basis states, ordered from low to high. Theoretically, $|\eta\rangle$ and $|\eta^*\rangle$ are orthogonal, as are the states $|G\rangle$ and $|G^*\rangle$. Thus the interaction Hamiltonian $\hat{H}_I$ can be expressed as
\begin{equation}
H_I=\left(
\begin{array}{cccc}
0 & 0 & x_1 & y_1 \\
0 & 0 & x_2 & y_2 \\
x_1 & x_2 & 0 & 0 \\
y_1 & y_2 & 0 & 0 \\
\end{array}
\right),
\end{equation}
where $x_i, y_i$ are sometimes called mixing energies. Accordingly, we have the following state expansions of $|n\rangle$:
\begin{eqnarray}\label{eq:expansion}
|1\rangle&\approx&|\eta\rangle +\frac{x_1}{m_\eta-m_G}|G\rangle+\frac{y_1}{m_\eta-m_{G^*}}|G^*\rangle\nonumber\\
|2\rangle&\approx&|\eta^*\rangle +\frac{x_2}{m_{\eta^*}-m_G}|G\rangle+\frac{y_2}{m_{\eta^*}-m_{G^*}}|G^*\rangle\nonumber\\
|3\rangle&\approx&|G\rangle +\frac{x_1}{m_G-m_\eta}|\eta\rangle+\frac{x_2}{m_G-m_{\eta^*}}|\eta^*\rangle\nonumber\\
|4\rangle&\approx&|G^*\rangle +\frac{y_1}{m_{G^*}-m_\eta}|\eta\rangle+\frac{ y_2}{m_{G^*}-m_{\eta^*}}|\eta^*\rangle.
\end{eqnarray}
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/C_G5.pdf}
\caption{\label{fig:corr-eta-G-origin}The correlation function of $\eta-G$ with $\Gamma=\gamma_5$, denoted $C_{G\gamma_5}$. The value of this correlator tends to zero at $t=0$ within errors, and its error is almost constant in $t$. The correlator approaches a positive (negative) constant for $t<\frac{T}{2}$ ($t>\frac{T}{2}$), which might be due to the contribution from topology.}
\end{figure}
\subsection{The $\Gamma=\gamma_5$ case}
Now we explore the possibility of glueball-$\eta$ mixing through the correlation function $C_{G\gamma_5}(t)$. Let us take a look at the $t$-dependence
of $C_{G\gamma_5}(t)$ shown in Fig.~\ref{fig:corr-eta-G-origin}. We have the following observations:
\begin{itemize}
\item[] (1)~ $C_{G\gamma_5}(t)$ is anti-symmetric with respect to $t=T/2$ and tends to 0 at $t=0$. This is understood because $\mathcal{O}_G$ is hermitian by construction (and even under the time reversal transformation $\mathcal{T}$) while $\mathcal{O}_{\gamma_5}$ is anti-hermitian and $\mathcal{T}$-odd. At $t=0$, since the product $\mathcal{O}_G(0)\mathcal{O}_{\gamma_5}(0)$ is $\mathcal{T}$-odd, its vacuum expectation value $\langle \mathcal{O}_G\mathcal{O}_{\gamma_5}(0)\rangle$ certainly vanishes.
\item[] (2)~ $C_{G\gamma_5}(t)$ approaches a positive (negative) constant when $t<\frac{T}{2}$ ($t>\frac{T}{2}$). This may be due to a constant contribution from the topology, similar to the case of $C_{\gamma_5\gamma_5}(t)$ discussed in Sec.~\ref{sec:etamass}. Since $C_{G\gamma_5}(t)$ is $\mathcal{T}$-odd, so is the topology contribution.
\item[] (3)~ Even though its central value is smooth, the error of $C_{G\gamma_5}(t)$ is almost constant throughout the time range.
\end{itemize}
In order to find a function form to describe the time behavior of $C_{G\gamma_5}(t)$, we take the following approximations
\begin{eqnarray}\label{eq:create}
\mathcal{O}_G^\dagger|0\rangle&\approx&\sum\limits_{i\neq 0}\sqrt{Z_{G_i}}|G_i\rangle\nonumber\\
\mathcal{O}_{\gamma_5}^\dagger|0\rangle&\approx&\sum\limits_{i\neq 0}\sqrt{Z_{\gamma_5,i}}|\eta_i\rangle,
\end{eqnarray}
by the assumptions $\langle 0|\mathcal{O}_G|\eta_i\rangle\approx 0$ and $\langle 0|\mathcal{O}_{\gamma_5}|G_i\rangle\approx 0$ similar to those adopted in the $\eta-\eta'$ mixing studies~\cite{Christ:2010dd,Michael:2013gka}.
Consequently we have the following coupling matrix elements of the operators $\mathcal{O}_G$ and $\mathcal{O}_{\gamma_5}$,
\begin{eqnarray}\label{eq:gcouple}
\langle 0|\mathcal{O}_G|1\rangle&=&\frac{x_1 \sqrt{Z_G}}{m_\eta-m_G}+\frac{y_1 \sqrt{Z_{G^*}}}{m_\eta-m_{G^*}}\nonumber\\
\langle 0|\mathcal{O}_G|2\rangle&=&\frac{x_2 \sqrt{Z_G}}{m_{\eta^*}-m_G}+\frac{y_2 \sqrt{Z_{G^*}}}{m_{\eta^*}-m_{G^*}}\nonumber\\
\langle 0|\mathcal{O}_G|3\rangle&=&\sqrt{Z_G}\nonumber\\
\langle 0|\mathcal{O}_G|4\rangle&=&\sqrt{Z_{G^*}}
\end{eqnarray}
and
\begin{eqnarray}\label{eq:etacouple}
\langle 0|\mathcal{O}_{\gamma_5}|1\rangle&=&\sqrt{Z_{\gamma_5,1}}\nonumber\\
\langle 0|\mathcal{O}_{\gamma_5}|2\rangle&=&\sqrt{Z_{\gamma_5,2}}\nonumber\\
\langle 0|\mathcal{O}_{\gamma_5}|3\rangle&=&-\frac{x_1 \sqrt{Z_{\gamma_5,1}}}{m_\eta-m_G}-\frac{x_2 \sqrt{Z_{\gamma_5,2}}}{m_{\eta^*}-m_{G}}\nonumber\\
\langle 0|\mathcal{O}_{\gamma_5}|4\rangle&=&-\frac{y_1 \sqrt{Z_{\gamma_5,1}}}{m_{\eta}-m_{G^*}}-\frac{y_2 \sqrt{Z_{\gamma_5,2}}}{m_{\eta^*}-m_{G^*}}
\end{eqnarray}
As an exploratory study, and for the simplicity of the subsequent data analysis, we temporarily neglect the $\eta^*$ contribution to $C_{G\gamma_5}(t)$. That is to say, we take the further approximation $\mathcal{O}_{\gamma_5}^\dagger|0\rangle\approx \sqrt{Z_{\gamma_5,1}}|\eta\rangle$. Thus, after inserting Eqs.~(\ref{eq:expansion},\ref{eq:create},\ref{eq:gcouple},\ref{eq:etacouple}) into Eq.~(\ref{eq:gc}) and ignoring the terms relevant to $|\eta^*\rangle$, we have the approximate expression of $C_{G\gamma_5}(t)$ as
\begin{eqnarray}\label{eq:gc_general}
C_{G\gamma_5}(t)&\approx&\sqrt{Z_G Z_{\gamma_5,1}}\frac{x_1 }{m_\eta-m_G} \left(e^{-m_1 t}-e^{-m_3 t}\right)\nonumber\\
&+&\sqrt{Z_{G^*} Z_{\gamma_5,1}}\frac{y_1 }{m_{\eta}-m_{G^*}} \left(e^{-m_1 t}-e^{-m_4 t}\right)\nonumber\\
&-& (t\to (T-t)~~ \mathrm{terms}).
\end{eqnarray}
A feature of this expression is that $C_{G\gamma_5}(t=0)=0$, in accordance with observation (1).
In order to understand the almost constant error of $C_{G\gamma_5}(t)$, we consider its variance~\cite{Endres:2011mm}
\begin{equation}
\delta^2 C_{ G\gamma_5}(t)\equiv \langle \mathcal{O}_G^2(t)\mathcal{O}_{\gamma_5}^2 (0) \rangle -C_{ G\gamma_5}^2 (t).
\end{equation}
The first term on the right hand side can be viewed as a correlation function of the operator $\mathcal{O}^2_{\gamma_5}$ and $\mathcal{O}_G^2$, both of which have the vacuum quantum number $0^{++}$ (in the continuum limit) and are expected to have nonzero vacuum expectation values $\langle \mathcal{O}^2_{\gamma_5}\rangle\ne 0$ and $\langle \mathcal{O}_G^2\rangle\ne0$. Thus we have
\begin{equation}\label{eq:cg5}
\delta^2 C_{G\gamma_5 }(t)= \langle \overline{\mathcal{O}_G^2}(t)\overline{\mathcal{O}_{\gamma_5}^2}(0) \rangle-C_{G\gamma_5}^2 (t)+\langle \mathcal{O}_G^2\rangle\langle\mathcal{O}^2_{\gamma_5}\rangle
\end{equation}
where $\overline {\mathcal{O}_i^2}(t)\equiv \mathcal{O}_i^2(t)-\langle \mathcal{O}^2_i\rangle$. The almost constant error of $C_{G\gamma_5}(t)$ implies that the constant term $\langle\mathcal{O}^2_{\gamma_5}\rangle \langle \mathcal{O}_G^2\rangle$ is large and dominates the variance. This is consistent with the argument in Ref.~\cite{Endres:2011mm} that the variance of a correlation function is dominated by the lowest possible state, which corresponds to the vacuum state with $E_{\mathrm{vac}}=0$ in our case. This motivates us to consider the temporal derivative of $C_{G\gamma_5}(t)$, namely,
\begin{equation}\label{eq:derivative}
\partial_t C_{G\gamma_5}(t)=\frac{1}{2a_t}\left(C_{G\gamma_5 }(t+a_t)-C_{G\gamma_5}(t-a_t)\right),
\end{equation}
such that the constant term in $C_{G\gamma_5}(t)$ and its constant variance are cancelled. This is indeed the case: we plot $\partial_t C_{G\gamma_5}(t)$ in Fig.~\ref{fig:corr-eta-G}, where one can see that $\partial_t C_{G\gamma_5}(t)$ goes to zero when $t$ is large and its relative error is much smaller.
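A minimal sketch of this constant-removing filter, Eq.~(\ref{eq:derivative}), applied to a toy correlator (the numbers are illustrative only):
\begin{verbatim}
import numpy as np

def dt_corr(C, a_t=1.0):
    """Central difference, Eq. (eq:derivative); any t-independent constant
    (and its constant variance) drops out."""
    return (np.roll(C, -1) - np.roll(C, 1)) / (2.0 * a_t)

t = np.arange(128)
C = 0.5 * np.exp(-0.3 * t) + 1e-3           # toy signal plus vacuum constant
print(dt_corr(C)[60])                       # ~0: the constant is removed
\end{verbatim}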
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/dt_C_G5.pdf}
\caption{\label{fig:corr-eta-G}The temporal derivative of $C_{G\gamma_5}$ defined by Eq.~(\ref{eq:derivative}). The data points are lattice results and the shaded band illustrates the fitting results using Eq.~(\ref{eq:gc_general}). The signal to noise ratio of $\partial_tC_{G\gamma_5}$ is much better than that of $C_{G\gamma_5}$ in Fig.~\ref{fig:corr-eta-G-origin}.}
\end{figure}
In order for the mixing energies $x_1$ and $y_1$ to be extracted by using Eq.~(\ref{eq:gc_general}), one has to know the parameters $m_1$, $m_3$, $m_4$, $m_{\eta}$, $m_G$, $m_{G^*}$, $Z_G$, $Z_{G^{*}}$ and $Z_{\gamma_5,1}$, which, based on the assumptions of Eq.~(\ref{eq:create}), are encoded in the correlation functions $C_{\gamma_5\gamma_5}(t)$ and $C_{GG}(t)$ as
\begin{eqnarray}\label{eq:gg-gamma5}
C_{GG}(t) &=& \sum\limits_i Z_{G_i}\left(e^{-m_{G_i}t}+e^{-m_{G_i}(T-t)}\right)\nonumber\\
C_{\gamma_5\gamma_5}(t) &=& \sum\limits_i Z_{\gamma_5,i}\left(e^{-m_{\eta_i}t}+e^{-m_{\eta_i}(T-t)}\right)\nonumber\\
&\approx& Z_{\gamma_5,1}\left(e^{-m_{\eta}t}+e^{-m_{\eta}(T-t)}\right)
\end{eqnarray}
where we take $i=1,2$ for $C_{GG}(t)$, so that $G_{1,2}$ refer to $G$ and $G^*$. It should be noted that the second state $|G^*\rangle$ has to be considered even though we have built the optimized operator $\mathcal{O}_G$ based on a large operator set (see the Appendix): it turns out that there is still a substantial contribution of higher states in $C_{GG}$. To manifest this, we plot the effective mass function $m_\mathrm{eff}(t)$ of $C_{GG}(t)$ in Fig.~\ref{fig:geffm}, using the definition in Eq.~(\ref{eq:geffm}), where one can see that the ground state glueball $G$ has not saturated $C_{GG}(t)$ before the signals are overwhelmed by errors. With this observation we add the second term, relevant to $G^*$, to the first equation in Eq.~(\ref{eq:gg-gamma5}) but do not attach a physical meaning to it.
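A sketch of the corresponding two-mass-term fit, here with synthetic data in place of the measured $C_{GG}(t)$ (the window and the value $m_G a_t\sim 0.36$ match the numbers quoted below; everything else is a placeholder):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

T = 128
def c_gg(t, ZG, mG, ZGs, mGs):
    """Two-mass-term model of C_GG(t), first line of Eq. (eq:gg-gamma5)."""
    return (ZG  * (np.exp(-mG  * t) + np.exp(-mG  * (T - t)))
          + ZGs * (np.exp(-mGs * t) + np.exp(-mGs * (T - t))))

t  = np.arange(1, 15, dtype=float)          # fit window [1, 14]
C  = c_gg(t, 1.0, 0.36, 0.5, 0.7)           # synthetic data; m_G a_t ~ 0.36
dC = 0.01 * C
popt, _ = curve_fit(c_gg, t, C, sigma=dC, p0=[1.0, 0.4, 0.5, 0.8])
print(popt)                    # recovers (Z_G, m_G a_t, Z_G*, m_G* a_t)
\end{verbatim}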
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/m_G.pdf}
\caption{\label{fig:geffm} The effective mass of the pseudoscalar glueball. The shaded band illustrates the fitting results using Eq.~(\ref{eq:gg-gamma5}) with two mass terms.}
\end{figure}
\begin{table*}[t]
\caption{Ground state masses and mixing angles fitted from operators with $\Gamma=\gamma_5$ and $\Gamma=\gamma_4\gamma_5$ on the ensemble. The values of $m_\eta$ are the same in both cases because they are derived from a combined fit to $C_{\gamma_5\gamma_5}$ and $C_{(\gamma_4\gamma_5)(\gamma_4\gamma_5)}$. The $\chi^2$ values are obtained from the fits of $C_{G\Gamma}$.}
\label{tab:fit}
\begin{ruledtabular}
\begin{tabular}{lcccc|cccc}
$\Gamma$ & $[t_l,t_h]_{\Gamma}$ & $[t_l,t_h]_{GG}$ & $[t_l,t_h]_{G\Gamma}$ & $\chi^2/\mathrm{dof}$ & $m_{\eta}$(MeV) & $m_{G}$(MeV) & $|x_1|$(MeV) & $|\theta|$ \\\hline
$\gamma_5$ & [9, 30] & [1, 14] & [3, 30] & $0.96$ & $714.1(5.4)$ & $2487(50)$ & $107(14)$ & $3.47(46)^\circ$ \\
$\gamma_4\gamma_5$ & [5, 30] & [1, 14] & [0, 30] & $0.15$ & $714.1(5.4)$ & $2487(50)$ & $79(37)$ & $2.6(1.2)^\circ$ \\
\end{tabular}
\end{ruledtabular}
\end{table*}
Since the mixing effect is expected to be small (our final results confirm this), we take the assumptions $m_1\approx m_\eta$, $m_3\approx m_G$ and $m_4\approx m_{G^*}$. In the data analysis procedure, we first rearrange the $N_\mathrm{conf}$ measurements into $279$ bins, with each bin including $25$ measurements, and then perform a one-eliminating jackknife analysis on these data bins. For each jackknife re-sampling, we first extract $m_{\eta}$, $m_G$, $m_{G^*}$, $Z_G$, $Z_{G^{*}}$, $Z_{\gamma_4\gamma_5,1}$ and $Z_{\gamma_5,1}$ from $C_{GG}(t)$, $C_{\gamma_5\gamma_5}(t)$ and $C_{(\gamma_4\gamma_5)(\gamma_4\gamma_5)}(t)$ in the fixed time windows $[t_l,t_h]_{GG}=[1,14]$, $[t_l,t_h]_{\gamma_5}=[9,30]$ and $[t_l,t_h]_{\gamma_4\gamma_5}=[5,30]$, as shown in Table~\ref{tab:fit}. Then we feed these parameters to $\partial_t C_{G\gamma_5}(t)$ to determine the parameters $x_1$ and $y_1$. The final results of $m_\eta$, $m_G$ and $|x_1|$ with jackknife errors are obtained to be
\begin{eqnarray}
m_\eta&=&714.1(5.4)~\mathrm{MeV},\nonumber\\
m_{G}&=&2487(50)~\mathrm{MeV},\nonumber\\
|x_1|&=&107(14)~\mathrm{MeV}.
\end{eqnarray}
Note that, since the definitions in Eqs.~(\ref{eq:create},\ref{eq:gcouple},\ref{eq:etacouple}) are fixed only up to a sign, we can only determine the absolute value $|x_1|$ of $x_1$. The parameters of the fitting procedure and the fit results are collected in Table~\ref{tab:fit}. The goodness of the fit of Eq.~(\ref{eq:gc_general}) to $\partial_t C_{G\gamma_5}$ is reflected by $\chi^2/\mathrm{dof}=0.96$ in the fitting window $[t_l,t_h]_{G\gamma_5}=[3,30]$, and is also illustrated in Fig.~\ref{fig:corr-eta-G} by the shaded curve. In the meantime, making use of the $\mathcal{T}$-odd property of $C_{G\gamma_5}(t)$, we average the $t<\frac{T}{2}$ and $t>\frac{T}{2}$ parts and find that the errors are reduced drastically around $t=0$, except for $C_{G\gamma_5}(t=0)$, as shown in Fig.~\ref{fig:corr-eta-G-fold}, where the function of Eq.~(\ref{eq:gc_general}) with the fitted parameters is also plotted as shaded curves. It is seen that the function describes the $t$-dependence of $C_{G\gamma_5}(t)$ very well, up to an unknown constant term with opposite signs for $t<\frac{T}{2}$ and $t>\frac{T}{2}$.
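For completeness, the one-eliminating jackknife over the pre-binned data described above can be sketched as follows; the estimator below is a toy placeholder standing in for the full fit chain:
\begin{verbatim}
import numpy as np

def jackknife(bins, estimator):
    """One-eliminating jackknife over pre-binned data (here 279 bins of 25
    measurements each). `estimator` maps an ensemble-averaged correlator
    to a scalar (in the actual analysis: the fit chain yielding e.g. |x_1|)."""
    n = len(bins)
    thetas = np.array([estimator(np.delete(bins, i, axis=0).mean(axis=0))
                       for i in range(n)])
    center = thetas.mean()
    err = np.sqrt((n - 1) / n * np.sum((thetas - center) ** 2))
    return center, err

# toy usage: an effective-mass estimator on fake binned correlators
rng = np.random.default_rng(1)
bins = np.exp(-0.1 * np.arange(32)) * (1 + 0.01 * rng.normal(size=(279, 32)))
print(jackknife(bins, lambda C: np.log(C[10] / C[11])))   # ~0.1
\end{verbatim}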
Since $|x_1|$ is much smaller than $m_G-m_\eta$, to the lowest order of the perturbation theory, we can estimate the mixing angle $\theta$ of $\eta$ and the ground state glueball $G$ as
\begin{equation}
|\theta|\approx \sin |\theta|\approx \frac{|x_1|}{m_G-m_\eta}=3.47(46)^\circ.
\end{equation}
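As a consistency check of this first-order estimate, one can compare it with the exact diagonalization of the corresponding $2\times2$ mass matrix; a short sketch with the fitted numbers (including the $\Gamma=\gamma_4\gamma_5$ result quoted below):
\begin{verbatim}
import numpy as np

m_eta, m_G = 0.7141, 2.487                  # GeV, from Table (tbl:fit)
for x1 in (0.107, 0.079):                   # gamma_5 / gamma_4 gamma_5 fits
    first_order = x1 / (m_G - m_eta)
    exact = 0.5 * np.arctan(2 * x1 / (m_G - m_eta))
    print(np.degrees(first_order), np.degrees(exact))
# ~3.46 vs 3.44 deg and ~2.55 vs 2.55 deg: the estimate is accurate here
\end{verbatim}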
\begin{figure}[ht]
\includegraphics[width=0.9\linewidth]{figs/C_G5_fold.pdf}
\caption{\label{fig:corr-eta-G-fold}The correlation function $C_{G\gamma_5}$, folded using its $\mathcal{T}$-odd property for every gauge configuration. The blue band shows the fitted result. The band does not perfectly match the data points because we fitted the temporal derivative $\partial_t C_{G\gamma_5}$ instead of the correlation function $C_{G\gamma_5}$ itself, so an overall constant is dropped in the fit result; the $\mathcal{T}$-odd property makes this constant negative in the $-t$ direction. The $x$-axis is shifted by 10 to show the near-zero behaviour.}
\end{figure}
\subsection{The $\Gamma=\gamma_4\gamma_5$ case}
As a cross check, we also carry out a similar calculation using $\Gamma=\gamma_4\gamma_5$ for the interpolating field operator of $\eta$. The corresponding correlation functions $C_{(\gamma_4\gamma_5)(\gamma_4\gamma_5)}(t)$ and $C_{G(\gamma_4\gamma_5)}(t)$ are calculated using Eq.~(\ref{eq:corrs}). In contrast to the case of $\Gamma=\gamma_5$, the correlation function $C_{G(\gamma_4\gamma_5)}(t)$ does not go to zero when $t\to 0$. This is similar to the study of the mixing of the pseudoscalar charmonium and the pseudoscalar glueball~\cite{Zhang:2021xvl}, and can be explained following the same logic, namely that the QCD $U_A(1)$ anomaly may play an important role here. Obviously, the operator $\mathcal{O}_{\gamma_4\gamma_5}$ has the same operator structure as the temporal component of the isoscalar axial vector current
$j^5_\mu(x)=\frac{1}{\sqrt{2}}\left[\bar{u}(x)\gamma_\mu\gamma_5 u(x)+\bar{d}(x)\gamma_\mu\gamma_5 d(x)\right]$, which satisfies the anomalous axial vector relation
\begin{equation}\label{eq:u1a}
\partial_\mu j^5_\mu(x) = 2m_q j^5(x) + \sqrt{2} q(x),
\end{equation}
where $j^5(x)=\frac{1}{\sqrt{2}}\left[\bar{u}(x)\gamma_5 u(x)+\bar{d}(x)\gamma_5 d(x)\right]$ is the pseudoscalar density and $q(x)=\frac{g^2}{32\pi^2} \epsilon^{\alpha\beta\rho\sigma} G_{\alpha\beta}^a G^a_{\rho\sigma}$ is the anomalous term stemming from the $U_A(1)$ anomaly with $g$ being the strong coupling constant and $G_{\alpha\beta}^a$ being the strength of color fields. Since $j^5(x)$ also has the same structure as $\mathcal{O}_{\gamma_5}$, based on the assumption in Eq.~(\ref{eq:create}) we expect
\begin{equation}
m_{G_i}^2 f_{G_i} =\langle 0|\partial_\mu j^5_\mu(0)|G_i\rangle \approx \sqrt{2} \langle 0|q(0)|G_i\rangle
\end{equation}
where $f_{G_i}$ is the {\it decay constant} of the pseudoscalar glueball $G_i$. Accordingly we have
\begin{equation}
\langle 0|j_4^5(0)|G_i(\vec{p}=0)\rangle \approx \frac{\sqrt{2}}{m_{G_i}} \langle 0|q(0)|G_i(\vec{p}=0)\rangle.
\end{equation}
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/C_G45.pdf}
\caption{\label{fig:corr-eta-G-45} The $\eta-G$ correlation function with $\Gamma=\gamma_4\gamma_5$, denoted $C_{G(\gamma_4\gamma_5)}$. The shaded band illustrates the fitting results using Eq.~(\ref{eq:cg45}). The fit window starts from $t=0$ in order to match the behavior of the correlator near zero. The $x$-axis is shifted by 5 to show the behaviour near zero.}
\end{figure}
Previous lattice studies show that pseudoscalar states can be accessed by the operator $q(x)$~\cite{Chen:2005mg,Chowdhury:2014mra}; thus the nonzero matrix element $\langle 0|q(0)|G_i\rangle$ implies the coupling $\langle 0| \mathcal{O}_{\gamma_4\gamma_5}|G_i\rangle \neq 0$. If we insist that the relation $\mathcal{O}_G^\dagger|0\rangle=\sum\limits_{i\neq 0}\sqrt{Z_{G_i}}|G_i\rangle$ still holds, then the correlation function $C_{G(\gamma_4\gamma_5)}(t)$ can be parameterized as
\begin{eqnarray}\label{eq:cg45}
C_{G(\gamma_4\gamma_5)}(t)&\approx& A e^{-m_3 t}\nonumber\\
&+&\sqrt{Z_G Z_{\gamma_4\gamma_5,1}}\frac{x_1 }{m_\eta-m_G} \left(e^{-m_1 t}-e^{-m_3 t}\right)\nonumber\\
&+&\sqrt{Z_{G^*} Z_{\gamma_4\gamma_5,1}}\frac{y_1 }{m_{\eta}-m_{G^*}} \left(e^{-m_1 t}-e^{-m_4 t}\right)\nonumber\\
&+& (t\to (T-t)~~ \mathrm{terms}),
\end{eqnarray}
which is similar to Eq.~(\ref{eq:gc_general}) apart from the first term due to the nonzero coupling $\langle 0|q(0)|G_i\rangle$ (here we ignore the contribution of excited $\eta$ states). Together with the correlation function
\begin{equation}\label{eq:gamma45-corr}
C_{(\gamma_4\gamma_5)(\gamma_4\gamma_5)}(t)=\sum\limits_i Z_{\gamma_4\gamma_5,i}\left(e^{-m_{\eta_i}t}+e^{-m_{\eta_i}(T-t)}\right)
\end{equation}
and the first equation of Eq.~(\ref{eq:gg-gamma5}), we carry out a jackknife data analysis similar to that of the $\Gamma=\gamma_5$ case and obtain the mixing energy $|x_1|=79(37)$ MeV and the corresponding mixing angle $\theta_1=2.6(1.2)^\circ$, which are consistent with the results of the $\Gamma=\gamma_5$ case but have larger errors. The parameters of the fitting procedure and the fit results are also listed in Table~\ref{tab:fit}, where they can be compared directly with the $\gamma_5$ case. In Fig.~\ref{fig:corr-eta-G-45}, the colored curves with error bands are plotted using the fitted parameters. The function forms mentioned above also describe the data very well. On this ensemble, it is clear that the results of the two cases are compatible with each other within errors.
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/C_etac_eta.pdf}
\caption{\label{fig:corr-etac-eta}Correlation functions $C_{\Gamma\Gamma_c}$ of the $\eta$ and $\eta_c$ operators. Different colors indicate different source ($\Gamma_c$) and sink ($\Gamma$) insertions.}
\end{figure}
\section{$\eta-\eta_c$ correlators}\label{sec:etac-eta}
With perambulators of charm and light quarks at hand, we also take a look at the correlation functions $C_{\Gamma\Gamma_c}(t)$ of the $\eta$ operators $\mathcal{O}_\Gamma$ and the $\eta_c$ operators $\mathcal{O}_{\Gamma_c}=\bar{c}\Gamma_c c$ with $\Gamma_c=\gamma_5$ and $\gamma_4\gamma_5$. These correlation functions receive contributions only from annihilation diagrams of charm and light quarks, and can be calculated conveniently through the corresponding perambulators. Fig.~\ref{fig:corr-etac-eta} shows the four correlation functions, where the labels show the $\Gamma\Gamma_c$ combinations of $C_{\Gamma\Gamma_c}(t)$. The signal-to-noise ratios are fairly good for all of the $C_{\Gamma\Gamma_c}(t)$, benefiting from the distillation method and the large statistics of our ensemble. In order to see their spectral structure clearly, the effective mass functions of these $C_{\Gamma\Gamma_c}(t)$ defined by Eq.~(\ref{eq:geffm}) are presented in Fig.~\ref{fig:mass-etac-eta}, where the horizontal grey line illustrates the $\eta$ mass $m_\eta a_t$ in lattice units, corresponding to the previously derived mass value $m_\eta=714.1(5.4)$ MeV. Despite their different behaviors at small $t$, all the effective masses approach $m_\eta a_t$ when $t$ is large. This is not surprising, since our lattice setup is unitary for the $u,d$ light quarks and there exist propagating modes made up of light quarks in time, with the lightest one necessarily being the $\eta$ state.
It is intriguing to understand the small-$t$ behaviors of $C_{\Gamma\Gamma_c}(t)$. Since the charm sea quarks are absent, our lattice setup is not unitary for charm quarks, and the discussion based on mixing models in Sec.~\ref{sec:II} does not apply here. Instead, we should consider the $n\bar{n}-c\bar{c}$ transition during the propagation, where $n$ refers to the $u,d$ light quarks. If we denote the states involving only light flavors (and gluons) by $\{|\eta^{(i)}\rangle, i=1,2,\ldots\}$ and the states involving $c\bar{c}$ by $\{ |\eta_c^{(i)}\rangle, i=1,2,\ldots\}$, then there are two types of contributions to $C_{\Gamma\Gamma_c}$. The first type includes the normal propagating modes of $|\eta^{(i)}\rangle$, since both $\mathcal{O}_\Gamma$ and $\mathcal{O}_{\Gamma_c}$ couple to these states in principle. The second type comes from the transitions between the $|\eta^{(i)}\rangle$ and $|\eta_c^{(i)}\rangle$ states that are developed by the valence charm quark propagating forward and backward in time, since $\mathcal{O}_\Gamma$ cannot generate $|\eta_c^{(i)}\rangle$ from the vacuum without charm quarks. If we introduce the interaction Hamiltonian
$\hat{H}_{I,ij}=|\eta^{(i)}\rangle x_{ij} \langle \eta_c^{(j)}|$ to describe the $n\bar{n}-c\bar{c}$ transitions that can take place at any time $t'\in [0,t]$, then the second type of contribution can be expressed as~\cite{Lee:1999kv,McNeile:2000xx,McNeile:2002az,McNeile:2002fh}
\begin{eqnarray}
C_{\Gamma\Gamma_c}^{(II)}(t)&=&\sum\limits_{ij}Z_{\Gamma,i}^\eta Z_{\Gamma_c,j}^{\eta_c*} \frac{x_{ij}}{m_{\eta_c^{(j)}}-m_{\eta^{(i)}}}\nonumber\\
&\times& \left( e^{-m_{\eta^{(i)}} t}-e^{-m_{{\eta_c}^{(j)}}t}\right),
\end{eqnarray}
after the summation of $t'\in [0,t]$, such that
\begin{equation}\label{eq:exprx}
C_{\Gamma\Gamma_c}(t)=\sum\limits_{i} Z_{\Gamma,i}^\eta Z_{\Gamma_c,i}^{\eta_c,*} e^{-m_{\eta^{(i)}}t}+C_{\Gamma\Gamma_c}^{(II)}(t)
\end{equation}
where $Z_{\Gamma,i}^X=\langle 0|\mathcal{O}_\Gamma|X^{(i)}\rangle$ and $Z_{\Gamma_c,i}^X=\langle 0|\mathcal{O}_{\Gamma_c}|X^{(i)}\rangle$ with $X$ referring to $\eta$ and $\eta_c$. This expression involves too many parameters to be suitable for a direct fit to the $C_{\Gamma\Gamma_c}(t)$ data. Nevertheless, when $t$ is large, it is dominated by the lowest state $\eta$, as manifested by Fig.~\ref{fig:mass-etac-eta}.
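To make the structure of Eq.~(\ref{eq:exprx}) concrete, the following minimal Python sketch evaluates the model truncated to one light state and one $c\bar{c}$ state; all masses, couplings and the transition amplitude are placeholder values in lattice units, not fitted numbers:
\begin{verbatim}
import numpy as np

def c_mixed(t, a_direct, a_trans, m_eta, m_etac):
    """Truncation of Eq. (eq:exprx): one eta and one eta_c state.

    a_direct: product of couplings for the direct eta propagation;
    a_trans : couplings times x / (m_etac - m_eta) for the transition.
    """
    direct = a_direct * np.exp(-m_eta * t)
    trans = a_trans * (np.exp(-m_eta * t) - np.exp(-m_etac * t))
    return direct + trans

t = np.arange(1, 64)
corr = c_mixed(t, a_direct=1.0, a_trans=0.05, m_eta=0.05, m_etac=0.21)
m_eff = np.log(corr[:-1] / corr[1:])  # plateaus at m_eta for large t
print(m_eff[-5:])
\end{verbatim}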
We make a final comment on the $\eta-\eta_c$ correlation functions we obtain: they have very good signals and encode some interesting information, such as the transition amplitudes $x_{ij}$. If optimized operators for individual $\eta^{(i)}$ and $\eta_c^{(j)}$ states are derived, the expression in Eq.~(\ref{eq:exprx}) can be drastically simplified so that the $x_{ij}$ can be extracted. Such explorations can be performed in the future.
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{figs/m_etac_eta.pdf}
\caption{\label{fig:mass-etac-eta}Effective mass functions of $C_{\Gamma\Gamma_c}$ defined by Eq.~(\ref{eq:geffm}). Different colors indicate different source ($\Gamma_c$) and sink ($\Gamma$) insertions. The grey band shows the fitted result of $m_\eta a_t$ with its error; all four functions exhibit plateaus near $m_\eta a_t$ at large $t$.}
\end{figure}
\section{Discussion and Summary}\label{sec:summary}
We generate a large gauge ensemble with $N_f=2$ degenerate $u,d$ dynamical quarks on a $16^3\times 128$ anisotropic lattice with aspect ratio $\xi=a_s/a_t\approx 5.3$. Based on the experimental observation that the mass-squared differences of the heavy-light vector and pseudoscalar mesons are insensitive to the quark masses, we propose a new scale-setting scheme in which the temporal lattice spacing $a_t$ is estimated from this quantity. In practice, we use the value $m_V^2-m_{PS}^2\approx 0.568~\mathrm{GeV}^2$, which is an average over $n\bar{n}$, $s\bar{n}$, $s\bar{s}$, $c\bar{n}$ and $c\bar{s}$ mesons with $n$ referring to the light $u,d$ quarks. Our $u,d$ quark mass parameter corresponds to a pion mass $m_\pi\approx 350$ MeV. It turns out that this scale-setting scheme gives reasonable physical results. We calculate the perambulators of the $u,d$ quarks and the valence charm quark on our ensemble using the distillation method, with which the quark annihilation diagrams can be calculated efficiently.
We calculate the mass of the isoscalar pseudoscalar meson $\eta$ to be $m_\eta=714.1(5.4)$ MeV. This mass value can be matched, through the Witten-Veneziano relation, to the $SU(3)$ flavor singlet pseudoscalar meson mass $m_{\eta_1}\approx 935$ MeV, which is in good agreement with the $\eta'$ mass $m_{\eta'}=958$ MeV once the $\eta-\eta'$ mixing is considered. We also calculate the correlation function of the $\eta_c$ operator and the $\eta$ operator. We observe that the contribution from $\eta$ dominates the large-$t$ behavior of the $\eta_c-\eta$ correlation functions, from which $m_\eta$ can be extracted correctly.
The mixing of $\eta$ and the pseudoscalar glueball is investigated for the first time on the lattice. From the correlation function of the glueball operator and $\eta$ operator, the mixing energy and the mixing angle are determined to be $|x_1|=107(14)$ MeV and $|\theta|=3.47(46)^\circ$ given the pseudoscalar glueball mass $m_G\approx 2.5$ GeV. This mixing angle is so tiny that the mixing effects can be neglected when the properties of $\eta$ (and also $\eta'$ in the physical $N_f=3$ case) are explored.
To summarize, we find that there is little mixing between the flavor singlet pseudoscalar ($\eta$ in our case) and the pseudoscalar glueball. The topology of the QCD vacuum plays a crucial role in the origin of the $\eta$ mass due to the QCD $U_A(1)$ anomaly. This is in accordance with common theoretical considerations~\cite{Bass:2018xmz}.
\vspace{0.5cm}
\begin{acknowledgements}
This work is supported by the National Key Research and Development Program of China (No. 2020YFA0406400), the Strategic Priority Research Program of Chinese Academy of Sciences (No. XDB34030302), and the National Natural Science Foundation of China (NNSFC) under Grants No.11935017, No.11775229, No.12075253, No.12070131001 (CRC 110 by DFG and NNSFC) and No.12175063. The Chroma software system~\cite{Edwards:2004sx} and QUDA library~\cite{Clark:2009wm,Babich:2011np} are acknowledged. The computations were performed on the HPC clusters at Institute of High Energy Physics (Beijing) and China Spallation Neutron Source (Dongguan), and the CAS Sunrise-1 computing environment.
\end{acknowledgements}
\section*{Appendix}\label{sec:appendix}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figs/a1mp_wilsonloop.pdf}
\caption{\label{fig:O_G}Wilson loop prototypes used to construct the pseudoscalar glueball operator~\cite{Morningstar:1999rf,Chen:2005mg}.}
\end{figure}
\subsection{Glueball operator construction}\label{appendix:oper}
The interpolation field operators for the pseudoscalar glueball are built based on the four prototypes of Wilson loops shown in Fig.~\ref{fig:O_G}. The Wilson loops of each prototype are calculated using smeared gauge links. We adopt six different schemes to smear the gauge links, which are different combinations of single-link and double-link smearing~\cite{Morningstar:1999rf,Chen:2005mg}. The spatial symmetry group on the lattice is the octahedral group $O$, whose irreducible representation $A_1$ corresponds to spin $J=0,4,\ldots$ in the continuum limit. Since the $J=4$ glueball is expected to be heavier than the $J=0$ state, we can build operators in the $A_1^{-+}$ representation to couple with the ground state pseudoscalar glueball. Let $W_\alpha(\mathbf{x},t)$ be one Wilson-loop prototype under a specific smearing scheme; then the $A_1^{-+}$ operator in the rest frame of a glueball can be obtained by
\begin{equation}
\phi_\alpha (t)=\sum\limits_{\mathbf{x}}\sum\limits_{R\in O} c_R^{A_1}\left[R\circ W_\alpha(\mathbf{x},t)-\mathcal{P}R\circ W_\alpha(\mathbf{x},t)\mathcal{P}^{-1}\right]
\end{equation}
where $R\circ W_\alpha$ refers to the differently oriented Wilson loop obtained by acting with one of the 24 elements $R$ of $O$ on $W_\alpha$, $\mathcal{P}$ is the spatial reflection operation, and $c_R^{A_1}$ are the combinational coefficients of the $A_1$ representation. Thus we obtain an $A_1^{-+}$ operator set $\{\phi_\alpha(t),\alpha=1,2,\ldots, 24\}$ based on the four prototypes and six smearing schemes. In order to get an optimized operator $O_G$ that couples most strongly to the ground state glueball, we first calculate the correlation matrix of the operator set,
\begin{equation}
C_{\alpha\beta}(t)=\frac{1}{T}\sum\limits_{\tau}\langle \phi_\alpha(t+\tau)\phi_\beta(\tau)\rangle,
\end{equation}
and then solve the generalized eigenvalue problem
\begin{equation}
C_{\alpha\beta}(t_1)w_\beta=\lambda C_{\alpha\beta}(t_0)w_\beta
\end{equation}
to get the eigenvector $w_\alpha$ of the largest eigenvalue $\lambda$, which serves as the combinational coefficients of $O_G$, namely,
\begin{equation}
O_G(t)=w_\alpha \phi_\alpha(t).
\end{equation}
In practice, we take $t_0=1$ and $t_1=2$, so that the optimized operator couples predominantly to the ground state pseudoscalar glueball.
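As a sketch of this variational step, the generalized eigenvalue problem can be solved with standard linear-algebra routines; the $24\times 24$ matrices below are random positive-definite stand-ins for the measured $C_{\alpha\beta}(t_0=1)$ and $C_{\alpha\beta}(t_1=2)$:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def optimized_weights(c0, c1):
    """Solve C(t1) w = lambda C(t0) w and return the eigenvector of the
    largest eigenvalue, which defines O_G = w_alpha phi_alpha."""
    evals, evecs = eigh(c1, c0)  # generalized symmetric eigenproblem
    w = evecs[:, np.argmax(evals)]
    return w / np.linalg.norm(w)

# Toy symmetric positive-definite stand-ins for the measured matrices.
rng = np.random.default_rng(1)
a = rng.normal(size=(24, 24))
b = rng.normal(size=(24, 24))
c0 = a @ a.T + 24.0 * np.eye(24)
c1 = 0.5 * c0 + 0.01 * (b @ b.T)
print(optimized_weights(c0, c1)[:4])
\end{verbatim}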
|
1,116,691,497,852 | arxiv | \section{\label{introduction} Introduction}
Gamma-ray lines carry unique information of prime importance for our
understanding of fundamental astrophysical questions such as the
origin of heavy elements or the mechanisms behind the spectacular
death of stars in supernovae \citep[see e.g.\ reviews
by][]{Prantzos05, Weidenspointner06}. Despite their paramount
interest, observations of gamma-ray lines have historically been plagued
by intense and complex instrumental backgrounds against which the much
smaller signals from celestial sources need to be discerned
\citep[e.g.][]{Weidenspointner05}. Recently, a novel experimental
technique, promising to improve the sensitivity of existing gamma-ray
telescopes by one to two orders of magnitude, has been demonstrated: the
Laue lens \citep{Halloin04, vonBallmoos04}.
A Laue lens takes advantage of Bragg diffraction in
crystals to concentrate incident gamma rays onto a detector
\citep{Lund92, Smither95, Halloin04, vonBallmoos04}. In this approach
it is possible to employ a large photon collecting area together with
a small detector volume, which results in a greatly increased
signal-to-background ratio and hence a greatly improved sensitivity.
MAX is a Laue lens telescope that has been proposed to the French
Space Agency CNES in response to an announcement of opportunity for a
formation flight demonstration mission \citep{Barriere06,
vonBallmoos06}. The MAX gamma-ray Laue lens consists of Ge and Cu
crystals and has a focal length of about 86~m. MAX is designed to
concentrate gamma rays in two 100~keV wide energy bands centered on
the two lines which constitute the prime astrophysical interest of the
mission: the 511~keV positron annihilation line, and the broadened
847~keV line from the decay of $^{56}$Co copiously produced in Type~Ia
supernovae.
Significant effort is being devoted to studying different types of
crystals that may be suitable for focusing gamma rays at nuclear line
energies \citep[e.g.][]{Abrosimov06, Courtois06, Smither06}. However,
to achieve the best possible performance of MAX, it is also necessary
to optimize the detector used to collect the source photons
concentrated by the lens. We address this need by applying proven
Monte Carlo and event reconstruction packages to predict the
performance of MAX for three different Ge detector concepts: a
standard co-axial detector, a stack of segmented detectors, and a
Compton camera consisting of a stack of strip detectors. We chose Ge
as detector material since it provides the best energy resolution for
line spectroscopy in the energy range of nuclear transitions. Each of
these detector concepts exhibits distinct advantages and disadvantages
regarding fundamental instrumental characteristics such as detection
efficiency or background rejection, which ultimately determine
achievable sensitivities. Our goal is to identify the most promising
detector concept for a Laue lens. We consider the expected
sensitivity to be the most important performance parameter, but also
include capabilities for spectroscopy, imaging, and polarimetry in our
final decision. The most promising detector concept will be studied in
more detail and optimized in the future. First advances in the design
of a Compton detector are presented in a companion paper by
\citet{Wunderer06b}.
\section{\label{sim_performance} Simulation of instrument performance}
This section provides a brief overview of the simulation and analysis
techniques that we employed to estimate by {\it ab
initio} Monte Carlo simulation the performance of three different
detector concepts for MAX. Our study benefited from experience gained
from the modelling of the performance of past or existing gamma-ray
missions such as TGRS on board {\sl Wind}, the Ramaty High Energy Solar
Spectroscopic Imager ({\sl RHESSI}), or SPI onboard {\sl INTEGRAL} by
Monte Carlo simulation \citep[see][ respectively]{Weidenspointner05,
Wunderer04, Weidenspointner03}, and the enhanced set of Monte Carlo and data
analysis tools developed for predicting the performance of various
instrumental concepts for a future {\it Advanced Compton Telescope}
\citep{Wunderer06a,Boggs06a}.
\subsection{\label{sim_performance_instr-sc-models} Instrument and
spacecraft models}
Among other inputs, the simulation of the performance of a gamma-ray
instrument requires a detailed computer description of the
experimental set-up under study. This so-called mass model specifies
the geometrical structure of instrument and spacecraft, the atomic
and/or isotopic composition of materials, and sets parameters that
influence the transport of particles in different materials.
\begin{figure}[t]
\begin{center}
\epsfig{figure=vac0_hiddenlines.ps,bbllx=149pt,bblly=177pt,bburx=411pt,bbury=651pt,clip=,width=6cm}
\end{center}
\caption{A side view of the MAX detector spacecraft model. The
spacecraft body is cylindrical. The detector is
situated on top of a 1~m tower. The circular radiator assumed to
passively cool the detector is clearly visible. Details are given in
the text.}
\label{overview_fig}
\end{figure}
The basic design of the MAX detector spacecraft and the mounting and
cooling of the detector concepts assumed in this study emerged from
the CNES pre phase A study of the MAX mission \citep{Barriere06,
vonBallmoos06}. This basic design is identical for all three detector
concepts, the only difference between the mass models is in the
definition of the detectors. As can be seen in
Fig.~\ref{overview_fig}, in each concept the detector is located on
top of a 1~m tower; passive cooling is provided by a Be radiator of
diameter 1~m.
The dimensions of the cylindrical spacecraft body are radius 60~cm and
height 192~cm. The spacecraft contains, among other components, tanks
for hydrazine propellant and for cold gas, various electronics boxes,
reaction wheels, and thrusters. The total mass of the spacecraft is
about 260~kg. The tower separating the detector from the spacecraft
body is intended to reduce possible instrumental background created in
the satellite structure. In addition, a 5~cm thick BGO (bismuth
germanate) crystal at the top of the tower and underneath the detector
(see Figs.~\ref{maxtgrs_fig} and \ref{maxnct_fig}) serves as active
(veto) and passive shield for the detector. The combined mass of the
tower structure, the BGO veto shield, the radiator, and various
detector electronics components is about 47~kg.
\begin{figure}[t]
\begin{center}
\epsfig{figure=max-tgrs_det-tower-section.xfig.ps,bbllx=122pt,bblly=244pt,bburx=484pt,bbury=570pt,clip=,width=8cm}
\end{center}
\caption{A section of the MAX-TGRS detector mass
model, including the top of the tower. The Ge crystal is surrounded by
a plastic veto dome at the sides and at the top, and by a BGO crystal
at the bottom.}
\label{maxtgrs_fig}
\end{figure}
To study the performance of a standard co-axial detector concept for
MAX (hereafter: MAX-TGRS) we resorted to the TGRS Ge detector flown on
the {\sl Wind} mission. The TGRS detector has been described by
\citet{Owens95}; our mass model is a modified version of the TGRS mass
model used by \citet{Weidenspointner05} for their detailed
instrumental background study. A section of the MAX-TGRS detector mass
model, including the top of the tower, is depicted in
Fig.~\ref{maxtgrs_fig}. Size, geometry, and material composition of Ge
crystal, cathode, and Al housing remained unchanged. For MAX-TGRS, the
radiative cooler of the original TGRS detector was removed. Instead, a
cold finger leading to the radiator was introduced, and miscellaneous
passive materials representing assumed detector support structure and
electronics were positioned below the detector housing. The detector
assembly is enclosed on the sides and at the top by a 0.5~cm thick
plastic veto shield, which is viewed by two photomultipliers
(PMTs). Also depicted in Fig.~\ref{maxtgrs_fig} are the BGO veto
shield beneath the detector assembly and the respective PMTs. Plastic
dome and BGO crystal cover all lines of sight to the Ge crystal. The
volume of the Ge crystal (height about 6.1~cm, radius about 3.4~cm) is
about 216~cm$^3$, the total mass of the detector assembly including
the plastic veto dome is about 3~kg.
The performance of a stack of segmented detectors and of a Compton
camera consisting of a stack of strip detectors was studied with the
exact same mass model, but different analysis procedures for the
simulated data (see Sec.~\ref{sim_performance_data_analysis}). Both
the segmented (hereafter: MAX-NCTseg) and the Compton detector
(hereafter: MAX-NCTcompt) concepts consist of a stack of five detector
modules modelled after the successfully tested Ge detectors of the
balloon borne {\it Nuclear Compton Telescope} \citep[NCT][]{Boggs06b}.
As can be seen in Fig.~\ref{maxnct_fig}, the basic layout of the
instrument geometry (or mass model) for these two concepts is the same
as that of MAX-TGRS: the detector assembly is located inside a plastic veto
dome, with the BGO veto shield below. Each of the five detector planes
is roughly $8 \times 8$~cm$^2$ in size, with a thickness of 1.5~cm,
yielding a total detector volume of about 480~cm$^3$. The gap between
adjacent detector planes was chosen to be 0.7~cm in our concepts. The
total mass of the detector assembly including the plastic veto dome is
about 6~kg.
Each of these detector concepts exhibits distinct advantages and
disadvantages. From a technical point of view, the MAX-TGRS concept is
simplest and easiest to realize, while MAX-NCTcompt is the most
complex and demanding. MAX-TGRS has only one detector channel,
MAX-NCTseg a few, MAX-NCTcompt a few hundred; consequently cooling
MAX-NCTcompt is much more challenging than MAX-TGRS. MAX-NCTcompt
offers superior background rejection capabilities, at
the price of reduced photopeak efficiency, because much more
information is available for each registered event than for the other
two concepts.
Finally, MAX-NCTcompt has the unique advantage of fine
spatial resolution, which is indispensable for realizing imaging and
polarimetry.
\begin{figure}[t]
\begin{center}
\epsfig{figure=maxnctseg_side-section.xfig.ps,bbllx=110pt,bblly=230pt,bburx=490pt,bbury=570pt,clip=,width=8cm}
\end{center}
\caption{A section of the MAX-NCTseg and MAX-NCTcompt detector mass
models, including the top of the tower. The stack of Ge detectors is
surrounded by a plastic veto dome at the sides and at the top, and by
a BGO crystal at the bottom.}
\label{maxnct_fig}
\end{figure}
We would like to emphasize that all three detector concepts are
conservative in the sense that we only used detector designs that have
already been flown and successfully operated in a space
environment.
However, the design of all three concepts can be improved, e.g.\ by
minimizing the amount of passive materials, by carefully selecting the
passive materials (e.g.\ elemental composition: carbon fiber instead
of Al structure), or by optimizing the geometry and amount of detector
material for the photon energies of interest to MAX. This is
particularly pertinent for the NCT detectors, which currently are
designed with emphasis on cost as well as reliability and robustness
for use in a balloon demonstration flight. We therefore expect our
performance estimates to be conservative, and that improvements of the
detector designs will result in improved performance.
\subsection{\label{sim_performance_lens-model} Lens beam and
effective area}
\begin{figure}[t]
\begin{center}
\epsfig{figure=MAXbeam_reduced.eps,width=8cm}
\end{center}
\caption{The focal spot distribution of photons concentrated by the
MAX Laue lens onto the detector plane. The three contour levels
indicate, with increasing radius, the detector surface exposed to
50\%, 75\%, and 90\% of all incident photons, respectively. More
details are given in the text.}
\label{maxbeam_fig}
\end{figure}
Estimating the performance of MAX detector concepts also requires a
model for the focal spot distribution of photons concentrated by the
Laue lens, and an estimate of its effective area (i.e.\ the
geometrical area times the diffraction efficiency).
For this study, both quantities were determined
by Monte Carlo simulation.
As described in more detail in \citet{Barriere06}, for this study the
lens crystals were assumed to have a geometrical size of 1.5~cm
$\times$ 1.5~cm and a mosaicity of 30$^{\prime\prime}$. The focal
length was assumed to be 86~m, and the source was assumed to be
located on the optical axis (i.e.\ the lens is pointed at the source).
For these parameters, about 50\% of the photons from an on-axis point
source that are diffracted by the lens are concentrated within 1~cm
from the center of the focal spot, as can be seen in
Fig.~\ref{maxbeam_fig} depicting the simulated focal spot
distribution. To actually perform the detector response simulations in
our study, the Laue lens focal spot distribution was introduced as a
new beam type named {\tt LRAD} into the MGGPOD Monte Carlo suite
described below in Sec.~\ref{sim_performance_mggpod}.
Unlike the focal spot distribution of the Laue lens design considered
here,
the effective area of the lens is a function
of energy: about 1191~cm$^2$ and 661~cm$^2$ at energies of 511~keV and
847~keV, respectively.
\subsection{\label{sim_performance_radenv-models} Radiation environment
models}
In the CNES pre phase A study it was concluded that MAX would be best
operated in either a high Earth orbit (HEO) or an L2 orbit
\citep{vonBallmoos06}. The instrumental background would then mainly
be due to two radiation fields, namely diffuse cosmic gamma rays and
Galactic cosmic rays. Both radiation fields were assumed to be
isotropic in our simulations. The spectrum of the diffuse cosmic
gamma-ray background was taken from the analytical description given
by \citet{Gruber99}. The spectrum and intensity of Galactic cosmic-ray
protons was modelled using the {\tt COSN} default solar minimum
spectrum of the MGGPOD package, which is based on the cosmic-ray
propagation models of
\citet{Moskalenko02}.
Galactic cosmic rays not only produce prompt background due to
hadronic interactions and de-excitations of excited nuclei, but also
produce radioactive isotopes whose decay gives rise to delayed
instrumental background. When simulating this delayed background, we
assumed that the instrument and spacecraft materials had been
irradiated for one year with the {\tt COSN} cosmic-ray proton spectrum
and intensity.
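The effect of this irradiation assumption can be illustrated with the standard saturation-buildup formula for a single isotope; the sketch below uses generic numbers (production rate normalized to unity, a 271-day half-life) purely for illustration and is not part of the MGGPOD simulation itself:
\begin{verbatim}
import numpy as np

def delayed_activity(prod_rate, half_life, t_irradiate, t_after=0.0):
    """Activity (decays/s) of one isotope produced at a constant rate
    prod_rate (atoms/s) after irradiating for t_irradiate seconds and
    waiting a further t_after seconds (all times in seconds)."""
    lam = np.log(2.0) / half_life
    buildup = 1.0 - np.exp(-lam * t_irradiate)
    return prod_rate * buildup * np.exp(-lam * t_after)

# After one year, an isotope with a 271 d half-life reaches ~60% of its
# saturation activity, so the one-year assumption matters for such lines.
print(delayed_activity(1.0, 271 * 86400.0, 365 * 86400.0))
\end{verbatim}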
\subsection{\label{sim_performance_mggpod} MGGPOD Monte Carlo suite}
We used the MGGPOD Monte Carlo package \citep{Weidenspointner05} to
simulate the response and the instrumental background expected for
each of the three different detector concepts for
MAX. MGGPOD is a user-friendly suite of Monte Carlo codes that is
available to the public from a site at
CESR\footnote{http://sigma-2.cesr.fr/spi/MGGPOD/}. MGGPOD is built
around the widely used GEANT3.21 package \citep{Brun95} and allows
simulation {\it ab initio} of the physical processes relevant for
estimating the performance of gamma-ray instruments. Of particular
importance is the production of instrumental backgrounds, which
include the build-up and delayed decay of radioactive isotopes as well
as the prompt de-excitation of excited nuclei, both of which give rise
to a plethora of instrumental gamma-ray background lines in addition
to continuum backgrounds. Among other packages, MGGPOD includes the
GLECS \citep{Kippen04} and GLEPS \citep{McConnell_Kippen04} packages for
simulating the effects of atomic electron binding and photon
polarization for Rayleigh and Compton scattering.
As mentioned in Sec.~\ref{sim_performance_lens-model}, for this study
a new beam type named {\tt LRAD} was introduced into the MGGPOD Monte
Carlo suite. This beam allows the user to define an azimuthally
symmetric incident photon flux which is characterized by its radial
profile. The direction of incidence is assumed to be identical for all
photons, rather than spread over the directions to the lens covering a
few degrees in the field-of-view, which is an approximation that
should not significantly affect our detector performance estimates at
this early stage of the study.
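Schematically, such an azimuthally symmetric beam can be generated by inverse-transform sampling of the radial profile. The short Python sketch below is not the MGGPOD implementation; the Gaussian-like profile is merely a placeholder tuned so that about 50\% of the photons fall within 1~cm of the spot center, as in Fig.~\ref{maxbeam_fig}:
\begin{verbatim}
import numpy as np

def sample_lrad(radial_profile, r_max, n_photons, rng):
    """Draw (x, y) positions for an azimuthally symmetric beam from a
    radial intensity profile, using the inverse-CDF method."""
    r = np.linspace(0.0, r_max, 2000)
    pdf = radial_profile(r) * 2.0 * np.pi * r  # weight by annulus area
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]
    r_s = np.interp(rng.random(n_photons), cdf, r)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_photons)
    return r_s * np.cos(phi), r_s * np.sin(phi)

rng = np.random.default_rng(0)
profile = lambda r: np.exp(-0.5 * (r / 0.85) ** 2)  # placeholder, cm
x, y = sample_lrad(profile, r_max=4.0, n_photons=100_000, rng=rng)
print(np.mean(np.hypot(x, y) < 1.0))  # ~0.5 of photons inside 1 cm
\end{verbatim}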
\subsection{\label{sim_performance_megalib} MEGAlib analysis package}
The complex event analysis for the MAX-NCTcompt detector concept was
performed with the MEGAlib package \citep{Zoglauer06}. Originally, it
had been developed for the MEGA Compton telescope prototype
\citep{Kanbach04}. The package provides the complete data analysis
chain for Compton telescopes, including the crucial steps of event
reconstruction and background rejection, which are described in more
detail in \citet{Zoglauer05, Zoglauer06, Wunderer06b} and references therein.
\subsection{\label{sim_performance_data_analysis} Data analysis}
We compared the performance of the three MAX detector concepts under
study for three different gamma-ray lines: narrow lines at 511~keV and
847~keV, and a broadened line (3\% full width at half maximum, FWHM, deemed
typical for Type~Ia supernovae) at 847~keV. For each concept we
simulated the instrumental response to these three lines for an
on-axis point source. We also simulated the instrumental backgrounds
due to diffuse cosmic gamma rays, to Galactic cosmic-ray protons at solar
minimum, and to the decay of radioactive isotopes resulting from
one year of cosmic-ray proton irradiation. For all three concepts
radioactive decays in the detectors and diffuse cosmic gamma rays were
found to be the dominant instrumental background components. In
comparison, the prompt cosmic-ray induced background is small, and the
background due to radioactive decays in the satellite structure is
even smaller.
Despite the fact that source photons are concentrated by the Laue lens
onto the detector, MAX is still largely background dominated (the
signal-to-noise ratio being on the order of several per cent), and we
calculated its sensitivity to an on-axis gamma-ray line point source
according to
\begin{equation}
\label{sens_bgddom_unknown_final}
f_{n_\sigma} = \frac{n_\sigma \cdot \sqrt{\sum_{i=1}^{n_b} b_i(\Delta
E)}} {A_{\rm eff} \cdot \epsilon(\Delta E) \cdot \sqrt{t_{tot}}}
\cdot \eta
\end{equation}
where $f_{n_\sigma}$ is the sensitivity in [ph~cm$^{-2}$~s$^{-1}$],
$n_\sigma$ is the statistical significance of the detection,
$\sum_{i=1}^{n_b} b_i(\Delta E)$ is the sum of all instrumental
background components in [cts~s$^{-1}$] in the analysis interval
$\Delta E$ centered on the line energy, $A_{\rm eff}$ is the effective
area of the Laue lens in [cm$^2$],
$\epsilon(\Delta E)$ is the photopeak efficiency, $t_{tot}$ is the
total effective observation time in [s], and $\eta$ is a factor in the
range 1--2 whose value depends on how the instrumental background
during an observation is determined. Ideally, the instrumental
background is known, and $\eta$ becomes 1. If the instrumental
background is determined through an on-off observation strategy, i.e.\
one half of the total effective observation time is spent pointing at
the source, and the other half pointing away from the source measuring
the instrumental background, $\eta$ is 2. An intermediate case can be
realized by operating two detectors simultaneously such that they
alternately point at the source and away from it; $\eta$ then assumes
a value of $\sqrt{2}$.
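As a back-of-the-envelope check, Eq.~\ref{sens_bgddom_unknown_final} can be evaluated directly; plugging in the 511~keV MAX-NCTcompt numbers listed in Tables~\ref{table_sens} and \ref{table_eff_bgd} below approximately reproduces the tabulated sensitivity range (small deviations come from the rounding of the table entries):
\begin{verbatim}
import numpy as np

def line_sensitivity(n_sigma, bkg_rate, a_eff, eff_peak, t_tot, eta):
    """Eq. (1): background-dominated line sensitivity in ph/cm^2/s."""
    return (n_sigma * np.sqrt(bkg_rate)
            / (a_eff * eff_peak * np.sqrt(t_tot)) * eta)

# 511 keV, MAX-NCTcompt: A_eff = 1191 cm^2, photopeak eff. = 6%,
# background = 1.0e-3 cts/s, t_tot = 1e6 s, 3 sigma detection.
for eta in (1.0, np.sqrt(2.0)):
    f = line_sensitivity(3.0, 1.0e-3, 1191.0, 0.06, 1.0e6, eta)
    print(f"eta = {eta:.2f}: {f:.2e} ph/cm^2/s")  # ~1.3e-6 and ~1.9e-6
\end{verbatim}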
For MAX-TGRS, data analysis is straightforward since the only event
selections that can be applied in the case of a single detector
crystal are the thresholds of the detector and of the veto shields and
the width of the analysis energy interval. We assumed the same
thresholds for all three concepts: 15~keV for the detector, and veto
thresholds of 70~keV and 200~keV for the BGO and plastic dome shields,
respectively. We assumed an energy resolution as measured for the SPI
detectors \citep{Lonjou05} for the MAX-TGRS detector.
In the MAX-TGRS concept it is impossible to separate source signal and
instrumental background from a single observation; an on-off pointing
strategy must be adopted, and $\eta = 2$ in
Eq.~\ref{sens_bgddom_unknown_final} when calculating the instrument
sensitivity. At best, two MAX-TGRS detectors could be operated
simultaneously; the minimum value of $\eta$ therefore is $\sqrt{2}$
for this concept.
\begin{table*}[t]
\caption{The sensitivity of three detector concepts for MAX for three
different gamma-ray lines. Sensitivities are for a statistical
significance of $3\sigma$ and a total effective observation time of
$10^6$~s. The effective area of the MAX Laue lens was assumed to be
1191~cm$^2$ and 661~cm$^2$ at 511~keV and 847~keV, respectively.
The quoted values pertain to the energy interval $\Delta E$,
centered on the line energy, that optimizes the sensitivity. The ranges
in sensitivity reflect the possible values of $\eta$ in
Eq.~\ref{sens_bgddom_unknown_final} as discussed in the text.
The range in sensitivity for MAX-NCTseg in addition includes the two
choices for treating energy deposits in unused detectors, which may be
ignored or used as additional veto.
}
\label{table_sens}
\begin{center}
\begin{tabular}[t]{lccc}
\hline
\noalign{\smallskip}
& MAX-TGRS & MAX-NCTseg & MAX-NCTcompt \\
\noalign{\smallskip}
\hline \hline
\noalign{\smallskip}
Line Energy [keV] & \multicolumn{3}{c}{Sensitivity
[$10^{-6}$~ph/cm$^2$/s]} \\
\noalign{\smallskip}
\hline \hline
\noalign{\smallskip}
511 & 4.2--6.0 & 2.5--4.6 & 1.3--1.8 \\
847 & 4.9--6.9 & 2.6--3.8 & 1.3--1.8 \\
847 (3\% FWHM) & 18--25 & 10--15 & 3.5--4.9 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[t]
\caption{The photopeak efficiencies and the background rates that went
into the calculation of the line sensitivities in
Table~\ref{table_sens}.}
\label{table_eff_bgd}
\begin{center}
\begin{tabular}[t]{lccc}
\hline
\noalign{\smallskip}
& MAX-TGRS & MAX-NCTseg & MAX-NCTcompt \\
\noalign{\smallskip}
\hline \hline
\noalign{\smallskip}
Line Energy [keV] & \multicolumn{3}{c}{Photopeak Efficiency [\%]} \\
\noalign{\smallskip}
\hline \hline
\noalign{\smallskip}
511 & 38 & $22-28$ & 6 \\
847 & 24 & $16-22$ & 6 \\
847 (3\% FWHM) & 27 & $17-24$ & 6 \\
\noalign{\smallskip}
\hline \hline
\noalign{\smallskip}
Line Energy [keV] & \multicolumn{3}{c}{Background Rate [cts/s]} \\
\noalign{\smallskip}
\hline \hline
\noalign{\smallskip}
511 & $2.1 \times 10^{-1}$ & $2.3\times10^{-2} -
6.5\times10^{-2}$ & $1.0\times10^{-3}$ \\
847 & $3.4 \times 10^{-2}$ & $4.2\times10^{-3} -
8.6\times10^{-3}$ & $2.6\times10^{-4}$ \\
847 (3\% FWHM) & $5.2 \times 10^{-1}$ & $8.0\times10^{-2} -
1.6\times10^{-1}$ & $2.1\times10^{-3}$ \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
\end{table*}
For MAX-NCTseg, data analysis is slightly more complicated. As
described in Sec.~\ref{sim_performance_lens-model}, for an on-axis
point source the Laue lens concentrates source photons onto a
relatively small focal spot. This can be exploited in the data
analysis by including the criterion that a valid event must
deposit energy in a cylindrical detector volume
centered on the optical axis in a selected set of detector layers;
events that do not deposit energy in these central detector volumes
are most likely instrumental background that should be rejected. We
implemented this simple scheme by assuming that each detector layer
consists of two segments or pixels: a cylindrical, central segment,
and a second segment comprising the remaining detector layer
volume.
Different values for the central radius were tried.
In addition, we varied in the analysis the number of detector layers
used to record source photons (source recording detector layers --
SRDLs) in order to estimate the optimum number of detector layers for
the MAX-NCTseg concept without performing a full simulation for each
possibility. In doing so, we also had to choose how to treat remaining
detector layers (background recording detector layers -- BRDLs):
these were either ignored or used as additional veto shields. A valid
event was required to deposit more than 15~keV in at least one central
pixel of the SRDLs
without any veto trigger. We assumed an energy resolution as
measured for the NCT detectors (S.\ Boggs, priv.\ comm.). For the same
reasons given for the MAX-TGRS concept, $\eta$ lies in the range
$\sqrt{2}$--2 for the MAX-NCTseg concept.
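In pseudo-Python, the MAX-NCTseg event selection described above reduces to a few cuts per simulated event. The threshold values are those quoted in the text; the event-record layout is an assumption of this sketch, not the actual simulation output format:
\begin{verbatim}
import numpy as np

def accept_nctseg(event, srdl_layers, r_central=1.1,
                  e_min=0.015, bgo_veto=0.070, dome_veto=0.200):
    """Valid event: > 15 keV in a central pixel of a source-recording
    layer (SRDL), no veto trigger; here BRDLs act as additional veto.
    Energies in MeV; event['hits'] is a list of (layer, x, y, E)."""
    if event["bgo"] > bgo_veto or event["dome"] > dome_veto:
        return False
    central = any(layer in srdl_layers and np.hypot(x, y) < r_central
                  and e > e_min for layer, x, y, e in event["hits"])
    brdl = any(layer not in srdl_layers for layer, *_ in event["hits"])
    return central and not brdl

evt = {"bgo": 0.0, "dome": 0.0,
       "hits": [(0, 0.4, -0.2, 0.511)]}  # one photopeak hit in layer 0
print(accept_nctseg(evt, srdl_layers={0, 1, 2}))  # True
\end{verbatim}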
Data analysis for MAX-NCTcompt was most complex and performed using
MEGAlib. Many logical criteria can be applied to decide whether the
energy deposits in the detector are consistent with an interaction
sequence of a photon originating from the Laue lens, including the
criterion that the first interaction needs to occur in a cylindrical
detector volume of a given radius centered on the optical axis in any
one of a selected set of detector layers
\citep[see][ for details]{Wunderer06b}.
The spatial pitch of the Ge strip detectors was assumed to be 2~mm in
the plane of the detectors, and 0.4~mm in depth (S.\ Boggs, priv.\
comm.). Again, we assumed an energy resolution as measured for the
NCT detectors.
The inherent imaging capabilities of the MAX-NCTcompt detector should
permit both the source signal and the instrumental background to be
measured in a single observation, as was the case for the imaging
Compton telescope COMPTEL \citep{Schoenfelder93}. Any need for
off-source observations is therefore obviated, and the total effective
observation time $t_{tot}$ can be spent pointing at the source. In
this case the value of $\eta$ is approximately 1 in
Eq.~\ref{sens_bgddom_unknown_final}. An exact determination of the
value of $\eta$ is difficult and beyond the scope of this paper. We
expect the number of data space bins free of source signal to exceed
that of data space bins containing source signal, hence $\eta$ should
be smaller than $\sqrt{2}$. We
conservatively adopt a range of
1--$\sqrt{2}$ for $\eta$ for the Compton detector concept.
\section{\label{results} Results}
The sensitivities of the three different MAX detector concepts for an
on-axis point source for three different gamma-ray lines are
summarized in Table~\ref{table_sens}. Sensitivities are for a
statistical significance of $3\sigma$ and a total effective
observation time of $10^6$~s; the ranges reflect the possible values
of $\eta$ in Eq.~\ref{sens_bgddom_unknown_final} as discussed
above. The effective area of the MAX Laue lens was assumed to be
1191~cm$^2$ and 661~cm$^2$ at 511~keV and 847~keV,
respectively. Sensitivity values are quoted for the best choices of
both the radius of the central pixel and of the width of the energy
interval $\Delta E$ centered on the line energy.
The range in sensitivity for MAX-NCTseg in addition includes the two choices for treating energy deposits in BRDLs, which may be ignored or used as additional veto; values are given for the optimal choice of SRDLs in each case. If energy deposits in
BRDLs are ignored, for the 511~keV line it is best to
use all five detector layers; for the 847~keV lines there is little
difference between using three, four, or five layers (values are
quoted for five layers). If BRDLs are used as
additional veto the sensitivity can be slightly improved. It is then
best to use only three detector layers in this case for both the
511~keV and the 847~keV lines. In either case the optimal central
pixel radius is about 1.1~cm.
For MAX-NCTcompt the best radial size of the detector region where the
first interaction needs to occur is slightly larger than for
MAX-NCTseg;
values range between 1.2 and 1.5~cm, depending on the details of the
event selections. The achieved sensitivity depends only weakly on the
choice of detector layers in which the first interaction needs to
occur. It seems that restricting the first interaction to the top four
layers is best.
As can be seen from Table~\ref{table_sens}, the Compton detector
concept MAX-NCTcompt offers the best sensitivity for each of the three
lines. In order to illustrate fundamental performance characteristics
such as detection efficiency or background rejection,
Table~\ref{table_eff_bgd} summarizes the photopeak efficiencies and
the background count rates corresponding to the choices for energy
band, detector layers, and event selections that optimize the
sensitivity for each of the three detector concepts and all three
lines under study. The ranges for the MAX-NCTseg concept reflect the
two different treatments of energy deposits in BRDLs
(the lower and upper bounds are obtained if BRDLs are
treated as additional veto or ignored, respectively). The photopeak
efficiencies for MAX-NCTseg do not fall far short of those obtained
with MAX-TGRS. Differences are due to the fact that photons can more
easily escape MAX-NCTseg than MAX-TGRS, and that only a fraction of
the incident photons interacts in one of the central pixels of the
segmented MAX-NCTseg detectors. In contrast, the MAX-NCTcompt
photopeak efficiency is much smaller. For this concept the rather low
photopeak efficiency is due to the severe event selections, which
result in many source events being rejected. Nevertheless, the
MAX-NCTcompt concept offers the best sensitivity because of its
superior capabilities for rejecting instrumental background, as can be
seen in Table~\ref{table_eff_bgd}.
\section{\label{summary} Summary and conclusion}
We have used {\it ab initio} Monte Carlo simulations to compare the
performance of three different Ge detector concepts for the MAX Laue lens
gamma-ray telescope: a standard co-axial detector, a stack of segmented
detectors, and a Compton camera consisting of a stack of strip
detectors. The performance was assessed for an on-axis point source in
three different gamma-ray lines: narrow lines at 511~keV and 847~keV,
and a broadened (3\% FWHM) line at 847~keV.
We find that the Compton detector concept MAX-NCTcompt offers the best
sensitivity for each of the three lines. The Compton concept also
offers other unique advantages over the other two concepts. Because of
their fine spatial resolution, the detectors of a Compton camera are
ideally suited to follow the inevitable small excursions of the focal
spot on the detector surface due to residual relative motions of the
lens and detector spacecraft; with a Compton camera one could also
adjust the size of the focal spot to the requirements of a given
observation during data analysis. The fine spatial resolution
necessary for Compton detectors is also required if the limited
imaging capabilites of a Laue lens are to be exploited, e.g.\ to
separate close point sources or to study the morphology of slightly
extended emission such as that from Galactic supernova remnants.
Finally, the complementary characteristics of a Laue lens and of a
Compton detector with respect to photon polarisation render their
combination a powerful polarimeter. At nuclear line energies a Laue
lens does not change the polarisation of the diffracted photons
\citep{Halloin_Bastie06, Halloin06}, while a Compton detector is
intrinsically well suited for performing polarimetry because of the
azimuthal variation of the scattering direction for linearly polarized
photons \citep{Lei97}. The combination of a Laue lens with a Compton
detector will thus open a new
observational window on many gamma-ray sources in which strong
magnetic fields are present, such as pulsars, or on jets expelled by
compact, accreting objects.
We therefore conclude that a Compton
camera is the most promising detector concept for MAX. We expect this
conclusion to apply not only to the three gamma-ray lines studied
here, but to all Laue lens gamma-ray telescopes proposed for the
nuclear line region, such as the Gamma-Ray Imager
\citep[GRI,][]{Knoedlseder06}.
Although not the primary focus of this study, it is still worth
pointing out that even with a rather conservative design of the
Compton camera that leaves still ample room for improvement, narrow
line sensitivities of about $10^{-6}$~ph~cm$^{-2}$~s$^{-1}$ are
possible with a relatively small mission such as MAX -- an improvement
over the currently best gamma-ray spectrometer SPI onboard the
INTEGRAL observatory of more than an order of magnitude.
There are many aspects in which the Compton camera studied here can be
improved. First steps towards
optimizing the design of the MAX-NCTcompt
detector are presented in a companion paper by
\citet{Wunderer06b} (there, MAX-NCTcompt is referred to as the SMALL
design). Possible improvements include
a revised design of the BGO
veto shield to decrease the instrumental background contribution of
cosmic diffuse photons, the reduction of passive materials around the
Ge wafers (passive material is a source of instrumental background and
in addition reduces the photopeak efficiency because some fraction of
the source photon's energy might be deposited there), the careful
selection of passive materials used (e.g.\ elemental
composition), the optimization of the geometry and the spatial and
spectral resolution of the Ge detectors to increase the photopeak
efficiency, or improvements of event reconstruction
algorithms. Efforts to optimize the performance of a Compton detector
for Laue gamma-ray lenses are ongoing \citep[see e.g.\ the companion
paper by][]{Wunderer06b} and will be reported in future
publications.
|
1,116,691,497,853 | arxiv | \section{Introduction}
Hybrid materials formed by carbon-conjugated molecules adsorbed on low-dimensional semiconductors and insulators have been attracting attention due to their structural versatility and electronic tunability.~\cite{Lee2014,zhen+16nano,breu+16pssrrl,Gobbi2018, dauk+19apx,mrky+19apl,rija+20jpcl,qiao+212DM,amst+21jpcl} Depending on their density on the substrate and on their physico-chemical characteristics, physisorbed moieties can introduce localized electronic states,~\cite{chou+17jpcc,zhon+18jpcl,wang-paul20pccp} dispersive bands,~\cite{rija+20jpcl} or a combination thereof.~\cite{cai+16cm,jing+jmca,krum-cocc21es} The electronic structure of the interface results from the level alignment between the organic and inorganic components~\cite{zhu+18sa,zhan+18am,aden-liu21jcp,park+21as,ye+21jpcl} and the hybridization between their electronic wave-functions.~\cite{song+17nano,shen-tao17ami,xie+19jpca,krum-cocc21es,guo+22nr} As both these effects depend on the intrinsic nature of the building blocks, systematic analyses of the electronic structure of hybrid systems are in high demand.
Electronic structure calculations based on density-functional theory (DFT) are particularly suited for this purpose~\cite{quek-khoo14acr,hofm+21pccp} and for exploring various material combinations without requiring empirical parameters. With the electron density being its central quantity, DFT grants immediate access to the charge redistribution induced by adsorption.~\cite{cai+16cm,jing+jmca,song+17nano,park+21am}
This way, it is possible to assess the type of ground-state doping and to gain insight into the spatial extension of the electron cloud at the interface. Furthermore, DFT calculations are able to deliver work functions, level alignments, band structures, and (projected) density of states, among other important properties.~\cite{cai+16cm,jing+jmca,zhen+16nano,park+21am,krum-cocc21es}
While state-of-the-art first-principles methods to obtain the electronic structure of solid-state materials are currently based on many-body perturbation theory,~\cite{drax+14acr,aden-liu21jcp} the choice of range-separated hybrid functionals to approximate the exchange-correlation potential in DFT offers the optimal trade-off between accuracy and computational costs.~\cite{park+21am,krum-cocc21es}
Proper inclusion of van der Waals interactions improves the prediction of structural arrangements and hence the description of electronic properties.~\cite{tkat+10mrs}
The level of accuracy currently achieved by such \textit{ab initio} calculations ensures reliable results complementary to experiments.~\cite{zhen+16nano,liu+17nl,park+21am}
In this work, we present a DFT study on the structural, energetic, and electronic properties of five representative organic molecules, including donor and acceptor compounds as well as a purely aromatic moiety, adsorbed on freestanding hexagonal boron nitride (hBN) and molybdenum disulfide (\ce{MoS2}) monolayers.
The former is a known insulator, widely used as a substrate and/or as an encapsulating material in low-dimensional heterostructures,~\cite{zhan+18am} which has been receiving increasing attention in surface and interface science~\cite{Auwarter2012, Lin2012, Gomez2013, Weng2016, Zhang2017, Kim2018, Auwarter2019}, for instance to sustain the growth of well-defined organic thin films.~\cite{krat+19jpd,matk+19afm,amst+21jpcl} \ce{MoS2} belongs to the family of transition-metal dichalcogenides, the most promising emerging class of low-dimensional semiconductors.
By performing geometry optimizations using the generalized-gradient approximation (GGA) and refining the analysis of the electronic structure using a range-separated hybrid functional, we rationalize how the nature of the constituents of the hybrid interface determines the level alignment and the projected density of states. Our findings offer useful indications to interpret and predict the electronic properties of similar low-dimensional hybrid interfaces from the character of substrates and adsorbates.
\section{Methods and Systems}
\subsection{Computational details}\label{Methods}
All results presented in this work are obtained from DFT~\cite{hohe-kohn64pr} electronic structure calculations through the solution of the Kohn-Sham equations.~\cite{kohn-sham65pr}
The structures are optimized at the GGA level of theory, using the Perdew-Burke-Ernzerhof (PBE) functional.~\cite{perd+96prl}
To compute electronic properties on each optimized structure, including densities of states, energy levels alignment and molecular orbitals, the Heyd–Scuseria–Ernzerhof (HSE06)~\cite{heyd+06} range-separated hybrid functional is adopted.
For all complexes with hBN as a substrate, we employ the Gaussian and plane-wave formalism, as implemented in the CP2K package.~\cite{cp2k2020} We choose the short-range-double-$\zeta$ MOLOPT basis sets~\cite{molopt2007} for the expansion of the valence electron density, while the interaction with the atomic cores is represented by Godecker-Teter-Hutter (GTH) pseudopotentials.~\cite{GTH1996, GTH1998, GTH2005}
The expansion of the density in an auxiliary plane waves basis is truncated at the kinetic-energy cutoff of 600~Ry.
The van der Waals (vdW) contributions are included either according to the Grimme-D3 scheme~\cite{grim06JCC} or by augmenting the exchange-correlation functional with the self-consistent rVV10 functional,~\cite{rvv10} which in combination with PBE is known to provide reliable structural properties for similar hybrid interfaces.~\cite{Iannuzzi2014}
We apply the quasi-Newtonian Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm to find the minimum on the potential energy surface, with a convergence criterion of 5$\times$10$^{-4}$~Ha/bohr in the energy gradients. The Brillouin zone is sampled at the $\Gamma$-point only.
For the \ce{MoS2}-based interfaces, we use the plane-wave expansion of the wave-functions and the electron density as implemented in the Quantum Espresso (QE) code,~\cite{gian+17jpcm} with a cutoff of 30 and 300 Ry respectively, and the projector augmented-wave method.~\cite{bloc94prb} BFGS optimization is carried out with a threshold for the interatomic forces of 5$\times$10$^{-4}$~Ha/bohr.
A uniform 6$\times$6$\times$1 \textbf{k}-point mesh is adopted to sample the Brillouin zone and vdW corrections are included according to the Grimme-D3 scheme.~\cite{grim06JCC}
\subsection{Model systems}
We consider two-dimensional (2D) hybrid interfaces formed by five carbon-conjugated organic molecules physisorbed on monolayer hBN and \ce{MoS2}.
The organic molecules considered in this study exhibit different electronic characteristics: tetrathiafulvalene (TTF) and 2,2'-bithiophene (2T) are known to act as donors, while 7,7,8,8-tetracyanoquinodimethane (TCNQ) and its tetrafluorinated derivative (\ce{F4-TCNQ}) are strong acceptors;~\cite{sun+19am} for comparison, we additionally consider pyrene,~\cite{picc+19jpcl,herp+21jpca} a polycyclic aromatic hydrocarbon~\cite{dias85acr} of similar size as the aforementioned molecules.
The hybrid model interfaces are constructed by placing one molecule on top of the two-dimensional material, with its backbone parallel to the substrate, and running a geometry optimization (see Figs.~\ref{fig_bn_1} and \ref{fig_mos2_1}).
hBN is modelled in a 6$\times$6 supercell, where we adopted the experimental lattice constant for the unit cell, $a=2.5$~\AA{}.
For \ce{MoS2}, we used a 4$\times$4 supercell with unit-cell lattice parameter $a = 3.19$~{\AA}.
A sufficiently large amount of vacuum (20~\AA{} with \ce{MoS2} and 40~\AA{} with hBN) above the interfaces prevents spurious interactions between the periodic replicas under the applied periodic boundary conditions.
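As an illustration of this setup, the \ce{MoS2} supercell can be assembled with a few lines of Python; the use of ASE here is our own choice for the sketch (any structure builder works), with the lattice constant and vacuum taken from the values above:
\begin{verbatim}
from ase.build import mx2

# 4x4 MoS2 (2H) supercell with a = 3.19 A, as described above; the
# 6x6 hBN cell with a = 2.5 A can be assembled analogously.
slab = mx2(formula="MoS2", kind="2H", a=3.19, size=(4, 4, 1))
slab.center(vacuum=10.0, axis=2)  # 10 A on each side: 20 A in total
print(slab.cell.lengths())        # in-plane lengths = 4 * 3.19 A
\end{verbatim}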
\section{Results and Discussion}
\subsection{Structural Properties}\label{structures}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{hBN-structures.png}
\caption{ {\small (a) A 6$\times$6 supercell of hBN monolayer; top and side views of the hybrid interfaces formed by (b) tetrathiafulvalene (TTF), (c) bithiophene (2T), (d) pyrene, (e) tetracyanoquinodimethane (TCNQ), and (f) fluorinated TCNQ (\ce{F4-TCNQ}) adsorbates.}}
\label{fig_bn_1}
\end{figure}
All molecules adsorb approximately flat on top of hBN, thus maximising dispersion interactions.
The distance of the molecular species from the substrate plane ranges from 3.3 to 3.4~{\AA}.
Upon adsorption, the molecular structures do not change appreciably compared to the gas-phase configurations. Exceptions are a concave bending of TTF by about 11$^{\circ}$ towards the substrate (see Fig.~\ref{fig_bn_1}b), in contrast with previous results on metallic surfaces,~\cite{Wang2011, Kretz2021} where the molecule bends in a convex fashion due to the strong interactions with the metal electronic charge density. Furthermore, 2T undergoes a backbone ``twist'' with a dihedral angle of 7$^{\circ}$ (Fig.~\ref{fig_bn_1}c).
Finally, hBN is subject to a slight rippling as a result of the attractive $\pi$-$\pi$ interactions with the physisorbed molecules.
Corresponding values of 0.21~\AA{}, 0.23~\AA{}, 0.27~\AA{}, and 0.29~\AA{} are found in the heterostructures with TTF and 2T, with TCNQ, with \ce{F4-TCNQ}, and with pyrene, respectively, see Fig.~\ref{fig_bn_1}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{MoS2-structures.png}
\caption{ {\small (a) A 4$\times$4 supercell of \ce{MoS2} monolayer; top and side views of hybrid interfaces formed by (b) tetrathiafulvalene (TTF), (c) bithiophene (2T), (d) pyrene, (e) tetracyanoquinodimethane (TCNQ) and (f) fluorinated TCNQ (\ce{F4-TCNQ}) adsorbates.}}
\label{fig_mos2_1}
\end{figure}
In the hybrid heterostructures including the \ce{MoS2} substrate (Fig.~\ref{fig_mos2_1}a), the donor molecules TTF and 2T exhibit the same concave bending and backbone twisting as in the hBN-based ones discussed above (see Fig.~\ref{fig_mos2_1}b,c). As a result, in these molecules, the hydrogen atoms are closer to the substrate than the carbon atoms, at a distance of 3.39~\AA{}.
The acceptors TCNQ and \ce{F4-TCNQ} are slightly bent, too, when physisorbed on \ce{MoS2}, with the nitrogen atoms pointing towards the substrate and being displaced 0.2~\AA{} downwards with respect to the backbone plane lying at 3.39~\AA{} above the monolayer (see Fig.~\ref{fig_mos2_1}e,f). This behavior is analogous to the one exhibited by these molecules on ZnO,~\cite{xu+13prl} on graphene,~\cite{kuma+17acsn} and on the hydrogenated Si(111) surface.~\cite{wang+19aem,jaco+20apx}
Finally, pyrene, which is planar in the gas phase,~\cite{dias85acr} remains such also upon adsorption, and lays at a distance of 3.32~\AA{} from \ce{MoS2}.
\subsection{Energetics}
\begin{table*}[h!]
\caption{Adsorption energy ($E_\text{ads}$) calculated for the hBN-based heterostructures at the PBE-vdW level, using both the D3 and rVV10 schemes for the vdW contributions; interaction energy ($E_\text{int}$) and dispersion energy ($E_\text{disp}$) computed at the PBE-rVV10 level of theory. All values are in eV.}
\begin{centering}
\begin{tabular}{lcccc}
\hline
System & $E_{\text{ads}}$(D3) & $E_{\text{ads}}$(rVV10) & $E_{\text{int}}$(rVV10) & $E_{\text{disp}}$(rVV10)\\
\hline
TTF@hBN & -0.96 & -1.06 & -1.08 & -1.09 \\
2T@hBN & -0.85 & -0.96 & -0.98 & -1.02\\
Pyrene@hBN & -1.13 & -1.30 & -1.33 & -1.44 \\
TCNQ@hBN & -1.04 & -1.21 & -1.24 & -1.31 \\
\ce{F4-TCNQ}@hBN & -1.14 & -1.41 & -1.45 & -1.49 \\
\hline
\end{tabular}
\label{tab_bn_ads}
\end{centering}
\end{table*}
In order to quantify the energetic stability of the considered hybrid heterostructures, we introduce the adsorption energy defined as:
\begin{equation}\label{eq_ads}
E_{\text{ads}} = E^{\text{opt}}_{\text{mol@surf}} - E^{\text{opt}}_{\text{surf}} - E^{\text{opt}}_{\text{mol}},
\end{equation}
where the superscript ``opt'' refers to the optimized geometries and the subscripts ``mol'' and ``surf'' stand for the molecular and surface total energies, respectively.
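Equation~\ref{eq_ads} translates directly into a one-line computation; in the sketch below, the three total energies are placeholders chosen to reproduce the TTF@hBN value of Table~\ref{tab_bn_ads}, not outputs of an actual calculation.
\begin{verbatim}
# Adsorption energy following Eq. (eq_ads); energies (eV) are placeholders.
E_opt_mol_at_surf = -1234.56   # optimized hybrid interface (hypothetical)
E_opt_surf        = -1100.00   # optimized pristine monolayer (hypothetical)
E_opt_mol         =  -133.50   # optimized gas-phase molecule (hypothetical)

E_ads = E_opt_mol_at_surf - E_opt_surf - E_opt_mol
print(f'E_ads = {E_ads:.2f} eV')   # -1.06; negative = stable adsorption
\end{verbatim}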
In the hBN-based interfaces, the adsorption strength increases from the donor-like systems to the acceptors (see Table~\ref{tab_bn_ads}), with \ce{F4-TCNQ}, the most electron-withdrawing molecule among the considered ones, leading to the most stable heterostructure precisely on account of this characteristic.~\cite{Greber2018, Auwarter2019}
To better characterize the nature of the molecule-substrate interactions in the considered hybrid interfaces, it is convenient to single out the dispersion contribution from the interaction strength, by introducing the interaction energy
\begin{equation}\label{eq_int}
E_{\text{int}} = E^{\text{opt}}_{\text{mol@surf}} - E_{\text{surf}} - E_{\text{mol}},
\end{equation}
where $E_{\text{surf}}$ and $E_{\text{mol}}$ are the single-point energies computed for the individual subsystems taken with the same coordinates as in the optimized complex. The dispersion contribution to each term is defined as the energy difference at fixed coordinates between a calculation with the vdW correction and one without it. The final contribution to the adsorption is given by the dispersion energy, defined as:
\begin{equation}\label{eq_disp}
E_{\text{disp}} = E^{\text{disp}}_{\text{mol@surf}} - E^{\text{disp}}_{\text{surf}} - E^{\text{disp}}_{\text{mol}}.
\end{equation}
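The decomposition of Eqs.~\ref{eq_int} and \ref{eq_disp} can likewise be scripted. In the sketch below, each pair holds the single-point energies computed with and without the vdW correction at the fixed coordinates of the optimized complex; the values are placeholders chosen to match the TTF@hBN row of Table~\ref{tab_bn_ads}.
\begin{verbatim}
# Interaction/dispersion decomposition; all energies (eV) are placeholders.
# Each pair: (single-point energy with vdW, single-point energy without).
E_complex = (-1234.56, -1233.26)
E_surf    = (-1100.02, -1099.87)
E_mol     = ( -133.46,  -133.40)

def disp(pair):
    """Dispersion contribution at fixed coordinates: E(vdW) - E(no vdW)."""
    return pair[0] - pair[1]

E_int  = E_complex[0] - E_surf[0] - E_mol[0]
E_disp = disp(E_complex) - disp(E_surf) - disp(E_mol)
print(f'E_int  = {E_int:.2f} eV')    # -1.08, cf. TTF@hBN in the table
print(f'E_disp = {E_disp:.2f} eV')   # -1.09
\end{verbatim}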
As expected, in the case of the hBN-based heterostructures, the dispersion contribution turns out to be predominant (see Table~\ref{tab_bn_ads}), confirming that no chemical bond is formed between the molecules and the monolayer. The small, yet noticeable, differences between adsorption and interaction energies (20--40 meV) indicate that the charge distribution and also the original geometries of both molecules and substrate are slightly perturbed upon physisorption.
When comparing the interaction energy with the dispersion contribution, one observes that the latter is slightly more negative. This result points to a minor destabilization effect due to distortions upon molecular adsorption.
Indeed, the interaction term must include some repulsive (Pauli) contributions owing to the overlap of the electronic distributions of molecule and substrate, whereas the dispersion part is purely attractive.
Depending on the choice of vdW functional, the relative magnitude of dispersion \textit{vs.} interaction may somewhat vary, but our comparison between two approaches demonstrates the same qualitative picture (see Table~\ref{tab_bn_ads}).
In both cases, all adsorption energies lie between -0.9 and -1.4~eV and the relative trends in stability are the same.
The following electronic-structure calculations involving the hBN substrate are then restricted to the rVV10 approach only, which proved to yield reliable adsorption and structural properties.~\cite{Iannuzzi2014}
The adsorption of TCNQ and TTF on hBN was investigated in a previous work by Tang and coworkers~\cite{Tang2011} who applied DFT with the PBE functional and no additional vdW correction. The resulting adsorption energies are -0.112~eV and -0.041~eV, respectively, \textit{i.e.}, significantly weaker due to the missing dispersion contribution.
\begin{table}[h!]
\begin{tabular}{ l c c c }
\hline
System & $E_{\text{ads}}$(D3) & $E_{\text{int}}$(D3) & $E_{\text{disp}}$(D3) \\ \hline
TTF@MoS$_2$ & -0.91 & -0.91 & -0.94 \\
2T@MoS$_2$ & -0.77 & -0.77 & -0.82\\
Pyrene@MoS$_2$ & -1.02 & -1.02 & -1.17 \\
TCNQ@MoS$_2$ & -0.88 & -0.88 & -0.97 \\
\ce{F4-TCNQ}@MoS$_2$ & -0.97 & -0.97 & -1.01\\
\hline
\end{tabular}
\caption{Adsorption energy ($E_\text{ads}$), interaction energy ($E_\text{int}$) and dispersion energy ($E_\text{disp}$) for the \ce{MoS2}-based heterostructures computed at the PBE-Grimme-D3 level of theory. All values are in eV.}
\label{table:abs_energy}
\end{table}
Moving now to the \ce{MoS2}-based interfaces, we find a qualitatively similar trend in the adsorption energies as the one discussed above for the heterostructures with hBN (see Table~\ref{table:abs_energy}).
Among the considered systems, the least stable one is 2T@\ce{MoS2}, due to the twisted backbone of the molecule that reduces the attractive $\pi$-$\pi$ interactions with the substrate. Unsurprisingly, the most negative value of $E_{\text{ads}}$ is found for pyrene, which adsorbs flat on \ce{MoS2} (see Fig.~\ref{fig_mos2_1}d).
On the other hand, in all \ce{MoS2}-based heterostructures, adsorption and interaction energies exhibit differences on the order of 10$^{-3}$~eV, as a sign of negligible energy relaxation of the molecules and of the \ce{MoS2} monolayer when the hybrid interfaces are formed. These variations are one order of magnitude smaller than those computed for the hBN-based interfaces (see Table~\ref{tab_bn_ads}).
A reason for these contrasting behaviors can be ascribed to the chemical nature of the two substrates: while hBN is characterized by a N-rich surface, \ce{MoS2} has a S-rich one. Such a stark distinction in the composition of the two inorganic materials affects the affinity of the adsorbates towards them. Indeed, N-containing molecules such as TCNQ and its fluorinated sibling adsorb more favorably on hBN than the S-rich TTF and 2T, likely as a consequence of orbital overlap between atoms of the same kind.
The values of dispersion energies shown in Table~\ref{table:abs_energy} also exhibit a qualitative difference with respect to their counterparts in Table~\ref{tab_bn_ads}, namely, the dispersion contribution for pyrene on \ce{MoS2} is larger than the one for \ce{F4-TCNQ}.
This behavior can be explained again based on the chemical affinity argument presented above.
\subsection{Electronic Properties}\label{electronic}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{LA_hBN.png}
\caption{Energy level alignment computed for the hBN-based hybrid interfaces using the HSE06+rVV10 hybrid functional.}
\label{fig_bn_la}
\end{figure}
In the last part of our analysis, we inspect the electronic properties of the considered hybrid interfaces analyzing in particular the energy level alignment and the projected density of states.
Again, we start from the hybrid systems including hBN.
Like its bulk counterpart,~\cite{Watanabe2004,blas+95prb,arna+06prl,aggo+18prb} monolayer hBN is an insulator,~\cite{elia+19natcom} with a computed quasi-particle band-gap above 7~eV.~\cite{galv+16prb,pale+182dm}
Our result obtained from DFT with the HSE06 hybrid functional (6.08~eV, see Fig.~\ref{fig_bn_la}) underestimates that value but significantly improves upon the one obtained from local DFT.~\cite{pale+182dm}
The agreement with experimental references is also very good.~\cite{Watanabe2004, Cassabois2016, Auwarter2019}
The large electronic gap of hBN and the absolute energies of its band edges determine the alignment with respect to the molecular frontier levels (Fig.~\ref{fig_bn_la}).
Both frontier states of TTF, 2T, and pyrene fall within the energy gap of hBN, leading to a type-I lineup.
In these three interfaces, the frontier levels of the hybrid system thus lie within the band-gap of hBN; however, they are systematically downshifted by a few hundred meV with respect to the frontier states of the isolated molecules.
In the interfaces including TCNQ and \ce{F4-TCNQ}, instead, the highest-occupied molecular orbital (HOMO) of the gas-phase molecules lies below the valence-band maximum (VBM) of free-standing hBN, giving rise to a type-II level alignment.
In these cases, the highest-occupied (lowest-unoccupied) level of the hybrid interface is downshifted (upshifted) by a few tens of meV with respect to the respective counterpart in the isolated monolayer (molecule), see Fig.~\ref{fig_bn_la}.
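The assignment of type-I versus type-II lineups follows a simple rule on the vacuum-referenced frontier energies, as sketched below; the numbers are illustrative placeholders rather than our computed values, and the function assumes the substrate gap is the wider one, as it is for hBN.
\begin{verbatim}
# Sketch of the level-alignment classification; energies (eV, referenced
# to the vacuum level) are illustrative placeholders.
def alignment_type(homo, lumo, vbm, cbm):
    """Type-I if both molecular frontier levels fall inside the
    substrate gap (assumed wider), type-II otherwise."""
    if vbm <= homo and lumo <= cbm:
        return 'type-I (straddling)'
    return 'type-II (staggered)'

print(alignment_type(homo=-5.0, lumo=-1.5, vbm=-6.5, cbm=-0.4))  # donor-like
print(alignment_type(homo=-7.5, lumo=-4.5, vbm=-6.5, cbm=-0.4))  # acceptor
\end{verbatim}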
\begin{figure}[h!]
\includegraphics[width=0.9\textwidth,clip=true]{PDOS_HSE_hBN.png}
\caption{Projected density of states for the hBN-based hybrid inorganic-organic systems (HIOS, black solid lines), including (a)-(b) the donors, TTF and 2T, (c) the aromatic molecule pyrene, and (d)-(e) the acceptors, TCNQ and \ce{F4-TCNQ}, calculated at the HSE06+rVV10 level of theory and compared against the results obtained for the isolated constituents shown by dashed lines (hBN) and gray areas (molecules). The contributions of the molecules within the hybrid interfaces are depicted by colored areas. A broadening of 500 meV is applied in all plots. The energy scale is offset to the vacuum level ($E_{vac}$).}
\label{fig_bn_pdos}
\end{figure}
The plots of the projected density of states (PDOS) reported in Fig.~\ref{fig_bn_pdos} confirm the picture rendered by Fig.~\ref{fig_bn_la}. Furthermore, they visually show that the localization of the frontier states reflects the energetic lineup of the electronic levels.
For a more detailed analysis, we include in Fig.~\ref{fig_bn_pdos} also the density of states of the isolated constituents.
For further comparison, the contributions of the molecules within the electronic structure of the hybrid interfaces are shown, too.
By inspecting these results, we identify two concomitant effects in the PDOS of the heterostructures.
First, the energy levels of the physisorbed molecules undergo a shift with respect to their counterparts in gas-phase.
As the direction of this shift depends on the electron-donating (downwards) or -accepting (upwards) character of the molecule, we can rationalize this effect in terms of charge transfer.
With the moiety releasing or withdrawing electrons to or from the substrate, an interfacial dipole is formed.
For the chosen molecules, the electron-donating character of the donors is stronger in magnitude than the electron-withdrawing ability of the acceptors.
As a result, the frontier levels of TTF, 2T, and pyrene are subject to a downshift of a few hundred meV, up to 0.5~eV; those of TCNQ and its perfluorinated counterpart undergo instead an upshift of the order of 100~meV.
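The connection between the interfacial dipole and these level shifts can be made semi-quantitative through the Helmholtz relation, $\Delta\phi = \mu_z/(\varepsilon_0 A)$, where $\mu_z$ is the out-of-plane dipole per adsorbate and $A$ the supercell area. The sketch below, with a hypothetical dipole of 0.5~D that is not extracted from our calculations, yields a shift of the order of 100~meV, in line with the magnitudes quoted above.
\begin{verbatim}
# Order-of-magnitude estimate of the vacuum-level shift from an interface
# dipole (Helmholtz relation). The dipole value is hypothetical.
eps0 = 8.8541878128e-12             # vacuum permittivity (F/m)
debye = 3.33564e-30                 # 1 Debye in C*m
mu_z = 0.5 * debye                  # hypothetical out-of-plane dipole
A = (6 * 2.5e-10)**2 * 3**0.5 / 2   # area of the 6x6 hBN supercell (m^2)

dphi = mu_z / (eps0 * A)            # potential step in volts
print(f'level shift ~ {1e3 * dphi:.0f} meV')   # about 100 meV
\end{verbatim}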
The second effect disclosed by Fig.~\ref{fig_bn_pdos} is the electronic hybridization between the molecular orbitals and the hBN bands, which is particularly evident in the valence region of TTF@hBN, 2T@hBN, and, to a lesser extent, of pyrene@hBN (Fig.~\ref{fig_bn_pdos}a-c), as well as in the conduction region of the interfaces hosting the molecular acceptors (Fig.~\ref{fig_bn_pdos}d-e).
With the partial exception of the lowest-unoccupied molecular orbital (LUMO) of TTF and the HOMO of \ce{F4-TCNQ}, the frontier states of the hybrid systems do not hybridize with the hBN bands.
Moving now to the electronic properties of the \ce{MoS2}-based hybrid interfaces, we notice that all these systems exhibit a type-II level alignment, with the band edges of the heterostructures being determined by the electron-donating ability of the adsorbed molecule (see Fig.~\ref{Fig:LA-MoS2}).
Upon adsorption of TTF, 2T, and pyrene, the highest-occupied state of the interface corresponds to the HOMO of the adsorbate, whereas the lowest-unoccupied one is given by the conduction-band minimum (CBM) of free-standing \ce{MoS2}.
On the contrary, the LUMO of the molecular acceptors, TCNQ and \ce{F4-TCNQ}, falls within the energy gap of \ce{MoS2}, whereas the HOMO of these molecules lies below the VBM of the 2D material, in agreement with the known behavior of electron-accepting molecules on this type of substrate.~\cite{jing+jmca}
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{LA_MoS2.png}
\caption{Energy level alignment computed for \ce{MoS2}-based hybrid interfaces using the HSE06+D3 functional.}
\label{Fig:LA-MoS2}
\end{figure}
The PDOS calculated for the \ce{MoS2}-based interfaces and reported in Fig.~\ref{Fig:PDOS-MoS2} illustrate well the distribution of the molecular states of the adsorbates with respect to the electronic bands of the substrate.
In the occupied region, hybridization between \ce{MoS2} states and molecular orbitals can be seen especially for the interfaces including the donor molecules and pyrene (Fig.~\ref{Fig:PDOS-MoS2}a-c).
This effect manifests itself as a broadening of the peaks associated with molecular states, which are no longer $\delta$-like maxima as in the isolated counterpart.
On the other hand, acceptor molecules do not exhibit any signs of hybridization with the \ce{MoS2} bands, at least in the energy window displayed in Fig.~\ref{Fig:PDOS-MoS2}d-e.
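The hybridization-induced smearing discussed here should not be confused with the artificial Gaussian broadening applied to the plots; the sketch below shows how discrete, $\delta$-like molecular levels are turned into a smooth curve by such a broadening, using hypothetical level energies.
\begin{verbatim}
# Gaussian broadening of delta-like molecular levels into a smooth curve,
# as applied to the PDOS plots (50 meV here). Levels are hypothetical.
import numpy as np

def broadened_dos(levels, sigma=0.05, npts=2000):
    """Sum of unit-area Gaussians centred at the given energies (eV)."""
    grid = np.linspace(min(levels) - 1.0, max(levels) + 1.0, npts)
    dos = np.zeros_like(grid)
    for e0 in levels:
        dos += np.exp(-0.5 * ((grid - e0) / sigma)**2) \
               / (sigma * np.sqrt(2.0 * np.pi))
    return grid, dos

grid, dos = broadened_dos([-6.2, -5.4, -4.9])   # e.g. HOMO-2 ... HOMO
\end{verbatim}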
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{PDOS_HSE_MoS2.png}
\caption{Projected density of states of all \ce{MoS2}-based hybrid inorganic-organic systems (HIOS, black solid lines), including (a)-(b) the donor molecules TTF and bithiophene, (c) the aromatic molecule pyrene, and (d)-(e) the molecular acceptors, TCNQ and \ce{F4-TCNQ}, calculated at the HSE06+D3 level of theory and compared against the results obtained for the isolated constituents shown by dashed lines (\ce{MoS2}) and gray areas (molecules). The contributions of the molecules within the hybrid interfaces are depicted by colored areas. A broadening of 50 meV is applied in all plots. The energy scale is offset to the vacuum level ($E_{vac}$).}
\label{Fig:PDOS-MoS2}
\end{figure}
Additionally, in Fig.~\ref{Fig:PDOS-MoS2}, the consequences of charge transfer between the molecule and the substrate can be seen, as already discussed for the hBN-based heterostructures.
When the donor molecules TTF and 2T are adsorbed on \ce{MoS2}, their energy levels are downshifted with respect to their counterparts in the isolated moieties.
Such shifts are, however, not rigid.
In TTF@\ce{MoS2}, the highest-occupied state coinciding with the HOMO of the molecule is only 51~meV below the highest-occupied orbital of the gas-phase donor.
On the other hand, the HOMO-1 and the HOMO-2 are downshifted by 130~meV and 200~meV, respectively.
In the PDOS of the 2T@\ce{MoS2} interface, the HOMO is downshifted by 110~meV with respect to the gas-phase counterpart.
For the HOMO-1, HOMO-2, and HOMO-3, the shift of the molecular levels due to charge transfer is entangled with the hybridization with \ce{MoS2} bands, which induces a remarkable smearing in the corresponding peaks.
As a result, a quantitative assessment of the former effect is not straightforward.
A similar behavior is shown also by the PDOS of the pyrene@\ce{MoS2} heterostructure, where, interestingly, the downshift of the HOMO is the largest among those seen in Fig.~\ref{Fig:PDOS-MoS2}.
In this system, the three occupied states of pyrene that are visible in Fig.~\ref{Fig:PDOS-MoS2}c are also subject to the joint action of charge-transfer-induced downshift and hybridization with \ce{MoS2} bands.
The energy levels of the molecular acceptors adsorbed on \ce{MoS2} are upshifted by the creation of an interfacial dipole with the monolayer.
Similar to the scenario offered by the hBN-based heterostructures, the magnitude of this effect is much less pronounced than for the donors and signs of hybridization with the substrate bands are hardly visible.
In the TCNQ@\ce{MoS2} heterostructure (Fig.~\ref{Fig:PDOS-MoS2}d), the molecular levels are essentially aligned with their counterparts in the isolated molecule.
The PDOS of the \ce{F4-TCNQ}@\ce{MoS2} interface exhibits a similar behavior (Fig.~\ref{Fig:PDOS-MoS2}e) but, in this case, the upshift of the HOMO and LUMO levels of \ce{F4-TCNQ} is almost rigid and as large as 60~meV.
\section{Summary and Conclusions}
In summary, we presented a DFT study of hybrid interfaces formed by hBN and \ce{MoS2} monolayers acting as substrates for five physisorbed molecules: two electron-donor species, TTF and 2T, two acceptors, TCNQ and \ce{F4-TCNQ}, and the aromatic hydrocarbon pyrene.
All molecules adsorb substantially flat on both substrates, although structural modifications can be seen depending on the chemical nature of adsorbates and substrates:
Donor and acceptor compounds undergo minor distortions due to the presence of S and N atoms therein, respectively; hBN ripples slightly when interacting with the physisorbed molecules while, owing to its larger rigidity, the structure of \ce{MoS2} in the hybrid interfaces is unchanged compared to the free-standing configuration.
From an energetic point of view, all material combinations form stable heterostructures thanks to the contribution of dispersive interactions, which are quantitatively accounted for in our calculations.
As a general trend, pyrene and the acceptors adsorb more favorably on both substrates than the considered donors.
From the analysis of the electronic structure, we noticed weak coupling between molecules and hBN, as expected from the chemically inert and insulating character of this 2D material.
In the considered hBN-based heterostructures, both type-I and type-II level alignments are formed.
Straddling lineups appear for the donor molecules, TTF and 2T, and with pyrene; staggered ones are driven by the acceptors TCNQ and \ce{F4-TCNQ} and their relatively low frontier levels with respect to the vacuum.
In contrast, all \ce{MoS2}-based hybrid systems exhibit a type-II level alignment, with the highest-occupied (lowest-unoccupied) level of the interface coinciding with the HOMO (LUMO) of the electron-donating (-withdrawing) molecule.
The projected density of states of all considered interfaces show two concomitant effects: (i) hybridization between the electronic states of the inorganic and organic components, involving only marginally the frontier orbitals of the physisorbed molecules and (ii) charge transfer between the molecules and the monolayer substrates shifting the molecular energy levels up- or downwards, depending on the electron-donating or electron-withdrawing nature of the organic compounds.
Interestingly, both effects are qualitatively and, to a large extent, also quantitatively similar regardless of the substrate.
The results of this work provide important indications to rationalize the design of low-dimensional hybrid interfaces for opto-electronic applications.
Our findings suggest that the characteristics of the physisorbed molecules play a larger role in determining the details of the electronic structure of the heterostructure than those of the inorganic substrate.
However, the band-gap of the latter and the energies of its band edges rule, to the largest extent, the level alignment of the hybrid system.
Future work on the characterization of the electronic excitations is expected to supplement this analysis for a deeper understanding of the opto-electronic activity of these novel materials.
\section*{Author Contributions}
\textbf{Giacomo Melani:} Investigation, Data Curation, Visualization, Writing - Original Draft; \textbf{Juan Pablo Guerrero:} Investigation, Data Curation, Visualization, Writing - Original Draft; \textbf{Ana M. Valencia:} Data Curation, Visualization, Supervision, Writing - Original Draft; \textbf{Jannis Krumland:} Supervision, Writing - Review \& Editing; \textbf{Caterina Cocchi:} Conceptualization, Supervision, Project administration, Funding acquisition, Writing - Review \& Editing; \textbf{Marcella Iannuzzi:} Conceptualization, Supervision, Project administration, Funding acquisition, Writing - Review \& Editing.
\section*{Data Availability Statement}
All data produced in this work are available free of charge at 10.5281/zenodo.6388531.
\section*{Conflicts of interest}
There are no conflicts to declare.
\section*{Acknowledgements}
This work was carried out in the framework of the strategic partnership between the University of Z\"{u}rich and the Humboldt Universit\"{a}t zu Berlin.
G.M. and M.I. gratefully acknowledge computational support from the Swiss National Supercomputing Centre (CSCS) under project s965 ``Molecules at interfaces from density functional theory''. G.M. acknowledges funding from the University of Z\"{u}rich Forschungskredit Postdoctoral Fellowship.
J.P.G, A.M.V., J.K., and C.C. appreciate funding from the German Research Foundation (DFG), project number 182087777 -- CRC 951, and computational resources from the North-German Supercomputing
Alliance (HLRN), project bep00104. Additional support is acknowledged by A.M.V. and C.C. to the German Federal Ministry of Education and Research (Professorinnenprogramm III), and by the State of Lower Saxony (Professorinnen für Niedersachsen).
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{74}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Lee \latin{et~al.}(2014)Lee, Lee, van~der Zande, Han, Cui, Arefe,
Nuckolls, Heinz, Hone, and Kim]{Lee2014}
Lee,~G.-H.; Lee,~C.-H.; van~der Zande,~A.~M.; Han,~M.; Cui,~X.; Arefe,~G.;
Nuckolls,~C.; Heinz,~T.~F.; Hone,~J.; Kim,~P. Heterostructures based on
inorganic and organic van der Waals systems. \emph{APL Mater.} \textbf{2014},
\emph{2}, 092511\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zheng \latin{et~al.}(2016)Zheng, Huang, Chen, Zhao, Eda, Spataru,
Zhang, Chang, Li, Chi, Quek, and Wee]{zhen+16nano}
Zheng,~Y.~J.; Huang,~Y.~L.; Chen,~Y.; Zhao,~W.; Eda,~G.; Spataru,~C.~D.;
Zhang,~W.; Chang,~Y.-H.; Li,~L.-J.; Chi,~D.; Quek,~S.~Y.; Wee,~A. T.~S.
Heterointerface Screening Effects between Organic Monolayers and Monolayer
Transition Metal Dichalcogenides. \emph{ACS~Nano} \textbf{2016}, \emph{10},
2476--2484, PMID: 26792247\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Breuer \latin{et~al.}(2016)Breuer, Ma{\ss}meyer, M{\"a}nz, Zoerb,
Harbrecht, and Witte]{breu+16pssrrl}
Breuer,~T.; Ma{\ss}meyer,~T.; M{\"a}nz,~A.; Zoerb,~S.; Harbrecht,~B.; Witte,~G.
Structure of van der Waals bound hybrids of organic semiconductors and
transition metal dichalcogenides: the case of acene films on MoS2.
\emph{Phys.~Status~Solidi~(RRL)} \textbf{2016}, \emph{10}, 905--910\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gobbi \latin{et~al.}(2018)Gobbi, Orgiu, and Samorì]{Gobbi2018}
Gobbi,~M.; Orgiu,~E.; Samorì,~P. When 2D Materials Meet Molecules:
Opportunities and Challenges of Hybrid Organic/Inorganic van der Waals
Heterostructures. \emph{Adv.~Mater.~} \textbf{2018}, \emph{30}, 1706103\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Daukiya \latin{et~al.}(2019)Daukiya, Seibel, and
De~Feyter]{dauk+19apx}
Daukiya,~L.; Seibel,~J.; De~Feyter,~S. Chemical modification of 2D materials
using molecules and assemblies of molecules. \emph{Adv.~Phys:~X}
\textbf{2019}, \emph{4}, 1625723\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Mrkyvkova \latin{et~al.}(2019)Mrkyvkova, Hodas, Hagara, Nadazdy,
Halahovets, Bodik, Tokar, Chai, Wang, Chi, Chumakov, Konovalov, Hinderhofer,
Jergel, Majkova, Siffalovic, and Schreiber]{mrky+19apl}
Mrkyvkova,~N. \latin{et~al.} Diindenoperylene thin-film structure on MoS2
monolayer. \emph{Appl.~Phys.~Lett.~} \textbf{2019}, \emph{114}, 251906\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rijal \latin{et~al.}(2020)Rijal, Rudayni, Kafle, and
Chan]{rija+20jpcl}
Rijal,~K.; Rudayni,~F.; Kafle,~T.~R.; Chan,~W.-L. Collective effects of band
offset and wave function dimensionality on impeding electron transfer from 2D
to organic crystals. \emph{J.~Phys.~Chem.~Lett.} \textbf{2020}, \emph{11},
7495--7501\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Qiao \latin{et~al.}(2021)Qiao, Niu, Wen, Yang, Chen, Wang, Feng, Qin,
and Hao]{qiao+212DM}
Qiao,~J.-W.; Niu,~M.-S.; Wen,~Z.-C.; Yang,~X.-K.; Chen,~Z.-H.; Wang,~Y.-X.;
Feng,~L.; Qin,~W.; Hao,~X.-T. Efficient photoluminescence enhancement and
tunable photocarrier transfer in vertical 2D organic--inorganic
heterostructure by energy funneling. \emph{2D Mater.} \textbf{2021},
\emph{8}, 025026\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Amsterdam \latin{et~al.}(2021)Amsterdam, Marks, and
Hersam]{amst+21jpcl}
Amsterdam,~S.~H.; Marks,~T.~J.; Hersam,~M.~C. Leveraging Molecular Properties
to Tailor Mixed-Dimensional Heterostructures beyond Energy Level Alignment.
\emph{J.~Phys.~Chem.~Lett.} \textbf{2021}, \emph{12}, 4543--4557\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Choudhury \latin{et~al.}(2017)Choudhury, Ravavarapu, Dekle, and
Chowdhury]{chou+17jpcc}
Choudhury,~P.; Ravavarapu,~L.; Dekle,~R.; Chowdhury,~S. Modulating electronic
and optical properties of monolayer MoS2 using nonbonded phthalocyanine
molecules. \emph{J.~Phys.~Chem.~C} \textbf{2017}, \emph{121},
2959--2967\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhong \latin{et~al.}(2018)Zhong, Sangwan, Wang, Bergeron, Hersam, and
Weiss]{zhon+18jpcl}
Zhong,~C.; Sangwan,~V.~K.; Wang,~C.; Bergeron,~H.; Hersam,~M.~C.; Weiss,~E.~A.
Mechanisms of ultrafast charge separation in a PTB7/Monolayer MoS2 van der
Waals heterojunction. \emph{J.~Phys.~Chem.~Lett.} \textbf{2018}, \emph{9},
2484--2491\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang and Paulus(2020)Wang, and Paulus]{wang-paul20pccp}
Wang,~K.; Paulus,~B. Tuning the binding energy of excitons in the MoS 2
monolayer by molecular functionalization and defective engineering.
\emph{Phys.~Chem.~Chem.~Phys.~} \textbf{2020}, \emph{22}, 11936--11942\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Cai \latin{et~al.}(2016)Cai, Zhou, Zhang, and Zhang]{cai+16cm}
Cai,~Y.; Zhou,~H.; Zhang,~G.; Zhang,~Y.-W. Modulating carrier density and
transport properties of MoS2 by organic molecular doping and defect
engineering. \emph{Chem.~Mater.~} \textbf{2016}, \emph{28}, 8611--8621\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jing \latin{et~al.}(2014)Jing, Tan, Zhou, and Shen]{jing+jmca}
Jing,~Y.; Tan,~X.; Zhou,~Z.; Shen,~P. Tuning electronic and optical properties
of MoS2 monolayer via molecular charge transfer. \emph{J.~Mater.~Chem.~A}
\textbf{2014}, \emph{2}, 16892--16897\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Krumland and Cocchi(2021)Krumland, and Cocchi]{krum-cocc21es}
Krumland,~J.; Cocchi,~C. Conditions for electronic hybridization between
transition-metal dichalcogenide monolayers and physisorbed carbon-conjugated
molecules. \emph{Electron.~Struct.~} \textbf{2021}, \emph{3}, 044003\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhu \latin{et~al.}(2018)Zhu, Yuan, Zhao, Zhou, Wan, Mei, and
Huang]{zhu+18sa}
Zhu,~T.; Yuan,~L.; Zhao,~Y.; Zhou,~M.; Wan,~Y.; Mei,~J.; Huang,~L. Highly
mobile charge-transfer excitons in two-dimensional WS2/tetracene
heterostructures. \emph{Sci.~Adv.} \textbf{2018}, \emph{4}, eaao3104\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhang \latin{et~al.}(2018)Zhang, Sharma, Zhu, Zhang, Wang, Dong,
Nguyen, Wang, Wen, Cao, Liu, Sun, Yang, Li, Kar, Shi, Macdonald, Yu, Wang,
and Lu]{zhan+18am}
Zhang,~L. \latin{et~al.} Efficient and Layer-Dependent Exciton Pumping across
Atomically Thin Organic–Inorganic Type-I Heterostructures.
\emph{Adv.~Mater.~} \textbf{2018}, \emph{30}, 1803986\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Adeniran and Liu(2021)Adeniran, and Liu]{aden-liu21jcp}
Adeniran,~O.; Liu,~Z.-F. Quasiparticle electronic structure of phthalocyanine:
TMD interfaces from first-principles GW. \emph{J.~Chem.~Phys.~}
\textbf{2021}, \emph{155}, 214702\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Park \latin{et~al.}(2021)Park, Mutz, Kovalenko, Schultz, Shin, Aljarb,
Li, Tung, Amsalem, List-Kratochvil, Stähler, Xu, Blumstengel, and
Koch]{park+21as}
Park,~S.; Mutz,~N.; Kovalenko,~S.~A.; Schultz,~T.; Shin,~D.; Aljarb,~A.;
Li,~L.-J.; Tung,~V.; Amsalem,~P.; List-Kratochvil,~E. J.~W.; Stähler,~J.;
Xu,~X.; Blumstengel,~S.; Koch,~N. Type-I Energy Level Alignment at the
PTCDA—Monolayer MoS2 Interface Promotes Resonance Energy Transfer and
Luminescence Enhancement. \emph{Adv.~Sci.} \textbf{2021}, \emph{8},
2100215\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ye \latin{et~al.}(2021)Ye, Liu, Zhou, Tao, Li, Wang, and
Zhu]{ye+21jpcl}
Ye,~L.; Liu,~Y.; Zhou,~Q.; Tao,~W.; Li,~Y.; Wang,~Z.; Zhu,~H. Ultrafast Singlet
Energy Transfer before Fission in a Tetracene/WSe2 Type II Hybrid
Heterostructure. \emph{J.~Phys.~Chem.~Lett.} \textbf{2021}, \emph{12},
8440--8446\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Song \latin{et~al.}(2017)Song, Schultz, Ding, Lei, Han, Amsalem, Lin,
Chi, Wong, Zheng, Li, Li, Chen, Koch, Huang, and Wee]{song+17nano}
Song,~Z. \latin{et~al.} Electronic Properties of a 1D Intrinsic/p-Doped
Heterojunction in a 2D Transition Metal Dichalcogenide Semiconductor.
\emph{ACS~Nano} \textbf{2017}, \emph{11}, 9128--9135, PMID: 28753270\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shen and Tao(2017)Shen, and Tao]{shen-tao17ami}
Shen,~N.; Tao,~G. Charge transfer and interface engineering of the pentacene
and MoS2 monolayer complex. \emph{Adv.~Mater.~Interfaces} \textbf{2017}, \emph{4}, 1601083\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xie \latin{et~al.}(2019)Xie, Liu, Fang, Fang, and Cui]{xie+19jpca}
Xie,~X.-Y.; Liu,~X.-Y.; Fang,~Q.; Fang,~W.-H.; Cui,~G. Photoinduced Carrier
Dynamics at the Interface of Pentacene and Molybdenum Disulfide.
\emph{J.~Phys.~Chem.~A} \textbf{2019}, \emph{123}, 7693--7703\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Guo \latin{et~al.}(2022)Guo, Wu, Deng, Zhou, Jiang, Lu, Huo, Ji, Bai,
Lin, Zhang, Xu, Ji, and Zhang]{guo+22nr}
Guo,~Y.; Wu,~L.; Deng,~J.; Zhou,~L.; Jiang,~W.; Lu,~S.; Huo,~D.; Ji,~J.;
Bai,~Y.; Lin,~X.; Zhang,~S.; Xu,~H.; Ji,~W.; Zhang,~C. Band alignment and
interlayer hybridization in monolayer organic/WSe2 heterojunction.
\emph{Nano~Res.~} \textbf{2022}, \emph{15}, 1276--1281\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Quek and Khoo(2014)Quek, and Khoo]{quek-khoo14acr}
Quek,~S.~Y.; Khoo,~K.~H. Predictive DFT-based approaches to charge and spin
transport in single-molecule junctions and two-dimensional materials:
Successes and challenges. \emph{Acc.~Chem.~Res.~} \textbf{2014}, \emph{47},
3250--3257\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hofmann \latin{et~al.}(2021)Hofmann, Zojer, H{\"o}rmann, Jeindl, and
Maurer]{hofm+21pccp}
Hofmann,~O.~T.; Zojer,~E.; H{\"o}rmann,~L.; Jeindl,~A.; Maurer,~R.~J.
First-principles calculations of hybrid inorganic--organic interfaces: from
state-of-the-art to best practice. \emph{Phys.~Chem.~Chem.~Phys.~}
\textbf{2021}, \emph{23}, 8132--8180\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Park \latin{et~al.}(2021)Park, Wang, Schultz, Shin, Ovsyannikov,
Zacharias, Maksimov, Meissner, Hasegawa, Yamaguchi, Kera, Aljarb, Hakami, Li,
Tung, Amsalem, Rossi, and Koch]{park+21am}
Park,~S. \latin{et~al.} Temperature-Dependent Electronic Ground-State Charge
Transfer in van der Waals Heterostructures. \emph{Adv.~Mater.~}
\textbf{2021}, \emph{33}, 2008677\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Draxl \latin{et~al.}(2014)Draxl, Nabok, and Hannewald]{drax+14acr}
Draxl,~C.; Nabok,~D.; Hannewald,~K. Organic/inorganic hybrid materials:
Challenges for ab initio methodology. \emph{Acc.~Chem.~Res.~} \textbf{2014},
\emph{47}, 3225--3232\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Tkatchenko \latin{et~al.}(2010)Tkatchenko, Romaner, Hofmann, Zojer,
Ambrosch-Draxl, and Scheffler]{tkat+10mrs}
Tkatchenko,~A.; Romaner,~L.; Hofmann,~O.~T.; Zojer,~E.; Ambrosch-Draxl,~C.;
Scheffler,~M. Van der Waals interactions between organic adsorbates and at
organic/inorganic interfaces. \emph{MRS~Bull.} \textbf{2010}, \emph{35},
435--442\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2017)Liu, Gu, Ding, Fan, Hu, Tseng, Lee, Menon, and
Forrest]{liu+17nl}
Liu,~X.; Gu,~J.; Ding,~K.; Fan,~D.; Hu,~X.; Tseng,~Y.-W.; Lee,~Y.-H.;
Menon,~V.; Forrest,~S.~R. Photoresponse of an organic
semiconductor/two-dimensional transition metal dichalcogenide heterojunction.
\emph{Nano~Lett.~} \textbf{2017}, \emph{17}, 3176--3181\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Joshi \latin{et~al.}(2012)Joshi, Ecija, Koitz, Iannuzzi, Seitsonen,
Hutter, Sachdev, Vijayaraghavan, Bischoff, Seufert, Barth, and
Auwärter]{Auwarter2012}
Joshi,~S.; Ecija,~D.; Koitz,~R.; Iannuzzi,~M.; Seitsonen,~A.~P.; Hutter,~J.;
Sachdev,~H.; Vijayaraghavan,~S.; Bischoff,~F.; Seufert,~K.; Barth,~J.~V.;
Auwärter,~W. Boron Nitride on Cu(111): An Electronically Corrugated
Monolayer. \emph{Nano Letters} \textbf{2012}, \emph{12}, 5821–5828, PMID:
23083003\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lin and Connell(2012)Lin, and Connell]{Lin2012}
Lin,~Y.; Connell,~J.~W. Advances in 2D boron nitride nanostructures:
nanosheets{,} nanoribbons{,} nanomeshes{,} and hybrids with graphene.
\emph{Nanoscale} \textbf{2012}, \emph{4}, 6908–6939\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Gomez Diaz} \latin{et~al.}(2013){Gomez Diaz}, Ding, Koitz, Seitsonen,
Iannuzzi, and Hutter]{Gomez2013}
{Gomez Diaz},~J.; Ding,~Y.; Koitz,~R.; Seitsonen,~A.~P.; Iannuzzi,~M.;
Hutter,~J. Hexagonal boron nitride on transition metal surfaces.
\emph{Theor.~Chem.~Acta} \textbf{2013}, \emph{132}, 1350\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Weng \latin{et~al.}(2016)Weng, Wang, Wang, Bando, and
Golberg]{Weng2016}
Weng,~Q.; Wang,~X.; Wang,~X.; Bando,~Y.; Golberg,~D. Functionalized hexagonal
boron nitride nanomaterials: emerging properties and applications.
\emph{Chem. Soc. Rev.} \textbf{2016}, \emph{45}, 3989–4012\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zhang \latin{et~al.}(2017)Zhang, Feng, Wang, Yang, and
Wang]{Zhang2017}
Zhang,~K.; Feng,~Y.; Wang,~F.; Yang,~Z.; Wang,~J. Two dimensional hexagonal
boron nitride (2D-hBN): synthesis{,} properties and applications. \emph{J.
Mater. Chem. C} \textbf{2017}, \emph{5}, 11992–12022\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kim \latin{et~al.}(2018)Kim, Lee, and Lee]{Kim2018}
Kim,~K.~K.; Lee,~H.~S.; Lee,~Y.~H. Synthesis of hexagonal boron nitride
heterostructures for 2D van der Waals electronics. \emph{Chem. Soc. Rev.}
\textbf{2018}, \emph{47}, 6342–6369\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Auw\"arter(2019)]{Auwarter2019}
Auw\"arter,~W. Hexagonal boron nitride monolayers on metal supports: Versatile
templates for atoms, molecules and nanostructures.
\emph{Surf.~Sci.~Rep.~(Netherlands)} \textbf{2019}, \emph{74}, 1–95\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kratzer \latin{et~al.}(2019)Kratzer, Matkovic, and
Teichert]{krat+19jpd}
Kratzer,~M.; Matkovic,~A.; Teichert,~C. Adsorption and epitaxial growth of
small organic semiconductors on hexagonal boron nitride. \emph{J.~Phys.~D}
\textbf{2019}, \emph{52}, 383001\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Matkovi{\'c} \latin{et~al.}(2019)Matkovi{\'c}, Genser, Kratzer,
L{\"u}ftner, Chen, Siri, Puschnig, Becker, and Teichert]{matk+19afm}
Matkovi{\'c},~A.; Genser,~J.; Kratzer,~M.; L{\"u}ftner,~D.; Chen,~Z.; Siri,~O.;
Puschnig,~P.; Becker,~C.; Teichert,~C. Light-Assisted Charge Propagation in
Networks of Organic Semiconductor Crystallites on Hexagonal Boron Nitride.
\emph{Adv.~Funct.~Mater.~} \textbf{2019}, \emph{29}, 1903816\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hohenberg and Kohn(1964)Hohenberg, and Kohn]{hohe-kohn64pr}
Hohenberg,~P.; Kohn,~W. Inhomogeneus Electron Gas. \emph{Phys.~Rev.}
\textbf{1964}, \emph{136}, B864--B871\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kohn and Sham(1965)Kohn, and Sham]{kohn-sham65pr}
Kohn,~W.; Sham,~L.~J. Self-Consistent Equations Including Exchange and
Correlation Effects. \emph{Phys.~Rev.} \textbf{1965}, \emph{140},
A1133--A1138\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Perdew \latin{et~al.}(1996)Perdew, Burke, and Ernzerhof]{perd+96prl}
Perdew,~J.~P.; Burke,~K.; Ernzerhof,~M. Generalized Gradient Approximation Made
Simple. \emph{Phys.~Rev.~Lett.} \textbf{1996}, \emph{77}, 3865--3868\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[{Heyd} \latin{et~al.}(2006){Heyd}, {Scuseria}, and
{Ernzerhof}]{heyd+06}
{Heyd},~J.; {Scuseria},~G.~E.; {Ernzerhof},~M. {Erratum: ``Hybrid functionals
based on a screened Coulomb potential'' [J. Chem. Phys. 118, 8207 (2003)]}.
\emph{J.~Chem.~Phys.~} \textbf{2006}, \emph{124}, 219906--219906\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kühne \latin{et~al.}(2020)Kühne, Iannuzzi, {Del Ben}, Rybkin,
Seewald, Stein, Laino, Khaliullin, Schütt, Schiffmann, Golze, Wilhelm,
Chulkov, Bani-Hashemian, Weber, Borštnik, Taillefumier, Jakobovits, Lazzaro,
Pabst, Müller, Schade, Guidon, Andermatt, Holmberg, Schenter, Hehn, Bussy,
Belleflamme, Tabacchi, Glöß, Lass, Bethune, Mundy, Plessl, Watkins,
VandeVondele, Krack, and Hutter]{cp2k2020}
Kühne,~T.~D. \latin{et~al.} CP2K: An electronic structure and molecular
dynamics software package - Quickstep: Efficient and accurate electronic
structure calculations. \emph{J.~Chem.~Phys.~} \textbf{2020}, \emph{152},
194103\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[VandeVondele and Hutter(2007)VandeVondele, and Hutter]{molopt2007}
VandeVondele,~J.; Hutter,~J. Gaussian basis sets for accurate calculations on
molecular systems in gas and condensed phases. \emph{J.~Chem.~Phys.~}
\textbf{2007}, \emph{127}, 114105\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Goedecker \latin{et~al.}(1996)Goedecker, Teter, and Hutter]{GTH1996}
Goedecker,~S.; Teter,~M.; Hutter,~J. Separable dual-space Gaussian
pseudopotentials. \emph{Phys. Rev. B} \textbf{1996}, \emph{54},
1703--1710\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hartwigsen \latin{et~al.}(1998)Hartwigsen, Goedecker, and
Hutter]{GTH1998}
Hartwigsen,~C.; Goedecker,~S.; Hutter,~J. Relativistic separable dual-space
Gaussian pseudopotentials from H to Rn. \emph{Phys. Rev. B} \textbf{1998},
\emph{58}, 3641--3662\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Krack(2005)]{GTH2005}
Krack,~M. Pseudopotentials for H to Kr optimized for gradient-corrected
exchange-correlation functionals. \emph{Theo. Chem. Acc.} \textbf{2005},
\emph{114}, 145--152\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Grimme(2006)]{grim06JCC}
Grimme,~S. Semiempirical GGA-type density functional constructed with a
long-range dispersion correction. \emph{J.~Comput.~Chem.~} \textbf{2006},
\emph{27}, 1787--1799\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sabatini \latin{et~al.}(2013)Sabatini, Gorni, and de~Gironcoli]{rvv10}
Sabatini,~R.; Gorni,~T.; de~Gironcoli,~S. Nonlocal van der Waals density
functional made simple and efficient. \emph{Phys. Rev. B} \textbf{2013},
\emph{87}, 041108\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Iannuzzi \latin{et~al.}(2014)Iannuzzi, Tran, Widmer, Dienel, Radican,
Ding, Hutter, and Gröning]{Iannuzzi2014}
Iannuzzi,~M.; Tran,~F.; Widmer,~R.; Dienel,~T.; Radican,~K.; Ding,~Y.;
Hutter,~J.; Gröning,~O. Site-selective adsorption of phthalocyanine on
h-BN/Rh(111) nanomesh. \emph{Phys. Chem. Chem. Phys.} \textbf{2014},
\emph{16}, 12374--12384\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Giannozzi \latin{et~al.}(2017)Giannozzi, Andreussi, Brumme, Bunau,
Nardelli, Calandra, Car, Cavazzoni, Ceresoli, Cococcioni, Colonna, Carnimeo,
Corso, de~Gironcoli, Delugas, DiStasio, Ferretti, Floris, Fratesi, Fugallo,
Gebauer, Gerstmann, Giustino, Gorni, Jia, Kawamura, Ko, Kokalj,
K\"{u}{\c{c}}\"{u}kbenli, Lazzeri, Marsili, Marzari, Mauri, Nguyen, Nguyen,
de-la Roza, Paulatto, Ponc{\'{e}}, Rocca, Sabatini, Santra, Schlipf,
Seitsonen, Smogunov, Timrov, Thonhauser, Umari, Vast, Wu, and
Baroni]{gian+17jpcm}
Giannozzi,~P. \latin{et~al.} Advanced capabilities for materials modelling
with Quantum {ESPRESSO}. \emph{J.~Phys.:~Condens.~Matter.~} \textbf{2017},
\emph{29}, 465901\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bl\"ochl(1994)]{bloc94prb}
Bl\"ochl,~P.~E. Projector augmented-wave method. \emph{Phys.~Rev.~B}
\textbf{1994}, \emph{50}, 17953--17979\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sun \latin{et~al.}(2019)Sun, Wang, Yang, Zhang, and Hu]{sun+19am}
Sun,~L.; Wang,~Y.; Yang,~F.; Zhang,~X.; Hu,~W. Cocrystal engineering: a
collaborative strategy toward functional materials. \emph{Adv.~Mater.~}
\textbf{2019}, \emph{31}, 1902328\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Picchiotti \latin{et~al.}(2019)Picchiotti, Nenov, Giussani,
Prokhorenko, Miller, Mukamel, and Garavelli]{picc+19jpcl}
Picchiotti,~A.; Nenov,~A.; Giussani,~A.; Prokhorenko,~V.~I.; Miller,~R.~D.;
Mukamel,~S.; Garavelli,~M. Pyrene, a test case for deep-ultraviolet molecular
photophysics. \emph{J.~Phys.~Chem.~Lett.} \textbf{2019}, \emph{10},
3481--3487\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Herperger \latin{et~al.}(2021)Herperger, Krumland, and
Cocchi]{herp+21jpca}
Herperger,~K.~R.; Krumland,~J.; Cocchi,~C. Laser-Induced Electronic and
Vibronic Dynamics in the Pyrene Molecule and Its Cation.
\emph{J.~Phys.~Chem.~A} \textbf{2021}, \emph{125}, 9619--9631\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dias(1985)]{dias85acr}
Dias,~J.~R. A periodic table for polycyclic aromatic hydrocarbons.
\emph{Acc.~Chem.~Res.~} \textbf{1985}, \emph{18}, 241--248\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang \latin{et~al.}(2011)Wang, Urban, Rodríguez-Fernández,
Gallego, Otero, Martín, Miranda, Alcamí, and Martín]{Wang2011}
Wang,~Y.; Urban,~C.; Rodríguez-Fernández,~J.; Gallego,~J.~M.; Otero,~R.;
Martín,~N.; Miranda,~R.; Alcamí,~M.; Martín,~F. Formation of
Self-Assembled Chains of Tetrathiafulvalene on a Cu(100) Surface. \emph{J.
Phys. Chem. A} \textbf{2011}, \emph{115}, 13080--13087\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kre()]{Kretz2021}
\relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Xu \latin{et~al.}(2013)Xu, Hofmann, Schlesinger, Winkler, Frisch,
Niederhausen, Vollmer, Blumstengel, Henneberger, Koch, Rinke, and
Scheffler]{xu+13prl}
Xu,~Y.; Hofmann,~O.~T.; Schlesinger,~R.; Winkler,~S.; Frisch,~J.;
Niederhausen,~J.; Vollmer,~A.; Blumstengel,~S.; Henneberger,~F.; Koch,~N.;
Rinke,~P.; Scheffler,~M. Space-Charge Transfer in Hybrid Inorganic-Organic
Systems. \emph{Phys. Rev. Lett.} \textbf{2013}, \emph{111}, 226802\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kumar \latin{et~al.}(2017)Kumar, Banerjee, Dvorak, Schulz, Harju,
Rinke, and Liljeroth]{kuma+17acsn}
Kumar,~A.; Banerjee,~K.; Dvorak,~M.; Schulz,~F.; Harju,~A.; Rinke,~P.;
Liljeroth,~P. Charge-Transfer-Driven Nonplanar Adsorption of F4TCNQ Molecules
on Epitaxial Graphene. \emph{ACS Nano} \textbf{2017}, \emph{11}, 4960--4968,
PMID: 28467831\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wang \latin{et~al.}(2019)Wang, Levchenko, Schultz, Koch, Scheffler,
and Rossi]{wang+19aem}
Wang,~H.; Levchenko,~S.~V.; Schultz,~T.; Koch,~N.; Scheffler,~M.; Rossi,~M.
Modulation of the Work Function by the Atomic Structure of Strong Organic
Electron Acceptors on H-Si(111). \emph{Adv.~Energy~Mater.~} \textbf{2019},
\emph{5}, 1800891\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jacobs \latin{et~al.}(2020)Jacobs, Krumland, Valencia, Wang, Rossi,
and Cocchi]{jaco+20apx}
Jacobs,~M.; Krumland,~J.; Valencia,~A.~M.; Wang,~H.; Rossi,~M.; Cocchi,~C.
Ultrafast charge transfer and vibronic coupling in a laser-excited hybrid
inorganic/organic interface. \emph{Adv.~Phys:~X} \textbf{2020}, \emph{5},
1749883\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Cun \latin{et~al.}(2018)Cun, Seitsonen, Roth, Decurtins, Liu,
Osterwalder, and Greber]{Greber2018}
Cun,~H.; Seitsonen,~A.~P.; Roth,~S.; Decurtins,~S.; Liu,~S.-X.;
Osterwalder,~J.; Greber,~T. An electron acceptor molecule in a nanomesh:
F4TCNQ on h-BN/Rh(111). \emph{Surf.~Sci.~} \textbf{2018}, \emph{678},
183–188, Surface Structure and Dynamics – in Honor of Karl-Heinz
Rieder\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\section{Introduction}
In many real-world situations it is of interest to estimate the causal effect of some treatment on a certain outcome. The causal effect of taking a certain medicine for a certain disease, and that of participation in a job training program by unemployed individuals on finding employment later, are two examples among many in the medical and socio-economic contexts, respectively. Sometimes it may be unethical or infeasible to assign each subject either to the treatment or to the control randomly in order to perform a randomized study, which is considered the gold standard for estimating the causal effect of a treatment. Nevertheless, socio-economic policy makers, medical professionals, etc., may wish to evaluate the causal effects of their treatments of interest in order to plan for the future. In the absence of randomized assignment of the treatments, they may only have observational data on a collection of subjects who either have taken the treatment or not. When the effect of a treatment on an outcome needs to be identified from such an observational data sample, one needs to control for (condition on) the confounders: subgroups with the same confounder values in the treatment group and in the control group should be compared through their empirical mean values of the outcome, and then a weighted average of these comparisons should be taken, where the weights are the observed proportions of the sizes of the subgroups in the data sample, to evaluate the average causal effect of the treatment. For simplicity, assume all confounders are discrete. Note that the confounders are factors that affect whether the subjects take the treatment while simultaneously affecting the subjects' outcome in some way; therefore the effect of the treatment is confounded with the effects of these confounding factors when they are present. These unwanted effects should be removed, otherwise the estimate of the average treatment effect is biased. One can see that the implicit assumption here is that within each subgroup with the same confounder values the treatment assignments are randomized, which is why comparisons are done subgroup-wise. This assumption is true when a `sufficient' set of confounders, perhaps not all of them, is considered.
However, sometimes controlling for the confounders can be difficult; for example, if they are high dimensional then it may be difficult to find treatment and control subgroups of sufficient sizes with the same confounder values. A popular way to increase the sizes of the treatment and control subgroups that are to be compared is to use so-called propensity scores \cite{RR1983}. The propensity score is the conditional probability of receiving the treatment given the values of the observed pre-treatment covariates that confound the treatment and the outcome. Among other uses, propensity scores are employed in the potential outcome framework for causal inference \cite{RD1974,HP1986} for matching subgroups of treated subjects with untreated ones, a procedure usually called stratification of the data sample, when estimating the causal effects of treatments.
Finding a `sufficient' set of confounders on which the comparison should be done is somewhat problematic, and the potential outcome framework offers no clear way to do it even when all pre-treatment confounders of the treatment and the outcome are available. Note that one does not need to control for all the confounders, since when some of them are considered some of the others may become redundant. The causal graphical modeling framework of Pearl and his colleagues (see \cite{PJ2009} and references therein), however, offers a criterion called the `back-door criterion' for choosing a set of covariates sufficient to identify the causal effect, i.e., to estimate it without bias. When a graphical model is built on the treatment variable, the outcome variable and all their assumed causal factors, both direct and indirect, the criterion can find a sufficient set of covariates on which one should control for estimating the causal effect. Such a set is called an `admissible' or `deconfounding' set. However, the selected set is only sufficient for the causal factors that are assumed; it may not be sufficient if some causal factors of the treatment and the outcome are omitted. Moreover, treating some covariates as confounders while ignoring such a criterion can introduce further bias (p.~351 of \cite{PJ2009}). So, in our analysis we confine ourselves to the case where the chosen confounders form a superset of an admissible set, unless stated otherwise.
Often these two camps of causal inference methods have a lot of disagreements between them, especially among their applied users. However, the developers of the two frameworks, if not the theoreticians working in them, have remarked on the relationship between the two. One such instance is reported in the journal ``NeuroImage'' under the section ``Comments and Controversies'', concerning the application of the two modeling frameworks to brain image data \cite{RSC2011, LS2011, PJ2011, LS2013, GC2013}. Therein Pearl argues that his group (in his words) has proved that the two frameworks are logically equivalent in the sense that a theorem in one is a theorem in the other and an assumption in one has a parallel interpretation in the other. Glymour argues that (in his words) the potential outcome model is a special case of the causal graphical model, but with twists that make causal estimation impossible except in restricted contexts. Others in the debate are of the opinion that the two frameworks are close to each other. Though such arguments circulate among the theoreticians of the two frameworks, applied users still seem to be unconvinced; they often treat the frameworks as very different, with one superior to the other, or, even worse, believe that one gives wrong answers while only the other gives correct answers. It is rare that both frameworks are applied to the same data. Furthermore, due to different numerical estimation methods, one may obtain two numerically different causal effect estimates when the two frameworks are used.
Here we show that the two frameworks are equivalent in most contexts, in the sense that both give the same analytical expressions for causal effect estimates, or rather that any causal effect estimate in one modeling framework can be obtained from the other. Since causal effect estimates depend on estimated probabilities (they are functions of statistical conditional expectations of the outcome variable), there can be numerical differences in causal effect estimates if the underlying probabilities are estimated differently. But there are reasons, at least operationally, to favor the graphical modeling framework over the other: for example, it can be computationally more efficient, e.g., through controlling for a sufficient set of confounders rather than for all the assumed confounders. We show their equivalence at the basic level of their application. Furthermore, since the potential outcome model has many forms of causal effect estimators, we show how these can be derived through the graphical modeling framework, thus providing some insight into the estimators. So, our discussion here can be useful not only for researchers in these two modeling frameworks but especially for their users, to understand each other.
\section{Observational Studies}
We consider the simple situation where one is interested in evaluating the effect of some exposure or treatment on a certain outcome that can either be a success or a failure. Let us denote the treatment by a binary variable $Z$, where $Z=1$ when the treatment is implemented and $Z=0$ when it is not, and the outcome by a binary variable $Y$, where $Y=1$ when a success is observed and $Y=0$ when a failure is observed, for each subject concerned. In the potential outcome framework for causal inference one accepts the existence of a pair of potential outcome variables, say, $(Y_1, Y_0)$, where $Y_i$ is the outcome that would have been observed had the treatment been $Z=i$ for $i=1,0$. Note that the observable outcome $Y$ then satisfies the relation $Y=ZY_1 + (1-Z)Y_0$. A randomized experiment is one in which the potential outcomes are independent of the treatment assignment, written as $(Y_0,Y_1) \perp Z $; each subject receives the treatment without regard to its future outcome. Then the average causal effect for the population, $\tau$, is defined as follows.
\begin{eqnarray*}
\tau & = & E[Y_1] - E[Y_0] \\
&=& E[Y_1 \vert Z=1] - E[Y_0 \vert Z=0] \textrm{ since } (Y_0,Y_1) \perp Z \\
&= & E[Y \vert Z=1] - E[Y \vert Z=0]
\end{eqnarray*}
Here we assume that $0<P(Z=1)<1$, i.e., in our sample of data we have both treated and untreated subjects. If this is not the case then we are not able to estimate $\tau$, since only one of the quantities in the expression is known.
But in observational data the independence assumption $(Y_1,Y_0) \perp Z$ may not hold, because subjects do not receive the treatment independently of their future outcomes; therefore the characteristics of subjects in the treatment group may differ from those of the control group. This is a situation where the treatment effect is confounded with some external factors, i.e., the treatment and the outcome are confounded. Therefore the treatment group and the control group cannot be compared directly to evaluate the effect of the treatment. The assumption is then modified to say that the potential outcomes are conditionally independent of the treatment assignment given some confounding factors that make up (a superset of) an admissible set. When this set of confounders is denoted by the multivariate variable $X$, the assumption is written as $(Y_1,Y_0) \perp Z \vert X$, and it is sometimes called the assumption of no unmeasured confounders, meaning that all the confounding effects are removed by $X$. In addition, for inference, as in a randomized experiment one needs $0<P(Z=1 \vert X) <1$, which is called the assumption of common support. That is, for each configuration (stratum) of $X$ we should have both treated and untreated subjects; otherwise, if for example $P(Z=1 \vert X=x_1)=1$ in our data sample, then the causal effect for the subgroup with $X=x_1$ cannot be calculated. Recall that we assume that $X$ is discrete; therefore any continuous covariate is discretized. Thus, in each stratum of $X$ the treatment assignments are as if they were randomized, and we have data on both treated and untreated subjects. That is to say, with observational data our objective is to mimic randomization within each stratum of $X$. Therefore one should first find a sufficient set of confounders $X$. However, this assumption cannot be tested even if all the potential confounders are found.
Now let us define the individual causal effect for an individual, say, $j$ with $X=x$ as $\tau^{j}(x)=Y_1^j -Y_0^j$. The $j$th individual is the $j$th data case of the sample, and throughout, any quantity referring to it is denoted with the superscript $j$ attached to the respective quantity. It is clear that no subject has both values $Y_1$ and $Y_0$ observed; therefore we cannot obtain $\tau^j(x)$ numerically. So we need a mechanism to get it, and it is right at hand: the randomization of the treatment assignments within each stratum of $X$, i.e., the assumption of no unmeasured confounders (also called the assumption of strongly ignorable treatment assignment \cite{RR1983}). That is, within any stratum $X=x$, if we know a subject is treated ($Z=1$) we observe $Y_1=Y$ but $Y_0$ is not known; however, the latter can be obtained from any other subject in the stratum who is not treated ($Z=0$), since the two quantities are conditionally exchangeable. Here the word `conditionally' means within the stratum. The same holds for any subject that is not treated ($Z=0$). Therefore, as if the observed data came from randomizations within each level $x$ of $X$, we can calculate the average causal effect for the subpopulation of all individuals with $X=x$, say, $\tau(x)$, by
\begin{eqnarray*}
\tau(x) & = & E[Y_1 \vert X=x] - E[Y_0 \vert X=x] \\
&=& E[Y_1 \vert X=x,Z=1] - E[Y_0 \vert X=x,Z=0] \textrm{ since } (Y_1,Y_0) \perp Z \vert X \\
&= & E[Y \vert X=x,Z=1] - E[Y \vert X=x,Z=0]
\end{eqnarray*}
where the expectation $E$ is taken over the whole subpopulation with $X=x$. Since this mechanism applies to all the strata of $X$, we can calculate the average causal effect for the whole population, say, $\tau$:
\begin{eqnarray*}
\tau &=& E_x [ E[Y \vert Z=1, X=x] - E[Y \vert Z=0, X=x]] \\
&=& \sum_x \sum_y y p(Y=y \vert X=x,Z=1)p(X=x) \\
& & - \sum_x \sum_y y p(Y=y \vert X=x,Z=0)p(X=x)
\end{eqnarray*}
It is sufficient to estimate the probabilities $p(Y=y \vert X=x,Z=z)$ and $p(X=x)$ accurately, for $Z=0,1$ and for all values of $X$, in order to estimate $\tau$ accurately, but due to its definition this is not a necessity. For example, if some errors have been introduced in the calculation of $p(Y=1 \vert X=x, Z=1)$, then similar errors in the calculation of $p(Y=1 \vert X=x, Z=0)$ may still yield the correct value for $\tau$. For reasons of this kind, researchers sometimes claim that correct estimates of causal effects can be obtained even when the models, for example those for the conditional probabilities, are misspecified. But here we avoid discussion of this topic.
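As a concrete illustration, the following Python sketch (with hypothetical array names \texttt{y}, \texttt{z}, \texttt{x}, not taken from any real study) computes this plug-in estimate of $\tau$; it assumes discrete stratum labels and the common-support condition discussed above.
\begin{verbatim}
import numpy as np

def plugin_ate(y, z, x):
    # Plug-in estimate of tau: sum over strata s of
    #   [ E(Y | Z=1, X=s) - E(Y | Z=0, X=s) ] * p(X=s),
    # assuming every stratum contains both treated and untreated
    # subjects (the common-support assumption).
    tau = 0.0
    for s in np.unique(x):
        in_s = (x == s)
        p_s = in_s.mean()                  # empirical p(X=s)
        m1 = y[in_s & (z == 1)].mean()     # E[Y | Z=1, X=s]
        m0 = y[in_s & (z == 0)].mean()     # E[Y | Z=0, X=s]
        tau += p_s * (m1 - m0)
    return tau
\end{verbatim}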
The above estimate for $\tau$ is analytically equal to the one obtained by estimating causal effects using interventions in causal graphical models (also called do-calculus) \cite{PJ2009,LR2002}, the other popular framework for the task; therefore the two frameworks are equivalent in this case. To remind the reader of this calculus, first define the distribution with conditioning by intervention or action. If we have observed a random sample of data on a set of variables, say, $X_1,...,X_n$, we can find the probability distribution of the set of variables, say, $p(x_1,...,x_n)$. We can obtain a factorization of the probability distribution $p(x_1,...,x_n)$; let it be $p(x_1,...,x_n)= \prod_{i}^{n} p(x_i \vert pa_i)$ where $pa_i \subseteq \{x_1,...,x_{i-1}\}$, with the exception $pa_1= \emptyset$ (the empty set), using some conditional independence assumptions among $X_1,...,X_n$. Note that to have a causal representation in the factorization one can use, for example, the time order to index the variables, such that cause variables have lower indices than their effect variables. Then, for $i=1,...,n$, the probability distribution of $\{X_1,...,X_n\} \backslash \{X_i\}$ when $X_i$ is intervened on and set to a particular value, say, $x_i$, written as $do(X_i=x_i)$ and denoted by $p(\{x_1,...,x_n \} \backslash \{x_i\} \vert do(X_i=x_i))$, is defined as follows;
\begin{eqnarray*}
p(\{x_1,...,x_n\} \backslash \{x_i\} \vert do(X_i=x_i)) &=& \frac{p(x_1,...,x_n)} {p(x_i \vert pa_i)}=\prod_{k=1:k \neq i}^{n} p(x_k \vert pa_k) \\
& \neq & \frac{p(x_1,...,x_n)} {p(x_i )} =\frac{1}{p(x_i)}\prod_{k=1}^{n} p(x_k \vert pa_k) \\
&=& p(\{x_1,...,x_n\} \backslash \{x_i\} \vert X_i=x_i)
\end{eqnarray*}
where the last expression is the corresponding conditional probability distribution when we have observed $X_i=x_i$, which is generally different from the distribution obtained by conditioning by intervention.
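To make the contrast between conditioning by observation and conditioning by intervention concrete, the following Python sketch simulates data from a simple three-variable model $X \rightarrow Z$, $X \rightarrow Y$, $Z \rightarrow Y$ (the model of Figure \ref{simple.bn} below) with made-up coefficients, purely for illustration, and estimates both quantities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 500000
x = rng.integers(0, 2, n)                                  # confounder
z = (rng.random(n) < 0.2 + 0.6 * x).astype(int)            # p(z | x)
y = (rng.random(n) < 0.3 + 0.4 * x + 0.1 * z).astype(int)  # p(y | x, z)

def p_y1_given(z_val):
    # Observational conditioning: p(Y=1 | Z=z_val).
    return y[z == z_val].mean()

def p_y1_do(z_val):
    # Intervention: p(Y=1 | do(Z=z_val)) = sum_x p(Y=1 | z_val, x) p(x).
    return sum((x == s).mean() * y[(x == s) & (z == z_val)].mean()
               for s in np.unique(x))

print(p_y1_given(1) - p_y1_given(0))  # ~0.34: biased by confounding via x
print(p_y1_do(1) - p_y1_do(0))        # ~0.10: the causal effect of z
\end{verbatim}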
\small
\begin{figure}
\begin{center}
\setlength{\unitlength}{1mm}
\begin{picture}(100,40)(-50,-10)
\thicklines
\put(0,20){\circle{7}}
\put(-2,19){$X$}
\put(-20,0){\circle{7}}
\put(-22,-1){$Z$}
\put(20,0){\circle{7}}
\put(18,-1){$Y$}
\put(-2,17){\vector(-1,-1){15}}
\put(2,17){\vector(1,-1){15}}
\put(-16.5,0){\vector(1,0){33}}
\put(-22, -10){$p(y,z,x)=p(x)p(z\vert x)p(y\vert x,z)$}
\end{picture}
\end{center}
\caption{ \label{simple.bn} Bayesian network for causal model}
\end{figure}
\normalsize
The causal relationships between $X$, $Z$ and $Y$ in our context can be represented by the causal network model $p(y,z,x)=p(x)p(z\vert x)p(y\vert x,z)$, as shown in Figure \ref{simple.bn}. If we intervene on $Z$ as $do(Z=z)$ for $z=0,1$, then the intervention distribution is
\begin{eqnarray*}
p(Y=y, X=x \vert do(Z=z)) &=& \frac{p(X=x)p(Z=z \vert X=x)p(Y=y \vert Z=z,X=x)}{p(Z=z \vert X=x)}
\end{eqnarray*}
So we have $p(Y=y \vert do(Z=z)) = \sum_x p(Y=y \vert Z=z,X=x)p(X=x)$. The causal effect of the treatment option $Z=1$ compared to the control option $Z=0$ is defined as
\begin{eqnarray*}
\rho &= & \sum_y y p(Y=y \vert do(Z=1)) - \sum_y y p(Y=y \vert do(Z=0)) \\
&=& \sum_y y \sum_x p(Y=y \vert Z=1,X=x)p(X=x) - \sum_y y \sum_x p(Y=y \vert Z=0,X=x)p(X=x) \\
&=& \tau
\end{eqnarray*}
So we have seen that the strongly ignorable treatment assignment assumption in the potential outcome model is equivalent to implementing intervention operations on probability distributions when the confounding factors are the same in both cases, i.e., they yield analytically the same causal effect estimates. In fact, from the above one can see that the probability distribution of the potential outcome of a hypothetical treatment assignment under the strong ignorability assumption and that of the outcome of the intervention with the same value are the same. For $i=0,1$,
\begin{eqnarray*}
p(Y_i=y)&=& \sum_x p(Y_i=y \vert x)p(x) = \sum_x p(Y_i=y \vert Z=i,x)p(x) \\
&=& \sum_x p(Y=y \vert Z=i,x)p(x) =p(Y=y \vert do(Z=i))
\end{eqnarray*}
Now consider the case where the treatment also has an indirect effect on the outcome in addition to its direct effect. Suppose the effect of $Z$ on $Y$ is also mediated through $Z'$, and let the set of confounders among the causal relationships between them be denoted by $X$, where $X$ is the union of the distinct sets of confounders $X_1$, $X_2$, $X_3$ and $X_4$: $X_1$ and $X_4$ are the confounders of the direct causal relation between $Z$ and $Z'$; $Z$, $X_2$ and $X_4$ are those of the direct causal relation between $Z'$ and $Y$; and $X_3$ and $X_4$ together complete the set of all confounders for the indirect causal relation between $Z$ and $Y$. Here we have taken all the confounders rather than the respective admissible sets, for simplicity. Let us define the potential outcome $Y_{ij}$ as the outcome that would have been observed had $Z=i$ and $Z'=j$; then $Y_i=Z' Y_{i1}+(1-Z')Y_{i0}$ for $i,j=0,1$ and $Y=Z Y_1+(1-Z)Y_0$. We then have the strong ignorability assumptions $Z_1',Z_0' \perp Z \vert \{X_1,X_4\}$ and $Y_{i1},Y_{i0} \perp Z' \vert \{Z=i,X_2,X_4 \} $ for $i=0,1$, for the direct causal relationships $ Z \rightarrow Z'$ and $Z' \rightarrow Y$ respectively. But these do not imply the ignorability assumption for $(Y_1,Y_0)$ and $Z$. So we need to assume additionally that $Y_1,Y_0 \perp Z \vert X $. Note that there is no obvious way to take a subset of $X$ as the conditioning set. In this case also we get $p(Y_i=y) = \sum_{x} p(Y=y \vert Z=i,x)p(x) $ for $i=0,1$. And in the causal graphical model,
\begin{eqnarray*}
p(x_1,...,x_4,z,z',y) &=& p(x_1,...,x_4)p(z \vert x_1,x_3,x_4)p(z' \vert z,x_1,x_2,x_4)p(y \vert z',z,x_2,x_3,x_4) \\
p(x_1,...,x_4,z',y \vert do(z)) &=&p(x_1,...,x_4)p(z' \vert z,x_1,x_2,x_4)p(y \vert z',z,x_2,x_3,x_4) \\
p(y \vert do(z)) &=& \sum_{x_1,..,x_4,z'} p(x_1,...,x_4)p(z' \vert z,x_1,x_2,x_4)p(y \vert z',z,x_2,x_3,x_4) \\
&=& \sum_{x_1,x_2,x_3,x_4} p(x_1,x_2,x_3,x_4)p(y \vert z,x_1,x_2,x_3,x_4)
\end{eqnarray*}
Therefore, $p(Y_i=y) = p(y \vert do(Z=i)) $ for $i=0,1$. So we have seen that the two frameworks yield the same causal effect estimates, and hence are equivalent in this case too. However, since $p(y \vert do(z))=\sum_{x_1,x_3,x_4} p(x_1,x_3,x_4)p(y \vert z,x_1,x_3,x_4)$, adjusting only for the parents of $Z$, which form an admissible set, using the graphical model is more efficient compared to the potential outcome framework.
Since one can encounter situations where the causal structures of the phenomena are complex, it is advisable to use causal graph interventions for estimating the desired causal effect. If the confounding factors taken into consideration in the potential outcome model and in the graphical model are the same, then both models yield analytically the same causal effect estimates.
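As a sanity check of the reduction above, the following sketch simulates data from the assumed mediation structure (with made-up coefficients, purely for illustration) and compares the adjustment over all four confounders with the adjustment over the admissible set $\{X_1,X_3,X_4\}$ only; the two estimates agree up to sampling noise.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 500000
x1, x2, x3, x4 = (rng.integers(0, 2, n) for _ in range(4))
z  = (rng.random(n) < 0.1 + 0.2*x1 + 0.2*x3 + 0.2*x4).astype(int)
zp = (rng.random(n) < 0.1 + 0.2*z + 0.2*x1 + 0.1*x2 + 0.1*x4).astype(int)
y  = (rng.random(n) < 0.1 + 0.2*zp + 0.2*z + 0.1*x2 + 0.1*x3 + 0.1*x4)

def adjusted_effect(strata):
    # sum_s p(S=s) [ E(Y | Z=1, S=s) - E(Y | Z=0, S=s) ] for labels S.
    tau = 0.0
    for s in np.unique(strata):
        in_s = (strata == s)
        tau += in_s.mean() * (y[in_s & (z == 1)].mean()
                              - y[in_s & (z == 0)].mean())
    return tau

print(adjusted_effect(x1 + 2*x2 + 4*x3 + 8*x4))  # all four confounders
print(adjusted_effect(x1 + 2*x3 + 4*x4))         # admissible set {x1,x3,x4}
\end{verbatim}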
\section{Some Differences in Two Modeling Frameworks} \label{SomDiff}
As seen earlier, in order to obtain the same numerical causal effect estimates in both frameworks, they should include supersets of the same admissible sets of confounders and the same probability density estimates. However, researchers who use the potential outcome model tend to also include as confounders pre-treatment covariates that are associative, but not causal, with both $Z$ and $Y$. This can induce spurious bias, as shown in the literature using the graphical modeling framework. Such factors may not be direct confounders but are said to induce so-called M-bias in causal effect estimation; therefore researchers argue that they should be neglected in causal effect estimation \cite{SI2009, RD2009, SA2009, PJ2009a}. However, when a pre-treatment covariate that is associative with both treatment and outcome is found, this may indicate that there is either a single unmeasured confounder or two dependent unmeasured confounders in the system, not necessarily two independent confounders as considered in the above debate. In the former two cases (a single unmeasured confounder or two dependent ones), the causal effect estimates are biased whether one conditions on the associative covariate or not. In a forthcoming paper \cite{WL2014} it is shown that in these two cases it is more beneficial to condition on the associative covariate than not to do so. We avoid further discussion of this topic here.
Another difference is caused by discriminative versus generative estimation of probabilities: in the potential outcome model individual conditional probabilities are often estimated discriminatively, for example using logistic regression for propensity score estimation, whereas in the graphical model the joint likelihood is often maximized to obtain the component conditional probabilities of the factorization of the joint density of $Z$, $X$ and $Y$. The factorization $p(X=x,Z=z,Y=y)=p(X=x)p(Z=z \vert X=x)p(Y=y \vert Z=z, X=x)$ includes the propensity scores, and therefore if the two estimation methods yield numerically different estimates for the propensity scores, then they can result in different causal effect estimates. See below for further comments.
\section{Some Causal Effect Estimators}
Let us see how the graphical model estimator can be used to derive the causal effect estimators commonly found in potential outcome model applications, such as the inverse probability of treatment weighted estimator, the stratified estimator and the doubly robust estimator. In the following we avoid direct definitions of these estimators, instead deriving them by manipulation of the graphical model estimator.
\subsection{Inverse Probability of Treatment Weighted Estimator}
The graphical model causal effect estimator $\rho$ is equivalent to the inverse probability of treatment weighted estimator (\emph{IPTW}) described in \cite{RH2000}.
\begin{eqnarray*}
\rho &= & \sum_y y p(Y=y \vert do(Z=1)) - \sum_y y p(Y=y \vert do(Z=0)) \\
&=& \sum_y y \sum_x p(Y=y \vert Z=1,X=x)p(X=x) \\
& &- \sum_y y \sum_x p(Y=y \vert Z=0,X=x)p(X=x) \\
&=& \sum_y y \sum_x \frac{p(Y=y, Z=1,X=x)}{p(Z=1 \vert X=x)} - \sum_y y \sum_x \frac{p(Y=y,Z=0,X=x)}{p(Z=0 \vert X=x)} \\
&=& \sum_x \frac{p(Y=1, Z=1,X=x)}{p(Z=1 \vert X=x)} - \sum_x \frac{p(Y=1,Z=0,X=x)}{p(Z=0 \vert X=x)} \\
&=& \sum_x \frac{1}{e(x)}\frac{N(Y=1, Z=1,X=x)}{N} -\sum_x \frac{1}{1-e(x)}\frac{N(Y=1, Z=0,X=x)}{N} \\
&=& \sum_i \frac{1}{e(x^i)}\frac{I(Y^i=1) I(Z^i=1) I(X^i=x^i)}{N} -\sum_i \frac{1}{1-e(x^i)}\frac{I(Y^i=1) I(Z^i=0) I(X^i=x^i)}{N} \\
&=& \frac{1}{N} \sum_i \frac{Z^i Y^i}{e^i} -\frac{1}{N} \sum_i \frac{(1-Z^i) Y^i}{1-e^i} = IPTW
\end{eqnarray*}
where $N(.)$ denotes the number of data cases satisfying its argument, $I(.)=1$ when its argument is true and $I(.)=0$ otherwise, and $p(Z^i=1 \vert X^i=x^i)=e(x^i)=e^i$. Therefore, analytically, the graphical model intervention estimator is the \emph{IPTW} estimator. However, the two can differ numerically, for example when the propensity score estimates $e(x)$ differ between the two contexts, as discussed in Section \ref{SomDiff}.
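In code, the \emph{IPTW} estimator is a one-line computation. A minimal sketch (hypothetical array names; the propensity scores \texttt{e} are supplied externally, e.g., as within-stratum sample proportions, in which case the result coincides with the graphical-model plug-in estimate, or as logistic-regression fits, in which case it may differ):
\begin{verbatim}
import numpy as np

def iptw(y, z, e):
    # (1/N) sum_i Z^i Y^i / e^i  -  (1/N) sum_i (1-Z^i) Y^i / (1-e^i)
    return np.mean(z * y / e) - np.mean((1 - z) * y / (1 - e))
\end{verbatim}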
\subsection{Stratified Estimator}
Essentially, we can obtain the propensity score stratified estimator \cite{WM2012} from \emph{IPTW}, since it is just a stratification of the range of propensity score values into several bins, where within each bin the propensity scores are assumed to be approximately the same. In fact this is an algebraic simplification (summing up fractions by assuming some of them have equal denominators), but what is important to note is that in the stratified estimator those common propensity scores, one for each set of approximately equal scores, are estimated by the sample proportions of treated subjects among the subjects related to those scores. These estimates are, in fact, the maximum likelihood estimates of $P(Z \vert X')$ from the likelihood of the joint density, where $X'$ is obtained from $X$ through a `new' definition of the state space of $X$. For clarity, let us see how the stratified estimator is related to the \emph{IPTW} estimator; since the use of common propensity score values in the stratified estimator is only implicit, this is not clear to most applied researchers. Suppose we write the propensity score estimates of all the subjects in the sample in increasing order, say, $e^{(1)},...,e^{(N)}$, and we stratify the sequence into $K$ bins such that bin $s$ has $N r_s$ propensity scores (corresponding subjects), where the vector $(r_1,...,r_K)$ satisfies $\sum_s r_s=1$. For each subject define the variable $S \in \{1,...,K\}$ denoting its propensity score bin, i.e., $e^{i}$ is related to some $S=s$. Bin $s$ then contains many different propensity score values, but in the stratification we assume that they can be represented by a single score, say, $e^s$, for $s=1,...,K$, and we estimate the unknown $e^s$ by the proportion of treated subjects in bin $s$, i.e., $e^s=N_{1s}/N r_s$, where $N_{1s}$ and $N_{0s}$ are the numbers of treated and untreated subjects in bin $s$, so that $Nr_s =N_{1s}+N_{0s}$. Then
\begin{eqnarray*}
\rho & =& \frac{1}{N} \sum_i \frac{Z^i Y^i}{e^i} -\frac{1}{N} \sum_i \frac{(1-Z^i) Y^i}{1-e^i} \\
& \approx & \frac{1}{N} \sum_s \sum_i \frac{Z^i Y^i}{e^s} I(S_i=s)- \sum_s \frac{1}{N} \sum_i \frac{(1-Z^i) Y^i}{1-e^s} I(S_i=s)\\
& = & \sum_s r_s \sum_i \frac{Z^i Y^i}{N_{1s}} I(S_i=s)- \sum_s r_s \sum_i \frac{(1-Z^i) Y^i}{N_{0s}} I(S_i=s) = \rho_s
\end{eqnarray*}
which is the stratified estimator. Due to these approximations the stratified estimator may not be equal to the \emph{IPTW} estimator. In practice usually $K=5$; therefore only $5$ possible values of the propensity score are used in the estimator, even though there should be $N$ propensity scores.
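A minimal sketch of this estimator, assuming quantile binning into $K$ bins (the binning rule is an illustrative choice; practitioners use various rules):
\begin{verbatim}
import numpy as np

def stratified(y, z, e, K=5):
    # Bin the estimated propensity scores into K quantile bins and use,
    # within each bin s, the observed proportion of treated subjects as
    # the common score; the bin weights r_s are the bin proportions.
    edges = np.quantile(e, np.linspace(0, 1, K + 1))
    s = np.digitize(e, edges[1:-1])           # bin labels 0, ..., K-1
    tau = 0.0
    for b in range(K):
        in_b = (s == b)
        r_b = in_b.mean()                     # r_s
        y1 = y[in_b & (z == 1)]
        y0 = y[in_b & (z == 0)]
        tau += r_b * (y1.mean() - y0.mean())  # r_s (sum Y/N_1s - sum Y/N_0s)
    return tau
\end{verbatim}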
\subsection{Doubly Robust Estimator}
The so-called doubly robust (\emph{DR}) estimator (see \cite{LD2004} and references therein) is popular in the potential outcome framework. To understand how it is related to the graphical model estimator, suppose that in the causal network we use a predicted outcome, say, $\hat{Y}$, instead of the actually observed $Y$; that is, we use two separate regression models, say, $\hat{Y}_1:=E\{Y \vert Z=1,X \}$ and $\hat{Y}_0:=E\{Y \vert Z=0,X \}$, to predict the possible outcomes for each subject. By this task, which is done externally to the causal graphical model, we have data on the pair of variables $\hat{Y}_0$ and $\hat{Y}_1$ (for $Z=0$ and $Z=1$ respectively) for each subject, even though each subject has either $Z=0$ or $Z=1$. First, for simplicity, let us assume that both $\hat{Y}_0$ and $\hat{Y}_1$ take values only in the set $\{0,1\}$ (as if the regression functions were classifiers). Then the average causal effect estimate based on the predicted outcome, say, $\rho_p$, is
\begin{eqnarray*}
\rho_p &= & \sum_y y p(\hat{Y}=y \vert do(Z=1)) - \sum_y y p(\hat{Y}=y \vert do(Z=0)) \\
&=& \sum_y y \sum_x p(\hat{Y}=y \vert Z=1,X=x)p(X=x) - \sum_y y \sum_x p(\hat{Y}=y \vert Z=0,X=x)p(X=x) \\
&=& \frac{1}{N} \sum_i \frac{Z^i \hat{Y}_1^i}{e^i} -\frac{1}{N} \sum_i \frac{(1-Z^i) \hat{Y}_0^i}{1-e^i}
\end{eqnarray*}
Note that the above estimator depends on the regression models used. One drawback of $\rho_p$ is that it does not use both predictions for each subject, even though both are available. So let us consider the following modification, yielding another estimate, say, $\rho_p'$;
\begin{eqnarray*}
\rho_p' &=& \rho_p - \Bigg\{ \frac{1}{N}\sum_{i} \hat{Y}_1^i - \frac{1}{N}\sum_{i} \hat{Y}_0^i \Bigg\} \\
&=& \frac{1}{N} \sum_i \Bigg\{ \frac{Z^i \hat{Y}_1^i}{e^i} - \hat{Y}_1^i \Bigg\} -\frac{1}{N} \sum_i \Bigg\{ \frac{(1-Z^i) \hat{Y}_0^i}{1-e^i} - \hat{Y}_0^i \Bigg\} \\
&=& \frac{1}{N} \sum_i \frac{(Z^i-e^i) \hat{Y}_1^i}{e^i} +\frac{1}{N} \sum_i \frac{(Z^i-e^i) \hat{Y}_0^i}{1-e^i}
\end{eqnarray*}
Now
\begin{eqnarray*}
\rho - \rho'_{p}&=& \frac{1}{N} \sum_i \Bigg\{ \frac{Z^i Y^i}{e^i} - \frac{(Z^i-e^i) \hat{Y}_1^i}{e^i} \Bigg\} - \frac{1}{N} \sum_i \Bigg\{ \frac{(1-Z^i) Y^i}{1-e^i} + \frac{(Z^i-e^i) \hat{Y}_0^i}{1-e^i} \Bigg\} = DR
\end{eqnarray*}
which is called the doubly robust estimator ($DR$). That is, we can obtain the $DR$ estimator from the graphical model estimator if we use both the observed outcome and some predicted outcome in the graphical model. Note that $\rho_p'$ is effectively zero if our propensity score estimates are equal to the respective sample proportions, i.e., the maximum likelihood estimates from the joint likelihood for $p(y,z,x)$; then the $DR$ and the \emph{IPTW} estimators coincide. Furthermore, numerically, the \emph{IPTW} estimator is just the maximum-likelihood-parameter-based graphical model estimate when the propensity scores are the sample proportions. So the $DR$ estimator is numerically equal to the basic graphical model estimator in this case.
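The displayed formula translates directly into code. A minimal sketch, with hypothetical inputs \texttt{y1hat} and \texttt{y0hat} holding the per-subject outcome-regression predictions:
\begin{verbatim}
import numpy as np

def doubly_robust(y, z, e, y1hat, y0hat):
    # DR = (1/N) sum_i [ Z Y/e - (Z-e) y1hat/e ]
    #    - (1/N) sum_i [ (1-Z) Y/(1-e) + (Z-e) y0hat/(1-e) ].
    t1 = np.mean(z * y / e - (z - e) * y1hat / e)
    t0 = np.mean((1 - z) * y / (1 - e) + (z - e) * y0hat / (1 - e))
    return t1 - t0
\end{verbatim}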
Often researchers estimate propensity scores through a model, for example a logistic regression with independent variables $X$ (as a linear and/or non-linear combination of them). But generally no one knows the true model in a given empirical context, and therefore the $DR$ estimate may be affected by the propensity model specification. When the propensity scores are consistently estimated, the \emph{IPTW} estimator is consistent for the average causal effect, and therefore so is the $DR$ estimator. Note that this result holds irrespective of the specification of the two regression models $E\{Y \vert Z=1,X\}$ and $E\{Y \vert Z=0,X\}$, whether they are true or not. However, for small samples the $DR$ estimate may depend on the regression models used if the estimated propensity scores differ from the corresponding sample proportions.
Likewise, it may be of interest to see what the $DR$ estimator becomes if we have the true outcome regression models. First consider writing the $DR$ estimator as follows.
\begin{eqnarray*}
DR &=& \frac{1}{N} \sum_i \Bigg\{ (Y^i - \hat{Y}_1^i ) \frac{Z^i }{e^i} - (Y^i - \hat{Y}_0^i ) \frac{(1-Z^i)}{1-e^i} \Bigg\} + \Bigg\{ \frac{1}{N}\sum_{i} \hat{Y}_1^i -\frac{1}{N}\sum_{i} \hat{Y}_0^i \Bigg\}
\end{eqnarray*}
From the above we know that $E_x E_y \{ Y \vert Z=1,X\} = \sum_{x,y} yp(y \vert Z=1,x)p(x)= \frac{1}{N} \sum_i \frac{Y^i Z^i}{e^i} $, and therefore $E_x E_{\hat{y}} \{ \hat{Y} \vert Z=1,X\}=\sum_{x,\hat{y}} \hat{y}p(\hat{y} \vert Z=1,x)p(x)= \frac{1}{N} \sum_i \frac{\hat{Y}^i Z^i}{e^i} $. Now consider the case $Z=1$. Since for each $X=x$, $\hat{Y}$ is a single value, say, $\hat{y}(Z=1,x)$, we have $\sum_{x,\hat{y}} \hat{y}p(\hat{y} \vert Z=1,x)p(x) = \sum_x \hat{y}(Z=1,x) p(x)$. If we take $\hat{y}(Z=1,x)=\sum_{y} yp(y \vert Z=1,x)$ for each $x$, i.e., if we let our regression function at $X=x$ be the empirical mean of the $Y$ values at $X=x$, then we get $\sum_i \frac{Y^i Z^i}{e^i}=\sum_i \frac{\hat{Y}^i Z^i}{e^i}$. In a similar way, for the case $Z=0$ we get $\sum_i \frac{Y^i (1-Z^i)}{1-e^i}=\sum_i \frac{\hat{Y}^i (1-Z^i)}{1-e^i}$. Together these imply that
\begin{eqnarray*}
DR &=& \frac{1}{N}\sum_{i} \hat{Y}_1^i -\frac{1}{N}\sum_{i} \hat{Y}_0^i \\
&=& \frac{1}{N}\sum_{x} N_x \hat{Y}(Z=1,x) - \frac{1}{N}\sum_{x} N_x \hat{Y}(Z=0,x) \\
&=& \sum_{x} \hat{Y}(Z=1,x) p(x)- \sum_{x} \hat{Y}(Z=0,x) p(x)
\end{eqnarray*}
That is, if the outcome regression model $\hat{Y}(Z=z,X=x)$ takes as its value at $X=x$ the mean of the observed $Y$ values at $X=x$ for $Z=z$, for $z=0,1$, in other words when the two regression models are the true models, then the $DR$ estimator has the above simple form, which is independent of the propensity score model, whether it is correct or not.
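The following sketch checks this invariance numerically on simulated data (made-up coefficients, purely for illustration): with the outcome models set to the within-stratum sample means of $Y$, the $DR$ estimate is unchanged when the propensity model is deliberately misspecified, as long as the candidate scores are constant within each stratum.
\begin{verbatim}
import numpy as np

def doubly_robust(y, z, e, y1hat, y0hat):
    # Same DR formula as above.
    t1 = np.mean(z * y / e - (z - e) * y1hat / e)
    t0 = np.mean((1 - z) * y / (1 - e) + (z - e) * y0hat / (1 - e))
    return t1 - t0

rng = np.random.default_rng(3)
n = 100000
x = rng.integers(0, 2, n)
z = (rng.random(n) < 0.3 + 0.4 * x).astype(int)
y = (rng.random(n) < 0.2 + 0.3 * x + 0.2 * z).astype(float)

# "True" outcome models: within-stratum sample means of Y.
y1hat = np.zeros(n)
y0hat = np.zeros(n)
for s in (0, 1):
    y1hat[x == s] = y[(x == s) & (z == 1)].mean()
    y0hat[x == s] = y[(x == s) & (z == 0)].mean()

e_true  = np.where(x == 1, 0.7, 0.3)   # the correct propensity model
e_wrong = np.where(x == 1, 0.5, 0.6)   # deliberately misspecified
print(doubly_robust(y, z, e_true,  y1hat, y0hat))
print(doubly_robust(y, z, e_wrong, y1hat, y0hat))  # identical value
                                       # (up to floating-point error)
\end{verbatim}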
In fact, we do not need the above restriction on the values of $\hat{Y}_0$ and $\hat{Y}_1$ for the validity of the above discussion. Generally a regression model predicts a continuous variable, and for any continuous random variable $Y$ with a random sample of $n$ observations, say $\{ y_1,...,y_n \}$, $ \int_y y p(y) dy$ is estimated by $\sum_{i=1}^n y_i /n$. In general, when $Y$ is continuous and $X$ is a mixture of discrete and continuous variables, then writing the summations over $X$ as integrations where appropriate, we have
\begin{eqnarray*}
\int_x \int_y yp(y \vert z,x)p(x)\, dy\, dx&=& \sum_x p(x) \frac{1}{N(z,x)} \sum_j y^j I(Z^j=z)I(X^j=x) \\
&=& \frac{1}{N} \sum_x \frac{N(x)}{N(z,x)} \sum_j y^jI(Z^j=z)I(X^j=x) \\
&=& \frac{1}{N} \sum_j y^j \sum_x \frac{I(Z^j=z)I(X^j=x)}{N(z,x)/N(x)} \\
&=& \frac{1}{N} \sum_j y^j \frac{I(Z^j=z)I(X^j=x^j)}{P(Z^j=z \vert X^j=x^j)}
\end{eqnarray*}
Therefore $\int_x \int_y yp(y \vert Z=1,x)p(x)=\frac{1}{N} \sum_j \frac{y^j z^j}{e^j}$ and $ \int_x \int_y yp(y \vert Z=0,x)p(x)=\frac{1}{N} \sum_j \frac{y^j (1-z^j)}{1-e^j}$. So, when $\hat{Y}_0$ and $\hat{Y}_1$ are continuous random variables, then $\rho_p = \int_y y P(\hat{Y}=y \vert do(Z=1)) - \int_y y P(\hat{Y}=y \vert do(Z=0))$, which can be estimated with summations; thus the above formulas can be obtained. From the above it is clear that all of the discussion generalizes to the case where $Y$ and $X$ have any finite state spaces.
\section{Introduction \label{sec:intro}}
Interaction-free measurement (IFM) is a counterintuitive feature of quantum mechanics that enables detecting the presence of an object without interacting with it \cite{elitzur1993quantum}. Since its discovery, various applications of IFM have been identified in quantum information processing, leading to so-called counterfactual quantum key distribution \cite{guo1999quantum, noh2009counterfactual,shenoy2013semi, rao2021noiseless}, direct communication \cite{salih2013protocol, aharonov2019modification}, quantum computation \cite{cao2020counterfactual}, certificate authorization \cite{shenoy2014counterfactual} and others \cite{shenoy2017quantum}.
The basic principle behind IFM is paradigmatically explained using the Mach-Zehnder (MZ) interferometer. A photon incident on the MZ interferometer is split at a beam-splitter, and re-interfered at a second splitter, leading to a detection at a specific output port owing to destructive interference. However, if an obstacle is inserted in one of the arms, the consequent disruption of the destructive interference leads to a possible detection at the other output port. Counterfactuality or IFM refers to this feature whereby the presence of an obstacle on one interferometric arm is revealed by a measuring device placed elsewhere.
Yet another counterintuitive feature of quantum mechanics is the indistinguishability of identical particles, which is responsible for the Hong-Ou-Mandel effect \cite{hong1987measurement}, bosonic stimulation \cite{shenoy2013efficient} and boson sampling \cite{aaronson2011computational}, among others. Here, the indistinguishability is enforced through the commutation conditions $[a_j, a^\dagger_k] = \delta_{j,k}$ in the case of bosons (resp., $\{a_j, a^\dagger_k\} = \delta_{j,k}$ in the case of fermions), where the subscripts $j$ and $k$ refer to the modes of the quantum field.
In this work, we propose the idea of ``IFM-by-proxy'', which is a fundamental modification of IFM that combines counterfactuality and indistinguishability. Under IFM-by-proxy, the presence of an obstacle on a particle's path is revealed by a measurement made elsewhere, on another particle that is indistinguishable with the first particle. Even though IFM-by-proxy involves two or more particles, yet paradoxically it turns out to be a single-particle interference effect in the ideal case. We refer to this aspect of quantum indistinguishability brought forth by IFM-by-proxy as \textit{counterfactual indistinguishability}, which will be distinguished from the ``usual'' aspect of indistinguishability, mentioned in the preceding paragraph.
This article is organized as follows. A practical scheme for observing IFM-by-proxy using attenuated coherent pulses is presented in Sec. \ref{sec:tpifm}. An idealization thereof is explored in Sec. \ref{sec:exp}. A generalization of the experimental idea to a three-pulse interference and beyond is given in Section \ref{sec:3pulse}. Finally, we present discussions and conclusions in Sec. \ref{sec:conc}, where we briefly indicate potential experimental parameters that are suitable to realize the proposed experiment.
\section{Interaction-free measurement based on indistinguishability \label{sec:tpifm}}
We consider the following variant of IFM involving a modified MZ interferometer, as depicted in Fig. \ref{fig:ifm2}. One of the arms (labelled $l$) is longer than the other (labelled $s$), with the path length difference between the two denoted $\delta$. A retractable obstacle $O$ is present in the long arm. The condition $\delta>0$ ensures that when a light pulse is incident on the first beam-splitter BS$_1$, the resulting two partial waves will not recombine at the second beam-splitter BS$_2$. Thus, the conventional IFM is ruled out.
\begin{figure}[h]
\includegraphics[scale=0.35]{Figure1.pdf}
\caption{Schematic of a modified Mach-Zehnder interferometer to observe IFM-by-proxy with a train of coherent pulses, incident on beam-splitter BS$_1$. Owing to the difference in the length of the two arms, partial waves belonging to any two consecutive pulses meet at the beam-splitter BS$_2$, where they interfere by virtue of quantum indistinguishability. In the absence of obstacle $O$, any detection event necessarily happens at detector $D_1$. In the presence of $O$, a detection can happen at detector $D_2$ as well, constituting a remote measurement of $O$.}
\label{fig:ifm2}
\end{figure}
Suppose a train of $N$ identical, attenuated coherent pulses, labelled with time stamp $t_n$, where $n = \{1, 2, 3, \cdots, N\}$, is incident on BS$_1$. Consecutive pulses are spaced by a constant interval of $\delta$, which ensures that the partial wave of $(j)$-th pulse traveling via the short arm and that of $(j-1)$-th pulse traveling via the long arm, are incident at the same time on BS$_2$.
The state of the train is given by:
\begin{equation}
\ket{\Psi} = \bigotimes_{j=1}^{N} \ket{\alpha e^{i\phi_j}},
\label{eq:coherent}
\end{equation}
where $|\alpha|^2$ is the mean photon number per pulse and $\phi_j$ is the initial phase of the state, which we set to $\phi_j=0$ to begin with. If $\hat{a}^\dagger$ denotes the creation operator for the input mode, then
the transformation at BS$_1$ is given by:
$\hat{a}^\dagger_{n} \rightarrow \frac{1}{\sqrt2}(\hat{l}^\dagger_{n} + i\hat{s}^\dagger_{n})$, where $\hat{l}^\dagger_n$ and $\hat{s}^\dagger_n$ are the creation operators of the respective arms of the interferometer. Similarly, the transformations at BS$_2$ is $\hat{l}^\dagger_n \rightarrow \frac{1}{\sqrt{2}}(i\hat{c}^\dagger_n + \hat{d}^\dagger_n)$ and $\hat{s}^\dagger_n \rightarrow \frac{1}{\sqrt{2}}(\hat{c}^\dagger_n + i\hat{d}^\dagger_n)$, where $\hat{c}^\dagger_n$ and $\hat{d}^\dagger_n$ are the creation operators of the respective output ports.
The action of a beam-splitter on coherent states $\ket{\alpha}$ and $\ket{\beta}$ incident on its two input ports is given by $\ket{\alpha}\ket{\beta} \xrightarrow{\rm BS} \ket{\frac{\alpha + i \beta}{\sqrt{2}}}\ket{\frac{i\alpha + \beta}{\sqrt{2}}}$. Accordingly, at BS$_1$, for the $(j)$-th pulse and consistent with the operator transformation above, we have $\ket{\alpha}_j\ket{\rm vac} \xrightarrow{{\rm BS}_1} \ket{\frac{\alpha}{\sqrt{2}}}_{j,l}\ket{\frac{i\alpha}{\sqrt{2}}}_{j,s}$. Consider the case when obstacle $O$ (Fig. \ref{fig:ifm2}) is absent in the arm $l$. On account of the path difference, the partial waves of two consecutive pulses meet, and we have
\begin{equation}
\ket{\frac{i\alpha}{\sqrt{2}}}_{j,s}\ket{\frac{\alpha}{\sqrt{2}}}_{j-1,l} \xrightarrow{{\rm BS}_2} \ket{i\alpha}_{j,c}\ket{\rm vac}_{j,d},
\label{eq:BS2}
\end{equation}
implying that there can be a detection at detector $D_1$ and none at $D_2$. We note that in Eq. (\ref{eq:BS2}) the interfering partial waves at BS$_2$ belong to two consecutive pulses. Therefore, the quantum indistinguishability of the photons in the pulses is crucial for this interference to happen.
If obstacle $O$ is inserted, then the pulse amplitude in arm $l$ is blocked. Therefore, in place of Eq. (\ref{eq:BS2}), we have
\begin{equation}
\ket{\frac{i\alpha}{\sqrt{2}}}_{j,s}\ket{\rm vac}_{j-1,l} \xrightarrow{{\rm BS}_2} \ket{\frac{i\alpha}{2}}_{j,c}\ket{\frac{-\alpha}{2}}_{j,d},
\label{eq:BS2+}
\end{equation}
showing that there could be a detection at detector $D_2$ as well. This measurement of $O$ is indeed counterfactual in the sense that the detection at $D_2$ entails the presence of a blocking action elsewhere. Letting $|\alpha|^2 \ll 1$, typically $|\alpha|^2=0.1$, the probability $P_{\emptyset}$ that there is no detection at $O$ conditioned on a detection at $D_2$, is given by $P_{\emptyset} \approx 1-\frac{1}{2}|\alpha|^2 = 0.95$. Thus, strictly speaking, the measurement of $O$ is not interaction-free.
In conventional IFM, the detection of a particle at one place indicates its blocking elsewhere. Indeed, this nonlocal element is present here also. Additionally, the measurement of the $(j)$-th pulse can indicate the blocking (elsewhere) of \textit{another} pulse, namely the $(j-1)$-th. As noted earlier, it seems natural to refer to this type of measurement as ``IFM-by-proxy'' or ``proxy counterfactual'', in that the measured ($(j)$-th) pulse acts as a proxy for another ($(j-1)$-th) pulse, such that the measurement of the former pulse could indicate the presence of a blockade in the path of the latter pulse.
If the two converging partial waves at BS$_2$ were distinguishable (say, possessing different wavelengths, or one particle being a photon and the other a neutron), then they would not interfere at BS$_2$, and could therefore lead to any of the possible joint detections (namely, both at $D_1$, both at $D_2$, or one at each). Thus, the interference in Eq. (\ref{eq:BS2}) implies that the photons of the two pulses meeting at BS$_2$ are indistinguishable. This aspect of quantum indistinguishability responsible for IFM-by-proxy constitutes counterfactual indistinguishability, a simple but intimate union of quantum indistinguishability and quantum counterfactuality.
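The above bookkeeping of coherent amplitudes is easily verified numerically. A minimal Python sketch, assuming ideal lossless 50:50 beam-splitters with the convention used in the text:
\begin{verbatim}
import numpy as np

def bs(a, b):
    # 50:50 beam-splitter acting on the coherent amplitudes (a, b) at
    # its two ports, following the convention used in the text.
    return (a + 1j * b) / np.sqrt(2), (1j * a + b) / np.sqrt(2)

alpha = np.sqrt(0.1)          # |alpha|^2 = 0.1 per pulse
l, s = bs(alpha, 0.0)         # BS1: long- and short-arm amplitudes

# Obstacle absent: the short-arm wave of pulse j meets the long-arm
# wave of pulse j-1 at BS2; all the intensity exits towards D1.
c, d = bs(s, l)
print(abs(c)**2, abs(d)**2)   # 0.1, 0.0

# Obstacle present: the long-arm amplitude is blocked.
c, d = bs(s, 0.0)
print(abs(d)**2)              # |alpha|^2/4 at D2: the counterfactual click
print(np.exp(-abs(l)**2))     # ~0.95: probability of no absorption at O
\end{verbatim}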
\section{Towards an ideal IFM-by-proxy \label{sec:exp}}
We noted in the preceding section that there is a non-zero probability of detection both at $D_2$ (this being counterfactual) and also at obstacle $O$, owing to the presence of multiphotons in the coherent pulse. As a result, the measurement here is not strictly interaction-free. We now point out how one may in principle realize an ideal IFM-by-proxy.
It is not hard to show that the ideal IFM-by-proxy can be realized if the train of $N$ pulses in Eq. (\ref{eq:coherent}) is replaced by the following ``tensor sum'' train:
\begin{equation}
\hat{Q}_N\ket{\rm vac} \equiv \frac{1}{\sqrt{N}}\sum_{n=1}^{N} \hat{a}^\dagger_n\ket{\rm vac}.
\label{eq:state}
\end{equation}
If this state is input into the setup of Fig. \ref{fig:ifm2}, then in the absence of $O$ it outputs a photon at $D_1$ with unit probability (up to boundary effects from the first and last pulse slots, of order $1/N$); if $O$ is inserted, the photon reaches the detectors only with probability $\frac{1}{2}$, arriving at $D_1$ or $D_2$ with probability $\frac{1}{4}$ each. Moreover, in the latter case a detection at $O$ and a detection at $D_1$ or $D_2$ are mutually exclusive. Thus this setup realizes an ideal IFM-by-proxy.
The state Eq. (\ref{eq:state}) can be engineered from the state Eq. (\ref{eq:coherent}) by means of suitable nonlinear filtering, as explained below. We note that Eq. (\ref{eq:coherent}) can be written as
\begin{equation}
\ket{\Psi} = e^{-N|\alpha|^2/2}\sum_{j=0}^{\infty}\frac{N^{j/2}\alpha^j}{j!} (\hat{Q}_N)^j \ket{\rm vac}.
\label{eq:coherent2}
\end{equation}
It follows that the required filtering operation is one that takes a train of $N$ pulses initially in the state Eq. (\ref{eq:coherent}), and probabilistically outputs the state Eq. (\ref{eq:state}) by eliminating the terms corresponding to $j=0$ (vacuum) and $j>1$ (higher-order excitations).
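A first-quantized numerical sketch of this ideal case follows, with the uniform amplitude profile representing the state of Eq. (\ref{eq:state}). It also exposes the small boundary effect from the first and last pulse slots, of order $1/N$, which vanishes for long trains and is neglected in the ideal description above:
\begin{verbatim}
import numpy as np

# Single photon with amplitude 1/sqrt(N) in each of N pulse slots
# (the tensor-sum state), propagated through the interferometer.
N = 50
psi = np.ones(N) / np.sqrt(N)
pc = pd = 0.0
for n in range(N + 1):                               # arrival slots at BS2
    a_s = 1j * psi[n] / np.sqrt(2) if n < N else 0.0  # short arm, pulse n
    a_l = psi[n - 1] / np.sqrt(2) if n >= 1 else 0.0  # long arm, pulse n-1
    c = (a_s + 1j * a_l) / np.sqrt(2)                # amplitude at D1
    d = (1j * a_s + a_l) / np.sqrt(2)                # amplitude at D2
    pc += abs(c)**2
    pd += abs(d)**2
print(pc, pd)  # 1 - 1/(2N) and 1/(2N): only the edge slots leak to D2
\end{verbatim}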
Interestingly, the (ideal) IFM-by-proxy can be understood using just ``first quantization'' arguments. In particular, the statement of quantum indistinguishability based on the commutation relations of the mode operators does not need to be invoked. Let the creation operators corresponding to detectors $D_1$ and $D_2$ at the detection time $t_n$ be given by (up to a global phase)
\begin{align}
\hat{c}^{\dagger}_n(t_n) &=\frac{1}{\sqrt2}\left(\hat{l}^{\dagger}_{n-1}e^{i(k\delta - \omega (t_{n}-t_{n-1}))} + i\,\hat{s}^{\dagger}_n\right), \\
\hat{d}^{\dagger}_n(t_n) &= \frac{1}{\sqrt2}\left(i\,\hat{l}^{\dagger}_{n-1}e^{i(k\delta - \omega (t_{n}-t_{n-1}))} + \hat{s}^{\dagger}_n\right),
\end{align}
where $\omega$ and $k$ denote the angular frequency and wave number of the mode, respectively. The probability of detection of a photon at detector $D_1$ at time $t_n$ is given by
\begin{equation}
\langle \hat{c}^{\dagger}_n(t_n) \hat{c}_n(t_n) \rangle = \frac{1}{2} (1 + \cos\left[k\delta - \omega (t_{n} - t_{n-1})\right]).
\label{eq:d1expt}
\end{equation}
Since $k\delta = \omega (t_{n} - t_{n-1})$, this interference is like that in a conventional MZ interferometer, except that the two partial waves that converge at BS$_2$ belong to two distinct (consecutive) pulses.
In this sense, even though two distinct pulses are involved here, the ideal IFM-by-proxy realizes \textit{single-photon} interference, rather than two-photon interference. In this light, it seems more apt to attribute counterfactual indistinguishability to photonic \textit{non-individuality} within the mode, and not to the (anti-)symmetrization condition. This type of indistinguishability may be differentiated from the ``conventional'' photonic indistinguishability indicated, for example, in the Hong-Ou-Mandel effect or boson sampling, where there is a genuine two-photon or multi-photon interference. In the former case, there is a cancellation of the two-photon amplitude for both particles being transmitted through a beam-splitter with that for both being reflected.
In place of the train of coherent pulses as in Eq. (\ref{eq:coherent}), suppose we have a train of single photons. This would be described by the state:
\begin{equation}
\ket{\Phi} \equiv \bigotimes_{n=1}^{N} \hat{a}^\dagger_n \ket{\rm vac}
\label{eq:stateX}
\end{equation}
in place of the state Eq. (\ref{eq:state}), i.e., replacing the tensor sum with a tensor product (apart from the normalization factor). The state Eq. (\ref{eq:stateX}) leads to a probabilistic Hong-Ou-Mandel effect, and not to the ``single-particle'' interference that is the basis of the IFM-by-proxy.
\section{Counterfactual quantum indistinguishability with multiple consecutive pulses \label{sec:3pulse}}
The principle of IFM-by-proxy can be straightforwardly extended to situations where multiple pulses interfere in an interferometric setup. Consider the analogous single-photon interference in a three-pulse scenario. A setup similar to that of Fig. \ref{fig:ifm2} can be considered, but with three consecutive pulses being interfered: the beam-splitters of Fig. \ref{fig:ifm2} are replaced by tritters (three-way beam-splitters) described by the unitary action
\begin{equation}
\mathbf{U}_3 = \begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{i}{2} & -\frac{1}{2} \\
\frac{i}{\sqrt{2}} & \frac{1}{2} & \frac{i}{2} \\
0 & \frac{i}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{pmatrix}.
\label{eq:3way}
\end{equation}
In general, any $n$-input $n$-output splitter can be realized by a cascaded setup of two-mode beam-splitters $\mathbf{U}_2$ and phase shifters \cite{reck1994experimental}. A proxy-counterfactual setup incorporating the tritter transformation of Eq. (\ref{eq:3way}), realized through a cascaded two-interferometer setup, is depicted in Fig. \ref{fig:ifm3}.
In the absence of a retractable obstacle $O$, only a $D_1$ detection occurs for the train of pulses given in Eq. (\ref{eq:coherent}). To show this, note that the state of the fields after BS$_1$ and BS$_2$ is given by:
\begin{subequations}
\begin{align}
\ket{\psi} &= \ket{\frac{\alpha}{\sqrt{2}}}_{j,s} \ket{\frac{-\alpha}{2}}_{j-1,m} \ket{\frac{i\alpha}{2}}_{j-2,l} \label{eq:3traina}\\
&\xrightarrow{{\rm BS}_3} \ket{\frac{\alpha}{\sqrt{2}}}_{j,s} \ket{\frac{-\alpha}{\sqrt{2}}}_{\ast,e} \ket{\rm vac}_{\ast,b} \label{eq:3trainb} \\
&\xrightarrow{{\rm BS}_4} \ket{\rm vac}_{\ast,c}\ket{-\alpha}_{\ast,d},
\label{eq:3trainc}
\end{align}
\label{eq:3train}
\end{subequations}
where $\ast$ indicates the superposition of two or more consecutive pulses. Eq. (\ref{eq:3traina}) shows that three sequential pulses are involved in the interference. A phase of $e^{i\pi/2}$ (introduced via a phase shifter) is assumed in the arm $s$ prior to the action of ${\rm BS_{4}}$. As follows from Eq. (\ref{eq:3train}), in the absence of the obstacle $O$, a detection may happen only at detector $D_1$ and not at $D_2$ or $D_3$.
It follows that a detection at detector $D_2$ or $D_3$ indicates the presence of $O$ in path $l$. This corresponds to a counterfactual measurement of the obstacle. Similarly, an obstacle placed in arm $m$ can also be measured counterfactually. The counterfactual detection is interaction-free with probability $\approx 1-|\alpha|^2 \approx 0.9$, letting $|\alpha|^2=0.1$. As in the previous case, one may observe ideal IFM-by-proxy by employing the state Eq. (\ref{eq:state}) rather than a train of coherent pulses.
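The unitarity of Eq. (\ref{eq:3way}) and the amplitude bookkeeping of Eq. (\ref{eq:3train}) can be checked numerically; a minimal sketch, using the same beam-splitter convention as before, with the amplitudes after BS$_1$ and BS$_2$ taken from Eq. (\ref{eq:3traina}):
\begin{verbatim}
import numpy as np

def bs(a, b):
    # 50:50 beam-splitter on coherent amplitudes, as in the text.
    return (a + 1j * b) / np.sqrt(2), (1j * a + b) / np.sqrt(2)

# Unitarity check of the tritter matrix.
U3 = np.array([[1/np.sqrt(2),  0.5j,          -0.5 ],
               [1j/np.sqrt(2), 0.5,            0.5j],
               [0,             1j/np.sqrt(2),  1/np.sqrt(2)]])
print(np.allclose(U3.conj().T @ U3, np.eye(3)))   # True

# Coherent amplitudes of three consecutive pulses through BS3 and BS4.
alpha = np.sqrt(0.1)
s, m, l = alpha/np.sqrt(2), -alpha/2, 1j*alpha/2  # after BS1 and BS2
e, b = bs(m, l)                                   # BS3
c, d = bs(1j * s, e)                              # BS4, pi/2 phase on arm s
print(abs(c)**2, abs(d)**2)  # 0.0, 0.1: all the light at one detector
\end{verbatim}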
\begin{figure}[h]
\includegraphics[scale=0.35]{Figure2.pdf}
\caption{Schematic of an interferometric setup to observe IFM-by-proxy with three consecutive pulses. Without the obstacle $O$, the detection event happens only at detector $D_1$ alone. In the presence of the obstacle in either arm $l$ or $m$, a detection at detectors $D_2$ or $D_3$ is possible, which constitutes IFM-by-proxy.}
\label{fig:ifm3}
\end{figure}
A potential application of this kind of setup is checking for misalignment or defects in a quantum circuit. For example, suppose we require a quantum circuit that builds on the one given in Fig. \ref{fig:ifm3}. Then, before placing the necessary optical elements (such as gates, polarizers, half/full wave plates, etc.), one can check for the above interference patterns, such as a detection only at $D_1$ in the absence of any path defects in arm $l$ or $m$. One can run defect diagnostics on the circuit by introducing obstacles or certain phase fluctuations and monitoring the detector clicks.
\section{Discussion and Conclusion \label{sec:conc}}
Interaction-free measurement (IFM) is a counterintuitive feature of quantum interference in which the blocking action by an object is indicated by a measurement outcome elsewhere. Here we have proposed a twist to this situation in which the partial waves in the two interferometric arms belong to two distinct but indistinguishable photons. The IFM in this case is such that the detection pattern pertaining to one photon is able to indicate the presence of an obstacle in the path of another, indistinguishable photon: a case of IFM mediated by a proxy. As the IFM would not be possible if the interfering particles were distinguishable, this effect corresponds to what may be called \textit{counterfactual indistinguishability}.
Although this interference effect involves two photons, it does not constitute a two-photon interference, such as occurs in the Hong-Ou-Mandel effect. Instead, paradoxically, it constitutes a single-photon interference effect in that it can be fully described employing only the first-quantization formalism. The aspect of indistinguishability highlighted here is not the exchange symmetry between two or more identical particles, but rather a kind of photonic non-individuality of the particles belonging to a given mode. Finally, we point out that counterfactual indistinguishability is not limited to the case of re-interference of only two pulses, but can be extended to multiple pulses as well.
The experimental setup of Fig. \ref{fig:ifm2} for realizing counterfactual quantum indistinguishability is well within the scope of current technology. In particular, it can be built employing the same setup that implements differential-phase-shift (DPS) cryptography, barring the retractable obstacle. We outline a few details of a potential experiment to realize it.
It would incorporate a fiber-coupled laser as the source of coherent light, and a one-bit delay-line MZ interferometer followed by two single-photon detectors. The path-length difference in the delay-line MZ interferometer introduces a one-bit delay corresponding to the value of $\delta$. The choice of the source wavelength is vital to the design of the entire optical path.
The source could be a continuous-wave (cw) laser diode equipped with an external cavity, operating at 810 nm. This is converted into a pulsed light source by means of a high-speed amplitude modulator placed just after the cw light source \cite{zhang2009megabits}. One may as well employ the 1550 nm telecom wavelength, mainly for the availability of InGaAs detectors. However, Si APDs (suited to the shorter wavelength) may be preferable thanks to their better performance, specifically a higher efficiency of $70\% $ and a lower dead time of 50 ns \cite{takesue200610}. Further, the light source should be strongly attenuated, using fixed and variable attenuators, to the required mean photon number of $|\alpha|^2 \approx 0.1$, which translates to roughly a single detection over 10 consecutive pulses. This degree of attenuation ensures that single-photon events are much more probable than multi-photon events. The light source is then linked to the MZ interferometer through a single-mode fiber about 2 m long.
In the case of the pulsed laser source, the interferometer introduces a delay equal to the interval between neighbouring pulses. The stability of the MZ interferometer is a critical element in differential phase detection. The path delay depends upon the characteristics of the light source, such as its frequency, and on the optical path. It should be tuned to achieve good spatial mode matching, such that most of the time there is at most a single detection event, at one of the detectors or at the obstacle.
In the context of Fig. \ref{fig:ifm2}, the detection occurs at $D_2$ instead of $D_1$ (in the absence of $O$), if the phase difference between two consecutive pulses, $\phi = \pi$. This fact forms the basis of DPS QKD \cite{inoue2003differential}, which was an inspiration for the present work. However, DPS QKD does not involve counterfactual measurements, and a key bit is generated conditioned on a detection at $D_1$ or $D_2$. The security of DPS QKD is based on the fact that by choosing very small $|\alpha|^2$, the two possible encoding states can be made sufficiently non-orthogonal, as $\bra{\alpha}\ket{-\alpha} = e^{-2|\alpha|^2}$ \cite{waks2006security, moroder2012security}. By contrast, here we require small $|\alpha|^2$ to ensure that the counterfactual effect reduces to IFM proper.
It is known that multiphoton, linear interference can be the basis of a powerful, albeit non-universal, model of quantum computing, since it can be used to sample a probability distribution that is known to be hard to calculate. In particular, the distribution requires calculating the permanent of a matrix based on the unitary that describes a multiport interferometer used for the interference, a problem known to be \#P-hard \cite{aaronson2011computational}. By contrast, the single-photon interference that leads to IFM-by-proxy is not expected to give a greater than quadratic speedup, as in Grover search \cite{bennett1997strengths}, since it can be described by first-quantization principles.
\acknowledgements
V.N.R. and R.S. acknowledge the support from Interdisciplinary Cyber Physical Systems (ICPS) program of the Department
of Science and Technology (DST), India, Grant No.
DST/ICPS/QuST/Theme-1/2019/14. V.N.R. acknowledges the support and encouragement
from Admar Mutt Education Foundation.
\section{Introduction}
The so-called ``dark ages'' of the universe started $\approx 380000$ years after the Big Bang, as matter cooled down and space became filled with neutral hydrogen. This phase of the universe lasted until about one billion years after the Big Bang, when the complex process of reionization of the intergalactic medium (IGM) was completed (\cite{Loeb10}). It is currently believed that most of the reionization was caused by the ultraviolet radiation from massive stars formed in the first generations of galaxies. However, it is uncertain what fraction of ionizing ultraviolet photons could escape from primitive galaxies to produce and maintain the ionization far from galaxies, in low-density regions of the IGM. Recent observations with the Hubble Space Telescope suggest that the rest-frame ultraviolet radiation from the most distant galaxies detected so far, at the heart of the dark ages, is not enough to heat and ionize the IGM over large volumes of space. To solve this apparent ``photon-starved problem'' (\cite{Bouwens10}), it has been suggested by Lehnert et al. (2010) that a fainter population of galaxies below the present detection limit could contribute significantly to the reionization.
X-rays from accreting black holes have a longer mean free path than the ultraviolet photons from massive stars. In this context, Madau et al. (2004) and Ricotti \& Ostriker (2004) have suggested that a smaller fraction of ionizing photons was provided by primordial black holes of intermediate mass (``miniquasars'', at z $> 10$) accreting via the Bondi-Hoyle mechanism from the surrounding gas. However, feedback from Bondi-Hoyle accretion onto solitary black holes significantly suppresses both any further inflow (\cite{Alvarez09}; \cite{Milo09}) and the consequent injection of radiation and high-energy particles into the surrounding medium.
We propose that black-hole high-mass X-ray binaries (BH-HMXBs) at z $\geq$ 6, namely, the fossils of massive stars, are an important and so far overlooked agent in the complex process of the reionization of the universe. In the context of the current models on the formation (\cite{Krumholz09}; \cite{Turk09}; \cite{Stacy10}) and collapse (\cite{Heger03}; \cite{Meynet05}; \cite{Georgy09}; \cite{Linden10}) of primordial stars, an attractive and realistic alternative to the hypothesis of quasi-radial, Bondi-like accretion onto solitary black holes of intermediate mass (``miniquasars'') is accretion onto stellar black holes from high-mass stars in binary systems, namely, ``microquasars'' (\cite{Mirabel99}). As shown below, the formation rate of BH-HMXBs must have been very large in the young Universe, playing an important role in the thermal history of the IGM and a complementary role to that of their progenitor stars in the reionization of the IGM over large volumes of space. It is this scenario that we investigate here.
\section{Cosmic evolution of BH-HMXBs: A prediction from current theoretical models }
Recent hydrodynamic simulations of the formation of the first
generations of stars show that a substantial fraction of stars in
primordial galaxies form as binaries with typical masses of tens of
solar masses (\cite{Krumholz09}; \cite{Turk09}; \cite{Stacy10}).
Models of single stars with very low metal content and initial masses of a few tens
of solar masses show that they collapse directly with no energetic natal kicks, and
end as black holes (\cite{Heger03}; \cite{Meynet05};
\cite{Georgy09}).
On the other hand, a recent model of the binary evolution of massive stars by Linden et al. (2010) shows that the number of HMXBs and ultraluminous X-ray sources (ULXs), their time evolution, and their orbital period distribution are strongly metallicity dependent. Linden et al. (2010) find that ULXs formed in a typical starburst of 10$^6$ $M_{\sun}$ with Z = 0.02 Z$_{\sun}$ outnumber ULXs formed with Z = Z$_{\sun}$ by a factor of 5, and after 10 Myr by almost three orders of magnitude. Besides, at Z = 0.02 Z$_{\sun}$, more than $95\%$ of the compact objects in the ULX population are black holes formed by direct collapse, which therefore remain gravitationally bound to a companion donor star after black hole formation. Most of the orbital periods at Z = 0.02 Z$_{\sun}$ are less than 3 days, and accretion proceeds by Roche-lobe overflow, which creates very luminous and persistent BH-HMXBs. This trend probably continues for starbursts with the metallicities (Z $\leq 0.02 Z_{\sun}$) of the reionization era.
These models imply that the majority of the first generations of high-mass stellar binaries remain gravitationally bound after the formation of black holes. Massive stellar binaries can thus become BH-HMXB microquasars, which are sources of UV photons, X-rays, massive winds, and relativistic jets (\cite{Mirabel99}). Therefore, in the context of the models of massive stellar evolution and the cosmic evolution of metallicity, it is expected that \textit{1) the ratio of black holes to neutron stars and 2) the ratio of black hole binaries to solitary black holes should increase with redshift. That is, the rate of formation of bright BH-HMXBs was likely much larger in the early Universe than at present.}
\section{Formation rate of stellar black holes as a function of metallicity: Observations }
The cosmic evolution of BH-HMXBs inferred from theoretical models is consistent with the following
observational studies of stellar black holes and neutron stars in the near and distant universe:
\begin{enumerate}
\item The mass of black holes in HMXBs seems to be a decreasing function of the host galaxy metallicity (\cite{Crowther10} and references therein). The black holes in the binaries M 33 X-7, NGC 300 X-1, and IC10 X-1 are in low-metallicity galaxies and have dynamically determined masses in the range of 16 to 30 solar masses, higher than the mass of any known stellar compact source in the Milky Way and Andromeda galaxies, which have higher metallicities. However, while the model by Linden et al. (2010) supports the formation of HMXBs and ULXs in low-metallicity environments, they conclude that it is difficult to create very massive black holes through common envelope phases, since these tend to strip a high fraction of the primary envelope. Given the low number statistics of the known dynamical masses of black holes in HMXBs, it is possible that the relatively high masses of 16 to 30 solar masses reflect a selection of the brightest sources, namely, those at the tip of the iceberg.
\item It is believed that the majority of ULXs found in external galaxies are HMXBs that contain black holes accreting at super-Eddington rates (\cite{Gladstone09}). In fact, the occurrence rate per unit galaxy mass of ULXs observed in nearby galaxies is a decreasing function of the mass (hence of the metallicity) of the host galaxy (\cite{Zamperi09}).
\item The space kinematics of Galactic X-ray binaries that contain black holes with more than ten solar masses provides
evidence of black hole formation by implosion, with no large kicks due to energetic supernovae (\cite{Mirabel03}; \cite{Mirabel10}).
\item Observations now support the notion that massive stars with high metal content may end as neutron stars instead of black holes (\cite{Meynet05}; \cite{Georgy09}; \cite{Linden10}).
Recently formed neutron stars observed as soft gamma ray repeaters and
anomalous X-ray pulsars are found in young clusters of large metal content that contain stars with masses of 40-50
solar masses (\cite{Figer05}; \cite{Muno06}).
\item It is believed that the majority of gamma-ray bursts of long duration (LGRBs) mark the formation of black holes by the collapse of massive stars. Although a fraction of dark LGRBs may require local extinction columns of $A_V > 1$ mag, the majority of their hosts are faint, irregular galaxies with globally limited chemical evolution (\cite{LeFloch03}; \cite{Fruchter06}; \cite{Han10}; \cite{Levesque10}). The properties of GRB 090423 at z = 8.1 are similar to those of GRBs observed at low and intermediate redshifts (\cite{Salvaterra09}), suggesting that the mechanisms and progenitors that gave rise to this burst are not too different from those producing GRBs with identified hosts.
\item There is increasing evidence for an enhanced LGRB rate at z $> 3$
(\cite{Daigne06}; \cite{Kistler08}; \cite{Wanderman10}; \cite{Qin10}), as expected from the increase in the specific star
formation rate (SFR) with decreasing metallicity (\cite{Mannucci10}).
\end{enumerate}
\section{Ionizing power of a stellar black hole in an HMXB relative to its progenitor star}
In the following we compute the number of ionizing soft X-ray and UV photons from the accretion disk of a BH-HMXB, and then compare its ionizing power with that of its progenitor massive star. To this end we assume that a black hole of mass M$_{\rm{BH}}$ is accreting at a fraction f$_{\rm{Edd}}$ of its Eddington luminosity for a time t$_{\rm{acc}}$. The ratio of the total number of ionizing photons emitted by the accreting black hole to that emitted by the progenitor star is then given by
\begin{equation}
\begin{array}{l}
\frac{\mathrm{N_{\gamma,BH}}}{\mathrm{N_{\gamma,*}}} =
0.6
\left(\frac{\mathrm{M_{BH}}}{\mathrm{M_*}}\right)
\left(\frac{\mathrm{f_{Edd}}}{\mathrm{0.1}}\right)
\left(\frac{\mathrm{t_{acc}}}{\mathrm{20 Myr}}\right)
\left(\frac{\mathrm{f_{esc,BH}}}{\mathrm{1.0}}\right)
\left(\frac{\mathrm{N_{phot}}}{\mathrm{64000}}\right)^{-1}\\
\hspace{4.7cm}
\left(\frac{\mathrm{\langle E_{\gamma}}\rangle}{\mathrm{keV} }\right)^{-1}
\left(\frac{\mathrm{f_{esc,*}}}{\mathrm{0.1}}\right)^{-1}
\,,
\end{array}
\end{equation}
\noindent where $N_{phot}$ denotes the number of ionizing photons emitted per hydrogen nucleus involved in star formation, $\langle E_{\gamma} \rangle$ denotes the mean photon energy of the radiation emitted by the accreting black hole, and f$_{\rm{esc,*}}$ (f$_{\rm{esc,BH}}$) denotes the fraction of ionizing photons emitted by the star (accreting black hole) that escape from the galaxy and contribute to the heating and reionization of the IGM.
We have substituted reasonable numbers for each of the parameters: N$_{\rm{phot}}$=64000 corresponds to metal-free star formation with a top-heavy initial mass function (IMF). For a normal stellar population this number would be 16 times lower (\cite{Schaerer03}). The escape fraction of ionizing photons emitted by stars was taken to be f$_{\rm{esc,*}}=0.1$, which is consistent with the mean observed escape fraction of ionizing photons from star-forming galaxies (\cite{Shapley06}) at z=2-3.
BH-HMXBs are expected to inject photons into the IGM at a rate close to Eddington (and possibly super-Eddington), and our choice f$_{\rm{Edd}}=0.1$ is likely to be conservative (see footnote \footnotemark[1]). The accretion lasts t$_{\rm{acc}} \approx 20$ Myr, the mean lifetime of donor stars of $M_*=10-30 M_{\sun}$ (\cite{Turk09}; \cite{Stacy10}). The mean photon energy of the radiation emitted by the accreting source depends on the assumed spectral shape, but $\langle E_{\gamma}\rangle=1$ keV is within a factor of a few of the values derived for both the thermal and power-law components of black hole spectra. Finally, f$_{\rm{esc,BH}}=1.0$ is not well constrained. We took f$_{\rm{esc,BH}}$ to be larger than f$_{\rm{esc,*}}$ simply because energetic photons can propagate much more easily through HI column densities in the range $10^{17}-10^{20} \rm{cm}^{-2}$. For reasonable choices of each of these model parameters, we find that \textit{an accreting black hole in a high-mass binary emits a total number of ionizing photons that is comparable to that emitted by its progenitor star}.
However, it should be kept in mind that the ionizing photons emitted by the accreting black hole are more energetic than those from the progenitor star and are capable of ionizing more than one hydrogen atom. In a fully neutral medium, the number of secondary ionizations is $\rm{N}_{sec} = 25(E_{\gamma}/1 \rm{keV})$, where $E_{\gamma}$ is the photon energy (\cite{Shull85}). Therefore, the ionizing power of the resulting black hole could be greater than that of its progenitor.
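A back-of-the-envelope evaluation of Eq. (1), combined with the secondary-ionization factor above, illustrates this numerically. The sketch below simply re-evaluates Eq. (1) at its fiducial parameter values; it is not a substitute for the full calculation:
\begin{verbatim}
def photon_ratio(M_BH_over_Mstar=1.0, f_Edd=0.1, t_acc_Myr=20.0,
                 f_esc_BH=1.0, N_phot=64000.0, E_gamma_keV=1.0,
                 f_esc_star=0.1):
    """Eq. (1): ionizing photons from the accreting black hole
    relative to those from its progenitor star."""
    return (0.6 * M_BH_over_Mstar * (f_Edd / 0.1) * (t_acc_Myr / 20.0)
            * (f_esc_BH / 1.0) / (N_phot / 64000.0)
            / (E_gamma_keV / 1.0) / (f_esc_star / 0.1))

r = photon_ratio()
print(r)         # 0.6: comparable photon budgets
# Each ~1 keV photon causes N_sec = 25 secondary ionizations in a
# fully neutral medium, so the effective ionizing power is boosted:
print(r * 25.0)  # ~15 ionizations relative to the stellar photon budget
\end{verbatim}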
\section{BH-HMXBs and massive star formation rates in the epoch of reionization}
The progenitor star of the black hole should have formed in a molecular cloud, which in turn may have formed more stars, each of which contributed to the total number of ionizing photons emitted by stars. Therefore, in the following we look at the emission of ionizing radiation from a star-forming region as a whole.
Observations of galaxies in the local universe show that their X-ray luminosity in the energy range 2-10 keV correlates strongly with the rate at which they are forming stars (\cite{Grim03}). This local correlation states that the X-ray luminosity (in erg s$^{-1}$) scales with the SFR (in M$_{\sun}$ yr$^{-1}$) as L$_{2-10} = 7 \times 10^{39}$ SFR erg s$^{-1}$. When modeling the impact of X-ray emission from galaxies at very high redshift, the following more general correlation is used (\cite{Furlanetto06a}):
\begin{equation}
L_{2-10} = \rm{f_X} \times 3.5 \times 10^{40} \times \rm{SFR ~~erg~ s^{-1}}.
\end{equation}
Here, the parameter f$_X$ accounts for the likely case that the normalization of the observed correlation depends on redshift. Observations indicate that f$_X = 0.2$ for local galaxies. This important parameter f$_X$ depends on several physical processes (all of which are expected to change with redshift). To illustrate this quantitatively, we express the X-ray luminosity of a star-forming galaxy in more fundamental quantities as
\begin{equation}
\begin{array}{l}
L_{2-10}= \\
\rm{f_{2-10}} \times \frac{dM_{BH}}{dt} \times t_{acc} \times f_{bin} \times f_{Edd} \times 1.5 \times 10^{38}~ \rm{erg~s^{-1} M_{\sun}^{-1}}
\end{array}
\end{equation}
or
\begin{equation}
\begin{array}{l}
L_{2-10}= \rm{f_{2-10}} \times f_{BH} \times {SFR} \times t_{acc} \\
\hspace{2.5cm}
\times \rm{f_{bin}} \times f_{Edd} \times 1.5 \times 10^{38}~ \rm{erg~s^{-1} M_{\sun}^{-1}}
\end{array}
\end{equation}
Most of the parameters in this equation were introduced earlier. The fraction f$_{2-10}$ denotes the fraction of the total luminosity that emerges in the 2-10 keV band. For the power-law component of the spectra we expect f$_{2-10} = 0.1 - 0.5$ (equal flux per logarithmic bin of energy, or steeper), and we conservatively adopt f$_{2-10} = 0.1$. We also introduced the parameter f$_{BH}$, which relates the black hole formation rate dM$_{BH}$/dt to the SFR. This fraction can be computed for a given IMF under the assumption that every star above some critical mass M$_{*,\rm{crit}}$ ends up as a black hole. As argued previously, owing to mass lost through metallicity-dependent stellar winds, even massive stars may end up as neutron stars instead of directly as black holes. Moreover, in the evolution of a close massive binary of low metallicity, the primary could lose mass through mass transfer and the common envelope phase and end its life as a neutron star rather than a black hole, leading to a suppression of black hole formation. Linden et al. (private communication) discussed the importance and frequency of such a scenario, finding that the transition from neutron-star-dominated to black-hole-dominated HMXBs is very sharp and metallicity dependent, and that for $Z \leq 0.02 Z_{\sun}$ all primary stars with $M \geq 20 M_{\sun}$ in high-mass binaries end as black holes in HMXBs. We therefore assume here that mass transfer in close binaries has little impact on the mass range of stars becoming black holes. For the conservative choices of a Salpeter IMF in the range M$_{\rm{low}} = 0.1 M_{\sun}$, M$_{\rm{up}} = 100 M_{\sun}$, and M$_{*,\rm{crit}} = 25 M_{\sun}$, we find f$_{BH} = 0.03$.
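The value f$_{BH} = 0.03$ can be reproduced with a short numerical integration of the Salpeter IMF. Note that the remnant-to-progenitor mass ratio of one half used below is our illustrative assumption and is not a number taken from the text:
\begin{verbatim}
from scipy.integrate import quad

# Salpeter IMF: dN/dM ~ M^-2.35, so the stellar mass is distributed
# as M * dN/dM ~ M^-1.35.
imf_mass = lambda M: M**-1.35

M_low, M_up, M_crit = 0.1, 100.0, 25.0   # Msun, as in the text

total, _ = quad(imf_mass, M_low, M_up)
bh_progenitors, _ = quad(imf_mass, M_crit, M_up)

frac = bh_progenitors / total   # ~0.06 of all mass in M > 25 Msun stars
f_remnant = 0.5                 # assumed BH mass / progenitor mass
print(f"f_BH ~ {frac * f_remnant:.3f}")   # ~0.03, as quoted above
\end{verbatim}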
Finally, we expect black holes in binaries to be very efficient accretors, while isolated black holes are not. The parameter f$_{bin}$ denotes the mass fraction of accreting black holes in binaries. The total mass fraction of massive stars in binaries is close to unity, and for simplicity we conservatively assume that this fraction is $50\%$. Since BH-HMXBs are persistent sources of radiation and jets, for our model we assume f$_{bin} = 0.5$.
Both Eqs. (2) and (4) depend linearly on SFR, and we can express the parameter f$_X$ as a combination of physical parameters:
\begin{equation}
f_{X} = \frac {\rm{f_{2-10}} \times f_{BH} \times t_{acc} \times f_{bin} \times f_{Edd} \times 1.5 \times 10^{38}} {3.5 \times 10^{40}}
\end{equation}
or
\begin{equation}
f_{X} =
4.0
\left(\frac{\mathrm{f_{2-10}}}{\mathrm{0.1}}\right)
\left(\frac{\mathrm{f_{BH}}}{\mathrm{0.01}}\right)
\left(\frac{\mathrm{f_{Edd}}}{\mathrm{0.1}}\right)
\left(\frac{\mathrm{f_{bin}}}{\mathrm{0.5}}\right)
\left(\frac{\mathrm{t_{acc}}}{\mathrm{20 ~Myr}}\right)
\end{equation}
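The prefactor of Eq. (6) follows from Eq. (5) once t$_{\rm{acc}}$ is expressed in years; this unit convention is inferred from the requirement that SFR $\times$ t$_{\rm{acc}}$ $\times$ f$_{BH}$ yield the mass in actively accreting black holes. A short check (our sketch):
\begin{verbatim}
def f_X(f_2_10=0.1, f_BH=0.01, f_Edd=0.1, f_bin=0.5, t_acc_yr=2e7):
    """Eq. (5); 1.5e38 erg/s/Msun is the adopted Eddington
    luminosity per solar mass and 3.5e40 the normalization of
    the correlation in Eq. (2)."""
    return f_2_10 * f_BH * t_acc_yr * f_bin * f_Edd * 1.5e38 / 3.5e40

print(f_X())         # ~4.3: the prefactor of Eq. (6), up to rounding
print(0.2 * 3.5e40)  # 7e39: f_X = 0.2 recovers the local correlation
\end{verbatim}
With f$_X = 0.2$ one indeed recovers the locally observed normalization L$_{2-10} = 7 \times 10^{39}$ SFR erg s$^{-1}$ quoted above.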
Linden et al. (2010) estimate that at ULX luminosities the number of Z = 0.02 Z$_{\sun}$ sources should outnumber the Z = Z$_{\sun}$ HMXBs by a factor of 5. Therefore, in our fiducial conservative model for primordial starbursts, f$_X$ is at least one order of magnitude higher than the locally observed value (\cite{Grim03}). There are several reasons why we expect f$_X$ to be much higher in the young Universe: (1) microquasars can have harder spectra, and f$_{2-10}$ could in some cases be as high as f$_{2-10} = 0.5$; (2) f$_{BH}$ is higher in low-metallicity environments, i.e., the formation of black holes in the metal-enriched environments of galaxies in the local universe is likely to be strongly suppressed compared to more pristine environments; and (3) stars in the young Universe were very likely formed with an IMF that was more top-heavy.
This evolution in the IMF alone could boost f$_{BH}$ to values much higher than the fiducial value assumed in Eq. (6). Finally, the IGM at z = 10 would essentially be transparent to hard X-ray photons ($> 1$ keV), and following Dijkstra, Haiman \& Loeb (2004) we explicitly verified that these models are consistent with the observed unresolved soft X-ray background.
\footnotetext[1]{To estimate the ionizing power of BH-HMXBs in the early universe we use the observed spectra of the Galactic black hole binary Cygnus X-1, which has a blue supergiant companion, as a template for moderate accretion rates. In this system, accretion is persistent and fed mainly by the donor's stellar wind. Its X-ray spectrum is characterized by two components: a UV-soft X-ray bump due to thermal emission from the accretion disk with a typical temperature of $\approx 7$ eV, and a non-thermal power-law component of hard X-rays that results from Compton up-scattering of thermal accretion disk and synchrotron photons by a hot coronal plasma and/or jet. In addition to X-rays, these sources can indeed produce jets and winds of accelerated particles, which in turn may heat and ionize the surrounding medium. \\
On the other hand, for higher accretion rates one can use the ULXs observed in the local universe as templates of BH-HMXBs in the early universe. ULXs often have spectra that resemble the very high (super-soft) state observed in some Galactic black hole binaries (e.g. GRS 1915+105). The majority of ULXs exhibit a complex spectral curvature, which can be modeled by a cool disk component together with a power law that breaks above 3 keV, probably due to a cool, optically thick corona produced by super-Eddington accretion flows (\cite{Gladstone09}). Examples of steady super-Eddington sources are SS 433 in the Milky Way, which is blowing the nebula W50 laterally, and the microquasar that is inflating the nebula S26 in the galaxy NGC 7793 (\cite{Pakull10}). SS 433 injects more than $10^{39}$ erg s$^{-1}$ into the interstellar medium, and the microquasar in S26 more than $10^{40}$ erg s$^{-1}$. The overall energy injected by these microquasars during their whole lifetime can be more than $10^{54}$ erg, which is orders of magnitude more than the photonic and baryonic energy from a typical core-collapse supernova.}
\section{Stellar black holes and the thermal history of the IGM}
It is an open question how significant a boosted X-ray emissivity of star-forming galaxies in the high-redshift Universe may be for the global ionized fraction (i.e. averaged over the entire volume of the observable Universe). However, it has been shown (\cite{Furlanetto06a}) that soft X-rays (E$_{\gamma} < 2$ keV, \cite{Pritchard07}) and inverse-Compton scattering from relativistic electrons could have profound implications for the amount of heating of the low-density neutral IGM. BH-HMXBs are powerful sources of soft X-rays and relativistic jets, and their heating could in turn affect the overall reionization process indirectly in ways that will need to be investigated further. In Fig.~\ref{tplot} we show that f$_X$ (as defined in Eqs. (5) and (6)) is the parameter that determines the thermal history of the IGM. Increasing f$_X$ causes the neutral IGM to be heated earlier. As shown in the thermal evolution for f$_X = 10.0$, when the gas temperature approaches $10^4$ K, cooling through collisional excitation of atomic hydrogen becomes efficient, and further heating is not possible in practice. As discussed in section 8, the formation of low-mass galaxies in the neutral IGM at z=10-20 is suppressed when the gas temperature is as high as $10^4$ K, namely, for f$_X > 5$. For further details on this particular model we refer the reader to Pritchard \& Furlanetto (2007).
\begin{figure}
\centering
\includegraphics[width=8.cm]{tplot.pdf}
\caption{Thermal history of the low-density neutral intergalactic medium (IGM) due to heating by accreting stellar black holes in high-mass X-ray binaries (BH-HMXBs). This figure shows the gas temperature of the IGM as a function of redshift z for three possible values of f$_X$ as defined in Eq. (6): f$_X = 0.1$ [blue dashed line], f$_X = 1.0$ [red dotted line], and f$_X = 10.0$ [black solid line]. As discussed in the text, most likely f$_X >1$.
}
\label{tplot}
\end{figure}
\section{The 21cm line of HI during reionization}
The precise temperature evolution of the neutral IGM is known to strongly affect the global 21 cm signature expected from neutral HI during the epoch of reionization (\cite{Furlanetto06b}). In Fig.~\ref{tplot_nu} we plot the evolution of the brightness temperature (averaged over the entire sky) as a function of redshift, following the same prescriptions as in Pritchard \& Loeb (2010), again for the same three models with different values of f$_X$. The gas temperature couples to the excitation temperature through collisions and through scattering of Lyman $\alpha$ photons. When this excitation temperature, also known as the spin temperature, is higher (lower) than the temperature of the cosmic microwave background (CMB), it is possible to observe hydrogen atoms in emission (absorption) against the CMB. The difference between spin and CMB temperature is referred to as the ``brightness'' temperature (\cite{Furlanetto06b}). Clearly, when the gas is heated earlier, the spin temperature can increase earlier, and hydrogen can be seen in emission earlier in the evolution of the Universe. Increasing f$_X$ also reduces the interval in redshift (and therefore in frequency) over which hydrogen can be seen in absorption against the CMB (corresponding to negative brightness temperature). One of the present challenges in observational astronomy is to directly observe this 21 cm signal from neutral hydrogen in the young Universe (\cite{Morales10}). This can be accomplished with single radio dipole experiments, such as EDGES (\cite{Bowman08}), which are potentially capable of detecting the global 21 cm signal (\cite{Pritchard10}) at redshifts z $< 30$, directly measuring early heating of the IGM.
\begin{figure}
\centering
\includegraphics[width=8.cm]{tbplot_nu.pdf}
\caption{Brightness temperature of the hyperfine transition of the ground state of atomic hydrogen (the wavelength of this transition is 21 cm), averaged over the entire sky, as a function of redshift z for the same three different values of f$_X$ as in Fig.~\ref{tplot}.
}
\label{tplot_nu}
\end{figure}
A boosted X-ray emissivity of star-forming galaxies also affects the fluctuations in the 21 cm background (\cite{Pritchard07}; \cite{Pritchard08}). Detecting these fluctuations is one of the prime scientific drivers for the next generations of radio interferometers such as the MWA, LOFAR, and SKA. The fluctuations in the 21 cm background radiation contain the largest amount of cosmological information (\cite{Loeb04}) and therefore provide invaluable constraints on cosmological parameters (as well as fundamental physics). However, ``astrophysics'' introduces additional fluctuations, for example, through temperature fluctuations and through fluctuations in the Lyman $\alpha$ background (\cite{Pritchard07}; \cite{Pritchard08}). Both are sourced (on different scales) by galaxies that themselves provide biased tracers of the underlying density field. Fully exploiting the rich data set provided by the 21 cm fluctuations therefore requires a good understanding of the nature of the astrophysical sources illuminating, heating, and ionizing the hydrogen in our Universe. Interestingly, boosting f$_X$ heats the neutral IGM earlier, which implies that temperature fluctuations are suppressed during the later stages of reionization; this improves the prospects for extracting cosmological information from the 21 cm background (\cite{Pritchard07}; \cite{Pritchard08}).
\section{The role of stellar black holes in the formation of dwarf galaxies}
The cold dark matter model of the universe provides the framework for significant progress in understanding the large-scale properties and physical principles that governed the evolution of the universe during its first 400 thousand years. However, it is still poorly understood how the first stars and black holes in galaxies were formed, and how in less than a billion years these pristine objects reionized and reheated most of the matter in the universe over large volumes of space. The apparent disparity between the number of dwarf galaxies predicted by the cold dark matter model of the universe and the number of small galaxies observed so far in the halo of the Galaxy is a subject of topical interest in cosmology (\cite{Loeb10}). Power et al. (2009) had already pointed out the possible implications of X-ray binaries in primordial globular clusters for reionization and, hence, for galaxy formation at high redshifts.
It is believed that the first stars formed in gas clouds with a virial temperature of a few hundred K, owing to cooling by molecular hydrogen, H$_2$ (\cite{Loeb10}). But the UV radiation produced by these stars could have easily dissociated H$_2$, making atomic hydrogen (H I) cooling necessary for further star formation. The galaxies that reionized the IGM were therefore likely to have a virial temperature above the H I cooling threshold of $10^4$ K. X-ray and UV heating of the diffuse IGM by BH-HMXBs during reionization would have resulted in an additional increase in the total minimum galaxy mass M$_{min}$
\begin{equation}
\begin{array}{l}
M_{min} =
10^9
\left(\frac{\mathrm{\rho}}{\mathrm{100 \rho_C}}\right)^{-\frac{1}{2}}
\left(\frac{\mathrm{\mu}}{\mathrm{0.6}}\right)^{-\frac{3}{2}}\\
\hspace{4.5cm}
\left(\frac{\mathrm{T(K)}}{\mathrm{10^4}}\right)^{\frac{3}{2}}
\left(\frac{\mathrm{1+z}}{\mathrm{10}}\right)^{-\frac{3}{2}}M_{\sun},
\end{array}
\end{equation}
\noindent where $\rho_C$ is the critical mass density for a flat Universe, $\rho$ the mass density in the galaxy, $\mu$ the mean molecular weight, z the redshift, and T the temperature of the IGM.
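Evaluated numerically (a sketch; the parameter defaults are the normalizations of Eq. (7)):
\begin{verbatim}
def M_min(T=1e4, z=9.0, rho_over_100rhoC=1.0, mu=0.6):
    """Eq. (7): minimum mass (in Msun) of a galaxy still able to
    accrete gas from an IGM heated to temperature T at redshift z."""
    return (1e9 * rho_over_100rhoC**-0.5 * (mu / 0.6)**-1.5
            * (T / 1e4)**1.5 * ((1.0 + z) / 10.0)**-1.5)

print(f"{M_min():.1e} Msun")       # 1.0e+09 at z = 9 for T = 10^4 K
print(f"{M_min(T=1e2):.1e} Msun")  # 1.0e+06 for a cold (100 K) IGM
\end{verbatim}
The three-orders-of-magnitude contrast between a cold and an X-ray heated IGM is what suppresses gas accretion onto low-mass halos, as discussed next.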
Once the IGM was heated to a temperature of $10^4$ K, dark matter halos with masses below $10^9 M_{\sun}$ could no longer accrete IGM material because the temperature of the infalling gas increased by an extra order of magnitude as its density increased on its way into these galaxies. In that regime, only gaseous halos with virial temperatures above $10^5$ K could have accreted fresh IGM gas and converted it to stars. The census of dwarf galaxy satellites of the Milky Way requires a related suppression in the abundance of low-mass galaxies relative to low-mass dark matter halos (see \cite{Munoz09} and references therein). The thermal history of the IGM therefore has a direct impact on the properties of the faintest galaxies at high redshifts, as well as on the smallest dwarf galaxies in the local universe.
It is interesting to note that black holes of different mass scales play a role in galaxy formation. Feedback from supermassive black holes halts star formation, quenching the unlimited mass growth of massive galaxies (\cite{Cattaneo09}), and we show here that feedback from stellar black holes in HMXBs during the reionization epoch suppresses the number of low-mass dwarf galaxies. Therefore, BH-HMXBs in the early universe are an important ingredient in reconciling the apparent disparity between the observed number of dwarf galaxies in the Galactic halo and the number of low-mass galaxies predicted by the cold dark matter model of the universe.
\section{Conclusions}
The main conclusions of this work are the following:
\begin{enumerate}
\item The ratio of black holes to neutron stars and the ratio of black hole binaries to solitary black holes should increase with redshift; that is, the rate of
formation of BH-HMXBs was significantly higher in the early Universe than at present.
\item The feedback energy from one of these BH-HMXBs over its whole lifetime can be more than $10^{54}$ erg, orders of magnitude larger than the photonic and baryonic energy from a typical core-collapse supernova.
\item An accreting black hole in a high-mass binary emits a total number of ionizing photons that is comparable to that of its progenitor star, but one X-ray photon emitted by an accreting black hole may cause the ionization of several tens of hydrogen atoms in a fully neutral medium.
\item The most important effect of BH-HMXBs in the early universe is the heating of the IGM. Soft X-rays and inverse-Compton scattering from relativistic electrons produced by BH-HMXBs heat the low-density medium over large distances to temperatures of $\approx 10^4$ K, which limits the recombination rate of hydrogen and keeps the IGM ionized.
\item A temperature of the IGM of $\approx 10^4$ K limits the formation of faint galaxies at high redshifts. It constrains the total mass of dwarf galaxies to $\geq 10^9 M_{\sun}$.
\item BH-HMXBs in the early universe are important ingredients for reconciling the apparent disparity between the observed number of faint dwarf galaxies in the Galactic halo and the number of low-mass galaxies predicted by the cold dark matter model of the universe.
\item An additional effect of metallicity in the formation of BH-HMXBs (\cite{Mirabel10}) is to boost the formation
of BH-BH binaries as more likely sources of gravitational waves than NS-NS systems (\cite{Belczynski10}).
\end{enumerate}
\begin{acknowledgements}
I.F.M. thanks the referee G. Meynet, as well as A. King, P. Fabbiano, T. Piran, T. Linden, and V. Kalogera for useful information and kind comments. This work was supported in part by NSF grant AST-0907890 and NASA grants NNX08AL43G and NNA09DB30A.
\end{acknowledgements}
\section{List of Profiled Websites}\label{app:urls}
\noindent
\vspace{-5mm}
\begin{table}[h]
\centering
\caption{Websites profiled in this work. Entries 1-30 are taken from the top websites listed by Alexa~\cite{AlexaInternet2017}, while URLs 31-40 are a selection of whistleblowing portals.}
\label{table:websites}
\begin{tabular}{@{}ll@{}}
\toprule
\multicolumn{2}{c}{Website Number and URL} \\ \midrule
1) Netflix.com & 21) Office.com \\
2) Amazon.com & 22) Microsoftonline.com \\
3) Facebook.com & 23) Chase.com \\
4) Google.com & 24) Nytimes.com\\
5) Yahoo.com & 25) Blogspot.com\\
6) Youtube.com & 26) Paypal.com\\
7) Wikipedia.org& 27) Imdb.com \\
8) Reddit.com & 28) Wordpress.com\\
9) Twitter.com & 29) Espn.com\\
10) Ebay.com & 30) Wikia.com\\
11) Linkedin.com& 31) Wikileaks.org \\
12) Diply.com & 32) Aljazeera.com/investigations\\
13) Instagram.com& 33) Balkanleaks.eu \\
14) Live.com & 34) Unileaks.org\\
15) Bing.com & 35) Globaleaks.com\\
16) Imgur.com & 36) Liveleak.com\\
17) Ntd.tv & 37) Globalwitness.org\\
18) Cnn.com & 38) Wikispooks.com\\
19) Pinterest.com & 39) Officeleaks.com \\
20) Tumblr.com & 40) Publeaks.nl\\
\end{tabular}
\end{table}
\end{appendix}
\section{Background and Related Work}\label{sec:background}
This section provides background information and related work regarding Machine Learning techniques, hardware performance events, and website fingerprinting. Subsequently, we briefly compare our work to previous ones.
\subsection{Machine Learning Techniques}
Machine Learning provides powerful tools to automate the process of understanding and extracting relevant information from noisy observations. All of the techniques we use in this work are \emph{supervised}, meaning that known samples (training set) are used to derive a model that is subsequently employed to classify unknown samples (test set). The success rate of an ML technique in an experiment denotes the percentage of unknown samples that are classified correctly. To reliably determine the success rate, classification is performed multiple times with different training and test sets that are derived through statistical sampling. This is called cross-validation and is especially useful if the overall number of samples is low. A brief description of the four ML techniques we use in our experiments is given in the following paragraphs.
\paragraph{\textbf{k-th Nearest Neighbor (kNN).}} The main purpose of kNN is to find the training sample that is closest to a test sample. The Euclidean distance is used to determine how far training and test samples are apart. The smallest distance is taken as the first nearest neighbor and the test sample is marked with the corresponding label~\cite{weinberger2009distance}. As an example, Gong et al.~\cite{gong2010fingerprinting} showed that kNN could be applied to infer websites using remote traffic analysis.
\paragraph{\textbf{Decision Tree (DT).}} Decision Trees are used to classify samples by creating branches for the given data features. The general method to decide on the boundaries is to find the feature which gives the best split among the classes. The child branches are then created with other features. While choosing the values for each branch, the entropy is computed to optimize the values. Decision Trees are used by Demme et al.~\cite{DemmeEtAl2013} to detect malware in Intel and ARM processors with HPEs.
\paragraph{\textbf{Support Vector Machine (SVM).}} In SVM based learning, input data is converted to a multi-dimensional representation by using mapping functions. Hyperplanes are then created to classify the data. The classification strategy is to find the optimal decision boundaries between classes by increasing the distance between them~\cite{libsvm}. Gulmezoglu et al.~\cite{gulmezoglu2017cache} showed that SVM can be applied in a noisy environment to detect applications that are running in virtual machines on Amazon EC2 cloud.
\paragraph{\textbf{Convolutional Neural Network (CNN).}} Convolutional Neural Networks are one of the most popular Deep Learning techniques and have been proven successful in numerous applications. In contrast to the other ML techniques, CNNs automatically determine important features of the input data. This is achieved by creating nodes between higher and lower dimensional mappings of the input data. The meaningful features are then extracted by finding the optimal functions for each node. When the test data is fed into the CNN, the class with the highest probability is taken as the predicted label~\cite{goodfellow2016deep}. In 2016, Maghrebi et al.~\cite{maghrebi2016breaking} showed that Deep Learning techniques could be applied in side-channel attacks to recover secret information from cryptographic implementations.
\vspace{-2mm}
\subsection{Hardware Performance Events}
The microarchitectures of modern processors implement a large spectrum of performance enhancing features that speed up memory accesses and code execution. As a compromise, performance enhancements introduce input dependent runtimes and weak separation between executing applications. For critical software and mutually untrusted users, this raises severe security and privacy concerns that have been addressed in literature for more than two decades. Kocher ~\cite{kocher1996timing} first describes timing attacks on software implementations of cryptosystems and provides an early anticipation of memory hierarchies, branching units, and variable-time instructions being further exploited. Literature subsequently showed that data and instruction caches~\cite{TromerEtAl2010,AciicmezEtAl2010}, branch prediction units~\cite{AciicmezEtAl2006}, and arithmetic logic units~\cite{AciicmezSeifert2007} can indeed be targeted in attacks. All of them are evidence that the microarchitectural state of a processor contains crucial information about the processes that are executed on it. Hardware performance events are an interface to this state that is implemented on most modern processors. A dedicated piece of hardware, the performance monitoring unit (PMU), is responsible to keep track of microarchitectural events that occur while executing code on the processor. These events include, e.g., instruction retirements, branch mispredictions, and cache references. They provide a comprehensive picture of a processor's runtime behavior and are therefore interesting for adversaries and developers alike. In general, HPEs are useful for application profiling~\cite{AmmonsEtAl1997}, debugging~\cite{YilmazPorter2010,GreathouseEtAl2011}, and even load balancing~\cite{RaoXu2011}. However, the high level of details contained in HPEs also introduces security and privacy issues. Clock cycle events have been recognized as a vital timing source for a large class of cache-based attacks~\cite{ZhangEtAl2016,LippEtAl2016}. In particular, Uhsadel et al.~\cite{UhsadelEtAl2008} demonstrate that cache miss and clock cycle events can be used to mount attacks on software implementations of the Advanced Encryption Standard (AES). Bhattacharya and Mukhopadhyay~\cite{BhattacharyaMukhopadhyay2015} show that branch mispredictions during RSA decryptions reveal the secret exponent because of conditional branches in the multiplication routine during modular exponentiation. In contrast, HPEs have improved our understanding of attacks~\cite{TiriEtAl2007,AticiEtAl2013}, facilitated the evaluation of software components~\cite{ZanklEtAl2016}, and helped to analyze malware samples~\cite{WillemsEtAl2012}. They have also been leveraged to reverse-engineer cache internals on modern processors~\cite{MauriceEtAl2015} and to construct random number generators~\cite{SuciuEtAl2011,MartonEtAl2012}. A large class of previous work is dedicated to the real time detection of attacks and malware infections, a selection of which relies on Machine Learning and related techniques. In particular, naive Bayes~\cite{SinghEtAl2017}, probabilistic Markov models~\cite{KazdagliEtAl2016}, k-Nearest Neighbors~\cite{DemmeEtAl2013}, Decision Trees~\cite{DemmeEtAl2013,KazdagliEtAl2016,SinghEtAl2017}, Random Forests~\cite{DemmeEtAl2013}, Support Vector Machines~\cite{BahadorEtAl2014,TangEtAl2014}, and (Artificial) Neural Networks~\cite{ChiappettaEtAl2016,DemmeEtAl2013,SinghEtAl2017} are studied.
\vspace{-1mm}
\subsection{Website Fingerprinting}
The protection of the browser history is important to ensure the privacy of web users. Yet, the literature offers a large spectrum of history-stealing attacks that allow recovering entries of previously visited websites. Most of them can be launched by malicious web servers and rely on caching~\cite{FeltenSchneider2000} and rendering~\cite{LiangEtAl2014} of website elements, visited URL styles~\cite{JacksonEtAl2006}, and user interactions~\cite{WeinbergEtAl2011}. In addition, attacks have also been demonstrated on the client side in the form of malicious browser extensions~\cite{TerLouwEtAl2008}. If no browsing history is stored, e.g. in private browsing modes, it is still possible to detect websites a user is actively visiting. This is investigated in the field of website fingerprinting, to which we contribute with this work. The following paragraphs discuss different attack vectors for website fingerprinting.
\vspace{-2mm}
\paragraph{\textbf{Network based.}} A significant fraction of website fingerprinting literature is dedicated to network traffic analysis. Attacks typically require an adversary to sniff network communication between the web server and the client machine. Most of the previous works tolerate encrypted traffic, e.g., generated by SSL/TLS or SSH connections, and some even work with anonymized traffic, e.g., routed over the Tor network. To fingerprint and classify websites, previous works have employed a variety of mathematical techniques, many of which are related to Machine Learning. In particular, the Jaccard Index~\cite{SunEtAl2002,SpreitzerEtAl2016}, multinomial naive-Bayes~\cite{HerrmannEtAl2009}, cosine similarity~\cite{ShiMatsuura2009}, Levenshtein distances and related metrics~\cite{LuEtAl2010,CaiEtAl2012,WangGoldberg2013}, k-th Nearest Neighbours~\cite{WangEtAl2014}, Decision Trees~\cite{JuarezEtAl2014}, Random Forests~\cite{HayesDanezis2016}, and Support Vector Machines~\cite{WangGoldberg2013} are studied.
\paragraph{\textbf{Browser/OS based.}} Website fingerprinting that targets the browser or the underlying operating system typically requires to execute malicious code, e.g. JavaScript, on the client machine. Through this attack vector, Gruss et al.~\cite{GrussEtAl2015} infer opened websites by targeting the memory deduplication feature of modern operating systems and hypervisors. Kim et al.~\cite{KimEtAl2016} exploit the Quota Management API of modern browsers. The authors recover opened websites via storage profiles, which they obtain by continuously reading the remaining space in the temporary storage. Vila and K{\"o}pf~\cite{VilaKoepf2017} employ a similar strategy by timing tasks in shared event loops that handle user interactions on all opened websites. A different approach is proposed by Jana and Shmatikov~\cite{JanaShmatikov2012}, who measure the memory footprint of browsers that is available through the \texttt{procfs} filesystem in Linux. The authors show that different websites exhibit different footprints and subsequently recover opened websites by comparing their footprints to previously recorded ones.
\paragraph{\textbf{Hardware based.}} The third attack vector for website fingerprinting leverages properties of the hardware that runs the web browser. Attacks are mounted by malicious code within the browser, by other processes on the same system, or by an external adversary with physical access to the device. Oren et al.~\cite{OrenEtAl2015} demonstrate that websites exhibit different profiles in the processor cache that can be observed from JavaScript. Hornby~\cite{Hornby2016} also fingerprints websites via the cache, but from another process that is running on the same processor as the web browser. Lee et al.~\cite{LeeEtAl2014} demonstrate that websites can be inferred from rendering traces that are retained in GPUs. The authors obtained these traces with a separate process that is running on the same system as the web browser. Booth~\cite{Booth2015} demonstrates that website fingerprints can also be constructed from the CPU load. The author stresses processor cores via JavaScript and indirectly measures the load of the system that is caused by other opened websites. Experiments are done using kNN classification and dynamic time warping comparisons. Clark et al.~\cite{ClarkEtAl2013} measure the power consumption of laptop and desktop systems and attribute different power profiles to different websites. The authors use Support Vector Machines to classify websites. Yang et al.~\cite{YangEtAl2017} extends this idea to mobile devices that are charged via USB. The authors use Random Forests for website classification.
\subsection{Our Work}
Similar to Hornby~\cite{Hornby2016} and Lee et al.~\cite{LeeEtAl2014}, we assume that a malicious application is running on the same processor as the web browser. In contrast to previous hardware based website fingerprinting, we leverage more than just the processor cache~\cite{OrenEtAl2015} or the processor load~\cite{Booth2015}. To the best of our knowledge, this work is the first that investigates hardware performance events in the context of website fingerprinting. In compliance with the state of the art in this field, we employ supervised Machine Learning techniques in the form of k-Nearest Neighbors, Decision Trees, and Support Vector Machines. While these are recognized instruments for network based fingerprinting, their application to hardware based website inference attacks is still fragmented~\cite{Booth2015,ClarkEtAl2013,YangEtAl2017}. In this work, we directly compare their effectiveness in multiple practical scenarios. In addition, we demonstrate that Deep Learning (in the form of Convolutional Neural Networks) outperforms traditional Machine Learning techniques that are established in both hardware performance event and website fingerprinting literature. To the best of our knowledge, CNNs have not been investigated in neither of these fields before.
\section{Conclusion}\label{sec:conclusion}
When websites are loaded in the browser, they stress the underlying hardware in a distinct pattern that is closely related to the contents of the website. This pattern is reflected in the microarchitectural state of the processor that executes the browser, which can be observed with high precision by counting hardware performance events. Since these events can be legitimately measured by user space applications, it is feasible to infer opened websites via performance event measurements. We demonstrated this by utilizing Machine Learning techniques, achieving high recognition rates even in the presence of background noise, trace misalignment, and varying network delays. In addition, the results show that CNNs are able to obtain better classification rates for a high number of classes in the presence of noise. By applying a CNN, whistleblowing websites are classified with 79\% accuracy among 40 websites, while the overall classification rate increases to 89.25\% with 5 guesses in the Tor Browser.
\section{Countermeasures}\label{sec:countermeasures}
The website inference technique presented in this work has two main requirements. First, websites loaded by a browser exhibit a unique footprint in the microarchitectural state of a processor. Second, this state can be observed via hardware performance events with sufficient precision. Any efforts impacting these two requirements directly affect the reliability, success, or practicality of our approach. The following two paragraphs subsequently discuss such efforts, formulate possible countermeasures, and approximately assess their feasibility.
\paragraph{\textbf{Displaying Websites.}} The first requirement of our classification technique implies that the executed operations during downloading and rendering of website elements are closely related to the type and amount of content displayed on a website. From a more abstract perspective this means that the execution flow and memory accesses of the browser vary for different websites.
A thorough approach for solving this issue is writing code such that instruction sequences and operand addresses are independent of the input that is processed. While this is reasonable to aspire to for security software, it has considerable practical drawbacks in the context of web browsers. First, removing input dependencies almost always impairs performance, because runtime optimizations typically rely on skipping operations and handling special cases differently. As a result, websites take longer to display, which is not in favor of user experience. Second, the larger the code, the more complex it gets to remove input dependencies. For web browsers, at least the code related to networking, storing, and rendering elements must be changed. Given that security-critical software has much smaller code bases and still struggles to remove input dependencies in practice~\cite{DoychevKoepf2016}, it is questionable that browser software will successfully implement this in the foreseeable future. If input dependencies cannot be entirely removed, artificial noise can be added to the website loading process. This is, for instance, achieved by introducing random delays between operations or by adding functions that process dummy data instead of real inputs. While this does not solve the underlying problem, it distorts the microarchitectural footprint each website exhibits while being displayed.
\paragraph{\textbf{Observing Events.}} The second requirement is the ability to observe the state of the processor microachitecture with high precision. Since performance monitoring units are dedicated parts of the processor, they cannot simply be removed or permanently deactivated. However, operating systems can block access to them from the software side. On Linux, the kernel can be compiled without the \texttt{perf} subsystem, e.g., by disabling the \texttt{CONFIG\_PERF\_EVENTS} configuration option. Also, the \texttt{perf\_event\_paranoid} file can be set to \texttt{3} or above to disable event counter access from user space. However, blocking or deactivating \texttt{perf} impairs applications that use performance events for legitimate profiling or debugging purposes. If event counting is generally needed, a possible compromise could be more fine-grained profiling restrictions, such that processes can only count events caused by themselves. Profiling any other process is prohibited, even if it belongs to the same user. While this requires changes to the \texttt{perf} interface, it provides legitimate applications access to profiling and at the same time impairs the fingerprinting technique presented in this work. This profiling restriction could be conveniently added as a dedicated setting in the \texttt{perf\_event\_paranoid} configuration file. An alternative solution is to lower the measurement precision of hardware performance events. This can, for instance, be achieved by artificially adding a certain level of noise to the event counts while retaining a sufficiently high signal-to-noise ratio, or by reducing sampling frequencies with which applications can acquire event counts. Yet again, this would also affect benign applications. A possible solution is to detect malicious programs and then only degrade their observations. However, the presented measurement approach behaves identically to legitimate applications and does not rely on exotic operations or measurement settings.
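As an illustration (our sketch, not part of the attack or of any existing tool), the current restriction level of a Linux system can be queried programmatically; the level semantics follow the kernel documentation, with level 3 available on kernels that support it:
\begin{verbatim}
# Query the perf restriction level on Linux (-1 = no restrictions,
# 0/1/2 = increasingly strict, >= 3 = user space event counting
# disabled on kernels that support this level).
with open("/proc/sys/kernel/perf_event_paranoid") as f:
    level = int(f.read())
print(f"perf_event_paranoid = {level}")
if level >= 3:
    print("user space HPE access disabled; this attack vector is closed")
\end{verbatim}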
\section{Discussion}\label{sec:discussion}
The experiments on ARM were conducted with core-wide measurements, whereas HPEs were acquired in a process-specific fashion on Intel. In general, core-wide acquisition is expected to introduce more noise in the measurements, e.g. from system activity in the background. For process-specific acquisition, the activity of the rest of the system does not impede the measurements, as the \texttt{perf} subsystem accumulates event counts only for the specified process. According to the results presented in the previous section, however, both scenarios allow websites to be classified with success rates of over 80\% for SVM. Similarly, the precise synchronization on ARM and the approximate process-scanning approach on Intel are both suitable for achieving high classification rates.
Compared to Google Chrome in Incognito mode, the results for the Tor Browser are in general worse. This can be explained by the browser start-up phase, which is always captured for Tor. Also, random network delays introduce jitter into the observations of the website loading. Another adverse effect is the changing geo-location of the Tor exit nodes. Many websites, particularly news sites like New York Times and Yahoo, customize their appearance based on the location of their visitors and therefore introduce additional noise in the measurements.
Among the Machine Learning techniques, Convolutional Neural Networks have proven to be the most capable of classifying websites if enough samples are available. This is the reason why CNNs performed well in the Google Chrome and Tor Browser experiments, but not in the ARM experiments. CNNs are built for the multi-classification of complex structures by extracting meaningful features. In contrast, SVM and kNN are designed to separate the space into classes, e.g. by constructing hyperplanes. Since the number of dimensions is high in our experiments, it is difficult to find the best separating boundary in every dimension. Nevertheless, there is still a need for further studies on CNNs, since the results could be improved by modifying the parameters and the number of layers and neurons.
In general, the feasibility of website fingerprinting via hardware performance events is not limited to the specific profiling scenarios and test platforms used in our experiments. This is because loading different websites creates different microarchitectural footprints, a logical consequence of optimized software that is designed to provide the best user experience. Therefore, similar results are expected for other x86 and ARM processors, as well as for other HPE interfaces and web browsers, unless mitigation strategies are implemented.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{all_bar.pdf}
\caption{CNN success rates per website for Tor Browser on Intel. The dashed line shows the average classification rate of 68\%.}
\label{fig:all_bar}
\end{figure}
\section{Introduction}\label{sec:intro}
Web browsers are indispensable components in our lives. They provide access to news and entertainment, and, more importantly, serve as a platform through which we perform privacy and security sensitive interactions such as online banking, web enabled healthcare, and social networking. Knowing the websites a user is visiting therefore reveals personal and highly sensitive information. To preserve the privacy of users, browsers consequently implement \emph{private browsing} or \emph{incognito} modes, which leave no traces of visited websites. More comprehensive protection is achieved by \emph{Onion routing}, e.g. \emph{Tor}, which protects users against Internet surveillance by obscuring packet routing information. By using a Tor-enabled browser, users may hide the websites they visit from adversaries monitoring their network communication. This has become essential for whistleblowers and dissidents who try to protect their identity against powerful corporations and repressive governments. Besides web browsers, other tools have emerged to mask the identity of the user, e.g. Signal/Redphone, Silent Phone, and Telegram. However, even the installation of such tools can be viewed as subversive action by a repressive regime. In contrast, privacy preserving browsers come pre-installed on many platforms.
While browsers have significantly matured in providing privacy assurances, they are still far from perfect. For instance, an adversary can still infer web browsing activity by exploiting microarchitectural leakages at the hardware level. In 2012, Jana and Shmatikov~\cite{jana2012memento} found that memory footprints of processes are unique and that they can be used to detect opened websites. In 2015, Liu et al.~\cite{liu2015last} demonstrated how the entire last-level cache of a processor can be profiled, which Oren et al.~\cite{OrenEtAl2015} leveraged to infer a small set of opened websites. The key to such inference attacks is that most applications exhibit different execution behavior depending on the input they are processing. They consequently stress the processor hardware in different ways. Whichever application is able to observe these load patterns can learn a great deal of what is being processed in other programs. What eventually enables real-world attacks is that many of the applications we use every day run in the background. Users trust these applications, even though they have little control over what is executed by third-parties.
In this work, we show that it is feasible for such a third-party application to collect data using hardware performance events (HPEs) and infer private user activity across application boundaries. In particular, we demonstrate that it is possible to infer opened websites, even when users browse in Incognito mode or with the Tor Browser. Such malicious behavior is facilitated in modern operating systems, as HPEs can often be monitored from user space. For the experiments in this work, we use the \texttt{perf} subsystem of the Linux kernel. Since HPE based information is incidental and often noisy, advanced methods for data analysis are needed. The recent advances in Machine Learning (ML) provide us with a powerful tool to classify the complex noisy data in an effective manner. We show that while k-th Nearest Neighbors, Support Vector Machines, and Decision Trees are not sufficient to classify the complex and noisy observed data into a high number of different classes, Convolutional Neural Networks, a Deep Learning technique, can efficiently extract meaningful data even in the presence of severe noise. As a result, we demonstrate that a malicious user space process can infer the web activity of users with very high success rates and in a highly automated fashion.
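To make this attack vector concrete, the following sketch shows how an unprivileged process could sample HPEs of a running browser via the \texttt{perf} command-line front end. The process name and event selection are illustrative; this mirrors the spirit of our measurement approach, not its exact implementation:
\begin{verbatim}
import subprocess

# Attach `perf stat` to a browser process: -p selects the PID,
# -I 100 prints counts every 100 ms, -x, switches to CSV output.
pid = subprocess.check_output(["pgrep", "-n", "chrome"]).decode().strip()
cmd = ["perf", "stat", "-p", pid, "-I", "100", "-x", ",",
       "-e", "instructions,branch-misses,cache-references",
       "sleep", "5"]  # sample for 5 seconds, as in our experiments
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stderr.splitlines()[:5])  # first few 100 ms samples
\end{verbatim}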
\medskip
\noindent
{\bf Our Contribution.} In summary, we
\begin{itemize}
\item employ advanced Machine Learning techniques, including Convolutional Neural Networks, and compare their efficiency,
\item use \texttt{perf} to access different types of hardware performance events and combine them to get a better classification rate,
\item cover 40 different websites, comprising 30 of the top Alexa sites and 10 whistleblowing portals,
\item detect different web pages of a domain to show that fine-grained browser profiling is possible,
\item demonstrate that the attacker does not need to precisely synchronize with the browser, as misalignment is compensated by the ML techniques,
\item show that it suffices to monitor Google Chrome and Tor Browser for at most 5 seconds to classify websites with high accuracy, and
\item outline possible mitigation strategies that impede website inference while still allowing access to performance profiling.
\end{itemize}
The rest of the paper is organized as follows. Section~\ref{sec:background} provides background information and related work for hardware performance events, machine learning techniques, and website fingerprinting. Section~\ref{sec:perf} explains how we measure HPEs, and Section~\ref{sec:scenarios} describes the profiling scenarios. Section~\ref{sec:mlearning} discusses our installments of the ML techniques, before Section~\ref{sec:results} presents the results of our experiments. A further discussion of the results is given in Section~\ref{sec:discussion}. Finally, Section~\ref{sec:countermeasures} describes mitigation strategies and Section~\ref{sec:conclusion} concludes this work.
\section{Usage of Machine Learning Techniques}\label{sec:mlearning}
After the hardware performance events have been acquired, the measurements for every event are concatenated to create the input data for the Machine Learning techniques. Both training and test sets are normalized to reduce the computation time. Cross-validation is used to obtain reliable success rates. All algorithms are implemented in Matlab 2017a and run on a standard dual-core Intel processor. The training time of the Convolutional Neural Network is reduced with the help of an NVIDIA Tesla K20 GPU accelerator. Further implementation details of the Machine Learning techniques are discussed in the following paragraphs.
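To make the preprocessing concrete, the following C sketch normalizes one concatenated measurement vector with min-max scaling. The scaling scheme is an assumption made for illustration; the actual pipeline of this work is implemented in Matlab.
\begin{lstlisting}[numbers=none, language=C]
#include <stddef.h>

/* Min-max normalize one concatenated measurement vector to [0, 1].
   Assumes n >= 1; the choice of min-max scaling is illustrative. */
static void normalize(double *x, size_t n)
{
    double lo = x[0], hi = x[0];
    for (size_t i = 1; i < n; ++i) {
        if (x[i] < lo) lo = x[i];
        if (x[i] > hi) hi = x[i];
    }
    double range = (hi > lo) ? (hi - lo) : 1.0; /* guard constant traces */
    for (size_t i = 0; i < n; ++i)
        x[i] = (x[i] - lo) / range;
}
\end{lstlisting}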
\paragraph{\textbf{k-th Nearest Neighbor (kNN).}} The \textit{fitcknn} command is used to implement kNN and to train our model. By default, the prior probabilities are the relative frequencies of the classes in the data; since every class is represented by the same number of measurements, the priors are initially equal. The Euclidean metric is used to determine the distance between feature vectors.
\paragraph{\textbf{Decision Tree (DT).}} For the Decision Tree, the \textit{fitctree} command is used to train the model. The default value for the maximum number of splits is N-1, where N denotes the number of classes. For the training phase, the minimum leaf size is 1 and the minimum parent size is 10.
\paragraph{\textbf{Support Vector Machine (SVM).}} We use \texttt{libsvm}~\cite{libsvm} in our experiments to implement multi-class Support Vector Machines. The model is created and trained based on a linear SVM. In general, the type of the SVM can be set to C-SVC or v-SVC. The parameter C is used to regularize the mapping function, whereas the parameter v is an upper bound on the fraction of training errors and a lower bound on the fraction of support vectors. In our experiments, we chose C-SVC.
\paragraph{\textbf{Convolutional Neural Network (CNN).}} We choose two autoencoders to classify our measurements into N classes. In each autoencoder, different levels of abstraction are learned from the feature vectors and mapped to a lower dimensional space. While the hidden layer size of the first autoencoder is 100$\,\cdot\,$N, the second autoencoder has a hidden layer size of 10$\,\cdot\,$N. The maximum number of iterations is set to 400 and L2 weight regularization is set to 0.001 for both autoencoders. The last layer is the softmax layer. The network is trained in a supervised fashion using labeled data. After the neural network is established and first classification results are obtained, the accuracy of the multilayer network model is improved by fine-tuning with backpropagation on the labeled training data. While CNNs have many advantages, the most important disadvantage is their memory demand. When we run out of GPU memory, we downsample the input data to reduce the length of the feature vectors; a sketch of this step is given below.
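The following C sketch illustrates such a reduction by averaging non-overlapping blocks of samples; this is one plausible downsampling method and not necessarily the exact one used in our Matlab implementation.
\begin{lstlisting}[numbers=none, language=C]
#include <stddef.h>

/* Shorten a trace by averaging non-overlapping blocks of `factor`
   samples (factor >= 1); returns the new length. One plausible
   reduction method, chosen here for illustration. */
static size_t downsample(const double *in, size_t n,
                         double *out, size_t factor)
{
    size_t m = n / factor;
    for (size_t i = 0; i < m; ++i) {
        double acc = 0.0;
        for (size_t k = 0; k < factor; ++k)
            acc += in[i * factor + k];
        out[i] = acc / (double)factor;
    }
    return m;
}
\end{lstlisting}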
\section*{Acknowledgments}
This work is supported by the National Science Foundation, under grants CNS-1618837 and CNS-1314770.
\section{Monitoring Hardware Performance Events}\label{sec:perf}
The performance monitoring unit (PMU), which is responsible for counting hardware performance events, implements a set of counters that can each be configured to count events of a certain type. The number of available events is often considerably larger than the number of available counters. Consequently, only a limited number of events can be counted in parallel. In order to measure more events, software layers that use the PMU typically implement time multiplexing. All experiments in this work succeed by measuring only as many events as there are hardware counters available, i.e., time multiplexing is not needed. Access to PMUs is typically restricted to privileged, i.e., kernel or system level code, but interfaces exist through which user space applications can gather event counts. On Unix and Linux based operating systems, the \texttt{PAPI}~\cite{LKD2015} or \texttt{perf}~\cite{LPM2016} interfaces are commonly implemented. In this work, we focus on the \texttt{perf} interface that is mainly found on Linux systems. Note that this work demonstrates the general feasibility of website fingerprinting with HPEs. Therefore, similar results are also expected on systems with other HPE interfaces.
\subsection{Profiling with Perf}
The \texttt{perf} event monitoring subsystem was added to the Linux kernel in version 2.6.31 and subsequently made available to the user space via the \texttt{perf\_event\_open} system call. Listing~\ref{lst:perfsyscall} shows the system call signature.
\begin{center}
\begin{minipage}{.48\textwidth}
\begin{lstlisting}[numbers=none, language=C, caption=\texttt{perf\_event\_open} system call signature~\cite{LPM2016}., label=lst:perfsyscall]
int perf_event_open(struct perf_event_attr *attr,
pid_t pid, int cpu,
int group_fd,
unsigned long flags);
\end{lstlisting}
\end{minipage}
\end{center}
The \texttt{perf\_event\_attr} struct is the main configuration object. It determines the type of event that should be counted and defines a wide range of acquisition properties. We focus only on a very limited number of settings and use zero values for all others. This keeps our measurements reproducible on a larger number of systems. The \texttt{type} field in \texttt{perf\_event\_attr} specifies the generic event type. As we focus on hardware based events, we only use \texttt{PERF\_TYPE\_HARDWARE} or \texttt{PERF\_TYPE\_HW\_CACHE}. The \texttt{config} field determines the actual event type. The event selection used in this work is given in Section~\ref{sec:scenarios}. In addition, we set the \texttt{exclude\_kernel} option, which avoids counting kernel activity. This improves the applicability of our measurement code, because kernel profiling is prohibited on some systems. Finally, the \texttt{size} field is set to the size of the event attribute struct.

The \texttt{pid} and \texttt{cpu} parameters are used to set the scope of the event profiling. In this work, we focus on two profiling scenarios: \emph{process-specific} and \emph{core-wide}. To limit event counting to a single process, \texttt{pid} is set to the process identifier and \texttt{cpu} is set to \texttt{-1}. Subsequently, events are counted only for the given process, but on any processor core. To enable core-wide counting, \texttt{cpu} is set to the core number that should be observed and \texttt{pid} is set to \texttt{-1}. Events are then counted only on one processor core, but for all processes running on it. The \texttt{group\_fd} parameter is used to signal that a selection of events belongs to a group. The \texttt{perf} subsystem then counts all members of a group as a unit. Since this is not a strict requirement for our approach, we omit grouping and set \texttt{group\_fd} to \texttt{-1}. The \texttt{flags} parameter is used to configure advanced settings, including the behavior when spawning new processes and the monitoring of Linux control groups (cgroups). As none of these settings are relevant to our measurement scenarios, we set \texttt{flags} to zero.
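The following C sketch illustrates this configuration for a process-specific counter of retired branch instructions (\texttt{pid} set to the target process, \texttt{cpu\,=\,-1}), with all remaining fields zeroed. The helper is illustrative and not our exact measurement code.
\begin{lstlisting}[numbers=none, language=C]
#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <string.h>
#include <unistd.h>

/* Open a process-specific counter for retired branch instructions,
   mirroring the settings described above. Returns a file
   descriptor, or -1 on failure. */
static int open_branch_counter(pid_t pid)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));   /* zero all unused settings     */
    attr.size = sizeof(attr);         /* size of the attribute struct */
    attr.type = PERF_TYPE_HARDWARE;   /* generic hardware event       */
    attr.config = PERF_COUNT_HW_BRANCH_INSTRUCTIONS;
    attr.exclude_kernel = 1;          /* do not count kernel activity */
    /* glibc provides no wrapper, so the raw system call is used */
    return (int)syscall(SYS_perf_event_open, &attr, pid, -1, -1, 0UL);
}
\end{lstlisting}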
Once \texttt{perf\_event\_open} succeeds, the returned file descriptor can be used to read and reset event counts, and to enable and disable counting. In our measurements, we read event counts using the standard \texttt{read} system call. We found this to yield a sufficiently high sampling frequency and subsequently high success rates during website fingerprinting. On our test systems, the duration of the \texttt{read} system call ranges between 1.5\,$\mu$s and 3.0\,$\mu$s when reading one counter value.
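A counter opened this way can then be sampled in a tight loop, as sketched below; each \texttt{read} returns the current cumulative event count as an 8-byte value. Again, this is a simplified sketch of the measurement loop, not the exact code used in this work.
\begin{lstlisting}[numbers=none, language=C]
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Sample an opened counter n times via the read() system call and
   store the cumulative counts; returns the number of samples
   actually acquired. */
static size_t sample_counter(int fd, uint64_t *trace, size_t n)
{
    size_t i;
    for (i = 0; i < n; ++i) {
        uint64_t count;
        if (read(fd, &count, sizeof(count)) != (ssize_t)sizeof(count))
            break;                    /* counter gone or read error */
        trace[i] = count;
    }
    return i;
}
\end{lstlisting}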
\paragraph{\textbf{Access Control.}} On Linux, access to \texttt{perf} can be configured for user space applications. The access level is specified as an integer value that is stored in \texttt{/proc/sys/kernel/perf\_event\_paranoid} in the \texttt{procfs} filesystem. A negative value grants user space applications full access to performance profiling. If the \texttt{paranoid} level is set to \texttt{0}, comprehensive profiling of the kernel activity is prohibited. A value of \texttt{1} prevents user space applications from core-wide event counting (\texttt{pid\,=\,-1}, \texttt{cpu\,$\geq$\,0}). A \texttt{paranoid} level of \texttt{2} additionally prohibits counting events that occur while the monitored application has passed control to kernel space, e.g., during a system call. Values above \texttt{2} deny event counting even in user space and essentially deactivate \texttt{perf} for user space applications. Note that the \texttt{paranoid} setting is typically overridden by applications started with the \texttt{CAP\_SYS\_ADMIN} capability, e.g., programs started by the root user.
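Before attempting any profiling, an attacker (or a defender auditing a system) can query this setting, e.g., with the following C sketch; the helper is illustrative.
\begin{lstlisting}[numbers=none, language=C]
#include <limits.h>
#include <stdio.h>

/* Query the current perf_event_paranoid level; returns the level,
   or INT_MIN on error. A level of 1 still permits the
   process-specific profiling used in the Intel scenarios. */
static int paranoid_level(void)
{
    FILE *f = fopen("/proc/sys/kernel/perf_event_paranoid", "r");
    int level = INT_MIN;
    if (f != NULL) {
        if (fscanf(f, "%d", &level) != 1)
            level = INT_MIN;
        fclose(f);
    }
    return level;
}
\end{lstlisting}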
\section{Website Profiling Results}\label{sec:results}
In each of the profiling scenarios described in Section~\ref{sec:scenarios}, we monitor HPEs for 30 of the most visited websites according to Alexa~\cite{AlexaInternet2017} (excluding adult sites). This selection is listed in Appendix~\ref{app:urls} (1-30) and used to illustrate the general effectiveness of the Machine Learning techniques to classify websites based on hardware performance events. To demonstrate that fine-grained website classification is also feasible, 10 different sub-pages of the \texttt{Amazon.com} domain are monitored in Google Chrome on Intel. Finally, a selection of whistleblowing websites is measured when visited with the Tor browser. They are also listed in Appendix~\ref{app:urls} (31-40).
\subsection{Google Chrome on ARM}
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{arm_bar.pdf}
\caption{SVM success rates per website for Google Chrome on ARM. The dashed line shows the average classification rate of 84\%.}
\label{fig:arm_bar}
\end{figure}
For the experiments on ARM, each website is monitored 20 times to train the models. Each measurement consists of 25,000 samples per hardware performance event, which are concatenated for all six events to yield a final measurement size of 150,000 samples. For 30 websites, the total training data size is therefore $90\cdot 10^{6}$ samples. Based on this training set, the success rates after cross-validation are 84\% for linear SVM, 80\% for kNN, and less than 50\% for DT and CNN. The low success rates of DT and CNN indicate that not enough samples have been acquired. Figure~\ref{fig:arm_bar} illustrates the classification rates for each of the visited websites when classified with SVM. Since the number of samples collected in this scenario is small, 10-fold cross-validation is used. The lowest detection rate is 70\%, which shows that core-wide profiling is still feasible even in the presence of noise and background system activity. The average classification rate of 84\% is shown as a dashed line in the figure.
\subsection{Google Chrome (Incognito) on Intel}
For the Google Chrome experiments on Intel, the number of measurements per website is increased to 50. As more samples are acquired, fixed training and test sets are derived instead of using cross-validation. Out of the 50 observations, 40 are used for the training phase, whereas the remaining 10 are kept to test the derived models. Since each website is monitored for only 1 second, every measurement now consists of 10,000 samples per event. With three observed events, this yields a total training set size of $36\cdot 10^{6}$ and a test set size of $9\cdot 10^{6}$ samples.
Figure~\ref{fig:chrome_diff} shows the success rates over an increasing number of training measurements for all Machine Learning techniques and Google Chrome in Incognito mode. Clearly, CNN achieves the highest classification rate if enough training samples are available. In particular, the success rate for 40 training observations per website is 86.3\%. If the training data size is small, SVM and kNN achieve similar success rates as CNN. Due to the large size of feature vectors in the training and test data, DT gives lower success rates than the other ML techniques. Regarding the computation effort, the training phase of CNN takes 2 hours on a GPU and is consequently the longest among the Machine Learning techniques. In contrast, the test phase takes approximately 1 minute for every ML technique.
The second experiment for Google Chrome in Incognito mode on Intel assumes that an adversary has detected a website that the user has visited. Consequently, the attacker tries to infer which page of the website the user is looking at. To illustrate the feasibility of this attack, we selected 10 pages of the \texttt{Amazon.com} domain that display different sections of the online store (kitchen, bedroom, etc.). Naturally, this scenario is more challenging, as the difference between web pages of the same domain is smaller than for entirely different websites. Nevertheless, it is still possible to correctly classify the visited web pages with moderate success. This is illustrated in Figure~\ref{fig:chrome_amazon}. When using CNN and SVM, the success rate is 64\%. kNN yields 60\% success rate, while DT drops to 52\%.
For CNN and SVM, we also investigate the success rates when the number of guesses is increased. This is shown in Figure~\ref{fig:guess_incognito}. If the first 5 result classes are considered, websites can be detected with 99\% accuracy for SVM and CNN. Similar results are obtained for the same domain experiments, where both CNN and SVM yield 92\% accuracy. Relaxing the number of guesses therefore significantly improves the success rates.
\begin{figure}[t!]
\centering
\subfigure[Alexa Top 30]{\includegraphics[width=0.48\textwidth]{chrome_diff_web.pdf}
\label{fig:chrome_diff}}
\subfigure[Same Domain Pages]{\includegraphics[width=0.48\textwidth]{chrome_amazon.pdf}\label{fig:chrome_amazon}}
\caption{Success rate vs. number of training measurements for Google Chrome (Incognito) and (a) 30 different websites (b) 10 same domain web pages.}
\end{figure}
\begin{figure}[t!]
\centering
\subfigure[Google Chrome (Incognito)]{\includegraphics[width=0.48\textwidth]{guesses_incognito.pdf}
\label{fig:guess_incognito}}
\subfigure[Tor Browser]{\includegraphics[width=0.48\textwidth]{guesses_tor.pdf}
\label{fig:guess_tor}}
\caption{Number of guesses vs. classification rate for (a) Google Chrome (Incognito) and (b) Tor Browser. Solid line represents results for Alexa Top 30, while the dashed line illustrates the same domain results.}
\end{figure}
\subsection{Tor Browser on Intel}
For the Tor Browser experiments on Intel, the same events are observed and the same number of measurements are taken for each website. Again, 40 of those measurements are used to construct the training set, while 10 measurements form the test set. As the Tor Browser is monitored for 5 seconds, 50,000 samples are acquired for each event and website. This yields 150,000 samples for one measurement, $180\cdot 10^{6}$ samples for the entire training set, and $45\cdot 10^{6}$ samples for the test set.
Similar to the Google Chrome experiments on Intel, Figure~\ref{fig:tor_diff} shows the success rates over an increasing number of training measurements for all Machine Learning techniques and Tor Browser. CNN yields the highest success rate of 71\%. While SVM and kNN have similar success rates around 66\%, Decision Tree yields a lower accuracy of 60\%. The results show that CNN can handle noisy data and misalignment problems better than other methods, since CNN learns the relations between traces.
\begin{figure}[t!]
\centering
\subfigure[Alexa Top 30]{\includegraphics[width=0.48\textwidth]{tor_diff.pdf}
\label{fig:tor_diff}}
\subfigure[Same Domain Pages]{\includegraphics[width=0.48\textwidth]{tor_amazon.pdf}
\label{fig:tor_amazon}}
\caption{Success rate vs. number of training measurements for Tor Browser and (a) 30 different websites (b) 10 same domain web pages.}
\end{figure}
The experiment for 10 web pages on \texttt{Amazon.com} is repeated for the Tor Browser and the results are illustrated in Figure~\ref{fig:tor_amazon}. In contrast to the Google Chrome results, Decision Tree yields the highest success rate of 59\%. We believe the reason is the small number of classes, which increases the efficiency of DT. The remaining algorithms classify the same domain web pages with a similar success rate of approximately 49\%. Also, Figure~\ref{fig:guess_tor} shows the success rates for CNN and SVM over an increasing number of guesses. While the random selection success rate is around 16\% for 5 guesses, CNN achieves a success rate of 94\%. For the same domain web pages, the success rate of CNN is 88\% for 5 guesses. SVM achieves slightly worse results. Slightly increasing the number of guesses thus yields a significant increase in classification success.
Finally, we investigate whistleblowing websites, since visiting them anonymously is one of the important reasons to use the Tor Browser. For the experiments, we select 10 websites from~\cite{whistle_list}, which are given in Appendix~\ref{app:urls}. In the first step, these whistleblowing websites are classified using all ML techniques. While CNN yields the best classification rate of 84\%, SVM exhibits a success rate of 78\%. In contrast, DT and kNN have lower success rates around 60\%. In the second step, the classification is repeated for all websites considered so far (whistleblowing and Alexa Top 30). Figure~\ref{fig:tor_all} illustrates the success rates for all ML techniques. When classifying 40 websites, CNN yields a success rate of 68\%, while SVM achieves 55\%. In contrast, the kNN and DT algorithms cannot classify the websites effectively. When the number of guesses is increased, the success rate improves again. Figure~\ref{fig:whistle_all_guess} shows the classification rates over an increasing number of guesses. If only whistleblowing websites and 5 guesses are considered, CNN yields a success rate close to 100\%. When all websites are considered, the success rate of CNN is 89.25\%. SVM achieves slightly worse results.
Individual success rates for CNN are shown in Figure~\ref{fig:all_bar}. The lowest success rate is around 20\% for two websites, and seven websites are classified correctly with 100\% accuracy. An interesting observation is that among the 40 websites, the whistleblowing portals are still classified with good success rates. With an average success rate of 68\%, CNN is more capable than the other ML techniques of correctly classifying websites opened in the Tor browser.
\begin{figure}[t!]
\centering
\subfigure[All Websites]{\includegraphics[width=0.48\textwidth]{tor_all.pdf}
\label{fig:tor_all}}
\subfigure[Whistleblowing and All Websites]{\includegraphics[width=0.48\textwidth]{whistle_all_guess.pdf}
\label{fig:whistle_all_guess}}
\caption{(a) Success rate vs. number of training measurements for Tor Browser and all websites. (b) Number of guesses vs. classification rate for whistleblowing (dashed) and all websites (solid).}
\end{figure}
\section{Browser Profiling Scenarios}\label{sec:scenarios}
We investigate the inference of opened websites via HPEs in three distinct scenarios hosted on two Linux test systems. As we are relying on the standardized \texttt{perf\_event\_open} system call of the Linux kernel, there is no need to change the measurement code when switching between systems. The following paragraphs describe each scenario in more detail.
\paragraph{\textbf{1.) Google Chrome on ARM.}} In this scenario, we profile the Google Chrome browser (v55.0.2883) with default options on an ARM Cortex-A53 processor. While the browser loads websites, a malicious user space application is measuring six hardware performance events. In particular, we acquire \texttt{HW\_INSTRUCTIONS}, \texttt{HW\_\-BRANCH\_\-INSTRUCTIONS}, \texttt{HW\_\-CACHE\_\-REFERENCES}, \texttt{L1\_\-DCACHE\_\-LOADS}, \texttt{L1\_\-ICACHE\_\-LOADS}, and \texttt{HW\_\-BUS\_\-CYCLES} events. This selection of events covers instruction retirements, cache accesses, and external memory interfaces. It gives a comprehensive view of the microarchitectural load the browser is putting on the processor. The selected events are measured core-wide, hence including noise from other processes and background activity of the operating system. Since we want to assess the feasibility of core-wide profiling, the browser process is bound to the measured processor core. The events are then measured for five seconds.
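Binding the browser process to the measured core can be achieved with the standard Linux affinity interface, as in the following C sketch; the exact setup code of our experiments may differ.
\begin{lstlisting}[numbers=none, language=C]
#define _GNU_SOURCE
#include <sched.h>
#include <sys/types.h>

/* Pin a process (e.g. the browser) to one core so that core-wide
   counting on that core captures its activity. Returns 0 on
   success, -1 on error. */
static int pin_to_core(pid_t pid, int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return sched_setaffinity(pid, sizeof(set), &set);
}
\end{lstlisting}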
\paragraph{\textbf{2.) Google Chrome (Incognito) on Intel.}} In this scenario, we profile Google Chrome in Incognito mode with default options on an Intel i5-2430M processor. The malicious user space application is measuring three hardware performance events, namely \texttt{HW\_\-BRANCH\_\-INSTRUCTIONS}, \texttt{HW\_\-CACHE\_\-REFERENCES}, and \texttt{LLC\_\-LOADS}. In contrast to the ARM scenario, the malicious application acquires process-specific events. Hence, the browser processes may float across all processor cores. Since the Intel platform only features three configurable hardware counters, not all of the events measured on ARM can be considered. Compared to the overall retired instructions, we found the retired branch instructions to yield more usable information. As the browser processes are not bound to one core anymore, we substitute the events related to the L1 cache with last-level cache loads. In addition, the bus cycle event is omitted, because it is noisier on the Intel platform. The selected events are then measured for one second specifically for the rendering process of the opened website.
\paragraph{\textbf{3.) Tor Browser on Intel.}} In this scenario, we profile the Tor Browser (v6.5.1, based on Firefox v45.8.0) on the same Intel platform as before. In contrast to Chrome, the Tor Browser renders all tabs in one process, which is profiled by the malicious application. While the same performance events are observed, the measurement duration is prolonged. This is because the Tor network introduces significant delays while opening websites.
\paragraph{\textbf{Synchronization.}} None of the scenarios require strict synchronization between the browser and the process of the adversary. Small misalignment is simply passed on to the Machine Learning step. Therefore, we only investigate simple synchronization techniques that can be achieved in practice. For Google Chrome on Intel, the adversary scans the running processes twice per second and checks whether a new rendering process has been spawned. Once a new process is detected, the adversary starts to measure the corresponding process-specific events. The Tor Browser, in contrast, is started freshly for every opened website. The adversary again checks all running processes twice per second and once the Tor Browser is detected, the process-specific profiling is started. This includes additional noise as the browser startup phase is also captured. In the ARM scenario, the measurements are precisely aligned with the start of loading a website. This is used to investigate whether more precise alignment yields better results. Such a trigger signal could be derived from a sudden change or characteristic pattern in the event counts, as the load of the system changes when a website is opened.
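One plausible C implementation of such a process scan is sketched below: it walks \texttt{/proc}, reads each process name from \texttt{/proc/<pid>/comm}, and reports a matching process that appeared since the previous scan. The helper name and its new-pid heuristic are our assumptions (pid reuse is ignored in this sketch).
\begin{lstlisting}[numbers=none, language=C]
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

/* Scan /proc for a process named `name` with a pid larger than any
   seen in the previous scan; returns the earliest new match, or -1. */
static pid_t find_new_process(const char *name, pid_t last_seen)
{
    DIR *proc = opendir("/proc");
    struct dirent *e;
    pid_t found = -1;
    if (proc == NULL)
        return -1;
    while ((e = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)e->d_name[0]))
            continue;                      /* not a process directory */
        pid_t pid = (pid_t)atoi(e->d_name);
        if (pid <= last_seen)
            continue;                      /* already known           */
        char path[64], comm[64] = { 0 };
        snprintf(path, sizeof(path), "/proc/%d/comm", (int)pid);
        FILE *f = fopen(path, "r");
        if (f == NULL)
            continue;                      /* process already exited  */
        if (fgets(comm, sizeof(comm), f) != NULL) {
            comm[strcspn(comm, "\n")] = '\0';
            if (strcmp(comm, name) == 0 && (found < 0 || pid < found))
                found = pid;               /* earliest new match      */
        }
        fclose(f);
    }
    closedir(proc);
    return found;
}
\end{lstlisting}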
\section{Conclusion}
\label{sec6}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
We addressed the potential problem of negative density and pressure that emerges when high order WENO schemes are applied to the compressible Euler equations in some extreme situations. The approach that we propose is in the conservative high order finite difference WENO approximation framework. We generalized the MPP flux limiting technique for high order finite difference WENO methods solving scalar conservation laws to a class of PP flux limiters for compressible Euler equations. We also developed the parametrized flux limiters for equations with source terms. Extensive numerical tests show the capability of the proposed approach: without sacrificing accuracy or much efficiency, the new schemes produce solutions satisfying the PP property for scalar problems with a source term, and solutions with positive density and pressure for compressible Euler equations with or without source terms.
\section{Introduction}
\label{sec1}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
The success of the high order essentially non-oscillatory (ENO) or weighted ENO (WENO) methods for solving hyperbolic conservation laws has been well documented in the literature \cite{harten1987uniformly, shu1988efficient, liu1996nonoscillatory, Jiang_Shu} and the references therein. At the heart of the high order ENO/WENO schemes for hyperbolic problems is their robustness, namely stability in the sense of suppressing spurious oscillations around discontinuities. The application of high order finite difference and finite volume ENO/WENO methods to hyperbolic systems \cite{shu1988efficient, Jiang_Shu}, such as the compressible Euler equations
\begin{eqnarray}
\label{eq:euler}
\left(
\begin{array}{l}
\rho\\
\rho u\\
E
\end{array}
\right)_t+
\left(
\begin{array}{l}
\rho u \\
\rho u^2 + P\\
(E+ P)u
\end{array}
\right)_x=0,
\end{eqnarray}
achieves the goal of suppressing oscillations when discontinuous solutions emerge during the time evolution. However, in extreme cases, such as high Mach number flow simulations, a slightly different (although equally important) problem is that the high order schemes that we are using might produce solutions with negative density or pressure, which leads to an ill-posed problem, often seen as blow-up of the numerical simulation. The failure to preserve positive density and pressure in such circumstances poses tremendous difficulty for applying high order schemes to some of the challenging simulations in practice.
In earlier work, see \cite{einfeldt1991godunov, linde1997robust, perthame1996positivity} and the references therein, much attention was paid to the positivity preservation of schemes up to second order. It was not until the recent work by Zhang \& Shu \cite{zhang2010positivity} that arbitrarily high order finite volume WENO and discontinuous Galerkin methods were designed to preserve positivity. The method proposed in \cite{zhang2010positivity} is a successful generalization of their earlier work on the maximum principle preserving (MPP) computations of scalar conservation laws,
see \cite{zhang2011maximum}. Their approach relies on limiting the reconstructed polynomials (finite volume WENO) or representing polynomials (discontinuous Galerkin) around cell averages to be MPP.
The positivity preserving (PP) finite volume WENO scheme and DG scheme by Zhang \& Shu can be proved to have the designed arbitrarily high order accuracy when equipped with a proper CFL number. In later work by the same authors \cite{zhang2012positivity}, a PP finite difference WENO method is presented for the case where the density and pressure are strictly greater than a fixed positive constant. In \cite{hu2013positivity}, a flux cut-off limiter method is applied to the high order finite difference WENO method to ensure positive density and pressure.
In this paper, we continue along the line of research on the parametrized flux limiters proposed in \cite{mpp_xu, mpp_xuMD, mpp_xqx} for high order ENO/WENO methods solving a scalar hyperbolic conservation law
\begin{eqnarray}
\label{hcl} u_t+f(u)_x=0
\end{eqnarray}
subject to the initial condition $u({x}, 0)= u_0 ({x})$. For this particular family of equations, the solution satisfies a strict maximum principle
\begin{eqnarray}
\label{CMPP}
u_m\le u(x, t) \le u_M \quad \text{if} \quad u_m\le u_0(x)\le u_M.
\end{eqnarray}
The idea of the parametrized flux limiters for general conservative scheme solving scalar conservation laws is to modify high order numerical fluxes to enforce the discrete maximum principle for the updated solution. In general, a conservative high order scheme with explicit multi-stage Runge-Kutta (RK) time integration for (\ref{hcl}) can be written as
\begin{equation}
\label{Conserative}
u^{n+1}_j=u^{n}_j-\frac{\Delta t}{\Delta x} (\hat H^{rk}_{j+\frac12}-\hat H^{rk}_{j-\frac12}),
\end{equation}
where $\hat{H}^{rk}_{j\pm\frac12}$ are the corresponding fluxes at the final stage of RK methods.
The MPP properties of high order schemes are realized by taking a convex combination of a high order flux $\hat{H}^{rk}_{j+\frac12}$ and a first order monotone flux $\hat{h}_{j+\frac12}$: $\tilde H^{rk}_{j+\frac12}=\hat{h}_{j+\frac12} + \theta_{j+\frac12} (\hat{H}^{rk}_{j+\frac12}-\hat{h}_{j+\frac12})$, with $\theta_{j+\frac12} \in [0, 1]$. The limiting parameters $\theta_{j+\frac12}$, which measure how much of the high order flux correction is retained, can be found by decoupling the following MPP constraints, which are linear with respect to $\theta_{j\pm\frac12}$,
\begin{equation}
\label{DMP}
u_m\le u^{n+1}_j=u^{n}_j-\frac{\Delta t}{\Delta x} (\tilde H^{rk}_{j+\frac12}-\tilde H^{rk}_{j-\frac12})\le u_M.
\end{equation}
A similar idea is utilized in this paper: the high order numerical fluxes are modified just enough to ensure that the updated density and pressure are positive.
When such parametrized flux limiters are generalized to preserve the positivity of density and pressure of numerical solutions for Euler equations with source terms, there are several new challenges.
One of the main difficulties is that the linear MPP constraint (\ref{DMP}) becomes nonlinear for the positivity preservation of the pressure, which depends nonlinearly on the density, momentum and energy. We address this challenge by decoupling the nonlinear PP constraint over a convex set of limiting parameters. The proposed approach provides a sufficient condition for preserving positive pressure. The presence of a source term can also be conveniently handled in the parametrized flux limiting framework. Notice that we only require positivity preservation for the solutions at the final stage of the RK method, for the sake of preserving the designed high order temporal accuracy. If negative density or pressure appears at intermediate stages of the RK method, the speed of sound is computed as $c=\sqrt{\gamma\frac{|p|}{|\rho|}}$.
Our approach is similar to the very early discussions of the flux limiting approach \cite{boris1973flux, chakravarthy1983high, engquist1980stable, van1974towards, sweby1984high} for the purpose of achieving a total variation diminishing (TVD) property, which is a much stronger stability requirement than the maximum principle. Since those schemes are required to be TVD, most of them are at most second order accurate.
To distinguish our work from others' in the context of designing arbitrarily high order schemes, we would like to point out that the method we are proposing only involves the modification of high order numerical fluxes. Another critical difference is that the parametrized flux limiters are only applied to the final stage of the multi-stage RK methods. These new features are designed to produce numerical solutions with positive density and pressure, while allowing for relatively large CFL numbers without sacrificing accuracy in our extensive numerical tests.
The proposed method is essentially different from that of Zhang \& Shu \cite{zhang2012positivity}, in which the PP property is realized only with fine enough numerical meshes when the density and the pressure are extremely close to $0$. The flux limiting method we are proposing is also different from the flux cut-off method of \cite{hu2013positivity}, whose approach demands a significantly reduced CFL number for accuracy, as illustrated in their analysis and numerical tests.
However, a proof that high order accuracy is maintained when the PP flux limiters are applied to the finite difference WENO method solving the Euler system is very difficult. In this paper, we rely on numerical observations to demonstrate the maintenance of high order accuracy. A rigorous proof that the MPP flux limiters modify the original high order flux while maintaining up to third order accuracy for general nonlinear scalar cases is provided in \cite{mpp_xqx}, and up to fourth order accuracy for linear advection equations in \cite{mpp_vp}.
The paper is organized as follows. In Section \ref{sec2}, we give a brief review of the parametrized MPP flux limiters for high order conservative schemes solving (\ref{hcl}). We then generalize the MPP flux limiters to a scalar problem with source terms. In Section \ref{sec3}, we present the main algorithm of the parametrized PP finite difference WENO RK method for the compressible Euler equation in one and two dimensions. An implementation procedure is given in the presence of source terms. In Section \ref{sec5}, we perform extensive numerical tests to illustrate the effectiveness of the proposed method. We finally conclude in Section \ref{sec6}.
\section{Parametrized MPP flux limiters for scalar equations}
\label{sec2}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\subsection{Review of MPP flux limiters for scalar equations}
\label{sec2.1}
For simplicity, we consider a simple one-dimensional hyperbolic conservation law
\begin{eqnarray}
\label{eq: adv}
u_t+f(u)_x=0, \quad x \in [0, 1],
\end{eqnarray}
with an initial condition $u(x,0) = u_0(x)$ and a periodic boundary condition.
We adopt the following spatial discretization for the domain $[0, 1]$
\[
0 = x_\frac12 < x_\frac32 < \cdots < x_{N+\frac12} = 1,
\]
where $I_j = [x_{j-\frac12}, x_{j+\frac12}]$ has the mesh size $\Delta x = \frac1N$.
Let $u_j(t)$ denote the solution at grid point $x_j = \frac12(x_{j-\frac12}+x_{j+\frac12})$ at continuous time $t$.
The finite difference scheme evolves the point values of the solution in a conservative form
\begin{eqnarray}
\label{eq: semi-discrete}
\frac{d}{dt}u_j(t)+ \frac{1}{\Delta x} (\hat H_{j+1/2}-\hat H_{j-1/2}) = 0.
\end{eqnarray}
The numerical flux $\hat{H}_{j+\frac12}$ in equation \eqref{eq: semi-discrete} can be reconstructed with high order accuracy from neighboring flux values $f(u(x_i, t)),$ $i=j-p, \cdots, j+q$ by WENO reconstruction \cite{Jiang_Shu,shu1998essentially}. By adaptively assigning nonlinear weights to neighboring candidate stencils, the WENO reconstruction preserves the high order accuracy of the underlying linear scheme in smooth regions of the solution, while producing a sharp and essentially non-oscillatory capture of discontinuities. Equation \eqref{eq: semi-discrete} can be further discretized in time by a high order time integrator via the method-of-lines approach. For example, the scheme with a third order total variation diminishing (TVD) RK time discretization is
\begin{eqnarray}
u_j^{(1)} &=& u_j^n+\Delta t L(u_j^n), \nonumber \\
u_j^{(2)} &=& u_j^n+\frac{1}{4}\Delta t (L(u_j^n) + L(u_j^{(1)})), \nonumber \\
u_j^{n+1} &=& u_j^n+\frac{1}{6} \Delta t \bigl( L(u_j^{n})+ L(u_j^{(1)})+4 L(u_j^{(2)})\bigr).
\label{eq:rk3}
\end{eqnarray}
where $u^{(k)}_{j}$ and $u^n_j$ denote the numerical solution at $x_j$ at the $k$-th RK stage and at time $t^n$, respectively.
Let $\Delta t$ be the time step size. $L(u^{(k)}) \doteq -\frac{1}{\Delta x} (\hat H^{(k)}_{j+\f12}-\hat H^{(k)}_{j-\f12})$ with $\hat H^{(k)}_{j+\f12}$ being the numerical flux from the
finite difference WENO reconstruction based on $\{u_j^{(k)}\}_{j=1}^N$ at the intermediate RK stages. The final stage of (\ref{eq:rk3}) can be rewritten as
\begin{eqnarray}
\label{eq:rkfinal}
u^{n+1}_j=u^{n}_j-\lambda (\hat H^{rk}_{j+\f12}-\hat H^{rk}_{j-\f12}),
\end{eqnarray}
with $\lambda=\frac{\Delta t}{\Delta x}$ and
\begin{eqnarray}
\hat H^{rk}_{j+\f12} \doteq \frac{1}{6}\left(\hat H^n_{j+\f12}+\hat H^{(1)}_{j+\f12}+4\hat H^{(2)}_{j+\f12}\right).
\label{eq:rkflux}
\end{eqnarray}
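For reference, (\ref{eq:rk3}) is the standard TVD RK3 scheme of Shu and Osher \cite{shu1988efficient} written out in terms of $L$; for instance, the second stage follows from
\[
u^{(2)}_j=\frac34 u^n_j+\frac14\left(u^{(1)}_j+\Delta t\, L(u^{(1)}_j)\right)=u^n_j+\frac14\Delta t\left(L(u^n_j)+L(u^{(1)}_j)\right),
\]
and expanding the final stage in the same way yields the coefficients in (\ref{eq:rkflux}).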
The parametrized MPP flux limiters in \cite{mpp_xqx} are based on the finite difference RK WENO scheme for equation \eqref{eq: adv} reviewed above.
Let $u_{m}= \underset{x}{\text{min}}(u(x, 0))$ and $u_{M}=\underset{x}{\text{max}}(u(x, 0))$.
The idea of the parametrized MPP flux limiter is to modify the high order flux $\hat{H}^{rk}_{j\pm\frac12}$ in equation \eqref{eq:rkflux} towards a first order monotone flux denoted as $\hat{h}_{j\pm\frac12}$ by taking
a linear combination of them,
\begin{equation}
\label{eq: linear_comb}
\tilde H^{rk}_{j\pm\f12} \doteq \hat{h}_{j\pm\frac12} + \theta_{j\pm\frac12} (\hat{H}^{rk}_{j\pm\frac12}-\hat{h}_{j\pm\frac12}), \quad \theta_{j\pm\frac12} \in [0, 1].
\end{equation}
The original high order flux $\hat H^{rk}_{j\pm\f12}$ in equation \eqref{eq:rkflux} is then replaced by the modified flux $\tilde H^{rk}_{j\pm\f12}$ above.
To preserve the MPP property, we wish to have $u_{m}\le u^{n+1}_{j} \le u_{M}$ at the final RK stage on each time step, i.e.
\begin{eqnarray}
\label{eq:mpp}
u_{m} \le u^{n}_j-\lambda (\tilde H^{rk}_{j+\f12}-\tilde H^{rk}_{j-\f12}) \le u_{M}.
\end{eqnarray}
For the parametrized MPP flux limiter, a pair $(\Lambda_{-\f12, {I_j}}, \Lambda_{+\f12, {I_j}})$ needs to be found such that
any pair $(\theta_{j-\f12}, \theta_{j+\f12}) \in [0, {\Lambda_{-\f12, {I_j}}}]\times [0, {\Lambda_{+\f12, {I_j}}]}$ satisfies (\ref{eq:mpp}). Under such a constraint, $\theta_{j\pm\f12}$ are chosen to be as close to $1$ as possible for accuracy, which is done by the following three steps. Below, $\epsilon$ is
a small positive number introduced to avoid a zero denominator, e.g., $\epsilon=10^{-13}$.
\begin{enumerate}
\item
The right inequality of (\ref{eq:mpp}), that is the maximum value part, can be rewritten as
\begin{eqnarray}
\label {umax}
\lambda \theta_{j-\f12} (\hat H^{rk}_{j-\f12}-\hat h_{j-\f12}) - \lambda \theta_{j+\f12} (\hat H^{rk}_{j+\f12}-\hat h_{j+\f12})-\Gamma^M_j \le 0,
\end{eqnarray}
where $\Gamma^M_j=u_{M}-u_j+\lambda (\hat h_{j+\f12}-\hat h_{j-\f12}) \ge 0$.
Let $F_{j\pm\f12}=\hat H^{rk}_{j\pm\f12}-\hat h_{j\pm\f12}$; the decoupling of (\ref{umax}) on cell $I_j$ gives:
\begin{enumerate}
\item If $F_{j-\f12}\le 0$ and $F_{j+\f12}\ge 0$, let $(\Lambda^M_{-\f12, I_j}, \Lambda^M_{+\f12, I_j})=(1, 1)$.
\item If $F_{j-\f12}\le 0$ and $F_{j+\f12} < 0$, let $(\Lambda^M_{-\f12, {I_j}}, \Lambda^M_{+\f12, {I_j}})=(1, \min(1, \frac{\Gamma^M_j}{-\lambda F_{j+\f12}+\epsilon}))$.
\item If $F_{j-\f12} > 0$ and $F_{j+\f12}\ge 0$, let $(\Lambda^M_{-\f12, {I_j}}, \Lambda^M_{+\f12, {I_j}})=(\min(1, \frac{\Gamma^M_j}{\lambda F_{j-\f12}+\epsilon}), 1)$.
\item If $F_{j-\f12} > 0$ and $F_{j+\f12} < 0$,
\begin{itemize}
\item if $(\theta_{j-\f12}, \theta_{j+\f12})=(1, 1)$ satisfies (\ref{umax}), let $(\Lambda^M_{-\f12, {I_j}}, \Lambda^M_{+\f12, {I_j}})=(1, 1)$;
\item otherwise, let
$(\Lambda^M_{-\f12, {I_j}}, \Lambda^M_{+\f12, {I_j}})=(\frac{\Gamma^M_j}{\lambda F_{j-\f12}- \lambda F_{j+\f12}+\epsilon},\frac{\Gamma^M_j}{\lambda F_{j-\f12}- \lambda F_{j+\f12}+\epsilon} )$.
\end{itemize}
\end{enumerate}
\item
The left inequality of (\ref{eq:mpp}), that is the minimum value part, can be rewritten as
\begin{eqnarray}
\label {umin}
0\le \lambda \theta_{j-\f12} (\hat H^{rk}_{j-\f12}-\hat h_{j-\f12}) - \lambda \theta_{j+\f12} (\hat H^{rk}_{j+\f12}-\hat h_{j+\f12})-\Gamma^m_j,
\end{eqnarray}
where $\Gamma^m_j=u_{m}-u_j+\lambda (\hat h_{j+\f12}-\hat h_{j-\f12}) \le 0$. Similar to the maximum value case, the decoupling of (\ref{umin}) on cell $I_j$ gives:
\begin{enumerate}
\item If $F_{j-\f12}\ge 0$ and $F_{j+\f12}\le 0$, let $(\Lambda^m_{-\f12, I_j}, \Lambda^m_{+\f12, I_j})=(1, 1)$;
\item If $F_{j-\f12}\ge 0$ and $F_{j+\f12}> 0$, let $(\Lambda^m_{-\f12, {I_j}}, \Lambda^m_{+\f12, {I_j}})=(1, \min(1, \frac{\Gamma^m_j}{-\lambda F_{j+\f12}-\epsilon}))$;
\item If $F_{j-\f12}< 0$ and $F_{j+\f12}\le 0$, let $(\Lambda^m_{-\f12, {I_j}}, \Lambda^m_{+\f12, {I_j}})=(\min(1, \frac{\Gamma^m_j}{\lambda F_{j-\f12}-\epsilon}), 1)$;
\item If $F_{j-\f12}< 0$ and $F_{j+\f12}> 0$,
\begin{itemize}
\item when $(\theta_{j-\f12}, \theta_{j+\f12})=(1, 1)$ satisfies (\ref{umin}),
let $(\Lambda^m_{-\f12, {I_j}}, \Lambda^m_{+\f12, {I_j}})=(1, 1)$;
\item otherwise, let $(\Lambda^m_{-\f12, {I_j}}, \Lambda^m_{+\f12, {I_j}})=(\frac{\Gamma^m_j}{\lambda F_{j-\f12}- \lambda F_{j+\f12}-\epsilon},\frac{\Gamma^m_j}{\lambda F_{j-\f12}- \lambda F_{j+\f12}-\epsilon} )$.
\end{itemize}
\end{enumerate}
\item
The locally defined limiting parameter is given as
\begin{eqnarray}
\label{limit1}
\Lambda_{j+\f12}=\min(\Lambda^M_{+\f12, {I_j}}, \Lambda^M_{-\f12, {I_{j+1}}}, \Lambda^m_{+\f12, {I_j}}, \Lambda^m_{-\f12, {I_{j+1}}}), \quad j = 0, \cdots N.
\end{eqnarray}
\end{enumerate}
The flux limiting procedure above guarantees the MPP property of the numerical solution by design. It is theoretically proved to preserve up to fourth order spatial and temporal accuracy for smooth solutions \cite{mpp_xqx, mpp_vp}. For implementation purposes, the case analysis of step 1 translates directly into a small routine, as sketched below.
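The following C sketch computes the pair $(\Lambda^M_{-\f12, I_j}, \Lambda^M_{+\f12, I_j})$ from the flux differences; the variable names are ours, and the minimum value part (\ref{umin}) is handled analogously.
\begin{lstlisting}[numbers=none, language=C]
#include <math.h>

/* Decoupling of the maximum value constraint on one cell, following
   cases (a)-(d): F_m and F_p are the flux differences H^rk - h at
   j-1/2 and j+1/2, gam is Gamma^M_j, lam is dt/dx, and eps is the
   small safety constant (e.g. 1e-13). */
static void decouple_max(double F_m, double F_p, double gam,
                         double lam, double eps,
                         double *L_minus, double *L_plus)
{
    if (F_m <= 0.0 && F_p >= 0.0) {                 /* case (a) */
        *L_minus = 1.0;  *L_plus = 1.0;
    } else if (F_m <= 0.0 && F_p < 0.0) {           /* case (b) */
        *L_minus = 1.0;
        *L_plus  = fmin(1.0, gam / (-lam * F_p + eps));
    } else if (F_m > 0.0 && F_p >= 0.0) {           /* case (c) */
        *L_minus = fmin(1.0, gam / (lam * F_m + eps));
        *L_plus  = 1.0;
    } else {                                        /* case (d) */
        if (lam * F_m - lam * F_p <= gam) {         /* (1,1) admissible */
            *L_minus = 1.0;  *L_plus = 1.0;
        } else {
            double t = gam / (lam * F_m - lam * F_p + eps);
            *L_minus = t;  *L_plus = t;
        }
    }
}
\end{lstlisting}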
\subsection{Scalar advection equations with source terms}
\label{sec2.2}
We consider scalar advection problems with a source term
\begin{eqnarray}
u_t+f(u)_x=s(u).
\label{eq:source1}
\end{eqnarray}
In particular, we consider the class of problems whose solutions enjoy the PP property, that is, the lower bound of the solution is $0$ (such problems do not necessarily satisfy the MPP property). For example, when $s(u)=-k u$ with a positive $k$, with positive initial values and periodic boundary conditions, the solution satisfies the PP property.
The flux limiter is designed based on the PP property of a first order scheme
\begin{eqnarray}
\label{eq:1st}
u^{n+1}_j=u^{n}_j-\lambda (\hat h_{j+\f12}-\hat h_{j-\f12})+ \Delta t s(u^n_j),
\end{eqnarray}
under the time step constraint
\begin{equation}
\Delta t \le \frac{\text{CFL } \Delta x}{\lambda_{max}+ s_{max} \Delta x},
\label{eq:timestep}
\end{equation}
where $\lambda_{max}=\max|f'(u)|$ and $s_{max}=\max|s'(u)|$.
We propose to first modify the source term such that $\tilde{u}^{n+1}_j \ge \epsilon_s$, with $\epsilon_s=\min_j(u^{n+1}_j, 10^{-13})$, where $\{u^{n+1}_j\}$ are positive solutions computed from (\ref{eq:1st}) and $10^{-13}$ is a small positive number related to machine precision. Here $\tilde{u}^{n+1}_j$ is
\begin{eqnarray}
\tilde {u}^{n+1}_j=u^{n}_j-\lambda (\hat h_{j+\f12}-\hat h_{j-\f12})+ \Delta t \tilde{s}^{rk}_j,
\label{eq:source2}
\end{eqnarray}
with
\begin{equation}
\label{eq:mdsource}
\tilde{s}^{rk}_j=r_j (\hat s^{rk}_j-s(u^n_j))+s(u^n_j),
\end{equation}
and
\begin{eqnarray}
\hat s^{rk}_j \doteq \frac{1}{6}\left(s(u^n_j)+ s(u^{(1)}_j) + 4 s(u^{(2)}_j)\right),
\label{eq:rksource}
\end{eqnarray}
as in (\ref{eq:rkflux}). $r_j$ is determined by a linear constraint so as to preserve the PP property of $\{\tilde{u}^{n+1}_j\}_j$. Specifically,
\[
r_j=
\begin{cases}
\min(\frac{\epsilon_s-u^{n+1}_j}{\Delta t \Delta s_j}, 1),& \quad \text{if } \tilde{\tilde{u}}_j < \epsilon_s \\
1, &\quad \text{otherwise }
\end{cases},
\]
where $\Delta s_j=\hat s^{rk}_j-s(u^n_j)$ and $\tilde{\tilde{u}}_j=u^{n}_j-\lambda (\hat h_{j+\f12}-\hat h_{j-\f12})+ \Delta t \hat s^{rk}_j$.
Next the parametrized MPP flux limiters are applied as in (\ref{eq:mpp}) to satisfy
\begin{eqnarray}
\label{eq:pps}
\epsilon_s \le u^{n}_j-\lambda (\tilde H^{rk}_{j+\f12}-\tilde H^{rk}_{j-\f12})+\Delta t \tilde s^{rk}_j .
\end{eqnarray}
(\ref{eq:pps}) leads to the same decomposed inequality (\ref{umin}) for the minimum value part, only
with $\Gamma^m_j$ given by
\begin{eqnarray}
\Gamma^m_j&=&\epsilon_s-u_j+\lambda (\hat h_{j+\f12}-\hat h_{j-\f12})-\Delta t \tilde s^{rk}_j \le 0.
\end{eqnarray}
The procedure proposed above for treating equations with a source term is PP by the design, and is shown to maintain high order accuracy by numerical tests in Section~\ref{sec5}.
\section{Parametrized PP flux limiters for compressible Euler equations }
\label{sec3}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this section, we first extend the parametrized MPP flux limiters to PP flux limiters for the compressible Euler equations. We then describe how to generalize
the proposed approach to systems with source terms and to high dimensional systems. In this section, we use letters in bold for vectors.
\subsection{Parametrized positivity preserving flux limiters}
\label{sec3.1}
For compressible Euler equations in one dimension
\begin{eqnarray}
\label{eq:eulers}
{\bf u}_t+{\bf f}({\bf u})_x=0,
\end{eqnarray}
with ${\bf u}=(\rho, \rho u, E)^T$, ${\bf f}({\bf u})=(\rho u, \rho u^2+ p, (E+p)u )^T$,
where $\rho$ is the density, $u$ is the velocity, $p$ is the pressure, $m=\rho u$ is the momentum, $E=\frac{1}{2}\rho u^2+\frac{p}{\gamma-1}$ is the total energy from the equation of state (EOS) and $\gamma$
is the ratio of specific heats ($\gamma=1.4$ for air).
Let $\hat {\bf h}_{j+\f12}$ denote a first order monotone flux, and $\hat {\bf H}^{rk}_{j+\f12}$ the linear combination of fluxes from the multiple RK stages, similar to equation (\ref{eq:rkflux}), but in a component-by-component fashion.
For positivity preserving, we are seeking the flux limiters of the type
\begin{eqnarray}
\label{mhrk}
\tilde{\bf H}^{rk}_{j+\f12}=\theta_{j+\f12} (\hat{\bf H}^{rk}_{j+\f12}-\hat {\bf h}_{j+\f12})+\hat {\bf h}_{j+\f12}
\end{eqnarray}
such that
\begin{eqnarray}
\label{eq:pp}
\begin{cases}
\rho^{n+1}_{j}>0,\\
p^{n+1}_j>0,
\end{cases}
\end{eqnarray}
for the updated solution
\begin{eqnarray}
\label{eq:rkeuler}
{\bf u}^{n+1}_j={\bf u}^{n}_j-\lambda (\tilde{\bf H}^{rk}_{j+\f12}-\tilde{\bf H}^{rk}_{j-\f12}).
\end{eqnarray}
In the parametrized flux limiters' framework,
a pair of $(\Lambda_{-\f12, {I_j}}, \Lambda_{+\f12, {I_j}})$ is found such that
the updated solution satisfies (\ref{eq:pp}) for any $(\theta_{j-\f12}, \theta_{j+\f12}) \in [0, {\Lambda_{-\f12, {I_j}}}]\times [0, {\Lambda_{+\f12, {I_j}}]}$.
The high order flux $\hat {\bf H}^{rk}_{j+\f12}$ is modified by (\ref{mhrk}) to preserve positive density and pressure. In simulations, preserving positivity is implemented by
\begin{eqnarray}
\label{eq:pp2}
\begin{cases}
\rho^{n+1}_{j}\ge\epsilon_{\rho},\\
p^{n+1}_j\ge\epsilon_p.
\end{cases}
\end{eqnarray}
where we introduce the small positive numbers $\epsilon_{\rho}$ defined by $\min_{j}(\rho^{n+1}_j, 10^{-13})$ and $\epsilon_p$ defined by $\min_{j}(p^{n+1}_j, 10^{-13})$. Here $\rho^{n+1}_j$ and $p^{n+1}_j$ are the positive density and pressure obtained by the first order monotone scheme, and $10^{-13}$ is related to the machine precision.
Let us denote the first order monotone flux by $\hat {\bf h}({\bf u})=(f^\rho, f^m, f^E)^T$, similarly $\hat {\bf H}^{rk}=(\hat f^\rho, \hat f^m, \hat f^E)^T$ and
$\tilde{\bf H}^{rk}=(\tilde f^{\rho}, \tilde f^{m}, \tilde f^{E})^T$.
The proposed process consists of the following two steps.
\begin{enumerate}
\item Find the limiting parameters $\theta_{j\pm\f12}$ to preserve the positivity of the density,
\begin{eqnarray}
\label{density}
\rho^{n+1}_j=\rho^n_j-\lambda (\tilde f^{\rho}_{j+\f12}-\tilde f^{\rho}_{j-\f12}).
\end{eqnarray}
Thus, the limiting parameters $\theta_{j\pm\f12}$ are found to satisfy
\begin{eqnarray}
\label{d1}
\epsilon_{\rho} \le \Gamma_j-\lambda (\theta_{j+\f12} (\hat f^{\rho}_{j+\f12}-f^{\rho}_{j+\f12}) -\theta_{ j-\f12} (\hat f^{\rho}_{j-\f12}-f^{\rho}_{j-\f12})),
\end{eqnarray}
which is equivalent to
\begin{eqnarray}
\label{d2}
0 \le \Gamma_j-\epsilon_{\rho}-\lambda (\theta_{j+\f12} (\hat f^{\rho}_{j+\f12}-f^{\rho}_{j+\f12}) -\theta_{ j-\f12} (\hat f^{\rho}_{j-\f12}-f^{\rho}_{j-\f12})),
\end{eqnarray}
where $\Gamma_j=\rho^n_j-\lambda (f^{\rho}_{j+\f12}- f^{\rho}_{j-\f12}) \ge \epsilon_{\rho}$.
A pair of limiting parameters $(\Lambda^{\rho}_{-\f12, {I_j}}, \Lambda^{\rho}_{+\f12, {I_j}})$ for the positive density of (\ref{d2}) can be identified by a similar procedure as described in Section~\ref{sec2.1}. We can define a set for the positive density $\rho^{n+1}_j$
\begin{eqnarray}
\label{Srho}
S_{\rho} =\{ (\theta_{j-\f12}, \theta_{j+\f12}): 0\le \theta_{j-\f12} \le \Lambda^{\rho}_{-\f12, {I_j}}, 0\le \theta_{j+\f12} \le \Lambda^{\rho}_{+\f12, {I_j}} \},
\end{eqnarray}
which is plotted as the rectangle bounded by the dashed line in Figure~\ref{set}.
\item Find the limiting parameters $\theta_{j\pm\f12}$ within the region $S_{\rho}$ to preserve the positivity of the pressure.
We seek a sufficient condition such that the pressure given by (\ref{eq:rkeuler}) satisfies
\begin{eqnarray}
\label{pre}
p^{n+1}_j (\theta_{j-\f12}, \theta_{j+\f12})=(\gamma-1)\left(E^{n+1}_j-\frac{1}{2} \frac{(m^{n+1}_j )^2}{\rho^{n+1}_j}\right)\ge \epsilon_p.
\end{eqnarray}
The decoupling of (\ref{pre}) for ($\theta_{j-\f12}, \theta_{j+\f12}$) is different from
the scalar case, since the principal variables depend nonlinearly on each other. However, the idea is
still to separate $\theta_{j-\f12}$ and $\theta_{j+\f12}$. Since $\rho^{n+1}_j \ge \epsilon_{\rho}$ is guaranteed by the previous step, we first record the concavity of the pressure \cite{zhang2010positivity} in the following remark for future reference:
\begin{rem}
\label{conv}
The pressure as a function of $(\rho , m, E)$ is concave, i.e., $p(\alpha {\bf U_1}+(1-\alpha) {\bf U_2})\ge \alpha p({\bf U_1})+(1-\alpha) p({\bf U_2})$ for $0\le \alpha \le 1$
if $\rho_1, \rho_2> 0$. Therefore $p^{n+1}_j (\theta_{j-\f12}, \theta_{j+\f12})$ is a concave function of $(\theta_{j-\f12}, \theta_{j+\f12})$ on $S_\rho$ due to the linear dependence of $(\rho^{n+1}_j , m^{n+1}_j, E^{n+1}_j)$ on $(\theta_{j-\f12}, \theta_{j+\f12})$. Therefore, if
$p^{n+1}_j (\vec{\theta}^l) \ge \epsilon_p$, with $\vec{\theta}^l = (\theta^l_{j-\f12}, \theta^l_{j+\f12})$ for $l=1, 2$,
then
$
p^{n+1}_j (\alpha \vec{\theta}^1 + (1-\alpha) \vec{\theta}^2) \ge \epsilon_p, \quad 0\le\alpha\le1.
$
\end{rem}
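As a concrete instance of Remark~\ref{conv} (with $\gamma=1.4$; the numbers are chosen for illustration only), take ${\bf U_1}=(\rho, m, E)^T=(1,0,1)^T$ and ${\bf U_2}=(1,2,3)^T$, so that $p({\bf U_1})=p({\bf U_2})=0.4$, while at the midpoint
\[
p\left(\tfrac12{\bf U_1}+\tfrac12{\bf U_2}\right)=p(1,1,2)=(\gamma-1)\left(2-\frac{1^2}{2\cdot 1}\right)=0.6\ \ge\ \tfrac12\, p({\bf U_1})+\tfrac12\, p({\bf U_2})=0.4.
\]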
We define an admissible set
\begin{eqnarray}
\label{pAD}
S_\theta =\{(\theta_{j-\f12}, \theta_{j+\f12})\in S_\rho: (\theta_{j-\f12}, \theta_{j+\f12}) \text{ satisfies } (\ref{pre})\}.
\end{eqnarray}
$S_\theta$ is a convex set thanks to Remark \ref{conv}. Let the three vertices of the rectangle $S_\rho$ other than $(0, 0)$ be denoted by
\begin{eqnarray}
\label{vert}
A^1=(0, \Lambda^{\rho}_{+\f12, {I_j}}),\quad
A^2=(\Lambda^{\rho}_{-\f12, {I_j}}, 0),\quad
A^3=(\Lambda^{\rho}_{-\f12, {I_j}} , \Lambda^{\rho}_{+\f12, {I_j}}),
\end{eqnarray}
see Figure~\ref{set}. Based on the concave property in Remark~\ref{conv}, we propose the following way of decoupling (\ref{pre}).
\begin{enumerate}
\item For $i=1, 2, 3$, if $p(A^i)\ge \epsilon_p$, let $B^i =A^i$; otherwise find the largest $r\in(0,1)$ such that $p(r A^i)\ge \epsilon_p$ (one way to compute such an $r$ is sketched at the end of this subsection) and let $B^i= r A^i$. The three $B^i$'s and $(0, 0)$ form a convex polygonal region, denoted as $S_p$, inside $S_\theta$. This convex polygonal region $S_p$ is outlined by the dash-dot line in Figure~\ref{set}.
\item We define the decoupling rectangle, as a subset of $S_p$, to be
\begin{eqnarray}
\label{dreg}
R_{\rho, p}=[0, \min(B^2_1, B^3_1)]\times [0, \min(B^1_2, B^3_2)],
\end{eqnarray}
see the region outlined by the solid line in Figure~\ref{set}.
That is, within $S_p$, we find the decoupling rectangle $R_{\rho, p}$ with its left-bottom node at $(0, 0)$ and its right-top node $(\Lambda_{-\f12, {I_j}}, \Lambda_{+\f12, {I_j}})$ as close to $(1, 1)$ as possible, to best preserve the accuracy while achieving the PP property of the high order numerical scheme.
Let
\begin{eqnarray}
\label{dcp}
(\Lambda_{-\f12, {I_j}}, \Lambda_{+\f12, {I_j}})=(\min(B^2_1, B^3_1), \min(B^1_2, B^3_2)).
\end{eqnarray}
\end{enumerate}
\end{enumerate}
Finally, similar to equation \eqref{limit1} for the MPP flux limiters, the locally defined limiting parameter is given as
$\theta_{j+\f12}=\min(\Lambda_{+\f12,{I_j}}, \Lambda_{-\f12,{I_{j+1}}})$.
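Regarding step 2(a) above, since $p^{n+1}_j$ is concave along the ray $\{s A^i: s\in[0,1]\}$ and $p^{n+1}_j(0,0)\ge\epsilon_p$ by the PP property of the first order scheme, the admissible parameters on the ray form an interval containing $0$, and the largest admissible $r$ can be located by bisection. The following C sketch illustrates this; the function handle and iteration count are our choices, and a closed-form root of the quadratic obtained after multiplying (\ref{pre}) through by $\rho^{n+1}_j>0$ could be used instead.
\begin{lstlisting}[numbers=none, language=C]
/* Find the largest s in [0,1] with p_ray(s) >= eps_p by bisection.
   p_ray(s) evaluates the pressure of the updated state when the
   limiting parameters are scaled by s along the ray towards A^i;
   it is concave in s, and p_ray(0) >= eps_p by construction. */
static double scale_vertex(double (*p_ray)(double), double eps_p)
{
    if (p_ray(1.0) >= eps_p)
        return 1.0;                    /* vertex already admissible */
    double lo = 0.0, hi = 1.0;
    for (int it = 0; it < 50; ++it) {  /* interval width ~ 2^-50    */
        double mid = 0.5 * (lo + hi);
        if (p_ray(mid) >= eps_p)
            lo = mid;
        else
            hi = mid;
    }
    return lo;
}
\end{lstlisting}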
\begin{figure}
\centering
\includegraphics[totalheight=3.5in]{./pic/set.eps}
\caption{The decoupling rectangle $R_{\rho,p}$ (bounded by the solid line) with the right-top node $(\Lambda_{-\f12, {I_j}}, \Lambda_{+\f12, {I_j}})$. $S_{\rho}$ is the rectangle bounded by the dashed line. $S_p$ is the polygonal region bounded by the dash-dot line.}
\label{set}
\end{figure}
\begin{rem}
\label{geos}
The limiter above preserves positive density and pressure by design, due to the two sufficient conditions (\ref{d1}) and (\ref{pre}).
For a general equation of state, if $\rho> 0$, then $p>0 \Leftrightarrow e>0$, where the internal energy $e$ can always be written as a concave function of $(\rho, m, E)^T$, similarly to (\ref{pre}) \cite{zhang2012positivity}. A similar procedure can then be followed to enforce the PP property of the numerical solutions.
\end{rem}
\subsection{Extension to Euler system with source term}
\label{sec3.2}
The compressible Euler equations may come with source terms in the form of
\begin{eqnarray}
\label{eq:eulersource}
{\bf u}_t+{\bf f}({\bf u})_x={\bf s}({\bf u}).
\end{eqnarray}
For example, four kinds of source terms were discussed in \cite{zhang2011positivity}: geometric, gravity, chemical
reaction and radiative cooling. The PP flux limiters can be applied by the following three steps.
\begin{enumerate}
\item Choose a time step such that the first order scheme (\ref{step1}) is PP,
\begin{eqnarray}
{\bf u}^{n+1}_j={\bf u}^{n}_j-\lambda (\hat {\bf h}_{j+\f12}-\hat {\bf h}_{j-\f12})+\Delta t {\bf s}({\bf u}^n_j).
\label{step1}
\end{eqnarray}
\item Find $r$ such that the scheme (\ref{step2}) with the modified source terms is PP
\begin{eqnarray}
{\bf u}^{n+1}_j={\bf u}^{n}_j-\lambda (\hat {\bf h}_{j+\f12}-\hat {\bf h}_{j-\f12})+\Delta t \tilde {\bf s}^{rk}_j,
\label{step2}
\end{eqnarray}
with $\tilde {\bf s}^{rk}_j = r(\hat {\bf s}^{rk}_j-{\bf s}({\bf u}^n_j))+{\bf s}({\bf u}^n_j)$, where $\hat {\bf s}^{rk}_j$ is defined component-by-component analogously to (\ref{eq:rksource}).
\item Finally find $\theta_{j\pm\f12}$ for the modified high order flux $\tilde{\bf H}^{rk}_{j+\f12}$, such that (\ref{step3}) is PP
\begin{eqnarray}
{\bf u}^{n+1}_j={\bf u}^{n}_j-\lambda (\tilde {\bf H}^{rk}_{j+\f12}-\tilde {\bf H}^{rk}_{j-\f12})+\Delta t \tilde {\bf s}^{rk}_j.
\label{step3}
\end{eqnarray}
The procedure is similar to that in the previous subsection.
\end{enumerate}
\subsection{Extension to the multi-dimensional Euler system}
\label{sec3.3}
In this subsection, we extend the previously proposed PP flux limiters to the Euler equations
in two dimensions
\begin{equation}
{\bf u}_t+{\bf f}({\bf u})_x+{\bf g}({\bf u})_y=0,
\label{eq:euler2d}
\end{equation}
with ${\bf u}=(\rho, m_u, m_v, E)^T$, ${\bf f}({\bf u})=(m_u, \rho u^2+p, \rho u v, (E+p)u)^T$ and
${\bf g}({\bf u})=(m_v, \rho u v, \rho v^2+p, (E+p)v)^T$.
$\rho$ is the density, $u$ is the velocity in $x$ direction, $v$ is the velocity in $y$ direction,
$p$ is the pressure, $m_u=\rho u$ and $m_v=\rho v$ are the momenta, $E=\frac{1}{2}\rho u^2+\frac{1}{2}\rho v^2+\frac{p}{\gamma-1}$ is the total energy and $\gamma$ is the ratio of specific heat.
The high order finite difference scheme with PP flux limiters at the final stage of a RK time discretization
is given by
\begin{eqnarray}
{\bf u}^{n+1}_{i,j}={\bf u}^{n}_{i,j}-\lambda_x(\tilde {\bf H}^{rk}_{i+\f12,j}-\tilde {\bf H}^{rk}_{i-\f12,j})
-\lambda_y(\tilde {\bf G}^{rk}_{i,j+\f12}- \tilde {\bf G}^{rk}_{i,j-\f12}),
\end{eqnarray}
with
\begin{eqnarray}
\label{mhrk2d}
\tilde {\bf H}^{rk}_{i+\f12,j}&=&\theta_{i+\f12,j}(\hat {\bf H}^{rk}_{i+\f12,j}-\hat {\bf h}_{i+\f12,j})+\hat {\bf h}_{i+\f12,j}, \\
\label{mgrk2d}
\tilde {\bf G}^{rk}_{i,j+\f12}&=&\theta_{i,j+\f12}(\hat {\bf G}^{rk}_{i,j+\f12}-\hat {\bf g}_{i,j+\f12})+\hat {\bf g}_{i,j+\f12},
\end{eqnarray}
where $\hat {\bf H}^{rk}_{i+\f12,j}$ and $\hat {\bf G}^{rk}_{i,j+\f12}$ are linear combinations of fluxes from multiple RK stages similarly as (\ref{eq:rkflux}) in the scalar case but in a component-wise fashion, $\hat {\bf h}_{i+\f12,j}$ and $\hat {\bf g}_{i,j+\f12}$ are first order monotone fluxes.
Similar to the 1D case, we find the four parametrized limiters $\Lambda^{\rho}_{L, {I_{ij}}}$, $\Lambda^{\rho}_{R,{I_{ij}}}$,
$\Lambda^{\rho}_{U, {I_{ij}}}$ and $\Lambda^{\rho}_{D,{I_{ij}}}$, such that for all $\theta_{i\pm\f12,j}$ and $\theta_{i,j\pm\f12}$
in the set
\begin{align}
\label{Srho2d}
S_{\rho} = &\{ (\theta_{i-\f12,j}, \theta_{i+\f12,j},\theta_{i,j-\f12}, \theta_{i,j+\f12}): 0\le \theta_{i-\f12,j} \le \Lambda^{\rho}_{L, {I_{ij}}}, \nonumber \\
&0\le \theta_{i+\f12,j} \le \Lambda^{\rho}_{R, {I_{ij}}},0\le \theta_{i,j-\f12} \le \Lambda^{\rho}_{D, {I_{ij}}},
0\le \theta_{i,j+\f12} \le \Lambda^{\rho}_{U, {I_{ij}}} \}
\end{align}
we have $\rho^{n+1}_{i,j}\ge\epsilon_\rho$. With the positive density $\rho^{n+1}_{i,j}$, the pressure is required to satisfy the constraint
\begin{align}
\label{pre2d}
p^{n+1}_{i,j}& (\theta_{i-\f12,j}, \theta_{i+\f12,j},\theta_{i,j-\f12}, \theta_{i,j+\f12})=\nonumber \\
&(\gamma-1)(E^{n+1}_{i,j} -\frac{1}{2} \frac{((m_u)^{n+1}_{i,j} )^2+((m_v)^{n+1}_{i,j} )^2}{\rho^{n+1}_{i,j}})\ge \epsilon_p.
\end{align}
Let the convex admissible set for positive pressure be
\begin{align}
\label{pAD2d}
S_\theta =\{(\theta_{i-\f12,j}, \theta_{i+\f12,j},\theta_{i,j-\f12}, \theta_{i,j+\f12})\in S_\rho:
(\theta_{i-\f12,j}, \theta_{i+\f12,j},\theta_{i,j-\f12}, \theta_{i,j+\f12}) \text{ satisfies } (\ref{pre2d})\}
\end{align}
Let the sixteen vertices of $S_\rho$ be denoted by
\begin{equation}
A^{k_1,k_2,k_3, k_4}=(k_1 \Lambda^{\rho}_{L,{I_{ij}}}, k_2 \Lambda^{\rho}_{R,{I_{ij}}},k_3 \Lambda^{\rho}_{D,{I_{ij}}},k_4 \Lambda^{\rho}_{U,{I_{ij}}}),
\end{equation}
with each of $k_1, k_2, k_3, k_4$ being $0$ or $1$. We decouple (\ref{pre2d}) in the following way:
\begin{enumerate}
\item For $(k_1,k_2,k_3,k_4)\neq(0,0,0,0)$, if $p(A^{k_1,k_2,k_3,k_4})\ge \epsilon_p$, let $B^{k_1,k_2,k_3,k_4} =A^{k_1,k_2,k_3,k_4}$;
otherwise find $r$ such that $p(r A^{k_1,k_2,k_3,k_4})\ge \epsilon_p$ and let $B^{k_1,k_2,k_3,k_4} =r A^{k_1,k_2,k_3,k_4}$.
The 15 $B^{k_1,k_2,k_3,k_4}$'s together with the origin $(0, 0,0,0)$ form a four-dimensional polytope inside $S_{\theta}$;
\item The decoupling tesseract can be defined by
\begin{align}
\label{dreg2d}
R_{\rho, p}=&[0, \min(B^{1,1,1,0}_1, B^{1,1,0,1}_1,B^{1,0,1,1}_1)] \times [0, \min(B^{1,1,1,0}_2, B^{1,1,0,1}_2,B^{0,1,1,1}_2)] \nonumber \\
\times& [0, \min(B^{1,1,1,0}_3, B^{1,0,1,1}_3, B^{0,1,1,1}_3)]\times [0, \min(B^{1,1,0,1}_4, B^{1,0,1,1}_4,B^{0,1,1,1}_4)].
\end{align}
Let
\begin{align}
\label{dcp2d}
(\Lambda_{L, {I_{ij}}}&, \Lambda_{R, {I_{ij}}},\Lambda_{D, {I_{ij}}}, \Lambda_{U, {I_{ij}}}) =
(\min(B^{1,1,1,0}_1, B^{1,1,0,1}_1,B^{1,0,1,1}_1), \min(B^{1,1,1,0}_2, B^{1,1,0,1}_2, \nonumber \\
&B^{0,1,1,1}_2),\min(B^{1,1,1,0}_3, B^{1,0,1,1}_3, B^{0,1,1,1}_3), \min(B^{1,1,0,1}_4, B^{1,0,1,1}_4,B^{0,1,1,1}_4)).
\end{align}
\end{enumerate}
Finally, similar to equation \eqref{limit1} for the MPP flux limiters, the locally defined limiting parameters are given as
$\theta_{i+\f12,j}=\min(\Lambda_{L,{I_{ij}}}, \Lambda_{R,{I_{i+1,j}}})$ and
$\theta_{i,j+\f12}=\min(\Lambda_{D,{I_{ij}}}, \Lambda_{U,{I_{i,j+1}}})$.
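A minimal Python sketch of this decoupling is given below; it assumes a callable \texttt{pressure} that evaluates $p^{n+1}_{i,j}$ at a given 4-tuple of limiting parameters (with $p\ge\epsilon_p$ at the origin, i.e. for the first order scheme), and uses a bisection to find $r$, whereas in practice a closed-form root of the concave pressure constraint can be used.
\begin{verbatim}
import numpy as np
from itertools import product

# Lam_rho: the four density limiters (Lam_L, Lam_R, Lam_D, Lam_U).
def decouple_2d(Lam_rho, pressure, eps_p, rtol=1e-12):
    B = {}
    for k in product((0, 1), repeat=4):
        if k == (0, 0, 0, 0):
            continue
        A = np.array(k, dtype=float) * Lam_rho
        if pressure(A) >= eps_p:
            B[k] = A
        else:
            lo, hi = 0.0, 1.0      # pressure(0*A) >= eps_p by assumption
            while hi - lo > rtol:  # bisect for r with pressure(r*A) >= eps_p
                rm = 0.5 * (lo + hi)
                if pressure(rm * A) >= eps_p:
                    lo = rm
                else:
                    hi = rm
            B[k] = lo * A
    # decoupled limiters: componentwise minima as in the decoupling above
    LamL = min(B[1,1,1,0][0], B[1,1,0,1][0], B[1,0,1,1][0])
    LamR = min(B[1,1,1,0][1], B[1,1,0,1][1], B[0,1,1,1][1])
    LamD = min(B[1,1,1,0][2], B[1,0,1,1][2], B[0,1,1,1][2])
    LamU = min(B[1,1,0,1][3], B[1,0,1,1][3], B[0,1,1,1][3])
    return LamL, LamR, LamD, LamU
\end{verbatim}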
\begin{rem}
For the two-dimensional compressible Euler equations with source terms, the PP flux limiters can be applied similarly to the one-dimensional case.
\end{rem}
\section{Numerical simulations}
\label{sec5}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this section, we will use the 5th order finite difference WENO scheme for space discretization \cite{Jiang_Shu} and a 4th order Runge-Kutta time discretization \cite{shu1988efficient}, denoted as ``WENO5RK4'',
with the proposed PP flux limiters for simulating the compressible Euler equations.
Here a 4th order RK time discretization is adopted, with the time step taken to be $\Delta t=\text{CFL } \Delta x$, so that the spatial accuracy can be observed more clearly. Most of the tests are from
\cite{zhang2012positivity}. Below, $\text{CFL }=0.6$ unless otherwise specified.
\begin{exa}(Accuracy test for a scalar problem with a source term.)
\label{ex2}
We consider $u_t+u_x=-u$ with the initial condition
\begin{equation*}
u(x,0)=\sin^4(x),
\end{equation*}
and the periodic boundary condition. The exact solution is given by
\begin{equation*}
u(x,t)=e^{-t}\sin^4(x-t).
\end{equation*}
The minimum value of the exact solution is $u_{m}=0$. This example is used to test the PP property
and the accuracy of the scheme when dealing with a source term. In Table \ref{tab2}, we can see that the PP property
is preserved and the 5th order accuracy is maintained.
\begin{table}
\centering
\caption{Example \ref{ex2}. A scalar advection problem with a source term at $T=0.1$. $v_{min}$ is the minimum value of the numerical solution.}
\vspace{0.2cm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& N & $L^1$ error & order & $L^\infty$ error & order & $v_{min}$ \\\hline
\multirow{5}{*}{without limiters}
& 40 & 3.36E-04 & --& 8.78E-04 & --& -1.35E-04 \\ \cline{2-7}
& 80 & 2.03E-05 & 4.05& 1.24E-04 & 2.82& -1.05E-05 \\ \cline{2-7}
& 160 & 6.75E-07 & 4.91& 6.25E-06 & 4.31& -1.88E-06 \\ \cline{2-7}
& 320 & 1.67E-08 & 5.34& 1.29E-07 & 5.60& -3.02E-09 \\ \cline{2-7}
& 640 & 4.30E-10 & 5.28& 2.60E-09 & 5.63& -5.28E-11 \\ \hline
\multirow{5}{*}{with limiters}
& 40 & 3.25E-04 & --& 8.66E-04 & --& 5.67E-15 \\ \cline{2-7}
& 80 & 1.92E-05 & 4.08& 1.17E-04 & 2.89& 1.18E-05 \\ \cline{2-7}
& 160 & 6.38E-07 & 4.91& 6.25E-06 & 4.22& 3.01E-16 \\ \cline{2-7}
& 320 & 1.67E-08 & 5.26& 1.29E-07 & 5.60& 6.33E-10 \\ \cline{2-7}
& 640 & 4.31E-10 & 5.28& 2.60E-09 & 5.63& 3.46E-11 \\ \hline
\end{tabular}
\label{tab2}
\end{table}
\end{exa}
\begin{exa}(Accuracy test for the global Lax-Friedrichs flux.)
\label{ex22}
We consider the Burgers' equation with the initial condition
\begin{equation*}
u(x,0)=(1+\sin(x))/2
\end{equation*}
and a periodic boundary condition.
We consider the WENO5RK4 scheme with the global Lax-Friedrichs (LxF) fluxes. Let
\begin{equation}
\label{eq: g_LxF}
f^\pm_i=\f12(f(u^n_i)\pm \alpha u^n_i),\quad i=j-p, \cdots, j+q,
\end{equation}
with $\alpha\ge \max_{u} |f'(u)|$. The numerical flux $\hat H_{j+\f12}=f^-_{j+\f12}+f^+_{j+\f12}$ in (\ref{eq: semi-discrete}), where $f^\pm_{j+\f12}$ are reconstructed based on WENO schemes from (\ref{eq: g_LxF}) with the corresponding upwind mechanism. We numerically investigate the time step restriction for maintaining high order accuracy using the global Lax-Friedrichs flux, since it is frequently used in the computation of the Euler system. In \cite{mpp_xqx}, local truncation analysis is performed to prove that MPP flux limiters can maintain up to third order accuracy of the original scheme with no additional CFL constraint (i.e. $\text{CFL}\le1$) when the upwind flux is used. However, when the global LxF flux with extra large $\alpha$ in equation \eqref{eq: g_LxF} is used, there is a mild time step restriction with $\text{CFL}\le0.886$. It is technically challenging to theoretically estimate such a time step restriction for maintaining high order accuracy (e.g. fifth order) of the MPP flux limiters even for scalar equations; therefore we rely on extensive numerical tests.
We consider the scheme with the global Lax-Friedrichs flux with extra large $\alpha = 1.3$ (greater than $\max_u|f'(u)|=1$). The time step is chosen to be $\Delta t=\text{CFL} \Delta x / \alpha$. In Table \ref{tab22}, we show that for the 5th order linear scheme (linear weights instead of nonlinear weights in WENO5) with the 4th order Runge-Kutta time discretization, when $\text{CFL}=0.886$, the 5th order accuracy is maintained with the MPP flux limiters. In fact, $\text{CFL}=0.886$ works for all other $\alpha$'s we tested;
the results are not listed here to save space.
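The splitting itself is straightforward; a minimal sketch for Burgers' flux is shown below, assuming the WENO reconstruction of the split fluxes is carried out elsewhere.
\begin{verbatim}
import numpy as np

# Schematic global Lax-Friedrichs splitting for f(u) = u^2/2; the +/-
# parts are then reconstructed by WENO with the corresponding upwind bias.
def global_lxf_split(u, alpha):
    f = 0.5 * u**2
    return 0.5 * (f + alpha * u), 0.5 * (f - alpha * u)

x = np.linspace(0.0, 2.0 * np.pi, 80, endpoint=False)
u = 0.5 * (1.0 + np.sin(x))
alpha = 1.3                         # extra large: > max|f'(u)| = 1
fp, fm = global_lxf_split(u, alpha)
dt = 0.886 * (x[1] - x[0]) / alpha  # the numerically observed CFL bound
\end{verbatim}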
\begin{table}
\centering
\caption{Example \ref{ex22}. Burgers' equation at $T=0.2$. $\alpha=1.3$ for the global LxF flux (\ref{eq: g_LxF}). $\Delta t=0.886 \Delta x / \alpha $. $u_{max}-v_{max}$ is the difference of the maximum values between the numerical solution and the exact solution.}
\vspace{0.2cm}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& N & $L^1$ error & order & $L^\infty$ error & order & $u_{max}-v_{max}$ \\\hline
\multirow{5}{*}{without limiters}
& 40 & 2.05E-04 & --& 2.76E-03 & -- & -5.47E-06 \\ \cline{2-7}
& 80 & 1.20E-05 & 4.09& 2.33E-04 & 3.57 & -2.24E-07 \\ \cline{2-7}
& 160 & 4.32E-07 & 4.79& 9.68E-06 & 4.59 & -1.98E-08 \\ \cline{2-7}
& 320 & 1.38E-08 & 4.97& 3.15E-07 & 4.94 & -1.37E-09 \\ \cline{2-7}
& 640 & 4.39E-10 & 4.98& 1.01E-08 & 4.97 & -8.92E-11 \\ \hline
\multirow{5}{*}{with limiters}
& 40 & 2.06E-04 & --& 2.76E-03 & -- & 7.33E-06 \\ \cline{2-7}
& 80 & 1.20E-05 & 4.10& 2.33E-04 & 3.57 & 9.99E-14 \\ \cline{2-7}
& 160 & 4.32E-07 & 4.79& 9.68E-06 & 4.59 & 1.00E-13 \\ \cline{2-7}
& 320 & 1.38E-08 & 4.97& 3.15E-07 & 4.94 & 1.00E-13 \\ \cline{2-7}
& 640 & 4.39E-10 & 4.98& 1.01E-08 & 4.97 & 9.99E-14 \\ \hline
\end{tabular}
\label{tab22}
\end{table}
\end{exa}
\begin{exa}{(Accuracy test for 2D vortex evolution problem.)}
\label{ex1}
We consider the vortex evolution problem \cite{hu1999weighted} to test the accuracy.
For this problem, the mean flow is $\rho=p=u=v=1$, to which an isentropic vortex perturbation centered at $(x_0,y_0)$ is added in $(u,v)$ and in the temperature $T=p/\rho$, with no perturbation in the entropy $S=p/\rho^\gamma$:
\begin{eqnarray}
(\delta u,\delta v)=\frac{\varepsilon_{vortex}}{2\pi}e^{0.5(1-r^2)}(-\bar{y},\bar{x}),
\quad \delta T=-\frac{(\gamma-1)\varepsilon_{vortex}^2}{8\gamma\pi^2}e^{(1-r^2)},
\quad \delta S=0,
\end{eqnarray}
where $(\bar{x},\bar{y})=(x-x_0,y-y_0)$, $r^2=\bar{x}^2+\bar{y}^2$.
The computational domain is taken to be $[-5, 15]\times[-5, 15]$ and $(x_0,y_0)=(5,5)$.
The boundary condition is periodic. $\gamma=1.4$ and the vortex strength is $\varepsilon_{vortex}=10.0828$ as in \cite{zhang2012positivity}. The exact solution is the passive convection of the vortex with the mean flow. The lowest density and pressure of the exact solution are $7.8\times 10^{-15}$ and $1.7\times 10^{-20}$.
$\epsilon_{WENO}$ in the nonlinear WENO weights is chosen to be $10^{-5}$, which is between $10^{-2}$ and $10^{-6}$ \cite{hu1999weighted}. In Table \ref{tab1}, we can clearly observe the 5th order accuracy with the PP flux limiters.
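A minimal sketch of this initial condition (using the stated mean flow and the isentropic relations $T=p/\rho$, $p=\rho^\gamma$) is:
\begin{verbatim}
import numpy as np

gamma, eps_v = 1.4, 10.0828
x, y = np.meshgrid(np.linspace(-5, 15, 128), np.linspace(-5, 15, 128))
xb, yb = x - 5.0, y - 5.0
r2 = xb**2 + yb**2
du = -eps_v / (2*np.pi) * np.exp(0.5*(1 - r2)) * yb
dv =  eps_v / (2*np.pi) * np.exp(0.5*(1 - r2)) * xb
dT = -(gamma - 1) * eps_v**2 / (8*gamma*np.pi**2) * np.exp(1 - r2)
T = 1.0 + dT                     # mean T = p/rho = 1, dS = 0
rho = T**(1.0/(gamma - 1.0))     # from S = p/rho**gamma = 1
p = rho * T
u, v = 1.0 + du, 1.0 + dv
print(rho.min(), p.min())        # tiny but positive
\end{verbatim}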
\begin{table}
\centering
\caption{Example \ref{ex1}. Vortex evolution problem at $T=0.01$. $\epsilon_{WENO}=10^{-5}$. $\rho_{min}$ and
$p_{min}$ are the minimum density and pressure of the numerical solution respectively.}
\vspace{0.2cm}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& N & $L^1$ error & order & $L^\infty$ error & order & $\rho_{min}$ & $p_{min}$\\ \hline
\multirow{5}{*}{without limiters}
& 64 & 1.49E-04 & -- & 5.25E-02 & -- & -9.10E-05 & 2.79E-04 \\ \cline{2-8}
& 128 & 1.57E-06 & 6.57 & 5.39E-04 & 6.61 & 3.04E-06 & -2.16E-06 \\ \cline{2-8}
& 256 & 1.29E-07 & 3.60 & 1.37E-04 & 1.97 & -2.83E-06 & -4.68E-07 \\ \cline{2-8}
& 512 & 4.69E-09 & 4.79 & 3.37E-06 & 5.35 & 2.42E-07 & 1.27E-08 \\ \cline{2-8}
& 1024 & 1.15E-10 & 5.35 & 7.92E-08 & 5.41 & 1.31E-08 & 1.87E-10 \\ \hline
\multirow{5}{*}{with limiters}
& 64 & 1.49E-04 & -- & 5.25E-02 & -- & 6.30E-04 & 1.91E-04 \\ \cline{2-8}
& 128 & 1.57E-06 & 6.57 & 5.39E-04 & 6.61 & 3.72E-06 & 1.00E-13 \\ \cline{2-8}
& 256 & 1.32E-07 & 3.57 & 1.30E-04 & 2.05 & 2.42E-07 & 5.36E-10 \\ \cline{2-8}
& 512 & 4.69E-09 & 4.81 & 3.37E-06 & 5.27 & 2.42E-07 & 1.27E-08 \\ \cline{2-8}
& 1024 & 1.15E-10 & 5.35 & 7.92E-08 & 5.41 & 1.31E-08 & 1.87E-10 \\ \hline
\end{tabular}
\label{tab1}
\end{table}
\end{exa}
\begin{exa}{(1D low density and low pressure problems.)}
\label{ex3}
We consider two 1D low density and low pressure problems for the ideal gas.
The first one is a 1D Riemann problem, the initial condition is $\rho_L=\rho_R=7$,
$u_L=-1$, $u_R=1$, $p_L=p_R=0.2$ and $\gamma=1.4$, which is a double rarefaction problem.
The exact solution contains vacuum. In Fig. \ref{fig52} (left), we show the results with
the PP flux limiters at $T=0.6$ on a mesh size of $\Delta x=1/200$.
The second one is the 1D Sedov blast wave. For the initial condition, the density is $1$,
the velocity is $0$, and the total energy is $10^{-12}$ everywhere except in the center cell, where it
takes the constant value $E_0/\Delta x$ with $E_0=3200000$. $\gamma=1.4$. In Fig. \ref{fig52} (right),
we show the results with the PP flux limiters at $T=0.001$ on a mesh size of $\Delta x=1/200$.
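A schematic setup of the two initial conditions (assuming, for definiteness, a domain $[-1,1]$ with cell centers) reads:
\begin{verbatim}
import numpy as np

N, dx = 400, 1.0 / 200
x = (np.arange(N) + 0.5) * dx - 1.0            # assumed domain [-1, 1]

# double rarefaction: constant density/pressure, outward velocities
rho = np.full(N, 7.0)
u = np.where(x < 0.0, -1.0, 1.0)
p = np.full(N, 0.2)

# 1D Sedov: energy concentrated in the center cell
rho_s, u_s = np.ones(N), np.zeros(N)
E_s = np.full(N, 1e-12)
E_s[N // 2] = 3200000.0 / dx
p_s = 0.4 * E_s                                # p = (gamma-1) E, u = 0
\end{verbatim}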
\begin{figure}
\centering
\includegraphics[totalheight=2.0in]{./pic/den1.eps},
\includegraphics[totalheight=2.0in]{./pic/den2.eps}\\
\includegraphics[totalheight=2.0in]{./pic/pre1.eps},
\includegraphics[totalheight=2.0in]{./pic/pre2.eps}\\
\includegraphics[totalheight=2.0in]{./pic/vex1.eps},
\includegraphics[totalheight=2.0in]{./pic/vex2.eps}
\caption{Example \ref{ex3}. Left: double rarefaction problem at $T=0.6$. Right: 1D Sedov blast wave at $T=0.001$. $\Delta x=\frac{1}{200}$. The solid lines are the exact solutions. Symbols are the numerical solutions.}
\label{fig52}
\end{figure}
\end{exa}
\begin{exa}{(2D low density and low pressure problems.)}
\label{ex4}
Now we consider two 2D low density and low pressure problems for the ideal gas.
The first one is the 2D Sedov blast wave. The computational domain is a square of $[0, 1.1]\times[0, 1.1]$.
For the initial condition, similar to the 1D case, the density is $1$, the velocity is $0$, and the total energy is $10^{-12}$ everywhere except in the lower left corner cell, where it is the constant $\frac{0.244816}{\Delta x \Delta y}$.
$\gamma=1.4$. The numerical boundary on the left and bottom edges is reflective. In Fig. \ref{fig53} (left), we show the numerical density at the mesh sizes $\Delta x=\Delta y=\frac{1.1}{160}$ with the PP flux limiters at $T=1$. The numerical solution with cutting along the diagonal matches the exact solution very well in Fig. \ref{fig53} (right).
\begin{figure}
\centering
\includegraphics[totalheight=2.5in,angle=270]{./pic/den32.eps}
\includegraphics[totalheight=2.5in,angle=270]{./pic/den3c2.eps}\\
\caption{Example \ref{ex4}. 2D Sedov blast wave. $T=1$. $\Delta x=\Delta y=\frac{1.1}{160}$.
Left: contour of density. Right: cut along diagonal, the solid line is the exact solution, symbols are
the numerical solution.}
\label{fig53}
\end{figure}
The second one is the shock diffraction problem. The computational domain is the union of
$[0,1]\times[6,11]$ and $[1,13]\times[0,11]$. The initial condition is a pure right-moving shock
of $Mach=5.09$, initially located at $x=0.5$ and $6\le y \le 11$, moving into undisturbed air ahead
of the shock. The undisturbed air has a density of $1.4$ and a pressure of $1$. The boundary conditions
are inflow at $x=0$, $6\le y \le 11$, outflow at $x=13$, $0\le y \le 11$, $1\le x\le 13$, $y=0$ and $0\le x \le 13$, $y=11$, and reflective at the walls $0\le x \le 1$, $y=6$ and $x=1$, $0\le y \le 6$. $\gamma=1.4$.
The density and pressure at the mesh sizes $\Delta x=\Delta y=\frac{1}{32}$ with the PP flux limiters at $T=2.3$ are presented in Fig. \ref{fig532}.
\begin{figure}
\centering
\includegraphics[totalheight=2.5in,angle=270]{./pic/den42.eps},
\includegraphics[totalheight=2.5in,angle=270]{./pic/pre42.eps}
\caption{Example \ref{ex4}. 2D shock diffraction problem. $T=2.3$. $\Delta x=\Delta y=\frac{1}{32}$.
Left: density, 20 equally spaced contour lines from $\rho=0.066227$ to $\rho=7.0668$. Right: pressure,
40 equally spaced contour lines from $p=0.091$ to $p=37$.}
\label{fig532}
\end{figure}
\end{exa}
\begin{exa}{(High Mach number astrophysical jets.)}
\label{ex5}
We consider two high Mach number astrophysical jets without radiative cooling \cite{ha2008positive, zhang2012positivity}.
The first one is a Mach 80 problem. $\gamma=5/3$. The computational domain is $[0,2]\times[-0.5,0.5]$,
which is full of the ambient gas with $(\rho, u, v, p)=(0.5,0,0,0.4127)$ initially. The boundary conditions for the right, top and bottom are outflows. For the left boundary, $(\rho, u, v, p)=(5,30,0,0.4127)$ if $y\in[-0.05, 0.05]$ and $(\rho, u, v, p)=(5,0,0,0.4127)$ otherwise. The numerical density on a mesh of $448\times224$ grid points with the PP flux limiters at $T=0.07$ is shown in Fig. \ref{fig54} (left).
Then a Mach 2000 problem is considered to show the robustness of the scheme with the PP flux limiters. The computational domain is taken as $[0,1]\times[-0.25,0.25]$, initially full of the ambient gas with
$(\rho, u, v, p)=(0.5,0,0,0.4127)$. Similarly, the right, top and bottom boundaries are outflows. For the left
boundary, $(\rho, u, v, p)=(5,800,0,0.4127)$ if $y\in[-0.05, 0.05]$ and $(\rho, u, v, p)=(5,0,0,0.4127)$ otherwise. The numerical density at a mesh of $800\times400$ grid points with the PP flux limiters at $T=0.001$ is shown in Fig. \ref{fig54} (right).
\begin{figure}
\centering
\includegraphics[totalheight=2.5in,angle=270]{./pic/Mach80jet2.eps}
\includegraphics[totalheight=2.5in,angle=270]{./pic/Mach2000jet2.eps}
\caption{Example \ref{ex5}. High Mach number astrophysical jets.
Left: density of Mach 80 at $T=0.07$ with mesh $448\times224$.
Right: density of Mach 2000 at $T=0.001$ with mesh $800\times400$.}
\label{fig54}
\end{figure}
\end{exa}
\begin{exa}{(The reactive Euler equations.)}
\label{ex6}
We consider the following two-dimensional Euler equations with a source term, which are often used to model detonation waves \cite{wang2011robust, zhang2012positivity}:
\begin{align}
& \mathbf{u}_t+\mathbf{f}(\mathbf{u})_x+\mathbf{g}(\mathbf{u})_y=\mathbf{s}(\mathbf{u}),
\quad t\ge0,\quad (x,y)\in \mathbb{R}^2,
\label{react1}\\
& \mathbf{u}=
\begin{pmatrix}
\rho \\
m_u \\
m_v \\
E \\
\rho Y
\end{pmatrix}, \quad
\mathbf{f}(\mathbf{u})=
\begin{pmatrix}
m_u \\
\rho u^2+p \\
\rho u v \\
(E+p)u \\
\rho u Y
\end{pmatrix} ,\quad
\mathbf{g}(\mathbf{u})=
\begin{pmatrix}
m_v \\
\rho u v \\
\rho v^2+p\\
(E+p)v \\
\rho v Y
\end{pmatrix},\quad
\mathbf{s}(\mathbf{u})=
\begin{pmatrix}
0 \\
0 \\
0 \\
0 \\
\omega
\end{pmatrix}
,
\label{react2}
\end{align}
with
\begin{equation*}
m_u=\rho u,\quad m_v=\rho v,\quad E=\frac{1}{2}\rho u^2+\frac{1}{2}\rho v^2+\frac{p}{\gamma-1}+\rho q Y,
\end{equation*}
where $q$ is the heat release rate of reaction, $\gamma$ is the specific heat ratio and $Y$ is the reactant
mass fraction. The source term is assumed to be in an Arrhenius form
\begin{equation}
\omega=-\tilde{K}\rho Y\exp(-\tilde{T}/T),
\label{omega}
\end{equation}
where $T=\frac{p}{\rho}$ is the temperature, $\tilde{T}$ is the activation temperature and $\tilde{K}$ is a constant.
The eigenvalues of the Jacobian $\mathbf{f}'(\mathbf{u})$ are $u-c, u, u, u, u+c$ and the eigenvalues of the
Jacobian $\mathbf{g}'(\mathbf{u})$ are $v-c, v, v, v, v+c$, where $c=\sqrt{\gamma\frac{p}{\rho}}$.
The computational domain for this problem is the union of $[0,1]\times[2,5]$ and $[1,5]\times[0,5]$. The initial
conditions are, if $x<0.5$, $(\rho,u,v,E,Y)=(11,6.18,0,970,1)$; otherwise, $(\rho,u,v,E,Y)=(1,0,0,55,1)$. The boundary conditions are reflective except at $x=0$, where $(\rho,u,v,E,Y)=(11,6.18,0,970,1)$ is imposed. Here the parameters are chosen to be $\gamma=1.2$, $q=50$, $\tilde{T}=50$ and $\tilde{K}=2566.4$.
This problem is similar to the shock diffraction problem in Example \ref{ex4}, but this one has a source term. The time step is taken to be
\begin{equation}
\Delta t=\frac{\text{CFL}}{ \lambda_{max}(\frac{1}{\Delta x}+\frac{1}{\Delta y})+\tilde{K} },
\end{equation}
where $\lambda_{max}=\max\{\||u|+c\|_{\infty}, \||v|+c\|_{\infty}\}$ over all grid points, and $\tilde{K}$ comes from the source term (\ref{omega}), such that the first order monotone scheme is PP.
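A minimal sketch of the source term evaluation and of this time step restriction (with illustrative field arrays) is:
\begin{verbatim}
import numpy as np

gamma, T_act, K = 1.2, 50.0, 2566.4
CFL, dx, dy = 0.6, 5.0/400, 5.0/400

def omega(rho, p, Y):                  # Arrhenius source term
    T = p / rho
    return -K * rho * Y * np.exp(-T_act / T)

def time_step(rho, u, v, p):
    c = np.sqrt(gamma * p / rho)
    lam_max = max(np.max(np.abs(u) + c), np.max(np.abs(v) + c))
    return CFL / (lam_max * (1.0/dx + 1.0/dy) + K)

# illustrative (made-up) admissible states
print(time_step(np.array([1.0, 11.0]), np.array([0.0, 6.18]),
                np.zeros(2), np.array([1.0, 42.0])))
\end{verbatim}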
The numerical density and pressure at a mesh of $400\times400$ grid points with the PP flux limiters at $T=0.6$ are shown in Fig. \ref{fig55}, which are comparable to the results in \cite{wang2011robust, zhang2012positivity}.
\begin{figure}
\centering
\includegraphics[totalheight=2.5in,angle=270]{./pic/den52.eps}
\includegraphics[totalheight=2.5in,angle=270]{./pic/pre52.eps}
\caption{Example \ref{ex6}. Detonation diffraction at a $90^\circ$ corner. $T=0.6$.
Mesh $400\times400$. Left: density; Right: pressure.}
\label{fig55}
\end{figure}
\end{exa}
\begin{exa}{(General equation of state.)}
\label{ex7}
We consider the three-species model of the one-dimensional Euler system with a more general equation of state from \cite{wang2009high, zhang2012positivity}.
The model involves three species, $O_2$, $O$ and $N_2$ ($\rho_1=\rho_O$, $\rho_2=\rho_{O_2}$ and $\rho_3=\rho_{N_2}$) with the reaction
\begin{equation}
O_2 + N_2 \rightleftharpoons O + O + N_2.
\end{equation}
The governing equations are
\begin{eqnarray}
\begin{pmatrix}
\rho_1 \\
\rho_2 \\
\rho_3 \\
\rho u \\
E
\end{pmatrix}_t
+
\begin{pmatrix}
\rho_1 u \\
\rho_2 u \\
\rho_3 u \\
\rho u^2 + p \\
(E+p) u
\end{pmatrix}_x
=
\begin{pmatrix}
2M_1\omega \\
-M_2\omega \\
0 \\
0 \\
0
\end{pmatrix}
,
\label{eos}
\end{eqnarray}
and
\begin{equation}
\rho=\sum_{s=1}^3\rho_s, \quad p=RT\sum_{s=1}^3\frac{\rho_s}{M_s}, \quad E=\sum_{s=1}^3 \rho_s e_s(T)+\rho_1 h_1^0+\frac{1}{2}\rho u^2,
\end{equation}
where the enthalpy $h_1^0$ is a constant, $R$ is the universal gas constant, $M_s$ is the molar mass of species $s$, and the
internal energy $e_s(T)=\frac{3RT}{2M_s}$ and $\frac{5RT}{2M_s}$ for monoatomic and diatomic species respectively. The rate of the chemical reaction is given by
\begin{eqnarray}
\omega=\left(k_f(T)\frac{\rho_2}{M_2}-k_b(T)\left(\frac{\rho_1}{M_1}\right)^2\right)\sum_{s=1}^3\frac{\rho_s}{M_s}, \quad k_f=C_0 T^{-2}\exp(-E_0/T), \\
k_b=k_f/\exp(b_1+b_2\log z+ b_3 z+b_4 z^2+b_5 z^3), \quad z=10000/T.
\end{eqnarray}
The parameters and constants are $h_1^0=1.558\times10^7$, $R=8.31447215$, $C_0=2.9\times10^{17}\,\mathrm{m}^3$, $E_0=59750\,\mathrm{K}$,
and $b_1=2.855$, $b_2=0.988$, $b_3=-6.181$, $b_4=-0.023$, $b_5=-0.001$.
The eigenvalues of the Jacobian are $(u,u,u,u+c,u-c)$ where $c=\sqrt{\gamma\frac{p}{\rho}}$ with $\gamma=1+\frac{p}{T\sum_{s=1}^3\rho_s e_s'(T)}$.
Similar to Example \ref{ex6}, the time step is chosen to be
\begin{equation}
\Delta t=\frac{\text{CFL } \Delta x}{\lambda_{max}+s_{max} \Delta x},
\end{equation}
where $\lambda_{max}=\max\{\||u|+c\|_{\infty}\}$ over all grid points and $s_{max}$ is
\begin{equation}
s_{max}=\max\left\{\left|\frac{M_2\omega}{\rho_2}\right|,\left|\frac{2M_1\omega}{\rho_1}\right|\right\}.
\end{equation}
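A minimal sketch of the reaction rate and of the bound $s_{max}$, with the parameters of the text, is:
\begin{verbatim}
import numpy as np

R = 8.31447215
M1, M2, M3 = 0.016, 0.032, 0.028
C0, E0 = 2.9e17, 59750.0
b = (2.855, 0.988, -6.181, -0.023, -0.001)

def reaction_rate(rho1, rho2, rho3, T):
    kf = C0 * T**-2 * np.exp(-E0 / T)
    z = 10000.0 / T
    kb = kf / np.exp(b[0] + b[1]*np.log(z) + b[2]*z
                     + b[3]*z**2 + b[4]*z**3)
    n_tot = rho1/M1 + rho2/M2 + rho3/M3
    return (kf * rho2/M2 - kb * (rho1/M1)**2) * n_tot

def s_max(rho1, rho2, rho3, T):
    w = reaction_rate(rho1, rho2, rho3, T)
    return max(abs(M2*w/rho2), abs(2*M1*w/rho1))

# left state of the shock tube below, at T = 8000 K
print(s_max(5.251896311257204e-5, 3.748071704863518e-5,
            2.962489471973072e-4, 8000.0))
\end{verbatim}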
A shock tube problem is considered for the reactive flows, with high pressure on the left and low pressure on the right, initially
in chemical equilibrium ($\omega=0$). The initial conditions are:
\begin{equation}
(p_L, T_L)=(1000\,\mathrm{N/m^2}, 8000\,\mathrm{K}), \quad (p_R, T_R)=(1\,\mathrm{N/m^2}, 8000\,\mathrm{K}),
\end{equation}
with zero velocity everywhere and the densities satisfying
\begin{equation}
\frac{\rho_1}{2M_1}+\frac{\rho_2}{M_2}=\frac{21}{79}\frac{\rho_3}{M_3},
\end{equation}
where $M_1=0.016$, $M_2=0.032$ and $M_3=0.028$. The initial densities of $O$, $O_2$ and $N_2$ are
$5.251896311257204\times10^{-5}$, $3.748071704863518\times10^{-5}$ and $2.962489471973072\times10^{-4}$
on the left respectively, and $8.341661837019181\times10^{-8}$, $9.455418692098664\times10^{-11}$ and $2.748909430004963\times10^{-7}$
on the right respectively.
The numerical solution with the PP flux limiter is computed on a mesh size of $\Delta x=\frac{2}{4000}$ up to $T=0.0001$. $\epsilon_{WENO}=10^{-20}$ is taken as in \cite{zhang2012positivity}. In Fig. \ref{fig56}, the positivity of $\rho_1$, $\rho_2$, $\rho_3$ and $p$ is preserved and converged solutions are observed.
\begin{figure}
\centering
\includegraphics[totalheight=2.0in]{./pic/O.eps},
\includegraphics[totalheight=2.0in]{./pic/O2.eps}\\
\includegraphics[totalheight=2.0in]{./pic/N2.eps},
\includegraphics[totalheight=2.0in]{./pic/vex6.eps}\\
\includegraphics[totalheight=2.0in]{./pic/pre6.eps}
\caption{Example \ref{ex7}. Three species reaction problem at $T=0.0001$. The
solid lines are the reference solutions at $\Delta x=\frac{2}{8000}$. Symbols are the numerical solutions
at $\Delta x=\frac{2}{4000}$.}
\label{fig56}
\end{figure}
\end{exa}
Market dynamics is quantified in terms of the entropy $S(\tau,n)$ of the clusters formed by the intersections between the series of the prices $p_t$ and the moving average $\widetilde{p}_{t,n}$. The entropy $S(\tau,n)$ is defined according to Shannon as $-\sum P(\tau,n)\log P(\tau,n)$, with $P(\tau,n)$ the probability for a cluster to occur with duration $\tau$.
\par
The investigation is performed on high-frequency data of the Nasdaq Composite, Dow Jones Industrial Avg and Standard \& Poor 500 indexes downloaded from the Bloomberg terminal. The cluster entropy $S(\tau,n)$ is analysed in raw and sampled data over a broad range of temporal horizons $M$ varying from one to twelve months over the year 2018. The cluster entropy $S(\tau,n)$ is integrated over the cluster duration $\tau$ to yield the \emph{Market Dynamic Index} $I(M,n)$, a synthetic figure of price dynamics. A systematic dependence of the cluster entropy $S(\tau,n)$ and the \emph{Market Dynamic Index} $I(M,n)$ on the temporal horizon $M$ is evidenced.
\par
Finally, the \emph{Market Horizon Dependence}, defined as $H(M,n)=I(M,n)-I(1,n)$, is compared with the horizon dependence of the pricing kernel with different representative agents obtained via a Kullback-Leibler entropy approach.
The \emph{Market Horizon Dependence} $H(M,n)$ of the three assets is compared against the values obtained by implementing the cluster entropy $S(\tau,n)$ approach on artificially generated series (Fractional Brownian Motion).
\section{Introduction}
Entropy, as a tool to quantify heterogeneity and dynamics in complex systems, has found a number of applications in different contexts \cite{crutchfield2012between,bandt2002permutation,grassberger1983characterization,marcon2014generalization,karpiarz2014international,rubido2018entropy}.
In economics and finance, the entropy ability to quantify heterogeneity and disentangle ordered and disordered patterns in data relevant to complex systems, has been adopted for portfolio selection to outperform traditional methods based on Markowitz covariance and Sharpe single-index models \cite{philippatos1972entropy,fernholz2002stochastic,ou2005theory,xu2011portfolio,usta2011mean,zhou2013portfolio,zhang2012possibilistic, bera2008optimal,demiguel2009optimal,rodder2010entropy,
chandrinos2018construction,gospodinov2017general,chen2017study,ormos2014entropy,lahmiri2018informational,lahmiri2018randomness,lahmiri2018long,lahmiri2017disturbances,lahmiri2017nonlinear}.
The ability of entropy to quantify dynamics, beyond heterogeneity, has gained interest with the aim of implementing entropy-derived tools to shed light on fundamental aspects of asset pricing dynamics beyond portfolio optimization \cite{Hansen1991implications,Hansen2014Nobel,Hansen2019Macroeconommic,backus2014sources,ghosh2017what}.
\par
Macroeconomic shocks are becoming increasingly important due to the growing connectedness of the assets in a global economy. The propagation of these shocks, which intrinsically are not diversifiable, cannot be averaged out by diversifying investments and, thus, even the best selection of portfolio assets might fail to keep investors safe.
Asset pricing models aim at providing estimates of endogenous risk by quantifying market evolution in terms of a stochastic function: the {\em pricing kernel} $m_{t}$. Equilibrium prices $p_t$ of traded securities can be represented as the conditional expectation of the discounted future payoff $z_t$:
\begin{equation}
p_t= E\left [\frac{m_{t+1}}{m_{t}}z_{t+1}\right ] \hspace{15pt},
\end{equation}
where $m_{t+1}/m_{t}$ is known as the {\em stochastic discount factor}. The pricing kernel $m_{t}$ is factorizable into a function of the consumption growth $\mu_{t+1}$ times $\psi_{t}$ (a model specific term):
\begin{equation}
m_t= \mu_{t+1}\psi_{t} \hspace{15pt}.
\end{equation}
The standard consumption-based asset pricing model identifies the pricing kernel as a simple parametric function of the consumption growth ${C_t}$. In this framework, with time-separable power utility representative agent models, the function $\mu_{t+1}$ is simply proportional to $\Delta C_t = \log ({C_t}/{C_{t-1}})$. More sophisticated agent behaviours have been suggested to explain puzzling phenomena such as amplitude and cross-sectional dispersion of returns among different categories of financial assets, equity premia and risk-free rates.
\par
Pricing kernel dispersion and dynamics with different representative agents are modelled by using the Kullback-Leibler entropy in \cite{backus2014sources}, extending the findings of \cite{Hansen1991implications}. The work \cite{Hansen1991implications} addressed the quantification of standard deviation and volatility to define bounds on the pricing kernel. A lower bound was provided for the
volatility of the permanent component of asset pricing kernels, showing that stochastic discount factors need to be very volatile to be consistent with high Sharpe ratios \cite{Hansen1991implications}.
A relative entropy minimization approach, based on the Kullback-Leibler divergence, is put forward in \cite{ghosh2017what} to extract the model-dependent term $\mu_{t+1}$ and quantify the minimum amount of extra information to be embedded in the standard pricing kernel models for reproducing asset returns correctly.
The Kullback-Leibler divergence between the probability distribution functions of the components $\mu_{t+1}$ and $\psi_{t}$ has been used as a criterion to estimate the deviation of $m_{t+1}$ with respect to the simple consumption growth model. It was argued that the Kullback-Leibler divergence criterion is equivalent to maximizing the entropy of the fundamental pricing kernel component \cite{ghosh2017what}.
\par
An information theoretical tool has been recently developed which yields the weights of the efficient portfolio by using the \emph{cluster entropy} estimated via the detrending moving average algorithm proposed in \cite{carbone2013information,carbone2007scaling,carbone2004analysis}. Interestingly, the works \cite{ponta2017detrending,ponta2018information} show that the cluster entropy of the volatility takes values depending on each market, as opposed to the entropy of the prices, which was shown to be approximately invariant across the markets. The \emph{Market Heterogeneity Index}, defined as the integral of the cluster entropy, provides a cumulative figure allowing a straightforward comparison with the portfolio weights obtained by the Sharpe ratio approach. The main advantage of the cluster entropy approach is that it does not require a specific distribution of returns, such as a symmetric Gaussian distribution. Such a distribution is quite elusive in real-world financial assets, which hinders, in principle, the application of Markowitz-based portfolio models.
\par
In this work, we implement the cluster entropy approach for quantifying the intrinsic dynamics of prices and capturing the endogenous sources of risk over different temporal horizons.
The present work builds upon and extends the study \cite{ponta2018information}, which was limited to extracting the portfolio weights from the cluster entropy of the prices and volatility of the financial series over a constant time horizon (about 6 years, from 1998 to 2004). Under the condition of a constant temporal horizon, the cluster entropy of the prices was found to be almost invariant across the markets in \cite{ponta2018information}.
\par
Here the focus is on gaining insights into the intrinsic dynamics ruling price evolution. Hence, the cluster entropy analysis is performed over multiple horizons. The horizon dependence was not studied in \cite{ponta2018information}, which reported the quantitative comparison of the cluster entropy observed in several markets over the same horizon (i.e. the same time interval of six years, 1998-2004).
\par
The ability of the cluster entropy approach to quantify the intrinsic dynamic of the prices is proved by analysing several assets. For the sake of simplicity in this work we report the results obtained on the three markets described in Table \ref{tab:data}.
\par
Cluster entropy has been analysed for prices of market indices (tick-by-tick prices from Jan $1^{st}$ to Dec. $31^{st}$ 2018) NASDAQ, DJIA and S$\&$P500 with length $N=6982017$, $N=5749145$ and $N=6142443$ respectively. Data have been downloaded from the terminal www.bloomberg.com/professional. The three financial markets have been selected based on homogeneity and similarity criteria. The three markets are traded in the same country and with the same currency. Furthermore, the assets are characterised by a comparable number of transactions over time. The similarity criteria rule out differences in the dynamics that might be due to external causes. Another condition ensuring that the observed behavior is genuinely related to the intrinsic price dynamics rather than to exogenous factors is to keep the maximum extension of the temporal horizon limited. In the current study the maximum temporal extension is one year (12 months) and the analysis has been performed on multiples of monthly subsets from one to twelve months.
\par
It is worth noting that, though in the literature many studies have been performed to understand asset pricing dynamics using low-frequency data, for example to estimate the low-frequency components of returns, in this paper the analysis is performed on high-frequency data to investigate and capture the endogenous sources of risk. The data range over a single year. The huge data sets allow one to apply the cluster entropy algorithm over monthly segmented series with average lengths of the order of $\sim 500000$.
\par
A systematic dependence of the cluster entropy of the asset prices over varying temporal horizons has been observed, that could be related to the macroeconomic fundamental properties and exogenous dynamics rather than to simple variations across different markets.
\par
The manuscript is organized as follows. The main relationships relevant to understanding and implementing the cluster entropy approach are shortly recalled in Section \ref{Method}. The analysed data sets (financial assets and artificially generated series) are described in Section \ref{Data}. In Section \ref{Results} the cluster entropy and the Market Dynamic Index of the prices series as a function of the temporal horizon $M$ are reported together with a comparison against the Kullback-Leibler entropy results obtained by simulating the pricing kernel with different representative agent models.
The cluster entropy and the market dynamic index estimated for Fractional Brownian Motion (FBM) sequences are reported and discussed. The artificially generated FBM data are taken as reference to validate the accuracy of the deviations observed in the real-world assets markets and validate the findings via a standard T-paired test.
\section{Methods}
\label{Method}
In this section, we briefly recall the main definitions and equations used which are the core computational ingredients of the algorithm. \par
The cluster entropy is obtained by taking the intersections of the asset price $p_t$ and its moving average $\tilde{p}_{t,n}$ for different moving average windows $n$ \cite{ponta2017detrending,ponta2018information,carbone2013information,carbone2007scaling,carbone2004analysis}. For each window $n$, the subsets $\{p_t: t=s,\ldots,s-n \}$ between two consecutive intersections are considered. The subsets are named \emph{clusters}. The clusters are exactly defined as the portions of the series between death/golden crosses according to the technical trading rules. Therefore, the information content has a straightforward connection with the trader's perspective on the price and volatility series. Then, the clusters are ranked according to their characteristic size, the duration $\tau$. The probability distribution function $P(\tau,n)$ of the cluster duration is obtained.
The present approach directly yields either power-law or exponential distributed cluster distributions, thus enabling us to separate the sets of inherently correlated/uncorrelated blocks along the sequence.
The return is defined by:
\begin{equation}
\label{returnlin}
r_t = p_t - p_{t-h} \hspace{10pt} ,
\end{equation}
where $p_t$ is the price at the time $t$, with $ 0<h<t<N $ and $N$ the maximum length of the time series.
Alternatively, one can consider the log-return defined as:
\begin{equation}
\label{returnlog}
r_t = \log p_t - \log p_{t-h} \hspace{10pt}.
\end{equation}
\par
The approach adopted in this work builds upon the idea of Claude Shannon of quantifying the `expected' information contained in a message extracted from a sequence $\{x_t \}$ \cite{shannon1948mathematical} by using the entropy functional:
\begin{equation}
S[P(x_t)] = -\int_X p(x_t) \log p(x_t)\, dx_t \hspace{5pt},
\label{Shannon_int}
\end{equation}
with $P$ a probability distribution function associated with the sequence $\{x_t \}$.
For discrete sets, Eq. (\ref{Shannon_int}) reduces to:
\begin{equation}
S[P(x_t)] = -\sum_X p(x_t) \log p(x_t) \hspace{5pt}.
\label{Shannon}
\end{equation}
Consider the time series $\{x_t \}$ of length $N$ and the moving average $\{\widetilde{x}_{t,n}\}$ of length $N-n$ with $n$ the moving average window.
The function $\{\widetilde{x}_{t,n}\}$ generates, for each $n$, a partition $\{\cal{C}\}$ of non-overlapping clusters between two consecutive intersections of $\{x_t \}$ and
$\{\widetilde{x}_{t,n}\}$. Each cluster $j$ has duration:
\begin{equation}
\label{l} \tau_j\equiv \|t_{j}-t_{j-1}\|
\end{equation}
\noindent
where the instances $t_{j-1}$ and $t_j$ refer to two subsequent intersections.
The probability distribution function $P(\tau,n)$ can be obtained by ranking the number of clusters ${\mathcal N}(\tau_1,n),{\mathcal N}(\tau_2,n), ..., {\mathcal N}(\tau_j,n)$ according to their length $\tau_1, \tau_2,..., \tau_j$ for each $n$. A stationary sequence of clusters $\cal{C}$ is generated with probability distribution function varying as \cite{carbone2013information}:
\begin{equation}
\label{Pl} P(\tau,n)\sim\tau^{-\alpha} {\mathcal F}\left({\tau},{n}\right) \hspace{5pt},
\end{equation}
with the factor ${\mathcal F}\left({\tau},{n}\right)$ taking the form $ \exp({-\tau}/{n})$, to account for the finite size effects when $\tau\gg n$, resulting in the drop-off of the power-law and the onset of the exponential decay.
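As an illustration, a minimal Python sketch of the cluster construction, assuming a simple (causal) moving average, reads:
\begin{verbatim}
import numpy as np

def cluster_durations(p, n):
    # moving average of window n, aligned with the end of each window
    pm = np.convolve(p, np.ones(n) / n, mode='valid')
    d = np.sign(p[n - 1:] - pm)              # sign of p - ptilde
    cross = np.flatnonzero(np.diff(d) != 0)  # intersection instants
    return np.diff(cross)                    # cluster durations tau_j

def duration_pdf(taus):
    tau, counts = np.unique(taus, return_counts=True)
    return tau, counts / counts.sum()        # empirical P(tau, n)

rng = np.random.default_rng(0)
p = np.cumsum(rng.standard_normal(500000))   # surrogate price path
tau, P = duration_pdf(cluster_durations(p, 100))
\end{verbatim}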
The cluster entropy writes (the details of the derivation can be found in \cite{carbone2013information,carbone2004analysis}):
\begin{equation}
S[P(\tau_j,n)] = -\sum_j P(\tau_j,n)\log P(\tau_j,n) \hspace{5pt},
\label{Shannoncluster}
\end{equation}
that by using Eq.~(\ref{Pl}) simplifies to:
\begin{equation}
\label{lentropy2}
S(\tau,n)=S_0+\log\tau^{\alpha}+{\tau\over n}\hspace{5pt},
\end{equation}
where $S_0$ is a constant, $\log\tau^{\alpha}$ and $\tau/ n$ are related respectively to the terms $\tau^{-{\alpha}}$ and ${\mathcal F}(\tau,n)$.
The minimum value of the entropy is obtained for the fully ordered (deterministic) set of clusters with duration $\tau=1$. Eq.~(\ref{lentropy2}) in the limit $n\sim\tau\rightarrow1$ and $S_0\rightarrow-1$ reduces to $S(\tau,n)\rightarrow0$. Conversely, the maximum value of the entropy $S(\tau,n)=\log N^{\alpha}$ is obtained when $n\sim\tau\rightarrow N$ (with $N$ the maximum length of the sequence). This condition corresponds to the maximum randomness (minimum information) carried by the sequence, when a single longest cluster is obtained coinciding with the whole series.
\par
For a fractional Brownian motion, the exponent $\alpha$ is equal to the fractal dimension $D=2-H$ with $H$ the Hurst exponent of the time series. The term $\log\tau^{\alpha}$ can be thus interpreted as a generalized form of the Boltzmann entropy $S=\log\Omega$, where $\Omega = \tau^D$ corresponds to the fractional volume occupied by the fractional random walker.
The term $\tau/n$ represents an excess entropy (excess noise) added to the intrinsic entropy term $\log\tau^D$ by the partition process. It depends on $n$ and is related to the finite size effect discussed above.
\par
We stress the difference between the time series partitions obtained either by using equal size boxes or moving average clusters.
For equal size boxes, the excess noise term ${\tau / n}$ vanishes (as it becomes a constant that can be included in the constant term) thus the entropy reduces to the logarithmic term as found in Ref.~\cite{grassberger1983characterization}, which corresponds to the intrinsic entropy of an ideal fractional random walk. When a moving average partition is used, an excess entropy term ${\tau / n}$ emerges accounting for the additional heterogeneity introduced by the random partitioning process operated by the moving average intersections.
\par
To univocally quantify market properties through the entropy Eq.~(\ref{lentropy2}), a cumulative information measure has been defined as follows:
\begin{equation}
I(n)=\int_0^{\tau_{max}} S (\tau,n)d\tau \hspace*{5 pt},
\label{Integral}
\end{equation}
which, for discrete sets, reduces to:
\begin{equation}
I(n) = \sum_{\tau} S (\tau,n) \hspace{5pt}.
\label{Integrald}
\end{equation}
\par
The function $I(n)$ has been used to quantify cross-market heterogeneity in \cite{ponta2018information}. The cluster entropy of the volatility $v_T$ was integrated over the cluster duration $\tau$ to the purpose of obtaining the weights of the optimal portfolio.
\par
In this work, the function $I(n)$ will be used to quantify the intrinsic market dynamic. The cluster entropy of the prices will be integrated over the cluster duration $\tau$ to the purpose of obtaining the horizon dependence.
\par
As a concluding remark to this section, it is worth mentioning the relation between the cluster entropy approach adopted in this work, the multiscale entropy (MSE) and its variants \cite{costa2002multiscale,niu2015quantifying,humeau2015multiscale}.
The multiscale entropy provides insights into the complexity of fluctuations over a range of time scales and thus extends the standard single-scale sample entropy.
The computational implementation of multiscale entropy implies a coarse graining of the time series at increasingly time resolutions. Coarse graining the data basically means averaging different numbers of consecutive points to create different scales or resolutions of the signal. In the cluster entropy approach proposed here, the coarse graining of the signal is performed through the moving average, i.e. a time dependent averaging.
The multiscale entropy analysis aims at quantifying the interdependence between entropy and scale, achieved by evaluating sample entropy of univariate time series coarse grained at multiple temporal scales. This facilitates the assessment of the dynamical complexity of the system whose behavior is reflected by the time series data.
\section{Data}
\label{Data}
Prices of market indices traded in the US, namely NASDAQ, DJIA and S$\&$P500, are investigated.
Data sets have been downloaded from the terminal www.bloomberg.com/professional.
For each index, the data set includes tick-by-tick prices $p_t$ from January to December 2018.
Details (Ticker; Extended name; Country; Currency; Members; Length) as provided by Bloomberg for the three assets are reported in Table \ref{tab:data}. The length reported for each index refers to the year 2018 (last column).
Different temporal horizons have been considered as integer multiples $M$ of a one-month period, ranging from $M=1$ up to $M=12$. The individual lengths of the subsequences corresponding to the twelve time periods are reported for each index in Table \ref{tab:sampleddata}.
\par
For the purpose of performing the cluster entropy analysis over sequences of constant length, the raw data are sampled, thus yielding data series with equal length.
The sampling frequency is defined for each series by dividing the length of the series corresponding to the longest horizon by the minimum length, and rounding the ratio to the nearest integer; the resulting value is then used to sample the raw data.
\par
Consider for example the S$\&$P500 market ($3^{rd}$ column in Table \ref{tab:sampleddata}). The minimum value of the length is that at $M=1$ (January, with $N=516635$) and the maximum value is that of the longest horizon of interest (for example $N=5180006$ for the horizon $M=10$, i.e. ten months from January to October). As the sampling frequency is different for each series, the minimum value is used so that the analysis is performed on series of the same length.
\par
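A schematic version of this sampling, written generically for a list of per-horizon series, could read as follows (the exact rounding conventions adopted here are an assumption):
\begin{verbatim}
import numpy as np

def equalize(series_by_horizon):
    n_min = min(len(s) for s in series_by_horizon)
    out = []
    for s in series_by_horizon:
        step = max(1, round(len(s) / n_min))
        out.append(np.asarray(s)[::step][:n_min])
    return out
\end{verbatim}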
Furthermore, for the sake of validating the obtained results, a set of computational tests has been performed on artificially generated series of different lengths $N$. The artificial series have been generated by means of the FRACLAB tool available at: https://project.inria.fr/fraclab/, with lengths $N$ corresponding to those of the financial markets under investigation (Table \ref{tab:sampleddata}). Further details and results are reported in the following Sections.
\section{Results}
\label{Results}
\par
Probability distribution $P(\tau,n)$ and entropy $S(\tau,n)$ have been calculated for a large set of price series by means of the procedure summarized in Section \ref{Method}. The series of the NASDAQ, DJIA and S$\&$P500 indexes described in Section \ref{Data} have been used for the investigation.
\par
Fig.~\ref{Fig:entropyrawpriceM1} shows the cluster entropy $S(\tau,n)$ calculated by using raw data prices. In particular, the plots refer to one month of data ($M=1$). The series lengths are $N=586866$, $N=516644$ and $N=516635$ respectively for NASDAQ, DJIA and S$\&$P500 as given in Table \ref{tab:sampleddata}.
\par
Fig.~\ref{Fig:entropyrawpriceM12} shows the cluster entropy $S(\tau,n)$ calculated by using raw data prices, as in Fig.~\ref{Fig:entropyrawpriceM1}, but here the series refer to a horizon of twelve months ($M=12$). The series lengths are $N= 6982017$, $N=5749145$ and $N=6142443$ respectively for NASDAQ, DJIA and S$\&$P500 as one can find in the last row of Table \ref{tab:sampleddata}.
\par
Fig.~\ref{Fig:entropysampledpriceM1} shows the cluster entropy $S(\tau,n)$ calculated by using the price series of the sampled data. The plots refer to the first month of data ($M=1$). All the series have the same length $N=492035$.
\par
Fig.~\ref{Fig:entropysampledpriceM12} shows the cluster entropy $S(\tau,n)$ calculated by using the price series of the sampled data. The plots refer to twelve months ($M=12$). All the series have the same length $N=492035$.
\par
Different curves in each figure correspond to moving average values varying from $n=30\hspace{2pt}\mathrm{s}$, $n=50\hspace{2pt}\mathrm{s}$, $n=100\hspace{2pt}\mathrm{s}$, $n=150\hspace{2pt}\mathrm{s}$, $n=200\hspace{2pt}\mathrm{s}$ $\ldots$ up to $n=1500\hspace{2pt}\mathrm{s}$ (with step $100\mathrm{s}$).
\par
One can note that the entropy curves exhibit a behaviour consistent with Eq.~(\ref{lentropy2}). At small values of the cluster duration $\tau \leq n$, entropy behaves as a logarithmic function. At large values of the cluster duration $\tau \geq n$ the curves increase linearly with the term ${\tau /n}$ dominating.
${S}(\tau,n)$ is $n$-invariant for small values of $\tau$, while its slope decreases as $1/n$ at larger $\tau$, as expected according to Eq.~(\ref{lentropy2}), meaning that clusters with duration $\tau > n $
are not power-law correlated, due to the finite-size
effects introduced by the partition with window $n$. Hence, they are characterized
by a value of the entropy exceeding the curve $\log \tau^D$, which corresponds to power-law correlated clusters. It is worth remarking that clusters with the same duration $\tau$
can be generated by different values of the moving average window $n$.
At a constant value of $\tau$, larger entropy values are obtained as $n$ increases.
\par
The entropy ${S}(\tau,n)$ of the NASDAQ, DJIA and S$\&$P500 prices (shown in Fig.~\ref{Fig:entropyrawpriceM1}, Fig.~\ref{Fig:entropyrawpriceM12}, Fig.~\ref{Fig:entropysampledpriceM1} and Fig.~\ref{Fig:entropysampledpriceM12}) is representative of a quite general behaviour observed in several markets analysed by using the proposed cluster entropy approach.
\bigskip
\par
In the following, we will discuss how to quantify the horizon dependence of the asset prices by using the cluster entropy function $S (\tau,n)$ estimated over different periods $M$. To this purpose, we use the \emph{cumulative information measure} function defined in Eq.~(\ref{Integral}).
\par
The quantity $I(M,n)$ is calculated by using the values of the entropy $S (\tau,n)$ of the asset prices $p_t$ estimated over several periods $M$, ranging from one to twelve months, by using raw and sampled data.
The first period ($M=1$) of the price sequences is taken in correspondence of January 2018 for all the assets. Multiple period sequences have been built by considering $M=2$ (January and February 2018) and, so on, up to $M=12$ (one year from January to December 2018). Details concerning lengths of the series corresponding to the temporal horizons $M$ are reported in Table \ref{tab:sampleddata}.
\par
The \emph{cumulative information measure} $I(M,n)$ has been plotted in Fig.~\ref{Fig:integral} for the prices of the NASDAQ, DJIA and S$\&$P500. One can observe a dependence of the function $I(M,n)$ at different $M$ horizons.
\par
At small scales (small $n$ and small $\tau$ values), $I(M,n)$ is the same for all $M$, implying that the horizon dependence $H(M,n)$ is negligible. Conversely, at large $n$ values, i.e. with a broad range of cluster lengths $\tau$ spanning more than one decade of values in the power-law distribution, a horizon dependence $H(M,n)$ varying with $M$ is found.
\par
For identically distributed sequences of clusters, $I(M,n)$ does not change with $M$ regardless of the value of $n$. This has been shown in Fig. \ref{Fig:entropysampledpriceM1612Artif}, where the cluster entropy $S (\tau,n)$ of artificially generated series (fractional random walks) is shown. One can note that the curves are practically unchanged at varying horizons $M$ and cluster durations $\tau$.
The departure from the {\em iid} case can be taken as a measure of price dynamics.
\par
Furthermore, by comparing the figures corresponding to the different assets, a market dependence of the function $I(M,n)$ is observed. In the case of the NASDAQ the variation seems larger than for the S$\&$P500, and even larger than for the DJIA.
\section{Discussion and Conclusions}
\label{Discussion}
Next, the main results of the analysis of the \emph{cluster entropy} $S (\tau,n)$ and the \emph{cumulative information measure} $I(M,n)$ are compared with the results obtained by using information theoretical approaches by other authors.
\par
To build a cluster entropy index of horizon dependence, i.e. a synthetic numerical parameter with the ability to provide an estimate of the horizon dependence, we consider the entropy integral $I(n)$ defined by Eq.~(\ref{Integrald}) at one period ($M=1$) and at multiples of one period $M$, denoted respectively $I(1,n)$ and $I(M,n)$. The quantity $I(M,n)$, defined above on the basis of Eq.~(\ref{Integrald}), is called the \emph{Market Dynamic Index}.
\par
To the purpose of comparing our results with those of paper \cite{backus2014sources}, the horizon dependence $H(M,n)$ is calculated as:
\begin{equation}
\label{horizon}
H(M,n) = I(M,n)-I(1,n) \hspace{5pt}.
\end{equation}
Values of \emph{Market Dynamic Index} $I(M,n)$ and horizon dependence $H(M,n)$ calculated by using the NASDAQ, DJIA and S$\&$P500 data are reported in Table \ref{tab:horizonsNASDAQ}. The quantity $I(1,n)=I(1)$ is a reference value of the one-period entropy (lower bound). It is taken as $I(1)=0.0049$, $I(1)=0.0214$ and $I(1)=0.0197$ respectively for power utility, recursive utility and difference habit agent models of the consumption growth following \cite{backus2014sources}. The value $I(12,n)$ has been obtained from the curves in Fig.~\ref{Fig:integral} for the prices of the NASDAQ, DJIA and S$\&$P500. $H(12,n)$ is the difference between $I(12,n)$ and $I(1,n)$ on account of Eq.~(\ref{horizon}).
\par
Next, the values of the horizon dependence obtained by using the cluster entropy will be checked against those obtained by using different representative agent models for the definition of the pricing kernel in \cite{backus2014sources}.
The pricing kernel dynamics has been quantified by a measure of entropy dependence on the investment horizon for popular asset pricing models. The pricing kernel accounts for the stochastic dynamic evolution of asset returns, which in turn contain information about the pricing kernel. The analysis is based on the Kullback-Leibler divergence (also known as relative entropy) of the true probability distribution of the prices with respect to the risk-adjusted probability. On account of those results, it was argued that a realistic asset pricing model should have substantial one-period entropy and modest horizon dependence to justify equity mean excess returns and bond yields at once.
\par
The Kullback-Leibler (KL) divergence of the continuous probability measure $p(x_t)$ with respect
to some probability measure $p^*(x_t)$ writes:
\begin{equation}
\label{KL1}
{KL}(P||P^*) = \int_X p(x_t) \log \left( \frac{ p(x_t)}{p^*(x_t)}\right) dx_t
\end{equation}
Eq.~(\ref{KL1}) can be interpreted as the expectation of the function $\log { p(x_t)}/{p^*(x_t)}$ with respect to the probability $p(x_t)$:
\begin{equation}
\label{KL2}
{KL}(P||P^*)= E \left[\log \left( \frac{ p(x_t)}{p^*(x_t)}\right)\right]
\end{equation}
It can be easily shown that the relative entropy Eq.~(\ref{KL1}) reduces to Eq.~(\ref{Shannon_int}) for constant probability $p^*(x_t)$.
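For discrete distributions, a minimal implementation reads:
\begin{verbatim}
import numpy as np

def kl_divergence(p, p_star):
    p, p_star = np.asarray(p, float), np.asarray(p_star, float)
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / p_star[m]))

q = np.array([0.5, 0.3, 0.2])
print(kl_divergence(q, np.full(3, 1/3)))  # = log(3) - S[q] for uniform p*
\end{verbatim}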
\par
Investigation of asset price dispersion and dynamics has been put forward by using a variant of the Kullback-Leibler (KL) divergence of the pricing kernels $m_{t,t+M}$ expressed in terms of the ratio between the true and risk-adjusted distributions \cite{backus2014sources}. In that work, different representative agent models have been considered to quantify the {\em Market Horizon Dependence} $H(M)$:
\begin{equation}
H(M)= I(M) -I(1)
\end{equation}
with the quantity $I(M)$ defined as:
\begin{equation}
I(M)= \frac{EL_t(m_{t,t+M})}{M} \hspace{10pt}.
\end{equation}
where ${EL_t(m_{t,t+M})}$ is defined as the average of the relative entropy of the pricing kernel, and $I(1)$ is calculated at the month $M=1$. A summary of the horizon dependence obtained by estimating the Kullback-Leibler (KL) entropy with pricing kernels generated by different representative agent models according to the approach of \cite{backus2014sources} is reported in Table \ref{tab:hdconstant}.
\par
To further validate the behaviour observed in real-world financial markets, simulations have been performed on artificial data generated by means of the FRACLAB tool available at: https://project.inria.fr/fraclab/. The FRACLAB tool generates Fractional Brownian Motion series with an assigned Hurst exponent $H$. The Hurst exponent corresponding to financial prices is generally assumed to be $H\sim 0.5$. In Fig.~\ref{Fig:entropysampledpriceM1612Artif}, the cluster entropy curve is shown for an FBM series with $H=0.5$ and different lengths $N$. For the curves shown in Fig.~\ref{Fig:entropysampledpriceM1612Artif} the artificial series has been generated with a total length equal to that of the NASDAQ index ($N=6982017$). Then the artificial series has been divided into twelve consecutive segments with the same lengths as the NASDAQ sub-sequences (values in the first column of Table \ref{tab:sampleddata}). Figures refer respectively to the first segment ($M=1$), the first six segments ($M=6$) and all twelve segments ($M=12$).
\par
For the purpose of fully appreciating the different behaviour of real-world market series compared to that exhibited by the artificially generated sequences, the Market Dynamic Index has been calculated for the artificial series (Fig. \ref{Fig:integralMDIArtif}). The Market Dynamic Index takes an approximately constant value at the different horizons $M$ and moving average windows $n$, thus exhibiting a behaviour different from that of the real-market dynamic indexes shown in Fig.~\ref{Fig:integral}.
\par
Last but not least, results of statistical significance tests are reported in Table \ref{tab:significance}. The test has been performed by using the paired t-test to check the null hypothesis $h=0$ that the cluster entropy values obtained on the real-world financial markets and those obtained on the artificial series (FBMs with $H=0.5$ assumed as benchmark) come from distributions with equal mean and variance, with a probability $p$.
\par
One can note in Table \ref{tab:significance} that the probability $p$ that the null hypothesis holds true ranges from $0.5154$ to $0.7584$. This confirms that the NASDAQ market behaves quite differently from the traditional interpretation of price variations as an independent elementary stochastic process, as implied by the FBM with $H=0.5$.
The S\&P 500 exhibits an intermediate tendency to behave as an ideal market, with probability $0.7399\leq p \leq 0.9248$. The probability for Dow Jones ranges within the interval $0.8892\leq p \leq 0.9434$. Thus it seems that the DJIA index reproduces more closely the behaviour of the fully independent stochastic process involved in the FBM with $H=0.5$.
\par
From an economic perspective, the results show how financial markets that appear very similar in terms of regional features, size and volumes may exhibit different horizon dependence. The obtained results are very robust from a statistical point of view; they can therefore represent a valid basis for developing investment tools to quantify risk, and they are extremely useful for investors in classifying markets and choosing their strategies.
\clearpage
\section{Introduction}
The main goal of the experiments on heavy-ion collisions at relativistic energies is to study the properties of strongly interacting matter under extreme conditions, especially those of the quark-gluon plasma (QGP), the hadronic matter, and the transition between these two phases of matter~\cite{RevModPhys.89.035001,BRAUNMUNZINGER201676,CHEN20181}. Studies based on lattice quantum chromodynamics (LQCD) calculations~\cite{Fodor:2004nz} and various effective models~\cite{Asakawa:1989bq,Stephanov:1998dy,Hatta:2002sj} have indicated that the transition between the QGP and the hadronic matter is a smooth crossover at vanishing baryon chemical potential ($\mu_B$) but likely changes to a first-order phase transition at large $\mu_B$, with an associated critical endpoint (CEP) or a tricritical endpoint separating these two transitions~\cite{Ding:2015ona}. Locating the position of the CEP in the QCD phase diagram is one of the most important issues in particle and nuclear physics. To search for this CEP, experiments have already been carried out at the Beam Energy Scan programs at SPS~\cite{Kaon1Afanasiev:2002mx,Kaon2Alt:2007aa,phiAlt:2008iv,LaXi-SPSAlt:2008qm} and at RHIC~\cite{Aggarwal:2010cw,Luo:2017faz}, and are planned at the future Facility for Antiproton and Ion Research (FAIR) and the Nuclotron-based Ion Collider Facility (NICA).
As suggested in Ref.~\cite{Asakawa:2000wh}, the QCD phase transition can be probed by studying the fluctuations of physical observables in relativistic heavy ion collisions. This is because enhanced long-wavelength fluctuations near the CEP can lead to singularities in all thermodynamic observables. In heavy-ion collisions, these fluctuations have been studied by analyzing experimental data on an event-by-event basis~\cite{Friman2011,Endrodi:2011gv}. For example, the fourth-order fluctuation of the net-proton distribution has been measured in the BES program by the STAR Collaboration, and a possible non-monotonic behavior in its dependence on the center-of-mass collision energy $\sqrt{s_{\rm NN}}$~was observed~\cite{Adamczyk:2013dal}. Also, large baryon density fluctuations are expected to develop in the produced QGP when its evolution trajectory in the QCD phase diagram passes across the CEP~\cite{Asakawa:2000wh,Hatta:2003wn}. Recent studies based on both the hydrodynamic approach~\cite{Steinheimer:2012gc,Steinheimer:2013xxa} and the transport model~\cite{Li:2016uvu} have shown that the spinodal instabilities associated with a first-order QGP to hadronic matter phase transition at finite baryon chemical potential can generate appreciable fluctuations in the baryon density distribution. The CEP in the QCD phase diagram can thus be located from relativistic heavy ion collisions by studying the collision energy dependence of quark density fluctuations and determining the temperature and baryon chemical potential at the collision energy near which the quark density fluctuations show a non-monotonic behavior. This idea was used in Refs.~\cite{Sun:2017xrx,SUN2018499} to study the neutron density fluctuation in heavy ion collisions at the SPS energies in the framework of the nucleon coalescence model for light nuclei production, and a non-monotonic dependence on $\sqrt{s_{\rm NN}}$~was found in the yield ratio $\mathcal{O}_\text{p-d-t}=N_{\rm p}N_{\rm ^3H}/N_{\rm d}^2$ of proton ($\rm p$), deuteron ($\rm d$), and triton ($\rm ^3H$). Although it was suggested in Refs.~\cite{Sun:2017xrx,SUN2018499} that the extracted collision energy dependence of the neutron density fluctuation may originate from the light quark density fluctuations when the evolution trajectory of the produced QGP passes through the CEP in the QCD phase diagram, a quantitative study of this relation based on a viable dynamic model is still missing.
To probe more directly the quark density fluctuations, it was suggested in Ref.~\cite{Ko2018} to study the yield ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ = $\frac{N(K^+)N(\Xi^-)}{N(\phi)N(\Lambda)}$~of \rm $K^+$, \rm $\Xi^-$, \rm $\phi$, and \rm $\Lambda$~in relativistic heavy ion collisions. This is because strange hadrons are known to scatter less frequently than nucleons during the hadronic evolution, and their final abundances at kinetic freeze out are expected to be similar to those at hadronization if both include the contribution from resonance decays. A non-monotonic behavior in the $\sqrt{s_{\rm NN}}$~dependence of this ratio then indicates a possible strange quark density fluctuation in these collisions.
The use of strange hadrons produced in relativistic heavy ion collisions to probe the properties of the QGP has a long history. Because the mass of the strange quark is of the same magnitude as the QGP phase transition temperature, strange quarks can be abundantly produced in the QGP~\cite{Rafelski1982} and converted to strange hadrons after hadronization. Enhanced production of strange hadrons has thus been considered a good signature for the formation of the QGP in relativistic heavy ion collisions. For example, the well known peak in the $\langle K^+ \rangle$/$\langle \pi^+ \rangle$ ratio in central Pb+Pb collisions at a beam energy of 30 A GeV~\cite{Kaon2Alt:2007aa}, the change of the $\Omega/\phi$ ratios scaled by the number of constituent quarks in central Au+Au collisions between $\sqrt{s_{\rm NN}}$~=11.5 GeV and $\sqrt{s_{\rm NN}}$~$\geq$ 19.6 GeV~\cite{OmPhiAdamczyk:2015lvo}, and the non-monotonic suppression of the nuclear modification factor $R_{\rm CP}$ for charged hadrons including the kaons from $\sqrt{s_{\rm NN}}$~=62.4 GeV to 7.7 GeV~\cite{Adamczyk:2017nof} have all been considered as signals for the onset of the deconfinement transition in the matter produced in relativistic heavy ion collisions.
In the present study, we analyze the published data on \rm $\Xi^-$, \rm $K^+$, \rm $\Lambda$~and \rm $\phi$~yields in central Pb+Pb collisions at SPS energies from the NA49 Collaboration~\cite{Kaon1Afanasiev:2002mx,Kaon2Alt:2007aa,phiAlt:2008iv,LaXi-SPSAlt:2008qm} and in central Au+Au collisions at RHIC energies from the STAR collaboration~\cite{La-STARAdam:2019koz,K-STARAdamczyk:2017iwn,Phi-STARAdamczyk:2015lvo,STAR200,phi200,K130,la130,Xi130,Abelev:2008ab} to find the dependence of the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ on $\sqrt{s_{\rm NN}}$. We then use the quark coalescence model~\cite{CSERNAI1986223,PhysRevC.66.025205,PhysRevLett.90.202303,PhysRevLett.90.202302,PhysRevLett.91.092301,PhysRevC.78.034907,PhysRevC.74.064902,Zhang:2019bkf}
to interpret the result. In particular, we show that in the quark coalescence model the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ is sensitive to the strange quark relative density fluctuation $\Delta \rho_{s} = \langle(\delta \rho_{s})^2\rangle/\langle \rho_{s}\rangle^2$~at the QGP to hadronic matter phase transition. It is known from the success of the statistical model in describing the yield ratios of hadrons that the chemical freeze-out in heavy-ion collisions occurs at the phase transition temperature and that these ratios remain essentially unchanged during the hadronic evolution. As shown in Ref.~\cite{Jun2017}, this is due to the constancy of entropy per particle during the evolution from the chemical to the kinetic freeze-out. Because of the constancy of the yield ratios of hadrons during the hadronic evolution, studying their dependence on the collision energy is expected to provide a unique probe of the quark density fluctuations during the first-order phase transition of the QGP to the hadronic matter, which would help locate the CEP in the QCD phase diagram.
\begin{table*}[htbp]
\centering
\caption{Yields of $\Xi^-$, $K^+$, $\Lambda$ and $\phi$ in full rapidity space from central (0-7.2\% centrality) Pb+Pb collisions at SPS energies measured by the NA49 Collaboration~\cite{Kaon1Afanasiev:2002mx,Kaon2Alt:2007aa,phiAlt:2008iv,LaXi-SPSAlt:2008qm}. Only statistical uncertainties are listed. Also given are the yield ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ extracted from experimental data and the extracted relative strange quark fluctuation $\Delta s$ based on the quark coalescence model using the hadronization temperature $T_C$~\cite{Wheaton:2004qb}. The units for E, $\sqrt{s_{\rm NN}}$,~and $T_C$ are A GeV, GeV, and MeV, respectively.}
\label{Tab1}
\begin{tabular}{lccccccccc}
\hline
E & $\sqrt{s_{\rm NN}}$ & $\Xi^-$ & $K^+$ &$\Lambda$ &$\phi$ &$\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ &$T_C$ & $\Delta s$ \\
\hline \hline
20 &6.3 &1.50$\pm$0.13 &40.7$\pm$0.7 &27.1$\pm$0.2 &1.89$\pm$0.31 &1.19$\pm$0.22 &131.3 & 0.08 $\pm$ 0.20\\
30 &7.6 &2.42$\pm$0.19 &52.9$\pm$0.9 &36.9$\pm$0.3 &1.84$\pm$0.22 &1.88$\pm$0.27 &140.1 &0.71 $\pm$ 0.25\\
40 &8.8 &2.96$\pm$0.20 &59.1$\pm$1.9 &43.1$\pm$0.4 &2.55$\pm$0.17 &1.59$\pm$0.16 &146.1 & 0.44 $\pm$ 0.15\\
80 &12.3 &3.80$\pm$0.26 &76.9$\pm$2.0 &50.1$\pm$0.6 &4.04$\pm$0.19 &1.44$\pm$0.13 &153.5 & 0.31 $\pm$ 0.12\\
\hline
\end{tabular}
\end{table*}
\begin{table*}[htbp]
\centering
\caption{Same as TABLE~\ref{Tab1} for midrapidity strange hadrons except the last two columns, which give the yield ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ from the statistical model~\cite{Wheaton:2004qb} and the coalescence model using the hadronization temperature $T_C$. For the statistical model, the percentage of contributions from different decay channels is taken from that calculated at $\sqrt{s_{\rm NN}}$~= 200 GeV.}
\label{Tab2}
\begin{tabular}{lccccccccccc}
\hline
E & $\sqrt{s_{\rm NN}}$ &$\Xi^-$ & $K^+$ &$\Lambda$ &$\phi$ &$\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ &$T_C$ &$\Delta s$ &stat. model &COAL-SH \\
\hline \hline
20 &6.3 &0.93$\pm$0.13 &16.4$\pm$0.6 &13.4$\pm$0.1 &1.17$\pm$0.23 &0.97$\pm$0.24 &131.3 & 0$^{+0.10}$ &1.30 &1.10 \\
30 &7.6 &1.17$\pm$0.13 &21.2$\pm$0.8 &14.7$\pm$0.2 &0.94$\pm$0.13 &1.79$\pm$0.33 &140.1 & 0.63 $\pm$ 0.30 &1.40 &1.10 \\
40 &8.8 &1.15$\pm$0.11 &20.1$\pm$0.3 &14.6$\pm$0.2 &1.16$\pm$0.16 &1.36$\pm$0.23 &146.1 & 0.24 $\pm$ 0.21 &1.33 &1.10 \\
80 &12.3 &1.22$\pm$0.14 &24.6$\pm$0.2 &12.9$\pm$0.2 &1.52$\pm$0.11 &1.53$\pm$0.21 &153.5 & 0.39 $\pm$ 0.19 &1.23 &1.10 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[htbp]
\centering
\caption{Same as Table~II for central Au+Au collisions at RHIC energies from the STAR Collaboration~\cite{La-STARAdam:2019koz,K-STARAdamczyk:2017iwn,Phi-STARAdamczyk:2015lvo,STAR200,phi200,K130,la130,Xi130,Abelev:2008ab}, except that the centrality is 0-5\% at $\sqrt{s_{\rm NN}}=200$ GeV and the quoted errors for $K^+$ are the quadratic sum of statistical and the dominant systematic uncertainties.}
\label{Tab3}
\begin{tabular}{lccccccccccc}
\hline
$\sqrt{s_{\rm NN}}$ & $\Xi^-$ &$K^+$ &$\Lambda$ &$\phi$ &$\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ &$T_C$(MeV) & $\Delta s$ &stat. model &COAL-SH \\
\hline \hline
7.7 &1.11$\pm$0.02 &19.06$\pm$0.42 &13.21$\pm$0.08 &1.23$\pm$0.11 &1.30$\pm$0.12 &144.3 & 0.18 $\pm$ 0.11 &1.40 &1.10 \\
11.5 &1.21$\pm$0.01 &22.89$\pm$0.47 &12.62$\pm$0.06 &1.68$\pm$0.11 &1.31$\pm$0.09 &149.4 & 0.19 $\pm$ 0.08 &1.27 &1.10 \\
19.6 &1.50$\pm$0.01 &26.94$\pm$0.53 &11.37$\pm$0.03 &2.58$\pm$0.14 &1.38$\pm$0.08 &153.9 & 0.25 $\pm$ 0.07 &1.21 &1.10 \\
27 &1.49$\pm$0.01 &28.48$\pm$0.56 &10.65$\pm$0.03 &3.05$\pm$0.17 &1.31$\pm$0.08 &155.0 & 0.19 $\pm$ 0.07 &1.20 &1.10 \\
39 &1.39$\pm$0.01 &29.88$\pm$0.58 &9.70$\pm$0.02 &3.33$\pm$0.17 &1.29$\pm$0.07 &156.4 & 0.17 $\pm$ 0.06 &1.20 &1.10 \\
130 &2.04$\pm$0.16 &42.10$\pm$0.42 &15.00$\pm$0.27 &5.73$\pm$0.37 &1.00$\pm$0.10 & 165 & 0$^{+0}$ &1.20 &1.10 \\
200 &2.17$\pm$0.06 &51.30$\pm$6.50 &14.80$\pm$0.20 &7.39$\pm$0.11 &1.02$\pm$0.13 &166 & 0$^{+0.05}$ &1.20 &1.10 \\
\hline
\end{tabular}
\end{table*}
We first summarize in Tables~\ref{Tab1} and \ref{Tab2} the experimental data in full rapidity space and midrapidity, respectively, from Pb+Pb collisions and in Table~\ref{Tab3} those in midrapidity from Au+Au collisions. Also shown is the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ extracted from these data. To see more clearly the collision energy dependence of the ratio, we plot in Fig.~\ref{Fig1} its dependence on $\sqrt{s_{\rm NN}}$, where only the statistical errors are shown because most of the systematic errors cancel out in calculating the ratio. As in the analysis based on the statistical hadronization model~\cite{PhysRevC.82.011901}, we do not include the data from Pb+Pb collisions at 158 A GeV because of the different centrality bins used for \rm $K^+$, \rm $\phi$, \rm $\Xi^-$, and \rm $\Lambda$. A non-monotonic behavior in the dependence of the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ on the collision energy is clearly seen at $\sqrt{s_{\rm NN}}$~$\sim$~8 GeV, which is similar to that found in Refs.~\cite{Sun:2017xrx,SUN2018499} from the yield ratio $\mathcal{O}_{\text{p-d-t}}$.
\begin{figure}[!h]
\includegraphics[scale=0.43]{s-ratio-E.pdf}
\caption{Collision energy $\sqrt{s_{\rm NN}}$~dependence of the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$=$\frac{N(K^+)N(\Xi^-)}{N(\phi)N(\Lambda)}$~in central Pb+Pb collisions at SPS energies and in central Au+Au collisions at RHIC energies. Filled and open circles denote, respectively, the ratio obtained at midrapidity and in full rapidity space. Error bars represent the statistical uncertainties. The horizontal line on the right side of the figure shows the ratio calculated from the COAL-SH model without quark density fluctuations~\cite{Sun:2017ooe}. The dashed lines show the ratio calculated from the statistical model~\cite{Wheaton:2004qb}.}
\label{Fig1}
\end{figure}
To interpret the experimental results, we use the analytical coalescence formula COAL-SH developed in Ref.~\cite{Sun:2017ooe} to calculate the hadron yield at the QGP to hadronic matter phase transition temperature $T_C$. According to this formula, the yield $N_h$ of a hadron species $h$ consisting of $A$ constituent quarks of masses $m_i$, produced from a QGP containing $N_i$ quarks of species $i$ uniformly distributed in a volume $V_C$, is given by
\begin{eqnarray}\label{Eq1}
N_h &=& g_cg_{\rm rel}g_{\rm size}\left(\sum_{i=1}^Am_i\right)^{3/2}\bigg[\prod_{i=1}^A \frac{N_i}{m_i^{3/2}}\bigg] \nonumber\\
&\times& \prod_{i=1}^{A-1} \frac{(4\pi/\omega)^{3/2}}{V_Cx(1+x^2)}\bigg(\frac{x^2}{1+x^2}\bigg)^{l_i}G(l_i,x).
\end{eqnarray}
In the above, $g_c = (2S+1)/6^A$ is the coalescence factor for colored constituent quarks of spin $1/2$ to form a colorless hadron of spin $S$, $g_{\rm size}$ is the correction due to the finite size of the produced hadron, which is taken to be unity because the QGP is much larger than the produced hadrons, and $g_{\rm rel}$ is the relativistic correction given by
\begin{eqnarray}\label{Eq1.1}
g_{\rm rel} \approx \left[1+\frac{15}{8}T\left(\sum_{i=1}^{A}m_i\right)^{-1}\right]\prod_{i=1}^A \left(1+\frac{15}{8}\frac{T}{m_i}\right)^{-1}.
\end{eqnarray}
For the quantities in the second line of the equation, $\omega$ is the oscillator frequency used to obtain the quark wave functions inside the hadron, $x = (2T_C/\omega)^{1/2}$, $l_i$ is the orbital angular momentum associated with the $i$-th relative coordinate, and $G(l,x)=\sum_{k=0}^l\frac{l!}{k!(l-k)!}\frac{1}{(2k+1)x^{2k}}$ is the suppression factor due to the orbital angular momentum on the coalescence probability.
For the four strange hadrons considered in the present study, their constituent quarks are all in the $s$-state ($l$ = 0) according to the constituent quark model, which leads to $G(l,x) = 1$. The yields for these four strange hadrons are then given by:
\begin{eqnarray}
\label{Eq2}
N_{K^+}& = & g_{K^+} \frac{(m_u + m_{\bar{s}})^{3/2}}{m_u^{3/2}m_{\bar{s}}^{3/2}} \frac{N_uN_{\bar{s}}}{V_C}
\frac{(2\pi/T_C)^{3/2}}{1+\omega/(2T_C)},\\
\label{Eq3}N_{\Xi^-} &= & g_{\Xi^-} \frac{(m_d + 2m_s)^{3/2}}{m_d^{3/2}m_s^3} \frac{N_dN_s^2}{V_C^2}
\frac{(2\pi/T_C)^3}{[1+\omega/(2T_C)]^2},\\
\label{Eq4}N_{\phi} &=& g_{\phi} \frac{(m_s + m_{\bar{s}})^{3/2}}{m_s^{3/2}m_{\bar{s}}^{3/2}} \frac{N_sN_{\bar{s}}}{V_C}
\frac{(2\pi/T_C)^{3/2}}{1+\omega/(2T_C)},\\
\label{Eq5}N_{\Lambda} &=& g_{\Lambda} \frac{(m_u + m_d + m_s)^{3/2}}{m_u^{3/2}m_d^{3/2}m_s^{3/2}} \frac{N_uN_dN_s}{V_C^2}
\frac{(2\pi/T_C)^3}{[1+\omega/(2T_C)]^2},\nonumber\\
\end{eqnarray}
where $g_{K^+}=g_{\rm rel, K^+}/36$, $g_{\Xi^-}=g_{\rm rel, \Xi^-}/108$, $g_{\phi}=g_{\rm rel, \phi}/12$, and $g_{\Lambda}=g_{\rm rel, \Lambda}/108$.
The above results can be generalized to take into account quark density fluctuations by expressing the quark density distributions as
\begin{eqnarray}\label{Eq6}
n_q(\vec{r}) = \frac{1}{V_C}\int n_q(\vec{r}\,')d\vec{r}\,' + \delta n_q(\vec{r}) = \langle q\rangle + \delta q(\vec{r}),
\end{eqnarray}
where $\langle \cdot \rangle$ denotes the average value over the coordinate space and $\delta q(\vec{r})$ with $\langle \delta q\rangle=0$ is its deviation from the average value $\langle q\rangle$. Defining the quark relative density fluctuations $\Delta q=\langle(\delta q)^2\rangle/\langle q\rangle^2$ and the quark density fluctuation correlation coefficients $\alpha_{q_1q_2}=\langle\delta q_1\delta q_2\rangle/(\langle q_1\rangle\langle q_2\rangle)$, and neglecting higher-order correlation coefficients $\alpha_{q_1q_2q_3}=\langle\delta q_1\delta q_2\delta q_3\rangle/(\langle q_1\rangle\langle q_2\rangle\langle q_3\rangle)$, Eqs.~(\ref{Eq2})-(\ref{Eq5}) can be rewritten as
\begin{eqnarray}
\label{Eq17}N_{K^+} &=& g_{K^+}\frac{(2\pi/T_C)^{3/2}}{1+\omega/(2T_C)}V_C\langle \bar{s}\rangle\langle u\rangle(1 + \alpha_{\bar{s}u}),\\
\label{Eq18}N_{\Xi^-} &=& g_{\Xi^-}\frac{(2\pi/T_C)^3}{[1+\omega/(2T_C)]^2}
V_C\langle s\rangle^2\langle d\rangle\nonumber\\
&&\times(1 + \Delta s + 2\alpha_{sd}),\\
\label{Eq19}N_{\phi} &=& g_{\phi}\frac{(2\pi/T_C)^{3/2}}{1+\omega/(2T_C)}V_C \langle s\rangle\langle \bar s\rangle(1 + \alpha_{s\bar{s}}),\\
\label{Eq20}N_{\Lambda} &=& g_{\Lambda}\frac{(2\pi/T_C)^3}{[1+\omega/(2T_C)]^2}
V_C\langle s\rangle\langle u\rangle\langle d\rangle\nonumber\\
&&\times(1 + \alpha_{sd} + \alpha_{su}+ \alpha_{ud}).
\end{eqnarray}
These equations then lead to the following expression for the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$:
\begin{eqnarray}\label{Eq21}
\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$} &=& \frac{1}{3} \frac{g_{\rm rel,K^+}g_{\rm rel,\Xi^-}}{g_{\rm rel,\phi}g_{\rm rel,\Lambda}}\nonumber\\
&\times&\frac{(m_u + m_{\bar{s}})^{3/2}(m_d + 2m_s)^{3/2}}{(m_s + m_{\bar{s}})^{3/2}(m_u + m_d + m_s)^{3/2}} \nonumber\\
&\times& \frac{(1 + \alpha_{\bar{s}u})(1 + \Delta s+2\alpha_{sd})}{(1 + \alpha_{s\bar{s}})(1+\alpha_{sd} + \alpha_{su} + \alpha_{ud})}.
\end{eqnarray}
In the absence of quark density fluctuations, the yield ratio becomes simply $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}=g$,
with the constant $g$ denoting the expressions in the first two lines of Eq.(\ref{Eq21}). Since the value of $g$ changes very little for temperatures between 100 and 160 MeV, the value of $
\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ is essentially independent of the collision energy, contradicting the non-monotonic dependence on the collision energy seen in experiments. However, the value of $g$ depends on the contribution to the four ground state strange hadrons from decays of other
strange hadrons and their resonances that are included in the COAL-SH calculations. In the present study,
we include the decay of $\Xi(1530)$ to \rm $\Xi^-$, the decay of $\Sigma^0(1192)$, $\Sigma(1385)$,
$\Lambda(1405)$ and $\Lambda(1520)$ to $\Lambda$, and the decay of $K^*(892)$, $K_1(1270)$ and $K_1(1400)$
to $K^+$. The resulting numbers of $K^+$, $\Xi^-$, and $\Lambda(1115)$ that should be compared with the measured ones are $N_{K^+}^{\text{measured}} = N_{K^+} +
\frac{1}{3}N_{K^{*+}(892)} + \frac{2}{3}N_{K^{*0}(892)}+0.51N_{K_1^+(1270)} +0.47N_{K_1^0(1270)}
+0.56N_{K_1^+(1400)}+0.44N_{K_1^0(1400)}\approx 10 N_{K^+}$, $N_{\Xi^-}^{\text{measured}} = N_{\Xi^-} + \frac{1}
{2}N_{\Xi(1530)} = 3N_{\Xi^-}$, and $N_{\Lambda}^{\text{measured}} = N_{\Lambda} + \frac{1}{3} N_{\Sigma(1192)} +
(0.87+\frac{0.11}{3})N_{\Sigma(1385)} + \frac{1}{3}N_{\Lambda(1405)}+0.25N_{\Lambda(1520)}=
8.27N_{\Lambda(1115)}$. In obtaining the contribution from $\Lambda(1405)$ and $\Lambda(1520)$ to $\Lambda(1115)$, we have taken into account the suppression due to quark orbital angular momentum as one of the quarks in these two resonances is in the $p-$state ($l=1$). For the $\phi$ meson, we assume no strong and electromagnetic decay corrections from resonances and thus have $N_{\phi}^{\text{measured}} = N_{\phi}$. Here $N_{K^+}$, $N_{\Xi^-}$, $N_{\phi}$ and $N_{\Lambda}$ represent the corresponding hadron yields obtained directly from the quark coalescence model. Using the constituent quark masses $m_u=m_d=0.3$ GeV and $m_s=0.5$ GeV, the phase transition temperature $T_C$ taken from the parametrization given in Ref.~\cite{Cleymans:2005xv} based on the statistical model fit to the available experimental data, we obtain the values of $g=1.1$ from COAL-SH. The corresponding value for the yield ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}=g$ is shown by the solid line on the right side of Fig.~\ref{Fig1} and also given in Tables~\ref{Tab2} and~\ref{Tab3}. It is seen that the yield ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ extracted from experimental data at $\sqrt{s_{\rm NN}}$ = 200 GeV, where one does not expect any quark density fluctuations, is described reasonably well by COAL-SH.
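The quoted value $g=1.1$ can be reproduced directly from the first two lines of Eq.(\ref{Eq21}) together with the decay feed-down factors listed above; the following sketch (in Python, with the constituent quark masses, temperature and feed-down multipliers taken from the text) makes the bookkeeping explicit:
\begin{verbatim}
m_u = m_d = 0.3          # constituent quark masses (GeV)
m_s = 0.5
T_C = 0.155              # representative hadronization temperature (GeV)

def g_rel(masses, T=T_C):
    """Relativistic correction factor for the given constituents."""
    corr = 1.0 + 15.0/8.0*T/sum(masses)
    for m in masses:
        corr /= 1.0 + 15.0/8.0*T/m
    return corr

rel = (g_rel([m_u, m_s]) * g_rel([m_d, m_s, m_s]) /
       (g_rel([m_s, m_s]) * g_rel([m_u, m_d, m_s])))
mass = ((m_u + m_s)**1.5 * (m_d + 2*m_s)**1.5 /
        ((m_s + m_s)**1.5 * (m_u + m_d + m_s)**1.5))
g_primordial = rel*mass/3.0

# feed-down: N_measured = factor * N_primordial (factors from the text)
feed_K, feed_Xi, feed_phi, feed_Lam = 10.0, 3.0, 1.0, 8.27
g = g_primordial * feed_K*feed_Xi / (feed_phi*feed_Lam)
print(f"g = {g:.2f}")   # -> g ~= 1.14, compatible with the quoted g = 1.1
\end{verbatim}
Repeating the evaluation over $T_C = 100\hbox{--}160$ MeV changes $g$ only at the percent level, consistent with the statement above.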
Eq.(\ref{Eq21}) shows that to extract the strange quark relative density fluctuation $\Delta s$, one needs information on the quark density fluctuation correlation coefficients $\alpha_{q_1q_2}$. Without information about these coefficients, we consider two extreme cases of uncorrelated and strongly correlated density fluctuations of quarks of different flavors. For uncorrelated quark density fluctuations, one has $\langle\delta q_1\delta q_2\rangle=\langle\delta q_1\rangle\langle\delta q_2\rangle$ and thus $\alpha_{q_1q_2}=0$, which leads to
\begin{eqnarray}\label{Eq22}
\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$} = g(1 + \Delta s).
\end{eqnarray}
The yield ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ is then linearly proportional to the strange quark relative density fluctuation $\Delta s$. For strongly correlated quark density fluctuations, one has instead $\langle\delta q_1\delta q_2\rangle=\sqrt{\langle(\delta q_1)^2\rangle\langle(\delta q_2)^2\rangle}$ and thus $\alpha_{q_1q_2}=\sqrt{\Delta q_1\Delta q_2}$. The yield ratio is then
\begin{eqnarray}\label{Eq23}
\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}&=&g\frac{1 + \sqrt{\Delta \bar{s}\Delta u}}{1+\sqrt{\Delta s \Delta \bar{s}}}\nonumber\\
&\times&\frac{1 + \Delta s+2\sqrt{\Delta s\Delta d}}{1+\sqrt{\Delta s\Delta d} + \sqrt{\Delta s\Delta u} + \sqrt{\Delta u\Delta d}}.
\end{eqnarray}
In the limit of SU(3) symmetry, i.e., the $u$, $d$, and $s$ current quark masses are the same, one has $\Delta u=\Delta d=\Delta s$. The yield ratio in this case then has the constant value $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}=g$ and is independent of the collision energy. Since the SU(3) symmetry is broken with the strange quark mass much larger than those of $u$ and $d$ quarks, which would lead to different interactions for $s$ quarks than $u$ and $d$ quarks in hot dense quark matter~\cite{Song:2012cd}, $\Delta s$ could be different from $\Delta u$ and $\Delta d$. As a result, one expects the yield ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ to show some collision energy dependence.
We note that a similar expression can be derived for the dependence of the yield ratio $\mathcal{O}_\text{$\bar K^0$-p-$\pi^+$-$\Lambda$}=\frac{N_{\bar K^0}N_p}{N_{\pi^+}N_\Lambda}$ on the $u$ quark relative density fluctuation, but it is challenging to measure $\bar K^0$ in experiments since it always mixes with $K^0$.
To illustrate the possible dependence of the strange quark relative density fluctuation on the collision energy in heavy ion collisions, we show in Tables \ref{Tab1}-\ref{Tab3} the extracted values of $\Delta s$ for the case of $\alpha_{q_1q_2}=0$ using the value $g=1.1$ from COAL-SH. It is seen that $\Delta s$ shows a non-monotonic behavior in its dependence on the collision energy, and this can be understood as follows. For central collisions at higher incident energies, when the baryon chemical potential of the produced QGP is small, the phase transition from the QGP to the hadronic matter is likely a smooth crossover. The density fluctuations in the produced matter at these collision energies are thus insignificant. As the collision energy decreases, the produced matter may have its evolution trajectory in the temperature and baryon chemical potential plane pass by or approach closely to the CEP of the QCD phase diagram and can thus develop a large density fluctuation. With further decrease in collision energy, its trajectory moves away from the CEP and enters the region of a first-order phase transition. Because of the spinodal instability associated with the first-order phase transition~\cite{Steinheimer:2012gc}, the hot dense matter may also develop large density fluctuations. With further decrease in collision energy, the density fluctuation diminishes as a result of the smaller size and shorter lifetime of the produced QGP. In this picture, the non-monotonic behavior shown in Fig.~\ref{Fig1} for the $\sqrt{s_{\rm NN}}$~dependence of the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ thus indicates that the evolution trajectory of the produced QGP in these collisions may have reached or closely approached the CEP or have undergone a first-order phase transition.
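For the uncorrelated case of Eq.(\ref{Eq22}), the extraction of $\Delta s$ amounts to inverting the measured ratio; a sketch applied to the Au+Au values of Table~\ref{Tab3}, with errors propagated linearly and $\Delta s \ge 0$ enforced (the published tables quote asymmetric errors in the clipped cases):
\begin{verbatim}
import numpy as np

g = 1.1                                  # COAL-SH value without fluctuations
sqrt_s = np.array([7.7, 11.5, 19.6, 27.0, 39.0, 130.0, 200.0])  # GeV
ratio  = np.array([1.30, 1.31, 1.38, 1.31, 1.29, 1.00, 1.02])
err    = np.array([0.12, 0.09, 0.08, 0.08, 0.07, 0.10, 0.13])

delta_s = np.clip(ratio/g - 1.0, 0.0, None)   # Delta s >= 0 by definition
for e, d, de in zip(sqrt_s, delta_s, err/g):
    print(f"sqrt(s_NN) = {e:6.1f} GeV : Delta s = {d:.2f} +/- {de:.2f}")
\end{verbatim}
This reproduces the $\Delta s$ column of Table~\ref{Tab3}, e.g.\ $\Delta s = 0.18 \pm 0.11$ at $\sqrt{s_{\rm NN}}$~= 7.7 GeV.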
The non-monotonic collision energy dependence of the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$, particularly the possible peak at $\sqrt{s_{\rm NN}}$$\sim 8$ GeV cannot be explained by the statistical hadronization model~\cite{KOCH1986167,Cho:2017dcy,Cho:2010db,Cho:2011ew} either. In this model, the number of hadrons of a given type $h$ produced at the chemical freeze-out temperature $T_C$ and volume $V_C$ is given in the non-relativistic limit by
\begin{eqnarray}\label{Eq24}
N_{h}^{stat} = \gamma_{h} g_{h} V_C\bigg(\frac{m_hT_C}{2\pi}\bigg)^{3/2}e^{-m_{h}/T_C},
\end{eqnarray}
where $g_{h}$ is the degeneracy of the hadron, $\gamma_{h}$ is the fugacity, and $m_{h}$ is the mass of the hadron. The ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ in this model is then given by
\begin{eqnarray}\label{Eq25}
\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$} &=& \frac{1}{3}\frac{m_{K^+}^{3/2} m_{\Xi^-}^{3/2}}{m_{\phi}^{3/2} m_{\Lambda}^{3/2}} \nonumber \\
&\times& e^{-(m_{\Xi^-}+m_{K^+}-m_{\Lambda}-m_{\phi})/T_C}.
\end{eqnarray}
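For orientation, Eq.(\ref{Eq25}) can be evaluated directly with the vacuum hadron masses (approximate PDG values, in GeV); this gives the primordial statistical-model ratio, before the resonance decay contributions are added:
\begin{verbatim}
import numpy as np

m_K, m_Xi, m_phi, m_Lam = 0.494, 1.321, 1.019, 1.116   # GeV

def ratio_stat(T_C):
    """Primordial ratio of Eq. (25); T_C in GeV."""
    prefac = (m_K*m_Xi)**1.5 / (m_phi*m_Lam)**1.5 / 3.0
    return prefac * np.exp(-(m_Xi + m_K - m_Lam - m_phi)/T_C)

for T in (0.131, 0.146, 0.155, 0.166):
    print(f"T_C = {1e3*T:3.0f} MeV : ratio = {ratio_stat(T):.2f}")
\end{verbatim}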
To include the contribution from resonance decays, we use the THERMUS package~\cite{Wheaton:2004qb} with the chemical freeze-out temperature and baryon chemical potential determined in Ref.~\cite{Cleymans:2005xv}. The obtained results for the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ of midrapidity $K^+$, $\Xi^-$, $\phi$ and $\Lambda(1115)$ are summarized in Tables~\ref{Tab2} and \ref{Tab3} and also shown by the dashed lines in Fig.~\ref{Fig1}. We note that the value of the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ in central (0-10\%) Pb+Pb collisions at LHC energy of 2.76 TeV is $1.09\pm0.07$, which is consistent with the value of 1.20 from the statistical model calculation. Results from both the COAL-SH model and the statistical model clearly fail to quantitatively describe the non-monotonic collision energy dependence of the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ extracted from the experimental data in a wide range of collision energies.
We note that the uncertainties of the ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ extracted from experimental data are large. Further experimental and theoretical investigations are needed to verify and extend the present results and to eventually establish the strange hadron yield ratio as a robust probe of the QCD critical point. In this respect, the ongoing phase II of the BES program at RHIC, which can measure precisely the total multiplicity of these strange hadrons in full rapidity space, will be very useful for studying the CEP in the QCD phase diagram via the yield ratio of strange hadrons. It is also of interest to study the strange quark density fluctuation in small collision systems to compare with the results from large collision systems. The ALICE collaboration~\cite{ALICE:2017jyt} has recently measured the multiplicity dependence of the yield ratio of multistrange baryons in $pp$ collisions at $\sqrt{s}$ = 7 TeV, and it will be very useful to have data also for charged kaons and $\phi$ mesons from events with small multiplicities.
In summary, we have found a possible non-monotonic behavior in the collision energy dependence of the yield ratio $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$ that is extracted from the measured yields of strange hadrons in central Pb+Pb collisions at SPS energies from the NA49 Collaboration and in central Au+Au collisions at RHIC energies from the STAR Collaboration. This behavior cannot be explained by the usual coalescence model with uniform quark density distributions or by the statistical hadronization model. Including quark density fluctuations in the coalescence model allows one to extract the collision energy dependence of the strange quark fluctuation from that of $\mathcal{O}_\text{K-$\Xi$-\rm $\phi$-\rm $\Lambda$}$. Our study thus suggests that the CEP in the QCD phase diagram may have been reached or closely approached in heavy ion collisions at SPS and at RHIC BES energies. Future studies of strange hadron production in the phase II of the BES program at RHIC and other heavy-ion collision experiments, which will provide more accurate measurements of their multiplicities, are needed to verify the present observations and to further advance our knowledge on the location of the CEP in the QCD phase diagram.
The authors thank Xianglei Zhu for helpful discussions. The work of T.S. and J.C. was supported in part by the National Natural Science Foundation of China under Contract Nos. 11890710, 11775288, 11421505 and 11520101004, while that of C.M.K. and K.J.S. was supported by the US Department of Energy under Contract No. DE-SC0015266 and the Welch Foundation under Grant No. A-1358.
\section{Introduction}
The study of Mg\,{\sc ii}\ absorption line systems in the spectra of quasars (QSOs) has
been useful in detecting distant normal field galaxies situated close to the lines of sight of QSOs
\citep{1991A&A...243..344B,1994ApJ...437L..75S}. Conventionally,
all such absorbers with
velocity $<5000$ ${\rm km\ s}^{-1}$\ relative to the background QSO are believed to
be associated with the QSO itself (`associated systems'), while those at
larger velocity offset are believed to be entirely independent of
background QSO.
This general belief was questioned recently by puzzling results on the abundance of strong Mg\,{\sc ii}\ absorbers having equivalent width ($W_r$) larger than 1.0 \AA: (i) by \citet{Prochter2006ApJ...648L..93P}, who found a $2\hbox{--}4$ times excess of strong Mg\,{\sc ii}\ absorbers towards $\gamma$-ray burst (GRB) sources relative to QSO sight lines (see also \citealt{2007ApJ...669..741S,Vergani2009A&A...503..771V,Tejos2009ApJ...706.1309T}), and (ii) by \citet[]{Bergeron2011A&A...525A..51B}, who found a similar excess, by a factor of about 2 (3 $\sigma$ confidence), towards 45 blazar sight lines.
These counter-intuitive results have inspired many alternative explanations, such as dust extinction towards QSO sight lines, which can lower the apparent incidence rate of absorbers, or gravitational lensing, which can increase it towards GRBs/blazars, but none has been found to explain the above discrepancies \citep{Porciani2007ApJ...659..218P,Menard2008MNRAS.385.1053M,Lawther2012A&A...546A..67L}.
However, blazars, as a class, are believed to have relativistic jets pointed close to our line of sight.
\cite{Bergeron2011A&A...525A..51B} speculated that such powerful jets in blazars are capable of sweeping up sufficiently large column densities of gas (up to $10^{18}\hbox{--}10^{20}~{\rm cm}^{-2}$) and accelerating such clouds to velocities of order $\sim 0.1c$, thereby possibly accounting for the excess of Mg\,{\sc ii}\ absorption systems towards blazars in comparison with QSOs. However, such an excess in the number of Mg\,{\sc ii}\ absorbers per unit redshift ($dN/dz$) was not confirmed in the analysis of flat-spectrum radio quasars (FSRQs) by \citet{Chand2012ApJ...754...38C}, though FSRQs also possess powerful jets similar to those of blazars. They reconciled this with the above hypothesis of relativistically ejected absorbing clouds by suggesting that, perhaps due to the larger angle of the jet from the line of sight (unlike blazars, with smaller angles), these accelerated clouds might not intersect the line of sight in the case of FSRQs.
Using a larger sample of 95 GRBs (including 12 GRBs from \citealt{Prochter2006ApJ...648L..93P}), \citet{Cucchiara2012arXiv1211.6528C} did not confirm the original enhancement found for GRBs by \citet{Prochter2006ApJ...648L..93P}, though a signature of a mild excess of about 1.5 times was noticed for strong Mg\,{\sc ii}\ absorption systems, albeit with only a low confidence level of 90\%.
A firm conclusion on the jet-based origin of the above excess still awaits realistic numerical modelling of the jet-ambient gas interaction, especially for the excess seen towards blazars (about a factor of 2) and CDQs (about 10\%) \citep{Joshi_etal} vis-a-vis normal QSOs.
However, an alternative scenario, which could be more plausible, is that of dust or radiation driven outflows \citep[e.g.][]{1995ApJ...451..510S}.
For instance, if these outflows contribute to the $dN/dz$ of strong Mg\,{\sc ii}\ absorbers, then one would expect the AGN luminosity to be statistically correlated with the velocity offset of the strong Mg\,{\sc ii}\ absorber relative to the background AGN, which is usually defined by
\begin{equation}
\beta = \frac{(1+z_{\rm qso})^2-(1+z_{\rm abs})^2}{(1+z_{\rm qso})^2+(1+z_{\rm abs})^2}
\end{equation}
where $\beta=v/c$, $z_{\rm qso}$ is the emission redshift of the QSO and $z_{\rm abs}$ is the absorption redshift of the Mg\,{\sc ii}\ system.
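Numerically, $\beta$ follows directly from the two redshifts; a one-line Python implementation of the definition above:
\begin{verbatim}
def beta(z_qso, z_abs):
    """Velocity offset v/c of an absorber relative to the background QSO."""
    a, b = (1.0 + z_qso)**2, (1.0 + z_abs)**2
    return (a - b)/(a + b)

print(beta(2.0, 1.5))   # ~0.18
\end{verbatim}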
In this letter we report a correlation between the $\beta$ of strong Mg\,{\sc ii}\ absorbers and the bolometric luminosity ($L_{\rm bol}$) of QSOs, using the strong Mg\,{\sc ii}\ absorber catalogue by \citet{Lawther2012A&A...546A..67L}. We also propose an explanation for this correlation which draws upon radiation driven outflow models. In \S 2 we describe the sample of strong Mg\,{\sc ii}\ absorbers and our selection criteria. In \S 3 we present our results and a theoretical model of radiation driven outflows. In \S 4 we study the fractional number counts of absorbers, and discuss our results in \S 5.
\section{Description of the sample}
We consider a sample of 10367 strong Mg\,{\sc ii}\ absorbers with equivalent
width $W_r(2796) > 1$\AA\ belonging to 9144 QSOs, from the recent
compilation by \citet{Lawther2012A&A...546A..67L} based on 105783 QSOs
of SDSS DR7 \citep{Abazajian2009ApJS..182..543A,Schneider2010yCat.7260....0S}. However, the
range of $\beta$ varies with $z_{\rm qso}$ and with the observed wavelength range of the spectrum. Therefore, in order to make the sample unbiased, we first consider an SDSS spectral range from 4000 to 9000 \AA, which is a little narrower (by about 100 \AA) than the actual one.
We then applied the following four selection filters.
\begin{enumerate}
\item We removed 773 broad absorption line (BAL) QSOs from our sample to avoid any contamination of our analysis by BAL features, which resulted in the removal of the corresponding 931 strong Mg\,{\sc ii}\ absorbers.
\item For all the quasars having $z_{\rm qso} > 2.21024$, the Mg\,{\sc ii}\ emission line will fall above 9000 \AA, which is our conservative upper limit on the wavelength of the SDSS spectra. As a result, the SDSS spectra of such sources do not allow the detection of a strong Mg\,{\sc ii}\ doublet falling in the redshift range from 2.21024 up to $z_{\rm qso}$. Therefore, to avoid this observational bias, we excluded all sources having $z_{\rm qso} \ge 2.21024$ from our sample, which resulted in the removal of 43 QSOs having 52 strong Mg\,{\sc ii}\ absorbers.
\item Another filter was applied to avoid the observational bias which might result from the lower wavelength limit, viz.\ 4000 \AA, of the SDSS spectra. In our analysis we aim to probe any correlation of luminosity with velocity offset up to about $0.4c$. With 4000 \AA\ taken as the conservative starting wavelength of our spectra, $z_{\rm qso} = 1.185$ is the minimum redshift that allows us to detect a Mg\,{\sc ii}\ absorber (if any) at least up to a velocity offset of $0.4 c$. Therefore, we have removed 1461 sightlines with $z_{\rm qso} < 1.185$, having 1544 strong Mg\,{\sc ii}\ absorbers in their spectra.
\item After applying the above mentioned redshift cuts, we are left with the systems with $2.21024>z_{\rm qso}\geq 1.185$. In these intermediate redshift systems, the $\beta$ value can be larger than $0.4$, which in principle may give rise to a bias towards higher $\beta$ with increasing $z_{\rm qso}$. Hence we also remove all the absorbers with $\beta>0.4$ from the remaining sample, which amounts to the exclusion of 1523 absorbers along 1439 sightlines. One should note that $\beta =0.4$ is chosen because either a smaller or a larger cut would significantly reduce the sample. Another motivation, as will become clear in the following sections, is that $\beta \sim 0.4$ is an upper limit for radiation (dust) driven outflows.
\end{enumerate}
Finally, we are left with
6317 strong Mg\,{\sc ii}\ systems along 5682 QSOs in the selected redshift range.
Bolometric luminosities for the QSOs in SDSS DR7 were calculated in a recent study by \citet{shen2011ApJS..194...45S}. We cross-matched the QSOs in our sample with the catalogue described in that paper to obtain the bolometric luminosities. We then removed two more absorbers whose QSO luminosities were $< 10^{45}$ erg s$^{-1}$. Our final bias-free sample consists of 6315 strong Mg\,{\sc ii}\ systems with luminosity range $10^{45.5} < L_{\rm bol} \le 10^{47.8}$ erg s$^{-1}$, redshift range $1.185 \le z_{\rm qso} < 2.21024$, and velocity offset range $0 < \beta c \le 0.4 c$.
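The two redshift limits in selection filters (ii) and (iii) above follow from the Mg\,{\sc ii}\ doublet rest wavelengths and the adopted 4000--9000 \AA\ window, using the Doppler relation $(1+z_{\rm abs})/(1+z_{\rm qso})=\sqrt{(1-\beta)/(1+\beta)}$ implied by the definition of $\beta$; a sketch that reproduces them:
\begin{verbatim}
import numpy as np

MGII_BLUE, MGII_RED = 2796.35, 2803.53   # doublet rest wavelengths (Angstrom)
WMIN, WMAX = 4000.0, 9000.0              # adopted SDSS window (Angstrom)

# filter (ii): highest z_qso keeping the Mg II emission below WMAX
print(WMAX/MGII_RED - 1.0)               # ~2.21024

# filter (iii): lowest z_qso for which a beta = 0.4 absorber is above WMIN
beta_max = 0.4
doppler = np.sqrt((1.0 - beta_max)/(1.0 + beta_max))
print((WMIN/MGII_BLUE)/doppler - 1.0)    # ~1.185
\end{verbatim}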
In Figure 1, the blue dashed
line represents the distribution of strong Mg II absorbers in
SDSS-DR7, compiled by \citet{Lawther2012A&A...546A..67L}, and the black solid line is the
final sample selected for this study.
\begin{figure}
\includegraphics[scale=0.4]{fig_1_hist.eps}
\caption{Histograms in $z_{\rm qso}$ of the samples of strong Mg\,{\sc ii}\ absorbers in SDSS-DR7. The blue dashed line is for the 10367 strong absorbers compiled by \citet{Lawther2012A&A...546A..67L}. The black solid line represents the sample used in this work.}
\end{figure}
\section{Correlation between $\beta$ and $L_{\lowercase{\rm bol}}$ : signature of radiation driven outflow}
In order to test the dependence of $\beta$ on luminosity, we divide the sample in bins of bolometric luminosity. Most of the absorbers (5651 out of 6315) belong to QSO sightlines having a luminosity range $10^{46}\hbox{--}10^{47}$ erg s$^{-1}$. We divide these 5651 systems into four bins of bolometric luminosity. We also have two more bins, one for $L_{\rm bol}<10^{46}$ erg s$^{-1}$, and another with $L_{\rm bol}>10^{47}$ erg s$^{-1}$, the first having 27 systems and the second with 637 systems.
If the absorbers were distributed uniformly in the allowed range of $z_{\rm abs}$ (which in turn is determined by the allowed range of $\beta$), then the median value of $\beta$ would be independent of $z_{\rm qso}$ (see Appendix A for a proof). Hence, irrespective of the distribution of $z_{\rm qso}$ in a luminosity bin, the median value of $\beta$ should be the same in all luminosity bins.
To test this hypothesis, we estimate the median and the lower and upper 25 percentiles of the data in each of the above mentioned six luminosity bins. We plot the medians as circles, and the upper and lower percentiles as the end points of the vertical dotted bars, in Figure 2.
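These binned statistics amount to a simple percentile computation per luminosity bin; a sketch (the arrays and the equal sub-binning of the $10^{46}\hbox{--}10^{47}$ erg s$^{-1}$ range are placeholders standing in for the 6315 measured pairs):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
log_L = rng.uniform(45.5, 47.8, 6315)    # placeholder log10 L_bol values
beta  = rng.uniform(0.0, 0.4, 6315)      # placeholder velocity offsets

edges = [45.5, 46.0, 46.25, 46.5, 46.75, 47.0, 47.8]  # assumed bin edges
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = beta[(log_L >= lo) & (log_L < hi)]
    q25, med, q75 = np.percentile(sel, [25, 50, 75])
    print(f"{lo:5.2f}-{hi:5.2f}: median = {med:.3f} ({q25:.3f}-{q75:.3f})")
\end{verbatim}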
\begin{figure}
\includegraphics[scale=0.4]{fig_2.eps}
\caption{Correlation between the bolometric luminosity of the QSO and $\beta$ of the Mg\,{\sc ii}\ absorbers. The circles represent the median of the data in a particular luminosity bin. The upper and lower extremes of the dotted vertical lines give the locations of the upper and lower 25 percentiles, respectively. Sizes of the luminosity bins are shown by the horizontal bars. The solid, dotted and dash-dotted lines represent the theoretical model discussed in \S 3.1.}
\end{figure}
Interestingly, we find that the median is not constant: the data show a correlation of $\beta$ with $L_{\rm bol}$. The 5651 absorber systems with $10^{46}\leq L_{\rm bol}\leq 10^{47}$ erg s$^{-1}$, which form the mainstay of the sample, show a power law increase of $\beta$ with $L_{\rm bol}$, with a slope of $\sim1/4$.
The increase of the median value of $\beta$ with the bolometric luminosity proves that the distribution in each bin is not uniformly random.
This fact is also hinted at by the evolution of $dN/dz$ with $z_{\rm abs}$ \citep{Zhu2012arXiv1211.6215Z}.
Using the evolution in $dN/dz$ for our sample, we evaluated the expected relation between the median value of $\beta$ and $z_{\rm qso}$ from equation (\ref{app_med}). We then converted it to the corresponding relation between $\beta$ and luminosity by using the best fit relation between $z_{\rm qso}$ and luminosity, a characteristic of magnitude limited survey such as SDSS. We have shown this relation using a dashed curve in Figure 2.
Although the dashed curve does show some evolution, it is clear that it cannot fully explain the observed $\beta\hbox{--}L_{\rm bol}$ correlation.
Which physical processes can give rise to such a non-uniformity of the absorber distribution? The evolution in $dN/dz$ has been attributed to the evolution of the global star formation rate \citep{Zhu2012arXiv1211.6215Z}, although without any concrete evidence.
Also, observations of intervening galaxies
show a small covering fraction ($\le 0.3$) for strong Mg\,{\sc ii}\ absorbers ($W \ge 1$ \AA)
\citep{2012arXiv1211.1380N,2010ApJ...714.1521C}.
Here we explore an alternative based on outflows associated with QSOs, which can give rise to a non-uniform incidence of absorbers. As we show in the next section, the relation $\beta \propto L_{\rm bol}^{1/4}$ is a natural consequence of QSO radiation driven outflows.
\subsection{Absorbers as radiation driven outflows}
Radiation driven outflows have been invoked repeatedly in literature to explain the co-evolution of black hole and bulge, to explain the accretion disc winds \citep[e.g.][]{2000ApJ...543..686P} and galactic winds \citep[e.g.][]{2005ApJ...618..569M,2011ApJ...736L..27S}. We consider here the radiation driven outflows, where the photons scatter the dust grains and impart their momentum to dust. The dust in turn is collisionally coupled to the gas, and the momentum is uniformly distributed to the dust and gas mixture. In this scenario, the motion of dust and gas mixture surrounding the QSO is governed by the following equation,
\begin{equation}
v{dv \over dr} = {\kappa L_{\rm uv} \over 4\pi r^2 c} - {G M_\bullet \over r^2} - {d\Phi \over dr}
\label{eq_diff}
\end{equation}
where $M_\bullet$ is the mass of the black hole and $\Phi$ is the dark matter halo potential. $L_{\rm uv}$ is the integrated UV luminosity; for QSOs, where the main emission is in the high frequency bands, the luminosity over the UV and EUV bands is roughly half of the bolometric luminosity ($L_{\rm uv}\sim0.5 L_{\rm bol}$) \citep{2004ASSL..308..187R}. $\kappa$ is the frequency averaged opacity for the scattering and absorption of UV photons by dust grains. For photon wavelengths $<0.3\ \mu$m, the $\kappa$ for a dust and gas mixture ranges from 200 to as large as 1000 cm$^{2}$g$^{-1}$ \citep{2001ApJ...554..778L}. We take a value $\kappa=500$ cm$^{2}$g$^{-1}$, which roughly serves as an average effective value of the opacity.
We can integrate equation (\ref{eq_diff}) to obtain the following expression for velocity
\begin{equation}
v^2 = {\kappa L_{\rm bol} \over 4 \pi c}\left({1 \over r_b} - {1\over r}\right) - 2\left[\Phi(r)-\Phi(r_b)\right],
\end{equation}
where $r_b$ is the launching radius of the outflow. In the case of radiation pressure on dust grains, the opacity is generally quite high, and hence the radiation force is many times larger than gravity; the gravitational force can therefore be neglected. At large distances the velocity attains the following terminal value
\begin{equation}
v_{\infty} \simeq \left({\kappa L_{\rm bol} \over 4\pi c\ r_b}\right)^{1/2}
\label{wind_easy}
\end{equation}
The base radius ($r_b$) for launching these outflows is an important factor, and it should be the minimum distance at which the dust grains can survive. Studies of dust survival yield the following relation between the sublimation radius of the dust grains and the luminosity of the AGN \citep{2012MNRAS.420..526M},
\begin{equation}
r_b = R_{\rm sub} \sim r_{b,0}\ \left(\frac{L_{\rm bol}}{10^{46}\ {\rm erg\ s^{-1}}}\right)^{0.5} \,.
\label{eq_mor}
\end{equation}
The value of $r_{b,0}$ is $0.5$ pc for graphite grains and $1.3$ pc for the silicate grains. Substituting equation (\ref{eq_mor}) into (\ref{wind_easy}), we obtain the following expression for wind terminal speed,
\begin{equation}
v_\infty \sim 0.1 c\ \left(\frac{\kappa}{500\ {\rm cm^2\ g^{-1}}}\right)^{1/2} \left(\frac{L_{\rm bol}}{10^{46}\ {\rm erg\ s^{-1}}}\right)^{1/4}\left(\frac{r_{b,0}}{{0.5\ {\rm pc}}}\right)^{-1/2}
\end{equation}
We note that this mechanism has previously been discussed in the context of AGN outflows by \cite{1995ApJ...451..510S}. These authors also arrived at similar terminal speed for a radiation driven outflow.
We plot this scaling to compare with the observed correlation of $\beta$ and
$L_{\rm bol}$ in Figure 2. The dash-dotted, solid and dotted lines in Figure 2 correspond to $r_{b,0}=0.2,0.5, 1.0$ pc, respectively. We find that this simple theoretical model fits the observed correlation quite well, which indicates that the absorber systems are likely to be radiation (dust) driven outflows.
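A sketch evaluating the terminal-speed expression above in cgs units, for the three launching radii plotted in Figure 2:
\begin{verbatim}
import numpy as np

C, PC = 2.998e10, 3.086e18   # speed of light (cm/s), parsec (cm)
KAPPA = 500.0                # dust opacity (cm^2/g)

def v_inf(L_bol, r_b0_pc):
    """Terminal outflow speed in units of c; L_bol in erg/s."""
    r_b = r_b0_pc*PC*np.sqrt(L_bol/1e46)           # sublimation radius
    return np.sqrt(KAPPA*L_bol/(4.0*np.pi*C*r_b))/C

L = np.logspace(45.5, 47.8, 5)
for r0 in (0.2, 0.5, 1.0):                         # pc, as in Figure 2
    print(r0, np.round(v_inf(L, r0), 3))           # beta scales as L^(1/4)
\end{verbatim}
For $L_{\rm bol}=10^{46}$ erg s$^{-1}$ and $r_{b,0}=0.5$ pc this returns $v_\infty \simeq 0.1c$, as quoted above.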
One is then tempted to ask how these outflows fit into the unification schemes of AGN. We find that the launching radius of the outflows is the dust sublimation radius, which is also the inner radius of the dusty torus. Inside the torus, the UV photons are quickly reprocessed into IR. Although the IR photons can also drive outflows \citep{2011ApJ...741...29D, 2012arXiv1209.0242S}, the speeds would not be large, as the IR-to-dust scattering cross section is more than an order of magnitude smaller than that in the UV. One possible way to reconcile this is the following.
Let us suppose that the outflows do not plough through the main body of the torus, but consist of material lifted from its outer surface. In that case, as the torus material is dilute and highly porous at its periphery, the UV photons can in principle travel a large distance without being attenuated and impart their momentum to the gas and dust mixture lifted from the outer surface of the torus. More specifically, in the picture presented in \cite{2000ApJ...545...63E}, the outflows we are considering should arise between the BAL envelope and the torus. We note that this scenario not only gives rise to large velocity outflows, but may also account for the small fraction ($\lesssim 0.1$) of QSOs which show these absorbers, owing to the fact that the region allowed for the outflows (the periphery of the torus) occupies a very small fraction of the viewing angle.
\section{Fractional number of absorbers}
Next we study the fractional number count of absorbers as a function of QSO luminosity.
We define the fractional number count as below,
\begin{equation}
\rm Frac.\ number\ count = \frac{Number\ of\ absorbers\ found}{Number\ of\ QSOs\ searched\ in\ a\ bin} \,. \nonumber
\end{equation}
Again, we limit our analysis to the spectral region with $\beta<0.4$. From our
sample, as described in \S 2, we can easily estimate the ``Number of absorbers found"
in a given luminosity
bin, having $\beta<0.4$. However, to find the corresponding ``Number of QSOs searched in a bin", we also need to count those QSOs in the parent sample of SDSS-DR7 from which the QSOs with Mg\,{\sc ii}\ absorbers are selected. We use the parent catalogue from \citet{shen2011ApJS..194...45S}, of which the sample used in this work is a subset. Therefore, we estimate the ``Number of QSOs searched in a bin" by using the non-BAL QSOs from that catalogue which satisfy the redshift criterion $1.185 \le z_{\rm qso} < 2.21$, to ensure the absence of any observational biases (see \S 2).
We plot the fractional number count as a function of luminosity ($L_{\rm bol}$) in Figure 3. The values are shown by filled diamonds whose x-coordinate is the centre of each luminosity bin. We also show the overall average of the sample using a horizontal line, whose value is $\sim0.1$.
\begin{figure}
\includegraphics[scale=0.4]{fig_3.eps}
\caption{The fractional number count as a function of bolometric luminosity is shown using filled diamonds, and the size of the luminosity bin is shown by the horizontal bar. Thin horizontal line represents the average value of fractional number count over the entire sample.}
\end{figure}
We find that the fraction increases steeply with increasing QSO luminosity and reaches a maximum roughly at $L_{\rm bol} \sim 10^{47}$ erg s$^{-1}$. For $L_{\rm bol}> 10^{47.5}$ erg s$^{-1}$ there is a mild decrease with luminosity; however, this decrease is uncertain since this bin contains many apparently faint high-redshift quasars, for which the signal-to-noise criterion removes large chunks of spectra and the corresponding absorbers (D. Lawther, pvt. comm.).
We note that the fractional number of absorbers has contributions from both outflowing and intervening systems, which we cannot separate here.
\section{Discussion}
We would like to emphasize an important point in connection with our result. There is a general consensus in the literature that absorbers with $\beta<0.0167$ ($v<5000$ km s$^{-1}$) are associated with the QSO, while those with higher $\beta$ represent intervening media.
We stress here that this criterion is not adequate to identify the associated systems: truly associated systems can also have $\beta>0.0167$, e.g. the QSO-driven high-velocity outflows considered here.
To illustrate this, we plot in Figure 4 the ratio of absorbers with $\beta<0.0167$ to the total number of absorbers in a particular luminosity bin, as a function of bolometric luminosity. One can clearly see that low $\beta$ values occur only at low luminosities, and vice versa. Firstly, the figure once again confirms that the velocity offset $\beta$ is correlated with luminosity, because low $\beta$ absorbers appear along the sightlines of low luminosity QSOs. Secondly, this plot, in conjunction with the correlation of $\beta$ with $L_{\rm bol}$, shows that the systems which are really `{\it associated}' with the QSOs are spread all the way from $\beta=0.0$ to $0.4$.
Our results call for a study to separate out the truly associated (outflowing) systems and the intervening ones. Of course, one tedious way to do this is to locate the intervening galaxies in each quasar sightline, however yet another way can be through the detailed study of line shapes and features arising from outflows and intervening material. We look forward to such a study in the future.
\begin{figure}
\includegraphics[scale=0.4]{fig_4.eps}
\caption{Ratio of absorbers with $\beta<0.0167$ to the total number of quasars in a particular luminosity bin is plotted against the bolometric luminosity. The horizontal bar represents the size of the luminosity bin.}
\end{figure}
There is another implication of the observed dependence of fractional number
count of absorbers on QSO luminosity. If one considers a sample of a particular
type of QSOs that covers a restricted luminosity range, then the relative
number of absorbers may differ for different samples, and be different from
the overall average.
If we consider the right side of Figure 3, corresponding to $L_{\rm bol} > 10^{46.5}$ erg/s, we see that the fractional number counts are roughly double the overall average value of $0.1$. We note
that recent observations of $\sim 45$ blazars \citep{Bergeron2011A&A...525A..51B} report an excess of Mg\,{\sc ii}\
absorbers relative to that in QSOs. We speculate here that this excess may also arise from the fact that the blazar sample is small, and it may be possible that it is biased towards higher luminosity, where the fractional number count is larger. It is
possible that if the analysis is repeated with a larger sample of blazars then the excess may fade away. In fact, a similar conclusion has been
reached for a sample of FSRQs and $7156$ lobe- and core-dominated QSOs, where in both cases one finds only a mild excess \citep{Joshi_etal}. In this regard, we bring to the reader's attention a recent paper by \citet{Cucchiara2012arXiv1211.6528C} regarding the excess seen towards GRBs: with a large sample of GRBs, the puzzle of the Mg\,{\sc ii}\ incidence rate indeed disappears, and one does not find any excess.
In summary, we have found a correlation between the velocity offset of strong Mg\,{\sc ii}\ absorbers and the luminosity of QSOs. The velocity offset ($\beta c$) has been found to increase with the luminosity with a power law index of $\sim 1/4$. We have found that radiation driven outflows from QSOs can give rise to such a dependence of $\beta$ on $L_{\rm bol}$. These findings lead us to conclude that a significant fraction of strong Mg\,{\sc ii}\ absorbers (even with $v> 5000$ ${\rm km\ s}^{-1}$) along QSO sightlines may be AGN driven outflows.
We are grateful to D. Lawther for supplying the redshift path data. We thank an anonymous referee for insightful comments.
\subsection*{Estimation of the entropy of the input signal}
Because it is possible to analytically compute the entropy of the
input signal, its inference provides a good test case for our procedure to estimate the
entropy. At each elementary time step, there is a chance $r=\delta
t / \tau_{\rm s}$ that the spin flips:
\begin{equation}
P(S_t = 1 | S_{t-1} = -1) = P(S_t = - 1 | S_{t-1} = 1) = r
\end{equation}
where $S_t$ is the state of the spin at time $t$ (in units of $\delta t$). Similarly,
\begin{equation}
P(S_t = 1 | S_{t-1} = 1) = P(S_t = - 1 | S_{t-1} = -1) = (1-r).
\end{equation}
The signal is a Markovian process since the chance of a spin
flip does not depend on the history of the trajectory. The entropy
rate of this process is then given by:
\begin{equation}
h(\mathcal{S}) = H(S_{t} | S_{t-1}) = -r \log r - (1-r) \log (1-r),
\end{equation}
where we assume that the spin up and spin down states are
equally likely. Using this entropy rate, the quantity of interest, the
true entropy of the input
signal, for $\Delta t \to \delta t$, is
\begin{align}
H(\textbf{S}_L) &= H(S) + \frac{L}{\delta t} h(\mathcal{S}) \\
&= \log(2) - \frac{L}{\delta t} \big[ r \log r + (1-r) \log (1-r) \big].
\end{align}
\begin{figure}[t]
\centering
\includegraphics[height = 75 mm,width=0.7\textwidth]{SI2.eps}
\caption{The NSB estimator outperforms the naive estimator when
estimating the entropy of the input signal $H=H({\bf S}_L,\Delta
t)$. Here we increase the
trajectory length $L=(n-1)\Delta t$ by increasing the number of states $n$ in the
trajectory, while keeping the sampling interval $\Delta t$
constant at $\Delta t = 4$. Both estimators suffer from a
systematic bias at larger $L$, where the entropy is
underestimated. For small $L < 100$, there is sufficient
sampling ($N_{\rm tot} = 5 \times 10^6$) and the estimators
agree with the theoretically predicted value of the
entropy. The correlation time of the input signal $\tau_{\rm s} = 40$.}
\label{fig:Entropy_compare}
\end{figure}
We now consider the effect of sampling the input trajectory at a
sampling interval $\Delta t$. The transition probabilities over one
sampling interval $\Delta t$ are
\begin{align}\label{eq:qdef}
P(S_t|S_{t-\Delta t}) = \begin{cases} \sum_{i=0,{\rm even}}^{\Delta t}
(1-r)^{\Delta t -i} r^i \frac{\Delta t!}{i!(\Delta t - i)!} =
q(\Delta t), \quad &\text{for} \quad S_t = S_{t-
\Delta t} \\
\sum_{i=1,{\rm odd}}^{\Delta t} (1-r)^{\Delta t - i} r^i \frac{\Delta
t!}{i!(\Delta t - i)!} = (1-q(\Delta t)) , \quad &\text{for} \quad
S_t = -S_{t-\Delta t},
\end{cases}
\end{align}
where we sum over the number of elementary flips $i$ that can occur
within a sampling interval $\Delta t$ (an even number of flips returns
the spin to its original state). The entropy rate is thus
\begin{align}
h(\mathcal{S},\Delta t) = H(S_t|S_{t-\Delta t}) &= - \sum_{S_{t-\Delta
    t}} P(S_{t-\Delta t}) \sum_{S_t} P(S_t|S_{t-\Delta t})\log
    P(S_t|S_{t-\Delta t}) \\
    &= -q(\Delta t)\log (q(\Delta t)) - (1-q(\Delta t)) \log (1-q(\Delta t)),
\end{align}
such that the entropy is
\begin{equation}
H(\mathbf{S}_L,\Delta t) = \log(2) - \frac{L}{\Delta t} \big[ q(\Delta t)\log (q(\Delta t)) + (1-q(\Delta t)) \log (1-q(\Delta t)) \big].
\end{equation}
This expression reduces to Eq. 7, when $\Delta t /\delta t = 1$, as it
should.
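As a minimal numerical sketch (our own check; the closed form of the
symmetric two-state Markov chain is used as a cross-check in the
comments), $q(\Delta t)$ and the resulting entropy can be computed as
follows, with $\Delta t$ in units of $\delta t$:
\begin{verbatim}
from math import comb, log

def q_same(dt, r):
    # Probability that the spin is unchanged after dt elementary steps;
    # equals the closed form (1 + (1 - 2*r)**dt) / 2.
    return sum(comb(dt, i) * r**i * (1 - r)**(dt - i)
               for i in range(0, dt + 1, 2))

def entropy_sampled(L, dt, r):
    q = q_same(dt, r)
    h = -q * log(q) - (1 - q) * log(1 - q)  # entropy rate per interval dt
    return log(2) + (L / dt) * h
\end{verbatim}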
It is possible to compare this theoretical value of
$H(\mathbf{S}_L,\Delta t)$ with estimates of the entropy using
simulations of the input signal for a given sampling
interval. Different estimators have been proposed to estimate the
entropy \cite{Miller1974,Grassberger2008}. Here, we compare the naive
estimator, in which the probability of a specific trajectory is simply
given as $P(\mathbf{S}_L = \mathbf{s}_L) = N_{\mathbf{s}_L} / N_{\rm
tot}$, where $ N_{\mathbf{s}_L} $ is the number of observations of
the trajectory $\textbf{s}_L$, and $N_{\rm tot}$ is the total number
of observations, to the estimator proposed by Nemenman \textit{et al.}
\cite{Nemenman2004}, called the NSB estimator. When the performance of
the two estimators is compared against the theoretical value, we see
that the NSB estimator has an overall smaller error than the naive
estimator. However, both estimators suffer from a bias at
large $L$, where the entropy of the input signal is underestimated
because the number of possible states of the input trajectory, $K = 2^n$,
exceeds the number of observations. We have chosen the NSB estimator,
and have additionally developed a procedure to estimate the information
transmission rate without undersampling.
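For reference, the naive (plug-in) estimator amounts to a few lines of
Python; the sketch below is our own illustration (the NSB estimator
involves an average over Dirichlet priors and is not reproduced here):
\begin{verbatim}
import numpy as np
from collections import Counter

def naive_entropy(samples):
    # Plug-in estimate with P(s) = N_s / N_tot; `samples` is a list of
    # trajectories, each represented as a hashable tuple of spin states.
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))
\end{verbatim}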
\subsection*{For sufficient sampling, we can reliably estimate the mutual information}
When we increase the length of the trajectories $L$ by increasing $n$
keeping $\Delta t$ constant, we can distinguish three regimes for the
mutual information $I({\bf S}_L; {\bf X}_L,\Delta t)$, see
Fig. \ref{fig:Naive_MI}: first, the mutual information increases at
a low, constant rate. Here, we can reliably estimate the entropies of
all three ensembles $\{\textbf{S}_L\}(\Delta t)$,
$\{\textbf{X}_L\}(\Delta t)$, and
$\{\textbf{S}_L,\textbf{X}_L\}(\Delta t)$ and the slope of the mutual
information equals the information transmission rate $I_{\rm R}(\Delta
t)$ for this value of the sampling interval $\Delta t$. Then as
$L=(n-1)\Delta t$ is increased further (by raising $n$), the mutual
information rises at a higher pace. In this regime, only the
entropy estimation of the joint trajectory
$\{\textbf{S}_L,\textbf{X}_L\}(\Delta t)$ suffers from
undersampling, causing $H({\bf S}_L,{\bf X}_L,\Delta t)$ to be
underestimated. The joint trajectory suffers from
undersampling first, because it contains twice as many spin
states as the input or output trajectories. Since the
entropy $H({\bf S}_L,{\bf X}_L,\Delta t)$ of the joint trajectory is
subtracted in the mutual information $I({\bf S}_L;{\bf X}_L,\Delta t)$, its underestimation
will cause the mutual information to be overestimated (see Eq.
\ref{eq:MI_Dt}). Finally, at larger values of $L$, all three entropies
are underestimated and the slope of the mutual information decreases
again. Clearly, only in the initial linear regime can the
information transmission rate be reliably inferred from the
slope of the mutual information $I({\bf S}_L;{\bf X}_L,\Delta t)$.
By increasing the number of observations, the initial regime remains
valid for a larger range of trajectory lengths $L$: increasing the
number of observations by a factor of $10^2$ extends the correctly
sampled regime by approximately $5$ additional spin states
in the trajectories. Additionally, the collapse of all three lines in
the initial regime gives us confidence that we can reliably estimate
the mutual information when the trajectory does not contain too many
spin states. When the estimate of the mutual information does not
change when we repeat the simulation with more observations, then we
can be confident of our estimate of the mutual information: as we saw
in Fig. \ref{fig:Entropy_compare}, the NSB estimator does not have
any bias when there is sufficient sampling.
From inspection of Fig. \ref{fig:Naive_MI}, we see that for $N
= 10^7$ observations, the mutual information
$I(\textbf{S}_L;\textbf{X}_L,\Delta t)$ stays in the initial, correct, linear
regime up to $L \approx 72$, which corresponds to $n \approx L /
\Delta t + 1 = 72 / 8 + 1 \approx 10$ spin states in the trajectory, corresponding
to a state space of $K = 2^{2n} \approx 10^6$ for the joint trajectory. For the
results of the main text, we have used $N = 4 \times 10^7$ for $n = 9$ spin
states in the input and output trajectories. When increasing or decreasing
the number of spin states $n$, we adjusted the number of observations $N$
according to the change in the size of the state space $K$.
Using these parameters, we have $N \gg K$ and there is a vanishingly
small error on the estimate of the mutual information.
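In terms of the estimated entropies, the mutual information follows
directly; a minimal sketch, reusing the \texttt{naive\_entropy} helper
from the sketch above (in practice each term is estimated with the NSB
method):
\begin{verbatim}
def mutual_information(S_trajs, X_trajs):
    # I(S;X) = H(S) + H(X) - H(S,X); trajectories are tuples, so the
    # joint trajectory is the concatenation of input and output states.
    joint = [s + x for s, x in zip(S_trajs, X_trajs)]
    return (naive_entropy(S_trajs) + naive_entropy(X_trajs)
            - naive_entropy(joint))
\end{verbatim}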
\begin{figure}[t]
\centering
\begin{adjustbox}{center}
\includegraphics[height=80mm,width=0.75\columnwidth]{SI3.eps}
\end{adjustbox}
\caption{The mutual information $I({\bf S}_L;{\bf X}_L,\Delta t)$ between the input spin and output
spin as a function of the trajectory length $L=(n-1)\Delta t$,
where we increase the length $L$ of the trajectories $\textbf{S}_L$
and $\textbf{X}_L$ by increasing the number of spin states $n$ in
the trajectory keeping the sampling interval $\Delta t$
constant. Due to undersampling we can observe three regimes of our
estimate of $I(\textbf{S}_L;\textbf{X}_L,\Delta t)$: initially, the entropies
of the three trajectories are all correctly estimated such that the
initial slope is the true information transmission rate. Then we
underestimate only the joint entropy
$H(\mathbf{S}_L,\mathbf{X}_L,\Delta t)$ (see Eq. \ref{eq:I_R}), which increases the mutual
information. Finally, all three entropies are underestimated such
that the mutual information again decreases. When we increase the
number of observations $N$, we can elongate the length of the
correctly estimated regime. The size of the system is $3 \times 3$
spins at a temperature $T=2.4$, using a sampling interval of
$\Delta t = 8$ and the correlation time of the input signal $\tau_{\rm s} = 25$.}
\label{fig:Naive_MI}
\end{figure}
\subsection*{The information transmission rate increases for a smaller sampling interval}
To reliably estimate the information transmission rate, it is
necessary to compute the rate at a sufficiently long trajectory length
$L$, which should be longer than the longest timescale in the system,
$L > \tau_{\rm s}, \tau_{\rm r}$, where $\tau_{\rm s}$ is the input
timescale and $\tau_{\rm r}$ the response time of the system. Yet,
the number of spin states in the trajectories, $n$, cannot be too
large because this will create a sampling problem, as discussed
above. We thus need to increase the sampling interval $\Delta t$
beyond $\delta t$.
However, for a given overall trajectory length $L=(n-1) \Delta t$, the
mutual information $I(\textbf{S}_L;\textbf{X}_L,\Delta t)$ depends on
$\Delta t$ while we would like to obtain the limit $\Delta t \to
\delta t$,
which is the elementary time step of the Glauber dynamics.
Figure \ref{fig:IR_dt} illustrates how the information transmission
rate $I_{\rm R}(\Delta t)$ increases for decreasing sampling interval
$\Delta t$. The 6 black points show the computed information
transmission rate $I_{\rm R}(\Delta t)$ for 6 different values of $\Delta
t$. For these 6 values of $\Delta t$, the number of observations $N =
5 \times 10^7$ and the number of spins states in the trajectory $n=5$
is kept constant, such that $N \gg K$, where $K$ is the number of
unique possible trajectories (see previous section), and we can
reliably estimate the entropies of the trajectories. Because $n$ is
constant for these 6 black points, the trajectory length
$L=(n-1)\Delta t$ decreases for smaller $\Delta t$. Yet, the trajectories
remain long enough, meaning that $L > \tau_{\rm s}, \tau_{\rm r}$
(at the temperature $T = 2.45$ of the simulations, the response time
$\tau_{\rm r} = 51$ and the correlation time of the input signal
$\tau_{\rm s} = 63$). The mutual information $I({\bf S}_L;{\bf
X}_L,\Delta t)$ thus increases linearly with $L$. From the
slope of $I({\bf S}_L;{\bf X}_L,\Delta t)$ as a function of $L$, i.e.
from Eq. \ref{eq:I_R}, we
can therefore reliably estimate the information transmission rate
$I_{\rm R}(\Delta t)$ for each value of $\Delta t$.
To get the quantity of interest, the information transmission rate
$I_{\rm R}(\Delta t \to 1)$, we fit a quadratic function to the estimates
of $I_{\rm R}(\Delta t)$ at the 6 values of $\Delta t$ corresponding
to the black points. This function is the black dashed line. This
function is then extrapolated
to $\Delta t = 1$ to retrieve the value of the information
transmission rate at the elementary time step of the Glauber dynamics
(see the extrapolated black dashed line). In the case of
Fig. \ref{fig:IR_dt}, we find a value of {$I_{\rm R}(\Delta t = 1) =
0.0048$.
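The extrapolation step itself is a standard quadratic fit; a minimal
sketch (our own illustration):
\begin{verbatim}
import numpy as np

def extrapolate_rate(dts, rates):
    # Fit I_R(dt) = a*dt**2 + b*dt + c to the computed points and
    # evaluate the fit at the elementary time step dt = 1.
    coeffs = np.polyfit(dts, rates, deg=2)
    return np.polyval(coeffs, 1.0)
\end{verbatim}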
\begin{figure}[t]
\centering
\begin{adjustbox}{center}
\includegraphics[height=85mm,width=0.8\columnwidth]{SI4.eps}
\end{adjustbox}
\caption{The information transmission rate $I_{\text{R}}$ increases
for decreasing sampling interval $\Delta t$. The 6 black points
correspond to
computations of $I_{\rm R}(\Delta t)$ at 6 values of $\Delta t$, obtained using
$N=5 \times 10^7$ observations and $n =5$ spin states in a
trajectory. $I_{\text{R}}(\Delta t =1)$ is estimated by
extrapolating these measurements using a quadratic fit (dashed
line). In order to verify this procedure, we have recomputed
$I_{\text{R}}$ at smaller $\Delta t$ (red
points), where the number of observations $N = 10^8$. When
decreasing the sampling interval $\Delta t$, we
increase the number of spin states in the trajectory $n$ to make
sure that the trajectory length $L > \tau_{\rm s}, \tau_{\rm r}$.
At $\Delta t = 5$, the smallest sampling interval investigated, we
use $n=16$. The computed $I_{\rm R}$ at the smaller sampling
intervals (the red points)
is larger than the extrapolated values (given by the dashed black
line). This is due to undersampling, as illustrated in the inset
for $\Delta t = 5$: by
increasing the number of observations $N$, the estimated value
decreases to our extrapolated estimate, the horizontal line in the
inset. The size of the system is $5 \times 5$ spins, at a
temperature of $T = 2.45$ and distance $d = 1$. The time scales are
$\tau_{\rm r} = 51$ and $\tau_{\rm s} = 63$.}
\label{fig:IR_dt}
\end{figure}
In order to verify this scheme, we have recomputed the
  information transmission rate $I_{\rm R}(\Delta t)$ for several
  values of $\Delta t$ within the extrapolated range; these are the red
  points. For these smaller sampling intervals $\Delta t$ we have
  increased the number of spin states $n$ in the trajectories to
  ensure that the trajectory length remains long enough, i.e. $L >
\tau_{\rm s}, \tau_{\rm r}$. At the smallest sampling interval,
$\Delta t = 5$, we used $n = 16$ such that $L = 75$. For these
values of $\Delta t$ corresponding to the red
points, the number of samples is increased to $N=10^8$ and the
errorbar is estimated using the NSB method. The figure illustrates
that at small $\Delta t$ the computed values (red points)
overestimate the information transmission rate as compared to the
extrapolated values, given by the black dashed line. In the inset,
we see however that this overestimation is due to undersampling: by
increasing the number of observations $N$, the computed information
transmission rate decreases to approach the extrapolated value,
given by the horizontal line. The overestimate of the information
  transmission rate decays inversely with the number of observations,
  as shown by the dashed fit in the inset. Comparing the value that
  $I_{\rm R}$ decays to, $B = 0.0046$, with the extrapolated value
  from the main panel, $I_{\rm R}(\Delta t = 5) \approx 0.0044$,
  shows that our extrapolation procedure gives a reliable estimate of
  the information transmission rate for smaller $\Delta t$ values. We
  also note that this figure underscores the observation of
  Fig. \ref{fig:Entropy_compare}, namely that even the NSB method
  suffers from undersampling.
\begin{figure}[t]
\centering
\begin{adjustbox}{center}
\includegraphics[height=70mm,width=0.85\columnwidth]{SI5.eps}
\end{adjustbox}
\caption{The sampling intervals $\Delta t$ must be smaller than the
correlation time $\tau_{\rm s}$ of the signal and the
response time $\tau_{\rm r}$ of the system. Both panels show the information
transmission rate $I_{\rm R}(\Delta t)$ in the same system of $5
\times 5$ spins at temperature $T=2.3$, with timescales $\tau_{\rm
r} = 88$ and $\tau_{\rm s} = 300$ and $n=9$ spin states in the
trajectory. The number of observations $N = 10^7$ such that $N \gg
K$. In panel a, the red dots correspond to a scheme in which the
sampling intervals $\Delta t > \tau_{\rm r}$. In contrast, the blue
dots correspond to a scheme in which $\Delta t < \tau_{\rm
r}$. Clearly, the extrapolation in the red scheme underestimates
the value of $I_{\rm R}(\Delta t = 1)$, because
the sampling interval $\Delta t$ needs to be shorter than
$\tau_{\rm r}$ and $\tau_{\rm s}$. This is also
supported by panel b, in which {\em both} the red and blue schemes use
sampling intervals $\Delta t$ that are all smaller than the response
time. Both schemes result in essentially the same value of $I_{\rm
R}(\Delta t=1)$, even though the extrapolation is based on
different values of $\Delta t$.}
\label{fig:dt_vs_tc}
\end{figure}
\subsection*{The sampling interval must be smaller than the input
  correlation time and the response time of the system}
Above we discussed that the trajectory length $L=(n-1)\Delta t$ must
be larger than $\tau_{\rm s},\tau_{\rm r}$ and that $n$ cannot be too
large because of undersampling. Moreover, we described how the
information transmission rate of interest,
i.e. $I_{\rm R}(\Delta t)$ at $\Delta t =1$, can be obtained by extrapolating $I_{\rm R}
(\Delta t)$ computed for large $\Delta t$ to $\Delta t \to 1$ (see
Fig. \ref{fig:IR_dt}). However, how do we choose the sampling
interval $\Delta t$ for which we compute the information transmission rate? As
mentioned, it is necessary that $L = (n-1)\Delta t >
\tau_{\text{s}},\tau_{\text{r}}$; given that the maximum value of $n$
that allows for good sampling in reasonable CPU time is finite
$(n\approx 10)$, $\Delta t$ needs to be large enough. However, it is
not possible to indefinitely increase the sampling interval: $\Delta
t$ must be smaller than the correlation time of the input signal and the
response time of the system. This is illustrated in
Fig. \ref{fig:dt_vs_tc}. Both panels correspond to a $5 \times 5$ spin
system at a temperature $T = 2.3$, in which the response
time $\tau_{\rm r} = 88$ and the correlation time of the input signal
$\tau_{\rm s} = 300$. While keeping the number of spin states constant
at $n=9$, $\Delta t$
and $L=(n-1)\Delta t$ are varied in both panels. The two panels
differ in which range of $\Delta t$ values is used to extrapolate to
$\Delta t=1$. In panel a, the red dots and the red dashed line
correspond to a scheme in which the extrapolation is based on $\Delta
t$ values that are larger than the response time $\tau_{\rm r}$
of the system. In contrast, the blue dots and blue dashed line
correspond to a scheme in which the extrapolation is based on $\Delta
t$ values that are all smaller than $\tau_{\rm r}$. Clearly, the
extrapolation of the red scheme, based on $\Delta t$ values larger
than $\tau_{\rm r}$, severely underestimates the extrapolated value of
$I_{\rm R}$. We thus need to use $\Delta t$ values that are shorter
than $\tau_{\rm s},\tau_{\rm r}$. This is further supported by panel
b. In this panel, two extrapolation schemes are shown, which differ in
the values of $\Delta t$ used for the extrapolation. In contrast to
panel a, however, {\em both} of these schemes use $\Delta t$ values
that are all smaller than $\tau_{\rm s},\tau_{\rm r}$. Clearly, both
schemes give essentially the same extrapolated value of $I_{\rm R}$,
even though the extrapolation is based on different values of $\Delta
t$.
\subsection*{Sampling parameter requirements} In summary, the
parameters of the sampling procedure must satisfy the following constraints:
\begin{enumerate}
\item $\Delta t$ must be smaller than $\tau_{\rm s},
  \tau_{\rm r}$;
\item yet $L=(n-1) \Delta t$ must be larger than $\tau_{\rm
  s}, \tau_{\rm r}$;
\item $N$ must be larger than $2^{2n}$ so that
  undersampling does not occur.
\end{enumerate}
When these three criteria are met, the
extrapolation procedure illustrated in Figs. \ref{fig:IR_dt} and
\ref{fig:dt_vs_tc} yields a reliable estimate of $I_{\rm R} (\Delta t=1)$.
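These criteria are simple to verify programmatically; a minimal sketch
(our own illustration):
\begin{verbatim}
def sampling_ok(dt, n, N, tau_s, tau_r):
    # Check the three sampling criteria listed above.
    L = (n - 1) * dt
    return (dt < min(tau_s, tau_r)      # 1: resolve the dynamics
            and L > max(tau_s, tau_r)   # 2: trajectories long enough
            and N > 2 ** (2 * n))       # 3: avoid undersampling
\end{verbatim}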
\subsection*{Pseudo-code}
\begin{itemize}
\item{We have computed for each temperature $T$ the correlation time
$\tau_{\text{c}}$ from the decay of the two-point time correlation
function $\langle S(0) X(t)\rangle$, which serves as a measure for
the response time of the system, $\tau_{\text{r}}$; in fact, simulations reveal that this
response time is similar to the timescale over which the total
magnetisation of the system relaxes to zero when the input spin, which
had been held fixed, is allowed to thermally equilibrate, indicating
that the driving of the system via the flips of the input spin keeps
the system in the linear-response regime \cite{Chandler1987}. In the optimal
systems that maximize information transmission, $\tau_{\rm r}$ is
typically on the order of the correlation
time $\tau_{\rm s}$ of the input signal.}
\item{For a given temperature $T$ and correlation time of the input
signal $\tau_{\text{s}}$, we choose six sampling intervals $\Delta
t < \tau_{\rm s},\tau_{\rm r}$.
}
\item{For each sampling interval $\Delta t$, we calculate
$I_{\text{R}}(\Delta t)$ and $I_{\text{inst}}$. The number of
observations $N$ and spin states $n$ in a trajectory depend on the
temperature. At low temperatures, the long response time allows
for relatively large sampling intervals such that we use $n=8$ and
$N = 10^7$, while still having trajectory lengths $L >
\tau_{\rm s}, \tau_{\rm r}$. At higher temperatures, the response
time becomes shorter, such that the maximum sampling interval
$\Delta t$ that we can use also decreases. In order to still
satisfy the condition on the trajectory lengths $L$, we increase
the number of spin states in a trajectory to $n=10$ at $T=2.7$,
while increasing the number of observations accordingly to $N = 16
\times 10^7$, since $N>K = 2^{2n}$ for estimating the joint entropy.}
\item{Samples of the trajectories $\{\textbf{S}_{L}\} (\Delta t)$,
$\{\textbf{X}_L\} (\Delta t)$,
and $\{\textbf{S}_L,\textbf{X}_L\}(\Delta t)$ with different sampling intervals are
collected in parallel when simulating the driven Ising system. As illustrated
in Fig. \ref{fig:sampling}, the signal produces a new spin state every time step
$\delta t$ with a correlation time of $\tau_{\rm s}$. Samples for the trajectory
$\{ \textbf{S}_L\}$, which is characterised by a number of spin states $n$ and the
sampling interval $\Delta t$, are collected by storing spin states at every
$\Delta t$ in a vector until it reaches length $L$. Trajectories with the same
sampling interval are collected in sequence and do not overlap, while trajectories
with different sampling intervals are collected in parallel and do overlap with each other.
Samples of the output and joint trajectories are collected similarly.}
\item{Using the
Bayesian estimator of Nemenman \textit{et al.} \cite{Nemenman2004},
we estimate the entropies of the (joint) trajectories and compute
the mutual information and information transmission rate according
to Eqs. 1 and 2 as a function of the sampling interval.}
\item{ By extrapolating the information transmission rate to $\Delta t
    =1$, we get the value $I_{\text{R}}(\tau_{\text{s}},T)$ that is
    plotted in Figs. 3 and 4 of the main text. A compact sketch of this
    pipeline is given below.}
\end{itemize}
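A compact Python sketch of the procedure itemized above follows;
\texttt{simulate} is a hypothetical stand-in for the driven-Ising
sampler, and \texttt{mutual\_information} and
\texttt{extrapolate\_rate} are the helpers sketched earlier (the paper
uses the NSB estimator for each entropy term):
\begin{verbatim}
import numpy as np

def transmission_rate(simulate, dts, n, N):
    # simulate(dt, n, N) should return the sampled input and output
    # trajectories; the rate at each dt is the slope of I with respect
    # to L, estimated by a finite difference between n and n-1 states.
    rates = []
    for dt in dts:
        S_n, X_n = simulate(dt, n, N)
        S_m, X_m = simulate(dt, n - 1, N)
        dI = mutual_information(S_n, X_n) - mutual_information(S_m, X_m)
        rates.append(dI / dt)           # dI/dL, since dL = dt here
    return extrapolate_rate(np.array(dts), np.array(rates))
\end{verbatim}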
\clearpage
\subsection*{The information rate in a system of $10 \times 10$ spins}
Similar to Figs. 3 and 4 of the main text, Figs. \ref{fig:10I_inst}
and \ref{fig:10_IR} show, respectively, the instantaneous mutual
information $I_{\rm inst}$ and the information transmission rate $I_{\rm
R}$ as a function of the input signal correlation time
$\tau_{\text{s}}$ for a larger system, of $10 \times
10$ spins, and for different distances $d$. Because the effects of
criticality are stronger in the larger system, the differences in
response times $\tau_{\rm r}$ between different temperatures $T$
are larger. For this reason, we investigate a broader range of values
for $\tau_{\text{s}}$ in the two figures. Clearly, we find
qualitatively the same behaviour of $I_{\text{inst}}(S;X)$ and
$I_{\rm R}$ as in the system of $5 \times 5$ spins that is
investigated in the main text. Fig. 4a of the main text is
constructed by retrieving the maximum value of $I_{\rm R}$ at each
temperature in Fig. \ref{fig:10_IR}.
\begin{figure}[t]
\centering
\begin{adjustbox}{center}
\includegraphics[height=90mm,width=0.9\columnwidth]{SI6.eps}
\end{adjustbox}
\caption{The instantaneous mutual information $I_{\rm inst}(S;X)$ as
a function of the input correlation time $\tau_{\rm s}$, for
different temperatures $T$, and for different distances $d$ between
the input and output spin, for a system of $10\times 10$
spins. $I_{\rm inst}(S;X)$ increases monotonically with
$\tau_{\rm s}$ until it reaches a plateau, which equals the static
mutual information. This plateau increases with decreasing
temperature because of the reduced thermal noise. At low
temperatures, the plateau value is reached at considerably longer
$\tau_{\rm s}$ than at high temperatures, reflecting the increased
response times at low temperatures. Increasing the distance
between the input and output spin reduces the static mutual
    information at all temperatures. Note also that for small
    $\tau_{\rm s}$ there exists an optimal temperature that maximizes
$I_{\rm inst}$. It results from a trade-off between the necessity
to respond fast and to respond reliably.}
\label{fig:10I_inst}
\end{figure}
\begin{figure}[t]
\centering
\begin{adjustbox}{center}
\includegraphics[height=90mm,width=0.90\columnwidth]{SI7.eps}
\end{adjustbox}
\caption{The information transmission rate $I_{\rm R}$ as a function
of the input correlation time $\tau_{\rm s}$ and temperature $T$
for different distances $d$ between the input and output spin. For
a given temperature, there exists an optimal $\tau_{\rm s}$ that
optimizes $I_{\rm R}$, even though the curve is flattened at lower
temperatures, where the system can only respond very
slowly. Increasing $\tau_{\rm s}$ gives the system more time to
respond to the input, which enhances the reliability of information
transmission. Yet, increasing $\tau_{\rm s}$ also decreases the
number of distinct input states that are transmitted per unit time,
reducing the entropy of the input signal. This interplay gives rise
to an optimal $\tau_{\rm s}$, at which $I_{\rm R}$ reaches its
maximal value $I_{\rm R, max}$, which is plotted in Fig. 4a of the
main text.}
\label{fig:10_IR}
\end{figure}
\clearpage
\subsection*{Scaling the system size}
In order to investigate the behaviour of the information transmission
rate as the distance $d$ between the input and output signal increases
{\em together} with the system size ${\cal N}$, we have repeated the
same computations in a system of $15 \times 15$ spins at a distance
between input and output spin of $d = 6$. Fig. \ref{fig:15_IR}a shows
the information transmission rate $I_{\rm R}$ as a function of the
correlation time of the input signal $\tau_{\rm s}$ for a range of
temperatures. While the sampling noise has increased due to the larger
distance between the input and output spin, it is clear that there is
an optimal temperature $T_{\rm opt}$ that maximizes the information
transmission rate. Panel b of Fig. \ref{fig:15_IR} shows this maximum
information transmission rate $I_{\rm R, max}$ as a function of
temperature. It is seen that there is an optimal temperature $T_{\rm
opt}$ that globally maximizes the information transmission rate for
this system. By fitting a quadratic function to this plot, we estimate
the optimal temperature to be $T_{\rm opt} = 2.38$.
Fig. \ref{fig:Scaling} shows the optimal temperature as a function
of system size ${\cal N}$, scaling the distance $d$ between input
and output spin together with system size ${\cal N}$. The three points
correspond to ($d=2, {\cal N}=5$), ($d=4, {\cal N}=10$), and ($d=6,
{\cal N}=15$). It is seen that the optimal temperature that
maximizes the information transmission rate moves in the direction
of the critical temperature as the system size is increased.
\begin{figure}
\centering
\begin{adjustbox}{center}
\includegraphics[height=90mm,width=\columnwidth]{SI8}
\end{adjustbox}
\caption{(a) The information transmission rate $I_{\rm R}$ as a function
of the correlation time of the input signal $\tau_{\rm s}$ for
different temperatures $T$ in a system of $15 \times 15$ spins at
distance $d=6$. Due to the larger system size and distance between
the input and output spin $d$, the sampling noise has increased
significantly. However, there exists an optimal $\tau_{\rm s}$ that
maximizes $I_{\rm R}$ for each temperature $T$. This maximal
information transmission rate $I_{\rm R, max}$ is plotted as a
function of temperature in panel b. Clearly, there is an optimal
temperature that maximizes $I_{\rm R, max}$. }
\label{fig:15_IR}
\end{figure}
\begin{figure}
\centering
\begin{adjustbox}{center}
\includegraphics[height=70mm,width=0.6\columnwidth]{SI9}
\end{adjustbox}
\caption{The optimal temperature $T_{\rm opt}$ that maximizes the
information transmission rate decreases as the distance $d$ is
scaled {\em together} with the system size $\mathcal{N}$ from
($d=2,\mathcal{N} = 5$), to ($d=4, \mathcal{N} = 10$) and ($d =6,
\mathcal{N} = 15$).}
\label{fig:Scaling}
\end{figure}
\section{Introduction}
Stimulated Raman scattering (SRS)~\cite{Bloembergen_1967} imaging has gained tremendous interest over the last decade due to its ability to perform label-free chemical imaging in biological samples~\cite{Cheng2015,Zhang2018}. In SRS microscopy two laser pulses, a pump at frequency $\omega_p$ and a Stokes at frequency $\omega_s$, are focused on a sample to generate an image by point scanning. If the frequency difference $\omega_p - \omega_s$ equals a molecular vibrational frequency $\Omega_R$, an energy transfer occurs between the pump and the Stokes beam~\cite{Rigneault2018} that can be detected using dedicated modulation schemes. Since the first demonstrations of SRS imaging~\cite{Freudiger08_science,Nandakumar09_njp}, efforts have been pursued to image faster~\cite{Saar03122010}, but also to acquire a complete vibrational spectrum at each pixel~\cite{Seto2013,Rock2013,Liao2015,Liao2016,He2017,Kumar2018,Figueroa2018,Ragni2019,Ozeki2019}. The fastest approaches to acquire a full spectrum~\cite{Liao2016,He2017} use the SRS spectral focusing scheme, which retrieves spectral resolution by adding dispersion to the femtosecond pulses~\cite{Gershgoren2003, Hellerer2004}. When the pump and Stokes pulses are identically chirped, a narrow wavenumber range can be probed within the spectral bandwidth allowed by the pulses (fig.~\ref{fig:figure1}d). Not only is spectral resolution recovered, but the probed wavenumber can easily be tuned by adjusting the time delay between the two pulses, allowing for spectral scanning without the need to change the laser center wavelengths or to use a fast spectrometer. Recently, acousto-optic programmable dispersive filters (AOPDF) have been used as ultrafast delay lines in different applications, such as transient absorption and terahertz spectroscopy~\cite{Urbanek2016,Audier2017}. Contemporary to this work, spectral focusing and an AOPDF have been combined to rapidly sweep the delay between two chirped femtosecond pulses in an SRS microscope~\cite{Alshaykh2017}. Raman spectra covering the full lipid band (\SIrange[range-units = single]{2800}{3050}{\per \centi \meter}) were acquired at \SI{33}{\micro \second} per pixel, with a \SI{25}{\per \centi \meter} resolution. Although appealing, this demonstration was done on artificial samples providing strong Raman signals. In particular, the sensitivity was not showcased on biological tissues, and the increased acquisition rate was not applied to the study of dynamic behaviors, while these two aspects are essential for applications.
Here, we report hyperspectral SRS imaging over \SI{200}{\per \cm} in \SI{12}{\micro \second} at a scan rate of \SI{40}{\kilo \hertz} with shot-noise-limited sensitivity. By characterizing the noise in our system and engineering the optimal laser pulses, we have designed an SRS imaging platform combining spectral focusing with a faster AOPDF delay line, surpassing the acquisition speed and sensitivity demonstrated previously. We demonstrate this optimized speed and sensitivity on two major applications: dynamic chemical imaging and label-free histology on human tissue samples.
\section{Setup Description}
The SRS pump and Stokes beams are generated by a commercial femtosecond laser system (Chameleon OPO-VIS, Coherent) working at \SI{80}{\mega \hertz} repetition rate (fig.~\ref{fig:figure1}a).
The pump (\SI{800}{\nano \meter}) and Stokes (\SI{1045}{\nano \meter}) optical pulses are synchronized, and both can be modeled as Gaussian pulses with temporal full widths at half maximum of \SI{160}{\femto \second}.
The pump laser is sent to an AOPDF (HR or WB Dazzler, Fastlite) which acts as both a tunable dispersive medium and an ultra fast delay line (see details below).
The Stokes laser is sent through an acousto-optic modulator (MT200-A0.2-1064, AA optoelectronics) driven by a sinusoidal modulation at one fourth of the laser repetition rate (\SI{20}{\mega \hertz}).
The first diffraction order of the Stokes beam is collected and sent through a double pass grating pair (custom gratings, Wasatch Photonics) where it undergoes negative dispersion.
The modulated and negatively chirped Stokes beam is then recombined with the pump beam by means of a dichroic mirror before being sent to an inverted scanning microscope (TiU, Nikon).
The two lasers are focused in the sample using a 20x air objective (0.75NA, CFI Plan Apo Lambda, Nikon) and collected with the same objective in the forward direction.
The Stokes beam is filtered out using an optical short pass filter (FES0900, Thorlabs), and the pump is collected using a photodiode whose output is then fed to a lock-in amplifier module.
The photodiode, the lock-in amplifier and frequency divider are commercial systems optimized to work at \SI{20}{\mega \hertz} (SRS Lockin Module, APE).
The lock-in amplifier bandwidth was reduced to \SI{1.35}{\mega \hertz} using an electronic lowpass filter (EF508 Thorlabs, 5\textsuperscript{th} order).
The signal from the lock-in was sent to a data acquisition card (ATS460, AlazarTech) and acquired with a sampling rate of \SI{20}{\mega \hertz}.
Simultaneously with SRS, and to address biological applications, second harmonic generation (SHG) was recorded at \SI{400}{\nano \meter} in the epi direction, using a dichroic mirror (770dcxr, Chroma), a band-pass filter (HQ400/40, Chroma), and a photomultiplier tube (R9110 tube and C7950 data socket, Hamamatsu).
Average laser powers at the sample plane were \SIlist[list-final-separator = {and}]{15;20}{\milli \watt} for the pump and Stokes beams, respectively.
\begin{table}
\centering
\begin{tabular}{r|c|c|c}
Model & Rep. rate (\si{\kilo \hertz}) & Range (\si{\pico \second}) & Ratio (\si{\femto \second / \micro \second})\\
\hline
HR & 30.6 & 8.5 & 260\\
\hline
WB & 40 & 3.5 & 161\\
\hline
\end{tabular}
\caption[AOPDF characteristics]{Characteristics of the two AOPDF types. Both models consist of a 25-mm-long $TeO_2$ crystal, but the cut angle is different for each, to allow for different acoustic propagation speed. The repetition rate is the maximum rate at which successive acoustic wave can be sent inside the crystal. The range is the maximum delay allowed by the optical index difference between the fast and slow axis, the values are given for \SI{800}{\nano \meter} light. The time ratio gives the amount of delay added per microsecond of acoustic propagation.}
\label{tab:AOPDFspecs}
\end{table}
\section{AOPDF Description}
The AOPDF consists of a birefringent crystal in which an acoustic shearing wave co-propagates with an ordinary-polarized optical beam (fast axis).
Each acoustic frequency interacts with a specific optical frequency (acousto-optic phase-matching relationship) and gives rise to an extraordinary-polarized diffracted beam (slow axis).
Therefore, using the proper acoustic waveform, one can imprint an arbitrary phase profile on incoming optical pulses~\cite{Verluise2000}.
In addition to this tunable phase profile, the propagation of the acoustic wave inside the crystal effectively imprints a delay on successive optical pulses that increases linearly over time (fig.~\ref{fig:figure1}.c).
Using both of these properties, one can address the two requirements for spectral focusing SRS: chirped optical pulses and fast delay scanning.
Two different AOPDFs, the High-Resolution (HR) and Wide-Band (WB), have been used in this study.
They are based on the same concept, but have different working parameters, reported in table~\ref{tab:AOPDFspecs}.
In particular, the WB model has a higher repetition rate, a higher diffraction efficiency, and a shorter delay range, making it better suited for vibrational spectroscopy.
\begin{figure}[htbp]
\centering
\includegraphics[scale=1]{./figure1.eps}
\caption{a) Setup schematics, OPO: optical parametric oscillator, AOM: acousto-optic modulator. b) Spectral focusing scheme: the instantaneous frequency difference between the pulses is a function of the pulses relative delay. c) Acousto-optic programmable dispersive filter (AOPDF): the acoustic wave propagating inside the crystal imprints both negative dispersion, and a delay that changes linearly with the acoustic wave propagation time. d) Signal and resolution obtained by using chirped pulses that are delayed as in b). }
\label{fig:figure1}
\end{figure}
\section{Noise levels}
The laser system used in this study is shot noise limited for electronic frequencies around \SI{20}{\mega \hertz} and average laser intensities at least up to \SI{70}{\milli \watt}~\cite{AudierNoise2019}.
However, to investigate if the AOPDF introduces additional noise, we measure the laser noise in the presence of the AOPDF, at the detector plane, using \SI{5}{\milli \watt} of average laser power and a commercial photodiode (Det10A, \SI{350}{\mega \hertz} bandwidth, Thorlabs).
The photodiode was loaded with a \SI{50}{\ohm} resistor and the resulting voltage was filtered with a \SI{5.6}{\mega \hertz} high-pass filter (EF515, 6\textsuperscript{th} order, Thorlabs).
The purpose of this filter is to damp the repetition rate and harmonics of the AOPDF~(Table~\ref{tab:AOPDFspecs}).
The filtered photodiode output was sent to a spectrum analyzer (HF2LI, Zurich instrument) and its electrical power spectral density was acquired between \SIlist[list-final-separator = {and}]{8;40}{\mega \hertz} (fig~\ref{fig:figure2}a)).
The power spectral density associated with shot noise is estimated at \SI{-191}{\dB \watt \per \hertz}, which is below the detection limit of our spectrum analyzer (\SI{-180}{\dB \watt \per \hertz}).
However, the laser intensity noise introduced by the AOPDF was high enough to be measured without amplification.
The laser intensity noise is particularly high around \SI{12}{\mega \hertz} for the HR AOPDF, and around \SI{25}{\mega \hertz} for the WB AOPDF.
This point has not been addressed in previous studies, and may have limited the sensitivity of previous systems using these devices.
It is suspected that the amplitude noise imprinted on the laser by the AOPDF results from the interference of successive acoustic waves by reflection and diffusion inside the crystal.
Consistent with this hypothesis, adding a waiting time between successive acoustic pulses significantly decreased the noise level (data not shown).
It is possible to avoid the excess noise introduced by the AOPDF by selecting a suitable modulation frequency and lock-in bandwidth.
Using the lock-in system described previously, with a modulation frequency of \SI{20}{\mega \hertz} and a \SI{1.35}{\mega \hertz} bandwidth, the measured noise from this system matches exactly the expected shot noise from the laser (fig~\ref{fig:figure2}b)).
For average laser intensities above \SI{9}{\milli \watt}, detector electronic noise becomes negligible compared to the laser shot noise.
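For reference, the expected shot-noise floor can be checked against the
textbook expression $S = 2 e I R_{\rm load}$ for the one-sided
electrical power spectral density of shot noise across the load; in the
minimal sketch below the photodiode responsivity is our own assumption,
as it is not specified in the text:
\begin{verbatim}
import numpy as np

e = 1.602e-19     # elementary charge [C]
R_load = 50.0     # load resistor [ohm]
resp = 0.5        # A/W, assumed Si responsivity near 800 nm
P_opt = 5e-3      # average optical power at the detector [W]

I_pd = resp * P_opt              # mean photocurrent [A]
S_shot = 2 * e * I_pd * R_load   # electrical PSD [W/Hz]
print(10 * np.log10(S_shot))     # about -194 dBW/Hz, of the order of
                                 # the -191 dBW/Hz quoted above
\end{verbatim}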
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{./figure2.eps}}
\caption{a) Laser intensity noise for the HR and WB AOPDF. b) System output noise power compared to shot noise and electronic noise, for increasing laser power. c) SRS signal loss and resolution gain as a function of the chirped Stokes pulse duration. The different losses (red lines) corresponds to different linewidths for the probed molecular vibration, indicated in red in \si{\per \cm}. d) Achieved spectral resolution as a function of the dispersion (second order phase) applied using the AOPDF.}
\label{fig:figure2}
\end{figure}
\section{Resolution and signal level}
Spectral focusing is used to recover spectral resolution when performing SRS with two femtosecond pulses, as illustrated in figure~\ref{fig:figure1}b.
The spectral resolution that can be achieved using spectral focusing increases with the amount of dispersion added to the two pulses, while the SRS signal drops (fig~\ref{fig:figure2}.c).
The associated mathematical derivations can be found in \cite{Su2013}.
We will focus here on the lipid vibrational band between \SI{2850}{\per \centi \meter} and \SI{3000}{\per \centi \meter} where spectral features are typically larger than \SI{20}{\per \centi \meter}.
For this reason, both pump and Stokes pulses were chirped to \SI{1}{\pico \second}, by adding negative dispersion with the AOPDF and grating pair, respectively.
Such pulse dispersion corresponds to a theoretical value of \SI{22}{\per \centi \meter} for the SRS spectral resolution, which is confirmed experimentally using the Raman line of DMSO at \SI{2913}{\per \centi \meter} (fig~\ref{fig:figure2}.d).
Because the AOPDF allows for tunable dispersion, the amount of added second order phase was optimized (fig~\ref{fig:figure2}.d), and the resulting total negative dispersion is consistent with the optics in the system.
The \SI{-77}{k\femto \second \square} second order phase added by the AOPDF includes the \SI{-59}{k\femto \second \square} required to negatively chirp the pulses at the sample plane, the \SI{-13}{k\femto \second \square} required to compensate the dispersion introduced by the AOPDF crystal, and the \SI{-5}{k\femto \second \square} compensating the rest of the optics on the optical path.
The expected drop in SRS signal resulting from the use of chirped pulses depends on the linewidth of the probed molecular bond (fig~\ref{fig:figure2}.c).
In the lipid band we can expect linewidths of \SI{25}{\per \centi \meter}, and as a result a \SI{3}{\dB} drop in signal.
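These numbers follow from simple Gaussian-pulse estimates; the sketch
below is our own back-of-the-envelope check, assuming transform-limited
\SI{160}{\femto \second} Gaussian pulses and the common heuristic that
the matched-chirp resolution approaches the bandwidth of a
transform-limited pulse of the chirped duration:
\begin{verbatim}
import numpy as np

tbp = 0.441              # Gaussian time-bandwidth product (FWHM)
c_cm = 2.998e10          # speed of light [cm/s]
T0, Tc = 160e-15, 1e-12  # transform-limited and chirped FWHM [s]

bw_single = tbp / T0 / c_cm         # ~92 /cm per pulse
bw_pair = np.sqrt(2) * bw_single    # ~130 /cm for the two-pulse pair
res = np.sqrt(2) * tbp / Tc / c_cm  # ~21 /cm, close to the measured 22 /cm
print(bw_pair, res)
\end{verbatim}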
\section{Chemical imaging of multiple species}
Figure~\ref{fig:figure4}.a shows the SRS spectra acquired for 5 pure species: melamine (Resin, \SI{5}{\micro \meter} beads), polystyrene (PS, \SI{20}{\micro \meter} beads), polymethylmethacrylate (PMMA, \SI{10}{\micro \meter} beads), bovine serum albumin (BSA) crystals, and an olive oil solution. An artificial sample composed of these 5 species was prepared and imaged using the WB AOPDF with a pixel dwell time of \SI{25}{\micro \second}, which allowed us to record the full Raman spectrum between \SI{2850}{\per \centi \meter} and \SI{3000}{\per \centi \meter} with a resolution of $\approx$\SI{22}{\per \centi \meter}. The recorded hyperspectral image was projected onto the known pure spectra using linear decomposition, which allowed us to map the 5 species (fig~\ref{fig:figure4}.b). The different components are easily distinguishable without further averaging.
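The linear decomposition amounts to an ordinary least-squares
projection of each pixel spectrum onto the pure spectra; a minimal
sketch (array names and shapes are our own convention):
\begin{verbatim}
import numpy as np

def unmix(cube, pure):
    # cube: (ny, nx, n_wn) hyperspectral SRS image
    # pure: (n_species, n_wn) reference spectra
    # returns (ny, nx, n_species) per-pixel concentration maps
    ny, nx, nw = cube.shape
    coeffs, *_ = np.linalg.lstsq(pure.T, cube.reshape(-1, nw).T,
                                 rcond=None)
    return coeffs.T.reshape(ny, nx, pure.shape[0])
\end{verbatim}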
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[scale=0.7]{./WB2BeadsMix.eps}}
\caption{a) SRS spectra of pure species acquired with the AOPDF delay line (Resin: melamine, PS: polystyrene, PMMA: polymethylmethacrylate, Oil: olive oil, BSA: bovine serum albumin). b) Image components associated with each pure species, as extracted from the hyperspectral image. MIX represents the summed image of all components. Pixel dwell time \SI{25}{\micro \second}, scale bar: \SI{25}{\micro \meter}, total image acquisition time: \SI{1}{\second}.}
\label{fig:figure4}
\end{figure}
\section{Dynamic imaging of chemical reactions}
We use here our fast SRS imaging platform to monitor the dynamics of chemical reactions. We concentrate on Mannitol, a common excipient used in the pharmaceutical industry, which can crystallize in several polymorphic forms: $\alpha$, $\beta$, and $\delta$.
The $\delta$ polymorph can change into $\beta$ upon hydration, and both forms can be identified through their Raman spectra in the lipid band.
Pure $\delta$-Mannitol crystals were prepared between a microscope slide and a coverslip and imaged over twenty minutes during the introduction of water vapor. Snapshots of the user interface during the acquisition are displayed in figure~\ref{fig:Mannitol} (a-d), while the full video recording is available (Visualization 1). $\delta$-Mannitol (red) transforms into $\beta$-Mannitol (blue) over time upon hydration. We used the HR AOPDF with a frame rate of one image every \SI{1.6}{\second} (two successive images were averaged to increase the signal-to-noise ratio).
The hyperspectral images were processed on the fly by projecting them on the components corresponding to pure $\beta$ and $\delta$ Mannitol (fig~\ref{fig:Mannitol}.e), therefore providing live feedback on the crystal transformation process.
Because of the dynamic nature of the process, image refocusing and tracking of the relevant feature were necessary, and were made possible by live two-color rendering.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[scale=1]{./MannitolHydration.eps}}
\caption{a)-d) Timelapse of a $\delta$-Mannitol (red) crystal transforming into $\beta$-Mannitol (blue) in contact with water vapor. The field of view is \SI{100}{\micro \meter} wide, pixel dwell time \SI{40}{\micro \second}. e) SRS spectra of pure $\delta$-Mannitol (red) and $\beta$-Mannitol (blue).}
\label{fig:Mannitol}
\end{figure}
\section{Label-free histological recordings}
Fast label-free imaging of biopsy sections is a major application of coherent Raman imaging that can potentially revolutionize the field of histology by increasing diagnosis speed, lowering hospital costs, and improving patient care~\cite{Cicerone2018}.
Frozen sections of human colon cancer tissue were imaged with our fast SRS imaging platform, using the WB AOPDF with a pixel dwell time of \SI{25}{\micro \second}. $8 \times 10$ adjacent fields of view (\SI{100}{\micro \meter} $\times$ \SI{100}{\micro \meter}) were acquired separately and stitched together to reconstruct a wider image (fig.~\ref{fig:Histology}). The total acquisition time was 15 minutes.
Second harmonic generation at \SI{400}{\nano \meter} was recorded in the epi direction to provide an additional contrast mechanism specific to collagen fibers. As previously, for each pixel the recorded hyperspectral image was projected onto the spectra of pure BSA (protein) and oil (lipid) to highlight the nuclei and cell bodies, respectively. For rendering, look-up tables were adjusted to mimic eosin, saffron and haematoxylin staining~\cite{Orringer2017,Sarri2019}.
\begin{figure}[htbp]
\centering
\fbox{\includegraphics[width=\linewidth]{./Dazzlerhistology.eps}}
\caption{Frozen section of healthy human colon imaged using fast SRS and SHG. The spatial map of proteins (blue) and lipids (pink), were retrieved using the spectra of BSA and Oil for projection. The SHG signal, characteristic of collagen fibers is shown in red. Image total acquisition time 15 minutes.}
\label{fig:Histology}
\end{figure}
\section{Discussion}
Our SRS imaging platform acquires a full vibrational spectrum over the [\SI{2850}{\per \centi \meter}, \SI{3050}{\per \centi \meter}] spectral range in \SI{12.5}{\micro \second}; this corresponds to the time during which the chirped pump and Stokes pulses overlap while the pump pulse delay is swept by the AOPDF. However, the minimum total time per pixel is \SI{25}{\micro \second} when using the WB AOPDF, corresponding to a duty cycle of 50 percent. This figure could be improved by designing a shorter AOPDF delay line optimized for a lower delay range and a higher repetition rate. By applying dispersion with a grating pair, and using the AOPDF solely as a delay line, an \SI{80}{\kilo \hertz} pixel acquisition rate should be achievable.
In diluted samples such as biological tissues, image-wise averaging had to be performed to achieve signal-to-noise ratios (SNR) compatible with imaging (SNR > 10 dB). With our shot-noise-limited system, the SNR achieved on biological samples without averaging (pixel dwell time \SI{25}{\micro \second}) is of order 1, which sets the smallest detectable SRS modulation.
Given a shot-noise-limited system, with a lock-in bandwidth $\Delta f = \SI{1.3}{\MHz}$, a probe beam average laser power $I_{p}$ of \SI{15}{\mW} at the wavelength $\lambda=\SI{800}{\nm}$, the SRS relative modulation $\beta=\frac{\Delta I}{I}$ associated with an SNR of 1 is given by~\cite{AudierNoise2019}:
\begin{equation}
\beta = 4 \sqrt{\frac{h c \Delta f}{I_{p} \lambda \eta}} \approx 2\times 10^{-5}
\end{equation}
where $\eta$=0.8 is the detector quantum efficiency and $h$ the Planck constant.
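As a numerical check of this expression (our own illustration):
\begin{verbatim}
h, c = 6.626e-34, 2.998e8  # Planck constant [J s], speed of light [m/s]
df, Ip = 1.3e6, 15e-3      # lock-in bandwidth [Hz], pump power [W]
lam, eta = 800e-9, 0.8     # wavelength [m], detector quantum efficiency

beta = 4 * (h * c * df / (Ip * lam * eta)) ** 0.5
print(beta)                # ~2.1e-5, as quoted above
\end{verbatim}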
The sensitivity of the system could be increased by red-shifting the SRS pump and Stokes wavelength.
Using near-infrared light such as \SI{940}{\nm} pump and \SI{1310}{\nm} Stokes, one may significantly increase the average laser power on the sample, while keeping photo-damage low.
Such near-IR scheme would allow for an increase of both laser powers by a factor of 2 to 3, which would be sufficient to improve the SNR by a factor of 10, therefore allowing for single SRS spectral measurements per pixel.
We limited our investigation to the lipid band but the proposed approach can be easily extended to the fingerprint spectral range using appropriate AOPDF and large bandwidth tunable fs sources.
Other improvements may include the use of spatial multiplexing, provided higher laser powers are available. Using multiple foci has been demonstrated as a method to scale up the acquisition rate~\cite{Heuke2018a} in SRS. Another possible strategy would be to use our fast spectral acquisition scheme while under-sampling the image through matrix completion scheme~\cite{Lin2018}.
\section{Conclusion}
We demonstrated fast hyperspectral SRS imaging by combining spectral focusing with an acousto-optic delay line working at \SI{40}{\kilo \hertz}. Our system is at the shot noise level for \SI{15}{\milli \watt} of laser probe power, and a bandwidth of \SI{1.3}{\MHz}.
The achieved resolution is \SI{22}{\per \cm} over a bandwidth of \SI{130}{\per \cm} full width at half maximum.
SRS spectra in the lipid band were acquired with \SI{12.5}{\micro \second} recordings per pixel corresponding to a duty cycle of 50 percent.
Samples containing up to 5 different chemical species have been imaged, and individual component maps were retrieved with no visible cross-talk.
Time-resolved imaging of a chemical transformation ($\delta$ Mannitol to $\beta$ Mannitol) was made possible, opening the door to other studies requiring fast chemical imaging.
Finally, the sensitivity of the system was sufficient to perform stimulated Raman imaging of human frozen tissue sections, demonstrating the applicability of this measurement system for label-free and live histology.
\section*{Funding Information}
We acknowledge financial support from the Centre National de la Recherche Scientifique (CNRS), Aix-Marseille University A*Midex (ANR-11-IDEX-0001-02) (A-M-AAP-ID-17-13-170228-15.22-RIGNEAULT), ANR grants France Bio Imaging (ANR-10-INSB-04-01) and France Life Imaging (ANR-11-INSB-0006) infrastructure networks and Plan cancer INSERM PC201508 and 18CP128-00.
\section*{Acknowledgments}
The authors would like to thank Barbara Sarri (Institut Fresnel) and Flora Poizat (Institut Paoli-Calmettes) for providing and helping with the frozen tissue sections.
\section*{Disclaimer}
Nicolas Forget has financial interest in FASTLITE.
\section*{Supplemental Documents}
Visualization 1: Mannitol dilution.
\section{Introduction}
When a compact object spirals into a much more massive object via the
emission of gravitational radiation, the waves carry information on
the multipoles of the central mass (Ryan 1995). Therefore, by
monitoring the gravitational wave signal one could explore the
spacetime geometry outside the central body, as anticipated by
Abramovici et al. (1992). If the object is a black hole, it should be
possible to verify its black hole nature and measure its spin
parameter ($a/M$) from the ratios of the multipoles.
Practical application of this idea is expected to become feasible in
the next decade or two. When a neutron star or a stellar-mass black
hole spirals into a supermassive black hole (SMBH) in a galactic
nucleus, the gravitational waves have periods in the range $10-10^4$
s. Such periods are well-suited for detection with the proposed Laser
Interferometer Space Antenna (LISA, see Danzmann et al. 1996, Bender
et al. 1996, Folkner 1998, Cutler 1998). LISA could detect the
inspiral of a $10M_\odot$ black hole into a $10^6M_\odot$ SMBH out to
a redshift $\sim1$. At these distances event rates of up to several
per year are likely (Hils \& Bender 1995, Sigurdsson 1997).
Both the detection and the analysis of the signals will require a very
clean system with no significant perturbation of the orbiting star.
Is this likely? In an interesting series of papers, Chakrabarti
(1993, 1996) and Molteni, Gerardi \& Chakrabarti (1994) showed that,
under some circumstances, the hydrodynamic interaction between the
orbiting star and an accretion disk surrounding the SMBH could be so
strong that the inspiral may be halted altogether and the star may be
trapped in a stable orbit. Chakrabarti's scenario is an extreme one
that involves a very strong hydrodynamic interaction. It presumably
occurs only rarely. However, even if the interaction is orders of
magnitude weaker than the level required by Chakrabarti, it could
still pose a serious problem. The detection of inspiral events
involves matching the observed signal with a theoretical template
covering many hundreds or thousands of wave periods, and even a tiny
perturbation could be fatal.
Motivated by this, we investigate the following simple question: What
is the expected strength of the hydrodynamic interaction in a
``typical'' galactic nucleus, and how significant is the orbital
perturbation due to this interaction?
Before embarking on a quantitative analysis, we make two points.
First, only a small fraction of galaxies harbor {\it active galactic
nuclei}. Active nuclei, such as quasars, Seyferts, etc. (see Krolik
1999 for a review), are thought to consist of SMBHs accreting mass at
rates close to or exceeding the Eddington rate $\dot M_{\rm Edd}$.
The accretion is believed to proceed via a thin accretion disk
(Shakura \& Sunyaev 1973, Novikov \& Thorne 1973) when $\dot M\mathrel{\mathpalette\@versim<}
\dot M_{\rm Edd}$, or a slim accretion disk (Abramowicz et al. 1988)
for $\dot M\mathrel{\mathpalette\@versim>} \dot M_{\rm Edd}$. Chakrabarti's work is focused on
such high-$\dot M$ SMBHs; indeed, he explicitly considers
super-Eddington accretion in his analysis. However, the vast majority
of galactic nuclei are much dimmer than typical quasars or Seyferts.
It has become clear in recent years that these ``normal'' nuclei also
contain SMBHs (e.g. Magorrian et al. 1998), but that the accretion
rates are much below $\dot M_{\rm Edd}$, perhaps as low as $\dot
M\sim10^{-2}-10^{-4}\dot M_{\rm Edd}$. The lower $\dot M$ implies
that there should be less gas in the vicinity of the SMBH and
therefore a weaker frictional drag on an orbiting star.
Second, at the low values of $\dot M$ relevant for normal galactic
nuclei, there is increasing evidence that the gas flows have a
different character from the flows found in bright nuclei; the
accretion does not occur in a thin or slim disk but via an
advection-dominated accretion flow (ADAF, see Narayan \& Yi 1994,
1995a,b, Abramowicz et al. 1995, Ichimaru 1977, Rees et al. 1982; also
Kato, Fukue \& Mineshige 1998 and Narayan, Mahadevan \& Quataert 1998
for reviews). For a given value of $\dot M$, the gas density in an
ADAF is significantly lower than that in a thin disk. This further
reduces the hydrodynamic drag on an orbiting star.
In \S2 of this paper we estimate the hydrodynamic torque on a compact
star orbiting inside an ADAF, and in \S3 we compare this with the
torque that results from gravitational wave emission. We conclude
with a discussion in \S4.
\section{Hydrodynamic Drag on a Compact Star Orbiting Inside an ADAF}
Consider a compact star of mass $M_c=10m_1M_\odot$ orbiting a SMBH of
mass $M_{\rm SMBH}=10^6m_6M_\odot$. For simplicity, we assume that the
star is in a circular orbit of radius $R=rR_g$, where $R_g=GM_{\rm
SMBH}/c^2$. We also work within a Newtonian framework. This is
adequate in view of the various other approximations we make and
considering that we seek only order of magnitude estimates. The
orbital velocity of the star is given by the Keplerian formula,
$$
v_K={c\over r^{1/2}}. \eqno (1)
$$
Let the mass accretion rate onto the SMBH be $\dot M_{\rm SMBH}$, and let us
express this in Eddington units:
$$ \dot M_{\rm SMBH}=\dot m\dot M_{\rm Edd}, \eqno (2)
$$
$$
\dot M_{\rm Edd}={4\pi GM_{\rm SMBH}m_p\over\eta\sigma_Tc}=1.4\times10^{24}m_6
~{\rm g\,s^{-1}}, \eqno (3)
$$
where $m_p$ is the proton mass and $\sigma_T$ is the Thomson
cross-section of the electron. The parameter $\eta$ refers to the
radiative efficiency of the accretion assumed for the purpose of
defining the Eddington rate; we use $\eta$=0.1 for calculating the
numerical value given in the right hand side of equation (3). Note
that the choice of $\eta$ in the definition of $\dot M_{\rm Edd}$ is
purely conventional. The actual radiative efficiency in any particular
accretion flow could be different from 0.1.
Let us define also the Salpeter time, which is the mass $e$-folding
time of an object that accretes at the Eddington rate:
$$
t_S={M_{\rm SMBH}\over\dot M_{\rm Edd}}={\eta\sigma_Tc\over4\pi Gm_p}
=4.5\times10^7 ~{\rm yr}. \eqno (4)
$$
Once again, we have set $\eta=0.1$ on the right.
We assume that the accretion flow around the SMBH occurs via an ADAF.
The key characteristic of an ADAF is that the thermal energy released
as the gas falls into the potential well of the central mass is not
radiated (as in a thin disk) but is retained in the gas and advected
down to the center (Narayan \& Yi 1994). This causes a number of
important effects on the dynamics of the gas.
First, since all the binding energy is retained as thermal energy, the
gas becomes extremely hot and the temperature approaches the virial
limit. Equivalently, the isothermal sound speed
$c_s\equiv\sqrt{p/\rho}$, where $p$ is the pressure and $\rho$ is the
density, approaches the Keplerian speed $v_K$. One consequence is
that the gas does not take up a disk-like configuration (as in a thin
accretion disk), but has a quasi-spherical morphology (Narayan \& Yi
1995a). Another consequence is that the orbital velocity of the gas
is significantly sub-Keplerian; a typical value is $v_\phi\sim v_K/3$.
Both of these effects simplify our analysis considerably since they
imply that the magnitude of the hydrodynamic drag is insensitive to
the orientation of the stellar orbit relative to the angular momentum
vector of the accreting gas. The radial velocity $v_R$ of the
accreting gas is quite large, roughly $v_R\sim\alpha v_K$, where
$\alpha$ is the standard dimensionless viscosity parameter (Shakura \&
Sunyaev 1973). For an ADAF, $\alpha$ typically lies in the range 0.3
(Esin, McClintock \& Narayan 1997) to 0.1 (Quataert \& Narayan 1999).
Thus the radial velocity of the gas is a substantial fraction of
$v_K$.
We can now express the density $\rho$ of the accreting gas in terms of
the mass accretion rate:
$$
\rho={\dot M_{\rm SMBH}\over 4\pi R^2v_R}
={\dot m\over 4\pi\alpha}{M_{\rm SMBH}\over t_S}
{c^3\over(GM_{\rm SMBH})^2} {1\over r^{3/2}}, \eqno (5)
$$
where we have expressed $\dot M_{\rm SMBH}$ in terms of the Eddington
rate and set $v_R=\alpha v_K$.
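As an illustrative evaluation (the fiducial values here are ours, consistent with the standard choices used below), take $\dot m=10^{-2}$, $\alpha=0.1$, $m_6=1$ and $r=10$. Using $M_{\rm SMBH}/t_S=\dot M_{\rm Edd}=1.4\times10^{24}~{\rm g\,s^{-1}}$ and $GM_{\rm SMBH}=1.3\times10^{32}~{\rm cm^3\,s^{-2}}$, equation (5) gives
$$
\rho\sim{10^{-2}\over4\pi\times0.1}\times
{1.4\times10^{24}\times(3\times10^{10})^3\over(1.3\times10^{32})^2}
\times{1\over10^{3/2}}~{\rm g\,cm^{-3}}\sim5\times10^{-13}~{\rm g\,cm^{-3}},
$$
consistent with the statement above that the ADAF density is much lower than that of a thin disk with the same $\dot M$.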
Ostriker (1999) has estimated the drag force $F_{df}$ on a mass
$M_c$ moving through a uniform gas of density $\rho$ with relative
velocity $v_{rel}$:
$$
F_{df}=-4\pi I\left({GM_c\over v_{rel}}\right)^2\rho, \eqno (6)
$$
where the negative sign indicates that the force acts in the opposite
direction to the velocity of the mass. The coefficient $I$ depends on
the Mach number, ${\cal M}\equiv v_{rel}/c_s$, of the relative motion.
For ${\cal M}\ll1$, Ostriker finds $I\to {\cal M}^3/3$, while for
${\cal M}\gg1$, $I\to\ln(R_{max}/R_{min})$, where $R_{max}$ is the
size of the gaseous system and $R_{min}$ is the effective size of the
mass $M_c$.
In our problem, $v_{rel}\sim c_s\sim v_K$ and the Mach number is of
order unity. We could therefore use either the subsonic or supersonic
estimate of $I$. We use the supersonic estimate as it gives a larger
force and thereby provides an upper limit on the hydrodynamic drag.
We set $R_{max}$ equal to the size of the system, namely the local
radius $R$. The choice of $R_{min}$ is less obvious since the radius
of the star, the natural choice, is inappropriate for a compact star.
We set $R_{min}$ equal to the gravitational capture radius,
$GM_c/v_{rel}^2$, since Ostriker's linear analysis is valid only for
gas streamlines with impact parameters larger than this radius.
Setting $v_{rel}=v_K$, we then find that
$$
I\sim\ln\left({Rv_{rel}^2\over GM_c}\right)=
\ln\left({M_{\rm SMBH}\over M_c}\right) =
12+\ln\left({m_6\over m_1}\right). \eqno (7)
$$
We use $I=10$ in numerical estimates.
Let us define the hydrodynamic drag time scale $t_{hd}$ to be the
$e$-folding time for the specific angular momentum of the orbiting
star. Thus
$$
t_{hd}\equiv{M_cv_K\over |F_{df}|}={c^3\over4\pi IG\rho GM_c}
\left({v_{rel}\over v_K}\right)^2{1\over r^{3/2}}. \eqno (8)
$$
Setting $v_{rel}\sim v_K$, and substituting for $\rho$ from equation
(5), we find
$$
t_{hd}={\alpha\over I\dot m}{M_{\rm SMBH}\over M_c} t_S
=10^5{\alpha\over I\dot m}{m_6\over m_1}t_S. \eqno
(9)
$$
Putting in numerical values, this gives
$$
t_{hd}\sim4.5\times10^{10}
{m_6\over\dot m m_1} ~{\rm yr}, \eqno (10)
$$
where we have used ``standard'' parameter values: $\alpha=0.1$,
$I=10$, $\eta=0.1$.
ADAFs are present only for low values of $\dot m\lsim\alpha^2\sim
10^{-1}-10^{-2}$ (Narayan \& Yi 1995b, Esin et al. 1997). The
galactic nuclei we are interested in probably have even lower
accretion rates: $10^{-4}\lsim\dot m\lsim10^{-2}$. For such values of
$\dot m$, we see that the hydrodynamic drag on an orbiting star is
extremely small. In fact, two further effects make the drag even
lower than the above estimate:
\noindent
1. At radii $r\lsim10$, which is the region we are interested in for
gravitational wave studies, the radial velocity of the ADAF is not
$\alpha v_K$ but closer to $v_K$ (Narayan, Kato \& Honma 1997, Chen,
Abramowicz \& Lasota 1997). This causes the density $\rho$ to
decrease and the time scale $t_{hd}$ to increase by a factor
$\sim1/\alpha\sim10$.
\noindent
2. Blandford \& Begelman (1999) suggested, following earlier work by
Narayan \& Yi (1994, 1995a), that there could be significant mass
outflows from ADAFs. As a result, close to the SMBH, $\dot m$ may be
one or two orders of magnitude lower than at large radii. This would
again reduce the hydrodynamic drag on a star orbiting close to the
SMBH.
\section{Comparison of Hydrodynamic and Gravitational Wave Torques}
The angular momentum of a binary system consisting of a SMBH and a
compact star is given by $J=\mu\sqrt{GMa}$, where $M=M_{\rm
SMBH}+M_c\approx M_{\rm SMBH}$ is the total mass, $\mu=M_{\rm
SMBH}M_c/M\approx M_c$ is the reduced mass, and $a$ is the radius of
the orbit. The angular frequency of the orbit is equal to
$\sqrt{GM/a^3}$.
The time scale on which $J$ changes as a result of gravitational wave
emission is (e.g. Shapiro \& Teukolsky 1983)
$$
t_{gw}={J\over|dJ/dt|}={5\over32}{c^5\over G^3}{a^4\over M^2\mu}.
\eqno (11)
$$
Let us express this in practical units. We write $a=10r_1R_g$, and
define $P_2=P/(100~{\rm s})$, where $P=\pi/\Omega$ is the period of the
quadrupolar gravitational waves (this period is equal to one half the
period of the orbit). Then
$$
t_{gw}=24{m_6^2r_1^4\over m_1}\,{\rm yr}=0.35{P_2^{8/3}\over
m_1m_6^{2/3}}\,{\rm yr}. \eqno (12)
$$
The time to merger is given by
$$
t_m={t_{gw}\over8}=3.0{m_6^2r_1^4\over m_1}\,{\rm
yr}=0.044{P_2^{8/3}\over m_1m_6^{2/3}}\,{\rm yr}. \eqno (13)
$$
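As a concrete check (the fiducial choices here are ours), take $m_1=m_6=r_1=1$, i.e. a $10M_\odot$ star orbiting a $10^6M_\odot$ SMBH at $a=10R_g$. The first form of equation (13) gives $t_m=3.0$ yr, and solving the second form for the period gives
$$
P_2=\left({3.0\over0.044}\right)^{3/8}\approx4.9, \qquad {\rm i.e.}~~P\approx490~{\rm s},
$$
which agrees with $P=\pi/\Omega$ and $\Omega=\sqrt{GM/a^3}$ evaluated directly for these parameters.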
LISA or a similar gravitational wave detector is likely to be able to
follow a merger event for at most a few years. Let us assume $t_m<10$
yr. The time scale on which the gravitational wave torque acts on the
orbit is given by $t_{gw}=8t_m<100$ yr. The time scale of the
hydrodynamic torque (eq 10) is seen to be longer than $t_{gw}$ by
nearly ten orders of magnitude (recall that $\dot m\lsim0.1$ for an
ADAF and is probably $\sim10^{-2} -10^{-4}$ in normal galactic nuclei).
Thus it is clear that the hydrodynamic torque is completely
negligible.
For a more quantitative consideration, let us compute the number of
wave periods prior to the merger of the two objects:
$$
n_m=\int_0^{t_m} {dt\over P} = {8\over5}{t_m\over P}. \eqno (14)
$$
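In the last expression $P$ denotes the wave period at the start of the inspiral. The factor $8/5$ arises because the remaining time to merger scales as $P^{8/3}$ (eq 13), so that $P(t)=P(1-t/t_m)^{3/8}$ along the inspiral and
$$
\int_0^{t_m}{dt\over P(t)}={t_m\over P}\int_0^1(1-u)^{-3/8}\,du={8\over5}\,{t_m\over P}.
$$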
Putting in numerical values,
$$
n_m = 3.1\times 10^5{m_6r_1^{5/2}\over m_1}
= 2.2\times10^4{P_2^{5/3}\over m_1m_6^{2/3}}
= 1.6\times10^5{t_{m,yr}^{5/8}\over m_1^{3/8}m_6^{1/4}}, \eqno (15)
$$
where $t_{m,yr}=t_m/(1~{\rm yr})$. With a strong signal and a theoretically
computed template of the wave train one expects to follow the phase of
the gravitational waves with a time resolution of better than a
period. Optimistically, one might be able to resolve down to a tenth
of a wave period (Kip Thorne, private communication). The importance
of the hydrodynamic drag is then estimated by the quantity
$$
\epsilon\equiv10n_m{t_{gw}\over t_{hd}}=2.8\times10^{-5} {\dot m
m_1^{5/8}t_{m,yr}^{13/8}\over m_6^{5/4}}. \eqno (16)
$$
If $\epsilon>1$, we expect the hydrodynamic perturbation to have a
noticeable effect on the inspiral wave form. For any reasonable
choice of the parameters, however, we see that $\epsilon\ll1$,
implying that the hydrodynamic drag has a negligible effect on the
observed orbital decay of the compact star.
\section{Discussion}
In this paper we have obtained a very robust result, namely that a
compact star orbiting inside an advection-dominated accretion flow
around a supermassive black hole experiences a negligible amount of
frictional drag from hydrodynamic forces. In deriving this result we
made conservative assumptions (e.g. we did not take into account the
mitigating effects mentioned at the end of \S2) and, where possible,
we erred on the side of overestimating the drag (e.g. in our estimate
of $I$ in eq 7). Yet we found that the perturbation due to
hydrodynamic forces, as measured by the parameter $\epsilon$ (eq 16),
is always extremely small. To determine how relevant this result is
for gravitational wave experiments, we need to consider several
questions.
First, what fraction of galactic nuclei contain SMBHs? Recent work
has revealed that the fraction is close to unity. In our local group
of galaxies, the three largest galaxies, namely M31, M32 and our own
Milky Way Galaxy, contain dark objects in their nuclei with masses in
the range $10^{6.5}-10^{7.5} M_\odot$. Outside the local group,
observations with the Hubble Space Telescope and with ground-based
telescopes have revealed dark massive objects in the nuclei of nearly
all galaxies that are accessible to sensitive observations
(e.g. Magorrian et al. 1998, Richstone et al. 1998); the masses are in
the range $10^6-10^{9.5}M_\odot$. It is considered highly likely that
all these dark masses are SMBHs, though this is yet to be proved
conclusively. In the case of our own Galactic nucleus and the nucleus
of NGC 4258, the argument for a SMBH is quite compelling (Genzel et
al. 1997, Ghez et al. 1998, Miyoshi et al. 1995, Narayan et al. 1998).
Second, what fraction of galactic nuclei are inactive or only weakly
active, thus indicating highly sub-Eddington accretion? (We consider
a nucleus to be very sub-Eddington if its luminosity is less than a
few percent of the Eddington luminosity, $L_{Edd}=
10^{46}(M_{SMBH}/10^8M_\odot)~{\rm erg\,s^{-1}}$; when the mass of the
SMBH is not known, we take the luminosity limit to be $10^{43}~{\rm
erg\,s^{-1}}$, which corresponds to 3\% of $L_{Edd}$ for a
$10^{6.5}M_\odot$ SMBH.) The fraction of inactive/weakly active
nuclei appears to be close to unity. For instance, there are no
active galactic nuclei in our local group of galaxies or in the nearby
Virgo cluster of galaxies. (M87, the dominant galaxy in Virgo, has
what may be considered an active nucleus, but its luminosity is highly
sub-Eddington for a $2\times10^9M_\odot$ SMBH, cf. Reynolds et
al. 1996.) The number density of quasars averaged over a larger
volume of the nearby universe is also extremely low (Krolik 1999). If
we include the more numerous Seyferts, which have lower luminosities
than quasars but still fall under the definition of active nuclei, the
local number density is of order $2-3\%$ of the number density of all
galaxies brighter than $L_*$ (Huchra \& Burg 1992). The incidence of
nuclear activity increases with increasing redshift, reaching a peak
at $z\sim2-3$. However, out to $z\sim1$, the redshift range
accessible to LISA for stellar inspiral events, the fraction of
galaxies with active nuclei is very likely no more than $0.1-0.2$.
Thus, over the volume of the universe of interest to us, the majority
of galactic nuclei have significantly sub-Eddington accretion.
Third, how strong is the evidence that SMBHs with highly sub-Eddington
accretion have ADAFs around them? In the opinion of this author, the
evidence is fairly strong. For low mass accretion rates such as we
are considering, $\dot M<0.1-0.01\dot M_{\rm Edd}$, only two stable,
self-consistent, rotating, accretion flow solutions are known (Chen et
al. 1995): the thin disk solution and the ADAF solution. For all low
luminosity black holes for which sufficient data are available,
whether they be in galactic nuclei or in stellar-mass binaries, some
version of the ADAF model appears to explain the observations
(Narayan, Yi \& Mahadevan 1995, Narayan, McClintock \& Yi 1996,
Reynolds et al. 1996, Narayan, Barret \& McClintock 1997, Di Matteo \&
Fabian 1997, Manmoto et al. 1997, Hameury et al. 1997, Narayan et
al. 1998, Di Matteo et al. 1999, Quataert et al. 1999). In several
objects, the evidence suggests that both an ADAF and a thin disk are
present, but such that the ADAF is located close to the black hole,
while the thin disk is restricted to radii beyond tens, hundreds or
even thousands of $R_g$. For the gravitational wave experiments that
are the focus of this paper, the compact stars will be in orbits with
radii smaller than about $10R_g$. This region of the accretion flow
is not in the form of a thin disk in any low luminosity black hole
that has been studied so far; the evidence suggests that the accretion
flow forms an ADAF in all cases.
Finally, if the accretion is not in the form of an ADAF close to the
SMBH, what other form might it take, and could the hydrodynamic drag
be significant? This is a difficult question to answer since at
present the only solutions we are aware of for low values of $\dot m$
are the thin disk and the ADAF. One other formal solution is known,
due to Bondi (1952), but it corresponds to the unlikely case of zero
angular momentum in the accreting gas. In any case, the hydrodynamic
drag force on an orbiting star from a Bondi flow is even less (by a
factor of $\alpha$) than that from an ADAF.
A point worth making is that if the accretion occurs in a non-ADAF
mode, then the mass accretion rate must be a lot less than the values
$\dot M_{\rm SMBH}\sim10^{-2}-10^{-4}\dot M_{\rm Edd}$ that we have
assumed in this paper. This is because the {\it luminosities} of the
``normal'' nuclei we are considering are extraordinarily low. For
instance, the SMBH in the nucleus of our own Galaxy has a luminosity
$< 10^{-7}L_{\rm Edd}$ (e.g. Narayan et al. 1998) despite an estimated
mass accretion rate of $\sim 10^{-4}\dot M_{\rm Edd}$ (Quataert,
Narayan \& Reid 1999, and references therein). The mismatch between
the luminosity and the accretion rate is natural with an ADAF since
the accretion flow advects the missing energy through the event
horizon into the black hole. Indeed, this argument has been used to
argue for the presence of an event horizon in this and other black
hole systems with ADAFs (see Menou, Quataert \& Narayan 1999 for a
review). With a non-ADAF model, the accretion rate would have to be
$< 10^{-7}\dot M_{\rm Edd}$ in our Galactic nucleus, and comparably
small in most other low luminosity nuclei. Even without knowing the
details of the flow, it is probably safe to assume that the
hydrodynamic drag on an orbiting star in such an ultra-low-$\dot M$
accretion flow would be insignificant.
We thus conclude that the majority of inspiral events that LISA might
detect will be unaffected by hydrodynamic interactions.
\noindent{\it Acknowledgments.} It is a pleasure to thank Kip Thorne
for instigating this study, for useful discussions, and for persuading
the author to publish the results, John Huchra for advice on the
statistics of active galactic nuclei, and Eve Ostriker for help in
understanding the hydrodynamic drag force. This work was supported in
part by grant AST 9820686 from the NSF.
\bigskip\bigskip
{
\footnotesize
\hyphenpenalty=10000 \raggedright
\noindent {\large \bf References} \\
\hangindent=20pt \hangafter=1 \noindent Abramovici, A., et al. 1992, Science, 256, 325 \\
\hangindent=20pt \hangafter=1 \noindent Abramowicz, M., Chen, X., Kato, S., Lasota, J. P., \& Regev, O.,
1995, ApJ, 438, L37 \\
\hangindent=20pt \hangafter=1 \noindent Abramowicz, M., Czerny, B., Lasota, J. P., \& Szuszkiewicz,
E. 1988, ApJ, 332, 646 \\
\hangindent=20pt \hangafter=1 \noindent Bender, P., et al. 1996, LISA. Laser Interferometer Space Antenna
for the detection and observation of gravitational waves, Max-Planck
Institut fur Quantenoptik, Report No. MPQ 208, Garching, Germany \\
\hangindent=20pt \hangafter=1 \noindent Blandford, R. D., \& Begelman, M. C., 1999, MNRAS, 303, L1 \\
\hangindent=20pt \hangafter=1 \noindent Bondi, H. 1952, MNRAS, 112, 195 \\
\hangindent=20pt \hangafter=1 \noindent Chakrabarti, S. K. 1993, ApJ, 411, 610 \\
\hangindent=20pt \hangafter=1 \noindent Chakrabarti, S. K. 1996, Phys. Rev. D53, 2901 \\
\hangindent=20pt \hangafter=1 \noindent Chen, X., Abramowicz, M., \& Lasota, J. P. 1997, ApJ, 476, 61 \\
\hangindent=20pt \hangafter=1 \noindent Chen, X., Abramowicz, M., Lasota, J. P., Narayan, R., \& Yi,
I. 1995, ApJ, 443, L61 \\
\hangindent=20pt \hangafter=1 \noindent Cutler, C. 1998, Phys. Rev. D57, 7089 \\
\hangindent=20pt \hangafter=1 \noindent Danzmann, K., et al. 1996, LISA Pre-Phase A Report,
Max-Planck Institut fur Quantenoptik, Report No. MPQ 208, Garching,
Germany \\
\hangindent=20pt \hangafter=1 \noindent Di Matteo, T., \& Fabian, A. C. 1997, MNRAS, 286, L50 \\
\hangindent=20pt \hangafter=1 \noindent Di Matteo, T., Quataert, E., Allen, S. W., Narayan, R., \& Fabian,
A. C., 1999, ApJ, submitted (astro-ph/9905052) \\
\hangindent=20pt \hangafter=1 \noindent Esin, A. A., McClintock, J. E., \& Narayan, R. 1997, ApJ, 489,
867 \\
\hangindent=20pt \hangafter=1 \noindent Folkner, W. M., ed. 1998, Laser-Interferometer Space
Antenna, AIP Conf Proc. 456 (New York: AIP Press) \\
\hangindent=20pt \hangafter=1 \noindent Genzel, R., Eckart, A., Ott, T., \& Eisenhauer, F. 1997, MNRAS,
291, 219 \\
\hangindent=20pt \hangafter=1 \noindent Ghez, A. M., Klein, B. L., Morris, M., \& Becklin, E. E. 1998,
ApJ, 509, 678 \\
\hangindent=20pt \hangafter=1 \noindent Hameury, J. M., Lasota, J. P., McClintock, J. E., \& Narayan,
R. 1997, ApJ, 489, 234 \\
\hangindent=20pt \hangafter=1 \noindent Hils, D., \& Bender, P. 1995, ApJL, 445, L7 \\
\hangindent=20pt \hangafter=1 \noindent Huchra, J., \& Burg, R. 1992, ApJ, 393, 90 \\
\hangindent=20pt \hangafter=1 \noindent Ichimaru, S. 1977, ApJ, 214, 840 \\
\hangindent=20pt \hangafter=1 \noindent Kato, S., Fukue, J., \& Mineshige, S. 1998, Black-Hole Accretion
Disks (Kyoto: Kyoto Univ. Press) \\
\hangindent=20pt \hangafter=1 \noindent Krolik, J. 1999, Active Galactic Nuclei (Princeton: Princeton Univ.) \\
\hangindent=20pt \hangafter=1 \noindent Magorrian, J., et al. 1998, AJ, 115, 2285 \\
\hangindent=20pt \hangafter=1 \noindent Manmoto, T., Mineshige, S., \& Kusunose, M. 1997, ApJ, 489, 791 \\
\hangindent=20pt \hangafter=1 \noindent Menou, K., Quataert, E., \& Narayan, R. 1999, in Black Holes,
Gravitational Radiation, and the Universe, eds. B.R. Iyer \& B. Bhawal
(Dordrecht: Kluwer) \\
\hangindent=20pt \hangafter=1 \noindent Miyoshi, M., Moran, J., Herrnstein, J., Greenhill, L., Nakai, N.,
Diamond, P., \& Inoue, M. 1995, Nature, 373, 127 \\
\hangindent=20pt \hangafter=1 \noindent Molteni, D., Gerardi, G., \& Chakrabarti, S. K. 1994, ApJ, 436,
249 \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., Barret, D., \& McClintock, J. E. 1997, ApJ, 482, 448 \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., Kato, S. \& Honma, F. 1997, ApJ, 476, 49 \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., Mahadevan, R., Grindlay, J. E., Popham, R. G., \&
Gammie, C. 1998, ApJ, 492, 554 \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., Mahadevan, R., \& Quataert, E. 1998, in Theory of
Black Hole Accretion Disks, eds. M. A. Abramowicz, G. Bjornsson, \&
J. E. Pringle, p148 (Cambridge Univ. Press) \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., McClintock, J. E., \& Yi, I. 1996, ApJ, 457, 821 \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., \& Yi, I. 1994, ApJ, 428, L13 \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., \& Yi, I. 1995a, ApJ, 444, 231 \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., \& Yi, I. 1995b, ApJ, 452, 710 \\
\hangindent=20pt \hangafter=1 \noindent Narayan, R., Yi, I., \& Mahadevan, R. 1995, Nature, 374, 623 \\
\hangindent=20pt \hangafter=1 \noindent Novikov, I. D., \& Thorne, K. S. 1973, in Blackholes, eds. C. DeWitt,
\& B. DeWitt, p343 (Gordon \& Breach) \\
\hangindent=20pt \hangafter=1 \noindent Ostriker, E. C. 1999, ApJ, 513, 252 \\
\hangindent=20pt \hangafter=1 \noindent Quataert, E., Di Matteo, T., Ho, L., \& Narayan, R. 1999, ApJ,
submitted \\
\hangindent=20pt \hangafter=1 \noindent Quataert, E., \& Narayan, R. 1999, ApJ, in press
(astro-ph/9810136) \\
\hangindent=20pt \hangafter=1 \noindent Quataert, E., Narayan, R., \& Reid, M. J. 1999, ApJ, 517, L101 \\
\hangindent=20pt \hangafter=1 \noindent Rees, M. J., Phinney, E. S., Begelman, M. C., \& Blandford,
R. D. 1982, Nature, 295, 17 \\
\hangindent=20pt \hangafter=1 \noindent Reynolds, C. S., Di Matteo, T., Fabian, A. C., Hwang, U., \&
Canizares, C. R. 1996, MNRAS, 283, L111 \\
\hangindent=20pt \hangafter=1 \noindent Richstone, D., et al. 1998, Nature, 395A, 14 \\
\hangindent=20pt \hangafter=1 \noindent Ryan, F. D. 1995, Phys. Rev. D52, 5707 \\
\hangindent=20pt \hangafter=1 \noindent Shakura, N. I., \& Sunyaev, R. A. 1973, A\&A, 24, 337 \\
\hangindent=20pt \hangafter=1 \noindent Shapiro, S. L., \& Teukolsky, S. A. 1983, Black Holes, White
Dwarfs, and Neutron Stars (New York: Wiley) \\
\hangindent=20pt \hangafter=1 \noindent Sigurdsson, S. 1997, Class. Q. Grav., 14, 1425 \\
\end{document}
\section{Introduction}
A partition of $n\in\Z^+=\{1,2,3,\ldots\}$ is a way to write $n$ as a sum of positive integers (with repetitions allowed). Partitions of positive integers were first studied by Euler, and they play important roles in number theory and combinatorics.
Lagrange's four-square theorem states that each $n\in\N=\{0,1,2,\ldots\}$ can be written as $x^2+y^2+z^2+w^2$ with $x,y,z,w\in\N$.
Z.-W. Sun \cite{S1} refined this classical theorem in various ways and posed many conjectures
on sums of four squares with certain restrictions involving squares.
For example, Sun's 1-3-5 conjecture \cite{S1} states that any $n\in\N$ can be written as $x^2+y^2+z^2+w^2$
($x,y,z,w\in\N$) with $x+3y+5z$ a square; this was recently confirmed by Machiavelo and Tsopanidis
\cite{M1} via Hamilton quaternions.
Motivated by the refinements of Lagrange's four-square theorem, in this paper we study partitions of positive integers with certain restrictions involving squares.
Now we state our main results.
\begin{theorem}\label{x+11y+13z} Let $n>2$ be an integer.
{\rm (i)} We can write $n=x+y+z$ with $x,y,z\in\Z^+$ such that $x+11y+13z$ is a square.
{\rm (ii)} We can write $n=x+y+z$ with $x,y,z\in\Z^+$ such that $x+240y+720z$ is a square.
\end{theorem}
\begin{theorem} \label{p^m} Let $a,b,c,m\in\Z^+$ with $a<b\ls c$ and $\gcd(b-a,c-a)=1$.
Then, any sufficiently large integer can be written as $x+y+z$ with $x,y,z\in\Z^+$ such that $ax+by+cz=p^m$ for some prime number $p$.
\end{theorem}
\begin{remark} By P. Dusart [Du, Section 4], for $x\gs 3275$ there is a prime $p$ with $x\ls p\ls x+x/(2\log^2x)$. With the aid of this, we can modify
our proof of Theorem \ref{p^m} to show the following results:
(i) Any integer $n\gs6$ can be written as $x+y+z\ (x,y,z\in\Z^+)$ with $x+3y+6z=p^2$ for some prime $p$.
(ii) Any integer $n>6$ can be written as $x+y+z\ (x,y,z\in\Z^+)$ with $x+2y+7z=p^3$ for some prime $p$.
(iii) Any integer $n>12$ can be written as $x+y+z\ (x,y,z\in\Z^+)$ with $x+2y+10z=p^4$ for some prime $p$.
\end{remark}
Our third and fourth theorems were originally conjectured by Sun \cite{S13,S-13} in 2013.
\begin{theorem}\label{Th1.1} Let $n$ be a positive integer. We can write $n=x+y+z$ with $x,y,z\in\Z^+$ such that $x^2+y^2+z^2$ is a square, if and only if $n$
is neither of the form $2^a3^b\ (a,b\in\N)$ nor of the form $2^a7\ (a\in\N)$.
\end{theorem}
\begin{remark} This was stated as a conjecture by Sun in \cite[A230121]{S-13}. For example,
\begin{align*}
5=&1+2+2 \ \t{with} \ 1^2+2^2+2^2=3^2,\\
13=&1+4+8 \ \t{with} \ 1^2+4^2+8^2=9^2,\\
17=&2+9+6 \ \t{with} \ 2^2+6^2+9^2=11^2.
\end{align*}
\end{remark}
\begin{theorem}\label{Th1.2} Any integer $n>7$ with $n\not=11,14,17$
can be written as $n=x+y+2z$ with $x,y,z \in \Z^+$ such that $x^2+y^2+2z^2$ is a square.
\end{theorem}
\begin{remark} For each positive integer $n$, let $a(n)$ denote the number of ways to write
$n$ as $x+y+2z$ with $x,y,z\in\Z^+$ and $x\ls y$ such that $x^2+y^2+2z^2$ is a square.
The sequence $a(n)\ (n=1,2,3,\ldots)$ is available from \cite[A230747]{S-13}. In particular,
$a(n)=1$ for $n=9,21,34,56$. Note that
\begin{align*}9 =& 1 + 4 + 2\times2\ \t{with}\ 1^2 + 4^2 + 2\times2^2 = 5^2,
\\21 =& 5 + 8 + 2\times4\ \t{with}\ 5^2 + 8^2 + 2\times4^2 = 11^2,
\\34 =&7 + 25 + 2\times1\ \t{with}\ 7^2 + 25^2 + 2\times1^2 = 26^2,
\\56 =& 14 + 14 + 2\times14\ \t{with}\ 14^2 + 14^2 + 2\times14^2 = 28^2.
\end{align*}
\end{remark}
\begin{theorem}\label{Th1.3}
Let $k\geq 4$ be an integer. Then any integer $n> \max\{20k,1200\}$ can be written as
$x_1+\cdots+x_k$ with $x_1,\ldots,x_k\in\Z^+$ such that $x_1^2+\cdots+x_k^2$ is a square.
\end{theorem}
We are going to prove Theorems 1.1-1.2 in the next section.
Theorems 1.3-1.5 will be proved in Sections 3-5 respectively.
\section{Proofs of Theorems 1.1-1.2}
\begin{lemma}\label{ax+by} Let $a,b\in\Z^+$ with $\gcd(a,b)=1$. Then any integer $n>ab$
can be written as $ax+by$ with $x,y\in\Z^+$.
\end{lemma}
\Proof. It is known that any integer $m>ab-a-b$ can be written as $au+bv$ with $u,v\in\N$.
As $n-a-b>ab-a-b$, there are $u,v\in\N$ with $au+bv=n-a-b$, and hence $n=ax+by$ with $x=u+1\in\Z^+$ and $y=v+1\in\Z^+$.
This concludes the proof. \qed
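To illustrate the lemma, take $(a,b)=(5,6)$, so that every integer $n>30$ is representable: for instance
$$31=5\times5+6\times1,\quad 32=5\times4+6\times2,\quad 33=5\times3+6\times3,$$
while $30=5x+6y$ has no solution with $x,y\in\Z^+$, so the bound $n>ab$ cannot be lowered for this pair.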
\medskip\noindent {\it Proof of Theorem} 1.1. (i) The result can be verified directly for $n=3,4,\ldots,30$.
Now let $n\in\N$ with $n\gs 31$. Choose $a\in\{\lfloor\sqrt n\rfloor+5,\lfloor\sqrt n\rfloor+6\}$ with $a\eq n\pmod 2$.
Since
$$\l(\sqrt n-\f35\r)^2\gs(\sqrt{31}-0.6)^2>\l(\f 35\r)^2+3.6,$$
we have $n-(6/5)\sqrt n>3.6$, i.e., $10n-12\sqrt n>36$. Therefore
$$a^2\ls(\sqrt n+6)^2=n+12\sqrt n+36<11n.$$
On the other hand,
$$a^2>(\sqrt n+4)^2=n+8\sqrt n+16$$
and hence
$$\f{a^2-n}2>4\sqrt n+8\gs 4\sqrt{31}+8>30=5\times6.$$
By Lemma \ref{ax+by}, there are $y,z\in\Z^+$ with $5y+6z=(a^2-n)/2$. Since
$$5(y+z)<5y+6z=\f{a^2-n}2<\f{11n-n}2=5n,$$
we have $y+z<n$. Hence $x=n-y-z\in\Z^+$ and
$$x+11y+13z=n+10y+12z=n+2(5y+6z)=n+2\times\f{a^2-n}2=a^2.$$
This concludes the proof of part (i) of Theorem 1.1.
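Let us illustrate the construction with $n=31$ (the first case not covered by direct verification): here $\lfloor\sqrt{31}\rfloor+6=11\eq31\pmod2$, so $a=11$ and $(a^2-n)/2=45=5\times3+6\times5$, giving $(x,y,z)=(31-3-5,\,3,\,5)=(23,3,5)$ and
$$x+11y+13z=23+33+65=121=11^2.$$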
(ii) For $n=3,4,\ldots,722$ we can easily verify the desired result via a computer.
Below we fix $n\in\N$ with $n\gs723$. Let $a=\lfloor \sqrt n\rfloor+k$ with $k=390$. Then
\begin{align*} a^2-n>&(\sqrt n+k-1)^2-n=2(k-1)\sqrt n+(k-1)^2
\\\gs& 2(k-1)\sqrt{723}+(k-1)^2>239\times719=171841.
\end{align*}
In view of Lemma \ref{ax+by}, we have $a^2-n=239y+719z$ for some $y,z\in\Z^+$. Note that
$$\l(\sqrt{239n}-\f k{\sqrt{239}}\r)^2\gs\l(\sqrt{239\times723}-\f k{\sqrt{239}}\r)^2\gs\f{240}{239}k^2-479$$
and hence $239n-2k\sqrt n\gs k^2-479.$ Thus
$$239y+719z=a^2-n\ls(\sqrt n+k)^2-n=k^2+2k\sqrt n\ls 239n+479$$
and hence $239(y+z)\ls 239n+479-(719-239)z<239n$. Therefore $x=n-y-z\in\Z^+$ and
$$x+240y+720z=n+239y+719z=n+(a^2-n)=a^2.$$
This completes our proof. \qed
\medskip\noindent {\it Proof of Theorem} 1.2. For $x>1$ let $\pi(x)$ denote the number of primes not exceeding $x$.
Let $\ve>0$. By the Prime Number Theorem,
$$\pi((1+\ve)x)-\pi(x)\ \sim\ \ve \f x{\log x}$$
as $x\to+\infty$. So, if $x$ is large enough then there is a prime $p$ with $x<p\ls(1+\ve)x$.
Observe that
$$\lim_{n\to+\infty}\f{(bn)^{1/m}}{(an+(b-a)(c-a))^{1/m}}=\l(\f ba\r)^{1/m}>1.$$
By the above, there is a positive integer $N$ such that for any integer $n\gs N$ there is a prime $p$ for which
$$(an+(b-a)(c-a))^{1/m}<p\ls (bn)^{1/m}$$
and hence
$$(b-a)(c-a)<p^m-an<(b-a)n.$$
As $\gcd(b-a,c-a)=1$, by Lemma \ref{ax+by} there are positive integers $y$ and $z$ such that
$$(b-a)(y+z)\ls (b-a)y+(c-a)z=p^m-an<(b-a)n.$$
Thus $x:=n-y-z\in\Z^+$ and $ax+by+cz=p^m$.
The proof of Theorem 1.2 is now complete. \qed
\section{Proof of Theorem 1.3}
\setcounter{lemma}{0}
\setcounter{remark}{0}
\setcounter{equation}{0}
For convenience, we set $\square=\{x^2:\ x\in\Z\}$.
\begin{lemma}
Let $n$ be a positive integer with $n, n/6, n/7\not\in \square$. Suppose that the equation
\begin{equation} \label{2.1} n=x^2+y^2-3z^2\ (x,y,z\in\Z)
\end{equation}
has solutions. Then, there are $x_0,y_0,z_0\in\Z^+$ with $x_0^2+y_0^2-3z_0^2=n$
satisfying
\begin{equation} x_0\geq z_0>0\ \t{and}\ \ y_0\geq 2z_0.
\end{equation}
Moreover, we may require $x_0> z_0$ if $n=x^2-2z^2$ for some $x,z\in\Z^+$
with $x/z \in (2,\,3.5]\cup (5,+\infty).$
\end{lemma}
\begin{proof}
If $n=x^2+y^2$ with $x,y\in\N$, then we may assume $x \geq y >0$ since $n\not\in \square$. Thus $n=x^2+(2y)^2-3y^2$, whence $(x_0,y_0,z_0)=(x,2y,y)$ meets (3.2).
Now assume that $n$ is not a sum of two squares. Choose a particular solution $(r,s,t)$ of \eqref{2.1}
with $r,s\in\N$ and
$$t=\min\{z\in\Z^+:\ n=x^2+y^2-3z^2\ \t{for some}\ x,y\in\N\}.$$
In view of the identity $a^2-3b^2=(3b-2a)^2-3(a-2b)^2$, the equation \eqref{2.1}
has three other solutions:
\begin{align}
&(r, 3t-2s, 2t-s),\\
&(3t-2r,s, 2t-r),\\
&(3t-2r, 2s+3r-6t, 4t-2r-s).
\end{align}
By the definition of $t$, we get $|2t-s|\geq t$ from the solution in (3.3). So we have either $s \leq t $ or $ s \geq 3t$. Similarly, by the solution in (3.4), either $r \leq t $ or $ r\geq 3t$.
Since $r^2+s^2-3t^2=n$, one of $r$ and $s$ is greater than $t$ and hence at least $3t$.
If $r\gs 3t$ and $s\gs 3t$, then $(x_0,y_0,z_0)=(r,s,t)$ satisfies (3.2).
Now we handle the case $r\leq t$ and $s\geq 3t$.
(The case $s\le t$ and $r\ge 3t$ can be handled similarly.)
Suppose $s < 5t-2r$. Then
$$-t < 4t-2r-s \leq t-2r \leq t.$$
By the definition of $t$ and the solution (3.5), we must have $|4t-2r-s|=t$ and hence
$4t-2r-s = t-2r = t$. So $r=0$ and $s=3t$. It follows that $n=r^2+s^2-3t^2=6t^2$
which contradicts $n/6\not\in\square$.
By the last paragraph, we must have
$s \geq 5t-2r$. Note that the solution
$$(x_0,y_0,z_0)=(3t-2r, s, 2t-r)$$
satisfies (3.2) since
$$s \geq 2(2t-r), \ \ 3t-2r \geq 2t-r\ \t{and}\ \ 2t-r\geq t>0.$$
In view of the above, we have proved the first assertion of Lemma 3.1.
Now we prove the second assertion in Lemma 3.1. Suppose that $n=x^2-2z^2$ for some $x,z\in \Z^+$ with $x/z \in (2,3.5]\cup (5,+\infty)$. As $n/7\not\in \square$, we have $x/z\not=3$.
We want to find a solution $(x_0,y_0,z_0)$ of (3.1) satisfying $(3.2)$ and the inequality $x_0>z_0$.
{\it Case} 1. $x/z\in(2,3)$, i.e., $0<2z<x<3z.$
In this case, $(x_0,y_0,z_0)= (z, 2x-3z, x-2z)$ meets our purpose since
\begin{align*}
x^2-2z^2&= (z)^2+(2x-3z)^2-3(x-2z)^2, \\
x_0-z_0&= z-(x-2z)= 3z-x >0,\\
y_0-2z_0&=(2x-3z)-2(x-2z)=z>0.
\end{align*}
{\it Case} 2. $x/z\in(3,3.5]$, i.e., $0< 3z < x \leq 3.5z$.
Using the identity
$$n=x^2-2z^2= (3x-8z)^2+(2x-3z)^2-3(2x-5z)^2, $$
we find that $(x_0,y_0,z_0)=(3x-8z, 2x-3z, 2x-5z)$ meets our purpose as
\begin{align*}
x_0-z_0&=(3x-8z)-(2x-5z)=x-3z>0,\\
y_0-2z_0&=(2x-3z)-2(2x-5z)=7z-2x \geq 0.
\end{align*}
{\it Case} 3. $x/z\in(5,6)$, i.e., $5z < x< 6z$.
In this case,
$$n=x^2-2z^2=(2x-9z)^2+(5z)^2-3(6z-x)^2$$
and hence $(x_0,y_0,z_0)=(2x-9z,5z,6z-x)$ meets our purpose.
{\it Case} 4. $x/z\in[6,+\infty)$, i.e., $x\gs 6z$.
In this case,
$$n=x^2-2z^2=(5z)^2+x^2-3(3z)^2$$
and hence $(x_0,y_0,z_0)=(5z,x,3z)$ meets our purpose.
In view of the above, we have completed the proof of Lemma 3.1.
\end{proof}
\begin{lemma} {\rm (\cite[p.\, 164]{D1})}
Let $p$ be an odd prime with $p\not\eq 1 \pmod{24}$. Let $F(x,y,z)$ be any classic, indefinite, anisotropic ternary quadratic form with determinant $-p$. Then
\begin{align*}&\Z\sm\{F(x,y,z):\ x,y,z\in\Z\}
\\=&\{4^k(8l+p):\ k\in\N,\ l\in\Z\}
\\&\cup\l\{p^{2k+1}(pl+r^2):\ k\in\N,\ l\in\Z,\ 1\ls r\ls\f{p-1}2\r\}.
\end{align*}
\end{lemma}
\begin{remark}
The reader may consult \cite{Ro} for a more general result.
\end{remark}
\medskip
\noindent {\it Proof of Theorem 1.3}.
(i) We first prove the ``if" direction.
Let $p$ be the smallest prime divisor of $n$. Then $p>3$ and $p\not=7$. Write $n=pq$ with $q\in\Z^+$.
If $p=x+y+z$ for some $x,y,z\in\Z^+$ with $x^2+y^2+z^2\in\square$, then
$n=qx+qy+qz$ and $(qx)^2+(qy)^2+(qz)^2=q^2(x^2+y^2+z^2)\in\square$.
By the last paragraph, it suffices to consider only the case in which $n$ is an odd prime with $n\not=3,7$. We need to find $x,y,z\in\Z^+$ such that
$$x+y+z=n\ \t{and}\ x^2+y^2+z^2 \in \square.$$
If $a,b,c,d$ are integers with
\begin{equation}\label{abcd1} 2n=(a+c+d)^2+(b+c-d)^2-3c^2-3d^2,
\end{equation}
then, for
\begin{equation}\label{xyz} x=\f{a^2+b^2-c^2-d^2}2,\ y=ac-bd,\ z=ad+bc,
\end{equation}
we have $x+y+z=n$ and
$$x^2+y^2+z^2=x^2+(a^2+b^2)(c^2+d^2)=\l(\f{a^2+b^2+c^2+d^2}2\r)^2.$$
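The last equality can be checked as follows: by the Brahmagupta--Fibonacci identity,
$$y^2+z^2=(ac-bd)^2+(ad+bc)^2=(a^2+b^2)(c^2+d^2),$$
and since $2x=(a^2+b^2)-(c^2+d^2)$, adding $x^2$ to this product completes the square $\l(\f{a^2+b^2+c^2+d^2}2\r)^2$.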
So, it suffices to find $a,b,c,d\in\Z$ satisfying \eqref{abcd1} such that $x,y,z$
given by \eqref{xyz} are positive.
As $2n$ is neither of the form $3^{2u+1}(3v+1)\ (u,v\in\N)$
nor of the form $4^{u}(8v+3)\ (u,v\in\N)$,
in view of Lemma 3.2 we have $2n\in\{x^2+y^2-3z^2:\ x,y,z\in\Z\}$.
By Lemma 3.1, there are integers $x_0,y_0,z_0\in\Z^+$ with $2n= x_0^2+y_0^2-3z_0^2$
for which $x_0 \geq 2z_0$ and $y_0\geq z_0$ (interchanging the roles of $x_0$ and $y_0$ in Lemma 3.1); moreover, we may require $y_0>z_0$
if $2n=r^2-2s^2$ for some $r,s\in\Z^+$ with $r/s\in(2,\,3.5]\cup (5,+\infty)$.
{\it Case} 1. $y_0>z_0$.
In this case, we set
$$ a=x_0-z_0, \ b=y_0-z_0, \ c=z_0, \ d=0. $$
It is easy to see that \eqref{abcd1} holds and $a\gs c >0, \ b>0$ so that $x,y,z$ given by \eqref{xyz} are positive.
{\it Case} 2. $y_0=z_0$.
In this case, $2n=x_0^2+y_0^2-3z_0^2=x_0^2-2z_0^2$ and $x_0/z_0\not\in(2,3.5]\cup(5,+\infty)$. Note that $x_0/z_0=2$ would give $n=z_0^2$, while $x_0/z_0=5$ would give $2n=23z_0^2$ and hence $n=46(z_0/2)^2$; either case contradicts the assumption that $n$ is a prime. Hence $x_0/z_0\in(3.5,5)$.
If $x_0/z_0\in(4,5)$, then it is easy to see that the integers
$$a=x_0-2z_0,\ b=2z_0\ \t{and}\ c=d=z_0$$
meet our purpose.
Now we assume that $3.5<x_0/z_0\ls 4$. If $x_0=4z_0$, then
$2n=x_0^2-2z_0^2=14z_0^2,$ which contradicts $n\neq 7$.
Thus $3.5z_0<x_0<4z_0$. Set
$$a=x_0-2z_0, \ b=5z_0-x_0, \ c=x_0-2z_0, \ d=z_0.$$
Then
$$a+c+d=2x_0-3z_0, \ b+c-d=2z_0, \ c=x_0-2z_0,$$
and hence \eqref{abcd1} holds. It is easy to see that $x>0$ and $z>0$.
Note also that
\begin{align*}
y=&ac-bd=(x_0-2z_0)^2-(5z_0-x_0)z_0
\\=& {x_0}^2-3x_0z_0-z_0^2=(x_0-1.5z_0)^2-3.25z_0^2
\\>&4z_0^2-3.25z_0^2>0.
\end{align*}
This concludes our proof of the ``if" direction.
(ii) Now we prove the ``only if" direction. If $n$ is even and $x,y,z$ are positive integers with
$x+y+z=n$ and $x^2+y^2+z^2\in\square$, then $x^2+y^2+z^2$ is a multiple of $4$
and hence none of $x,y,z$ is odd. Thus $n/2=x_0+y_0+z_0$ with $x_0^2+y_0^2+z_0^2\in\square$,
where $x_0=x/2,\ y_0=y/2,\ z_0=z/2$ are positive integers. So it remains to prove that
any $n\in\{7\}\cup\{3^b:\ b\in\N\}$ cannot be written as $x+y+z$ with $x,y,z\in\Z^+$ and $x^2+y^2+z^2\in\square$. It is easy to see that this holds for $n=3,7$.
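Indeed, for $n=3$ the only such partition is $1+1+1$ with $1^2+1^2+1^2=3\not\in\square$, and for $n=7$ the partitions $5+1+1$, $4+2+1$, $3+3+1$, $3+2+2$ give $x^2+y^2+z^2$ equal to $27$, $21$, $19$, $17$ respectively, none of which is a square.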
Now assume $n=3^b$ for some integer $b\gs2$. Suppose that $n=x+y+z$ with $x,y,z\in\Z^+$ and $x^2+y^2+z^2\in\square$. If we don't have $x\eq y\eq z\pmod 3$, then exactly one of $x,y,z$
is divisible by $3$ since $x+y+z\eq0\pmod 3$, and hence $x^2+y^2+z^2\eq2\pmod 3$ which contradicts
$x^2+y^2+z^2\in\square$. Thus $x\eq y\eq z\eq \da\pmod3$ for some $\da\in\{0,1,2\}$.
Write $x=3x'+\da$, $y=3y'+\da$ and $z=3z'+\da$ with $x',y',z'\in\Z$.
Then $x'+y'+z'=n/3-\da\eq-\da\pmod 3$ and hence
$$x^2+y^2+z^2\eq 6(x'+y'+z')\da+3\da^2\eq-6\da^2+3\da^2=-3\da^2\pmod 9.$$
As $x^2+y^2+z^2$ is a square, we must have $\da=0$. Thus $n/3=x'+y'+z'$
with $(x')^2+(y')^2+(z')^2=(x^2+y^2+z^2)/9\in\square$.
Continuing this process, we finally get that $3$ can be written as $x+y+z$ with $x,y,z\in\Z^+$
and $x^2+y^2+z^2\in\square$, which is absurd. This contradiction concludes our proof of the
``only if" direction.
In view of the above, we have completed the proof of Theorem 1.3. \qed
\section{Proof of Theorem 1.4}
\setcounter{equation}{0}
\setcounter{example}{0}
\medskip
\noindent {\it Proof of Theorem 1.4}.
If $n=x+y+2z$ for some $x,y,z\in\Z^+$ with $x^2+y^2+2z^2\in\square$, then
$2n=2x+2y+2(2z)$ and $(2x)^2+(2y)^2+2(2z)^2=4(x^2+y^2+2z^2)\in\square$.
So, without loss of generality, we simply assume that $n$ is odd.
For positive odd integers $n\ls1.5\times10^6$, we can verify the desired result via a computer.
Below we suppose that $n$ is odd and greater than $1.5\times10^6$.
We need to find $x,y,z,w\in\Z^+$ such that
\begin{equation}n=x+y+2z \ \t{and}\ x^2+y^2+2z^2=w^2.
\end{equation}
Let $a$ and $c$ be positive odd integers. Define
\begin{equation}
b=\begin{cases}-1&\t{if}\ n+1-4ac=2,
\\|n+1-4ac|/2&\t{otherwise},\end{cases}
\end{equation}
and
\begin{equation}d=\begin{cases}-1&\t{if}\ n+1-4ac >2,
\\1&\t{otherwise}.\end{cases}
\end{equation}
Note that
\begin{equation}\label{abcd} n=4ac-2bd-d^2.
\end{equation}
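Indeed, if $n+1-4ac>2$, then $b=(n+1-4ac)/2$ and $d=-1$, so that
$$4ac-2bd-d^2=4ac+(n+1-4ac)-1=n;$$
the cases $n+1-4ac=2$ and $n+1-4ac\ls 0$ are checked in the same way.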
Define
\begin{equation}\label{st} \ s= 4a^2-c^2+2b^2+2bd\ \ \t{and}\ \ t= 2bc+2ad+cd.
\end{equation}
Then
$$s\eq4-1+2b^2+2b\eq3\pmod4\ \ \t{and}\ \ t\eq cd\eq1\pmod 2.$$
Note that
\begin{equation}\label{xyzn}
x=\frac{n+(-1)^{(n+1)/2}s+2t}4,\ y= \frac{n+(-1)^{(n+1)/2}s-2t}4,\ z= \frac{n-(-1)^{(n+1)/2}s}4
\end{equation}
are all integers.
It is easy to verify that $(4.1)$ holds for such $x,y,z$ and
$$w=2a^2+b^2+bd+(c^2+d^2)/2.$$
We claim that $x,y,z$ are positive provided that
\begin{equation}\label{abc<}
a\geq450, \ \ 1.69a < c< 1.79a\ \ \t{and}\ \ |b|<0.658a.
\end{equation}
It is easy to see that $s\ge0$ and $t\ge0$.
By \eqref{abc<}, we have
$$4ac+c^2-4a^2-2b^2-4bc > 0.038a^2$$
and
$$|4bd+d^2+4ad+2cd| < 1+ 10.212a <10.22a <0.025a^2.$$
Combining these with \eqref{abcd} and \eqref{st} , we get
\begin{align*}
n-s-2t&=4ac-2bd-d^2-(4a^2-c^2+2b^2+2bd)-(4bc+4ad+2cd) \\
&\geq4ac+c^2-4a^2-2b^2-4bc -|4bd+d^2+4ad+2cd| >0.
\end{align*}
It follows that $x,y,z$ given by \eqref{xyzn} are positive.
Now it remains to find odd integers $a$ and $c$ satisfying \eqref{abc<}.
Choose $\da_1,\da_2\in\{0,1\}$ such that
\begin{equation}\label{ac0}
a_0 =\lfloor \sqrt{(n+1)/6.96}\rfloor +\delta_1
\ \ \t{and}\ \ c_0 = \lfloor \sqrt{1.74(n+1)/4} \rfloor+\delta_2
\end{equation}
are both odd. As $n>1.5\times10^6$, we have
\begin{align}\label{a0c0}
&a_0 \geq 465, \ \ c_0 \geq 807, \\
\label{c0/a0}
& 1.734< c_0/a_0 <1.747,\\
\label{a0-c0}
&16a_0-8c_0>2.024a_0,\\\label{11a0}
&|4a_0c_0-n-1| < 4(a_0+c_0)+4 <11a_0.
\end{align}
If $|n+1-4a_0c_0|/2 < 0.658a_0$, then $(a,c)=(a_0,c_0)$ meets our purpose.
Below we suppose $|n+1-4a_0c_0|/2\gs0.658a_0$. In light of \eqref{11a0},
we may choose $m\in\{0,\pm1\}$
with $|4a_0c_0-n-1-8ma_0|\leq 4a_0$.
Then, in view of \eqref{a0-c0}, we choose $k\in\{0,\pm1,\pm2\}$ such that
\begin{equation}\label{m}
|4a_0c_0-n-1-8ma_0+k(16a_0-8c_0)|\leq 1.012a_0.
\end{equation}
If $k=\pm2$, then we must have
$$3.036a_0\leq |4a_0c_0- n-1-8ma_0|\leq 4a_0$$
and hence we may take $m=0$ in this case. In all cases $|16km-32k^2|\leq 128$.
Clearly, \begin{equation}
a=a_0-2k \ \t{and}\ \ c=c_0-2m+4k,
\end{equation}
are odd integers with $a\geq 450$. Note also that
\begin{align*}|a-\sqrt{(n+1)/ 6.96}|\leq5
\ \ \t{and}\ \ |c-\sqrt{1.74(n+1)/4}|\leq9.
\end{align*}
Therefore, with the aid of (4.8) and (4.9), we get
\begin{align*}0.989\leq a / \sqrt{(n+1)/6.96}\leq 1.0116
\end{align*}
and
$$0.988\leq c /\sqrt{1.74(n+1)/4}\leq 1.0121.$$
Therefore, $1.69a < c< 1.79a$, as desired.
By \eqref{m}, we also have $|b|=|n+1-4ac|/2<0.658a$, since
\begin{align*}
|4ac-n-1|&= |4a_0c_0-n-1-8ma_0-8kc_0+16ka_0+16mk-32k^2| \notag \\
& \leq |4a_0c_0-n-1-8ma_0+k(16a_0-8c_0)|+|16mk-32k^2| \notag \\
&\leq 1.012a_0+128 \leq 1.012(a+4)+128 <1.316a.
\end{align*}
Thus \eqref{abc<} holds and this concludes our proof
of Theorem 1.4. \qed
Let us illustrate our proof of Theorem 1.4 by a concrete example.
\medskip
\noindent{\bf Example 4.1}. For $n=1,000,001$, we take $a_0=379$ and $c_0=659$ by \eqref{ac0}.
Then $|n+1-4a_0c_0| =958 > 1.316a_0$. As in our proof of Theorem 1.4, we choose $m=0$ and $k=1$, and then get $a=377,b=99,c=663,d=-1$. Then $s=148351$ and $t=129857$ by \eqref{st}. This yields the solution
\begin{align*}
\begin{cases}277841+147984+2\times287088 =1000001 ,\\
277841^2+147984^2+2\times287088^2=513745^2.\end{cases}
\end{align*}
\section{Proof of Theorem 1.5}
\setcounter{lemma}{0}
\setcounter{equation}{0}
\begin{lemma} {\rm (Cauchy's Lemma \cite[p.\,31]{Na})}
Let $a$ and $b$ be positive odd integers such that
\begin{equation}\label{ab}
b^2 <4a \ \ \t{and} \ \ 3a<b^2+2b+4.
\end{equation}
Then there are $s,t,u,v\in\N$ such that
\begin{equation}
s+t+u+v=b\ \t{and}\ s^2+t^2+u^2+v^2=a.
\end{equation}
\end{lemma}
\begin{lemma} Let $m$ and $n$ be positive odd integers with $3m^2< n^2 <4m^2 $.
Then there are $s_0,t_0,u_0,v_0\in\Z^+$ such that
$$s_0+t_0+u_0+v_0=n \ \t{and} \ \ s_0^2+t_0^2+u_0^2+v_0^2=m^2.$$
\end{lemma}
\begin{proof}
Let $a=m^2-2n+4$ and $b=n-4$. Then \eqref{ab} holds.
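Indeed, $b^2<4a$ amounts to $(n-4)^2<4m^2-8n+16$, i.e., $n^2<4m^2$, while $3a<b^2+2b+4$ amounts to
$$3m^2-6n+12<n^2-6n+12, \ \t{i.e.,}\ \ 3m^2<n^2.$$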
By Lemma 5.1, there are $s,t,u,v\in\N$
satisfying (5.2). Define
\[ s_0=s+1,\ t_0=t+1,\ u_0=u+1,\ v_0=v+1. \]
Then
$$ s_0+t_0+u_0+v_0 =b+4=n$$
and
$$ s_0^2+t_0^2+u_0^2+v_0^2 = a+ 2b+4 =m^2.$$
This concludes the proof. \end{proof}
\medskip
\noindent {\it Proof of Theorem 1.5}. Clearly, it suffices to consider only the case $2\nmid n$ with $n>\max\{10k,600\}$.
Let $j=k-4$ and consider the interval $I=(n/4+7j/2, \ n/3+10j/3)$.
Suppose that $I$ contains no odd square. Then, for some $h\in\Z$ we have
\[ (2h-1)^2 \leq \frac n 4+\frac {7j} 2 <\frac n 3+ \frac {10j} 3 \leq (2h+1)^2 \]
and hence
\[ 4h=(2h+1)^2-(2h-1)^2 > \frac n {12}-\frac j 6 > \frac n {15} >40, \]
which implies $h>10$. Thus
\[ \frac n 4 + \frac {7j} 2 \geq (2h-1)^2 > 19(2h-1)> 36h > 9\l(\frac n {12} - \frac j 6\r)\]
and hence $10j> n$, which contradicts our assumption.
By the above, there exists an odd integer $m$ such that
\begin{equation} \frac n 4+\frac {7j} 2 < m^2 < \frac n 3+ \frac {10j} 3,
\end{equation}
and hence
$$3(m^2-4j) < n-2j < 4(m^2-4j).$$
By Lemma 5.2, there are $x_1,x_2,x_3,x_4\in\Z^+$
such that
$$x_1+x_2+x_3+x_4=n-2j\ \ \t{and}\ \ x_1^2+x_2^2+x_3^2+x_4^2=m^2-4j.$$
Set $x_i=2$ for $4<i\ls k$. Then $\sum_{i=1}^kx_i=n$ and
$$\sum_{i=1}^kx_i^2=m^2-4j+j\times2^2=m^2.$$
In view of the above, we have completed the proof of Theorem 1.5. \qed
\section{Introduction}
In \cite{ketonensolovay} Ketonen and Solovay prove the following theorem:
\begin{theorem}[Ketonen--Solovay]\label{thm:ketonensolovay}
If $X\geq 3$ is $\omega_{d+1}(c+5)$-large, then every colouring $C\colon [X]^{d+2} \rightarrow c$ has homogeneous $H\subseteq X$ of size $> \min H$.
\end{theorem}
This note consists of a modified version of the original presentation which should be better suited for reading from the reverse mathematics viewpoint. This may be of interest in light of the theorem's use, by Patey and Yokoyama, in the conservativity result for $\mathrm{RT}^2_2$ in \cite{pateyyokoyama}. As stated there, once it is understood, the original proof, for $d=0$ and standard $c$ (Lemma~\ref{lemma:ketonensolovayd=0} in this note), is readily seen to be formalisable in $\mathrm{RCA}_0$, since one can restrict the uses of transfinite induction to induction along $\omega^{c+4}$. However, for readers unfamiliar with the subject matter, it is somewhat tedious to check this due to the distribution of the proof throughout the paper.
Thanks to arithmetic conservativity (Corollary~IX.1.11 from \cite{sosoa}), this is also a confirmation that the Ketonen--Solovay theorem is provable within $\mathrm{I}\Sigma_1$, as asked for in Problem~3.37 in \cite{hajekpudlak}. In the case of fixed standard $d$, one can easily weaken the base theory to $\mathrm{RCA}_0^{\displaystyle{*}}$ without modifying the proof. This shows that Ketonen and Solovay's copious use of transfinite induction on ordinals is readily circumvented for the theorem in question.
Finally, we confirm that one can also weaken the base theory to $\mathrm{RCA}_0^{\displaystyle{*}}$ for unrestricted $d$. Thanks to $\Pi^0_2$-conservativity (Corollary~4.9 in \cite{simpsonsmith}), this implies that the Ketonen--Solovay theorem is provable in elementary function arithmetic, $\mathrm{EFA}$.
The presentation within $\mathrm{RCA}_0$ is suitable for advanced master's-level students and those who are unfamiliar with the Ketonen--Solovay paper \cite{ketonensolovay}. We assume only basic knowledge of reverse mathematics in $\mathrm{RCA}_0$, as in II.1-II.3 from \cite{sosoa}. In some places we favour an intuitive description and leave many of the details as exercises for the reader. The changes compared to Ketonen--Solovay are concentrated in Section~\ref{section:411replacement}, with the rest of the proof, in Section~\ref{section:proof}, being only slightly modified from the originals. If one is only interested in the case of $d=0$ (dimension 2, as used in \cite{pateyyokoyama}), the presentation ends at the remark after Lemma~\ref{lemma:ketonensolovayd=0}.
In the last section we will describe how to modify our arguments to work within $\mathrm{RCA}_0^{\displaystyle{*}}$.
\section{Ordinals below $\varepsilon_0$ in $\mathrm{RCA}_0$}\label{section:ordinals}
We will define the ordinals below $\varepsilon_0$ within $\mathrm{RCA}_0$ as in Definition 2.3 in \cite{simpson1988}.
\begin{definition}\label{def:ordinal}
We define the set $\mathcal{E}$ of notations of ordinals $<\varepsilon_0$ and order $<$ on $\mathcal{E}$ as follows:
\begin{enumerate}
\item If $\alpha_0 \geq \dots \geq \alpha_n \in \mathcal{E}$, then $\omega^{\alpha_0} + \dots + \omega^{\alpha_n} \in \mathcal{E}$.
\item $\omega^{\alpha_0} + \dots + \omega^{\alpha_n} < \omega^{\beta_0} + \dots + \omega^{\beta_m}$ if and only if:
\begin{enumerate}
\item $n<m$ and $\alpha_i=\beta_i$ for all $i \leq n$, or:
\item there is $i\leq \min\{ n,m\}$ with $\alpha_j=\beta_j$ for all $j < i$ and $\alpha_i < \beta_i$.
\end{enumerate}
\end{enumerate}
We use $0$ to denote the empty sum, $0 < \alpha$ for all $\alpha \neq 0$, $1=\omega^0$, $n=\overbrace{1 + \dots +1}^{n}$, $\omega=\omega^1$, $\omega_0(\alpha)=\alpha$, $\omega_{d+1}(\alpha)=\omega^{\omega_d(\alpha)}$ and $\omega_d=\omega_d(1)$.
\end{definition}
As usual, if $\alpha_n=0$, then $\alpha$ is called a successor; otherwise, if $\alpha\neq0$, it is called a limit. One can define primitive recursive functions for ordinal addition, natural (Hessenberg) sum, and ordinal multiplication on $\mathcal{E}$. Recall that, for $\alpha$ and $\beta$ as in Definition~\ref{def:ordinal}, the natural sum is:
\[
\alpha \oplus \beta = \omega^{\gamma_0} + \dots + \omega^{\gamma_{m+n+1}},
\]
where the $\gamma_i$'s are all the $\alpha_i$'s and $\beta_i$'s in descending order. The natural sum has the important property that none of the terms are lost, which can happen with ordinal addition. For example:
\[
\omega+\omega^2=\omega^2\neq \omega^2+\omega=\omega \oplus \omega^2.
\]
Every ordinal in $\mathcal{E}$ has a Cantor Normal Form:
\[
\alpha=_{\mathrm{CNF}}\omega^{\alpha_0} \cdot a_0 + \dots + \omega^{\alpha_n} \cdot a_n,
\]
where the $a_i$'s are positive integers and $\alpha_0 > \dots > \alpha_n$.
\begin{definition}[Maximal coefficient]
$\mathrm{MC}(0)=0$ and, given $\alpha=_{\mathrm{CNF}}\omega^{\alpha_0} \cdot a_0 + \dots + \omega^{\alpha_n} \cdot a_n>0$:
\[
\mathrm{MC}(\alpha)=\max_{i\leq n} \{ a_i , \mathrm{MC}(\alpha_i) \}.
\]
\end{definition}
\begin{definition}[Fundamental sequence]
For $\alpha = \omega^{\alpha_0} + \dots + \omega^{\alpha_n} \in \mathcal{E}$ and $x \in \mathbb{N}$, take $0[x]=0$, $(\alpha+1)[x]=\alpha$, and:
\begin{enumerate}
\item If $\alpha_n=\beta+1$, then $\alpha[x] = \omega^{\alpha_0} + \dots + \omega^{\alpha_{n-1}} + \omega^\beta \cdot x$,
\item If $\alpha_n$ is a limit, then $\alpha[x] = \omega^{\alpha_0} + \dots + \omega^{\alpha_{n}[x]}$.
\end{enumerate}
\end{definition}
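For instance, unwinding the definition:
\[
\omega[x]=x,\qquad(\omega^2+\omega)[x]=\omega^2+x,\qquad
\omega^\omega[x]=\omega^x,\qquad\omega^{\omega+1}[x]=\omega^\omega\cdot x.
\]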
\begin{definition}
A finite set $X=\{x_0 < \dots < x_{|X|-1}\}$ is called $\alpha$-large if:
\[
\alpha[x_0] \dots [x_{|X|-1}]=0.
\]
Any $X$ is $0$-large.
\end{definition}
Any $\omega$-large set $X$ has size $> \min X$. Unless otherwise specified, we will assume $\alpha$-large sets to be strictly above $2$.
At first glance one may think that we require transfinite induction to demonstrate properties of the fundamental sequences and $\alpha$-large sets. In the remainder of this section we avoid this usage to treat some properties for later use.
\begin{lemma}\label{lemma:fundbound}
If $\omega_d >\alpha > \beta$ and $x>\mathrm{MC}(\beta)$, then $\alpha[x] \geq \beta$, where the inequality is strict if $\alpha$ is a limit.
\end{lemma}
\emph{Proof:} Induction on $d$.
\qed
\begin{lemma}\label{lemma:shiftlarge}
For any $\alpha$ and any $x_0 < \dots < x_R$, $\mathrm{MC}(\alpha)<y_0 < \dots < y_R$ with $0<x_i \leq y_i$ for all $i\leq R$: if $\alpha[x_0] \dots [x_R]>0$, then $\alpha[y_0] \dots [y_R]>0$.
\end{lemma}
\emph{Proof:} Use induction on $R$ with the aid of Lemma~\ref{lemma:fundbound} to show $\alpha[y_0] \dots [y_R] \geq \alpha[x_0] \dots [x_R]$.
\qed
\begin{lemma}\label{lemma:largerordinal}
For any $\alpha>\beta>0$ and any $\mathrm{MC}(\beta) < x_0 < \dots < x_R$, we have that $\alpha[x_0] \dots [x_R]> \beta[x_0] \dots [x_R]$.
\end{lemma}
\emph{Proof:} Use induction on $R$ with the aid of Lemma~\ref{lemma:fundbound}.
\qed
Define the thrice iterated exponential:
\[
E(x)=2^{2^{2^x}}.
\]
One can check that:
\begin{enumerate}
\item The smallest $\omega$-large interval which contains $x$ as its minimal element is $[x, 2x]$ (spelled out after this list).
\item For $\omega^2$, the corresponding smallest interval extends beyond $[x, 2^x\cdot x]$.
\item If $x \geq 3$, then $\omega^3[x] \dots [E(x)+x+8] > 0$.
\end{enumerate}
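To spell out item (1): the interval $X=[x,2x]=\{x,x+1,\dots,2x\}$ has $x+1$ elements, and
\[
\omega[x]=x,\qquad x[x+1]=x-1,\qquad\dots,\qquad 1[2x]=0,
\]
since successor steps simply subtract $1$; any interval $[x,y]$ with $y<2x$ leaves a positive remainder.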
The following lemma shows that $\omega^3$-large sets $X$ are larger than $E(\min X)$. This is a rather weak lower bound, since Ketonen and Solovay showed in their original proof that one can use the tower function instead of $E$.
\begin{lemma}\label{lemma:Ebounds}
For any $3 \leq x_0 < x_1 < \dots$ we have $\omega^3[x_0] \dots [x_{E(x_0) + 8}] >0$.
\end{lemma}
\emph{Proof:} This follows from item (3) directly above and Lemma~\ref{lemma:shiftlarge}.
\qed
\section{Theorem 4.11 replacement:}\label{section:411replacement}
Fix a parameter $l\in\mathbb{N}$ and take:
\[
\Phi(\alpha)= \omega^3 \cdot \alpha + \omega^3 +l +2.
\]
\begin{lemma}[Theorem 4.11-Replacement]\label{lemma:411replacement}
Suppose that $2<X=\{x_0, \dots, x_{|X|-1}\}$ is $\Phi(\gamma_0)$-large and $\gamma_0 > \gamma_1 > \dots > \gamma_j$ is such that $\mathrm{MC}(\gamma_{i}) \leq E(x_i+l)$. Then $j \leq |X|-1$.
\end{lemma}
The outline of the proof is as follows: Take $\alpha_0=\Phi(\gamma_0)$ and $\alpha_{i+1}=\alpha_i [x_i]$. By $\Phi(\gamma_0)$-largeness we know that $\alpha_{|X|}=0$. In the original proof of Theorem 4.11 in \cite{ketonensolovay} it is shown that the $\Phi(\gamma_i)$'s are a subsequence of the $\alpha_i$'s. We will show that the $\alpha_i$'s contain a subsequence whose $i$th elements are \emph{larger} than the corresponding $\Phi(\gamma_i)$'s:
\begin{center}
\begin{tabular}{ccccccccc}
$\alpha_{a_0}$ & $>$ & $\dots$ &$>$ & $\alpha_{a_i}$ & $>$ & $\dots$ & $>$ & $0$ \\
& & & & $\vee$ & & & & \\
$\Phi(\gamma_0)$ & $>$ & $\dots$ &$>$ & $\Phi(\gamma_i)$ & $>$ & $\dots$ & & \ \ ,
\end{tabular}
\end{center}
thus demonstrating the conclusion of the lemma. The core of this lemma, namely pointing out the subsequence which has this property, is contained in the \emph{Claim}.
\noindent\emph{Proof:} Notice that:
\begin{enumerate}
\item $\mathrm{MC}(\omega^3 \cdot \alpha+x) \leq \max \{\mathrm{MC}(\alpha)+3, x \}$,
\item $\mathrm{MC}(\alpha[x]) \leq \max( \mathrm{MC}(\alpha) , x )$,
\item $E(x+1) > E(x) + 4$,
\item Take $\beta_0 = \omega^3 \cdot \beta+ \omega^3$ and $\beta_{k+1}=\beta_k[x_{i+k}]$, then, by Lemma~\ref{lemma:Ebounds}: $\beta_{E(x_i)+8} > \omega^3 \cdot \beta$.
\end{enumerate}
Take:
\[
a_i=\left\{
\begin{array}{ll}
0 & \textrm{ if $i=0$}, \\
a_{i-1}+E(x_{a_{i-1}+l+1}) & \textrm{otherwise}.
\end{array}
\right.
\]
\noindent\emph{Claim:} $\alpha_{a_i} > \Phi (\gamma_i)$ for all $0<i \leq j$.
\noindent\emph{Proof of the claim:} Induction on $i$. We show both the case $i=1$ and the induction step simultaneously.
\noindent We have the following (for $i=0$ this is by notice~(4), and otherwise by the induction hypothesis and all four notices):
\[
\alpha_{a_i + E(x_{a_i+l+1})+l+8} > \omega^3 \cdot \gamma_i.
\]
Therefore, thanks to $\gamma_{i+1} < \gamma_i$:
\[
\alpha_{a_i + E(x_{a_i+l+1})+l+8} > \omega^3 \cdot \gamma_i \geq \omega^3 \cdot (\gamma_{i+1} + 1)=\omega^3 \cdot \gamma_{i+1}+ \omega^3.
\]
Hence:
\[
\alpha_{a_{i+1}} \geq \alpha_{a_i + E(x_{a_i+l+1})} > \Phi(\gamma_{i+1}),
\]
thus ending the proof of the claim, hence the lemma.
\qed
\begin{remark}
As a side note, the claim in the lemma also implies that the $\Phi(\gamma_i)$'s are a subsequence of the $\alpha_i$'s by using the following fact which can be shown using induction on $i$:
If $\alpha_{j-i-1} > \beta \geq \alpha_j$ and $x_{j-i-1} > \mathrm{MC}(\beta)$, then $\beta=\alpha_k$ for some $j-i\leq k \leq j$.
\end{remark}
\section{How to prove the Ketonen--Solovay theorem}\label{section:proof}
We proceed with, essentially, the proofs from Sections 5 and 6 of \cite{ketonensolovay}. The proofs have been streamlined into our setting, defining trees as sets of sequences, as is usual in reverse mathematics. The use of tree arguments for proving Ramsey-type theorems is attributed to Erd\H{o}s and Rado. The outline is as follows:
\begin{enumerate}
\item
Show that, if $X$ is $(\omega\cdot c)$-large, then for every colouring $C\colon X \rightarrow c$ there exists $\omega$-large $C$-homogeneous $H \subseteq X$. This is Lemma~\ref{lemma:php}.
\item
Given $C\colon [X]^{d+2}\rightarrow c$, construct Erd\H{o}s--Rado trees $T_i$ from the first $i$ elements of $X$. Derive, from these trees, a decreasing sequence of ordinals of length $|X|$. Use Lemma~\ref{lemma:411replacement} to determine that, if $X$ is ``large enough'' compared to $\alpha$, then $T_{|X|}$ contains an $\alpha$-large branch $Y$ such that the value of $C(x)$ on $[Y]^{d+2}$ does not depend on $\max x$. The case $d=0$ is handled in Lemma~\ref{lemma:ketonensolovayd=0}; the case $d>0$ is treated in Lemma~\ref{lemma:stepup}.
\item
Using induction on $d$, derive Theorem~\ref{thm:ketonensolovay} from the above.
\end{enumerate}
\begin{lemma}\label{lemma:php} If $X$ is $(\omega \cdot c)$-large then for every colouring $C\colon X \rightarrow c$ there exists $\omega$-large $C$-homogeneous $H \subseteq X$.
\end{lemma}
\emph{Proof:}
Since $X$ is $(\omega \cdot c)$-large it is the disjoint union of $\omega$-large sets: $X=X_0 \cup \dots \cup X_{c-1}$. Assume, without loss of generality, that the $\min C^{-1}(i)$'s are increasing. Assume, for a contradiction, that no $C^{-1} (i)$ is $\omega$-large. By induction on $i < c-1$ we have:
\[
\min C^{-1} (i) \leq \min X_i \ \& \ |\bigcup_{j \leq i} C^{-1} (j)| < |\bigcup_{j\leq i}X_j|,
\]
the latter being implied by the first, as the $X_i$'s are $\omega$-large whilst the $C^{-1}(i)$'s are not.
This implies $|\bigcup_{j \leq c-1} C^{-1} (j)| < |X|$, a contradiction.
\qed
\begin{definition}
For $0<i\leq d+1$, $C\colon [X]^{d+1} \rightarrow c$, we say $Y\subseteq X$ is ${\min}_i$-$C$-homogeneous if the value of $C$, on $[Y]^{d+1}$, depends only on the first $i$ elements of its input:
\[
C(x_0, \dots , x_{i-1} , y_i , \dots y_d)= C(x_0, \dots , x_{i-1} , z_i , \dots , z_d)
\]
for all $x_0 < \dots < x_{i-1} < y_i < \dots < y_d$, $x_{i-1} < z_i < \dots < z_d$ from $Y$.
\end{definition}
\begin{lemma}\label{lemma:ketonensolovayd=0}
If $X$ is $(\omega^{c+3}+\omega^3 + c+4)$-large, then every colouring $C\colon [X]^2 \rightarrow c$ has homogeneous $H\subseteq X$ of size $> \min H$.
\end{lemma}
\emph{Proof:} Given $X=\{ x_0 < \dots < x_{|X|-1}\}$ and $C\colon [X]^{2} \rightarrow c$, by the previous lemma it is sufficient to show that $X$ has an $(\omega\cdot c)$-large ${\min}_1$-$C$-homogeneous subset. Assume, for a contradiction, that this is not the case.
Define $T_0 \subset \dots \subset T_{|X|}$ as follows: $T_0= \{ \emptyset \}$ and
\[
T_{i+1}= T_i \cup \{ \sigma \conc x_i\},
\]
where $\sigma \in T_i$ is the leftmost of maximum length such that $\{ \sigma_0 , \dots, \sigma_{\mathrm{lh}(\sigma )-1} ,x_i\}$ is ${\min}_1$-$C$-homogeneous. By construction, if $\sigma\conc y , \sigma\conc z \in T_i$ then:
\[
C( \sigma_{\mathrm{lh}(\sigma)-1} , y ) \neq C(\sigma_{\mathrm{lh}(\sigma)-1}, z).
\]
So the number of branches of $\sigma$ has upper bound $c$.
Let $(\omega\cdot c) [\sigma_0] \dots [\sigma_{\mathrm{lh}(\sigma)-1}]=\omega \cdot d_\sigma + r_\sigma >0$ with $d_\sigma,r_\sigma\in\mathbb{N}$.
Define $n_{\sigma, i} = (c+1)^{r_\sigma} (c-\#$branches of $\sigma$ in $T_i)$.
Take $\gamma_0=\omega^c$ and, for $i>0$:
\[
\gamma_i= \bigoplus_{\mathclap{\substack{ j<c \\ \emptyset\neq \sigma \in T_i \\ j = d_\sigma}}} \omega^j \cdot n_{\sigma, i}.
\]
One can check that: $\mathrm{MC} (\gamma_{i}) \leq E(x_i + c)$.
Notice that, by the absence of an $(\omega \cdot c)$-large ${\min}_1$-$C$-homogeneous subset of $X$: $\gamma_{i+1} < \gamma_i$ and $\gamma_{|X|} >0$.
This is a contradiction due to Lemma~\ref{lemma:411replacement}.
\qed
\begin{remark}
Any $\omega^{c+4}$-large $X>2$ is also $(\omega^{c+3}+\omega^3 + c+4)$-large.
\emph{Proof:} This follows from $\omega^{c+4}[x_0] \dots [x_{c+4}] > (\omega^{c+3}+\omega^3 + c+4)$ with Lemmas~\ref{lemma:shiftlarge} and~\ref{lemma:largerordinal}.
\qed
\end{remark}
\begin{lemma}\label{lemma:stepup}
If $X$ is $(\omega^3 \cdot \omega^\alpha + \omega^3 + \max \{c ,\mathrm{MC}(\alpha)\}+3)$-large, then for every colouring $C\colon [X]^{d+1} \rightarrow c$ there exists an $\alpha$-large ${\min}_d$-$C$-homogeneous subset of $X$.
\end{lemma}
\emph{Proof:} Assume, for a contradiction, that the colouring $C\colon [X]^{d+1} \rightarrow c$ is such that there is no $\alpha$-large ${\min}_d$-$C$-homogeneous subset of $X=\{ x_0 < \dots < x_{|X|-1}\}$. Define $T_0 \subset \dots \subset T_{|X|}$ as follows: $T_0= \{ \emptyset \}$ and
\[
T_{i+1}= T_i \cup \{ \sigma \conc x_i\},
\]
where $\sigma \in T_i$ is the leftmost of maximum length such that $\{ \sigma_0 , \dots, \sigma_{\mathrm{lh}(\sigma )-1} ,x_i\}$ is ${\min}_d$-$C$-homogeneous. By construction, if $\sigma\conc y , \sigma\conc z \in T_i$ then there are $\sigma_{j_0} < \dots < \sigma_{j_{d-1}}$ with
\[
C( \sigma_{j_0} , \dots , \sigma_{j_{d-1}}, y ) \neq C(\sigma_{j_0} , \dots , \sigma_{j_{d-1}}, z).
\]
So the number of branches of $\sigma$ is bounded by the number of colourings $[\sigma_0, \dots , \sigma_{\mathrm{lh} (\sigma )-1} ]^d \rightarrow c$, which has upper bound $c^{2^{\sigma_{\mathrm{lh}(\sigma)-1}}}$.
Define: $m_{\sigma, i}= c^{2^{\sigma_{\mathrm{lh}(\sigma)-1}}}-\#\text{branches of } \sigma \text{ in } T_i$.
By the comment directly above, $m_{\sigma, i} \geq 0$.
Take $\gamma_0=\omega^\alpha$ and:
\[
\gamma_i= \bigoplus_{\emptyset \neq \sigma \in T_i} \omega^{\alpha[\sigma_0]\cdots [\sigma_{\mathrm{lh}(\sigma)-1}]}\cdot m_{\sigma, i}.
\]
One can check that: $\mathrm{MC} (\gamma_{i}) \leq E(x_i+ \max \{c, \mathrm{MC}(\alpha)\} +1)$.
We can see that, by the absence of ${\min}_d$-$C$-homogeneous $\alpha$-large subsets of $X$: $\gamma_i > \gamma_{i+1}$ and $\gamma_{|X|}>0$.
This is a contradiction by Lemma~\ref{lemma:411replacement}.
\qed
Theorem~\ref{thm:ketonensolovay} can now be shown using induction on $d$ to prove:
If $X\geq 3$ is $\omega_{d}(\omega^{c+4}+d)$-large, then every colouring $C\colon [X]^{d+2} \rightarrow c$ has homogeneous $H\subseteq X$ of size $> \min H$.
Use Lemma~\ref{lemma:ketonensolovayd=0} for the base case and Lemma~\ref{lemma:stepup} in the induction step. Use Lemmas~\ref{lemma:shiftlarge} and~\ref{lemma:largerordinal} to bridge the differences in largeness.
\begin{corollary}
Theorem~\ref{thm:ketonensolovay} is provable in $\mathrm{RCA}_0$.
\end{corollary}
\begin{remark}
The above proof also goes through in $\mathrm{RCA}_0^{\displaystyle{*}}$ if we fix a standard $d$.
\end{remark}
\section{A note on weakening the base theory}
We work in $\mathrm{RCA}_0^{\displaystyle{*}}$ as defined in X.4 from \cite{sosoa} and elaborated in Section~2 from \cite{simpsonsmith}. Since the existence of every elementary function is provable within $\mathrm{RCA}_0^{\displaystyle{*}}$, we recommend Sections~2 and~3 from Chapter~1 of \cite{rose} as background material. Defining our ordinals as in Section~\ref{section:ordinals} poses no problem; note, however, that the proof in $\mathrm{RCA}_0$ implicitly used that $(\alpha, x) \mapsto \alpha[x]$ is primitive recursive.
As we are now working within the weaker system, we need this function to be elementary, which is non-obvious for nonstandard $d$. For example, if we encode the ordinals using prime numbers, as in \cite{simpsonsmith}, then we may need a nonstandard amount of iterations of the exponential function to determine the code of $\alpha[x]$, which is not available in $\mathrm{RCA}_0^{\displaystyle{*}}$. To solve this problem, we will give an explicit encoding of ordinals which is consistent with the previous definitions and which will allow us to use bounded recursion to define fundamental sequences.
Starting with the pairing map $j (x,y)= \frac{1}{2}(x+y+1)(x+y)+y$ and projections $j_1$ and $j_2$ for this map, we use:
\[
\pi^n_i (x)= \left\{
\begin{array}{ll}
j_1 j_2^{(n-i)}(x) & \textrm{if $1<i\leq n$}, \\
j_2^{(n-1)}(x) & \textrm{if $i=1$}.
\end{array}
\right.
\]
Notice that:
\[
j(\pi^n_n(x) , j(\pi^n_{n-1}(x), \dots , j(\pi^n_2(x), \pi^n_1(x)) \dots ))=x.
\]
Intuitively, the $\pi^n_i$ are the usual projection functions on $n$-tuples coded in this way, for all (including nonstandard) $n$.
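As an illustration, the pairing map and the projections can be transcribed directly into executable form. The following Python sketch (a reading aid, not part of the formal development) checks the decoding identity above on a coded triple:
\begin{verbatim}
import math

def j(x, y):                      # the pairing map from the text
    return (x + y + 1) * (x + y) // 2 + y

def unpair(z):                    # inverse of j: unpair(j(x, y)) == (x, y)
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

def j1(z): return unpair(z)[0]
def j2(z): return unpair(z)[1]

def pi(n, i, z):                  # \pi^n_i applied to a coded n-tuple z
    for _ in range(n - i):
        z = j2(z)
    return j1(z) if i > 1 else z

# encode the 3-tuple (a_3, a_2, a_1) = (5, 3, 2) and decode it again
t = j(5, j(3, 2))
assert [pi(3, i, t) for i in (3, 2, 1)] == [5, 3, 2]
assert j(pi(3, 3, t), j(pi(3, 2, t), pi(3, 1, t))) == t
\end{verbatim}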
\begin{definition}[Codes of ordinals]
Using bounded recursion, we define the codes of ordinals from $\mathcal{E}$ and relation $\prec$ on the codes as follows:
\begin{enumerate}
\item $a$ is a code whenever $j_1(a)=n>0$ and $\pi^n_{1}(j_2(a)) \succeq \dots \succeq \pi^n_n (j_2(a))$ are codes.
\item Given codes $a,b$ with $n=j_1(a)$, $m=j_1 (b)$, $a'=j_2(a)$ and $b'=j_2(b)$,
$a \prec b$ if and only if:
\begin{enumerate}
\item $n<m$ and $\pi^n_i (a') = \pi^m_i (b')$ for all $0<i\leq n$, or
\item there is $0<i \leq \min\{n , m\}$ with $\pi^{n}_j(a')=\pi^m_j(b')$ for all $0<j<i$ and $\pi^{n}_i(a')\prec \pi^m_i(b')$.
\end{enumerate}
\end{enumerate}
$0$ is the code for $0$; $w_0=j(1,0)$, $w_{i+1}=j(1, w_i)$; and $0 \preceq a$ for all codes of ordinals $a$. As we did with the ordinals, we write $a_i=\pi^n_i(j_2(a))$.
\end{definition}
One can define ordinal addition, multiplication, natural (Hessenberg) sum, the Cantor Normal Form (coded version) and the maximum coefficient on the codes of the ordinals using bounded recursion.
Using bounded recursion one can define a function $\mathrm{code}\colon \mathcal{E} \to \mathbb{N}$ that preserves the order and the operations on the ordinals. Furthermore $\mathrm{code}(\omega_d)=w_d$.
Our next step is to define the fundamental sequences on the codes of ordinals.
\begin{definition}
On the codes of ordinals, following the definition of fundamental sequences on ordinals, define:
\[
a[x]= \left\{
\begin{array}{ll}
0 & \textrm{if $a=0$}, \\
j(n-1, j_2^{(2)}(a)) & \textrm{if $n=j_1(a)>0$ and $a_n=0$}, \\
j(n+x-1, f(x, b, a) ) & \textrm{if $n=j_1(a)>0$, $m=j_1(a_n)>0$, $(a_n)_m=0$} \\
& \textrm{and $b=j(m-1, j_2^{(2)}(a_n)) $},\\
j(n, j(a_n[x], j_2^{(2)}(a))) & \textrm{otherwise}, \\
\end{array}
\right.
\]
where $f(0, b, a)=j_2^{(2)}(a)$ and $f(i+1, b,a) = j(b, f(i, b,a))$.
\end{definition}
\begin{lemma}
The functions $(a,x) \mapsto a[x]$ and $(a , \{x_0 < \dots < x_n\}) \mapsto a[x_0] \dots [x_n]$ are elementary.
\end{lemma}
\emph{Proof:} One can check:
\[
f(i, b, a) \leq (a+b+2)^{2^{i+2}},
\]
hence, for $0<a\prec w_d$ and $x>0$:
\[
a[x] \leq (a+x+2)^{2^{x+2d+4}}.
\]
So:
\[
a[x_0] \dots [x_n] \leq (a+x_n+2)^{2^{(n+1)(x_n+2d+5)}}.
\]
Therefore, these functions are elementary by bounded recursion.
\qed
Taking care to use $\Delta^0_0$-induction every time induction was used, one can proceed with the proofs as described, using codes of ordinals instead of $\mathcal{E}$ where necessary. Simply observe that if a set is $\alpha$-large, then it is also $\mathrm{code}(\alpha)$-large, where $a$-largeness is similar to $\alpha$-largeness, but defined on the codes of ordinals instead of on the ordinals.
As an example of dealing with the induction steps, examine Lemma~\ref{lemma:fundbound} modified to the ordinal codes:
\begin{lemma}
If $w_d \succ a \succ b$ and $x>\mathrm{MC}(b)$, then $a[x] \succeq b$, where the inequality is strict if $a$ is a limit.
\end{lemma}
\emph{Proof:} Given $x$, $a$ and $b$, use $\Delta^0_0$-induction on $d$ to prove the following:
If $w_d \succ a' \succ b'$, $a' \leq a$, $b'\leq b$, and $x>\mathrm{MC}(b)$, then $a'[x] \succeq b'$, where the inequality is strict if $a'$ is a limit.
To show that this can be expressed with a $\Delta^0_0$-formula, notice that the characteristic functions of $a \prec b$ and $a[x] \succeq b$ are elementary and use those functions as set parameter values.
\qed
\begin{corollary}
Theorem~\ref{thm:ketonensolovay} is provable in $\mathrm{RCA}_0^{\displaystyle{*}}$.
\end{corollary}
\section{Introduction}
Among the tasks of continuous finite-dimensional optimization, the most important from a practical point of view and, at the same time, the most difficult is the class of problems of global optimization. Methods for solving the global optimization problem are divided into deterministic, stochastic, and heuristic methods. Heuristic methods are a relatively new and rapidly developing class of global optimization methods. Among these methods, evolutionary and behavioral methods stand out. Behavioral methods are multiagent methods based on modeling the intellectual behavior of agent colonies (swarm intelligence). In nature, such an intelligence is possessed by groups of social insects, for example, termite colonies, ants, bees, and some species of wasps.
The dynamics of the population of social insects is determined by the interactions of insects with each other, as well as with the environment. These interactions are carried out through various chemical or physical signals, for example, pheromones secreted by ants.
Swarm intelligence has attracted the interest of researchers in many related fields since 2005, and quite a few modifications of the algorithm exist. The Bees Algorithm was introduced by Pham, Ghanbarzadeh et al. in 2005 \cite{pham2005bees,pham2006bees}; it mimics the food foraging behaviour of honey bee colonies. The effectiveness and specific abilities of this algorithm have been demonstrated in the following papers \cite{pham2009bees,pham2014benchmarking,pham2015comparative}. The Artificial Bee Colony (ABC) algorithm, a swarm based meta-heuristic algorithm, was introduced by Karaboga in 2005 \cite{karaboga2005artificial}. More about this algorithm, such as its performance and convergence, can be found in the following papers \cite{karaboga2008performance,karaboga2009comparative}. The main difference of ABC from the usual Bee Colony algorithm is that bees can leave their region if there is not enough nectar. This can be very useful when looking for a global extremum: the algorithm can converge much faster. An undoubted advantage of these algorithms is that they generalize to problems of any dimension and can easily be run in parallel on supercomputers.
In this work we use the Modified Bee Colony (MBC) algorithm. The ABC is suitable only for finding a global extremum, while in our problem, due to noise in the experimental data, local extrema can occur and it is important for us to know their coordinates. Therefore, the MBC algorithm is the more suitable tool. The difference from the usual Bee Colony is that if we are unable to improve the found extremum for some time, we shrink the local search area.
To evaluate the algorithm, which is written in Python, test runs were carried out to find global and local minima of well-known test functions. Although all the functions below generalize to the multidimensional case, in this work we run our program only in the two-dimensional case: the algorithm itself can easily be rewritten for any dimension, and in two dimensions it is easier to clearly demonstrate its operation. The Shekel, Rosenbrock, Himmelblau and Rastrigin functions were chosen as benchmark problems.
The reactive transport in porous media is an important component of many industrial and environmental problems, such as water purification, soil pollution and remediation, catalytic filters, $CO_2$ storage, and oil recovery, to name just a few. Historically, most of the theoretical and experimental research on transport in porous media in general, and on reactive transport in particular, has been carried out at the macroscopic Darcy scale \cite{bear2013dynamics,helmig1997multiphase}. In many cases, the bottleneck in the computational modeling of reactive transport is the absence of data for the pore scale adsorption and desorption rates (or, more generally, the parameters of the heterogeneous reactions). Despite the progress in developing devices to perform experimental measurements at the pore scale, experimental characterization of these rates is still a very challenging task.
In the case of heterogeneous (surface) reactions at the pore scale, the species transport is coupled to the surface reaction via boundary conditions. When the reaction rates are not known, their identification falls into the class of boundary value inverse problems \cite{lavrentev1986ill,alifanov2011inverse,isakovinverse,samarskii2007numerical}. The additional information which is needed to identify the parameters is often provided in the form of the dynamic change of the concentration at the outlet (e.g., so called breakthrough curves). In the literature, inverse problems for porous media flow are discussed mainly in connection with parameter identification for macroscopic, Darcy scale problems. An overview of inverse problems in groundwater Darcy scale modeling can be found in \cite{sun2013inverse}. Identification of parameters for pore scale models is discussed in this paper, and the algorithms from \cite{sun2013inverse} and other papers discussing parameter identification at the macroscale cannot be applied here without modification. Let us shortly mention some general approaches for solving inverse problems.
Different algorithms can be applied for solving parameter identification problems, see, e.g.
\cite{tarantola2005inverse,Aster2013}. Many of the algorithms exploit deterministic methods based on the Tikhonov regularization technique \cite{tikhonov1977solutions,engl2014inverse} and aim at minimizing a functional of the difference between measured and computed quantities. An important part of such algorithms is the definition of the feasible set of parameters on which the functional is minimized. Local or global optimization procedures are used in the minimization \cite{horst2013handbook,nocedal2006numerical}.
In this sense it could be pointed out that there is a certain similarity between the mathematical formulation of an optimization problem and that of a parameter identification one.
Stochastic-deterministic methods are also a popular approach for solving parameter identification problems. A variant of the method based on deterministic sampling of points looks appropriate for the topic considered here. A stochastic approach for global optimization in its simplest form consists only of random search and is called Pure Random Search \cite{Zhigljavsky2008}. In this case the residual functional is evaluated at randomly chosen points from the feasible set.
Sobol sequences \cite{sobol1976uniformly,sobol1979systematic} can be used for sampling. The sensitivity analysis tool SALib \cite{Herman2017} has been shown to be an appropriate tool for this. Such an approach is successfully used, for example, in multicriteria parameter identification \cite{sobol1981choosing}.
The solution of the inverse problems we are interested in is composed of two ingredients: (multiple) solution of the direct (also called forward) problem, and the parameter identification algorithm.
The goal of this paper is to contribute to the understanding of the formulation and solution of a class of parameter identification problems for pore scale reactive transport, in the case when the measured concentration of the species at the outlet of the domain is provided as the extra information needed to carry out the identification procedure. Deterministic and stochastic parameter identification approaches are considered. The influence of noise in the measurements on the accuracy of the identified parameters is discussed. A multistage identification procedure is suggested for the considered class of problems. The proposed identification approach is applicable for different geometries (random and periodic) and for a range of process parameters. In this paper the potential of the approach is demonstrated by identifying parameters of the Langmuir isotherm for reactive flow at low Peclet and low Damk\"ohler numbers in a 2D periodic porous medium with circular inclusions. This paper is intended as the first in a series of papers dedicated to this topic; simulation results for random porous media and other regime parameters are the subject of follow-up papers.
The remainder of the paper is organized as follows. The Modified Bee Colony algorithm is described in detail in Section 2. The direct problem is considered in Section 3: at the pore scale, single phase laminar flow described by the incompressible Stokes equations and solute transport described by a convection-diffusion equation are considered, with the surface reaction accounted for in the boundary conditions. Henry and Langmuir adsorption isotherms are considered \cite{kralchevsky2008chemical}; identification is carried out for the adsorption and desorption parameters in the Langmuir isotherm. Section 4 is dedicated to the description of the computational algorithm used: the finite element method is exploited after triangulation of the computational domain, and the numerical investigation of the grid convergence and of the sensitivity with respect to the parameters is also presented there. The setup and solution of the parameter identification problem for reactive flow in porous media are described in Section 5. Finally, Section 6 summarizes the results presented in this paper.
\section{Bee Colony Algorithms}
\subsection{Modified Bee Colony Algorithm}
As noted in the introduction, we use the Modified Bee Colony (MBC) algorithm: unlike the ABC, which targets only the global extremum, it also locates the local extrema that can arise from noise in the experimental data, by shrinking the local search area whenever the current extremum cannot be improved for some time. Let us describe the idea of the algorithm. There are scout bees $sb$ that perform a random search. After the exploration, judging by the amount of nectar found, the area is divided into several local regions. The first $n$ regions are called the best locations, to which more agent bees $abb$ are sent; to the remaining locations fewer agent bees $abp$ are sent. All bees working in their locations search around their area trying to increase the amount of nectar. If the bees cannot improve the result for a certain number of iterations $\tau$, they narrow their search.
In general, our implementation of the algorithm requires 8 control parameters, namely:
\begin{itemize}
\item[-] number of best locations $n$;
\item[-] number of promising locations $m$;
\item[-] half side $d$ of the local search area;
\item[-] Euclidean distance threshold $\delta$;
\item[-] number of scout bees $sb$;
\item[-] number of agent bees at the best locations $abb$;
\item[-] number of agent bees at the promising locations $abp$;
\item[-] number $\tau$ of consecutive iterations without improvement, after which the search area is shrunk.
\end{itemize}
The algorithm is listed below:\\
\begin{algorithm}[H]
\SetAlgoLined
Initialization of scout bees\;
Sort them by nectar value\;
Calculate the Euclidean distance and divide into regions\;
\For{Check all regions}{
\If{Iteration counter >= Maximum iteration}
{Break\;}
Set initial extremum\;
Set \textit{divider} to zero\;
Set local search area size\;
Initialize local area where extremum is in the center\;
\While{not converged}{
\textit{Iteration counter += 1}\;
Initialize agent bees\;
Sort them by nectar value\;
\eIf{New found extremum is better than previous one}{
Change extremum to new one\;
Move local search area to the new extremum\;
Set \textit{fail = 0} \;
}{
\textit{fail += 1}\;
}
\If{\textit{fail} == \textit{stop fail}}{
Set \textit{fail = 0}\;
Set \textit{divider} += 2\;
Shrink local search area \textit{divider} times\;
\If{Euclidean distance between found extrema $\leq \varepsilon$}{
Break\;}
}
\If{Iteration counter == Maximum iteration}
{Break\;}
}
}
\For{All regions}
{
Print found extremum coordinates and values
}
\caption{Modified Bee Colony algorithm}
\label{algorithm}
\end{algorithm}
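A compact Python transcription of this scheme is given below. It is a minimal sketch rather than our production code: the parameter names mirror the list above, the per-site iteration budget and the shrink factor of two are illustrative choices, and the objective function plays the role of the nectar value (smaller is better).
\begin{verbatim}
import random

def mbc_minimize(f, bounds, n_best=2, n_prom=3, scouts=50,
                 bees_best=20, bees_prom=5, d0=0.5, delta=1.0,
                 stop_fail=3, eps=1e-8, max_iter=200):
    rand_pt = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    dist = lambda p, q: sum((a - b)**2 for a, b in zip(p, q)) ** 0.5

    # scout phase: random search, sort by objective, keep distant sites
    pts = sorted((rand_pt() for _ in range(scouts)), key=f)
    sites = []
    for p in pts:
        if all(dist(p, s) > delta for s in sites):
            sites.append(p)
        if len(sites) == n_best + n_prom:
            break

    results = []
    for k, site in enumerate(sites):
        bees = bees_best if k < n_best else bees_prom
        best, fbest, d, fail = site, f(site), d0, 0
        for _ in range(max_iter):
            # agent bees sample a box of half-side d around the extremum
            local = [[min(max(c + random.uniform(-d, d), lo), hi)
                      for c, (lo, hi) in zip(best, bounds)]
                     for _ in range(bees)]
            fc, cand = min((f(p), p) for p in local)
            if fc < fbest:
                best, fbest, fail = cand, fc, 0   # recentre on improvement
            else:
                fail += 1
            if fail == stop_fail:                 # shrink the search box
                fail, d = 0, d / 2
                if d < eps:                       # stop criterion
                    break
        results.append((best, fbest))
    return sorted(results, key=lambda r: r[1])

# example: the four minima of the Himmelblau function
him = lambda p: (p[0]**2 + p[1] - 11)**2 + (p[0] + p[1]**2 - 7)**2
for pt, val in mbc_minimize(him, [(-10, 10), (-10, 10)],
                            n_best=2, n_prom=2, delta=3.0):
    print([round(x, 3) for x in pt], round(val, 5))
\end{verbatim}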
\subsection{Benchmark problems}
To evaluate the algorithm, test runs were carried out on well-known test functions with known global and local minima. As discussed in the introduction, we restrict the runs to the two-dimensional case, which is easiest to visualize; the algorithm itself carries over to any dimension. The four functions, as specified in Table~\ref{tab_benchmarks_def}, are also transcribed in the Python sketch after the following list.
\begin{itemize}
\item \textit{Shekel function}. A multidimensional, multimodal, continuous, deterministic function commonly used for testing optimization techniques \cite{shekel1971test}.
\item \textit{Rosenbrock function}. A non-convex function, introduced by Howard H. Rosenbrock in 1960 \cite{rosenbrock1960automatic}, which is used as a performance test problem for optimization algorithms. The global minimum is inside a long, narrow, parabolic shaped flat valley. Finding the valley is trivial; converging to the global minimum, however, is difficult.
\item \textit{Himmelblau function}. A multimodal function used to test the performance of optimization algorithms. The locations of all the minima can be found analytically; however, because they are roots of cubic polynomials, the expressions written in terms of radicals are somewhat complicated. The function is named after David Mautner Himmelblau, who introduced it \cite{himmelblau1972applied}.
\item \textit{Rastrigin function}. A non-convex, non-linear multimodal function. It was first proposed by Rastrigin \cite{rastrigin1974extremal} as a 2-dimensional function and has been generalized by Rudolph \cite{rudolph1990globale}. The generalized version was popularized by Hoffmeister and B\"ack \cite{hoffmeister1990genetic} and M\"uhlenbein et al. \cite{muhlenbein1991parallel}. Finding the minimum of this function is fairly difficult due to its large search space and large number of local minima.
\end{itemize}
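For completeness, here are the four benchmarks in Python, a direct transcription of the formulae from Table~\ref{tab_benchmarks_def}; the exact minima quoted in Table~\ref{tab_minimums} can be checked against these definitions:
\begin{verbatim}
import math

def shekel(x, y):
    return (-1 / (1 + (x - 2)**2 + (y - 10)**2)
            - 1 / (2 + (x - 10)**2 + (y - 15)**2)
            - 1 / (2 + (x - 18)**2 + (y - 4)**2))

def rosenbrock(x, y):
    return 100 * (y - x**2)**2 + (1 - x)**2

def himmelblau(x, y):
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

def rastrigin(x, y):
    return (20 + x**2 + y**2
            - 10 * (math.cos(2*math.pi*x) + math.cos(2*math.pi*y)))

assert rosenbrock(1, 1) == 0 and himmelblau(3, 2) == 0
assert abs(rastrigin(0, 0)) < 1e-12
\end{verbatim}
Each of these can be handed to \texttt{mbc\_minimize} from the previous subsection via a wrapper such as \texttt{lambda p: shekel(p[0], p[1])}.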
\begin{figure}[h]
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1\linewidth]{func_3d_shekel.png}}
\caption{3D plot of Shekel function}
\label{shekel_3d}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1\linewidth]{bee_crowd_Shekel.png}}
\caption{Result of ABC algorithm}
\label{bee_shekel}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1\linewidth]{func_3d_rosenbrock.png}}
\caption{3D plot of Rosenbrock function}
\label{rosenbrock_3d}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1\linewidth]{bee_crowd_rosenbrock.png}}
\caption{Result of ABC algorithm}
\label{bee_rosenbrock}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1\linewidth]{func_3d_himmelblau.png}}
\caption{3D plot of Himmelblau function}
\label{himmelblau_3d}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1\linewidth]{bee_crowd_himmelblau.png}}
\caption{Result of ABC algorithm}
\label{bee_himmelblau}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1\linewidth]{func_3d_rastrigin.png}}
\caption{3D plot of Rastrigin function}
\label{rastrigin_3d}
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1\linewidth]{bee_crowd_rastrigin.png}}
\caption{Result of ABC algorithm}
\label{bee_rastrigin}
\end{minipage}
\end{figure}
\FloatBarrier
\begin{sidewaystable}[]
\centering
\caption{Benchmark definitions}
\begin{tabular}{|l|l|l|}
\hline
\textbf{Function} & \textbf{Formulae} & \textbf{Domain} \\ \hline
Shekel & $F(x,y) = -\frac{1}{1+(x-2)^2 + (y-10)^2} - \frac{1}{2+(x-10)^2 + (y-15)^2} - \frac{1}{2+(x-18)^2 + (y-4)^2}$ & $D = \{(x,y) | x \in [0, 20], y \in [0, 20]\}$ \\ \hline
Rosenbrock & $F(x, y) = 100(y - x^2)^2 + (1-x)^2$ & $D = \{(x,y) | x \in [-5, 5], y \in [-5, 5]\}$ \\ \hline
Himmelblau & $F(x,y) = (x^2 + y - 11)^2 + (x + y^2 - 7)^2$ & $D = \{(x,y) | x \in [-10, 10], y \in [-10, 10]\}$ \\ \hline
Rastrigin & $F(x,y) = 20 + x^2 + y^2 - 10(\cos(2\pi x) + \cos(2\pi y))$ & $D = \{(x,y) | x \in [-5, 5], y \in [-5, 5]\}$ \\ \hline
\end{tabular}
\label{tab_benchmarks_def}
\bigskip
\centering
\caption{Exact and found minimums}
\begin{tabular}{|l|l|l|c|c|}
\hline
\textbf{Function} & \textbf{Minimum} & \textbf{Program output} & \multicolumn{1}{l|}{\textbf{Iterations}} & \multicolumn{1}{l|}{\textbf{NFE}} \\ \hline
Shekel & \begin{tabular}[c]{@{}l@{}}F(2, 10) = -1.01439037\\ F(10, 15) = -0.5165\\ F(18, 4) = -0.5088\end{tabular} & \begin{tabular}[c]{@{}l@{}}F(2.01238, 10.00536) = -1.01424\\ F(9.99079, 14.9984) = -0.51645\\ F(18.00263, 4.00842) = -0.50875\end{tabular} & 93 & 1260 \\ \hline
Rosenbrock & F(1, 1) = 0 & \begin{tabular}[c]{@{}l@{}}F(1.00434, 1.00954) = 9.134e-05\\ F(0.96131, 0.92385) = 0.0015\\ F(0.80291, 0.64176) = 0.03969\end{tabular} & 138 & 1630 \\ \hline
Himmelblau & \begin{tabular}[c]{@{}l@{}}F(3.584428, -1.848126) = 0\\ F(-2.805118, 3.131312) = 0\\ F(-3.779310, -3.283186) = 0\\ F(3, 2) = 0\end{tabular} & \begin{tabular}[c]{@{}l@{}}F(3.57899, -1.8563) = 0.00284\\ F(-2.80447, 3.1313) = 1.356e-05\\ F(-3.77731, -3.27165) = 0.00543\\ F(3.00457, 2.00039) = 0.00081\end{tabular} & 113 & 1650 \\ \hline
Rastrigin & F(0,0) = 0 & \begin{tabular}[c]{@{}l@{}}F(-0.00168, -0.00483) = 0.00518\\ F(-2.98775, 2.98489) = 17.91085\\ F(2.98575, 2.97744) = 17.92021\\ F(1.99025, -3.97938) = 19.89912\end{tabular} & 96 & 1470 \\ \hline
\end{tabular}
\label{tab_minimums}
\end{sidewaystable}
\FloatBarrier
The definition of each function is given in Table~\ref{tab_benchmarks_def}, and 3D plots are shown in Figs.~\ref{shekel_3d}, \ref{rosenbrock_3d}, \ref{himmelblau_3d}, \ref{rastrigin_3d}. The operation of the algorithm can be seen in Figs.~\ref{bee_shekel}, \ref{bee_rosenbrock}, \ref{bee_himmelblau}, \ref{bee_rastrigin}: the \textit{light blue} spots mark processed points and the \textit{red stars} indicate the extremum found in each location. The Shekel and Himmelblau functions are quite simple in terms of finding extrema, so the total number of best and promising locations was chosen to be exactly the number of extrema. Since the Rosenbrock and Rastrigin functions are rather complex, despite having only one global minimum each, we let the algorithm work out more locations to increase the probability of success. It is also extremely important for us to control the number of function evaluations (NFE), because for our main problem, described in the next section, computing many direct problems is expensive. The exact minima and program results are listed in Table~\ref{tab_minimums}; as can be seen, the algorithm works well on all benchmarks. All plots in this article were built using the matplotlib library \cite{Hunter2007}.
\section{Mathematical model}
In this work we use the same mathematical model as described in our previous work \cite{grigoriev2019computational}, where we investigated the diffusion dominated case with small Damk\"ohler numbers. Here we deal with the reaction dominated case, where the Damk\"ohler numbers are large and finding a minimum can be difficult. For simplicity of analysis we consider the porous medium as periodically arranged circles in the two-dimensional case. A scheme of the domain is shown in Fig.\ref{f-1}. Due to the periodicity, we generate a computational grid only for a segment of the entire domain, marked with a red rectangle in Fig.\ref{f-1}. A part of the domain, namely $\Omega_f$, is occupied by a fluid, while the other part is occupied by the obstacles $\Omega_s$. The obstacle surfaces (where the reaction occurs) are denoted by $\Gamma_s$, while the symmetry lines are denoted by $\Gamma_{sim}$. It is supposed that the dissolved substance is introduced via the inlet boundary $\Gamma_{in}$, and the part of the substance which did not react flows out via $\Gamma_{out}$. Since the adsorption reaction occurs only on the surface of the obstacles, the computational domain consists only of $\Omega_f$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}
\shade[top color=blue!5, bottom color=blue!5] (7,0) rectangle +(3,5);
\shade[left color=blue!50, right color=blue!5] (3,0) rectangle +(4,5);
\shade[top color=blue!50, bottom color=blue!50] (0,0) rectangle +(3,5);
\draw [dashed] (0, 0) rectangle +(10,5);
\draw [color=red] (0,4.25) rectangle +(10,0.5);
\foreach \y in {3,...,6}
\foreach \x in {0,...,4} {
\filldraw [fill=yellow,draw=black] (\y,\x+0.25) circle (0.24);
\filldraw [fill=yellow,draw=black] (\y+0.5,\x+0.75) circle (0.24);
}
\draw [<-, line width=1, color=black] (1,1.5) -- (-1,1.5);
\draw [<-, line width=1, color=black] (1,2.5) -- (-1,2.5);
\draw [<-, line width=1, color=black] (1,3.5) -- (-1,3.5);
\draw [<-, line width=1, color=black] (11,1.5) -- (9,1.5);
\draw [<-, line width=1, color=black] (11,2.5) -- (9,2.5);
\draw [<-, line width=1, color=black] (11,3.5) -- (9,3.5);
\draw [-] (2,4.5) -- (2,6);
\draw (2,6.4) node {$\Omega_f$};
\draw [-] (4,4.375) -- (4,6);
\draw (4,6.4) node {$\Omega_s$};
\draw [-] (6,4.5) -- (6,6);
\draw (6,6.4) node {$\Gamma_s$};
\draw [-] (0,4.5) -- (-0.5,6);
\draw (-0.5,6.4) node {$\Gamma_{in}$};
\draw [-] (7.5,4.25) -- (8,6);
\draw [-] (8.5,4.75) -- (8,6);
\draw (8,6.4) node {$\Gamma_{sim}$};
\draw [-] (10,4.5) -- (10.5,6);
\draw (10.5,6.4) node {$\Gamma_{out}$};
\draw [->] (-0.5,-0.5) -- (1.5,-0.5);
\draw (1.9,-0.5) node {$x_1$};
\draw [->] (-0.5,-0.5) -- (-0.5,0.5);
\draw (-0.5,0.9) node {$x_2$};
\end{tikzpicture}
\caption{Sketch of the pore scale domain}
\label{f-1}
\end{center}
\end{figure}
\subsection{Flow problem}
The flow in the pores is described here by the steady state incompressible Stokes equations:
\begin{equation}\label{1}
\nabla p - \mu \nabla^2 \bm{u} = 0,
\end{equation}
\begin{equation}\label{2}
\nabla \cdot \bm{u} = 0,
\quad \bm{x} \in \Omega_f,
\quad t > 0,
\end{equation}
where $\bm{u}(\bm{x})$ and $p(\bm{x})$ are the fluid velocity and pressure, respectively,
while $\mu > 0$ and $\rho > 0$ are the viscosity and the density, which we assume to be constants \cite{bear2013dynamics,acheson2005elementary}.
Denote by $\bm n$ the outer normal vector to the boundary.
Suitable boundary conditions on $\partial \Omega_f$ are specified.
The velocity of the fluid $\bar{u}$ is prescribed at the inlet
\begin{equation}\label{3}
\bm{u} \cdot \bm n = \bar{u},
\quad \bm{u} \times \bm n = 0,
\quad \bm x \in \Gamma_{in} .
\end{equation}
At the outlet, pressure and absence of tangential force are prescribed
\begin{equation}\label{4}
p - \bm \sigma \bm n \cdot \bm n = \bar{p},
\quad \bm \sigma \bm n \times \bm n = 0,
\quad \bm x \in \Gamma_{out} .
\end{equation}
Standard no-slip and no-penetration conditions are prescribed on the solid walls:
\begin{equation}\label{5}
\bm{u} \cdot \bm n = 0,
\quad \bm{u} \times \bm n = 0,
\quad \bm x \in \Gamma_{s} .
\end{equation}
Symmetry conditions are prescribed on the symmetry boundary of the computational domain:
\begin{equation}\label{6}
\bm{u} \cdot \bm n = 0,
\quad \bm \sigma \bm n \times \bm n = 0,
\quad \bm x \in \Gamma_{sim} .
\end{equation}
\subsection{Species Transport}
The concentration of the solute in the fluid is denoted by $c(\bm x, t)$. The unsteady solute transport in the absence of homogeneous reactions is governed by the convection-diffusion equation
\begin{equation}\label{7}
\frac{\partial c }{\partial t} + \nabla \cdot (\bm{u} c)
- D \nabla^2 c = 0 ,
\quad \bm{x} \in \Omega_f,
\quad t > 0,
\end{equation}
where $D > 0$ is the solute diffusion coefficient which is assumed to be scalar and constant.
The concentration of the solute at the inlet is assumed to be known:
\begin{equation}\label{8}
c(\bm x, t) = \bar{c},
\quad \bm x \in \Gamma_{in} ,
\end{equation}
where $\bar{c} > 0$ is assumed to be constant.
Zero diffusive flux of the solute at the outlet and on
the external boundaries of the domain is prescribed as follows:
\begin{equation}\label{9}
D \nabla c \cdot \bm{n} = 0,
\quad \bm x \in \Gamma_{sim} \cup \Gamma_{out} .
\end{equation}
Note that convective flux via the outlet is implicitly allowed by the above equations.
The surface reactions that occur at the obstacles' surface $\Gamma_s$ satisfy the mass conservation law, in this particular case meaning that the change in adsorbed surface concentration is equal to the flux from the fluid to the surface. This is described as
\begin{equation}\label{10}
\frac{\partial m}{\partial t} = - D \nabla c \cdot \bm{n} ,
\quad \bm x \in \Gamma_{s},
\end{equation}
where $m$ is the surface concentration of adsorbed solute \cite{kralchevsky2008chemical}.
A mixed kinetic–diffusion adsorption description is used:
\begin{equation}\label{11}
\frac{\partial m}{\partial t} = f(c, m) .
\end{equation}
For reactive boundaries, the choice of $f$ and its dependence on $c$ and $m$ is critical for a correct description of the reaction dynamics at the solid–fluid interface. A number of different isotherms (i.e., different functions $f(c,m)$) exist for describing these dynamics, dependent on the solute attributes, the order of the reaction, and the interface type.
The simplest of these is the Henry isotherm, which assumes a linear relationship between the near surface concentration and the surface concentration of the adsorbed particles, and takes the form
\begin{equation}\label{12}
f(c, m) = k_a c - k_d m .
\end{equation}
Here $k_a \geq 0$ is the rate of adsorption, measured in unit length per unit time,
and $k_d \geq 0$ is the rate of desorption, measured per unit time. We used this isotherm in our previous work investigating the diffusion dominated case \cite{grigoriev2019computational}. In this work we use the Langmuir adsorption isotherm, a more complicated, three parameter model:
\begin{equation}\label{13}
f(c, m) = k_a c \left (1 - \frac{m}{m_\infty} \right ) - k_d m .
\end{equation}
Here $m_\infty > 0$ is the maximal possible adsorbed surface concentration.
In comparison to the Henry isotherm (\ref{12}), the Langmuir isotherm (\ref{13}) predicts a decrease in the rate of adsorption as the adsorbed concentration increases due to the reduction in available adsorption surface.
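The saturation effect is easy to see numerically. The sketch below uses illustrative values $k_a = 1$, $k_d = 0.2$, $m_\infty = 2$ (not taken from our experiments) and freezes the near-surface concentration at $c = 1$; integrating both isotherms with the forward Euler method, the Henry surface concentration tends to $k_a c/k_d = 5$, while the Langmuir one saturates at the lower level $k_a c/(k_a c/m_\infty + k_d) \approx 1.43$:
\begin{verbatim}
k_a, k_d, m_inf, c = 1.0, 0.2, 2.0, 1.0
tau, n_steps = 0.01, 5000
m_h = m_l = 0.0
for _ in range(n_steps):                                     # forward Euler
    m_h += tau * (k_a * c - k_d * m_h)                       # Henry
    m_l += tau * (k_a * c * (1 - m_l / m_inf) - k_d * m_l)   # Langmuir

assert abs(m_h - k_a * c / k_d) < 1e-3
assert abs(m_l - k_a * c / (k_a * c / m_inf + k_d)) < 1e-3
\end{verbatim}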
The formulation of the initial--boundary value problem, in addition to the governing equations
(\ref{1}), (\ref{2}), (\ref{7}), the boundary conditions (\ref{3})--(\ref{6}), (\ref{8})--(\ref{11}), and a specified isotherm (\ref{12}) or (\ref{13}), requires initial conditions:
\[
\bm{u}(\bm x, 0) = \bm{\bar{u}}(\bm x),
\quad \bm x \in \Omega_f ,
\]
\begin{equation}\label{14}
c(\bm x, 0) = c_0(\bm x),
\quad \bm x \in \Omega_f .
\end{equation}
\begin{equation}\label{15}
m(\bm x, 0) = m_0(\bm x),
\quad \bm x \in \Gamma_s.
\end{equation}
\subsection{Dimensionless form of the equations}
When a problem like the above one needs to be solved for a range of parameters (which is our goal here),
working with the dimensionless form of the equations gives definite advantages.
For the dimensionless variables (velocity, pressure, concentration) below, the same notations are used as for the dimensional ones. The height of the computational domain $\Omega_f$, namely $l$, is used for scaling the spatial sizes, the velocity is scaled by the inlet velocity $\bar{u}$, and the concentration is scaled by the inlet concentration $\bar{c}$.
The Stokes equations (\ref{1}), (\ref{2}) and their boundary conditions remain unchanged in dimensionless form, keeping in mind that in this case they are written with respect to the dimensionless velocity and pressure, and that the dimensionless viscosity is equal to one.
In dimensionless form Eq. (\ref{7}) reads
\begin{equation}\label{20}
\frac{\partial c }{\partial t} + \nabla \cdot (\bm{u} c)
- \frac{1}{\mathrm{Pe}} \nabla^2 c = 0 ,
\quad \bm{x} \in \Omega_f,
\quad t > 0,
\end{equation}
where
\[
\mathrm{Pe} = \frac{l \bar{u} }{D}
\]
is the Peclet number.
Further on, Eq. (\ref{8}) is transformed into
\begin{equation}\label{21}
c = 1,
\quad \bm x \in \Gamma_{in} ,
\end{equation}
while the boundary condition (\ref{9}) take the form
\begin{equation}\label{22}
\nabla c \cdot \bm{n} = 0,
\quad \bm x \in \Gamma_{sim} \cup \Gamma_{out} .
\end{equation}
The dimensionless form of Eq.(\ref{10}) is given by
\begin{equation}\label{23}
\frac{\partial m}{\partial t} = - \nabla c \cdot \bm{n} ,
\quad \bm x \in \Gamma_{s},
\end{equation}
where $m$ is scaled as follows:
\[
\bar{m} = l \bar{c} .
\]
In dimensionless form the adsorption relations, in the case of Henry isotherm, are written as follows
\begin{equation}\label{24}
\frac{\partial m}{\partial t} = \mathrm{Da}_a c - \mathrm{Da}_d m,
\quad \bm x \in \Gamma_{s},
\end{equation}
where the adsorption and desorption Damk\"ohler numbers are given by
\[
\mathrm{Da}_a = \frac{k_a}{\bar{u}} ,
\quad \mathrm{Da}_d = \frac{k_d l}{\bar{u}} .
\]
In the case when we consider Langmuir isotherm, (\ref{11}), (\ref{13}), the following dimensionless relation is used
\begin{equation}\label{25}
\frac{\partial m}{\partial t} = \mathrm{Da}_a c \left (1- \frac{m}{\mathrm{M}} \right ) - \mathrm{Da}_d m,
\quad \bm x \in \Gamma_{s},
\end{equation}
where the dimensionless parameter $M$ is given by
\[
\mathrm{M} = \frac{m_\infty }{l \bar{c}} .
\]
\section{Numerical solution of the direct problem}
The Finite Element Method (FEM) is used for the space discretization of the above problem, together with an implicit discretization in time. The algorithm used here for solving the direct problem is practically identical to the algorithm used to study oxidation in \cite{oxidation}.
\subsection{Geometry and grid}
The computational domain is a rectangle with a dimensionless height of $x_2=1$ and dimensionless length of $x_1=17.5$, in which ten half cylinders are embedded.
The distance between the centers of the cylinders in the $x_1$ direction is 1.5 dimensionless units, and the radius of the cylinders is 0.4 dimensionless units.
The computational domain $\Omega_f$ is triangulated using the grid generator Gmsh (website gmsh.info) \cite{Gmsh}.
The script for preparing the geometry is written in Python; a minimal sketch using the Gmsh API is given below. The computational grid is shown in Fig.~\ref{mesh_grid}.
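The following sketch reproduces the layout only schematically: the position of the first obstacle and the mesh size are illustrative assumptions, not values from our actual script.
\begin{verbatim}
import gmsh

gmsh.initialize()
gmsh.model.add("pore_channel")
L, H, r, step = 17.5, 1.0, 0.4, 1.5      # domain and obstacle data
rect = gmsh.model.occ.addRectangle(0, 0, 0, L, H)
# half-cylinders: disks centred on the wall y = 0, cut from the rectangle
disks = [(2, gmsh.model.occ.addDisk(2.0 + i * step, 0.0, 0, r, r))
         for i in range(10)]
gmsh.model.occ.cut([(2, rect)], disks)
gmsh.model.occ.synchronize()
gmsh.option.setNumber("Mesh.CharacteristicLengthMax", 0.05)
gmsh.model.mesh.generate(2)
gmsh.write("mesh.msh")
gmsh.finalize()
\end{verbatim}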
\begin{figure}[h!]
\center{\includegraphics[width=1\linewidth]{mesh.png}}
\caption{Computational mesh.}
\label{mesh_grid}
\end{figure}
\FloatBarrier
\subsection{Computation of steady state single phase fluid flow}
One way coupling is considered here. The fluid flow influences the species transport, but there is no back influence of the species concentration on the fluid flow. Based on this, the flow is computed in advance. The FEM approximation of the steady state flow problem \cite{gresho200incompressible} is based on the variational formulation of the considered boundary value problem (\ref{1}), (\ref{2}), (\ref{3}), (\ref{4}), (\ref{5}), (\ref{6}).
The following functional space $\bm V$ is defined for the velocity $\bm u$ ($\bm u \in \bm V$):
\[
\begin{split}
\bm V = \{ \bm v \in \bm H^1(\Omega_f) : \ & \bm{v} \cdot \bm n = 1, \
\bm{v} \times \bm n = 0 \ \mathrm{on} \ \Gamma_{in} ,
\\ & \bm{v} = 0 \ \mathrm{on} \ \Gamma_{s}, \ \bm{v} \cdot \bm n = 0 \
\ \mathrm{on} \ \Gamma_{sim} \} .
\end{split}
\]
The test functions satisfy $\bm v \in \hat{\bm V}$, where
\[
\hat{\bm V} = \{ \bm v \in \bm H^1(\Omega_f) : \ \bm{v} = 0 \ \mathrm{on} \ \Gamma_{in} , \
\bm{v} = 0 \ \mathrm{on} \ \Gamma_{s}, \ \bm{v} \cdot \bm n = 0 \
\ \mathrm{on} \ \Gamma_{sim} \} .
\]
For the pressure $p$ and the related test functions $q$, it is required that $p, q \in Q$, where
\[
Q = \{ q \in L_2(\Omega_f) : \ q = 0 \ \mathrm{on} \ \Gamma_{out} \} .
\]
Let us multiply Eq.(\ref{1}) by $\bm v$, Eq.(\ref{2}) by $q$, and integrate over the computational domain. Taking into account the boundary conditions (\ref{3}), (\ref{4}), (\ref{5}), (\ref{6}), the following system of equations is obtained
with respect to $\bm u \in \bm V$, $p \in Q$:
\begin{equation}\label{26}
a(\bm u, \bm v) - b(\bm v, p) = 0 \ \forall \bm v \in \hat{\bm V} ,
\end{equation}
\begin{equation}\label{27}
b(\bm u, q) = 0 \ \forall q \in Q .
\end{equation}
Here
\[
a(\bm u, \bm v) := \int_{\Omega_f} \nabla \bm u \cdot \nabla \bm v \, d \bm x ,
\]
\[
b(\bm v, q) := \int_{\Omega_f}(\nabla \cdot \bm v) q \, d \bm x .
\]
For the FEM approximation of the velocity, the pressure, and the respective test functions, the following finite dimensional subspaces are selected
$\bm V_h \subset \bm V$, $\hat{\bm V}_h \subset \hat{\bm V}$
and $Q_h \subset Q$.
Taylor-Hood $P_2-P_1$ elements \cite{taylor1973numerical} are used here.
These are continuous $P_2$ Lagrange elements for the velocity components
and continuous $P_1$ Lagrange elements for the pressure field.
The computations are carried out using the computing platform for partial differential equations FEniCS (website fenicsproject.org) \cite{LoggMardalEtAl2012a,AlnaesBlechta2015a}.
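For orientation, a condensed sketch of the Stokes solve in legacy FEniCS syntax is given below. The boundary markers \texttt{inflow}, \texttt{walls} and \texttt{symmetry} are assumed to be defined elsewhere (for instance as \texttt{SubDomain} objects matching $\Gamma_{in}$, $\Gamma_s$ and $\Gamma_{sim}$), and the do-nothing outflow treatment realizes (\ref{4}) with $\bar p = 0$:
\begin{verbatim}
from fenics import *

mesh = Mesh("mesh.xml")      # pore scale triangulation (converted from Gmsh)
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)    # Taylor-Hood pair
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([P2, P1]))

(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)
a = (inner(grad(u), grad(v)) - div(v) * p + div(u) * q) * dx
L = dot(Constant((0.0, 0.0)), v) * dx                 # zero right-hand side

bcs = [DirichletBC(W.sub(0), Constant((1.0, 0.0)), inflow),    # (3)
       DirichletBC(W.sub(0), Constant((0.0, 0.0)), walls),     # (5)
       DirichletBC(W.sub(0).sub(1), Constant(0.0), symmetry)]  # (6)

w = Function(W)
solve(a == L, w, bcs)
u_h, p_h = w.split()         # velocity and pressure fields
\end{verbatim}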
\begin{figure}[h!]
\center{\includegraphics[width=1\linewidth]{u_x.png}}
\caption{Velocity field in the $X$ direction.}
\label{u_x}
\end{figure}
\begin{figure}[h!]
\center{\includegraphics[width=1\linewidth]{u_y.png}}
\caption{Velocity field in the $Y$ direction.}
\label{u_y}
\end{figure}
\begin{figure}[h!]
\center{\includegraphics[width=1\linewidth]{pressure.png}}
\caption{Pressure field.}
\label{press}
\end{figure}
\FloatBarrier
As mentioned above, mainly slow flows are of interest for the current study; therefore the basic variant considered is characterized by $\mathrm{Re}=1$. Convergence of the solution with respect to refinement of the grid is illustrated in Fig.\ref{spatial}, and convergence with respect to the time step in Fig.\ref{temporal}.
We have used three computational grids: a basic grid with 18743 nodes and 35958 triangles, a coarse grid with 4760 nodes and 8754 triangles, and a fine grid with 72745 nodes and 142460 triangles.
From the results it can be concluded that the coarse grid provides good enough accuracy for the numerical solution. Taking into account that we will solve a lot of direct problems, it was decided to conduct the study on the coarse grid.
Before starting the main computations, a study of the following parameters was carried out:\\
\begin{itemize}
\item $\mathrm{Pe} = [1, 10, 100]$
\item $\mathrm{Da}_a = [50, 100, 200]$
\item $\mathrm{Da}_d = [0.5, 1, 2]$
\item $\mathrm{M}_\infty = [100, 1000, 100000]$
\end{itemize}
The results are illustrated in Figs.\ref{pe_diffs}, \ref{m_diffs}, \ref{daa_diffs} and \ref{dad_diffs}. The curve in these figures is the breakthrough curve, which describes how much solute has crossed the outflow boundary; it is given by
\begin{equation}
c_{out}(t) = \frac{\int_{\Gamma_{out}}c(\bm{x}, t)d\bm{x}}{\int_{\Gamma_{out}}d\bm{x}}.
\end{equation}
\begin{figure}[h]
\begin{minipage}[t]{0.47\linewidth}
\center{\includegraphics[width=1\linewidth]{spatial.png}}
\caption{Research on computational grids of different densities.}
\label{spatial}
\end{minipage}
\hfill
\begin{minipage}[t]{0.47\linewidth}
\center{\includegraphics[width=1\linewidth]{temporal.png}}
\caption{Research on different time steps.}
\label{temporal}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.47\linewidth}
\center{\includegraphics[width=1\linewidth]{Pe_diffs.png}}
\caption{Research on different Peclet numbers.}
\label{pe_diffs}
\end{minipage}
\hfill
\begin{minipage}[t]{0.47\linewidth}
\center{\includegraphics[width=1\linewidth]{M_diffs.png}}
\caption{Study on different maximum concentrations adsorbed on the surface.}
\label{m_diffs}
\end{minipage}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.47\linewidth}
\center{\includegraphics[width=1\linewidth]{Daa_diffs.png}}
\caption{Research on different Damk\"ohler numbers for adsorption.}
\label{daa_diffs}
\end{minipage}
\hfill
\begin{minipage}[t]{0.47\linewidth}
\center{\includegraphics[width=1\linewidth]{Dad_diffs.png}}
\caption{Research on different Damk\"ohler numbers for desorption.}
\label{dad_diffs}
\end{minipage}
\end{figure}
\FloatBarrier
\subsection{Simulation of reactive transport}
The unsteady species transport problem (\ref{20}), (\ref{15}), (\ref{21})--(\ref{23}) is solved numerically using standard Lagrangian $P_1$ finite elements. Let us define
\[
S = \{ s \in H^1(\Omega_f) : \ s = 1 \ \mathrm{on} \ \Gamma_{in} \} ,
\]
\[
\hat{S} = \{ s \in H^1(\Omega_f) : \ s = 0 \ \mathrm{on} \ \Gamma_{in} \} .
\]
The approximate solution $c \in S$ is sought from
\begin{equation}\label{28}
\left (\frac{\partial c}{\partial t}, s \right) + d(c,s) =
\left (\frac{\partial m}{\partial t} ,s \right )_s
\quad \forall s \in \hat{S} ,
\end{equation}
where the following notations are used
\[
d(c,s) := - \int_{\Omega_f} c \bm u \cdot \nabla s \, d \bm x
+ \frac{1}{\mathrm{Pe}} \int_{\Omega_f} \nabla c \cdot \nabla s \, d \bm x
+ \int_{\Gamma_{out}} (\bm u \cdot \bm n) c s \, d \bm x ,
\]
\[
(\varphi,s)_s := - \int_{\Gamma_{s}} \varphi s \, d \bm x .
\]
For determining $m \in G = L_2(\Gamma_{s})$ (see (\ref{11})) we use
\begin{equation}\label{29}
\left (\frac{\partial m}{\partial t} , g \right )_s -
(f(c,m), g)_s = 0,
\quad g \in G .
\end{equation}
The discretization in time is based on symmetric discretization (Crank--Nicolson method), which is second order accurate (see, e.g., \cite{Samarskii,Ascher2008}).
Let $\tau$ be a step-size of a uniform grid in time such that
$c^n = c(t^n), t^n = n\tau, \ n = 0, 1, ...$.
Eq.(\ref{28}) is approximated in time as follows
\[
\left (\frac{c^{n+1} - c^n}{\tau }, s \right) + d\left (\frac{c^{n+1} + c^n}{2} ,s\right ) =
\left (\frac{m^{n+1} - m^n}{\tau } ,s \right )_s .
\]
Similarly, for (\ref{29}) we get
\[
\left (\frac{m^{n+1} - m^n}{\tau } , g \right )_s -
\left (f\left (\frac{c^{n+1} + c^n}{2} ,\frac{m^{n+1} + m^n}{2} \right ), g\right )_s = 0 ,
\quad n = 0, 1, ... ,
\]
In the case considered here, zero initial conditions are imposed:
\[
c^0 = 0, \quad \bm x \in \Omega_f,
\]
\[
m^0 = 0, \quad \bm x \in \Gamma_s .
\]
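A sketch of the resulting time loop in legacy FEniCS syntax follows, continuing the Stokes sketch above. It is a simplified illustration: the Henry isotherm (\ref{24}) is used so that each step stays linear (for the Langmuir isotherm (\ref{25}) one extra fixed-point iteration per step, or a semi-implicit treatment of the factor $1 - m/\mathrm{M}$, is needed); \texttt{u\_h} is the precomputed Stokes velocity, \texttt{ds\_s} and \texttt{ds\_out} are assumed boundary measures for $\Gamma_s$ and $\Gamma_{out}$ built from facet markers, and the scalars \texttt{tau}, \texttt{Pe}, \texttt{Da\_a}, \texttt{Da\_d}, \texttt{n\_steps} are given. The breakthrough values $c_{out}(t^n)$ are accumulated along the way:
\begin{verbatim}
V = FunctionSpace(mesh, "Lagrange", 1)
c, s = TrialFunction(V), TestFunction(V)
c_n = interpolate(Constant(0.0), V)  # c^0 = 0
m_n = interpolate(Constant(0.0), V)  # m^0 = 0 (only its trace on Gamma_s acts)
c_new = Function(V)
bc_in = DirichletBC(V, Constant(1.0), inflow)

c_out = []
for step in range(n_steps):
    # Henry isotherm, Crank-Nicolson in m with c frozen at c^n
    m_new = project(m_n + tau * (Da_a * c_n - Da_d * m_n)
                          / (1.0 + 0.5 * tau * Da_d), V)
    cm = 0.5 * (c + c_n)             # Crank-Nicolson average of c
    F = ((c - c_n) / tau) * s * dx \
        - dot(u_h, grad(s)) * cm * dx \
        + (1.0 / Pe) * dot(grad(cm), grad(s)) * dx \
        + dot(u_h, FacetNormal(mesh)) * cm * s * ds_out \
        + ((m_new - m_n) / tau) * s * ds_s
    solve(lhs(F) == rhs(F), c_new, bc_in)
    c_n.assign(c_new)
    m_n.assign(m_new)
    c_out.append(assemble(c_n * ds_out) / assemble(Constant(1.0) * ds_out))
\end{verbatim}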
The species concentration at different time moments is shown in Fig.\ref{diff_time_solute}.
A sensitivity study is carried out in order to see how the change of different parameters leads to a change of the breakthrough curves. Such sensitivity studies are often the first stage of optimization or parameter identification procedures. The dependence of the average outlet concentration on the Peclet number is shown in Fig.\ref{pe_diffs}.
The rate of adsorption is characterized by $\mathrm{Da}_a$. The influence of this parameter on the outlet concentration is illustrated in Fig.\ref{daa_diffs}: increasing $\mathrm{Da}_a$, as expected, leads to more intensive adsorption and a larger amount of deposited substance. The influence of the parameter $\mathrm{Da}_d$ on the outflow concentration is illustrated in Fig.\ref{dad_diffs}.
The Henry isotherm usually describes well the initial stages of the adsorption. In cases when only a limited mass can be adsorbed at a surface, the Langmuir isotherm should be used to reflect the decay of the adsorption rate close to saturation. In this case an additional parameter appears, namely $\mathrm{M}$ (see (\ref{25})).
The influence of $\mathrm{M}$ on the average output concentration is shown in Fig.\ref{m_diffs}.
\begin{figure}[h]
\begin{minipage}[t]{1.0\linewidth}
\center{\includegraphics[width=1\linewidth]{t_450.png}} \\a
\label{t_450}
\end{minipage}
\hfill
\begin{minipage}[t]{1.0\linewidth}
\center{\includegraphics[width=1\linewidth]{t_900.png}} \\b
\label{t_900}
\end{minipage}
\hfill
\begin{minipage}[t]{1.0\linewidth}
\center{\includegraphics[width=1\linewidth]{t_1800.png}} \\c
\label{t_1800}
\end{minipage}
\hfill
\begin{minipage}[t]{1.0\linewidth}
\center{\includegraphics[width=1\linewidth]{conc_shk.png}}
\label{conc_shk}
\end{minipage}
\caption{Species concentration at different dimensionless time moments: a --- $t = 450$ , b --- $t = 900$, c --- $t = 1800$}
\label{diff_time_solute}
\end{figure}
\FloatBarrier
\section{Numerical solution of the inverse problem in the reaction dominated case}
Consider an inverse problem for determining the unknown adsorption parameters (a so-called parameter identification problem). As the function to be minimized, we choose the residual functional
\begin{equation}
J(\mathrm{Da}_a, \mathrm{Da}_d, \mathrm{M}_{\infty}) = \int_0^T (c_{out}(t) - \widehat{c(t)} )^2 dt,
\end{equation}
where $\widehat{c(t)}$ is a breakthrough curve from the experimental data set. The starting point is to monitor the difference (residual) between the measured $\widehat{c_{out}(t)}$ and the computed $c_{out}(t)$ average outflow concentrations for different values of the parameters $\mathrm{Da}_a$, $\mathrm{Da}_d$ and $\mathrm{M}_\infty$ in the Langmuir isotherm Eq.(\ref{25}).
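On the uniform time grid used in the computations, the functional is evaluated by a simple quadrature. A minimal sketch, with \texttt{c\_out} and \texttt{c\_hat} the computed and measured breakthrough samples and \texttt{tau} the time step:
\begin{verbatim}
import numpy as np

def residual_J(c_out, c_hat, tau):
    # trapezoidal approximation of J = int_0^T (c_out - c_hat)^2 dt
    r = (np.asarray(c_out) - np.asarray(c_hat)) ** 2
    return tau * (0.5 * r[0] + r[1:-1].sum() + 0.5 * r[-1])
\end{verbatim}
A closure such as \texttt{lambda p: residual\_J(solve\_direct(*p), c\_hat, tau)}, where \texttt{solve\_direct} is a (hypothetical) wrapper around the direct solver of Section~4, is then exactly the kind of objective that can be handed to the bee colony search of Section~2.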
\subsection{Deterministic approach}
Taking into account all the results shown in the figures above, it was decided to use the grid with 4760 nodes and the following parameters: $\tau = 30, \mathrm{Pe} = 10, \mathrm{Da}_a = 100, \mathrm{Da}_d = 1, \mathrm{M}_\infty = 1000$. To examine the residual functional, we fixed each of the three parameters in turn and evaluated the functional on a uniform 50 by 50 grid of the remaining two (see the results in Figs.~\ref{fixed_daa}, \ref{fixed_dad}, \ref{fixed_m}, and the sketch of such a scan below). The \textit{blue star} marks the exact parameter location, the \textit{red contour} the admissible set. As can be seen, the functional has a rather complex shape. When the location of the exact parameters is unknown, in the 3D case one must build a uniform grid, for example 50 by 50 by 50, which amounts to 125000 computations of the direct problem. Obviously, this strategy is simple and hard to get wrong, but it is almost always quite expensive. We performed these computations only to see the shape of the functional, since this work is first of all devoted to the operation of the bee colony algorithm.
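A sketch of one such scan, here with $\mathrm{M}_\infty$ fixed and with the hypothetical \texttt{solve\_direct} wrapper mentioned above:
\begin{verbatim}
import numpy as np

Daa = np.linspace(60.0, 140.0, 50)
Dad = np.linspace(0.0, 2.0, 50)
J_grid = np.array([[residual_J(solve_direct(a, d, 1000.0), c_hat, tau)
                    for d in Dad]
                   for a in Daa])         # 2500 direct problems per slice
i, k = np.unravel_index(J_grid.argmin(), J_grid.shape)
print(Daa[i], Dad[k], J_grid[i, k])       # grid minimizer of the residual
\end{verbatim}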
\begin{figure}[h]
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1\linewidth]{func_fixed_Daa.png}}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1\linewidth]{func_fixed_Daa_3d.png}}
\end{minipage}
\caption{Calculations for fixed $\rm{Da_a}$}
\label{fixed_daa}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1\linewidth]{func_fixed_Dad.png}}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1\linewidth]{func_fixed_Dad_3d.png}}
\end{minipage}
\caption{Calculations for fixed $\rm{Da_d}$}
\label{fixed_dad}
\end{figure}
\begin{figure}[h]
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1\linewidth]{func_fixed_M.png}}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1\linewidth]{func_fixed_M_3d.png}}
\end{minipage}
\caption{Calculations for fixed $\rm{M}$}
\label{fixed_m}
\end{figure}
\FloatBarrier
\subsection{Modified Bee Colony Algorithm}
Let us take the parameters $\mathrm{Da}_a = 100$, $\mathrm{Da}_d = 1$, $\mathrm{M}_\infty = 1000$ as the exact solution that we will try to identify. We assume that the Peclet number is known and equal to 10. To show the behavior of the algorithm with various input parameters, we run the algorithm twice with different sets of parameters:
\begin{itemize}
\item[I:] $n = 1$, $m = 2$, $d_x = 1.5$, $d_y = 0.015$, $d_z = 5$, $\delta = 1$, $sb = 20$, $abb = 20$, $abp = 5$, $\tau = 3$;
\item[II:] $n = 2$, $m = 5$, $d_x = 1.5$, $d_y = 0.015$, $d_z = 5$, $\delta = 1$, $sb = 200$, $abb = 50$, $abp = 40$, $\tau = 5$.
\end{itemize}
$X$ axis --- parameter $\mathrm{Da}_a = [60, 140]$, $Y$ axis --- parameter $\mathrm{Da}_d = [0, 2]$, $Z$ axis --- parameter $\mathrm{M}_{\infty} = [800, 1200]$.
At the end of the algorithm, the following results were obtained: \\
For parameter set I:\\
J(125.8217,1.2586,1002.1758) = 0.0087\\
J(129.8189,1.3040,1045.4760) = 0.0466\\
J(128.2248,1.2780,967.3870) = 0.0517\\
J(100.6993,1.0117,1051.8844) = 0.0615\\
Total direct problem computing (times): 2080\\
For parameter set II:\\
J(92.7936, 0.9277, 997.6068) = 0.00129\\
J(83.3579, 0.8331, 994.8119) = 0.0030\\
J(101.6545, 1.0166, 1000.0540) = 0.0042\\
J(136.4043, 1.3656, 1011.4730) = 0.0057\\
J(117.6360, 1.1779, 1014.2707) = 0.0112\\
J(108.3891, 1.0849, 1011.3793) = 0.0118\\
J(86.7377, 0.8657, 980.6640) = 0.0183\\
Total direct problem computing (times): 12300\\
The execution time of the algorithm is omitted because it depends on the computer and on the cost of the direct problem. As mentioned above, the number of direct problem evaluations is what matters to us. This quantity depends on the input parameters, so one can adjust them depending on the computational cost of the direct problem and the available computational power. See Fig.\ref{MBC_res_3d} for the results of the 3D problem. One more way to decrease the number of direct problem evaluations is to increase the stopping criterion parameter; in all of our calculations it is set to 10e-8.
\begin{figure}[h!]
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1.0\linewidth]{bee_crowd_1.png}} \\ a
\end{minipage}
\hfill
\begin{minipage}[t]{0.5\linewidth}
\center{\includegraphics[width=1.0\linewidth]{bee_crowd_2.png}} \\ b
\end{minipage}
\caption{Results of the algorithm: a --- parameter set I, b --- parameter set II.}
\label{MBC_res_3d}
\end{figure}
\FloatBarrier
We compared the resulting extrema from both parameter sets with our exact solution and obtained relative errors of 0.0128\% and 0.0018\% for the first coordinate in parameter sets I and II, respectively. Of course, in some cases solving even 2080 direct problems is already expensive. As noted above, increasing the stop criterion parameter can significantly reduce the number of direct problem evaluations. For example, the parameter set
$n = 1$, $m = 2$, $d_x = 1.5$, $d_y = 0.015$, $d_z = 5$, $\delta = 1$, $sb = 20$, $abb = 20$, $abp = 5$, $\tau = 3$ was launched with the stop criterion equal to $10^{-3}$ and we obtained the following results:\\
\noindent J(126.1104, 1.2867, 1151.4495) = 0.3617\\
J(104.8607, 1.0383, 991.0511) = 7.5122\\
J(117.2133, 1.1463, 927.1145) = 7.5122\\
Total number of direct problem evaluations: 205\\
We compared the resulting extremum with the coordinates (126.1104, 1.2867, 1151.4495) with our exact solution and obtained a relative error of 0.626\%, which is a rather good result at a fairly low computational cost.
The relative error was calculated by the formula:
\begin{equation}
\mathcal{E}_{rel} = \frac{\| c_{out} - \widehat{c_{out}}\|_2}{\|\widehat{c_{out}}\|_2} = \frac{\sqrt{\sum_{i=0}^{Nt}(c_i - \widehat{c_i})^2}}{\sqrt{\sum_{i=0}^{Nt} \widehat{c_i}^2}},
\end{equation}
where $Nt$ is the number of time intervals and $\widehat{c_{out}}$ is our exact solution (data from experiments).
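In code the formula amounts to a few lines; the sketch below (with illustrative names) is the discrete version used conceptually throughout this section.
\begin{verbatim}
import numpy as np

def relative_error(c_out, c_exact):
    # Relative L2 misfit between the computed and the exact
    # (experimental) outlet curves, as in the formula above.
    c_out = np.asarray(c_out, dtype=float)
    c_exact = np.asarray(c_exact, dtype=float)
    return (np.sqrt(np.sum((c_out - c_exact) ** 2))
            / np.sqrt(np.sum(c_exact ** 2)))
\end{verbatim}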
\subsection{Experiments with diffusion dominated case}
Although this article is devoted to the reaction dominated case, we also want to present a brief analysis of the diffusion dominated case. More about this case can be found in our previous articles \cite{grigoriev2019computational,10.1007/978-3-030-41032-2_12}; let us recall the key points of that work. We investigated the diffusion dominated case (see Fig.\ref{dd_deter}), when the reaction rates were small; the simulation covered $T = 40$ in dimensionless time with a small time step $\tau = 0.1$. We implemented several methods for identifying the parameters, such as a deterministic approach, statistical parameter identification and multistage parameter identification.
In this section we show the results of the algorithm in the diffusion dominated case with the Henry isotherm.
The $X$ axis corresponds to the parameter $\mathrm{Da}_a \in [0, 0.01]$ and the $Y$ axis to the parameter $\mathrm{Da}_d \in [0, 0.1]$.
The exact parameters, which we will try to identify, are $\mathrm{Da}_a = 0.005$, $\mathrm{Da}_d = 0.05$.\\
The algorithm was launched with the following parameters: $n = 2$, $m = 3$, $d_x = 0.0002, d_y = 0.002$, $\delta = 0.002$, $sb = 20$, $abb = 20$, $abp = 10$, $\tau = 3$.
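In the notation of the sketch given earlier, this run corresponds to a call of the following form (purely illustrative; \texttt{J\_dd} stands for the two-parameter residual functional of this case):
\begin{verbatim}
best, J_best = search(J_dd, lo=[0.0, 0.0], hi=[0.01, 0.1],
                      d=[0.0002, 0.002], n=2, m=3,
                      sb=20, abb=20, abp=10)
\end{verbatim}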
We obtained the following results:\\
J(0.0050001, 0.049995) = 2.79961e-05 \\
J(0.0050081, 0.050061) = 0.000768\\
J(0.0050086, 0.049906) = 0.000947\\
J(0.0049929, 0.049853) = 0.001008\\
J(0.0050096, 0.050515) = 0.002408\\
Total number of direct problem evaluations: 1720\\
\begin{figure}[h!]
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1.0\linewidth]{func_dd.png}}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1.0\linewidth]{func_3d_dd.png}}
\end{minipage}
\caption{Diffusion dominated case}
\label{dd_deter}
\end{figure}
\begin{figure}[h!]
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1.0\linewidth]{bee_crowd_dd_1.png}}
\caption{2 best and 3 perspective locations.}
\label{dd_mbc_alot}
\end{minipage}
\hfill
\begin{minipage}[t]{0.45\linewidth}
\center{\includegraphics[width=1.0\linewidth]{bee_crowd_dd_2.png}}
\caption{1 best location.}
\label{dd_mbc_one}
\end{minipage}
\end{figure}
\FloatBarrier
For this case we deliberately took 2 best locations and 3 perspective ones to show how the bee colonies all converge to one point. As can be seen in Fig.\ref{dd_mbc_alot}, all the bees merged into a single point, and the number of direct problem evaluations is moderate. In this case the functional has a smooth shape with only one global extremum. If such a priori information about the functional is available, we can run the algorithm with only 1 best location and 0 perspective locations, the other parameters being the same. For 1 best location we obtained the following result: \\
J(0.0050001, 0.0499864) = 6.4601e-05 \\
Total number of direct problem evaluations: 220 \\
The algorithm gives us the exact coordinates of the extremum with a substantially smaller number of direct problem evaluations (see Fig.\ref{dd_mbc_one}). In this regime it behaves more like a multistage parameter identification.
\section{Conclusion}
The Modified Bee Colony Algorithm for the identification of unknown adsorption and desorption rates in the Langmuir isotherm is presented in conjunction with pore scale simulation of reactive flow. Simulation results show that the MBC algorithm can be efficiently employed to solve multimodal engineering problems of high dimensionality.
\begin{enumerate}
\item
The 2D mathematical model of the direct problem includes the steady state Stokes equations and a convection--diffusion equation supplemented with Robin type boundary conditions accounting for adsorption and desorption; the Langmuir isotherm describes the kinetics. A simple pore scale geometry, described by a periodic arrangement of cylindrical obstacles, is considered for illustration of the identification procedure. The key dimensionless parameters are specified. The approach is applicable for a wide range of microgeometries and process parameters.
\item
The numerical solution is based on triangular grids and FEM with Taylor--Hood elements. Spatial and temporal convergence investigations are performed.
\item
Mass transport is simulated for a given velocity field (one way coupling). The numerical solution is based on FEM with piecewise linear elements; the Crank--Nicolson scheme is used for the time discretization. Sensitivity studies are carried out to investigate the influence of different parameters on the reactive transport through the porous media.
\item
The Modified Bee Colony Algorithm is presented and tested. A comparison is made with our previous work on the identification of parameters in the diffusion dominated case.
\end{enumerate}
\section*{Acknowledgements}
The work was supported by Mega-grant of the Russian Federation Government (N 14.Y26.31.0013), and by grant from the Russian Foundation for Basic Research (project 19-31-90108).
\section{Introduction}
\label{sec:intro}
Transformation optics \cite{Pendry:2006Sc,Leonhardt:2006Nj,Leonhardt:2008Oe} has in recent years become an important design tool for new types of artificial materials (metamaterials.) In transformation optics one starts from the constitutive equations of vacuum\footnote{Often the terms constitutive equation or constitutive relation refer to media, only. Since in the present context the constitutive relations of transformation media are closely related to the vacuum relation, we follow Refs.~\cite{Post,Leonhardt:2006Nj,Bergamin:2008Pa} and call the following equation constitutive relation of vacuum.} of a possibly curved spacetime, written in generic coordinates \footnote{Throughout this paper all equations are written in natural units, $\epsilon_0 = \mu_0 = c = 1$. Furthermore Einstein's summation convention is used, which implies a summation over all repeated indices. Further details of our notation are explained in Appendix \ref{sec:conventions}.}:
\begin{align}
\label{epsorig}
D^i &= \frac{g^{ij}}{\sqrt{-g_{00}}} E_j - \frac{g_{0j}}{g_{00}} \epsilon^{jil} H_l \\
\label{muorig}
B^i &= \frac{g^{ij}}{\sqrt{-g_{00}}} H_j + \frac{g_{0j}}{g_{00}} \epsilon^{jil} E_l
\end{align}
It is then observed that these equations resemble the constitutive equations of a special medium with $\epsilon^{ij} = \mu^{ij} = g^{ij}/\sqrt{-g_{00}}$ and bi-anisotropic contributions $\xi^{ij} = - \zeta^{ij} = - g_{0l} \epsilon^{lij}/g_{00}$ \cite{Landau2}. Thus, if empty spacetime can look like a medium, it should be possible to find media that look like empty spacetime. Transformation media \cite{Leonhardt:2006Sc,Pendry:2006Sc,Leonhardt:2006Nj} are media of this type. They are linear media that may be interpreted as mimicking a different spacetime. In other words, the solutions of the Maxwell equations in the medium, which is placed in a certain spacetime called laboratory space, can be mapped on the solutions of the electromagnetic fields propagating in a different, but empty spacetime. Transformation optics thus is a tool to express the solution of the Maxwell equations in a yet unexplored medium (the transformation medium) in terms of well-known solutions (here vacuum solutions.) Additionally, transformation optics allows one to design media with a pre-defined propagation of light (a pre-defined solution of Fermat's principle \cite{Leonhardt:2008Oe}) in an easy way, since this propagation is encoded geometrically in the chosen transformation of spacetime, locally expressed as a coordinate transformation. As the most popular examples, light can be guided around a volume in space (e.g.\ a sphere or a cylinder) leading to an invisibility cloak \cite{Pendry:2006Sc,Leonhardt:2006Sc}, or the transformation medium can mimic an inversion of space, which leads to a perfect lens \cite{Leonhardt:2006Nj}. In these applications it appeared natural to make the interface between vacuum (outer space) and the transformation medium reflectionless. Though transformation optics today is used in a much broader context than these two examples, somewhat surprisingly, the complete conditions for a reflectionless interface have only been presented recently \cite{Yan:2008Rl} and the study of boundary conditions at a generic interface still seems to be missing.
Despite its successes, transformation optics also has its limitations, mainly in terms of the accessible effective media parameters. As can be seen from Eqs.\ \eqref{epsorig} and \eqref{muorig}, the constitutive relation of vacuum always has the form of a reciprocal medium with permittivity and permeability being equal. These restrictions led to ideas of how the original setup could be generalized, either within the geometric approach of transformation optics \cite{Bergamin:2008Pa} or by replacing geometric transformations by direct field transformations \cite{Tretyakov:2008Gf}. Though these generalizations also provide a mapping of vacuum solutions of the Maxwell equations onto the solutions of the medium, the implications of this map are often less immediate than in standard transformation optics, where the medium just mimics a free spacetime. Thus a thorough analysis of reflection and refraction at interfaces between such media, or between a medium of this type and vacuum, is important to improve our understanding of these tools.
It is the aim of this paper to study interfaces between two arbitrary transformation media in detail. We will concentrate on standard transformation media or media of the generalization of Ref.~\cite{Bergamin:2008Pa}, which will be explained in more detail in Sect.~\ref{sec:troptics}. At these interfaces standard boundary conditions,
\begin{align}
\label{BC1}
(\mathbf D_1 - \mathbf D_2 )\cdot \mathbf n &= -\sigma\ , & (\mathbf B_1 - \mathbf B_2 )\cdot \mathbf n &= 0\ , \\
\label{BC2}
(\mathbf E_1 - \mathbf E_2) \times \mathbf n &= 0\ , & (\mathbf H_1 - \mathbf H_2) \times \mathbf n &= \mathbf K\ ,
\end{align}
will be imposed ($\mathbf n$ is the unit vector normal to the interface and the indices 1 and 2 refer to the two different sides of the interface.) The implications of these boundary conditions can be studied in two different ways. Since a solution of the Maxwell equations in a transformation medium is expressed in terms of a vacuum solution, one can ask the question for which combinations of media the boundary conditions are met if the same vacuum solution is used on both sides of the interface. In this approach (worked out in detail in Sect.~\ref{sec:boundaryI}) the implementation of the boundary conditions yields constraints on the geometric transformations used to describe the two media and it is shown that these constraints can be reduced to two simple rules. They provide a sufficient (though not necessary) condition for a reflectionless interface. If these constraints are met everywhere on the interface, this interface disappears completely in the formulation in terms of vacuum solutions and consequently becomes invisible.
Of course, one can consider the interface between two arbitrary transformation media, which in general is not reflectionless. In this case, as discussed in Sect.~\ref{sec:boundaryII}, the solutions in the two different media are expressed in terms of two different vacuum solutions. Still, it is possible to re-formulate the boundary conditions completely in terms of the vacuum solutions and the geometric manipulations. In this reformulation the boundary conditions are no longer independent of the media as is the case in Eqs.\ \eqref{BC1} and \eqref{BC2}, but depend on the geometric transformations and thus on the characteristics of the media (the physical content of the boundary conditions of course remains unchanged.) As discussed in Sect.\ \ref{sec:conclusions} one advantage of this formulation is the fact that the implications of the boundary conditions for a whole class of media derive from only one specific formula. As basic examples we will show in Sect.\ \ref{sec:boundaryII} how the laws of reflection and refraction at the interface between vacuum and a homogeneous and isotropic medium can be understood in a geometric way.
\section{Generalized transformation optics}
\label{sec:troptics}
In this section a brief introduction to standard transformation optics and a generalization thereof are presented. Originally, transformation optics was introduced as a tool to design invisibility cloaks \cite{Pendry:2006Sc,Leonhardt:2006Sc}, the full concept as used in this paper was introduced in Ref.~\cite{Leonhardt:2006Nj}, a pedagogical review was presented by the same authors in Ref.~\cite{Leonhardt:2008Oe}.
As already mentioned in the introduction, transformation optics is based on the fact that the constitutive relations of vacuum of a possibly curved spacetime and written in general coordinates resemble those of a reciprocal medium. Thus it should be possible to find media, which may be interpreted as to mimic an empty spacetime, which however is different from the spacetime the medium is placed in. Though in principle no restrictions on the nature of the spacetime to be mimicked exist, the two spacetimes in most applications are related by a diffeomorphism, locally implemented as a coordinate transformation (see Fig.~\ref{fig:leonhardt}.)
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,bb=0 0 256 43]{leonhardt.eps}
\caption{Illustration and notation for standard transformation optics.}
\label{fig:leonhardt}
\end{figure}
To design a specific medium by means of transformation optics one starts with the definition of laboratory space, which is the spacetime where the transformation medium shall be placed in. In this spacetime a coordinate system with coordinates $x^{\mu}$ is chosen and in these coordinates the spacetime metric $g_{\mu\nu}(x)$ takes a certain form. One can write down the vacuum solutions of the Maxwell equations in laboratory space, which we denote by $\mathbf E(x)$, $\mathbf B(x)$, $\mathbf D(x)$ and $\mathbf H(x)$. Now a mapping from laboratory space to a different spacetime, called electromagnetic space, is defined. Mathematically this transformation is a diffeomorphism, locally it is implemented as a coordinate transformation $x^\mu \rightarrow \bar x^\mu (x)$. Since electrodynamics is invariant under diffeomorphisms, the two spacetimes are physically equivalent and one simply rewrites the vacuum solutions in terms of the new coordinates $\bar x^{\mu}$ and the new metric $\bar g_{\mu\nu}$. Until now no medium parameters have been defined, but the Maxwell equations just have been rewritten in terms of different coordinates. In a second step, it is claimed that the physical spacetime still is laboratory space, but the electromagnetic fields shall propagate as if the spacetime was electromagnetic space. This makes the presence of a medium necessary. Technically this means that the solutions $\bar{\mathbf E}(\bar x)$, $\bar{\mathbf B}(\bar x)$, $\bar{\mathbf D}(\bar x)$ and $\bar{\mathbf H}(\bar x)$, which are solutions of the Maxwell equations in the spacetime with metric $\bar g_{\mu\nu}$, have to be turned back into solutions in the spacetime with metric $g_{\mu\nu}$. Since the Maxwell equations in general coordinates only depend on the determinant of the spacetime metric (see Eqs.~\eqref{EOMcomp} and \eqref{covder}), this can be achieved by a simple rescaling of fields. As has been shown in Refs.~\cite{Leonhardt:2006Nj,Bergamin:2008Pa}, a possible rescaling is
\begin{align}
\label{scal1}
\tilde E_i &= \bar s \bar E_i\ , & \tilde B^i &= \bar \sigma \frac{\sqrt{\bar \gamma}}{\sqrt{\gamma}} \bar B^i\ , \\
\label{scal2}
\tilde D^i &= \bar \sigma \frac{\sqrt{-\bar g}}{\sqrt{-g}} \frac{\partial \bar x^0}{\partial x^0} \bar D^i\ , & \tilde H_i &= \bar s \frac{\sqrt{-g_{00}}}{\sqrt{-\bar g_{00}}} \frac{\partial \bar x^0}{\partial x^0} \bar H_i\ .
\end{align}
Here, $\gamma$ is the determinant of the induced space metric according to Eq.~\eqref{indmetric}; $\bar s = \pm 1$ is $+1$ if the transformation $x^{\mu}\rightarrow \bar x^{\mu}$ does not change the orientation of the manifold (i.e.\ a right-handed coordinate system in laboratory space is mapped onto a right-handed one in electromagnetic space) and $-1$ otherwise. Finally, $\bar \sigma = \pm 1$ is $+1$ if space and spacetime in electromagnetic space have the same orientation, and $-1$ otherwise. The signs $\bar \sigma$ and $\bar s$ play an important role in the context of negative refractive index media \cite{Leonhardt:2006Nj} and will be written explicitly in all equations to keep full generality of the result. Since this paper does not deal specifically with negative refractive index media, these signs are not explained in detail at this point. Further comments are made in the Appendix; for a detailed discussion we refer to \cite{Bergamin:2008Pa}. Our notation is also summarized in Fig.~\ref{fig:leonhardt} and explained in more detail in the Appendix \footnote{As is seen from Fig.~\ref{fig:leonhardt}, the explicit coordinates of laboratory space with medium are distinguished from the ones of empty laboratory space. This is necessary, since a particular point in spacetime may be represented by \emph{different} values of the coordinates in empty laboratory space and in laboratory space with the medium.}.
If the barred electromagnetic fields constitute a solution of the Maxwell equations in electromagnetic space then it is easy to check that the fields with a tilde are indeed a solution in laboratory space. It is important to notice that the rescalings \eqref{scal1} and \eqref{scal2} are not a symmetry transformation and thus the barred solutions are not physically equivalent to the solutions labeled with a tilde. Instead, by means of this rescaling a medium has been introduced which mimics the electromagnetic space in laboratory space. From the constitutive relation in electromagnetic space, Eqs.~\eqref{epsorig} and \eqref{muorig} in terms of barred variables, the constitutive relation of the transformation medium is easily derived with Eqs.~\eqref{scal1} and \eqref{scal2} as
\begin{align}
\label{epstilde2}
\tilde{ D}^i &= \bar s \frac{\bar g^{ij}}{\sqrt{-\bar g_{00}}} \frac{\sqrt{\bar \gamma}}{\sqrt{\gamma}} \tilde E_j - \frac{\bar g_{0j}}{\bar g_{00}} \epsilon^{jil} \tilde{ H}_l\ , \\
\label{mutilde2}
\tilde B^i &= \bar s \frac{\bar g^{ij}}{\sqrt{-\bar g_{00}}} \frac{\sqrt{\bar \gamma}}{\sqrt{\gamma}} \tilde{ H}_j + \frac{\bar g_{0j}}{\bar g_{00}} \epsilon^{jil} \tilde E_l\ .
\end{align}
We mention that transformation optics not only provides the constitutive relation and the solutions of the Maxwell equations, but also defines the dispersion relation\footnote{This dispersion relation is the result of the mathematical manipulations of transformation optics and it is not claimed that it corresponds to any real medium.} in a purely geometric way. The vacuum dispersion relation, $\mathbf{k}^2 = \omega^2$, rewritten in terms of the generic coordinates of the electromagnetic space becomes $\bar g^{\mu\nu} k_{\mu} k_{\nu} = 0$ with $(k_{\mu}) = (\omega, k_i)$. This relation also has to hold in the transformation medium, but is now interpreted in laboratory space.
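For the media considered in the following, where space and time do not mix ($\bar g^{0i} = 0$), this dispersion relation takes the more familiar form
\begin{equation}
\bar g^{00} \omega^2 + \bar g^{ij} k_i k_j = 0 \qquad \Leftrightarrow \qquad \bar g^{ij} k_i k_j = \frac{\omega^2}{-\bar g_{00}}\ ,
\end{equation}
since $\bar g^{00} = 1/\bar g_{00}$ in this case.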
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth,bb=0 0 256 105]{triple}
\end{center}
\caption{Illustration and notation of generalized transformation optics according to Ref. \cite{Bergamin:2008Pa}. Notice that the diffeomorphism I only acts on the fields $\mathbf E$ and $\mathbf B$, while diffeomorphism II acts on $\mathbf D$ and $\mathbf H$.}
\label{fig:triplespace}
\end{figure}
In Ref.~\cite{Bergamin:2008Pa} an extension of transformation optics was introduced, which is based on the same geometrical principles as standard transformation optics but allows to design media not covered by the constitutive relations \eqref{epstilde2} and \eqref{mutilde2}. This extension starts from the observation that the Maxwell equations,
\begin{align}
\label{maxwell}
\nabla_i B^i &= 0\ , & \nabla_0 B^i + \epsilon^{ijk}\partial_j E_k &= 0\ , \\
\label{maxwell2}
\nabla_i D^i &= \rho\ , & \epsilon^{ijk} \partial_j H_k - \nabla_0 D^i &= j^i\ ,
\end{align}
split into two sets of equations with mutually exclusive field content. Thus, the sets of fields $(\mathbf E, \mathbf B)$ and $(\mathbf D, \mathbf H)$ can be transformed independently \footnote{These independent transformations do not establish a symmetry, since the constitutive relation is not invariant. Nevertheless they are invariant transformations of the equations of motion.}. This means that the generalized transformation media do not mimic a single electromagnetic space, but rather two electromagnetic spaces (see Fig.~\ref{fig:triplespace}.) In these media $\mathbf E$ and $\mathbf B$ propagate as if the spacetime had metric $\bar g_{\mu\nu}$, while $\mathbf D$ and $\mathbf H$ mimic a spacetime with metric $\bbar g_{\mu\nu}$. Still, all three spaces (laboratory space and the two electromagnetic spaces) are related by diffeomorphisms and thus are physically equivalent. As indicated in Fig.~\ref{fig:triplespace}, all variables referring to the electromagnetic space of $\mathbf E$ and $\mathbf B$ are written with a bar, while the variables of the electromagnetic space of $\mathbf D$ and $\mathbf H$ are double-barred. As in standard transformation optics the solution in laboratory space is obtained from the solution in the electromagnetic spaces by suitable rescalings of the fields. These rescalings are equivalent to Eqs.~\eqref{scal1} and \eqref{scal2} if on the right hand side of the two equations in \eqref{scal2} all barred variables are replaced by double-barred ones. Now, the most general constitutive relation of generalized transformation optics can be derived as
\begin{align}
\label{tripeleps}
\tilde{D}^i &= - \bar s \frac{\sqrt{-\bbar g}}{\sqrt{\gamma} g_{\bar 0 \bbar{0}}} g^{\bbar{i} \bar j} \tilde E_j - \bar s \bbar{s} \frac{\sqrt{-\bar g}\sqrt{-\bbar{g}}}{\gamma g_{\bar 0 \bbar{0}}} g^{\bbar{i} \bar k} g^{\bbar{0} \bar l} \epsilon_{klm} g^{\bar m\bbar j} \tilde{ H}_j\ , \\
\label{tripelmu}
\tilde B^i &= - \bbar s \frac{\sqrt{-\bar g}}{\sqrt{\gamma} g_{\bar 0 \bbar{0}}} g^{\bar{i} \bbar j} \tilde{ H}_j + \bar s \bbar{s} \frac{\sqrt{-\bar g}\sqrt{-\bbar{g}}}{\gamma g_{\bar 0 \bbar{0}}} g^{\bar{i} \bbar k} \epsilon_{klm} g^{\bbar l \bar 0} g^{\bbar m \bar j} \tilde E_j\ .
\end{align}
Here $\bar s$ and $\bbar s$ are $+1$ if the corresponding maps do not change the orientation of the manifold and $-1$ otherwise. As abbreviation the notation
\begin{equation}
\label{gbbb}
g^{\bbar{\mu} \bar \nu} = \frac{\partial \bbar{x}^{\mu}}{\partial x^{\rho}} \frac{\partial \bar{x}^{\nu}}{\partial x^{\sigma}} g^{\rho \sigma} = \bbar{g}^{\mu \rho} \frac{\partial \bar{x}^{\nu}}{\partial \bbar x^{\rho}} = \frac{\partial \bbar{x}^{\mu}}{\partial \bar x^{\rho}} \bar g^{\rho \nu}
\end{equation}
has been introduced. It is important to realize that $g^{\bbar{\mu} \bar \nu}$ is not an (inverse) spacetime metric; in particular it need not be a symmetric matrix and it does not necessarily have signature $(3,1)$.
The most important physical differences between the two constitutive relations \eqref{epstilde2}/\eqref{mutilde2} and \eqref{tripeleps}/\eqref{tripelmu} can be summarized as follows: In standard transformation optics permittivity and permeability are always equal and proportional to the induced spatial metric $\bar g^{ij} = \bar \gamma^{ij}$ (see Eq.~\eqref{indmetric}.) This implies that $\epsilon^{ij} = \mu^{ij}$ are symmetric matrices with three positive or three negative eigenvalues. In contrast to this result, $g^{\bbar{i} \bar j}$ in Eq.~\eqref{tripeleps} and $g^{\bar{i} \bbar j}$ in Eq.\ \eqref{tripelmu} are not spatial metrics and thus non-reciprocal media with an anti-symmetric contribution to permittivity and permeability can be described within the generalized setup. Also, the eigenvalues of $g^{\bbar{i} \bar j}$ need not all be positive, which allows one to obtain media with strong anisotropy (also called indefinite media.) Though permittivity and permeability are not equal, they are still related as $\bbar{s} \sqrt{-\bbar{g}} \mu^{ij} = \bar s \sqrt{-\bar g} \epsilon^{ji}$. As an interesting special case a proportionality $\mu^{ij} = \alpha \epsilon^{ji}$ with a positive or negative proportionality constant $\alpha$ is conceivable. An example of this type is presented in Sect.~\ref{sec:boundaryII}.
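The relation $\bbar{s} \sqrt{-\bbar{g}}\, \mu^{ij} = \bar s \sqrt{-\bar g}\, \epsilon^{ji}$ follows directly from Eqs.~\eqref{tripeleps} and \eqref{tripelmu}: reading off
\begin{equation}
\epsilon^{ij} = - \bar s \frac{\sqrt{-\bbar g}}{\sqrt{\gamma}\, g_{\bar 0 \bbar 0}}\, g^{\bbar i \bar j}\ , \qquad \mu^{ij} = - \bbar s \frac{\sqrt{-\bar g}}{\sqrt{\gamma}\, g_{\bar 0 \bbar 0}}\, g^{\bar i \bbar j}\ ,
\end{equation}
the symmetry of $g^{\rho\sigma}$ in Eq.~\eqref{gbbb} implies $g^{\bar i \bbar j} = g^{\bbar j \bar i}$, from which the stated relation is immediate.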
Standard transformation optics intends to mimic as well as possible the electromagnetic space by means of a medium. In particular, in transformation media the trajectories of light according to Fermat's principle are equivalent to the ones in electromagnetic space (but differ from the ones in laboratory space without medium; this aspect is discussed in detail in Ref.~\cite{Leonhardt:2008Oe}.) This is also seen from the fact that the Poynting vector $S^i = \epsilon^{ijk} E_j H_k$ transforms under the spatial part of the coordinate transformations in the same way as $B^i$ and $D^i$. This does not apply to generalized transformation optics as defined in Ref.~\cite{Bergamin:2008Pa}, since $\mathbf E$ and $\mathbf H$ are not affected by the same coordinate transformation (see Fig.~\ref{fig:triplespace} and the discussion in Ref.~\cite{Bergamin:2008Pa}.)
Finally we notice that bi-anisotropic contributions to the constitutive relations require a mixing of space and time in the transformation from laboratory space to electromagnetic space, such that $\bar g_{0i} \neq 0$ and/or $\bbar g_{0i} \neq 0$. In the main part of this paper we will not consider media of this type, some comments about bi-anisotropic media are made in Sect.~\ref{sec:conclusions}.
\section{Boundary conditions for media without bi-anisotropic terms}
\label{sec:boundary}
In the main part of this work we will assume media without bi-anisotropic contributions to the constitutive relation, in other words $\bar g_{0i}$ in Eqs.\ \eqref{epstilde2}, \eqref{mutilde2} and $g_{\bar 0 \bbar i}$, $g_{\bbar 0 \bar i}$ in Eqs.\ \eqref{tripeleps}, \eqref{tripelmu} are assumed to vanish. Bi-anisotropic media in principle can be treated along the same lines, however, the specific results get much more complicated.
We want to study interfaces between two media of this type or between a medium and empty space, whereby empty space is interpreted as a trivial transformation medium (all mappings are identity maps, thus the vacuum solution in laboratory space is mapped onto itself.) At the interface we will impose the standard boundary conditions \eqref{BC1} and \eqref{BC2}. Furthermore surface charge and current will be set to zero, $\sigma = 0$ and $\mathbf K = 0$. Of course, different types of boundary conditions could be studied as well, which however should be rather straightforward once the formalism itself has been developed.
As mentioned in the introduction already, the consequence of the boundary conditions \eqref{BC1} and \eqref{BC2} can be studied in two different ways. Since the solutions of the Maxwell equations in the media are constructed out of vacuum solutions we can ask the question under which restrictions on the transformations the boundary conditions are met if the same vacuum solution is used on both sides of the interface. This means that the interface disappears in electromagnetic space. Since the trajectories of light rays in the media are equivalent to the ones in electromagnetic space it is evident that all interfaces of this type are reflectionless. Alternatively interfaces between two arbitrary transformation media can be considered. Since these interfaces are not necessarily reflectionless, the interface does not disappear in electromagnetic space, but will be visible as a discontinuity between two different vacuum solutions. Still, it should be possible to express the laws of reflection and refraction in terms of vacuum solutions.
In both cases the exact knowledge of the maps from the vacuum solutions in laboratory space onto the media solutions is indispensable. In this context we should mention that these maps, as defined in Refs.~\cite{Leonhardt:2006Nj,Bergamin:2008Pa}, are not unique. As explained in the previous section, generalized transformation optics consists of two steps, a transformation of the equations of motion from laboratory space to the electromagnetic spaces (the diffeomorphisms I and II) and a suitable re-interpretation of the result in laboratory space \footnote{From now on, the notation of generalized transformation optics will be used, unless explicitly mentioned differently. The case of standard transformation optics always follows by a simple identification $\bbar x^\mu = \bar x^\mu$.}. Under the diffeomorphisms the fields transform according to the standard law
\begin{align}
\label{barEB}
\bar E_i &= \frac{\partial x^0}{\partial \bar x^0} \frac{\partial x^j}{\partial \bar x^i} E_j\ , & \bar B^i &= \frac{\partial \bar x^i}{\partial x^j} B^j\ , \\
\label{barDH}
\bbar D^i &= \bbar{\sigma} \frac{\partial \bbar x^i}{\partial x^j} D_j\ , & \bbar H_i &= \bbar{\sigma} \frac{\partial x^0}{\partial \bbar x^0}\frac{\partial x^j}{\partial \bbar x^i} H_j\ .
\end{align}
The additional signs $\bbar \sigma$ are needed in order to obtain the correct transformation of the excitation tensor $\mathcal H^{\mu\nu}$ as defined in Eq.\ \eqref{spacevec2} and are a consequence of the factors $\sqrt{-g_{00}}$ in that equation. As coordinate transformations (or local representations of diffeomorphisms) \eqref{barEB} and \eqref{barDH} are unambiguous.
For the second step the rescaling
\begin{align}
\label{scal3}
\tilde E_i &= \bar s \bar E_i\ , & \tilde B^i &= \bar \sigma \frac{\sqrt{\bar \gamma}}{\sqrt{\gamma}} \bar B^i\ , \\
\label{scal4}
\tilde D^i &= \bbar \sigma \frac{\sqrt{-\bbar g}}{\sqrt{-g}} \frac{\partial \bbar x^0}{\partial x^0} \bbar D^i\ , & \tilde H_i &= \bbar s \frac{\sqrt{-g_{00}}}{\sqrt{-\bbar g_{00}}} \frac{\partial \bbar x^0}{\partial x^0} \bbar H_i\ .
\end{align}
has been proposed in Ref.~\cite{Bergamin:2008Pa} as an extension of Eqs.~\eqref{scal1} and \eqref{scal2} to generalized transformation optics. Obviously, any change in these equations which does not change the constitutive equations \eqref{tripeleps} and \eqref{tripelmu} represents a physically equivalent though mathematically different identification of the medium solution in terms of vacuum solutions of the Maxwell equations and thus establishes an ambiguity. Two ambiguities are important in the current work:
\begin{itemize}
\item Diffeomorphisms which leave the metric invariant (so-called isometries) constitute an ambiguity of this kind. They comprise translations, rotations and Lorentz transformations. This ambiguity is important since in the prescription of Refs.~\cite{Leonhardt:2006Nj,Bergamin:2008Pa} it is by no means obvious that a certain point on the interface with coordinates $\tilde x^{\mu}_I$ is represented by the same values of the coordinates $\bar x^{\mu}$ and $\bbar x^{\mu}$ on both sides of the interface. In other words, it is possible that the interface is represented by two different surfaces in the electromagnetic spaces of the two media. This leads to a discontinuity in the vacuum solutions, which however can be removed by a suitable choice of an isometry transformation.
\item The Maxwell equations are invariant under a rescaling of all fields by a constant factor $\alpha$ and thus this represents an ambiguity in the re-interpretation in laboratory space. We will keep this factor in the following and it will be seen below that it is relevant in the case of negative refractive indices. \footnote{Since the two sets of equations, \eqref{maxwell} and \eqref{maxwell2}, depend on two different sets of fields, a rescaling of the fields of \emph{one} set also represents an invariant transformation of the equations of motion. However, these rescalings change the constitutive relation as they change permittivity and permeability by a (not essentially positive) constant. This implies that the interpretation of a negative refractive index in transformation optics \cite{Leonhardt:2006Nj} actually is the effect of an ambiguity. This is most easily seen in the relativistically covariant formulation: according to Ref.~\cite{Leonhardt:2006Nj} a negative refractive index is found if $\epsilon^{ijk}$ in Eqs.~\eqref{maxwell} and \eqref{maxwell2} changes sign under the transformation, as this sign has to be absorbed by a rescaling of $\mathbf E$ and $\mathbf H$ with a negative constant. However, in the relativistically covariant formulation the (4-dimensional) Levi-Civita symbol only appears as an overall factor in the constraint $\epsilon^{\mu\nu\rho\sigma} \partial_\mu F_{\rho\sigma} = 0$, and thus the change of sign is without any consequences. Thus, starting from the relativistically covariant formulation, the negative refractive index appears rather as an ambiguity.}
\end{itemize}
Combining Eqs.\ \eqref{barEB}, \eqref{barDH} with Eqs.\ \eqref{scal3}, \eqref{scal4} and taking into account the new factor $\alpha$ one obtains
\begin{align}
\label{tildeE}
\tilde{E}_i\left(\tilde x = \bar x(x)\right) &= \alpha \bar s \frac{\partial x^0}{\partial \bar x^0} \frac{\partial x^j}{\partial \bar x^i} E_j(x)\ , \\
\label{tildeH}
\tilde{H}_i\left(\tilde x = \bbar x(x)\right) &= \alpha \bbar s \bbar \sigma \frac{\sqrt{-\bbar g_{00}}}{\sqrt{-g_{00}}} \frac{\partial x^j}{\partial \bbar x^i} H_j(x)\ , \\
\label{tildeB}
\tilde B^i\left(\tilde x = \bar x(x)\right) &= \alpha \bar \sigma \frac{\sqrt{\bar \gamma}}{\sqrt{\gamma}} \frac{\partial \bar x^i}{\partial x^j} B^j(x) \ , \\
\label{tildeD}
\tilde D^i\left(\tilde x = \bbar x(x)\right) &= \alpha \frac{\sqrt{-\bbar g}}{\sqrt{-g}} \frac{\partial \bbar x^0}{\partial x^0} \frac{\partial \bbar x^i}{\partial x^j} D^j(x) \ .
\end{align}
First we want to show that the transformations of $\mathbf E$ and $\mathbf H$ and of $\mathbf B$ and $\mathbf D$ are---up to the difference in the associated electromagnetic spaces---equivalent if bi-anisotropic terms in the constitutive relation are absent. Indeed, if $\bar g_{0i} = \bbar g_{0i} = 0$ it easily follows from Eq.~\eqref{dettransform} that
\begin{equation}
\label{g00transform}
\sqrt{-\bar g_{00}} = \left\lVert \frac{\partial x^0}{\partial \bar x^0} \right\rVert \sqrt{-g_{00}} = \bar \sigma \frac{\partial x^0}{\partial \bar x^0} \sqrt{-g_{00}}
\end{equation}
with a similar relation for $\sqrt{-\bbar{g}_{00}}$ (the symbol $\|a\|$ is used to indicate the absolute value of a number $a$ in order to distinguish it from the determinant of a matrix, $|A|$.) Using this equation together with $\sqrt{-g} = \sqrt{-g_{00}}\sqrt{\gamma}$ in Eqs.~\eqref{tildeD} and \eqref{tildeH} straightforwardly establishes the equivalence; e.g.\ inserting $\sqrt{-\bbar g_{00}} = \bbar \sigma (\partial x^0/\partial \bbar x^0) \sqrt{-g_{00}}$ into Eq.~\eqref{tildeH} and using $\bbar \sigma^2 = 1$ immediately yields Eq.~\eqref{tildeH2} below. Applying the transformation law \eqref{dettransform} to the spatial metric,
\begin{equation}
\label{gammatransform}
\sqrt{\bar \gamma} = \sqrt{\left\lvert \frac{\partial x^i}{\partial \bar x^j} \right\lvert^2} \sqrt{\gamma} = \bar s \bar \sigma \left\lvert \frac{\partial x^i}{\partial \bar x^j} \right\lvert \sqrt{\gamma}\ ,
\end{equation}
allows to rewrite Eqs.~\eqref{tildeE}--\eqref{tildeD} as
\begin{align}
\label{tildeEH}
\tilde{E}_i\left(\tilde x = \bar x(x)\right) &= \alpha \bar s \frac{\partial x^0}{\partial \bar x^0} \frac{\partial x^j}{\partial \bar x^i} E_j(x)\ , \\
\label{tildeH2}
\tilde{H}_i\left(\tilde x = \bbar x(x)\right) &= \alpha \bbar s \frac{\partial x^0}{\partial \bbar x^0} \frac{\partial x^j}{\partial \bbar x^i} H_j(x)\ , \\
\label{tildeBD}
\tilde B^i\left(\tilde x = \bar x(x)\right) &= \alpha \bar s \left\lvert \frac{\partial x^k}{\partial \bar x^l}\right\lvert \frac{\partial \bar x^i}{\partial x^j} B^j(x) \ , \\
\label{tildeD2}
\tilde D^i\left(\tilde x = \bbar x(x)\right) &= \alpha \bbar s \left\lvert \frac{\partial x^k}{\partial \bbar x^l}\right\lvert \frac{\partial \bbar x^i}{\partial x^j} D^j(x) \ .
\end{align}
By means of the relations \eqref{tildeEH}--\eqref{tildeD2} the boundary conditions, which are imposed on the media solutions, can be translated into conditions imposed on the vacuum solutions.
To be able to write down these equations some notations and conventions have to be introduced. Though the interface forms a surface in space, the boundary conditions only will be studied in one particular point of this interface, whose coordinates in laboratory spacetime are denoted by $\tilde x^{i}_I$. Furthermore, it is convenient to choose a certain time instance $\tilde t_I$ as well, since we allow transformations of time. Most of the equations in this section only hold at this specific point in laboratory space, $\tilde x^\mu_I$, which is not indicated specifically if the meaning of the equation is obvious in the context.
Since most equations are written in an index notation, an adapted coordinate system will be used. Without loss of generality we can assume that at the point $\tilde x^{\mu}_I$ the space vector parallel to the interface is represented as $\mathbf x_{\parallel} = (x^{A},0)$, where indices with capital Latin letters take the values 1,2. The vector perpendicular to the interface accordingly is written as $\mathbf x_\perp = (0,0,x^{\perp})$. In concrete applications we will also use $x^\perp = z$, $(x^A) = (x,y)$. To distinguish the two media we will denote them as the \emph{left medium} (index $L$) and the \emph{right medium} (index $R$). This situation is also illustrated in Fig.~\ref{fig:boundarynot}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{boundary3.eps}
\caption{Notations used to describe the interface between two transformation media. The interface between vacuum and a medium follows straightforwardly if one transformation is reduced to the trivial identity map.}
\label{fig:boundarynot}
\end{figure}
In our notation the boundary conditions \eqref{BC1} and \eqref{BC2} in absence of surface charges and currents can be written as $\tilde D_L^\perp = \tilde D^\perp_R$, $\tilde B_L^\perp = \tilde B^\perp_R$ and $\tilde E^L_{A} = \tilde E^R_{A}$, $\tilde H^L_{A} = \tilde H^R_{A}$ and with Eqs.~\eqref{tildeEH}--\eqref{tildeD2}
{\allowdisplaybreaks
\begin{gather}
\label{VBCE}
\begin{split}
\tilde{E}^L_A &= \alpha_L \bar s_L \frac{\partial x^0}{\partial \bar x_L^0} \frac{\partial x^j}{\partial \bar x_L^A} E^L_j=\\ &= \alpha_R \bar s_R \frac{\partial x^0}{\partial \bar x_R^0} \frac{\partial x^j}{\partial \bar x_R^A} E^R_j = \tilde{E}^R_A\ ,
\end{split}\\
\label{VBCH}
\begin{split}
\tilde{H}^L_A &= \alpha_L \bbar s_L \frac{\partial x^0}{\partial \bbar x_L^0} \frac{\partial x^j}{\partial \bbar x_L^A} H^L_j =\\& = \alpha_R \bbar s_R \frac{\partial x^0}{\partial \bbar x_R^0} \frac{\partial x^j}{\partial \bbar x_R^A} H^R_j= \tilde{H}^R_A\ ,
\end{split}\\
\label{VBCB}
\begin{split}
\tilde B^\perp_L &= \alpha_L \bar s_L \left\lvert \frac{\partial x^k}{\partial \bar x_L^l}\right\lvert \frac{\partial \bar x_L^\perp}{\partial x^j} B_L^j =\\&= \alpha_R \bar s_R \left\lvert \frac{\partial x^k}{\partial \bar x_R^l}\right\lvert \frac{\partial \bar x_R^\perp}{\partial x^j} B_R^j = \tilde B^\perp_R\ ,
\end{split}\\
\label{VBCD}
\begin{split}
\tilde D_L^\perp &= \alpha_L \bbar s_L \left\lvert \frac{\partial x^k}{\partial \bbar x_L^l}\right\lvert \frac{\partial \bbar x_L^\perp}{\partial x^j} D_L^j =\\&= \alpha_R \bbar s_R \left\lvert \frac{\partial x^k}{\partial \bbar x_R^l}\right\lvert \frac{\partial \bbar x_R^\perp}{\partial x^j} D_R^j = \tilde D_R^\perp\ .
\end{split}
\end{gather}
}
These are the boundary conditions in terms of the vacuum solutions which will be considered in the following.
\subsection{Extending the vacuum solution across the interface}
\label{sec:boundaryI}
In this section restrictions on the transformations shall be derived under which the boundary conditions \eqref{BC1} and \eqref{BC2} are satisfied automatically, in other words under which the same vacuum solution $(\mathbf E, \mathbf H)$ of the Maxwell equations can be used on both sides of the interface:
\begin{align}
E^L_i\left(x_L(\tilde x_I)\right) &= E^R_i\left(x_R(\tilde x_I)\right)\ , \\ H^L_i\left(x_L(\tilde x_I)\right) &= H^R_i\left(x_R(\tilde x_I)\right)\ .
\end{align}
Interfaces of this type disappear on the level of the vacuum solutions and thus they must be reflectionless.
Most of the results of this section have been obtained elsewhere already, in particular in Refs. \cite{Yan:2008Pi,Yan:2008Rl}. There are, however, a few differences in the approach taken here: in contrast to Refs.~\cite{Yan:2008Pi,Yan:2008Rl} we intend to construct a solution of the Maxwell equations in the media from the solutions in vacuo; furthermore, stretching and possible inversion of the time direction are included in our calculation; and finally we work in the generalized approach of transformation optics according to Ref.\ \cite{Bergamin:2008Pa}.
Since we intend to construct a vacuum solution that extends across the interface, the location of the latter in the electromagnetic spaces must be the same for the left and the right medium,
\begin{equation}
\bar x_L^{\mu} (\tilde x_I) = \bar x_R^{\mu} (\tilde x_I)\ , \qquad \bbar x_L^{\mu} (\tilde x_I) = \bbar x_R^{\mu} (\tilde x_I)
\end{equation}
At this point the first ambiguity discussed above is important, since it allows to adjust the two transformations in such a way that this equation holds at the point $\tilde x_I^{\mu}$ without changing the physics. Now we can choose without loss of generality our coordinate system $\tilde x^{\mu}$ according to the discussion above and as illustrated in Fig.~\ref{fig:boundarynot}.
To derive the restrictions on the transformations the map $x^{\mu}\rightarrow\bar{x}^\mu$ is considered first, which affects $\mathbf E$ and $\mathbf B$. The discussed extension of the vacuum solution across the interface is required to hold for any solution of the Maxwell equations in vacuum. Therefore, the ensuing restrictions are derived from \eqref{VBCE} and \eqref{VBCB} as
\begin{align}
\label{parallelI}
\alpha_L \bar s_L \frac{\partial x^0}{\partial \bar x_L^0} \frac{\partial x^j}{\partial \bar x_L^A} &= \alpha_R \bar s_R \frac{\partial x^0}{\partial \bar x_R^0} \frac{\partial x^j}{\partial \bar x_R^A}\ , \\
\label{normalI}
\alpha_L \bar s_L \left\lvert \frac{\partial x^k}{\partial \bar x_L^l}\right\lvert \frac{\partial \bar x_L^\perp}{\partial x^j} &= \alpha_R \bar s_R \left\lvert \frac{\partial x^k}{\partial \bar x_R^l}\right\lvert \frac{\partial \bar x_R^\perp}{\partial x^j}\ .
\end{align}
Multiplying Eq.\ \eqref{parallelI} by $\partial \bar x_L^B/\partial x^j$ and summing over $j$ yields the conditions
\begin{align}
\label{gsI}
\frac{\partial \bar x_L}{\partial \bar x_R} &= \frac{\partial \bar y_L}{\partial \bar y_R} = \bar s_L \bar s_R \frac{\alpha_L}{\alpha_R} \frac{\partial \bar x^0_R}{\partial \bar x^0_L}\ , & \frac{\partial \bar x_L}{\partial \bar y_R} &= \frac{\partial \bar y_L}{\partial \bar x_R} = 0\ .
\end{align}
As the maps shall be continuous, the two transformations must agree along the boundary and thus
\begin{equation}
\bar s_L \bar s_R \frac{\alpha_L}{\alpha_R} \frac{\partial \bar x^0_R}{\partial \bar x^0_L} = 1\ .
\end{equation}
To simplify condition \eqref{normalI} the relation $\lvert\partial x^i/\partial \bar x_R^j\lvert/\lvert\partial x^k/\partial \bar x_L^l\lvert = \lvert\partial \bar x_L^i/\partial \bar x_R^j\lvert$ may be used. Then the restriction on the transformation of the normal component $\bar x^{\perp}$ can be written as
\begin{equation}
\frac{\partial \bar x_L^\perp}{\partial \bar x_R^\perp} = \bar s_L \bar s_R \frac{\alpha_R}{\alpha_L}\left\lvert \frac{\partial \bar x_L^k}{\partial \bar x_R^l}\right\rvert = \frac{\partial \bar x^0_R}{\partial \bar x^0_L} \left\lvert \frac{\partial \bar x_L^k}{\partial \bar x_R^l}\right\rvert\ .
\end{equation}
By virtue of Eq.~\eqref{gsI} the determinant reduces to
\begin{equation}
\label{gsdetconstr}
\left\lvert \frac{\partial \bar x_L^k}{\partial \bar x_R^l}\right\lvert = \frac{\partial \bar x^\perp_L}{\partial \bar x^\perp_R} - \frac{\partial \bar x^\perp_L}{\partial \bar x^A_R} \frac{\partial \bar x^A_L}{\partial \bar x^\perp_R} = \frac{\partial \bar x^\perp_L}{\partial \bar x^\perp_R}\ ,
\end{equation}
where the fact that $\partial x^i/\partial \bar x^A_L = \partial x^i/\partial \bar x^A_R$ has been used. The different restrictions \eqref{gsI}--\eqref{gsdetconstr} now can be summarized in the following simple form:
\begin{align}
\label{simple}
\frac{\partial \bar x^A_L}{\partial \bar x^B_R} &= \delta^A_B\ , & \frac{\partial \bar x^0_L}{\partial \bar x^0_R} &= 1\ .
\end{align}
$\partial \bar x_L^\perp/\partial \bar x_R^\perp$ remains unrestricted. There exist no transformations with stretching and/or reversal of time that allow an extension across an interface. Still, space inversions of the type $\bar x^\perp_L = - \beta \bar x^\perp_R$ are possible, and in these cases $\alpha_R/\alpha_L = -1$. Without loss of generality it can then be assumed that $\alpha = \pm 1$ in all mappings.
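As a simple illustration, consider vacuum on the left, $\bar x^\mu_L = x^\mu$, and a right medium of standard transformation optics ($\bbar x^\mu_R = \bar x^\mu_R$) generated by $\bar x^0_R = x^0$, $\bar x^A_R = x^A$, $\bar x^\perp_R = - x^\perp/\beta$ with $\beta > 0$. The conditions \eqref{simple} are satisfied, $\partial \bar x^\perp_L/\partial \bar x^\perp_R = -\beta$ remains free, and since $\bar s_L \bar s_R = -1$ the extension requires $\alpha_R/\alpha_L = -1$. This is precisely the reflectionless vacuum--medium interface of the perfect lens \cite{Leonhardt:2006Nj}.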
The whole calculation needs to be redone for the fields $\mathbf D$ and $\mathbf H$. This generates a new set of conditions, which is found simply by replacing all barred variables in Eqs.~\eqref{parallelI}--\eqref{simple} by double-barred ones. An important comment is in order: since the global factors $\alpha_R$ and $\alpha_L$ can only be chosen once, they need to be the same for both mappings. Thus solutions only extend across an interface if either none or both mappings include space inversions. Hence it is seen that the ambiguity associated with the constant $\alpha$ indeed plays an important role in the discussion of boundary conditions as soon as negative refractive index media are involved.
Although many proposals for transformation designed devices, most importantly the invisibility cloak \cite{Pendry:2006Sc,Leonhardt:2006Sc} and the perfect lens \cite{Leonhardt:2006Nj}, fit into the picture described in this section, it is important to realize that this setup might be too limited in many interesting situations. Transformation designed devices do not always exclusively contain reflectionless boundaries. This can happen if some surfaces do not contribute to the functionality of the device, which might be regarded as a minor problem since the above calculation still applies locally. However, one might also have to deal with reflections at functional surfaces, which requires an extension of the approach.
\subsection{General transformations}
\label{sec:boundaryII}
In this section interfaces between two arbitrary generalized transformation media are considered. The transformations at such interfaces do not necessarily obey the restrictions \eqref{parallelI} and \eqref{normalI} and consequently it can no longer be required that a certain solution of the Maxwell equations in the two media is described by the same vacuum solution. Of course, a specific solution for $\mathbf E$ and $\mathbf H$ via Eqs.~\eqref{tildeEH}--\eqref{tildeD2} still provides solutions of the Maxwell equations on both sides of the interface, but we no longer insist that these solutions obtained from the \textit{same} vacuum solution meet the boundary conditions \eqref{BC1} and \eqref{BC2}. This allows us to relax all constraints found in the previous section; in particular, the transformations need not even be continuous at the interface. It will be shown in the following how the boundary conditions \eqref{BC1} and \eqref{BC2} (still with $\sigma = 0$ and $\mathbf K = 0$) can be rewritten in terms of the vacuum solutions in laboratory space and the geometric transformations.
Let us start with the result already obtained in Eqs.~\eqref{VBCE}--\eqref{VBCD}. In this section these equations are no longer seen as restrictions on the transformations, but rather define the free space solution $(E_i^L,H_i^L)$ at the interface in terms of $(E_i^R,H_i^R)$ taken at this point. A complication arises as Eqs.~\eqref{VBCE} and \eqref{VBCH} relate field components with lower indices in laboratory space, while Eqs.~\eqref{VBCB} and \eqref{VBCD} relate upper indices. Due to Eqs.~\eqref{scal3} and \eqref{scal4} this also applies in electromagnetic space, where the boundary conditions can be reformulated as
\begin{gather}
\label{nonGS1}
\tilde E^L_A = \tilde E^R_A \quad \Rightarrow \quad \bar{E}^L_A = \bar s_L \bar s_R \frac{\alpha_R}{\alpha_L} \bar{E}^R_A \ ,\\
\label{nonGS2}
\tilde B^\perp_L = \tilde B^\perp_R \quad \Rightarrow \quad \bar B^\perp_L = \bar s_L \bar s_R \frac{\alpha_R}{\alpha_L} \left\lvert \frac{\partial \bar x_{L}^k}{\partial \bar x^l_R}\right\lvert \bar B^\perp_R \ , \\
\label{nonGS3}
\tilde{H}^L_A = \tilde{H}^R_A \quad \Rightarrow \quad \bbar{H}^L_A = \bbar s_L \bbar\sigma_L \bbar s_R \bbar\sigma_R \frac{\alpha_R}{\alpha_L} \bbar{H}^R_A \ ,\\
\label{nonGS4}
\tilde D^\perp_L = \tilde D^\perp_R \quad \Rightarrow \quad \bbar D^\perp_L = \bbar s_L \bbar \sigma_L \bbar s_R\bbar\sigma_R \frac{\alpha_R}{\alpha_L} \left\lvert \frac{\partial \bbar x_{L}^k}{\partial \bbar x^l_R}\right\lvert \bbar D^\perp_R \ .
\end{gather}
Here, all fields are taken at those points which are mapped onto the interface in laboratory space, e.g., $\bar E_A^L$ and $\bar B^\perp_L$ are taken at the spacetime point $\left(\bar t_L(\tilde t_I), \bar x^i_{L}(\tilde x_I^j)\right)$. As we no longer insist on continuous mappings this spacetime point in laboratory space may be represented by two different spacetime points in electromagnetic space on the two sides of the interface. Since no a priori assumptions about the metric in the electromagnetic spaces should be made, the derivation of the boundary conditions exclusively in upper (or alternatively lower) indices is not straightforward. To simplify this task standard transformation optics \cite{Leonhardt:2006Nj,Leonhardt:2008Oe} is considered in a first step.
\subsubsection{Standard transformation optics}
In this subsection the boundary conditions are discussed for standard transformation optics and thus $\bar x^\mu \equiv \bbar x^\mu$ holds. Then with Eqs.~\eqref{parallelmetric} and \eqref{parallelmetricII} the following relations can be established:
\begin{align}
\label{STO1}
\bar E^A &= \left(\bar g^{AB} - \frac{\bar g^{A\perp} \bar g^{\perp B}}{\bar g^{\perp\perp}}\right) \bar E_B + \frac{\bar g^{A\perp}}{\bar g^{\perp\perp}} \bar E^\perp \\ \label{STO1.1}
\bar D_\perp &= \frac{1}{\bar g^{\perp \perp}}(\bar D^\perp - \bar g^{\perp A} \bar D_A)
\end{align}
These two relations enable us to rewrite the boundary conditions in electromagnetic space (Eqs.~\eqref{nonGS1}--\eqref{nonGS4}) exclusively in terms of vectors (lower indices) or covectors (upper indices). To express everything in terms of vectors one uses the relation \eqref{STO1.1} and after some algebra arrives at
\begin{align}
\label{STO2}
E_i^L &= \bar s_L \bar s_R \frac{\alpha_R}{\alpha_L} \frac{\partial \bar x^0_L}{\partial \bar x^0_R} V_{i}{}^{k} \frac{\partial x_R^j}{\partial \bar x^k_R} E_j^R\ ,\\
\label{STO3}
H_i^L &= \bar s_L \bar s_R \frac{\alpha_R}{\alpha_L} \frac{\partial \bar x^0_L}{\partial \bar x^0_R} V_{i}{}^{k} \frac{\partial x_R^j}{\partial \bar x^k_R} H_j^R\ ,
\end{align}
where $V_{i}{}^{k}$ is given by
\begin{equation}
V_i{}^{k} = \frac{\partial \bar x^k_L}{\partial x_L^i} + \frac{1}{\bar g^{\perp\perp}_L} \frac{\partial \bar x^\perp_L}{\partial x^i_L}\left(\left\lvert\frac{\partial \bar x^m_L}{\partial \bar x^n_R}\right\lvert \frac{\partial \bar x^0_R}{\partial \bar x^0_L} \bar g^{\perp k}_R - \bar g^{\perp k}_L \right)\ .
\end{equation}
For covectors, raising all indices implies
\begin{align}
\label{STO4}
E^i_L &= \bar s_L \bar s_R \frac{\alpha_R}{\alpha_L} \frac{\partial x^i_L}{\partial \bar x^k_L} C^{k}{}_{j} E^j_R\ , \\
\label{STO5}
H^i_L &= \bar s_L \bar s_R \frac{\alpha_R}{\alpha_L} \frac{\partial x^i_L}{\partial \bar x^k_L} C^{k}{}_{j} H^j_R\ ,
\end{align}
with
\begin{equation}
\label{Ckj}
\begin{split}
C^{k}{}_{j} &=\left(\bar g_L^{kA} - \frac{\bar g_L^{k\perp}\bar g_L^{\perp A}}{\bar g_L^{\perp\perp}}\right) \bar g^R_{Al}\frac{\partial \bar x_R^l}{\partial x^j_R}+\\ &\quad + \left\lvert\frac{\partial \bar x^m_L}{\partial \bar x^n_R}\right\lvert \frac{\partial \bar x^0_R}{\partial \bar x^0_L} \frac{\bar g_L^{k\perp}}{\bar g^{\perp\perp}_L} \frac{\partial \bar x_R^\perp}{\partial x^j_R}\ .
\end{split}
\end{equation}
In these equations it is important to remember that $\bar E^i = \sqrt{-\bar g_{00}} \bar D^i$ and $\bar H^i = \sqrt{-\bar g_{00}} \bar B^i$. Furthermore, notice that the fields on both sides of these equations are taken at the same point in laboratory space, $\tilde x_I^\mu$, cf.\ Eqs.\ \eqref{nonGS1}--\eqref{nonGS4}. Eqs.~\eqref{STO2}--\eqref{Ckj} are the reformulation of the boundary conditions in terms of the vacuum solutions in laboratory space. As can be seen, the field values at the interface of the vacuum solution of the left medium are completely defined in terms of the vacuum solution of the right medium at this point and the geometric manipulations of transformation optics. The equations no longer include any reference to the medium solutions, which actually describe the physical situation.
\paragraph{Example: homogeneous and isotropic media}
To show how basic characteristics of media are encoded in Eqs.~\eqref{STO2}--\eqref{Ckj} let us consider a simple example, the flat interface between vacuum and a homogeneous and isotropic medium. Media of this type with $\epsilon = \mu = n$ can be obtained by the simple transformation $\bar t = n t$, as is seen when inserting this relation into Eqs.~\eqref{epstilde2} and \eqref{mutilde2}. Unless $n = \pm 1$ the interface is not reflectionless; the above conditions, however, still correctly describe the boundary conditions that have to hold.
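Explicitly, with $\bar t = n t$ and $\bar x^i = x^i$ in flat laboratory space one finds $\bar g_{00} = (\partial t/\partial \bar t)^2 g_{00} = -1/n^2$, while $\bar g^{ij} = \delta^{ij}$ and $\bar \gamma = \gamma$, so that for $n > 0$ (and hence $\bar s = 1$) Eqs.~\eqref{epstilde2} and \eqref{mutilde2} indeed reduce to
\begin{equation}
\epsilon^{ij} = \mu^{ij} = \frac{\bar g^{ij}}{\sqrt{-\bar g_{00}}} \frac{\sqrt{\bar \gamma}}{\sqrt{\gamma}} = n\, \delta^{ij}\ .
\end{equation}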
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth,bb=0 0 253 154]{hom_isotr_medium.eps}
\caption{Interface between vacuum and a homogeneous and isotropic medium according to transformation optics.}
\label{fig:HomIsoMed}
\end{figure}
For concreteness let us assume the situation as depicted in Fig.\ \ref{fig:HomIsoMed}. Since the value of $\alpha_R$ can be assumed to be $\pm 1$ we will choose it in such a way that $\bar s_R \alpha_R = 1$. With this choice Eqs.~\eqref{STO2} and \eqref{STO3} reduce to
\begin{align}
\label{STO6}
E_A^L &= \frac{1}{n} E_A^R\ , & E_\perp^L &= E_\perp^R\ , \\
\label{STO7}
H_A^L &= \frac{1}{n} H_A^R\ , & H_\perp^L &= H_\perp^R\ .
\end{align}
To study the reflection and refraction coefficients we start by defining the solution in the right medium. Its vacuum solution is assumed to be a plane wave,
\begin{align}
\label{STO7.1}
\mathbf E^R &= \mathbf e \exp\left[i(\mathbf k_R \cdot \mathbf x_R - \omega_R t_R)\right] + \mbox{c.c.}\ ,\\ \mathbf H^R &= \mathbf h \exp\left[i(\mathbf k_R \cdot \mathbf x_R - \omega_R t_R)\right] + \mbox{c.c.}\ ,
\end{align}
which is mapped onto the solution in the medium as $\tilde{\mathbf E}^R = \mathbf E^R/n$, $\tilde{\mathbf H}^R = \mathbf H^R/n$. In the simple example given here it is immediate that the plane wave of the vacuum solution obeying $\mathbf k_R^2 = \omega_R^2$ maps onto a plane wave in the medium with dispersion relation $\tilde{\mathbf k}_R^2 = n^2 \tilde{\omega}_R^2$, as required by the fact that $\epsilon = \mu = n$.
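This is seen explicitly: the medium solution at laboratory time $\tilde t$ derives from the vacuum solution at $t_R = \tilde t/n$, while the spatial coordinates are untouched, so that
\begin{equation}
\tilde{\mathbf E}^R = \frac{\mathbf e}{n} \exp\left[i\left(\mathbf k_R \cdot \tilde{\mathbf x} - \frac{\omega_R}{n} \tilde t\right)\right] + \mbox{c.c.}\ ,
\end{equation}
i.e.\ $\tilde{\mathbf k}_R = \mathbf k_R$ and $\tilde \omega_R = \omega_R/n$, from which $\tilde{\mathbf k}_R^2 = \mathbf k_R^2 = \omega_R^2 = n^2 \tilde \omega_R^2$ follows.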
From Eqs.~\eqref{STO6} and \eqref{STO7} the solution for $\mathbf E^L$ and $\mathbf H^L$ at the interface is deduced as
\begin{align}
\begin{split}
E_A^L &= \frac{1}{n} e_A \exp\left[i(\mathbf k_R \cdot \mathbf x_R - \omega_R t_R)\right]\\ & = \frac{1}{n} e_A \exp\left[i(\mathbf k_L \cdot \mathbf x_L - \frac{1}{n}\omega_L t_L)\right]\ ,
\end{split}
\\
\begin{split}
E_\perp^L &= e_\perp \exp\left[i(\mathbf k_R \cdot \mathbf x_R - \omega_R t_R)\right] \\ &= e_\perp \exp\left[i(\mathbf k_L \cdot \mathbf x_L - \frac{1}{n} \omega_L t_L)\right]\ ,
\end{split} \\
\begin{split}
H_A^L &= \frac{1}{n} h_A \exp\left[i(\mathbf k_R \cdot \mathbf x_R - \omega_R t_R)\right] \\&= \frac{1}{n} h_A \exp\left[i(\mathbf k_L \cdot \mathbf x_L - \frac{1}{n} \omega_L t_L)\right]\ ,
\end{split} \\
\begin{split}
H_\perp^L &= h_\perp \exp\left[i(\mathbf k_R \cdot \mathbf x_R - \omega_R t_R)\right] \\ &= h_\perp \exp\left[i(\mathbf k_L \cdot \mathbf x_L - \frac{1}{n} \omega_L t_L)\right]\ .
\end{split}
\end{align}
While the solution $(\mathbf e, \mathbf h, \mathbf k_R)$ by construction obeys $\mathbf k_R \cdot \mathbf e = \mathbf k_R \cdot \mathbf h = \mathbf e \cdot \mathbf h = 0$ and in addition $\mathbf k_R^2 = \omega_R^2$, the corresponding relations for the solution in the vacuum (``left hand side'') are not immediate. It is easily seen that they cannot be met simultaneously with a single plane wave solution unless $e_\perp = h_\perp = k^R_A = 0$, in other words unless the incoming wave hits the interface at normal incidence.
For simplicity let us assume in the following that the magnetic field is perpendicular to the plane of incidence, i.e.\ $h_\perp = 0$. Then $\mathbf H^L$ is automatically perpendicular to the wave vector, while for the electric field we make the ansatz
\begin{align}
\label{STO10}
E_A^L &= \left(S+(1-S)\right) E_A^L = (E_{\mbox{\tiny in}})_A + (E_{\mbox{\tiny ref}})_A\ , \\
\label{STO10.1} E_\perp^L &= \left(T+(1-T)\right) E_\perp^L = (E_{\mbox{\tiny in}})_\perp + (E_{\mbox{\tiny ref}})_\perp\ ,
\end{align}
where $S$ and $T$ are chosen in such a way that $\mathbf k_{\mbox{\tiny in}} \cdot \mathbf e_{\mbox{\tiny in}} = 0$ and $\mathbf k_{\mbox{\tiny ref}} \cdot \mathbf e_{\mbox{\tiny ref}} = 0$. As indicated by the notation in Eqs.~\eqref{STO10} and \eqref{STO10.1}, the first solution represents the incoming wave while the second one is the reflected one. If we choose $\tilde z = z_R = z_L = 0$ as the location of the interface, this implies
\begin{align}
S \frac{e_A}{n} k^L_A + T e_\perp k^L_\perp &= 0\ , \\ (1-S) \frac{e_A}{n} k^L_A -(1-T) e_\perp k^L_\perp &= 0\ ,\\ \label{kldisprel} \mathbf k_L^2 = \omega_L^2 &= \frac{\omega_R^2}{n^2}\ .
\end{align}
Eq.\ \eqref{kldisprel} together with $\mathbf k_R^2 = \omega_R^2$ allows us to deduce
\begin{equation}
(k_\perp^L)^2 = (k_\perp^R)^2 + \left(\frac{1}{n^2}-1\right) \omega_R^2\ .
\end{equation}
With this one finds for $S$ and $T$
\begin{align}
S &= \frac{1}{2}\frac{n \cos \phi + \sqrt{n^2-\sin^2\phi}}{\sqrt{n^2-\sin^2\phi}}\ , \\ T&= \frac{1}{2} \frac{n \cos \phi + \sqrt{n^2-\sin^2\phi}}{n\cos\phi}\ .
\end{align}
From these equations and the dispersion relation of $\mathbf k_L$ it is now easy to show that
\begin{align}
\label{STO8}
\frac{\|\mathbf E_{\mbox{\tiny trans}}\|}{\|\mathbf E_{\mbox{\tiny in}}\|} &= \frac{2n^2\cos \phi}{n \cos\phi + \sqrt{n^2-\sin^2\phi}}\ , \\
\label{STO8.1}
\frac{\|\mathbf E_{\mbox{\tiny ref}}\|}{\|\mathbf E_{\mbox{\tiny in}}\|} &= \frac{n\cos \phi - \sqrt{n^2-\sin^2\phi}}{n \cos\phi + \sqrt{n^2-\sin^2\phi}} \ ,
\end{align}
where $\mathbf E_{\mbox{\tiny trans}}$ is the solution \eqref{STO7.1}. Notice that these relations determine the reflection and transmission coefficients in terms of the vacuum solutions of transformation optics. From the relation $\tilde{\mathbf E}^R = \mathbf E^R/n$ it can be seen that the additional factor $n$ in the first relation indeed reproduces the correct result in terms of the solutions in laboratory space.
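As a quick internal consistency check of Eqs.~\eqref{STO8} and \eqref{STO8.1} (a numerical sketch added for illustration, not part of the original argument), the two coefficients satisfy the algebraic identity $1 + r = t/n$ for all angles and indices, consistent with the parallel boundary condition $E_A^L = E_A^R/n$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    n = rng.uniform(1.0, 4.0)                # index, epsilon = mu = n
    phi = rng.uniform(0.0, np.pi/2 - 1e-3)   # angle of incidence
    root = np.sqrt(n**2 - np.sin(phi)**2)
    t = 2*n**2*np.cos(phi) / (n*np.cos(phi) + root)      # Eq. (STO8)
    r = (n*np.cos(phi) - root) / (n*np.cos(phi) + root)  # Eq. (STO8.1)
    assert np.isclose(1 + r, t/n)
print("1 + r = t/n holds for all sampled parameters")
\end{verbatim}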
The polarization with $e_{\bot} = 0$ follows analogously by taking the equations for $\mathbf H$ instead of those for $\mathbf E$. Obviously this yields again \eqref{STO8} and \eqref{STO8.1} as is required since $\epsilon = \mu$ in our example.
\subsubsection{Generalized transformations}
The situation gets slightly more complicated in generalized transformation optics \cite{Bergamin:2008Pa}, i.e.\ if the constraint $\bar x^{\mu} \equiv \bbar x^{\mu}$ is relaxed. The complete boundary conditions in electromagnetic space are deduced for the electric field from Eqs.~\eqref{nonGS1} and \eqref{nonGS4}, those for the magnetic field from Eqs.~\eqref{nonGS2} and \eqref{nonGS3}. As is seen, the boundary conditions for the lower parallel components are formulated in a different spacetime than the ones for the upper normal component. To reformulate all boundary conditions in terms of the same electromagnetic space, the condition \eqref{nonGS4} can, with the help of the transformation \eqref{barDH}, be rewritten as
\begin{equation}
\label{Dbbarbar}
\frac{\partial \bbar{x}^{\perp}_L}{\partial \bar x^i_L} \bar \gamma^{ij}_L \bar D_j^L = \bbar s_L \bar \sigma_L \bbar s_R\bar\sigma_R \frac{\alpha_R}{\alpha_L} \left\lvert \frac{\partial \bbar x_{L}^k}{\partial \bbar x^l_R}\right\rvert \frac{\partial \bbar{x}^{\perp}_R}{\partial \bar x^i_R} \bar \gamma^{ij}_R \bar D_j^R\ .
\end{equation}
This additional transformation between the two electromagnetic spaces induces a mixing of the two transformations in the ensuing boundary condition. Now, Eqs.~\eqref{nonGS1} and \eqref{Dbbarbar} can be combined to derive the boundary conditions in terms of the vacuum solutions in laboratory space in analogy to Eq.~\eqref{STO2}. Using the notation \eqref{gbbb} it can be cast into the rather simple form
\begin{equation}
\label{TSM1}
E_i^L = \frac{\alpha_R}{\alpha_L} \frac{\partial \bar x^0_L}{\partial \bar x^0_R} \bar V_{i}{}^{k} \frac{\partial x_R^j}{\partial \bar x^k_R} E_j^R\ ,
\end{equation}
where the new transformation matrix $\bar V_{i}{}^{k}$ takes the form
\begin{multline}
\label{Vbar}
\bar V_{i}{}^{k} = \bar s_L\bar s_R \frac{\partial \bar x^k_L}{\partial x_L^i} + \\ + \frac{1}{g^{\bbar \perp \bar \perp}_L} \frac{\partial \bar x^\perp_L}{\partial x^i_L}\left(\bbar s_L \bbar s_R \left\lvert\frac{\partial \bbar x^m_L}{\partial \bbar x^n_R}\right\rvert \frac{\partial \bar x^0_R}{\partial \bar x^0_L} g^{\bbar \perp \bar k}_R - \bar s_L \bar s_R g^{\bbar \perp \bar k}_L \right)\ .
\end{multline}
The calculation of the boundary conditions of $\mathbf H$ follows analogously by interchanging $\bar x^\mu \leftrightarrow \bbar x^\mu$:
\begin{equation}
\label{TSM2}
H_i^L = \frac{\alpha_R}{\alpha_L} \frac{\partial \bbar x^0_L}{\partial \bbar x^0_R} \bbar V_{i}{}^{k} \frac{\partial x_R^j}{\partial \bbar x^k_R} H_j^R\ .
\end{equation}
Here, $\bbar V_{i}{}^{k}$ is obtained from the expression \eqref{Vbar} by replacing all barred quantities and indices by double-barred ones and vice versa.
\paragraph{More on homogeneous and isotropic media} In the previous section homogeneous and isotropic media from standard transformation optics were considered, which follow from time stretchings $\bar t = n t$ and describe media with $\epsilon = \mu = n$. This suggests considering the transformations
\begin{align}
\label{TSM4}
\bar t &= \mbox{sgn}(\mu) \|\epsilon\| t\ , & \bbar t &= \mbox{sgn}(\epsilon) \|\mu\| t
\end{align}
within the generalized setup, where $\mbox{sgn}(a)$ is the sign of $a$. Applying this transformation in \eqref{tripeleps} and \eqref{tripelmu} yields
\begin{align}
\label{TSM5}
\tilde D^i &= \epsilon \gamma^{ij} \tilde E_j\ , & \tilde B^i &= \mu \gamma^{ij} \tilde H_j\ ,
\end{align}
which explains the choice of signs in Eq.\ \eqref{TSM4}. This shows that homogeneous and isotropic media with arbitrary permittivity and permeability can be understood as an independent stretching of time in the two different transformations. As a side remark we note that these media can also be obtained by stretching all spatial directions simultaneously. Indeed, the transformation
\begin{align}
\bar x^i &= \frac{\mbox{sgn}(\epsilon) \mbox{sgn}(\mu)}{\|\epsilon\|^{\frac{1}{3}} \|\mu\|^{\frac{2}{3}}} x^i\ , & \bbar x^i &= \frac{\mbox{sgn}(\epsilon) \mbox{sgn}(\mu)}{\|\epsilon\|^{\frac{2}{3}} \|\mu\|^{\frac{1}{3}}} x^i
\end{align}
also yields the media properties \eqref{TSM5}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth,bb=0 0 253 154]{hom_isotr_mediumII.eps}
\caption{Interface between vacuum and a homogeneous and isotropic medium with arbitrary and independent $\epsilon$ and $\mu$ according to generalized transformation optics.}
\label{fig:HomIsoMedII}
\end{figure}
Extending the result of the example of the previous section, we want to derive the law of reflection and refraction at an interface between vacuum and an arbitrary homogeneous, isotropic medium. This situation is depicted in Fig.~\ref{fig:HomIsoMedII}. Again, the calculation starts from the definition of the vacuum solution that is mapped onto the transmitted wave
\begin{align}
\mathbf E^R &= \mathbf e \exp\left[i(\mathbf k_R \cdot \mathbf x_R - \omega_R t_R)\right] + \mbox{c.c.}\ ,\\ \mathbf H^R &= \mathbf h \exp\left[i(\mathbf k_R \cdot \mathbf x_R - \omega_R t_R)\right] + \mbox{c.c.}\ .
\end{align}
Before considering the boundary conditions it might be useful to derive in detail how this vacuum solution is mapped onto a solution of the medium. In a first step we apply the transformations \eqref{TSM4}, which map $\mathbf E$ and $\mathbf H$ onto the solutions (notice the relation $\mbox{sgn}(\mu) = \bar s$, $\mbox{sgn}(\epsilon) = \bbar s$)
\begin{align}
\bar{\mathbf E}^R(\bar x, \bar t) &= \frac{\bar s}{\|\epsilon\|}\mathbf e \exp\left[i(\bar{\mathbf k}_R \cdot \bar{\mathbf x}_R - \bar{\omega}_R \bar{t}_R)\right] + \mbox{c.c.}\ , \\
\bbar{\mathbf H}^R(\bbar x, \bbar t) &= \frac{1}{\|\mu\|}\mathbf h \exp\left[i(\bbar{\mathbf k}_R \cdot \bbar{\mathbf x}_R - \bbar{\omega}_R \bbar{t}_R)\right] + \mbox{c.c.}\ ,
\end{align}
with $\bar{\mathbf k}_R^2 = \epsilon^2 \bar \omega_R^2$ and $\bbar{\mathbf k}_R^2 = \mu^2 \bbar \omega_R^2$.
These solutions are now re-interpreted in terms of laboratory space,
\begin{align}
\label{TSM6}
\tilde{\mathbf E}^R(\tilde x, \tilde t) &= \frac{\alpha_R}{\|\epsilon\|}\mathbf e \exp\left[i(\tilde{\mathbf k}_R \cdot \tilde{\mathbf x}_R - \tilde{\omega}_R \tilde{t}_R)\right] + \mbox{c.c.}\ ,\\
\label{TSM7}
\tilde{\mathbf H}^R(\tilde x, \tilde t) &= \frac{\alpha_R}{\|\mu\|}\mathbf h \exp\left[i(\tilde{\mathbf k}_R \cdot \tilde{\mathbf x}_R - \tilde{\omega}_R \tilde{t}_R)\right] + \mbox{c.c.}\ ,
\end{align}
with $\tilde{\mathbf k}_R^2 = \epsilon \mu \tilde \omega_R^2$. Although the correct dispersion relation is not present in the solutions in electromagnetic space, it is important to notice that this information is also encoded in a completely geometric way, since the general dispersion relation (in the absence of bi-anisotropic contributions to the constitutive relation) reads \cite{Bergamin:2008Pa}: $g^{\bar i \bbar j} k_i k_j = - g^{\bar 0 \bbar 0} \omega^2$.
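The geometric encoding of this dispersion relation can be made explicit in a few lines of computer algebra. The following sketch (our own illustration; for simplicity it is restricted to $\epsilon,\mu>0$) builds the mixed inverse metric for the time stretchings \eqref{TSM4} and recovers $\tilde{\mathbf k}_R^2 = \epsilon\mu \tilde\omega_R^2$:
\begin{verbatim}
import sympy as sp

eps, mu, w = sp.symbols('epsilon mu omega', positive=True)
k = sp.symbols('k1:4', real=True)

g = sp.diag(-1, 1, 1, 1)           # Minkowski inverse metric, lab frame

# Jacobians of t_bar = eps*t and t_bbar = mu*t (eps, mu > 0 assumed);
# the spatial coordinates are left untouched
J_bar  = sp.diag(eps, 1, 1, 1)
J_bbar = sp.diag(mu, 1, 1, 1)

g_mixed = J_bar * g * J_bbar.T     # mixed inverse metric g^{bar,bbar}

lhs = sum(g_mixed[i, j]*k[i-1]*k[j-1]
          for i in range(1, 4) for j in range(1, 4))
rhs = -g_mixed[0, 0] * w**2
print(sp.Eq(lhs, rhs))   # k1**2 + k2**2 + k3**2 == eps*mu*omega**2
\end{verbatim}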
The calculation of reflection and refraction coefficients follows in close analogy to the example presented above. As boundary conditions we find from \eqref{TSM1} and \eqref{TSM2}
\begin{align}
\label{TSM10}
E_A^L &= \frac{\alpha_R \bar s_R}{\mbox{sgn}(\mu) \|\epsilon\|} E_A^R\ , & E_\perp^L &= \alpha_R \bbar s_R E_\perp^R\ , \\
\label{TSM11}
H_A^L &= \frac{\alpha_R \bbar s_R}{\mbox{sgn}(\epsilon)\|\mu\|} H_A^R\ , & H_\perp^L &= \alpha_R \bar s_R H_\perp^R\ .
\end{align}
A possible simple choice of $\alpha_R$ is $\alpha_R = \bbar s_R$ which implies
\begin{align}
E_A^L &= \frac{1}{\epsilon} e_A \exp\left[i(\mathbf k_L \cdot \mathbf x_L - \frac{1}{n}\omega_L t_L)\right]\ , \\
E_\perp^L &= e_\perp \exp\left[i(\mathbf k_L \cdot \mathbf x_L - \frac{1}{n} \omega_L t_L)\right]\ , \\
H_A^L &= \frac{\bar s \bbar s}{\mu} h_A \exp\left[i(\mathbf k_L \cdot \mathbf x_L - \frac{1}{n} \omega_L t_L)\right]\ , \\
H_\perp^L &= \bar s \bbar s h_\perp \exp\left[i(\mathbf k_L \cdot \mathbf x_L - \frac{1}{n} \omega_L t_L)\right]\ ,
\end{align}
since $\mathbf k_R = \mathbf k_L$ and $\omega_R = \omega_L/n$ with $n = \sqrt{\epsilon \mu}$. If $h_\perp = 0$ the ansatz \eqref{STO10} and \eqref{STO10.1} yields as conditions for $S$ and $T$
\begin{align}
S \frac{e_A}{\epsilon} k^L_A + T e_\perp k^L_\perp &= 0\ , \\ (1-S) \frac{e_A}{\epsilon} k^L_A -(1-T) e_\perp k^L_\perp &= 0\ , \\ \mathbf k_L^2 = \omega_L^2 &= \frac{\omega_R^2}{n^2}\ .
\end{align}
From these conditions it is found that
\begin{align}
S &= \frac{1}{2}\frac{\epsilon \cos \phi + \sqrt{n^2-\sin^2\phi}}{\sqrt{n^2-\sin^2\phi}}\ , \\ T&= \frac{1}{2} \frac{\epsilon \cos \phi + \sqrt{n^2-\sin^2\phi}}{\epsilon \cos\phi}\ ,
\end{align}
and thus the generalization of the result \eqref{STO8}, \eqref{STO8.1} is easily derived as
\begin{align}
\label{TSM12}
\frac{\|\mathbf E_{\mbox{\tiny trans}}\|}{\|\mathbf E_{\mbox{\tiny in}}\|} &= \frac{2\epsilon n \cos \phi}{\epsilon \cos\phi + \sqrt{n^2-\sin^2\phi}}\ , \\
\label{TSM13}
\frac{\|\mathbf E_{\mbox{\tiny ref}}\|}{\|\mathbf E_{\mbox{\tiny in}}\|} &= \frac{\epsilon \cos \phi - \sqrt{n^2-\sin^2\phi}}{\epsilon \cos\phi + \sqrt{n^2-\sin^2\phi}} \ .
\end{align}
This is the law of reflection and refraction in terms of vacuum solutions. Keeping in mind that, by Eq.~\eqref{TSM6} with our choice of $\alpha_R$, $\tilde{\mathbf E}^L = \mathbf E^L/\epsilon$, it is seen that this result indeed reproduces the correct law in terms of the laboratory space solutions. Again, the case $e_\bot = 0$ follows completely analogously.
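The same internal consistency check as before applies to the generalized coefficients (again a numerical sketch added for illustration): for random $\epsilon$, $\mu$ and angles, Eqs.~\eqref{TSM12} and \eqref{TSM13} satisfy $1 + r = t/n$ with $n = \sqrt{\epsilon\mu}$, and for $\epsilon = \mu = n$ they visibly reduce to Eqs.~\eqref{STO8} and \eqref{STO8.1}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
for _ in range(1000):
    eps, mu = rng.uniform(1.0, 4.0, size=2)
    n = np.sqrt(eps*mu)
    phi = rng.uniform(0.0, np.pi/2 - 1e-3)
    root = np.sqrt(n**2 - np.sin(phi)**2)
    t = 2*eps*n*np.cos(phi) / (eps*np.cos(phi) + root)       # (TSM12)
    r = (eps*np.cos(phi) - root) / (eps*np.cos(phi) + root)  # (TSM13)
    assert np.isclose(1 + r, t/n)
print("generalized coefficients pass the 1 + r = t/n check")
\end{verbatim}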
In this example it has been shown that homogeneous and isotropic media with arbitrary permittivity and permeability allow a geometric interpretation in terms of generalized transformation optics. This does not just include the derivation of solutions in these media from vacuum solutions, but also the laws of reflection and refraction at an interface.
\section{Concluding remarks}
\label{sec:conclusions}
In this paper boundary conditions at the interface of two generalized transformation media have been studied and it has been shown how the ensuing conditions can be expressed completely in terms of vacuum solutions of the Maxwell equations and of geometric manipulations. This task has been carried out in two steps: in a first step we considered the most general situation that allows one to describe the electromagnetic fields on both sides of the interface in terms of the same vacuum solution. Obviously, all interfaces of this type are reflectionless. In a second step we relaxed the condition of a single vacuum solution and considered interfaces between two arbitrary transformation media. Of course, these are not reflectionless in general; as a basic example we showed how generalized transformation optics allows one to derive the standard law of reflection and refraction at the interface of homogeneous and isotropic media in a geometric way.
Once this second step has been carried out it is worth reconsidering the meaning of the first result. We have shown that the boundary conditions at a generic (not necessarily reflectionless) interface can be described completely in terms of vacuum solutions of the Maxwell equations in laboratory space and geometric manipulations (diffeomorphisms, locally interpreted as coordinate transformations). It is well known that diffeomorphisms have a group structure and it is evident that this group structure extends straightforwardly to transformation media: two transformations applied subsequently again yield a transformation that describes a transformation medium, there exists a unit element, which is the trivial transformation, and to each transformation there exists an inverse which reverses the action of the former.
Given an interface between two generalized transformation media, whose boundary conditions in terms of vacuum solutions are described by Eqs.~\eqref{TSM1} and \eqref{TSM2}, we may ask the following question: which interfaces between two different transformation media yield the \emph{same} boundary conditions in terms of the vacuum solutions? The answer is given by the result of Sect.~\ref{sec:boundaryI}. From our specific choice of media we can obtain physically different situations by applying to these media solutions additional transformations that obey the constraints \eqref{parallelI} and \eqref{normalI}. Under such transformations Eqs.~\eqref{TSM1} and \eqref{TSM2} do not change and thus the boundary conditions in terms of the vacuum solutions are not changed. Therefore the transformations of Sect.~\ref{sec:boundaryI} define equivalence classes of transformation media within the general result of Sect.~\ref{sec:boundaryII} and with respect to the group structure of diffeomorphisms. The reflectionless media are those which are members of the equivalence class of a vacuum--vacuum interface.
Finally it should be mentioned that similar formulas could also be derived for bi-anisotropic media. Still, since the coordinate transformations associated with these media mix the spatial directions with the time direction in order to obtain $\bar g_{0i} \neq 0$ and $\bbar g_{0i} \neq 0$, they will also mix $\mathbf E$ with $\mathbf B$ and $\mathbf D$ with $\mathbf H$. Thus the ensuing conditions are expected to be considerably more involved.
\begin{acknowledgments}
The author wishes to thank J.~Llorens Montolio for helpful discussions. This work profited a lot from fruitful discussion with M.~Qiu and M.~Yan and W.~Yan during a cooperation of the Advanced Concepts Team of the European Space Agency with the Royal Institute of Technology (KTH). The cooperation was funded under the Ariadna program of ESA.
\end{acknowledgments}
|
1,116,691,497,869 | arxiv | \section{Introduction}
For the three-dimensional Navier-Stokes equations the existence of strong solutions over
an arbitrary time interval is not known. Therefore the validity of the results of numerical solutions of the 3d Navier-Stokes equations is not obvious. This problem is addressed in Chernyshenko et al (2006), where a rigorous relationship between numerical and sufficiently regular exact solutions is given. They show for sufficiently smooth initial conditions and forcing functions (data) that although \emph{a priori} there is no guarantee of the validity of the numerical solutions, there is an \emph{a posteriori} condition that, if satisfied by the numerical results, guarantees the existence of a strong solution. In this paper we show the validity of the results proved in Chernyshenko et al (2006) for the less regular strong solutions which are not covered there.\\
\indent We will study the Navier-Stokes equations in their functional form. For a bounded domain $\Omega$ we let $\mathcal{H}$ be the space of divergence-free smooth vector-valued functions on $\Omega$ with compact support and zero average and define
\begin{eqnarray*}
H &=& \textrm{closure of } \mathcal{H}\: \mathrm{in} \:[L^2(\Omega)]^3,\\
V &=& \textrm{closure of } \mathcal{H}\: \mathrm{in} \:[H^1(\Omega)]^3.
\end{eqnarray*}
We use the same notation $H$ and $V$ for the similar spaces of periodic functions over the periodic domain $Q$. Then the Navier-Stokes equations in their functional form are written as (Constantin and Foias 1988, Robinson 2001)
\begin{equation}\label{NSe_reg}
\frac{\mathrm{d} u}{\mathrm{d} t} +\nu Au +B(u,u) = f,\quad\mbox{with}\quad u(0)=u_0
\end{equation}
where $Au=-\Pi\Delta u$, $B(u,u)=\Pi (u\cdot\nabla)u$ with $\Pi$ the orthogonal projection from $L^2$ into $H$.
We will consider this equation with the following cases for the data\\
\\
\indent (a) `minimal regularity' when $u_0\in{V}$ and $f\in L^2(0,T;H)\cap L^1(0,T;V)$\\
\indent (b) `second order regularity' when $u_0\in{V^2}$ and $f\in L^2(0,T;V)\cap L^1(0,T;V^2)$,\\
\\
where $V^m=H^m\cap V$. For the periodic case we know that $D(A^{m/2})=H^m\cap V$ for all $m$ and therefore we define the norm on $V^m$ as $\|u\|_m=|A^{m/2}u|$. We denote by $(\cdot,\cdot)$ and $|\cdot|$ the inner product and norm on $H$.\\
\indent We will show that the strong solution of (\ref{NSe_reg}) with the data introduced in $(a)$ or $(b)$, if it exists, remains strong if the changes in the initial condition and forcing function are small enough. The exact conditions required for these changes to be `small' are given in theorems \ref{robust1} and \ref{robust2}. For example in the case of minimally regular strong solutions we require
\begin{eqnarray*}
&&|D(u_0-v_0)| +\int_0^T|Df(s)-Dg(s)|\,\mathrm{d} s \nonumber\\
&& \quad < \frac{1}{k}\left(\frac{\nu^3}{27T}\right)^{1/4}
\exp\left(-\frac{k^2}{2}\int_0^T\frac{27k^2}{2}\frac{1}{\nu^3}|Du(s)|^4+\frac{1}{\nu}|Du(s)|\,|Au(s)|\,\mathrm{d} s\right).
\end{eqnarray*}
We then, in theorems \ref{postt} and \ref{postt2}, use these robustness results to find an \emph{a posteriori} condition that if satisfied by sufficiently refined numerical approximations, implies the existence of a strong solution. We also show that if a strong solution exists the Galerkin approximations converge to it and then use this to prove that the existence of a strong solution can be verified by the Galerkin approximations. In the last section we consider a channel flow as a physical example that can be described by the Navier-Stokes equations with the conditions introduced above. For this example we will show how the results of this paper can be applied to the Galerkin approximations to verify the existence of a strong solution.\\
\indent The results we prove here for a strong solution with lowest regularity hold in a general bounded domain as well as in the absence of boundaries unlike the results for more regular solutions which are proved only for the equations in a periodic domain or the whole space.
\section{General ODE lemma}
We first prove an ODE lemma which will be used in dealing with the differential inequalities that appear in the proofs. We consider the differential inequality
$$
\frac{\mathrm{d} y}{\mathrm{d} t}\le \delta(t)+\alpha y^n \qquad \mbox{with}\quad y(0)=y_0>0
$$
and find the conditions on $y_0$, $\delta(t)$ and $\alpha$ that ensure that $y(t)$ exists on a finite time interval $[0,T]$. This lemma is a generalization of the result obtained for $n=2$ in Chernyshenko et al (2006).
\begin{lemma}\label{lemma}
Let $T>0$, $\alpha>0$ and $n>1$ be constants and let $\delta(t)$
be a non-negative continuous function on $[0,T]$. Let $y$ satisfy
the differential inequality
\begin{equation} \label{ineq}
\frac{\mathrm{d} y}{\mathrm{d} t}\le \delta(t)+\alpha y^n \qquad \mbox{with}\quad y(0)=y_0>0
\end{equation}
and define
$$
\eta=y_0+\int_0^T\delta(s)\,\mathrm{d} s.
$$
\begin{itemize}
\item[(i)] If
\begin{equation} \label{general_condition}
\eta<\frac{1}{[(n-1)\alpha T]^{1/(n-1)}}
\end{equation}
then $y(t)$ remains bounded on $[0,T]$
\item[(ii)]
$y(t)\rightarrow0$ uniformly on $[0,T]$ as $\eta\rightarrow0$.
\end{itemize}
\end{lemma}
\begin{proof}
We first consider the following differential inequality
\begin{equation}\label{simple}
\dot{z} \le \alpha z^n \quad\mbox{with}\quad z(0)=\eta
\end{equation}
and show that
\begin{equation}\label{sup_y(T)}
\sup_{S_1}\;{y(T)} \le \sup_{S_2}\;{z(T)}.
\end{equation}
where $S_1$ and $S_2$ are the sets of all possible solutions of inequalities (\ref{ineq}) and (\ref{simple}) respectively.\\
\indent Since $\dot{y}$, $\dot{z}$, $y(0)$ and $z(0)$ are non-negative, the suprema of $y(T)$ and $z(T)$ are attained when
\begin{eqnarray*}
\dot{y}&&=\delta(t)+ \alpha y^n,\\
\dot{z}&&=\alpha z^n
\end{eqnarray*}
for all $t\in [0,T]$. In this case, for the difference $w=y-z$ we have
\begin{equation}
\dot{w}=\delta(t)+ \alpha w \sum_{k=0}^{n-1} y^{n-1-k}\;z^k \quad \mbox{with}\quad w(0)=-\int_0^T\delta(s)\;\mathrm{d} s.\nonumber
\end{equation}
Since $y(t)$ and $z(t)$ are greater than zero and assuming they remain finite over $[0,T]$, there exists some $M>0$ such that
$$
\dot{w} \le \alpha M w + \delta(t)
$$
and therefore by Gronwall's inequality
\begin{eqnarray*}
w(t) &&\le w(0)\mathrm{e}^{\alpha Mt}+\int_0^t\mathrm{e}^{\alpha M(t-s)}\delta(s)\;\mathrm{d} s\\
&&\le \mathrm{e}^{\alpha Mt}\left(w(0)+\int_0^T\delta(s)\;\mathrm{d} s\right)=0.
\end{eqnarray*}
This implies $w(t)\le 0$ for any $t\in[0,T]$ and (\ref{sup_y(T)}) follows.\\
The inequality (\ref{simple}) can be written as
$$
\frac{\dot{z}}{z^n}\le \alpha
$$
Integrating both sides from $0$ to $T$ yields
\begin{equation}\label{z(T)}
z(T) \le \frac{\eta}{\left(1-(n-1)\alpha T\,\eta^{n-1}\right)^{1/(n-1)}}.
\end{equation}
Since in the worst case $\dot{y}\ge 0$, we have $y(t)\le y(T) \le z(T)$, so $y(t)$ remains finite on $[0,T]$ provided that
$$
\alpha T (n-1)(\eta)^{n-1} < 1
$$
which yields (\ref{general_condition}).\\
\indent Furthermore, it is clear from (\ref{z(T)}) that $z(T)\to 0$ as $\eta\to 0$, from which it follows that $y(t)\to 0$ uniformly on $[0,T]$.
\end{proof}
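The lemma is easily explored numerically. The sketch below (an added illustration with arbitrarily chosen data) integrates the worst case of \eqref{ineq}, i.e.\ the equality $\dot y = \delta(t)+\alpha y^n$, for data satisfying \eqref{general_condition}, and confirms that $y$ stays bounded on $[0,T]$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

T, alpha, n = 1.0, 2.0, 3                # horizon, coefficient, exponent
delta = lambda t: 0.05*(1 + np.cos(t))   # non-negative forcing
y0 = 0.01

eta = y0 + 0.05*(T + np.sin(T))          # y0 + int_0^T delta(s) ds
bound = 1.0/((n - 1)*alpha*T)**(1.0/(n - 1))
assert eta < bound                       # hypothesis (i) of the lemma

sol = solve_ivp(lambda t, y: delta(t) + alpha*y**n,
                (0.0, T), [y0], rtol=1e-10, atol=1e-12)
print("eta =", eta, "< threshold =", bound, "; y(T) =", sol.y[0, -1])
\end{verbatim}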
\section{The inequalities for the nonlinear term}
\indent For the nonlinear operator, we will use the following inequalities in this paper
\begin{eqnarray}
|(B(u,v),Aw)| &&\le k|Du|\,|Dv|^{1/2}|Av|^{1/2}|Aw|\label{triform1}\\
|(B(w,u),A^2 w)|&&\le c\|u\|_3\|w\|_2^2\label{triform21}\\
|(B(u,w),A^2 w)|&&\le c'\|u\|_3\|w\|_2^2\label{triform22},
\end{eqnarray}
The first inequality holds for both Dirichlet and periodic boundary conditions (Constantin and Foias 1988) while the other two are valid only in the absence of boundaries with $c$ and $c'$ independent of the size of the domain (Kato 1972).
The constant $k$, for the periodic and no-slip domain and also for the whole of $\mathbb{R}^3$, is independent of the domain $\Omega$. In a general domain with nonzero boundary conditions, however, it depends on the regularity properties of the boundary of $\Omega$. To see this we use the H\"older inequality to write
$$
|(B(u,v),Aw)| \le \sum_{i,j=1}^3 \|u_i\|_{L^6}\|D_iv_j\|_{L^3}|Aw_j|.
$$
By the Sobolev inequality
\begin{equation} \label{sobolev1}
\|u\|_{L^6}\le c_s|Du|
\end{equation}
we have
$$
\|Dv\|_{L^3}\le \|Dv\|_{L^6}^{1/2}|Dv|^{1/2}\le c_s^{1/2}|Av|^{1/2}|Dv|^{1/2}.
$$
Therefore we can write
\begin{eqnarray*}
|(B(u,v),Aw)| &\le& \sum_{i,j=1}^3 \|u_i\|_{L^6}\|D_iv_j\|_{L^3}|Aw_j|\\
&\le& 9\|u\|_{L^6}\|Dv\|_{L^3}|Aw|\\
&\le& 9c_s^{3/2}|Du|\,|Dv|^{1/2}|Av|^{1/2}|Aw|
\end{eqnarray*}
and so $k=9c_s^{3/2}$. For no-slip boundary conditions and the whole of $\mathbb{R}^3$, $c_s$, the constant of the Sobolev inequality (\ref{sobolev1}), does not depend on $\Omega$ (Ziemer 1989).
For a bounded domain with non-zero boundary conditions, $c_s$ depends on the regularity properties of $\partial\Omega$ but is independent of the size of $\Omega$ (Adams 1975). Therefore for the cubic domain of the periodic boundary conditions also, it does not depend on the size of the domain. In fact, the cubic domain has the strong Lipschitz property and for such a domain Adams (1975, Lemma 5.10) has shown that $c_s=4\sqrt{2}$.
Similarly, $c$ and $c'$ depend on the constant of the Sobolev inequality $\|u\|_{L^{6/(3-2k)}}\le c_{s,k}\|u\|_{H^k}$ which is again independent of the size of the domain (Adams 1975 and Ziemer 1989).\\
\indent
In obtaining the robustness conditions in the next section, we keep track of the constants $k$, $c$ and $c'$ when they appear, so that the value of the constant coefficients in the robustness conditions could be computed if desired.\\
\indent The inequalities (\ref{triform1})--(\ref{triform22}) are not as elegant as the inequalities available for the more regular solutions considered in Chernyshenko et al (2006) and this is why different proofs are needed in the less regular cases we are studying here.\\
\indent We note here that assuming a strong solution $u\in L^\infty(0,T;V)\cap L^2(0,T;V^2)$ exists, from (\ref{triform1}) we have $B(u,u)\in L^2(0,T;H)$ and therefore (\ref{NSe_reg}) implies that $\mathrm{d} u/\mathrm{d} t \in L^2(0,T;H)$. Having $u_0\in{V^2}$ and $f\in L^2(0,T;V)$ in case (b), from the regularity result for periodic domains proved in Constantin and Foias (1988) we know that the strong solution $u$ is in fact more regular and an element of $L^\infty(0,T;V^2)\cap L^2(0,T;V^3)$.
\section{Robustness of strong solutions}
Here we show that as long as a strong solution exists for some specific initial data and forcing function, the equations with sufficiently close data also have a strong solution.
A similar result about the robustness of strong solutions with respect to changes in the forcing function is proved by Fursikov in his 1980 paper. For the three-dimensional Navier-Stokes equations with initial condition $u_0\in V^m$ where $m\ge 1/2$, he shows that the set of forcing functions for which a strong solution exists is open in $L^q(0,T;V^{m-1})$ with $q\ge 2$.
\\
\indent For both minimally regular solutions (a) and the more regular case (b) the condition we obtain here for the data depends explicitly on the viscosity coefficient unlike the more regular cases considered in Chernyshenko et al (2006). The robustness result we prove for the minimally regular strong solutions holds in a general bounded domain. For the second order regular solutions however, we need to restrict the domain to be periodic or the whole of $\mathbb{R}^3$ since the inequalities (\ref{triform21}) and (\ref{triform22}) only hold in a periodic domain (Constantin and Foias 1988) or the whole of $\mathbb{R}^3$ (Kato 1972).
\subsection*{(a) Strong solutions with minimal regularity}
To prove the robustness with respect to the data we write the governing equation for the difference between the solution of the equations with nearby data and the strong solution and find a bound on the norm of this difference.
\begin{theorem}\label{robust1}
Let $f\in L^1(0,T;V)\cap L^2(0,T;H)$, $u_0\in V$ and $u\in L^{\infty}(0,T;V)\cap L^2(0,T;V^2)$ be a strong solution of
\[
\frac{\mathrm{d} u}{\mathrm{d} t}+\nu Au+B(u,u)=f \quad\mbox{with}\quad u(0)=u_0.
\]
If
\begin{eqnarray}\label{m1cond}
&&|D(u_0-v_0)| +\int_0^T|Df(s)-Dg(s)|\,\mathrm{d} s \nonumber\\
&& \quad < \frac{1}{k}\left(\frac{\nu^3}{27T}\right)^{1/4}
\exp\left(-\frac{k^2}{2}\int_0^T\frac{27k^2}{2}\frac{1}{\nu^3}|Du(s)|^4+\frac{1}{\nu}|Du(s)|\,|Au(s)|\,\mathrm{d} s\right)
\end{eqnarray}
then the solution of
\[
\frac{\mathrm{d} v}{\mathrm{d} t}+\nu Av+B(v,v)=g\quad\mbox{with}\quad v(0)=v_0
\]
is also a strong solution on $[0,T]$ with the same regularity as $u$.
\end{theorem}
\begin{proof}
By the local existence of strong solutions (Constantin and Foias 1988, Temam 1995) we know that there exists $T^*>0$ such that $v\in L^{\infty}(0,T';V)\cap L^2(0,T';V^2)$ for any $T'<T^*$. We take $T^*$ to be the maximal time of existence of the strong solution $v$, meaning that $\limsup_{t\to T^*} |Dv|=\infty$. In the following argument we assume $T^*\le T$ and deduce a contradiction.\\
\indent We consider $w=u-v$ which satisfies
$$
\frac{\mathrm{d} w}{\mathrm{d} t}+\nu Aw+B(u,w)+B(w,u)-B(w,w)=f-g.
$$
Over the time interval $t\in [0,T')$ for any $T'<T^*$ we have $dv/dt\in L^2(0,T';H)$ and since $T^*\le T$, $du/dt\in L^2(0,T';H)$. Therefore taking the inner product of the above equation with $Aw$ and using (\ref{triform1}) we can write
\begin{eqnarray*}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}|Dw|^2 + \nu|Aw|^2
&\le& k|Du||Dw|^{1/2}|Aw|^{3/2}+ k|Dw||Du|^{1/2}|Au|^{1/2}|Aw|\\
& & \qquad+k|Dw|^{3/2}|Aw|^{3/2} +|D(f-g)|\;|Dw|.
\end{eqnarray*}
We then use Young's inequality to remove $|Aw|$ (which causes the appearance of $\nu$ in the coefficients) and obtain
$$
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}|Dw|^2 \le \left( \frac{27k^4}{4\nu^3} |Du|^4 +\frac{k^2}{2\nu}|Du||Au| \right)|Dw|^2+ \frac{27k^4}{4\nu^3}|Dw|^6 +|D(f-g)|\;|Dw|.
$$
Dividing by $|Dw|$ (if $|Dw|$ vanishes, we may replace $|Dw|$ by $|Dw|+e_0$ with $e_0>0$ and let $e_0\to 0$ at the end) we get
$$
\frac{\mathrm{d}}{\mathrm{d} t}|Dw| \le \left( \frac{27k^4}{4\nu^3} |Du|^4 +\frac{k^2}{2\nu}|Du||Au| \right)|Dw|+ \frac{27k^4}{4\nu^3}|Dw|^5 +|D(f-g)|.
$$
Now letting
$$
\beta(t)= \frac{k^2}{2} \int_0^t \left( \frac{27k^2}{2}\frac{1}{\nu^3}|Du|^4 +\frac{1}{\nu}|Du|\,|Au| \right)\; \mathrm{d} s
$$
and setting $y(t)=|Dw(t)|\mathrm{e}^{-\beta(t)}$ the above inequality can be written as
$$
\frac{\mathrm{d} y}{\mathrm{d} t} \le \alpha y^5 + \delta(t),
$$
where $\alpha=\frac{27k^4}{4\nu^3}\mathrm{e}^{4\beta(T)}$ and $\delta(t)=|Df(t)-Dg(t)|$.\\
\indent By lemma \ref{lemma}, if the condition (\ref{m1cond}) is satisfied, $|Dw(t)|$ is uniformly bounded on
$[0,T^*)$. This implies that $\limsup_{t\to T^*}|Dv(t)|<\infty$, which contradicts the maximality of $T^*$. It follows
that $v(t)$ is a strong solution on $[0,T]$.
\end{proof}
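In practice the right-hand side of \eqref{m1cond} can be evaluated from sampled norms of the known strong solution. The following sketch (our own illustration, with hypothetical input data and a simple trapezoid quadrature) returns the admissible perturbation size:
\begin{verbatim}
import numpy as np

def robustness_threshold(t, Du, Au, nu, k):
    """Right-hand side of (m1cond); Du[i], Au[i] are samples of
    |Du(s)|, |Au(s)| on the time grid t (hypothetical data)."""
    T = t[-1]
    beta = np.trapz(27*k**2/2 * Du**4/nu**3 + Du*Au/nu, t)
    return (nu**3/(27*T))**0.25 / k * np.exp(-k**2/2 * beta)

t = np.linspace(0.0, 1.0, 201)    # toy time grid on [0, T]
Du = np.full_like(t, 0.5)         # toy samples of |Du(s)|
Au = np.full_like(t, 1.0)         # toy samples of |Au(s)|
print(robustness_threshold(t, Du, Au, nu=1.0, k=1.0))
\end{verbatim}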
\subsection*{(b) Strong solutions with second order regularity}
We follow the same approach as in the minimally regular case to show the robustness for a strong solution with second order regularity. Here, too, the inequalities for the nonlinear operator that bound the $V^2$-norm of the difference between nearby solutions involve the $V^3$-norm of this difference; therefore the robustness condition on the data again depends explicitly on the viscosity coefficient. However, its dependence is weaker in this case.
\begin{theorem}\label{robust2}
Let $f\in L^1(0,T;V^2)\cap L^2(0,T;V)$, $u_0\in V^2$ and $u\in L^{\infty}(0,T;V^2)\cap L^2(0,T;V^3)$ be a
strong solution of
$$
\frac{\mathrm{d} u}{\mathrm{d} t}+\nu Au+B(u,u)=f \quad\mbox{with}\quad u(0)=u_0.
$$
If
\begin{eqnarray}\label{m2cond}
|A(u_0-v_0)| +\int_0^T|A(f-g)|\,\mathrm{d} t\; < \; \frac{1}{c}\sqrt{\frac{2\nu}{T}}\exp\left(-\int_0^T (c+c')\|u\|_3\, \mathrm{d} t\right),
\end{eqnarray}
then the solution of
$$
\frac{\mathrm{d} v}{\mathrm{d} t}+\nu Av+B(v,v)=g \quad\mbox{with}\quad v(0)=v_0
$$
is also a strong solution on $[0,T]$ with the same regularity as $u$.
\end{theorem}
\begin{proof}
As before, the difference $w=u-v$ satisfies
\begin{equation}
\frac{\mathrm{d} w}{\mathrm{d} t}+\nu Aw+B(u,w)+B(w,u)-B(w,w)=f-g.\nonumber
\end{equation}
Taking the inner product of the above equation with $A^2 w$ and using (\ref{triform21}) and (\ref{triform22}) we obtain
\begin{equation}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}\|w\|_2^2+ \nu\|w\|_3^2 \le (c+c')\|u\|_3\|w\|_2^2 + c\|w\|_3\|w\|_2^2 + \|f-g\|_2\|w\|_2.\nonumber
\end{equation}
We apply Young's inequality to the second term on the right hand side and get
$$
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}\|w\|_2^2 \le (c+c')\|u\|_3\|w\|_2^2 + \frac{c^2}{4\nu}\|w\|_2^4 +\|f-g\|_2\|w\|_2.
$$
Dividing the above inequality by $\|w\|_2$ yields
$$
\frac{\mathrm{d}}{\mathrm{d} t}\|w\|_2 \le (c+c')\|u\|_3\, \|w\|_2 + \frac{c^2}{4\nu}\|w\|_2^3 +\|f-g\|_2.
$$
\indent Now we consider
\[
y(t)=\|w\|_2 e^{-\beta(t)}
\]
with $\beta(t)=\int_0^t (c+c')\|u(s)\|_3\, \mathrm{d} s$. The inequality in the new variable becomes
\[
\frac{\mathrm{d} y}{\mathrm{d} t}\le \delta(t)+\alpha y^3,
\]
where $\alpha=\frac{c^2}{4\nu}e^{2\beta(T)}$ and $\delta=\|f-g\|_2$.
By lemma \ref{lemma}, $\|w\|_2$ remains bounded if the condition (\ref{m2cond}) is satisfied.
\end{proof}
\section{Convergence of Galerkin approximations}
Here we show that if a strong solution with minimal or second order regularity exists, Galerkin approximations converge to it. Similar results about convergence of various numerical methods, assuming the existence of a strong solution, are given for finite element methods by Heywood and Rannacher (1982), for a Fourier collocation method by E (1993) and for a nonlinear Galerkin method by Devulder, Marion and Titi (1993). Here, as in Chernyshenko et al (2006), we make no assumption on the regularity of the Galerkin approximations.\\
\indent For the minimally regular case the result we prove here holds for the solution of the equations over a general bounded domain. For the second order regular solutions we need to use inequality (\ref{triform22}) which is valid only in a periodic domain or the whole of $\mathbb{R}^3$.\\
\indent In the following theorems we let $P_n$ be the orthogonal projection in $H$ onto the space spanned by the first $n$ eigenfunctions of the Stokes operator $A$, $\{w_j\}_{j=1}^n$, ordered so that their corresponding eigenvalues satisfy
$0 < \lambda_1 \le \lambda_2 \le \dots$ .
Since the eigenfunctions of $A$ are smooth (Constantin and Foias 1988), for any $u\in V^m$, $m\ge 0$ (with $V^0=H$) we have $u_n=P_n u=\sum_{j=1}^n (u,w_j)w_j\in V^m$. We write $Q_n={\rm Id}-P_n$ for the complementary projection.\\
\indent We note that what we obtain here about the convergence of Galerkin approximations would be useful even if the existence of regular solutions was known.
\subsection*{(a) Minimal regularity}
The following theorem, like the robustness theorem in the minimal regularity case, holds in sufficiently smooth bounded domains as well as in the absence of boundaries.
\begin{theorem}\label{galerkin1}
Let $u_0\in V$, $f\in L^2(0,T;H)$ and $u\in L^{\infty}(0,T;V) \cap L^2(0,T;V^2)$ be a strong solution of the Navier-Stokes equations
$$
\frac{\mathrm{d} u}{\mathrm{d} t} +\nu Au +B(u,u)=f(t)\quad\mbox{with}\quad u(0)=u_0.
$$
Then $u_n$, the solution of Galerkin system
\begin{equation}\label{galerkin_sys}
\frac{\mathrm{d} u_n}{\mathrm{d} t} +\nu Au_n +P_n B(u_n,u_n)=P_n f(t)\quad\mbox{with}\quad u_n(0)=P_n u_0,
\end{equation}
converges strongly to $u$ in both $L^{\infty}(0,T;V)$ and $L^2(0,T;V^2)$ as $n\to \infty$.
\end{theorem}
\begin{proof}
We consider $w_n=u-u_n$ which satisfies
$$
\frac{\mathrm{d} w_n}{\mathrm{d} t} +\nu Aw_n +P_n B(u,w_n) +P_n B(w_n,u) -P_n B(w_n,w_n) =Q_n f(t) -Q_n B(u,u).
$$
Letting $q_n=u-P_n u$ and taking the inner product of the above equation with $Aw_n$ while noting that
$(P_n B(u,v),Aw_n)=b(u,v,P_nAw_n)$ we obtain
\begin{eqnarray*}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}|Dw_n|^2 +\nu|Aw_n|^2
\le &&\,k\left(|Du|\,|Dw_n|^{1/2}|Aw_n|^{3/2}\right. \\
&&+|Dw_n|\,|Du|^{1/2}\,|Au|^{1/2}|Aw_n|
+|Dw_n|^{3/2}|Aw_n|^{3/2}\\
&&\left.+|Du|^{3/2}|Au|^{1/2}|Aq_n| \right)+|f|\,|Aq_n|.
\end{eqnarray*}
Applying Young's inequality and redefining the constant $k$ gives
\begin{eqnarray*}
\frac{\mathrm{d}}{\mathrm{d} t}|Dw_n|^2 \le &&\,k\left\{\left( \frac{1}{\nu^3}|Du|^4
+\frac{1}{\nu}|Du|\,|Au|\right)|Dw_n|^2
+\frac{1}{\nu^3}|Dw_n|^6\right.\\
&&\left.+|Du|^{3/2}|Au|^{1/2}|Aq_n|\right\}+|f|\,|Aq_n|.
\end{eqnarray*}
Now letting
$$
\beta(t)=k\int_0^t\left( \frac{1}{\nu^3}|Du|^4 +\frac{1}{\nu}|Du|\,|Au|\right)\,\mathrm{d} s
$$
and
$$
y_n(t)=\mathrm{e}^{-\beta(t)}|Dw_n|^2
$$
the above inequality can be written as
$$
\frac{\mathrm{d} y_n}{\mathrm{d} t} \le \alpha y_n^3 +\delta_n(t),
$$
with $\alpha=\frac{k}{\nu^3}\mathrm{e}^{2\beta(T)}$ and $\delta_n(t)=|f|\,|Aq_n| +k|Du|^{3/2}|Au|^{1/2}|Aq_n|$.
\\
\indent Using the H\"older inequality we have
\begin{eqnarray*}
\int_0^T \delta_n(s)\,\mathrm{d} s \le \left[\left(\int_0^T |f|^2\mathrm{d} s\right)^{1/2}
+k\left(\int_0^T |Du|^{3}|Au|\mathrm{d} s\right)^{1/2}\right]
\left(\int_0^T |Aq_n|^2\mathrm{d} s\right)^{1/2}
\end{eqnarray*}
Since $u\in L^2(0,T;V^2)$, $u(s)\in V^2$ for almost every $s\in [0,T]$ and therefore
q_n(s)=Q_n u(s)\to 0$ in $V^2$ as $n\to\infty$ and $|Aq_n(s)|^2\le |Au(s)|^2$ for a.e. $s\in [0,T]$. Hence the
Lebesgue dominated convergence theorem implies that $\int_0^T |Aq_n|^2\,\mathrm{d} s$ tends to zero
as $n\to \infty$ and therefore
$$
\int_0^T \delta_n(s)\,\mathrm{d} s\to 0
$$
as $n\to \infty$. Since $y_n(0)\to 0$ as $n\to\infty$, lemma \ref{lemma} shows that
$y_n$, and hence $|Dw_n|^2$, converges to zero uniformly on
$[0,T]$ as $n\to\infty$.
\end{proof}
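The driving mechanism of the proof is simply that the spectral tails $|Aq_n| = |AQ_n u|$ are square integrable and shrink with $n$. A toy illustration (our own sketch, using the Weyl-type growth $\lambda_j \sim j^{2/3}$ of the Stokes eigenvalues in three dimensions and an artificial coefficient sequence) shows this tail decay:
\begin{verbatim}
import numpy as np

J = 100_000
j = np.arange(1, J + 1)
lam = j**(2.0/3.0)       # Stokes eigenvalue growth, up to constants
c = lam**(-2.5)          # coefficients of a u with |Au| finite
tail2 = np.cumsum((lam**2 * c**2)[::-1])[::-1]   # suffix sums
for n in (10, 100, 1000, 10_000):
    print(n, np.sqrt(tail2[n]))   # |A Q_n u| decreases to zero
\end{verbatim}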
\subsection*{(b) Second order regularity}
Here also, in a similar way to the previous case, we try to find a bound on the $V^2$-norm of the difference between a second order regular strong solution and Galerkin approximations. However here the more regular solution space makes the proof easier.
\begin{theorem}\label{galerkin2}
Let $u_0\in V^2$, $f\in L^2(0,T;V)\cap L^1(0,T;V^2)$ and $u\in L^{\infty}(0,T;V^2) \cap L^2(0,T;V^3)$ be
a strong solution of the Navier-Stokes equations
$$
\frac{\mathrm{d} u}{\mathrm{d} t} +\nu Au +B(u,u)=f(t)\quad\mbox{with}\quad u(0)=u_0.
$$
Then $u_n$, the solution of Galerkin system
\begin{equation}\label{galerkin_sys2}
\frac{\mathrm{d} u_n}{\mathrm{d} t} +\nu Au_n +P_n B(u_n,u_n)=P_n f(t)\quad\mbox{with}\quad u_n(0)=P_n u_0,
\end{equation}
converges strongly to $u$ in both $L^{\infty}(0,T;V^2)$ and $L^2(0,T;V^3)$ as $n\to \infty$.
\end{theorem}
\begin{proof}
Let $w_n=u-u_n$. Then $w_n$ satisfies
$$
\frac{\mathrm{d} w_n}{\mathrm{d} t} +\nu Aw_n +P_n B(u,w_n) +P_n B(w_n,u) -P_n B(w_n,w_n) = Q_n f -Q_n B(u,u)
$$
Taking the inner product of the above equation with $A^2 w_n$ and using (\ref{triform21}) and
(\ref{triform22}) we obtain
\begin{eqnarray*}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d} t}\|w_n\|_2^2 +\nu\|w_n\|_3^2
&\le& (c+c')\|u\|_3\|w_n\|_2^2 +c\|w_n\|_3\|w_n\|_2^2\\
& & +\|Q_nf-Q_n B(u,u)\|_2\|w_n\|_2
\end{eqnarray*}
Now we use Young's inequality to remove the dependence on $\|w_n\|_3$ and then divide by $\|w_n\|_2$ to obtain
$$
\frac{\mathrm{d}}{\mathrm{d} t}\|w_n\|_2 \le (c+c')\|u\|_3 \|w_n\|_2 +\frac{c^2}{4\nu}\|w_n\|_2^3 +\|Q_n f-Q_nB(u,u)\|_2,
$$
in which the coefficient of $\|w_n\|_2$ does not depend on $n$.\\
\indent Letting
$$
y_n=\|w_n\|_2 e^{-\beta(t)},
$$
with $\beta(t)= \int_0^t (c+c')\|u(s)\|_3\;\mathrm{d} s$ yields
$$
\dot{y}_n \le \delta(t) + \alpha y_n^3,
$$
where $\alpha=\frac{c^2}{4\nu}e^{2\beta(T)}$ and $\delta(t)=\|Q_n f-Q_nB(u,u)\|_2$.
So by lemma \ref{lemma} if
$$
\eta = \|Q_n u_0\|_2 + \int_0^T \|Q_n f-Q_nB(u,u)\|_2\;\mathrm{d} s \to 0 \quad \mathrm{as}
\quad n\to\infty,
$$
then $y_n(t)\to 0$ uniformly on $[0,T]$ as $n\to \infty$.\\
By (\ref{galerkin_sys2}) and since $u_0\in V^2$, $\|Q_n u_0\|_2\to 0$ as $n\to\infty$. We know that
\begin{equation}\label{triform23}
\|B(u,u)\|_2 \le c\|u\|_2\|u\|_3
\end{equation}
(Kato 1972, Constantin and Foias 1988) and therefore $f(s) -B(u(s),u(s)) \in V^2$ for almost every $s\in [0,T]$. So since $\{w_j\}_{j=1}^\infty$ form a basis for $V^2$ as well as $H$, $\|Q_n \left(f(s)-B(u(s),u(s))\right)\|_2\to 0$ and
$$
\|Q_n \left(f(s)-B(u(s),u(s))\right)\|_2 \le \|f(s)-B(u(s),u(s))\|_2
$$
for almost every $s\in [0,T]$. Therefore by the Lebesgue dominated convergence theorem it follows that
$$
\int_0^T \|Q_n \left(f(s)-B(u(s),u(s))\right)\|_2\,\mathrm{d} s \to 0
$$
and the result follows.
\end{proof}
\section{Numerical verification of the existence of a strong solution}
Here we show that the existence of minimal and second order regular strong solutions can be verified via computations using sufficiently refined Galerkin approximations.
\begin{theorem}\label{postt}
$\mathbf{i)}$ Consider the Navier-Stokes equations
\begin{equation}\label{post_nse}
\frac{\mathrm{d} u}{\mathrm{d} t}+\nu Au+B(u,u)=f\quad\mbox{with}\quad u(0)=u_0
\end{equation}
with $u_0\in V$, $f\in L^2(0,T;H)\cap L^1(0,T;V)$, that hold in a bounded domain $\Omega$ with sufficiently smooth boundary or in a periodic domain. \\
\indent Let $v\in L^{\infty}(0,T;V)\cap L^2(0,T;V^2)$ be a numerical approximation of $u$ satisfying
$$
\frac{\mathrm{d} v}{\mathrm{d} t}+\nu Av+B(v,v) \in L^1(0,T;V)\cap L^2(0,T;H)
$$
and
\begin{eqnarray}
&&|Dv(0)-Du_0| + \int_0^T \|\frac{dv(s)}{ds}+\nu Av(s)+B(v(s),v(s))
-f(s)\|_1\,\mathrm{d} s \nonumber\\
&&\quad < \frac{1}{k}\left(\frac{\nu^3}{27T}\right)^{1/4}
\exp\left(-\frac{k^2}{2}\int_0^T\frac{27k^2}{2}\frac{1}{\nu^3}|Dv(s)|^4
+\frac{1}{\nu}|Dv(s)||Av(s)|\,\mathrm{d} s\right).\label{posteriori}
\end{eqnarray}
Then the Navier-Stokes equations (\ref{post_nse}) have a strong solution
$u\in L^{\infty}(0,T;V)\cap L^2(0,T;V^2)$.\\
\\
\indent $\mathbf{ii)}$ Let $u$ be a strong solution of (\ref{post_nse}). Then there exists an $N$ such that the Galerkin approximations $u_n$ satisfy the inequality (\ref{posteriori}) for all $n>N$. Therefore, in view of part $\mathbf{(i)}$, $u$ passes the \emph{a posteriori} test as a solution approximated by $u_n$, i.e.\ the existence of a strong solution with minimal regularity can be verified by the Galerkin approximations.
\end{theorem}
\begin{proof}
$\mathbf{i)}$ Considering
$$
g=\frac{\mathrm{d} v}{\mathrm{d} t}+\nu Av +B(v,v),
$$
$v$ is a strong solution of
$$
\frac{\mathrm{d} \bar v}{\mathrm{d} t}+\nu A\bar v +B(\bar v,\bar v)=g.
$$
So by theorem \ref{robust1}, if the inequality (\ref{posteriori}) holds, the solution of the equations with nearby data $(u_0,f)$ is a strong solution.\\
\\
\indent $\mathbf{ii)}$ The strong convergence of $u_n$ to $u$ in $L^{\infty}(0,T;V)$ and $L^2(0,T;V^2)$ is guaranteed by theorem \ref{galerkin1}. Therefore the right hand side of (\ref{posteriori}) is bounded below for every $n$. It follows from (\ref{galerkin_sys}) that $|Du_n(0)-Du_0|\to 0$ as $n\to 0$ and
$$
\frac{du_n}{dt}+\nu Au_n +B(u_n,u_n)-f=Q_n[B(u_n,u_n)-f].
$$
Therefore it only remains to show that $\|Q_n[B(u_n(s),u_n(s))-f(s)]\|_1$ converges to zero as $n\to \infty$ for a.e. $s\in[0,T]$; it is then clear that (\ref{posteriori}) will be satisfied for all $n$ sufficiently large.\\
\indent For the nonlinear operator we have
$$
\|B(u_n,u_n)\|_1 = \sqrt{(D(u_n\cdot \nabla)u_n,D(u_n\cdot \nabla)u_n)}\le c\left( \|Du_n\|_{L^4(\Omega)}^4 + \|u_n\|_{L^{\infty}(\Omega)}^2 |Au_n|^2 \right)^{1/2}
$$
Using the Sobolev inequalities we can write
$$
\|Du_n\|_{L^4(\Omega)} \le c\|Du_n\|_{L^6(\Omega)} \le c|Au_n|
$$
and
$$
\|u_n\|_{L^{\infty}} \le c|Au_n|.
$$
Therefore $\|B(u_n,u_n)\|_1\le c|Au_n|^2$ which implies
$$
f(s)-B(u_n(s),u_n(s)) \in V \quad\mbox{for a.e. $s\in [0,T]$}
$$
and we will have
$$
\|Q_n(B(u_n(s),u_n(s))-f(s))\|_1 \to 0 \quad\mbox{as $n\to \infty$ and for a.e. $s\in [0,T]$}
$$
and
$$
\|Q_n(B(u_n(s),u_n(s))-f(s))\|_1 \le \|B(u_n(s),u_n(s))-f(s)\|_1 \quad\mbox{for a.e. $s\in [0,T]$}.
$$
By the Lebesgue dominated convergence theorem we conclude
$$
\int_0^T\|Q_n(B(u_n(s),u_n(s))-f(s))\|_1\mathrm{d} s \to 0 \quad\mbox{as $n\to \infty$}.
$$
\end{proof}
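For completeness we indicate how the test of part $\mathbf{(i)}$ can be assembled from computed data. The sketch below (an added illustration; all array names are hypothetical placeholders for quantities extracted from a numerical solution $v$) simply compares the two sides of \eqref{posteriori}:
\begin{verbatim}
import numpy as np

def a_posteriori_test(t, resid, Dv, Av, Dv0_err, nu, k):
    """resid[i] = ||dv/ds + nu*Av + B(v,v) - f||_1 at time t[i],
    Dv[i] = |Dv(t[i])|, Av[i] = |Av(t[i])|,
    Dv0_err = |Dv(0) - Du_0|; all assumed given."""
    T = t[-1]
    lhs = Dv0_err + np.trapz(resid, t)
    beta = np.trapz(27*k**2/2 * Dv**4/nu**3 + Dv*Av/nu, t)
    rhs = (nu**3/(27*T))**0.25 / k * np.exp(-k**2/2 * beta)
    return lhs < rhs   # True => a strong solution exists on [0, T]
\end{verbatim}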
A similar result holds for the strong solutions with second order regularity in a periodic domain.%
\begin{theorem}\label{postt2}
$\mathbf{i)}$ Consider the Navier-Stokes equations
\begin{equation}\label{post2_nse}
\frac{\mathrm{d} u}{\mathrm{d} t}+\nu Au+B(u,u)=f\quad\mbox{with}\quad u(0)=u_0
\end{equation}
with $u_0\in V^2$, $f\in L^2(0,T;V^1)\cap L^1(0,T;V^2)$, that hold in a periodic domain. \\
\indent Let $v\in L^{\infty}(0,T;V^2)\cap L^2(0,T;V^3)$ be a numerical approximation of $u$ satisfying
$$
\frac{\mathrm{d} v}{\mathrm{d} t}+\nu Av+B(v,v) \in L^1(0,T;V^2)\cap L^2(0,T;V)
$$
and
\begin{eqnarray}
&\|v(0)-u_0\|_2 + \int_0^T \|\frac{dv(s)}{ds}+\nu Av(s)
+B(v(s),v(s))-f(s)\|_2\,\mathrm{d} s \nonumber\\
&\qquad < \frac{1}{c}\sqrt{\frac{2\nu}{T}}\exp\left(-\int_0^T (c+c')\|v\|_3\, \mathrm{d} t\right).\label{posteriori2}
\end{eqnarray}
Then the Navier-Stokes equations (\ref{post2_nse}) have a strong solution
$u\in L^{\infty}(0,T;V^2)\cap L^2(0,T;V^3)$.\\
\\
\indent $\mathbf{ii)}$ Let $u$ be a strong solution of (\ref{post2_nse}). Then there exists an $N$ such that the Galerkin approximations $u_n$ satisfy the inequality (\ref{posteriori2}) for all $n>N$. Therefore, in view of part $\mathbf{(i)}$, $u$ passes the \emph{a posteriori} test as a solution approximated by $u_n$, which means that the existence of the strong solution $u$ can be verified using the Galerkin approximations.
\end{theorem}
\indent Using the robustness result for the strong solutions with second order regularity, theorem \ref{robust2}, the proof of the above theorem closely parallels that of theorem \ref{postt}.
\section{A physical application: channel flow} We considered the Navier-Stokes equations for a flow in a domain with non-moving boundary and with the forcing function in the form of a body force. A physical situation for which the results of this paper and in particular the \emph{a posteriori} test of theorem \ref{postt} can be applied is a channel flow in the domain $0<x<L_x$, $0<y<1$, $0<z<L_z$ with the velocity field $u(x,y,z,t)$ considered to be periodic in $x$ and $z$ with periods $L_x$ and $L_z$ respectively and zero at $y=0$ and $y=1$. The body force $g$ is assumed constant and equal to $(1,1,1)$.
Here we show how to write the inequality (\ref{posteriori}) for the Galerkin approximations, $u_n$, of the solution $u$ of this problem.\\
\indent By theorem \ref{postt} we know that if for some $n$ the Galerkin approximation, $u_n$, satisfies
\begin{eqnarray*}
&&|Du_n(0)-Du_0| + \int_0^T \|\frac{du_n(s)}{ds}+\nu Au_n(s)+B(u_n(s),u_n(s))
-f(s)\|_1\,\mathrm{d} s \\
&&\quad < \frac{1}{k}\left(\frac{\nu^3}{27T}\right)^{1/4}
\exp\left(-\frac{k^2}{2}\int_0^T\frac{27k^2}{2}\frac{1}{\nu^3}|Du_n(s)|^4
+\frac{1}{\nu}|Du_n(s)||Au_n(s)|\,\mathrm{d} s\right)
\end{eqnarray*}
then a strong solution $u$ exists and the Galerkin approximations converge to it. To check this inequality for a Galerkin approximation of the above channel flow example we need to compute $Au_n$, $B(u_n,u_n)$ and $f=\Pi g$. \\
\indent The functions
$$
w_{\bf{k}}=\mathrm{e}^{2\pi i(\frac{k_1}{L_x}x+\frac{k_3}{L_z}z)}\sin({\pi k_2y})
$$
with ${\bf{k}}=(k_1,k_2,k_3)$, form an orthogonal basis for the space of $L^2$-functions on the cubic domain introduced above, which have periodic boundary values in the $x$ and $z$ directions and are zero on $y=0$ and $y=1$. We can therefore define the Galerkin approximation $u_n$ in terms of $w_{\bf k}$ as
\begin{equation}\label{u_expansion}
u_n=\sum_{{\bf k}={\bf n}_0}^{\bf n} \alpha_{\bf{k}}w_{\bf{k}},\quad\mbox{with }\quad\nabla\cdot u_n=0
\end{equation}
where $\alpha_{\bf{k}}=(\alpha_{1\bf{k}}(t),\alpha_{2\bf{k}}(t),\alpha_{3\bf{k}}(t))$, $\alpha_{(-k_1,k_2,-k_3)}=\bar{\alpha}_{(k_1,k_2,k_3)}$, ${\bf n}_0=(-n,0,-n)$ and ${\bf n}=(n,n,n)$. Since $u_n$ is divergence-free we have
$$
\sum_{{\bf k}={\bf n}_0}^{\bf n} 2\pi i(\frac{k_1}{L_x}\alpha_{1\bf{k}}
+\frac{k_3}{L_z}\alpha_{3\bf{k}})\sin(\pi k_2y)
+\pi k_2 \;\alpha_{2\bf{k}}\cos(\pi k_2 y)=0.
$$
After expanding $\cos(\pi k_2 y)$ in terms of sine functions we obtain
$$
\pi i (\frac{k_1}{L_x}\alpha_{1\bf{k}}+\frac{k_3}{L_z}\alpha_{3{\bf k}})+
\sum_{l=\lfloor\frac{k_2}{2}\rfloor}^{\lfloor\frac{k_2+n}{2}\rfloor}
(2l+1-k_2)(\frac{1}{2k_2-2l-1}+\frac{1}{2l+1})\alpha_{2\bf{k}}=0.
$$
\indent Defining
$$
\hat{\bf k}=(\frac{\pi i k_1}{L_x},
\sum_{l=\lfloor\frac{k_2}{2}\rfloor}^{\lfloor\frac{k_2+n}{2}\rfloor}
(2l+1-k_2)(\frac{1}{2k_2-2l-1}+\frac{1}{2l+1}),\frac{\pi i k_3}{L_z} )
$$
and using (\ref{u_expansion}) we obtain
\begin{eqnarray*}
Au_n&=& 4\pi^2\sum_{{\bf k}={\bf n}_0}^{\bf n}(\frac{k_1^2}{L_x^2}+\frac{k_2^2}{4}+\frac{k_3^2}{L_z^2})\alpha_{\bf{k}}\;w_{\bf{k}}
\end{eqnarray*}
and
\begin{eqnarray*}
&&B(u_n,u_n)=2\sum_{{\bf k}=2{\bf n}_0}^{2\bf n}\: \sum_{{\bf j}={\bf k}_{-}} ^{\bf{k}_{+}}
\:\sum_{m=\lfloor \frac{k_2+1}{2} \rfloor}^{\lfloor \frac{k_2+2n}{2} \rfloor}
\Bigg\{ (\alpha_{ {\bf k}_m-{\bf j} } \cdot{\bf{j}}) \left( \alpha_{\bf{j}}
-\frac{\hat{\bf k}}{|\hat{\bf k}|^2}(\alpha_{\bf{j}}\cdot \hat{\bf{k}}) \right)\\
&&\qquad \frac{1}{\pi} \left( \frac{1}{2m-2j_2+1}+\frac{1}{2m-2j_2-2k_2+1}-\frac{1}{2m+1}
-\frac{1}{2m-2k_2+1} \right) w_{\bf{k}} \Bigg\}
\end{eqnarray*}
where ${\bf k}_m=(k_1,2m+1-k_2,k_3)$, ${\bf k}_{-}=(\min\{k_1,0\},\min\{k_2,0\},\min\{k_3,0\})$ and
${\bf k}_{+}=(\max\{k_1,0\},\max\{k_2,0\},\max\{k_3,0\})$. \\
\indent Now we note that $f=(1,0,1)$, since $g-f=(0,1,0)$ is perpendicular (with respect to the $L^2$ inner product) to all functions in $H$. Therefore using the expansion of $u_n$ and $Au_n$ with respect to $\{w_{\bf k}\}_{{\bf k}={\bf n}_0}^{\bf n}$ and $B(u_n,u_n)$ with respect to $\{w_{\bf k}\}_{{\bf k}={2\bf n}_0}^{2\bf n}$ we can write (noting that the coefficients of $w_{\bf k}$ when $k_1,k_2$ or $k_3$ are less than $-n$ or bigger than $n$ are zero for $u_n$ and $Au_n$)
$$
E_n=\frac{\mathrm{d} u_n}{\mathrm{d} t}+\nu Au_n+B(u_n,u_n)-f=\sum_{{\bf k}=2{\bf n}_0}^{2\bf n} \beta_{\bf k}w_{\bf k}-(1,0,1)
$$
and therefore
$$
DE_n=\frac{\partial E_n}{\partial x}+\frac{\partial E_n}{\partial y}+\frac{\partial E_n}{\partial z}=2\sum_{{\bf k}=2{\bf n}_0}^{2\bf n} \beta_{\bf k} (\hat{k}_1+\hat{k}_2+\hat{k}_3)w_{\bf k}
$$
and
$$
\|\frac{\mathrm{d} u_n}{\mathrm{d} t}+\nu Au_n+B(u_n,u_n)-f\|_1^2=2L_xL_z\sum_{{\bf k}=2{\bf n}_0}^{2\bf n}(\beta_{1{\bf k}}^2+\beta_{2{\bf k}}^2+\beta_{3{\bf k}}^2)(\hat{k}_1+\hat{k}_2+\hat{k}_3)^2.
$$
The norms of $Du_n$ and $Au_n$ are computed as
\begin{eqnarray*}
|Du_n|^2&=& 2L_xL_z\sum_{{\bf k}={\bf n}_0}^{\bf n}(\alpha_{1{\bf k}}^2+\alpha_{2{\bf k}}^2+\alpha_{3{\bf k}}^2)(\hat{k}_1+\hat{k}_2+\hat{k}_3)^2,\\
|Au_n|^2&=& 2\pi^2 L_xL_z \sum_{{\bf k}={\bf n}_0}^{\bf n} (\alpha_{1{\bf k}}^2+\alpha_{2{\bf k}}^2+\alpha_{3{\bf k}}^2)(\frac{k_1^2}{L_x^2}+\frac{k_2^2}{4}+\frac{k_3^2}{L_z^2})^2 .
\end{eqnarray*}
The remaining term on the left-hand side of (\ref{posteriori}), $|Du_n(0)-Du_0|$, is obtained in a similar way to $|Du_n|$.\\
\indent Finally we need the value of the constant $k$ in the inequality (\ref{posteriori}). The cubic domain here also has the strong local Lipschitz property and, as was shown in section 3, for such a domain $k=72\cdot 2^{3/4}$.\\
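The two norm formulas above translate directly into code. The sketch below (our own illustration; the coefficient array is a hypothetical placeholder, and we read the squared factors as squared moduli, since the $\alpha_{\bf k}$ and $\hat k_1+\hat k_2+\hat k_3$ are complex) evaluates $|Du_n|$ and $|Au_n|$ from the Galerkin coefficients:
\begin{verbatim}
import numpy as np

def seminorms(alpha, kvec, khat_sum, Lx, Lz):
    """alpha: (modes, 3) complex Galerkin coefficients,
    kvec: (modes, 3) integer wave numbers k = (k1, k2, k3),
    khat_sum: (modes,) values of khat_1 + khat_2 + khat_3."""
    a2 = np.sum(np.abs(alpha)**2, axis=1)
    Du2 = 2*Lx*Lz * np.sum(a2 * np.abs(khat_sum)**2)
    lam = (kvec[:, 0]/Lx)**2 + kvec[:, 1]**2/4 + (kvec[:, 2]/Lz)**2
    Au2 = 2*np.pi**2*Lx*Lz * np.sum(a2 * lam**2)
    return np.sqrt(Du2), np.sqrt(Au2)
\end{verbatim}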
\section{Conclusion}
We extended the results of Chernyshenko et al (2006) to the less regular strong solutions not considered in their paper. Unlike their results, the robustness conditions on the data that we obtained for the minimally and second order regular strong solutions in theorems \ref{robust1} and \ref{robust2} depend explicitly on the viscosity coefficient: as the viscosity coefficient becomes smaller, the conditions (\ref{m1cond}) and (\ref{m2cond}) become more restrictive, which is clearly undesirable.
Moreover, since this is due to the less elegant inequalities available for the trilinear form in the less regular spaces considered here, it does not seem that we can remove the viscosity coefficient from the robustness conditions.\\
\indent On the other hand, the results proved here for the minimally regular strong solutions are valid in a sufficiently smooth bounded domain as well as in the absence of boundaries, while for the second order regular solutions considered here, and for the more regular ones studied in the above-mentioned paper, the results are restricted to a periodic domain or the whole of $\mathbb{R}^3$.
The reason is that the inequalities we need to use in these cases for $|(B(u,v),v )_m|$
when $m\ge 2$ are valid in the absence of boundaries where we know that swapping the action of the Leray projector and the Laplacian operator is possible.\\
\indent It would be interesting to see if there exists a big enough $m$ for which it is possible to swap the action of the Laplacian operator and the Leray projector in $D(A^{m/2})$ over a bounded domain.
If such a finite $m$ exists we can extend the robustness results for regular enough solutions to the more physically relevant bounded domains.
Furthermore, for two dimensional bounded domains equality of the Stokes and Laplacian operators in $D(A^{m/2})$ can be used in obtaining a better bound on the attractor dimension in $L^2$.\\
|
1,116,691,497,870 | arxiv | \section{Introduction}
Let $f:\mathbb{C}\to\mathbb{C}$ be an entire function. Newton's root finding method
for $f$ is implemented by iterating the associated {\em Newton map}
\[
N_f:\mathbb{C}\to\widehat{{\C}}, \;z\mapsto z-\frac{f(z)}{f'(z)}\;.
\]
It is well known that $\xi\in\mathbb{C}$ is a fixed point of $N_f$ if and
only if $f(\xi)=0$. Furthermore, every finite fixed point $\xi$ of
$N_f$ is attracting, so it has an invariant neighborhood on which
$N_f$-orbits converge locally uniformly to $\xi$. In 2003, Douady
raised the following question: if there exists a {\em virtual
immediate basin} (an invariant, unbounded domain on which
$N_f$-orbits converge locally uniformly to $\infty$), does this
imply that $\infty$ is a `virtual root' of $f$, in other words, does
this imply that $0$ is an {\em asymptotic value} of $f$? In this
paper, we give a condition under which this is true. A recent result
of Bergweiler, Drasin and Langley \cite{BDL} implies that the condition is
sharp when the Julia set of Newton maps is connected. Conversely, we
show that if $f$ has a singularity of logarithmic type over $0$,
then this singularity is contained in a virtual immediate basin of
$N_f$; if it is not of logarithmic type, then we provide
counterexamples.
The dynamics of $N_f$
partitions the Riemann sphere $\widehat{{\C}}$ into two completely invariant
parts: the open {\em Fatou set} of all points at which the iterates
$\{N_f^{\circ n}\}_{n=0}^{\infty}$ are defined and form a normal
family in the sense of Montel, and its complementary {\em Julia set}
that contains the backward orbit of $\infty$; see \cite{Bergweiler,
Milnor} for an introduction to these concepts. Note that starting
values in the Julia set will never converge to an attracting fixed
point of $N_f$.
A component of the Fatou set of $N_f$ for which no point converges
to a root of $f$ under iteration is either {\em wandering} or will
eventually land on a cycle of {\em B\"ottcher domains}, {\em Leau
domains}, {\em Siegel disks}, {\em Herman rings} or {\em Baker
domains} (compare \cite[Theorem 6]{Bergweiler}).
The possibilities become much more restricted when considering an
{\em invariant} component $U$ of the Fatou set, so that
$N_f(U)\subset U$. In this case, it follows from Proposition
\ref{Prop_NewtonMaps} that $U$ either contains a root of $f$, or is
an invariant Herman ring or Baker domain.
Shishikura \cite{Shishikura} has shown that if $N_f$ is rational,
then its Julia set is connected (see Proposition
\ref{Prop_RationalNewton} for a characterization of rational Newton
maps). It is conjectured that Shishikura's result can be extended to
all Newton maps of entire functions. If this is true, an invariant
Fatou component of $N_f$ either contains a root of $f$ or is a
virtual immediate basin (see Section \ref{Sec_Baker} for the precise
definition).
In this paper, we continue the analysis of virtual immediate basins
in \cite{MS} and \cite{RS}. We prove that if $f$ has a logarithmic
singularity over $0$, then $N_f$ has a virtual immediate basin (in
1994, Bergweiler, von Haeseler, Kriete, Meier and Terglane
investigated a class of functions $f$ that tend to $0$ in a sector
and showed that a right end of this sector is contained in a Baker
domain of $N_f$ \cite[Theorem 3.3]{BHK}).
For non-logarithmic singularities over $0$, we give examples of
functions whose Newton maps do not have a virtual immediate basin
associated to these singularities.
Furthermore, we show that there are three classes of virtual
immediate basins for $N_f$, two of which induce an asymptotic value
at $0$ for $f$. For the third class, this statement requires an
additional assumption, without which it is false. Every such virtual
immediate basin even has an open subset of starting values $z_0$
such that as $z_n=N_f^{\circ n}(z_0)\to\infty$, $f(z_n)\to 0$.
\medskip
Our paper is structured as follows: in Section \ref{Sec_Baker}, we
give a precise definition of virtual immediate basins and state
several of their properties. In Section \ref{Sec_Singularities}, we
recall some fundamental notions concerning singular values. In
Section \ref{Sec_Main}, we prove that a logarithmic singularity over
$0$ for $f$ induces a virtual immediate basin for $N_f$, while the
counterexamples for direct singularities are treated in Section
\ref{Sec_Example}. The converse theorem is stated and proved in
Section \ref{Sec_Converse}. The underlying idea of the proof is to
compare iterates of the Newton map $\displaystyle
N_f={\rm id}-\frac{f}{f'}$ to the time 1 flow of $\displaystyle \dot
z=-\frac{f(z)}{f'(z)}$.
\section{Virtual Immediate Basins}
\label{Sec_Baker} The concept of a {\em virtual immediate basin} was
introduced in \cite{MS} to explain the behavior of Newton maps
between different accesses to $\infty$ of an immediate basin.
Examples of Newton maps having virtual immediate basins can be found
in \cite{MS, RS}; these examples are discussed in detail in
\cite{Mayer}. The name was chosen to suggest that these domains
behave in many ways like immediate basins.
The following proposition characterizes Newton maps of entire
functions.
\begin{proposition}[Newton Maps]
\label{Prop_NewtonMaps} {\em \cite[Proposition 2.8]{RS}.} Let
$N:\mathbb{C}\to\widehat{{\C}}$ be a meromorphic function. It is the Newton map of an
entire function $f:\mathbb{C}\to\mathbb{C}$ if and only if for each fixed point
$N(\xi)=\xi\in\mathbb{C}$, there exists a natural number $m>0$ such that
$N'(\xi)=\frac{m-1}{m}<1$. In this case, there exists $c\neq 0$ such
that
\[
f = c\cdot \exp\left(\int \frac{d\zeta}{\zeta-N(\zeta)}\right)\;\;.
\]
\qed
\end{proposition}
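For orientation, note that this formula merely inverts the definition of a Newton map: if $N={\rm id}-f/f'$, then
\[
\frac{1}{\zeta-N(\zeta)}=\frac{f'(\zeta)}{f(\zeta)}=\bigl(\log f\bigr)'(\zeta)\;,
\]
so integrating and exponentiating recovers $f$ up to the multiplicative constant $c$.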
Note that while all definitions in this section are written in terms
of Newton maps, they make sense for arbitrary meromorphic functions.
\begin{definition}[Immediate Basin]
\label{Def_ImmediateBasin} Let $N_f$ be a Newton map. If $\xi$ is an
attracting fixed point of $N_f$, we call the open set
\[
\{z\in\mathbb{C}\,:\,\lim_{n\to\infty}N_f^{\circ n}(z)=\xi\}
\]
its {\em basin (of attraction)}. The component of the basin that
contains $\xi$ is called its {\em immediate basin} and denoted
$U_{\xi}$.
\end{definition}
For the definition of virtual immediate basins, we need the
following concept.
\begin{definition}[Absorbing Set]
\label{Def_AbsorbingSet} Let $V$ be an $N_f$-invariant domain. A
connected and simply connected open set $A\subset V$ is called a {\em weakly absorbing set for
$V$} if $N_f(A)\subset A$ and for each compact
$K\subset V$, there exists $k\in\mathbb{N}$ such that $N_f^{\circ
k}(K)\subset A$.
We call $A$ an {\em absorbing set} if it is weakly absorbing and additionally satisfies $N_f(\overline{A})\subset A$, where the closure is taken in $\mathbb{C}$.
\end{definition}
\begin{definition}[Virtual Immediate Basin]
\label{Def_VirtualBasin} A domain $V\subset\mathbb{C}$ is called a {\em
virtual immediate basin} for $N_f$ if it is maximal (among domains in $\mathbb{C}$) with respect to
the following conditions:
\begin{enumerate}
\item for every $z\in V$, $\lim_{n\to\infty} N_f^{\circ n}(z)=\infty$;
\item $V$ contains an absorbing set.
\end{enumerate}
\end{definition}
Every virtual immediate basin is unbounded, invariant and simply
connected \cite[Theorem 3.4]{MS}. Since Newton maps of polynomials
have a repelling fixed point at $\infty$, virtual immediate basins
can appear only for Newton maps of transcendental functions.
\begin{proposition}[Rational Newton Map]
\label{Prop_RationalNewton} {\em \cite[Proposition 2.11]{RS}.} Let
$f:\mathbb{C}\to\mathbb{C}$ be an entire function. Its Newton map $N_f$ is rational
if and only if there exist polynomials $p,q$ such that $f=p\cdot
e^q$. In this case, $\infty$ is a repelling or parabolic fixed
point.
More precisely, let $m:= \deg p$ and $n:=\deg q$. If $n=0$ and
$m\geq 2$, then $\infty$ is repelling with multiplier
$\frac{m}{m-1}$. If $n>0$, then $\infty$ is parabolic with
multiplier $+1$ and multiplicity $n+1\geq 2$. \qed
\end{proposition}
In the following, let $f$ be a transcendental entire function. If
$N_f$ is rational, then it has virtual immediate basins which are
the attracting petals of the parabolic fixed point at $\infty$ (see
\cite[Theorem 10.5]{Milnor}). If $N_f$ is transcendental
meromorphic, then any virtual immediate basin is (contained in) an
invariant Baker domain.
\begin{definition}[Baker Domain]
\label{Def_BakerDomain} Let $B$ be an invariant component of the
Fatou set of $N_f$. If $\lim_{n\to\infty}N_f^{\circ
n}(z)=\infty\in\partial B$ for all $z\in B$ and $N_f$ has an
essential singularity at $\infty$, then we call $B$ a {\em Baker
domain} of $N_f$.
\end{definition}
If $B$ is a simply connected Baker domain, it contains a weakly absorbing
set $A$ by a result of Cowen \cite[Theorem 3.2]{Cowen}. Using Cowen's work, it is easy to find an absorbing subset of $A$, hence $B$ is a
virtual immediate basin. Moreover, Cowen's result implies that there
are three dynamically defined classes of virtual immediate basins.
The following notations are based on \cite{Koenig} and
\cite{FagellaBaranski}.
\begin{definition}[Conformal Conjugacy]
\label{Def_ConformalConjugacy} Let $V$ be a virtual immediate basin
of $N_f$ and define $T(z)=z+1$. If there exists a weakly absorbing set $A$
for $V$, a $T$-invariant domain $\Omega\subset\mathbb{C}$ and a holomorphic
map $\phi:V\to\Omega$ such that
\begin{equation*}
\phi\circ N_f(z) = T\circ \phi(z)
\end{equation*}
for all $z\in V$, $\phi$ is univalent on $A$ and
$\phi(A)\subset\Omega$ is a weakly absorbing set for $T|_{\Omega}$, then we call the
triple $(\Omega,\phi,T)$ a {\em conformal conjugacy} for $N_f$ on $V$.
\end{definition}
\begin{definition}[Types of Virtual Immediate Basins]
\label{Def_Types} Let $V$ be a virtual immediate basin of $N_f$. We
say that $V$ is {\em parabolic of type I} if it has a conformal
conjugacy $(\Omega,\phi,T)$ such that $\Omega=\mathbb{C}$. It
is {\em parabolic of type II} if there exists a conjugacy such that
$\Omega$ is an upper or lower half-plane and {\em hyperbolic} with
constant $h$ if there exists $h>0$ such that $\Omega$ is the strip
\[
S_h:=\{z\in\mathbb{C}\,:\,|{\rm Im}(z)|<h\}\;.
\]
\end{definition}
\begin{theorem}[Classification of Virtual Immediate Basins]
\label{Thm_Classification} {\em \cite[Theorem 3.2]{Cowen}.} Every
virtual immediate basin $V$ has a conformal conjugacy and is of
exactly one of the three types defined above. If $V$ is hyperbolic,
the constant $h$ is uniquely defined. \qed
\end{theorem}
\par\medskip \noindent {\sc Remark. } We believe that any Baker domain of a Newton map is simply
connected; if this were proved, the notion of a virtual immediate
basin would simply stand for either an attracting petal or a Baker
domain, depending on whether the map under consideration is rational
or not.
\section{Asymptotic Values}
\label{Sec_Singularities} We recall several important definitions
concerning the singular values of a meromorphic map. Singular values
play an important role in iteration theory, because their orbits
determine the dynamics of a map in many ways.
We denote by $B_r(z)$ the open disk of radius $r>0$ around $z\in\mathbb{C}$.
In this section, let $g:\mathbb{C}\to\widehat{{\C}}$ be a meromorphic function.
\begin{definition}[Regular and Singular Value]
Let $a\in\mathbb{C}$ and assume that for $r>0$, $U_r$ is a connected
component of $g^{-1}(B_r(a))$ such that $U_{r_1}\subset U_{r_2}$ if
$r_1<r_2$.\footnote{The function $U:r\mapsto U_r$ is completely
determined by its germ at $0$. Since $\bigcap_{r>0} U_r$ is
connected, the intersection contains at most one point.} We have the
following two cases:
\begin{enumerate}
\item
If $\bigcap_{r>0}U_r=\{z\}$ for some $z\in\mathbb{C}$, then $g(z)=a$. If
$g'(z)\neq 0$, then we call $z$ a {\em regular point} of $g$. If
$g'(z)=0$, then $z$ is called a {\em critical point} and $a$ a {\em
critical value}. In this case, we say that the critical point $z$
{\em lies over} $a$.
\item
If $\bigcap_{r>0}U_r=\emptyset$, then we say that $U:r\mapsto U_r$
defines a {\em singularity of $g^{-1}$} and we call $a$ an {\em
asymptotic value}. For simplicity, we call $U$ a {\em singularity}
and say it {\em lies over} $a$.
\end{enumerate}
A {\em singular value} is an asymptotic or critical value. If no
singularities or critical points lie over a point, we call it a {\em
regular value}.
\end{definition}
Note that there can be many different singularities as well as
regular or critical points over any given point $a\in\mathbb{C}$.
For a rational map, all singular values are critical values.
Asymptotic values of transcendental maps have a well-known
characterization via paths.
\begin{lemma}[Asymptotic Path]
A point $a\in\widehat{{\C}}$ is an asymptotic value of $g$ if and only if there
exists a path $\Gamma:(0,\infty)\to\mathbb{C}$ with
$\lim_{t\to\infty}\Gamma(t)=\infty$ such that
$\lim_{t\to\infty}g(\Gamma(t))=a$. \qed
\end{lemma}
We call $\Gamma$ an {\em asymptotic path} of $a$. We follow
\cite{BergweilerEremenko} in the classification of asymptotic
values.
\begin{definition}[Direct, Indirect and Logarithmic Singularity]
\label{Def_LogSing} Let $U$ be a singularity of $g^{-1}$ lying over
$a\in\mathbb{C}$.
If $a\not\in g(U_r)$ for some $r>0$, then we call $U$ a {\em direct}
singularity. Otherwise, $U$ is called an {\em indirect} singularity.
A direct singularity $U$ over $a$ is called {\em logarithmic} if $g:
U_r\to B_r(a)\setminus\{a\}$ is a universal covering map for all
sufficiently small $r$.
\end{definition}
As an example, the positive real axis is an asymptotic path of $0$ for the
map $z\mapsto \sin(z)/z$. Since this map assumes the value $0$ infinitely
many times in every $U_r$, the path is contained in an indirect singularity over $0$.
For $z\mapsto \exp z$, any left half plane is a logarithmic
singularity over $0$.
\section{A Criterion for Virtual Immediate Basins}
\label{Sec_Main}
Our first result is the following.
\begin{theorem}[Logarithmic Singularity Implies Virtual Immediate Basin]
\label{Thm_Logarithmic} Let $f:\mathbb{C}\to\mathbb{C}$ be an entire function with a
logarithmic singularity $U$ over $0$. Then there exists $r_0>0$ such
that $U_{r_0}$ is an absorbing set for a parabolic virtual immediate
basin of type I for $N_f$.
\end{theorem}
Note that if $U$ is an indirect singularity, each $U_r$ contains
infinitely many roots of $f$ and hence infinitely many attracting
fixed points of $N_f$. Therefore, $U_r$ cannot be part of a virtual
immediate basin. In Section \ref{Sec_Example}, we show that there
exist functions $f:\mathbb{C}\to\mathbb{C}$ with a direct singularity $U$ over $0$
which does not induce a virtual immediate basin for $N_f$.
\begin{proof}
The idea is to compare the iterates of $N_f$ to the time 1 flow of
the differential equation $\dot z = -\frac{f(z)}{f'(z)}$. If $r$ is
small enough, this flow sends $U_r$ isomorphically to $U_{r/e}$. We
will see that for $r$ small enough, $N_f$ maps $U_r$ univalently
into itself and that $U_r$ is an absorbing set for a virtual immediate basin of
$N_f$.
First, let $r>0$ be small enough so that $f:U_r\to B_r(0)\setminus\{0\}$
is a universal covering. Set $\eta:=-\log r$ and
$\mathbb{H}_\eta:=\{w\in\mathbb{C}\,:\, {\rm Re}(w)> \eta\}$. Since $e^{ -{\rm id}}: \mathbb{H}_\eta
\to B_r(0)\setminus\{0\}$ is also a universal covering, the map
$-\log(f):U_r\to\mathbb{H}_\eta$ is biholomorphic with inverse $\psi:\mathbb{H}_\eta
\to U_r$ (see Figure \ref{Fig_1}). With this, we get $\log (f(
\psi(w)))=-w$ for $w\in\mathbb{H}_\eta$.
\begin{figure}[hbt]
\centerline{
\begin{picture}(0,0)%
\includegraphics{Fig1.pstex}%
\end{picture}%
\setlength{\unitlength}{2072sp}
\begin{picture}(5356,4787)(3196,-4341)
\put(7741,-2626){$\mathbb{H}_\eta$} \put(4051,-601){$U_r$}
\put(6391,-106){$\psi$} \put(5811,-3796){$w\mapsto e^{-w}$}
\put(4391,-4156){$B_r(0)$} \put(4816,-3301){$0$}
\put(3196,-2401){$f$}
\end{picture}}
\caption{\label{Fig_1} If $f:U_r\to B_r(0)\setminus\{0\}$ is a universal
covering, there exists a biholomorphic map $\psi: \mathbb{H}_\eta\to U_r$.}
\end{figure}
Taking derivatives yields
\[
\frac{f'(\psi(w))}{f(\psi(w))} \cdot \psi'(w) = -1; \quad\text{hence}\quad
\psi'(w) = -\frac{f(\psi(w))}{f'(\psi(w))}\;.
\]
In other words, $\psi$ is a solution of $\dot z=
-\frac{f(z)}{f'(z)}$ and following the flow during time 1 maps
$U_r=\psi(\mathbb{H}_\eta)$ to $U_{r/e}=\psi(\mathbb{H}_{\eta+1})$.
We now want to compare $N_f$ to the time 1 flow of $\dot z=
-\frac{f(z)}{f'(z)}$. We will do the comparison in time space: we
will show that if $z=\psi(w)$ with ${\rm Re}(w)$ large enough, then
$N_f(z) = \psi(w')$ with $w'$ close to $w+1$. More precisely, we
have the following lemma.
\begin{lemma}
\label{Lem_GconjNf} There exists $\eta_0>\eta$ and a holomorphic map
$G:\mathbb{H}_{\eta_0}\to \mathbb{H}_{\eta_0+1/2}$ such that for all $w\in
\mathbb{H}_{\eta_0}$, we have
\[N_f\circ \psi(w) = \psi\circ G(w)\quad\text{and}\quad
|G(w)-(w+1)|<\frac{1}{2}.\]
\end{lemma}
The proof of Theorem \ref{Thm_Logarithmic} is then easily completed.
Indeed, set $V_0:=\psi(\mathbb{H}_{\eta_0})=U_{r_0}$ with $r_0=e^{-\eta_0}$
and let $V_{n+1}$ be the component of $N_f^{-1}(V_{n})$ that
contains $V_0$. Since all points in $\mathbb{H}_{\eta_0}$ converge to
$\infty$ under iteration of $G$ (the real part increases by at least
$1/2$ in each step), we conclude that $V:=\bigcup_{n\in\mathbb{N}} V_n$ is a
virtual immediate basin of $N_f$ with absorbing set $V_0$.
Let us now prove Lemma \ref{Lem_GconjNf}. Note that
\[
N_f(\psi(w))=\psi(w) - \frac{f(\psi(w))}{f'(\psi(w))}=\psi(w) + \psi'(w)\;.
\]
Thus, it is equivalent to prove that there exists $\eta_0>\eta$ and
a holomorphic map $G:\mathbb{H}_{\eta_0}\to \mathbb{H}_{\eta_0+1/2}$ such that for
all $w\in \mathbb{H}_{\eta_0}$, we have
\begin{equation}
\label{Eqn_ToShow} \psi(w) + \psi'(w) = \psi(G(w))
\quad\text{and}\quad |G(w) - (w+1)| < \frac{1}{2}\;.
\end{equation}
Given $w\in \mathbb{H}_{\eta+2}$, define functions $g,h: B_2(w)\to \mathbb{C}$ by
\[
g:\zeta \mapsto \frac{\psi(\zeta)-\psi(w)-\psi'(w)}{\psi'(w)}
\quad\text{and} \quad h:\zeta\mapsto \zeta-(w+1)\;.
\]
Since $g$ and $h$ satisfy $g(w)=h(w)=-1$, $g'(w)=h'(w)=1$ and can
both be extended to all of $\mathbb{H}_\eta$ as univalent maps, by Koebe's
distortion theorem there exists $\eta_0>\eta+2$ such that for every
$w\in \mathbb{H}_{\eta_0}$ and every $\zeta\in B_2(w)$,
$|g(\zeta)-h(\zeta)|<1/4$.
Clearly, $h(w+1)=0$. Note that $|h(\zeta)|=1/2>|g(\zeta)-h(\zeta)|$
when $\zeta$ belongs to the circle $\partial B_{1/2}(w+1)$. By
Rouch{\'e}'s theorem, the map $g$ has a (unique) root $\xi_w \in
B_{1/2}(w+1)$. It is now easy to see that the map $G:\mathbb{H}_{\eta_0}\to
\mathbb{C}$ defined by $G(w)=\xi_w$ satisfies equations (\ref{Eqn_ToShow}).
\end{proof}
\section{A Direct Singularity Counterexample}
\label{Sec_Example}
In this section, we will exhibit examples of entire functions with
direct singularities over $0$ that do not induce Baker domains of
the associated Newton maps. This shows that Theorem
\ref{Thm_Logarithmic} cannot be improved much further; $0$ is an
omitted value in all examples, so that not even a generalization to
omitted values is possible. We will only treat the first example in full
detail.
For ${\alpha}\in \left]0,+\infty\right[$, consider the entire function
$f_{\alpha}$ defined by
\[
f_{\alpha}(Z) = \exp\left(-\frac{1}{{\alpha}}\left(Z+\frac{1}{2i\pi} e^{2i\pi
Z}\right)\right) \;.
\]
The function $f_{\alpha}$ has infinitely many singularities over $0$ which
are necessarily direct since $f_{\alpha}$ does not vanish. We have two
kinds of asymptotic paths:
\begin{enumerate}
\item \label{sing_type1}for $k\in \mathbb{Z}$, as $t\in \mathbb{R}\to +\infty$, $f_{\alpha}(k+\frac{1}{4}-it)\to 0$;
\item \label{sing_type2}as $t\in \mathbb{R}\to +\infty$, $f_{\alpha}(t)\to 0$.
\end{enumerate}
The singularities of the first kind are of logarithmic type. Thus,
each one induces a Baker domain of parabolic type I for the Newton
map
\[
N_{\alpha}(Z) = Z+\frac{{\alpha}}{1+e^{2i\pi Z}} \;.
\]
The singularity of the second kind is not of logarithmic type and
contains infinitely many critical points of $f_{\alpha}$. We will see that
for some values of ${\alpha}$, it does not induce a Baker domain for
$N_{\alpha}$.
More precisely, observe that $N_{\alpha}(Z+1)=N_{\alpha}(Z)+1$. It follows that
we can study the dynamics of $N_{\alpha}$ modulo $1$. In other words, we
have
\[
e^{2i\pi N_{\alpha}(Z)} = g_{\alpha}\left(e^{2i\pi Z}\right)\quad\text{with}\quad
g_{\alpha}(z) = ze^{2i\pi{\alpha}/(1+z)} \;.
\]
The map $g_{\alpha}$ has a fixed point with multiplier $e^{2i\pi {\alpha}}$ at
$z=0$, a fixed point with multiplier $1$ at $z=\infty$ and an
essential singularity at $z=-1$.
Let ${\mathcal F}(N_{\alpha})$ and ${\mathcal F}(g_{\alpha})$ be the Fatou sets of
$N_{\alpha}$ and $g_{\alpha}$ and let $\pi: \mathbb{C}\to \mathbb{C}^*$ be the universal
covering $\pi:Z\mapsto z=e^{2i\pi Z}.$ We claim that
\[
{\mathcal F}(N_{\alpha}) = \pi^{-1}\bigl({\mathcal F}(g_{\alpha})\bigr) \; .
\]
It is easy to see that $\pi^{-1}\bigl({\mathcal F}(g_{\alpha})\bigr)\subset
{\mathcal F}(N_{\alpha})$ (see for example \cite{Bergweiler2}). The inclusion
${\mathcal F}(N_{\alpha})\subset \pi^{-1}\bigl({\mathcal F}(g_{\alpha})\bigr)$ is less
immediate. One may argue as follows. Assume $z_0=\pi(Z_0)\notin
{\mathcal F}(g_{\alpha})$. Then, $z_0$ lies in the closure of the set of
iterated $g_{\alpha}$-preimages of $-1$ (otherwise, the family of iterates
of $g_{\alpha}$ would be well defined near $z_0$ and avoid the infinite
set $g_{\alpha}^{-1}(\{-1\})$, thus it would be normal). It follows that
any neighborhood of $Z_0$ contains a preimage of a pole of $N_{\alpha}$.
Thus, $Z_0\notin {\mathcal F}(N_{\alpha})$.
As $z\to \infty$, we have
\[
g_{\alpha}(z) = z + 2i\pi{\alpha}+O\!\left(\frac{1}{z}\right)\;.
\]
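Indeed, expanding the exponential,
\[
g_{\alpha}(z) = z\left(1+\frac{2i\pi{\alpha}}{1+z}+O\!\left(\frac{1}{z^2}\right)\right)
= z + 2i\pi{\alpha}\,\frac{z}{1+z}+O\!\left(\frac{1}{z}\right)
= z+2i\pi{\alpha}+O\!\left(\frac{1}{z}\right)\;.
\]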
Thus, the parabolic fixed point at $\infty$ has multiplicity $2$. It
has a single attracting direction, along the positive imaginary axis (recall ${\alpha}>0$). The
full preimage of its parabolic basin under the map $e^{2i\pi Z}$ is
the union of the Baker domains of $N_{\alpha}$ induced by the
singularities of $f_{\alpha}$ of the first kind. The map $g_{\alpha}$ has
exactly two critical points: the solutions to $(1+z)^2-2 i \pi{\alpha}
z=0$.
Conjugating with $z\mapsto w=1/(z+1)$, we may put the singularity at
$\infty$ and the fixed points at $0$ and $1$. The map $g_{\alpha}$ is thus
conjugate to the meromorphic function
\[
h_{\alpha}(w) = \frac{w}{w+(1-w)e^{2i\pi {\alpha} w}} \;.
\]
The map $h_{\alpha}$ has growth order $1$ and two critical points. Thus,
it has at most $2$ asymptotic values by \cite[Corollary
3]{BergweilerEremenko}. But as $t\in \mathbb{R}\to +\infty$, $h_{\alpha}(it)\to
1$ and $h_{\alpha}(-it)\to 0$. Thus, $h_{\alpha}$ has exactly $2$ (fixed)
asymptotic values and $2$ critical values and is therefore a finite
type map. It is well known that finite type meromorphic functions
have neither wandering domains nor Baker domains \cite{BakerKotusLu,
RipponStallard}.
The map $h_{{\alpha}}$ has a fully invariant parabolic point at $0$ and
for suitably chosen ${\alpha}$, the fixed point at $1$ is Cremer (in
analogy to \cite[Theorem 11.13]{Milnor}). We want to prove that in
this case, the Fatou set of $h_{{\alpha}}$ consists of the parabolic basin
at $0$ and its preimage components. We deduce that then, the Fatou
set of $g_{\alpha}$ is equal to the parabolic basin of $\infty$ and its
preimage components. Thus, every Fatou component of $N_{\alpha}$ maps
after finitely many iterations into one of the invariant Baker
domains induced by the first kind of singularities of $f_{\alpha}$. There
is no Fatou component associated to the second kind of singularity
of $f_{\alpha}$.
So it remains to show that $h_{{\alpha}}$ has no additional non-repelling
periodic points nor Herman rings. While both claims follow directly
from Epstein's version of the Fatou-Shishikura inequality for finite
type maps \cite{Epstein1, Epstein2, Epstein3}, we provide a version
of Epstein's proof that is sufficient for our purposes; we treat
Herman rings separately in Lemma \ref{Lem_Herman}.
\begin{lemma}[Epstein]
There cannot be any additional non-repelling periodic points.
\end{lemma}
\begin{proof}
Suppose that $h_{\alpha}$ has an additional non-repelling cycle
\[
\{z_1\mapsto z_2\mapsto \ldots \mapsto z_k\mapsto z_1\}\;.
\]
Let $v_1$ and $v_2$ be the two critical values of $h_{\alpha}$, set
\[
X = \{0,1, z_1,\ldots,z_k\}, \quad X' = X\cup \{v_1,v_2\}\; .
\]
Let ${\mathcal Q}^1(X)$ (resp. ${\mathcal Q}^1(X')$) be the set of
meromorphic quadratic differentials on $\widehat{{\C}}$ which are holomorphic
outside $X$ (resp.\ $X'$) and have at most simple poles in $X$
(resp.\ $X'$). Let ${\mathcal Q}^2(X)$ be the set of meromorphic
quadratic differentials on $\widehat{{\C}}$ which are holomorphic outside $X$,
have at most double poles in $X$ and whose polar part of order $2$
along $X$ is of the form
\[
A \frac{dz^2}{z^2} + B \frac{dz^2}{(z-1)^2} + C\sum_{i=1}^k
\frac{dz^2}{(z-z_i)^2}\quad\text{with }A,B,C\in \mathbb{C}\;.
\]
The sets ${\mathcal Q}^1(X)$, ${\mathcal Q}^1(X')$ and ${\mathcal Q}^2(X)$ are
vector spaces of respective dimensions $k-1$, $k+1$ and $k+2$. We can
define a linear map $\nabla : {\mathcal Q}^2(X)\to {\mathcal Q}^1(X')$ as
follows. If $U$ is a simply connected subset of $\widehat{{\C}}\setminus X'$,
then $h_{\alpha}:h_{\alpha}^{-1}(U)\to U$ is a (trivial) covering map. We let
$(g_i:U\to \widehat{{\C}})_{i\in I}$ be the countably many inverse branches
and we set
\[
(h_{\alpha})_* q |_U = \left(\sum_{i\in I} g_i^* q\right)\; .
\]
The sum is convergent because
\[
\sum_{i\in I} \int_U |g_i^* q| = \int_{h_{\alpha}^{-1}(U)} |q| <\infty\; .
\]
We can define in such a way a quadratic differential $(h_{\alpha})_* q$
which is holomorphic outside $X'$. A local analysis shows that
\[
\nabla q := (h_{\alpha})_* q-q
\]
has at most simple poles at points of $X'$ and thus, belongs to
${\mathcal Q}^1(X')$.
Since the dimension of ${\mathcal Q}^1(X')$ is less than the dimension
of ${\mathcal Q}^2(X)$, the linear map $\nabla$ is not injective and
there is a $q\in {\mathcal Q}^2(X)$ such that $\nabla q=0$, i.e.,
$(h_{\alpha})_* q = q$. To see that this is not possible, set
\[
U_{\varepsilon} := D(0,{\varepsilon})\cup D(1,{\varepsilon}) \cup \bigcup_{i=1}^k
h_{\alpha}^{-i}\bigl(D(z_1,{\varepsilon})\bigr)\; ,\quad V_{\varepsilon} := h_{\alpha}^{-1}(U_{\varepsilon})\;,
\]
let $W_{\varepsilon}\subset \widehat{{\C}}\setminus\bigl(U_{\varepsilon}\cup \{v_1,v_2\}\bigr)$ be a
simply connected subset of full measure and let $g_i:W_{\varepsilon}\to \widehat{{\C}}$ be
the countably many inverse branches of $h_{\alpha}$. Then, for ${\varepsilon}$
sufficiently small, we have
\[
\int_{\widehat{{\C}}\setminus U_{\varepsilon}} \bigl|(h_{\alpha})_*q\bigr| = \int_{W_{\varepsilon}}
\left|\sum_{i}g_i^* q\right| \leq \sum_{i} \int_{W_{\varepsilon}} \bigl|g_i^*
q\bigr| = \int_{\mathbb{C}\setminus V_{\varepsilon}} |q|
\]
with equality if and only if each $g_i^* q$ is a (real positive)
multiple of $(h_{\alpha})_*q=q$. In particular $q=h_{\alpha}^*(g_i^*q)$ has to
be locally, and thus globally, a constant multiple of $h_{\alpha}^*q$,
i.e.\ $q=c\cdot h_{\alpha}^* q$ for some constant $c> 0$. But in that case
$g_i^*q = c\cdot q$ and the sum $\displaystyle \sum_{i} \int_{W_{\varepsilon}}
\bigl|g_i^* q\bigr|$ would diverge, which is not the case. Thus,
\[
\int_{\widehat{{\C}}\setminus U_{\varepsilon}} |q| \leq \int_{\widehat{{\C}}\setminus V_{\varepsilon}}
|q|-C_{\varepsilon}\quad\text{with }C_{\varepsilon}>0\; .
\]
Note that for $\delta<{\varepsilon}$, we have
\begin{eqnarray*}
\int_{\widehat{{\C}}\setminus U_{\delta}} |q| & = & \int_{\widehat{{\C}}\setminus U_{{\varepsilon}}}
|q| +\int_{U_{{\varepsilon}}\setminus
U_{\delta}} |q| \\
& \leq & \int_{\widehat{{\C}}\setminus V_{\varepsilon}} |q|-C_{\varepsilon} +
\int_{V_{\varepsilon}\setminus V_{\delta}} |q| \\
& = & \int_{\widehat{{\C}}\setminus V_{\delta}} |q| -C_{\varepsilon}\;,
\end{eqnarray*}
thus
\[\int_{\widehat{{\C}}\setminus V_{\delta}} |q| - \int_{\widehat{{\C}}\setminus
U_{\delta}} |q|\geq C_{\varepsilon}>0\;.
\]
We will obtain a contradiction by proving
\[\liminf_{\delta\to 0} \left(\int_{\widehat{{\C}}\setminus V_{\delta}} |q| -
\int_{\widehat{{\C}}\setminus U_{\delta}} |q| \right)\leq 0\;.
\]
This is the place where we use the fact that the cycle is
non-repelling. As $\delta\to 0$, we can find a radius $r_{\delta} =
\delta+o(\delta)$ such that
\[
D(0,r_{\delta})\cup D(1,r_{\delta}) \cup D(z_1,r_{\delta})\cup
\bigcup_{i=2}^k h_{\alpha}^{-i}\bigl(D(z_1,\delta)\bigr) \subset
V_{\delta}.\] Then, $U_{\delta}\setminus V_{\delta}$ is contained
within the union of three annuli
\[
\{z~;~r_{\delta}\leq |z|<\delta\}\cup \{z~;~r_{\delta}\leq
|z-1|<\delta\} \cup \{z~;~r_{\delta}\leq |z-z_1|<\delta\}\; .
\]
Since $q$ has at most double poles at $0$, $1$ and $z_1$, the
integral of $|q|$ on those annuli tends to $0$ as $\delta$ tends to
$0$ and we have
\[
\int_{\widehat{{\C}}\setminus V_{\delta}} |q| - \int_{\widehat{{\C}}\setminus U_{\delta}}
|q| = \int_{U_{\delta}\setminus V_{\delta}} |q| -
\int_{V_{\delta}\setminus U_{\delta}} |q| \leq
\int_{U_{\delta}\setminus V_{\delta}} |q|\underset{\delta\to
0}\longrightarrow 0\; .
\]
\end{proof}
\begin{lemma}
\label{Lem_Herman} There cannot be any cycle of Herman rings.
\end{lemma}
\begin{proof}
Recall that $0$ is a multiple fixed point and its immediate basin of
attraction must contain a critical point $\omega_0$ and the critical
value $v_0=h_{\alpha}(\omega_0)$. Also, $1$ is a Cremer point. It must be
accumulated by the orbit of the second critical point $\omega_1$
with critical value $v_1=h_{\alpha}(\omega_1)$.
Assume there is a cycle of Herman rings $H_1\mapsto H_2\mapsto
\ldots\mapsto H_k\mapsto H_1$. Let $\Gamma$ be the union of the
equators of the Herman rings $H_i$ ($\Gamma$ is the union of a cycle
of Jordan curves). Choose a connected component $W$ of $\widehat{{\C}}\setminus
\Gamma$ which does not contain $1$. Then, there are infinitely many
iterates of $v_1$ contained in $W$ (accumulating a boundary
component of some Herman ring). In particular, there is an integer
$m>2$ such that $h_{\alpha}^{\circ m}(v_1)\in W$. Let $D$ be a disk around
$1$ avoiding $\Gamma$, the forward orbit of $v_0$ and the $m$ first
iterates of $v_1$. Let $D_{-1}$ be the connected component of
$h_{\alpha}^{-1}(D)$ containing $1$. Since $D\setminus \{1\}$ does not
contain any singular value of $h_{\alpha}$, $h_{\alpha}:D_{-1}\to D$ has to be
an isomorphism. Since $D_{-1}$ contains $1$ and avoids $\Gamma$, it
does not contain $h_{\alpha}^{\circ m}(v_1)$. So, $D_{-1}$ is a disk
avoiding $\Gamma$, the forward orbit of $v_0$ and the $m$ first
iterates of $v_1$. We can therefore construct inductively a sequence
of disks $D_{-k}$ containing $1$ such that $h_{\alpha}^{\circ k}:D_{-k}\to
D$ is an isomorphism. Since $|(h_{\alpha}^{\circ k})'(1)|=1$ for all
$k\in\mathbb{N}$, by Koebe's one quarter theorem the disks $D_{-k}$ contain
a common neighborhood of $1$ on which the iterates of $h_{\alpha}$ form a
normal family. This contradicts the fact that $1$ is a Cremer point
contained in the Julia set of $h_{\alpha}$.
\end{proof}
Note that if we choose ${\alpha}\in \mathbb{Q}$, $N_{\alpha}$ will have a wandering
domain that projects to a parabolic basin of a parabolic fixed
point. If ${\alpha}$ is a Brjuno number, $N_{\alpha}$ will have a univalent
Baker domain of parabolic type II which projects to a Siegel disk of
$g_{\alpha}$.
\medskip
We can construct other examples in a similar way. The maps we will
present do not have fixed points. It follows from Proposition
\ref{Prop_NewtonMaps} that they are Newton maps of non-vanishing
entire functions, whose singularities over $0$ are therefore direct.
Assume
\[
N(Z) = Z + \frac{{\alpha}}{1+{\varepsilon}\sin(2\pi Z)}
\]
with
\[0<{\varepsilon}<1 \quad\text{and}\quad 0<{\alpha}< m_{\varepsilon} =
\left\lfloor \frac{(1-{\varepsilon})^2}{2\pi {\varepsilon}}\right\rfloor \;.
\]
Then, $N$ is the Newton map of an entire function $f$ such that
$f(t)\to 0$ as $t\in \mathbb{R}\to +\infty$. The restriction of $N$ to $\mathbb{R}$
is an increasing homeomorphism which commutes with translation by
$1$. Indeed,
\[
N'(Z) = 1-\frac{2\pi {\varepsilon} {\alpha} \cos (2\pi Z)}{\bigl(1+{\varepsilon} \sin (2\pi
Z)\bigr)^2} \geq 1-\frac{2\pi {\varepsilon} {\alpha}}{(1-{\varepsilon})^2} >0.\]
Thus, it has a well defined rotation number ${\rm Rot}(N)$.
This rotation number is positive since $N(Z)>Z$. Note that for
${\alpha}=m_{\varepsilon}$, $N(0) = m_{\varepsilon}$ and thus, ${\rm Rot}(N)= m_{\varepsilon}$. For each
fixed ${\varepsilon}\in (0,1)$, the rotation number increases continuously from
$0$ to $m_{\varepsilon}$ as ${\alpha}$ increases from $0$ to $m_{\varepsilon}$. If ${\rm
Rot}(N)$ is rational, then $N$ has a chain of wandering domains
along the real axis. If ${\rm Rot}(N)$ is a Brjuno number, $N$ has a
univalent Baker domain of hyperbolic type centered on the real axis.
For suitably chosen parameters ${\alpha}$, ${\rm Rot}(N)$ is irrational
and the induced map $N:\mathbb{R}/\mathbb{Z}\to \mathbb{R}/\mathbb{Z}$ is topologically but not
analytically conjugate to the rotation $Z\mapsto Z+{\rm
Rot}(N):\mathbb{R}/\mathbb{Z}\to \mathbb{R}/\mathbb{Z}$. It should follow that $N$ does not have any
Baker domain associated to the singularity of $f$ containing the
large positive real numbers. The proof should be similar to the one
we presented above: study the dynamics modulo $1$.
In the previous examples, $f$ had a direct singularity containing
critical points of $f$. One may wonder whether it is the presence of
critical points that prevents $N_f$ from having a Baker domain
associated to the singularity. The following example shows that this
is not the case. We still assume ${\alpha}>0$ and set
\[
N_{\alpha}(Z)= Z+{\alpha} e^{e^{2i\pi Z}}\;.
\]
Then, $N_{\alpha}$ does not have any fixed points. So, it is the Newton
map of the non-vanishing entire function
\[
f_{\alpha}(Z) = \exp\left(-\frac{1}{{\alpha}}\int_0^Z e^{-e^{2i\pi W}}
dW\right)\;.
\]
Note that when $W\in \mathbb{R}$, the real part of $e^{-e^{2i\pi W}}$ is
at least $1/e$. Thus, for ${\alpha}>0$ and for $t\in [0,+\infty)$, we
have
\[
|f_{\alpha}(t)| \leq e^{-t/(e{\alpha})}\underset{t\to +\infty}\longrightarrow 0.
\]
The entire map $f_{\alpha}$ has a singularity over $0$ containing large
real numbers. This is a direct singularity since $f_{\alpha}$ does not
vanish. In addition, $N_{\alpha}$ does not have poles and so, $f_{\alpha}$ does
not have critical points.
Again, $N_{\alpha}(Z+1)=N_{\alpha}(Z)+1$ and $N_{\alpha}$ projects via $Z\mapsto
z=e^{2i\pi Z}$ to an entire map $g_{\alpha}$ fixing $0$ with multiplier
$e^{2i\pi {\alpha}}$:
\[
g_{\alpha}(z) = ze^{2i\pi{\alpha} e^z}\;.
\]
By a result of Bergweiler \cite{Bergweiler2}, the Fatou sets of
$N_{\alpha}$ and $g_{\alpha}$ correspond under the map $Z\mapsto e^{2i\pi Z}$.
If $g_{\alpha}$ has a Siegel disk around $0$, the map $N_{\alpha}$ has a Baker
domain of parabolic type II which corresponds to the singularity of
$f_{\alpha}$ described above. But if $g_{\alpha}$ has a Cremer point at $0$,
there can be no Baker domain for $N_{\alpha}$ associated to the
singularity of $f_{\alpha}$ described above.
\section{A Virtual Immediate Basin Implies an Asymptotic Value}
\label{Sec_Converse}
\begin{theorem}[Virtual Immediate Basin Contains Asymptotic Path]
\label{Thm_Converse} Let $f:\mathbb{C}\to\mathbb{C}$ be an entire function such that
its Newton map $N_f$ has a virtual immediate basin $V$. If $V$ is
parabolic of type I or type II, then $0$ is an asymptotic value of
$f$ with asymptotic path in $V$. There exists $H>0$ such that the
same is true if $V$ is hyperbolic with constant $h\geq H$.
\end{theorem}
Bergweiler, Drasin and Langley have constructed an
entire function for which $0$ is not an asymptotic value and whose
Newton map has a virtual immediate basin of hyperbolic type \cite{BDL}. Thus,
the statement of Theorem \ref{Thm_Converse} cannot be extended to
all hyperbolic virtual immediate basins.
Using Theorem \ref{Thm_Converse}, we can give the following
formulation of Theorem 5.1 in \cite{RS}.
\begin{corollary}[Outside Immediate Basins]
Let $N_f$ be the Newton map of an entire function $f$ and $U_\xi$
the immediate basin of the attracting fixed point $\xi\in \mathbb{C}$ for
$N_f$. Let $\Gamma_1,\Gamma_2\subset U_\xi$ be two $N_f$-invariant
curves connecting $\xi$ to $\infty$ such that $\Gamma_1$ and
$\Gamma_2$ are non-homotopic in $U_{\xi}$ and let $\widetilde{V}$ be an
unbounded component of $\mathbb{C}\setminus (\Gamma_1\cup\Gamma_2)$. If the
set $N_f^{-1}(\{z\})\cap\widetilde{V}$ is finite for all $z\in\widehat{{\C}}$, then
$f|_{\widetilde{V}}$ assumes the value $0$ or has $0$ as an asymptotic
value.
\end{corollary}
\begin{proof} If $0\not\in f(\widetilde{V})$, then the virtual immediate basin constructed in the proof of \cite[Theorem 5.1]{RS} is parabolic of type I.
\end{proof}
For the proof of Theorem \ref{Thm_Converse}, we will need the
following corollary to the Koebe distortion theorem. We thank Dierk
Schleicher for pointing it out to us.
\begin{lemma}[Bounded Non-Linearity]
\label{Cor_Koebe} Let $R>0$, $g:B_R(0)\to\mathbb{C}$ be univalent and
$\varepsilon>0$. If $r/R$ is sufficiently small, then
\[
\left| \frac{g(z)-g(w)}{g'(z)(z-w)}-1\right|< \varepsilon
\]
for all $w,z\in B_r(0)$.
\end{lemma}
\begin{proof}
By possibly conjugating $g$ with $z\mapsto Rz$, multiplying $g$ with
a constant or adding a constant to $g$, we may assume that $R=1$,
$g(0)=0$ and $g'(0)=1$. Fix $0<r<1$. By the Koebe distortion
theorem, there is an $\alpha>0$ independent of $g$ such that
\[
|g(z)-g(w)-(z-w)g'(z)|<\alpha|z-w|^2
\]
for all $z,w\in B_r(0)$ (Taylor expansion around $z$). Moreover,
there is a $\beta>0$ so that $|g'(z)|>\beta$ for all $z\in B_r(0)$.
This yields
\[
\left|\frac{g(z)-g(w)}{g'(z)(z-w)}-1\right|<\alpha\left|\frac{z-w}{g'(z)}\right|
<\frac{2\alpha r}{\beta} \;.
\]
It follows from the Koebe distortion theorem that $\alpha\to 0$ and
$\beta\to 1$ as $r\to 0$. The claim follows.
\end{proof}
\proofof{Theorem \ref{Thm_Converse}} Suppose first that $V$ is
parabolic of type I. Then, there exists a weakly absorbing set $A$ of $V$ and a conformal conjugacy
$(\mathbb{C},\phi,T)$ such that $F:=\phi(A)$ is a weakly absorbing set for $T:z\mapsto z+1$ in $\mathbb{C}$. Since $\phi|_A$ is univalent, it has a univalent inverse $\psi:F\to A$. With this, we get for $z\in F$
that $N_f(\psi(z)) = \psi(z+1)$, and hence
\[
\psi(z)-\frac{f(\psi(z))}{f'(\psi(z))}=\psi(z+1)\;.
\]
It follows that
\begin{equation}
\label{Eqn_Estimate0}
\frac{f'(\psi(z))}{f(\psi(z))}\cdot \big(\psi(z+1)-\psi(z)\big) = -1
\end{equation}
(note that since $V$ is a virtual immediate basin, $f$ has no roots
in $\psi(F)$). Let $0 < \varepsilon <1$. By Lemma \ref{Cor_Koebe}, there exists
$R>2$ such that if $B_R(z) \subset F$, then
\begin{equation}
\label{Eqn_Estimate1}
\left|\frac{\psi'(z)}{\psi(z+1)-\psi(z)}-1\right|<\varepsilon\;,
\end{equation}
and by equation (\ref{Eqn_Estimate0}) and inequality
(\ref{Eqn_Estimate1}) we get
\begin{equation}
\label{Eqn_Estimate2}
\left|\frac{f'(\psi(z))}{f(\psi(z))}\cdot\psi'(z)+1
\right|=\left|\frac{f'(\psi(z))}{f(\psi(z))}\cdot\psi'(z)\cdot\frac{\psi(z+1)-\psi(z)}{\psi(z+1)-\psi(z)}+1
\right|<\varepsilon\;.
\end{equation}
Since $F$ contains all sufficiently far right translates of the disk $B_R(z_0)$, for every $z_0\in F$ there exists $S_{z_0}\geq 0$ such that
(\ref{Eqn_Estimate2}) holds for all $z_0+t$ with real $t\geq S_{z_0}$.
Let $z_0\in F$ such that $S_{z_0}=0$. Then, for $t\geq 0$ and $z=z_0+t\in F$, we use a standard estimate in complex variables and inequality
(\ref{Eqn_Estimate2}) to get
\begin{eqnarray*}
\left| \log(f(\psi(z)))+z \right| &\leq& \left|\int_{z_0}^z \left((\log\circ f\circ\psi)'(\zeta)+1\right) d\zeta \right| + \left| \log(f(\psi(z_0)))+z_0\right| \\
&\leq&\sup_{w\in [z_0,z]} \left\{ \left| \frac{f'(\psi(w))}{f(\psi(w))}\cdot \psi'(w)+1\right|\right\} \cdot |z-z_0| + C'\\
&\leq&\varepsilon\cdot|z-z_0|+C' \\
&\leq& \varepsilon\cdot |z|+C\;,
\end{eqnarray*}
where $C'=|\log(f(\psi(z_0)))+z_0|$ and $C>0$ depend only on $z_0$;
$[z_0,z]$ denotes the straight line segment in $F$ connecting
$z_0$ to $z$. It follows that $\log(f(\psi(z)))\in B_{\varepsilon
|z|+C}(-z)$ and
\begin{equation}
\label{Eqn_Estimate3}
{\rm Re}(\log(f(\psi(z)))) < - {\rm Re}(z) + \varepsilon |z|+C\;.
\end{equation}
Since ${\rm Im}(z)$ does not depend on $t$, we have that $|z|/{\rm Re}(z)\to 1$ as $t\to\infty$ and the right hand side of inequality (\ref{Eqn_Estimate3}) converges to $-\infty$. Hence, exponentiating (\ref{Eqn_Estimate3}) yields
$\lim_{t\to+\infty}f(\psi(z))=0$.
Analogous estimates hold for sufficiently large imaginary parts if
$V$ is parabolic of type II. If $V$ is hyperbolic, sufficiently large $h$ will permit a construction as above. This finishes the proof.
\qed
\par\medskip \noindent {\sc Remark. } In fact, we not only show the existence of an asymptotic
path to $0$ for $f$ in $V$, but even that $V$ has an $N_f$-invariant
open subset in which $f$ converges to $0$ along $N_f$-orbits. This
is another similarity between immediate basins and their virtual
counterparts.
\section{Acknowledgements}
We thank Adrien Douady for raising the question of a relation
between virtual immediate basins and asymptotic values and Dierk
Schleicher for his helpful comments and his support. We also thank
Walter Bergweiler and Alexandre Eremenko for several interesting
discussions in which we learned a lot about transcendental
functions.
\section{Introduction}
\subsection{Problem \& Aim}
Whether or not bald men like being bald, they sometimes wonder how they would look with hair. To that end, bald-to-hairy translation can be performed quickly and realistically using deep learning methods. In this project, we focus on the image-to-image translation problem for the bald-to-hairy translation task, i.e., adding hair to images of bald men.
We aim to learn a mapping between images of bald men and hairy men in order to generate hair for bald men. Our research indicates that the most effective methods for this task are Generative Adversarial Network (GAN) based methods. The literature divides GAN approaches into two categories: paired and unpaired methods. For our problem there are two datasets: a source dataset of bald men images and a target dataset of hairy men images. Since these datasets are not paired, the unpaired approach must be used for the bald-to-hairy translation problem.
After obtaining the baseline results, we try to improve the baseline model by adding a conditional layer and a perceptual loss so that it can generate four hair colors (black, brown, blond, gray) and two hair styles (straight, wavy).
\subsection{Challenges}
\label{sec:challenges}
We can collect the challenges of this project into three groups: dataset, classification and time.
For dataset, there are three main challenges:
\begin{itemize}
\item There are two datasets in our project, one for bald men and the other one for hairy men. Since there is no dataset specifically prepared for our problem, we prepared both datasets ourselves from the existing ones.
\item Perhaps the biggest challenge of this project is that it relies on unpaired data. We cannot treat this task as a paired problem, because no paired dataset exists for our translation problem in the literature. We therefore have to work with unpaired data, which is considerably harder.
\item Our hairy dataset contains a large number of images, but our bald dataset is much smaller. We cannot train on all of the images due to resource and time constraints, so we randomly subsample them, which limits the performance we can reach.
\end{itemize}
Hair classification is another challenge. In the second phase of this project, we classify and generate hair conditionally. Classes with few images in the hairy dataset are hard to classify and to generate, which affects the accuracy of the obtained results. Classifying unpaired data is a further challenge, and the quality of the generated hair depends on it.
The last challenge is time. Training GAN-based methods takes a long time: about one day per model to train and obtain results. If a trained model does not give good results, the time cost is heavy, and this becomes an even bigger challenge as we tune hyperparameters in search of the best model.
\subsection{Method}
Since our project is based on unpaired data, we use CycleGAN~\cite{zhu2017unpaired} as our baseline. According to our research, it is one of the most advanced GAN methods and can perform image-to-image translation on unpaired data, so our method relies on CycleGAN.
We add a conditional layer to our baseline for classifying and generating four hair colors (black, brown, blond, gray) and two hair styles (straight, wavy). Since we do not have enough computing power and time, we keep the number of training images low. Therefore, we first experiment with the four hair classes (black and blond hair colors, straight and wavy hair styles) that can be distinguished best and give the most clearly separable results. We also experiment with hyperparameters such as the number of training images and the image size to get better results. We then try to generate all six hair classes and vary the image size. We try different baselines and select the model that gives the best results on our task.
After fixing the best baseline by choosing hyperparameters such as the number of classes, the number of images and the image size, we try to improve the results for the bald-to-hairy translation task. First, we add a perceptual loss alongside the cycle-consistency and GAN losses of the CycleGAN model. We then experiment with removing the cycle-consistency loss while keeping the perceptual loss; the two variants are summarized below.
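Schematically, writing $\mathcal{L}_{GAN}$ for the adversarial losses, $\mathcal{L}_{cyc}$ for the cycle-consistency loss and $\mathcal{L}_{perc}$ for the perceptual loss (our notation, for exposition only), the two variants we compare are
\[
\mathcal{L} = \mathcal{L}_{GAN} + \lambda_{cyc}\,\mathcal{L}_{cyc} + \lambda_{p}\,\mathcal{L}_{perc}
\qquad\text{and}\qquad
\mathcal{L} = \mathcal{L}_{GAN} + \lambda_{p}\,\mathcal{L}_{perc}\;,
\]
where the weights $\lambda_{cyc}$ and $\lambda_{p}$ are hyperparameters.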
After settling on adding the perceptual loss and removing the cycle-consistency loss, we try a different generator architecture, U-Net, to obtain better results. We first experiment without the conditional layer, to compare against the ResNet generator without a conditional layer, and then experiment with the conditional layer added to the generator.
Finally, after choosing the best generator architecture, we train the best model with the maximum number of training images to get the best results.
\subsection{Related Works}
According to our research, there are several works related to ours. The most important of these is CycleGAN, which performs unpaired image-to-image translation using cycle-consistent adversarial networks and supports general-purpose generation: it can translate between horses and zebras, summer and winter, paintings and photos, apples and oranges, etc. It is an important work for our task since we use it as our baseline.
StyleGAN~\cite{karras2019style} can synthesize any style present in a given dataset, including human hair styles. However, there is no reported handling of bald hairstyles or of removing styles.
AttGAN~\cite{he2019attgan} can translate between different facial attributes such as hair styles, beard styles and mouth positions. Nevertheless, it has low accuracy ($\sim$35\%) on bald images and its transformations can fail badly.
Hair-GANs~\cite{zhang2019hair} can recover the 3D shape of hair from a single 2D image, but it does not address adding or removing hair; it only computes a 3D volumetric field as structure guidance for the final hair synthesis.
The fundamental difference between our project and these related works is that we add hair specifically to bald men; that is, we focus on the bald-to-hairy translation task.
\section{Method}
\subsection{Definitions}
\textbf{Deep Learning:} The field of machine learning in artificial intelligence based on artificial neural networks with multiple (hidden) layers. Deep learning requires large amounts of data and training time, and comes in supervised, semi-supervised and unsupervised flavors.
\textbf{Unpaired Data:} Training data is described as unpaired (or independent) when the two sets of examples arise from separate individuals, so that no example in one set corresponds to a particular example in the other.
\textbf{GAN (Generative Adversarial Networks):} A class of machine learning frameworks in which two neural networks, a generator and a discriminator, compete in a game. GANs are used to generate new data with the same statistics as the training set. We use a GAN for image-to-image translation between bald and hairy men images.
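The classical GAN objective is the minimax game
\[
\min_G \max_D \; \mathbb{E}_{x\sim p_{data}}\bigl[\log D(x)\bigr] + \mathbb{E}_{z\sim p_z}\bigl[\log\bigl(1-D(G(z))\bigr)\bigr]\;,
\]
where $G$ is the generator and $D$ is the discriminator.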
\textbf{Image-to-Image Translation:} A computer vision task that learns a mapping between two image domains. In our project, we perform image-to-image translation between bald and hairy men images.
\textbf{CycleGAN:} It is one of the most advanced GAN methods. It performs unpaired image-to-image translation by using cycle-consistent adversarial networks. We use CycleGAN as our baseline.
\textbf{Perceptual Loss:} A loss that compares two images through the feature activations of a pretrained network (here VGG16) rather than through raw pixel differences. It is often used to compare two images that look almost the same, and it exposes errors in generated images.
\subsection{Approach}
The first step of our method for solving the bald-to-hairy translation problem is creating a model based on CycleGAN. Next, we try different baseline models and pick the best one according to the results we obtain. After choosing the baseline, we consider four directions for improvement:
\begin{figure*}[t!]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./cyclegan_transfer.png}
\captionof{figure}{Model of CycleGAN\cite{zhu2017unpaired}}
\label{fig:cyclegan_model}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{./cyclegan.png}
\captionof{figure}{Architecture of CycleGAN\cite{zhu2017unpaired}}
\label{fig:cyclegan_arch}
\end{minipage}
\end{figure*}
\begin{itemize}
\item Adding conditions to CycleGAN for generating different hair colors and styles.
\item Adding a perceptual loss~\cite{zhang2018unreasonable} in addition to the cycle-consistency and GAN losses to improve the results, since the current version of CycleGAN has no perceptual loss.
\item Trying a U-Net generator architecture instead of the ResNet generator architecture to observe the impact of the generator architecture on our models.
\item Selecting the best generator architecture and doing the final training and testing with the best hyperparameters.
\end{itemize}
As Figure \ref{fig:cyclegan_model} shows, our model, like CycleGAN, has two domains $X$ and $Y$, two generators $G$ and $F$, and two discriminators $D_X$ and $D_Y$. Sub-figure (a) shows the cycles; sub-figures (b) and (c) show them in detail. The main idea of this model is that when the generator $G$ maps domain $X$ to domain $Y$, the generator $F$ maps the result back to domain $X$, with the discriminators judging both translations. The cycle-consistency loss is computed between an image in $X$ and its reconstruction obtained through $F\circ G$.
By minimizing the cycle-consistency loss during training, the model becomes more cycle-consistent, so an image in $X$ and its reconstruction through $F$ become almost the same. Sub-figure (b) shows the $X$-to-$Y$ translation and sub-figure (c) the $Y$-to-$X$ translation.
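Concretely, the objective inherited from CycleGAN~\cite{zhu2017unpaired} combines the two adversarial losses with the cycle-consistency loss
\[
\mathcal{L}_{cyc}(G,F) = \mathbb{E}_{x\sim p_{data}(x)}\bigl[\|F(G(x))-x\|_1\bigr]
+ \mathbb{E}_{y\sim p_{data}(y)}\bigl[\|G(F(y))-y\|_1\bigr]\;,
\]
giving the full objective $\mathcal{L}=\mathcal{L}_{GAN}(G,D_Y)+\mathcal{L}_{GAN}(F,D_X)+\lambda\,\mathcal{L}_{cyc}(G,F)$ for a weight $\lambda$.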
Since we use this cycle model, we also obtain hairy-to-bald translation results. However, our aim is not to improve hairy-to-bald translation, but to improve bald-to-hairy translation results.
Figure \ref{fig:cyclegan_arch} shows the architecture of CycleGAN. The generator network contains two stride-2 convolutions, several residual blocks and two fractionally strided convolutions with stride 1/2; there are 6 residual blocks for 128$x$128 images and 9 blocks for 256$x$256 and higher-resolution training images, and instance normalization is used. The discriminator networks are 70$x$70 PatchGANs, which aim to classify whether 70$x$70 overlapping image patches are real or fake. Such a patch-level discriminator architecture has fewer parameters than a full-image discriminator and can work on arbitrarily-sized images in a fully convolutional fashion~\cite{zhu2017unpaired}.
By adding a condition layer to CycleGAN, we can control the generated hair colors and styles. We create the condition layer from an embedding layer and combine its output with the data entering the generator and discriminator blocks; the conditioned input is then fed into the CycleGAN model. A minimal sketch of this conditioning is given below.
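The following PyTorch sketch illustrates one way such a condition layer can be built; all names, shapes and the concatenation strategy are illustrative assumptions, not our exact implementation.
\begin{verbatim}
# Illustrative sketch (PyTorch) of a class-conditional input layer.
# Names and shapes here are assumptions for exposition.
import torch
import torch.nn as nn

class ConditionLayer(nn.Module):
    def __init__(self, n_classes=4, img_size=128):
        super().__init__()
        # one learned embedding per hair class, reshaped to a map
        self.embed = nn.Embedding(n_classes, img_size * img_size)
        self.img_size = img_size

    def forward(self, img, label):
        # img: (B, 3, H, W); label: (B,) integer hair class
        cond = self.embed(label).view(-1, 1, self.img_size,
                                      self.img_size)
        # append the class map as a fourth input channel
        return torch.cat([img, cond], dim=1)
\end{verbatim}
The first convolution of the generator (and of the discriminator, if it is conditioned the same way) then takes four input channels instead of three.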
For further improvements to the generations, we add a perceptual loss and experiment both with and without the cycle-consistency loss. Following the perceptual loss proposed in~\cite{zhang2018unreasonable}, we pass the input and the cycle output through VGG16 and feed the resulting feature activations to the loss function; a sketch follows below. According to the obtained results, we keep the best option for the next experiments.
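A minimal sketch of a VGG16-based perceptual loss of this kind is given here; the cut-off layer and the $L_1$ comparison are illustrative assumptions, not our exact settings.
\begin{verbatim}
# Illustrative sketch (PyTorch) of a VGG16-based perceptual loss
# between the real image and its cycle reconstruction.
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    def __init__(self, n_layers=16):
        super().__init__()
        # frozen VGG16 feature extractor (up to relu3_3 for n_layers=16)
        vgg = models.vgg16(pretrained=True).features[:n_layers].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg

    def forward(self, real, reconstructed):
        # compare deep feature maps instead of raw pixel values
        return nn.functional.l1_loss(self.vgg(real),
                                     self.vgg(reconstructed))
\end{verbatim}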
If there is still no clear distinction between the different hair colors and styles, we replace the ResNet generator architecture with a U-Net generator, first without any condition layer. The goal of this change is to see how the choice between ResNet and U-Net generator architectures affects our generations. If the U-Net results are better than the ResNet results, we select U-Net as our generator and do a final training using all 4430 training images.
\section{Experimental Settings}
\subsection{Dataset}
Since we use a GAN to generate hairy men images, we need a large number of annotated men images to train the model. The CelebA~\cite{liu2018large} dataset fits our task; dataset information can be found in Table \ref{table:dataset}. We use the eye-aligned version of this dataset for the hairy and bald men data. Due to hardware and time constraints, we subsample 1000, 2000 and 4430 training images
and set aside 100 test images for each domain.
To create the hair classes, we filter the images by the corresponding annotations (black, blond, brown, gray, straight and wavy hair); a sketch of this filtering is shown below. Table \ref{table:hair_class} shows the distribution over these classes.
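The attribute names and the $\pm 1$ flag format below are those of the official CelebA annotation file; the script itself is a paraphrase of what we do, not our exact code.
\begin{verbatim}
# Sketch of the CelebA attribute filtering (pandas).
import pandas as pd

attrs = pd.read_csv("list_attr_celeba.txt",
                    delim_whitespace=True, skiprows=1)
men = attrs[attrs["Male"] == 1]
bald = men[men["Bald"] == 1]
hair_cols = ["Black_Hair", "Blond_Hair", "Brown_Hair", "Gray_Hair"]
hairy = men[(men["Bald"] == -1) & men[hair_cols].eq(1).any(axis=1)]
\end{verbatim}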
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
\textbf{\# images} & \textbf{Total} & \textbf{Hairy Men} & \textbf{Bald Men} \\
\hline\hline
CelebA & 202,599 & 79,904 & 4,530 \\
\hline
\end{tabular}
\caption{Number of Images Distribution Over Dataset}
\label{table:dataset}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|l|}
\hline
\textbf{Hair Class} & \textbf{\# of images} \\
\hline\hline
Black & 25156 \\
Blond & 1749 \\
Brown & 12788 \\
Gray & 7235 \\
Straight & 20471 \\
Wavy & 11892 \\
\hline
\end{tabular}
\caption{Hair class distribution over dataset}
\label{table:hair_class}
\end{center}
\end{table}
\subsection{Experimental Setup}
Our first system had an NVIDIA GTX 1060 with 6GB of vRAM and 1280 CUDA cores. Later, we switched to dual NVIDIA GTX 1080 Ti GPUs with 22GB of vRAM and 7168 CUDA cores in total.
Our training data consists of 256$x$256 pixel images. Since training at this resolution is a performance issue for us, we downscale the images to 64$x$64 and 128$x$128; a typical preprocessing pipeline is sketched below.
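The following torchvision pipeline is of the kind we use; the exact transform list is illustrative.
\begin{verbatim}
# Typical preprocessing pipeline (torchvision); details illustrative.
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(128),            # 64 for the faster runs
    transforms.CenterCrop(128),
    transforms.ToTensor(),
    # map images to [-1, 1], matching the generator's tanh output
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
\end{verbatim}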
\subsection{Implementation}
\begin{table*}[ht]
\begin{tabularx}{\textwidth}{| X | X | X | X | X | X | X | X |}
\hline
Experiment Number & Generator Arch & \# Images & Image Size & \# Classes & Loss & Train Time & GPU \\
\hline\hline
1 & ResNet-9 & 1000 & 256$x$256 & - & C & 36 h 45 m & GTX1060 \\
2 & ResNet-6 & 1000 & 64$x$64 & 4 & C & 8 h 35 m & GTX1060 \\
3 & ResNet-6 & 4430 & 64$x$64 & 4 & C & 35 h 35 m & GTX1060 \\
4 & ResNet-6 & 2000 & 64$x$64 & 4 & C & 15 h 45 m & GTX1060 \\
5 & ResNet-6 & 2000 & 128$x$128 & 4 & C & 24 h 41 m & GTX1060 \\
6 & ResNet-6 & 2000 & 64$x$64 & 6 & C & 15 h 22 m & GTX1060 \\
7 & ResNet-6 & 2000 & 128$x$128 & 6 & C & 26 h 36 m & GTX1060 \\
8 & ResNet-6 & 2000 & 128$x$128 & 4 & C + P & 14 h 49 m & GTX1080TI \\
9 & ResNet-6 & 2000 & 128$x$128 & 4 & P & 16 h 52 m & GTX1080TI \\
10 & U-Net-128 & 2000 & 128$x$128 & - & C & 20 h 17 m & GTX1060 \\
11 & U-Net-128 & 2000 & 128$x$128 & 4 & P & 14 h 57 m & GTX1080TI \\
12 & ResNet-6 & 4430 & 128$x$128 & 4 & P & 29 h 51 m & GTX1080TI \\
\hline
\end{tabularx}
\caption{All experiments with 200 epoch (C: Cycle-consistency Loss, P: Perceptual Loss)}
\end{table*}
Each experiment is designed in light of the results of the previous ones: if a change improves the results, it is kept in the subsequent experiments. The first experiment uses the vanilla CycleGAN hyperparameters, which are then changed according to the results.
\textbf{\nth{1} Experiment:} In our first experiment, we used the default hyperparameters of CycleGAN, such as image size, generator architecture and losses. There was no condition layer for hair classes, and with these parameters we could not generate hair with specified colors and styles.
\textbf{\nth{2} Experiment:} In this experiment, we decreased the image size to 64$x$64 for faster training, so that we could observe the effect of hyperparameter changes without losing too much time, and we added a condition layer with 4 classes: black, blond, straight and wavy hair. However, we could not achieve good condition separation with 1000 images.
\textbf{\nth{3} Experiment:} After the poor results with 1000 images, we increased the number of images to 4430, which is all the bald images we have. There was some separation between the conditions, but the training time was too long; this was our longest experiment.
\textbf{\nth{4} Experiment:} Because of the long training time with 4430 images, we decreased the number of images to 2000 and kept the other hyperparameters as they were. The results did not change much despite the lower image count, but all of them were blurry due to the low image resolution.
\textbf{\nth{5} Experiment:} To eliminate the blurriness, we increased the image size to 128$x$128. The training time increased by 80\%, but the obtained results have much better clarity.
\textbf{\nth{6} Experiment:} After finding the best hyperparameters, we tried to classify and generate more hair colors, so we increased the number of classes to 6 (black, blond, brown and gray hair colors; straight and wavy hair styles). The obtained results were blurry with 64$x$64 images, and the classification between the 6 classes was poor.
\textbf{\nth{7} Experiment:} We increased the image size to 128$x$128 for clearer images. Since there are not enough images per class, the network still could not separate the 6 classes, so we continued with 4 classes for the condition layer.
With the experiments up to this point, we fixed the best hyperparameters and the best model for our hair generation task. We then added the perceptual loss for better hair generation.
\textbf{\nth{8} Experiment:} In this experiment, we added the perceptual loss in addition to the cycle-consistency and GAN losses. The obtained results were better than without the perceptual loss.
\textbf{\nth{9} Experiment:} After adding the perceptual loss to our model, we removed the cycle-consistency loss, keeping the perceptual loss in addition to the GAN losses. The obtained results were better than those of the \nth{8} experiment, so we kept the perceptual loss and dropped the cycle-consistency loss.
\textbf{\nth{10} Experiment:} Even after adding perceptual loss, the classification and generation of the different hair classes were not impressive, so we replaced the ResNet generator with a U-Net generator architecture. For initial testing we used the original CycleGAN losses and did not add a condition layer. The results were similar to the ResNet results, and the U-Net generator models occupied five times more disk space in this experiment.
\textbf{\nth{11} Experiment:} To test the U-Net generator with a condition layer, we added 4 classes to the model; the results were not impressive. Due to the excessive disk usage and inconsistent results compared to the ResNet generator, we did not select U-Net as the best generator architecture.
\textbf{\nth{12} Experiment:} Having settled on the best hyperparameters and generator architecture, we trained on all available training images to obtain the best results from our model.
\section{Discussions and Conclusions}
Example inputs and outputs from our experiments can be found at the end of the paper.
\subsection{CycleGAN vs Conditional CycleGAN}
In our project, we first used the CycleGAN model as our baseline and obtained initial results. CycleGAN added hair to bald men, but inconsistently: the model generated hair in some images and not in others. The generated hair was also not realistic; it looked as if it had been painted on with a brush.
To improve on the baseline so that it could generate different hair colors and styles, we added a condition layer. After experimenting with different hyperparameters, such as the number of training images and the image size, we tuned the conditional CycleGAN and obtained more consistent results. Even so, the conditions did not work very well: there was not enough data for each condition, so our conditioning method was not very effective. In most experiments we sampled 2000 images for training. Black was the dominant color in that sample, so while black hair was learned well, there was not enough data to learn the other colors. The sampled dataset was also too small to learn an exact mapping for the different hair styles. So although our aim was to classify and generate 6 hair classes, we can distinctly classify and generate at most 4 (black hair, blond hair, straight hair, wavy hair).
\subsection{Cycle-consistency vs Perceptual Loss}
CycleGAN enforces cycle consistency through its cycle-consistency loss. Hair generated with the cycle-consistency loss alone did not look real; it looked as if it had been pasted on in Photoshop. By adding perceptual loss on top of the cycle-consistency and GAN losses, we obtained clearer and more realistic hair. We then kept only the perceptual loss alongside the GAN losses, removing the cycle-consistency loss; the results improved to the point that it became difficult to tell whether the hair was real or not.
\subsection{ResNet vs U-Net Generator Architecture}
U-Net is generally used in Pix2Pix, so it gives better results on paired data. When we experimented with the U-Net generator architecture, we observed poor cycle consistency in the results, which we attribute to our unpaired data. The U-Net results without a condition layer were good, but without conditioning we could not generate the images we wanted. After adding a condition layer to the U-Net model, we observed that it could not separate the conditions; the generated images were occasionally better than the ResNet results, but usually worse. Finally, the saved U-Net generator models occupied too much disk space due to the depth of the network. We therefore decided to use the ResNet generator architecture, which gave better and more consistent results; its residual connections help improve the hair generation, as sketched below.
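For reference, a minimal sketch of the residual block found in ResNet-style CycleGAN generators (a standard formulation; our exact configuration may differ):
\begin{verbatim}
# Standard residual block with an identity shortcut.
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # skip connection preserves the input
\end{verbatim}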
\subsection{Conclusions}
In general, our method for hair generation succeeded, since it uses all training images effectively to learn to generate hair. However, our conditioning method partially failed, as the conditionally generated hair was not convincing enough.
CycleGAN was a good baseline choice for our task, but the results showed some edge-case problems. For example, when we trained on light-skinned men and gave a dark-skinned man as a test image, the skin color in the test image became lighter. Likewise, when we tested some experiments with images of old men, our model made those men look younger.
Finally, we can identify some limitations based on the results we have obtained. In our tests we found that if there is any object on a bald man's head, such as glasses or a hat, our models cannot generate hair for that image. An example of this issue can be seen in Figure \ref{fig:example}. Eliminating it would require more images of hairy and bald men with objects on their heads in the training dataset.
\begin{figure}[H]
\begin{center}
\fbox{
\includegraphics[width=0.8\linewidth]{./glasses.png}
}
\end{center}
\caption{Result for a bald man with an object on his head, from the \nth{9} experiment}
\label{fig:example}
\end{figure}
Also, if the bald dataset sampled for training does not contain a sufficient number of completely bald men, the model does not learn well enough, and when a completely bald man is given as a test image, the model cannot generate realistic hair.
\section{Improvements}
To improve the results, we could first increase the number of training images. Our bald dataset is composed of 4530 images, of which 100 are held out for the test set, leaving 4430 for training. Since CycleGAN needs the same number of images in the source and target datasets, 4430 is the maximum we can train with. By collecting more bald images we would increase the training data; with more images per condition, the model would learn the conditions better and the results would improve.
As we discussed in the \hyperref[sec:challenges]{\textit{Challenges section}}, the biggest challenge of this project is that it is based on unpaired data. Since no paired dataset (bald and hairy images of the same men) exists for this task, we work with unpaired data. This is an obstacle to our best-performance goals; we could obtain more successful results with paired data.
We could also experiment with other state-of-the-art methods besides CycleGAN, adopting as a baseline whichever method is better suited to our task.
A better conditioning method could be found by trying different condition layers; a method more effective than the current one would yield better results.
There are also some issues with our dataset. The images are not well annotated: there are images of women annotated as men, straight-hair images annotated as wavy, and so on. The wavy and straight annotations are not very reliable for men's images, although they are reliable for women's. Using a dataset with more reliable annotations could improve the results.
Finally, increasing the image size would let the model learn better and improve the results.
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/1/1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/1/2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/1/3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/1/4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/1/5.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/1/6.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/1/7.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/1/8.png}\\
\end{tabular}
\caption{Example results of \nth{1} experiment (Note that there is no condition layer in this experiment)}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/2/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/2/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/2/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/2/1-4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/2/2-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/2/2-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/2/2-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/2/2-4.png}\\
\end{tabular}
\caption{Example results of \nth{2} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/3/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/3/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/3/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/3/1-4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/3/2-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/3/2-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/3/2-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/3/2-4.png}\\
\end{tabular}
\caption{Example results of \nth{3} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/4/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/4/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/4/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/4/1-4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/4/2-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/4/2-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/4/2-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/4/2-4.png}\\
\end{tabular}
\caption{Example results of \nth{4} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/5/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/5/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/5/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/5/1-4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/5/2-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/5/2-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/5/2-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/5/2-4.png}\\
\end{tabular}
\caption{Example results of \nth{5} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/6/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/6/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/6/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/6/1-4.png}\\
\hspace*{0.5cm}Brown-Straight & \hspace*{0.4cm}Brown-Wavy & \hspace*{0.4cm}Gray-Straight & \hspace*{0.3cm}Gray-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/6/1-5.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/6/1-6.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/6/1-7.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/6/1-8.png}\\
\end{tabular}
\caption{Example results of \nth{6} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/7/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/7/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/7/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/7/1-4.png}\\
\hspace*{0.5cm}Brown-Straight & \hspace*{0.4cm}Brown-Wavy & \hspace*{0.4cm}Gray-Straight & \hspace*{0.3cm}Gray-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/7/1-5.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/7/1-6.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/7/1-7.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/7/1-8.png}\\
\end{tabular}
\caption{Example results of \nth{7} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/8/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/8/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/8/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/8/1-4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/8/2-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/8/2-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/8/2-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/8/2-4.png}\\
\end{tabular}
\caption{Example results of \nth{8} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/9/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/9/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/9/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/9/1-4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/9/2-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/9/2-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/9/2-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/9/2-4.png}\\
\end{tabular}
\caption{Example results of \nth{9} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/10/1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/10/2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/10/3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/10/4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/10/5.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/10/6.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/10/7.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/10/8.png}\\
\end{tabular}
\caption{Example results of \nth{10} experiment (Note that there is no condition layer in this experiment)}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/11/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/11/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/11/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/11/1-4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/11/2-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/11/2-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/11/2-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/11/2-4.png}\\
\end{tabular}
\caption{Example results of \nth{11} experiment}
\end{figure*}
\begin{figure*}
\captionsetup[subfigure]{labelformat=empty}
\centering
\setlength\tabcolsep{0pt}
\hspace*{-1.0cm}%
\begin{tabular}{cccc}
\hspace*{0.5cm}Black-Straight & \hspace*{0.4cm}Black-Wavy & \hspace*{0.4cm}Blond-Straight & \hspace*{0.3cm}Blond-Wavy\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/12/1-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/12/1-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/12/1-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/12/1-4.png}\\
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/12/2-1.png} &
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/12/2-2.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/12/2-3.png}&
\includegraphics[width=0.29\textwidth, ,valign=m, keepaspectratio,]{./results/12/2-4.png}\\
\end{tabular}
\caption{Example results of \nth{12} experiment}
\end{figure*}
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\label{sec:intro}
\begin{table*}
\begin{center}
\caption{Details of the observations made of each target. Standard stars
(HD49798, EG 274 and EG 21) for photometric calibration were observed with the
same instrumental configuration.}\label{tab:obs}
\begin{tabular}{cccccccl}
\hline
Target & Redshift & \multicolumn{2}{c}{Position (J2000)} & Date & Exposure & Axial & Comment \\
& & RA & Dec & & Time (s) & Wavelength (\AA) &\\
\hline
MRC B1256-243 & 2.263 & \ra{12}{59}{12.6} & \dec{-24}{36}{05} & 2003 July 27 & 15 $\times$ 60 & 3957.2 & Repeated \\
& & & & & & 3967.1 & twice. \\
& & & & & & 3977.1 & \\
& & & & & & 3987.1 & \\
\\
MRC B2158-206 & 2.249 & \ra{22}{01}{27.0} & \dec{-20}{25}{36} & 2003 July 27 & 15 $\times$ 60 & 3959.1 & Repeated \\
& & & & & & 3969.1 & four \\
& & & & & & 3979.1 & times. \\
& & & & & & 3989.0 & \\
& & & & & & 3999.0 & \\
\\
BR B0019-1522 & 4.528 & \ra{00}{22}{08.0} & \dec{-15}{05}{39} & 1997 Nov. 6 & 600 & 6709.5 & Repeated \\
& & & & & & 6725.9 & eight \\
& & & & & & 6742.3 & times. \\
\hline
\end{tabular}
\end{center}
\end{table*}
The evolution of clustering with cosmic time is widely recognised as one of
the most stringent tests of the cold dark matter paradigm \citep{Kaiser91,
Springel05}. However, locating high redshift clusters is challenging. The
traditional methods of X-ray and blind optical searches are limited: X-ray
surveys can detect only the most luminous sources at high-$z$, while optical
searches are highly vulnerable to projection effects. In order to overcome
these limitations, a way of targeting the search is needed.
Since the earliest studies, it has been established that quasars are
associated with groups and clusters of galaxies \citep{Bahcall69, Oemler72}.
More recently, \citet{McLure01} argued that a close match between the space
density of clusters and that of quasars indicates that practically all
clusters contained an AGN at high redshift. Further, \citet{Rawlings04}
propose that radio jets from AGN are a major influence on cluster evolution.
They suggest that a galaxy merger within the cluster triggers a radio-jet
episode; the jets then deliver energy to the intracluster medium, heating it
and preventing it from falling into the other developing cluster galaxies.
These galaxies are thus starved of fuel, and star formation within the cluster
will effectively shut down. \citeauthor{Rawlings04} speculate that every
protocluster undergoes such an episode, strengthening the link postulated by
\citeauthor{McLure01}.
This relationship between galaxy overdensities and AGN suggests a method for
locating high-$z$ clusters: we can use quasars as convenient `anchors' for our
search. This technique has already been exploited by others with notable
success: for example, \citet{Stiavelli05} tentatively report the detection of
clustering around a radio-quiet quasar at $z = 6.28$.
To date most galaxy clusters detected around AGN have been identified based on
statistical overdensities of objects observed in their vicinity. A better
strategy for overcoming foreground contamination is to identify individual
star forming galaxies in the AGN field by their characteristic redshift
dependent features. In particular, Lyman $\alpha$ emission has been used to
identify high redshift galaxies for some time. Among the first high redshift
objects identified by emission lines were the $z = 4.55$ Ly $\alpha$ emitters
observed in the field of the quasar BR B2237-0607 by \citet{Hu96}. Since then,
a series of highly profitable observations of Ly $\alpha$ emitters in AGN
fields have been carried out. \citet{Kurk00} and \citet{Pentericci00} used a
combination of narrow- and broad-band imaging with follow-up spectroscopy to
identify a galaxy overdensity within 1.5 Mpc of the $z = 2.156$ radio galaxy
PKS B1138-262. Similar results have been achieved for the radio galaxies TN
J1338-1942 \citep[$z=4.1$;][]{Venemans02}, TN J0924-2201
\citep[$z=5.2$;][]{Venemans04, Overzier06} and MRC B0316-257
\citep[$z=3.13$;][]{Venemans05} and 6C0140+326 \citep[$z=4.413$;][]{Kuiper11}.
While this combination of broad and narrowband imaging has produced
demonstrably successful results, the more direct antecedents of this work have
adopted an alternative approach. The \textit{Taurus Tunable Filter} (TTF)
instrument, installed on the Anglo-Australian Telescope, provided a powerful
method of narrow-band (of order 10 \AA) imaging over a large range of
wavelengths \citep{BH982}. \citet{Bremer99} introduced the strategy used to
search for line emitters at a given redshift with TTF: broadly, the tunable
filter is stepped across a range of wavelengths around the expected redshifted
position of the emission. Emission line galaxies then appear brighter in those
frames centred on the spectral line.
Considerable success has been achieved at lower redshifts with this technique.
\citet{Baker01} located a cluster around the $z = 0.9$ radio-loud quasar MRC
B0450-221 using TTF to search for $[$O\,{\sc ii}$]$ 3727 \AA{} emission. The
same technique was used by \citet{Barr04}, who examined six radio-loud quasars
at redshifts $0.8 < z < 1.3$, identifying a total of 47 candidate emission
line galaxies (ELGs), at an average space density around 100 times higher than
that found locally.
Further work with TTF was performed by \citet{Francis04}, who targeted Ly
$\alpha$ emitters within 1 Mpc of the $z=2.159$ radio loud quasar PKS
B0424-131 without making {\it any} detections. These authors selected this
extremely luminous UV source with the expectation of finding Ly $\alpha$
fluorescent clouds in the vicinity of the quasar but these were not detected.
With specific application to PKS B0424-131, \citet{Bruns11} demonstrated that
the most intrinsically UV-luminous quasars observed beyond $z=1$ suppress star
formation in low-mass haloes ($M_{\rm vir} \lesssim 10^{12}$ M$_\odot$) within
a megaparsec of the quasar. The intense UV radiation field is expected to
photo-evaporate HI clouds which presumably accounts for the lack of
detections. We return to this point in our conclusion
(\S~\ref{sec:conclusion}).
The present work continues to push TTF to higher redshifts, searching three
quasar fields at redshifts up to $z \sim 4.5$. The objects selected include
examples of both radio-loud and radio-quiet quasars, and their environments
are compared. Section \ref{sec:obs} of this paper describes the observations,
including target selection, instrumental characteristics and a note on data
reduction. Section \ref{sec:sim} describes simulations performed to examine
statistical properties and completeness of our sample. Section \ref{sec:id}
describes how candidate ELGs were identified and presents details on the
detections, as well as considering the possible sources of mis-identified
`interloper' objects. Section \ref{sec:properties} analyses the distribution
and properties of the sample. Our conclusions are summarised in Section
\ref{sec:conclusion}. Throughout, we assume an $H_0 = 70$ km s$^{-1}$
Mpc$^{-1}$, $\Omega_{\Lambda} = 0.7$, $\Omega_{\mathrm{M}} = 0.3$ cosmology.
\section{Observations}
\label{sec:obs}
\subsection{Target selection}
Two data sources were used for this analysis. The authors used TTF to observe
objects drawn from the Molonglo Quasar Sample \citep[MQS;][]{Kapahi98} of
low-frequency-selected radio-loud quasars in July 2003. Six targets had been
selected from the MQS on the basis of observability, suitable redshifts being
limited by the necessity to place Lyman $\alpha$ within the wavelength ranges
accessible to TTF's order-blocking filters. Due to weather constraints, only
two quasars were observed: MRC B1256-243 ($z = 2.263$) and MRC B2158-206 ($z =
2.249$). Immediately following each quasar observation, a standard star was
observed with the same instrumental settings for flux calibration. In
addition, observations of BR B0019-1522, a $z = 4.528$ radio-quiet quasar,
were drawn from the Anglo-Australian Observatory archive. These data were
taken on 1997 November 6 by Bland-Hawthorn, Boyle and Glazebrook, and were
accompanied by companion observations of a standard star. Details of each
target are given in Table \ref{tab:obs}.
\subsection{Instrumental setup and characteristics}
Throughout this work, a distinction is drawn between a \textit{frame}
(corresponding to one set of data read from the CCD), an \textit{image} (a
number of frames at the same etalon settings which have been combined for
analysis) and a \textit{field}, or stack of images of the same area of sky at
different etalon settings.
\subsubsection{Wavelength variation and the optical axis}
\label{sec:wlvariation}
Fabry-P\'erot images have a quadratic radial wavelength dependence of the form
$\lambda_\theta = \lambda_{centre}(1 - \theta^2/2)$ \citep{Bland89}, where
$\theta$ is the off-axis angle at the etalon. In a typical observation, the
wavelength varies across the field by around 1\% of $\lambda_{centre}$.
Wavelength calibration is performed with respect to the axial wavelength; for
any given pixel position on the image, it is then possible to calculate the
wavelength observed at that point.
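As an illustrative sketch (not the actual reduction code; the pixel scale and axis position are placeholders of our own), this calibration can be expressed as:
\begin{verbatim}
# Hypothetical sketch of the quadratic off-axis wavelength correction.
import math

def observed_wavelength(lambda_centre, x_pix, y_pix,
                        x_axis, y_axis, radians_per_pixel):
    # Off-axis distance in pixels, then angle at the etalon in radians.
    r = math.hypot(x_pix - x_axis, y_pix - y_axis)
    theta = r * radians_per_pixel
    return lambda_centre * (1.0 - theta**2 / 2.0)
\end{verbatim}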
\subsubsection{Objects at $z \sim 2.2$}
The TTF was used at $f/8$ on the AAT in combination with the EEV2 CCD. This
resulted in a scale of 0.33'' per pixel. After processing, the total useful
rectangular field of view in the observations was around 7' by 5'. The radial
wavelength variation described in Section \ref{sec:wlvariation} resulted in a
shift of 1.4~\AA{} at 2' from the optical axis and 6.7~\AA{} at 4' from the axis.
Conditions were photometric, and seeing was on the order of 1.5''. The full
width at half maximum of the etalon transmission band was 7.5~\AA.
The targets were scanned at etalon plate spacings corresponding to a series of
wavelength steps of approximately 10~\AA, the aim being to straddle the
redshifted Ly $\alpha$. However, an intermediate-band order-blocking filter is
necessary to eliminate unwanted wavelengths and other orders of interference.
In this case, the AAT's B1 filter was the best available. Unfortunately, the
observed wavelengths were at the very edge of the filter transmission, as
shown in Fig. \ref{fig:trans}: the signal to noise ratio therefore decreases
significantly with wavelength. Table \ref{tab:obs} and Fig. \ref{fig:trans}
record observations of MRC B1256-243 at 3987.1 \AA. When these data were
analysed, it was clear that the reduced filter transmission had resulted in no
useful results at this wavelength. These data are not considered further in
this work. The MRC B2158-206 observations at 3989.0 \AA{} and 3999.0 \AA{} are
included hereafter, but did not include any useful detections.
Each CCD frame contained a total of 30 minutes of observations, taken at two
separate axial wavelengths. Each wavelength was exposed for 60 seconds a total
of 15 times. This procedure was repeated twice in the case of MRC B1256-243
and four times for MRC B2158-206; the total exposure times at each wavelength
are thus 30 minutes and 1 hour, respectively. Between each image, the
telescope pointing was shifted slightly: this enabled the easy identification
and subsequent elimination of diametric ghosts in the data.
\subsubsection{Objects at $z \sim 4.5$}
The TTF was used at $f/8$ on the AAT in combination with the MITLL2 CCD. This
resulted in a scale of 0.37'' per pixel. After processing, the total useful
rectangular field of view in the observations was 9'17'' by 4'10''. The
radial wavelength variation described in Section \ref{sec:wlvariation}
resulted in a shift of 5.1~\AA{} at 2' from the optical axis and 20.3~\AA{} at
4' from the axis. Conditions were photometric, and the seeing was on the
order of 1.5''. The full width at half maximum of the etalon transmission band
was 9.5~\AA. The AAT's R0 intermediate-band order-blocking filter was used:
this provided effectively constant transmission across the wavelength range
under consideration.
Each CCD frame contained a total of 30 minutes of observations: ten at each of
three axial wavelengths. Eight CCD frames were recorded, resulting in a total
of 80 minutes exposure for each axial wavelength. As before, the telescope
position was shifted slightly between images.
\begin{figure}
\begin{center}
\includegraphics{fig1}
\caption{On-axis etalon transmission bands for each of the three fields
observed shown relative to the relevant order-blocking filter used on the
telescope. Away from the optical axis the etalon transmission shifts to
shorter wavelengths (\S\ref{sec:wlvariation}).}\label{fig:trans}
\end{center}
\end{figure}
\subsection{Data reduction and catalogue construction}
Data reduction proceeds broadly as for standard broadband imaging. A full
consideration of the issues surrounding tunable filter data is given by
\citet{Jones012} and \citet{Jones02}. The various different images of each
field at the same axial wavelengths were aligned by a marginal centroid fit on
bright stars and then combined. Wavelength calibration was performed through
an emission line, as described by \citeauthor{Jones02}; xenon and
copper-helium arc lamps were used for the $z \sim 2.2$ fields, and a neon arc
lamp for BR B0019-1522.
After the data had been reduced, object detection and fixed aperture
photometry were performed on each image using {\sc SExtractor}
\citep{Bertin96}. The object detection parameters were defined as described in
the next section.
\subsection{Photometry}
\label{sec:photo}
The observations of the standard stars were reduced in the same way. For each
star, {\sc SExtractor} was used to perform aperture photometry yielding a
count $C_\mathrm{s}$. This corresponds to a known magnitude $m_\mathrm{s}$,
based on \citet{Hamuy92} for the lower redshift fields or from the ESO
Standard Star Catalogue for that of BR B0019-1522. If the exposure time on the
standard is $t_\mathrm{s}$ and that on an object in the field is
$t_\mathrm{Obj}$, the AB magnitude of the object is
\begin{equation}
m_\mathrm{AB} = m_\mathrm{s} - 2.5 \log_{10} \frac{C_\mathrm{Obj}t_\mathrm{s}}{C_\mathrm{s}t_\mathrm{Obj}}.
\end{equation}
The AB magnitude system \citep{Oke74} is defined by $m_\mathrm{AB} = -2.5
\log_{10} f_\nu - 48.60$ where $f_\nu$ is the flux in units of \mbox{ergs
cm$^{-2}$ s$^{-1}$ Hz$^{-1}$}. The monochromatic flux $f_\lambda$, in units of
\mbox{ergs cm$^{-2}$ s$^{-1}$ \AA$^{-1}$}, is then
\begin{equation}
\label{eq:abtoflux}
f_\lambda = \frac{c}{\lambda^2} \times 10^{-\left(m_{\mathrm{AB}} + 48.60\right)/2.5}.
\end{equation}
Conversion from $f_\lambda$ to the total flux in the band, $f_\mathrm{total}$
is performed by multiplying by the effective width of the etalon transmission.
The etalon transmission band may be taken as Lorentzian, normalised to 1 at
the wavelength of peak transmission, thus:
\begin{equation}
\label{eq:ttfpass}
T(\lambda) = \frac{\lambda_{\nicefrac{1}{2}}^2 / 4}{(\lambda - \lambda_\mathrm{c})^2 + \lambda_{\nicefrac{1}{2}}^2 / 4}
\end{equation}
where $\lambda$ is the wavelength, $\lambda_c$ the central wavelength of the
band and $\lambda_{\nicefrac{1}{2}}$ its full width at half maximum. Assuming
that $\lambda_\mathrm{c} \gg \lambda_{\nicefrac{1}{2}}$, Equation
\ref{eq:ttfpass} may be integrated over $0 \le \lambda \le \infty$ to yield a
width of $\pi \lambda_{\nicefrac{1}{2}}/2$. Combining this with Equation
\ref{eq:abtoflux} yields a total flux in the band of
\begin{equation}
\label{eq:fluxinband}
f_{\mathrm{total}} = \frac{\pi c \lambda_{\nicefrac{1}{2}}}{2 \lambda_\mathrm{c}^2} \times 10^{-\left(m_\mathrm{AB} + 48.60\right)/2.5}
\end{equation}
with units \mbox{ergs cm$^{-2}$ s$^{-1}$}.
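For reference, the quoted width follows from evaluating the integral of Equation \ref{eq:ttfpass}: writing $u = \lambda - \lambda_\mathrm{c}$ and extending the lower limit to $-\infty$ (valid since $\lambda_\mathrm{c} \gg \lambda_{\nicefrac{1}{2}}$),
\[
\int_{-\infty}^{\infty} \frac{\lambda_{\nicefrac{1}{2}}^2/4}{u^2 + \lambda_{\nicefrac{1}{2}}^2/4}\,\mathrm{d}u = \frac{\lambda_{\nicefrac{1}{2}}}{2} \left[\arctan\frac{2u}{\lambda_{\nicefrac{1}{2}}}\right]_{-\infty}^{\infty} = \frac{\pi\lambda_{\nicefrac{1}{2}}}{2}.
\]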
It is worth noting that this measures the flux received in the etalon
passband, and is thus a lower limit of the line flux of the ELG: variations of
line shapes and widths, and their positions relative to the etalon passband,
will cause the fluxes measured to be systematically underestimated. They
should therefore be regarded as lower limits.
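A minimal sketch of this conversion chain, for illustration only (the function names are ours), is:
\begin{verbatim}
# Hypothetical sketch of Equations 1 and 4: counts -> AB mag -> band flux.
import math

C_ANGSTROM = 2.998e18  # speed of light in Angstrom/s

def ab_magnitude(counts_obj, t_obj, counts_std, t_std, mag_std):
    # Equation 1: calibrate against the standard star.
    return mag_std - 2.5 * math.log10(
        (counts_obj * t_std) / (counts_std * t_obj))

def band_flux(m_ab, lambda_c, fwhm):
    # Equation 4: total flux in the band, ergs cm^-2 s^-1; still a
    # lower limit on the true line flux, as noted above.
    return (math.pi * C_ANGSTROM * fwhm *
            10 ** (-(m_ab + 48.60) / 2.5)) / (2.0 * lambda_c ** 2)
\end{verbatim}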
\section{Simulations}
\label{sec:sim}
\begin{figure*}
\begin{center}
\includegraphics{fig2}
\caption{Depths of each of the three fields as determined by the simulations
described in Section \ref{sec:dof}. On the left, the data is plotted in terms
of simulation inputs; on the right, in terms of the measurements made from
the simulated images. Note that the effects of the blocking filter are clearly
seen in the two upper (lower redshift) fields, as the completeness tails off
at higher wavelength. The higher redshift BR B0019-1522 field falls well
within the blocking filter, so the depth is relatively constant with
wavelength across the observed range.}
\label{fig:simresults}
\end{center}
\end{figure*}
We constructed a series of simulated images: data with properties similar to
our observations, but containing a known population of objects. The analysis
of these enables us to address the following questions:
\begin{itemize}
\item What are the most appropriate {\sc SExtractor} parameters for
extracting useful data from the images?
\item To what depth is each field complete--and how does that vary over the
field?
\item To what extent is our analysis prone to mis-identifying spurious `noisy'
features in an image as candidate emission line galaxies?
\end{itemize}
\subsection{Construction of simulated images}
Images were simulated in two stages: first, a background was generated, then
objects were superimposed on top of it.
Due to the properties of the blocking filter and the variation of wavelength
across the image, the background signal is not constant across the image. Each
data image was therefore divided into 100 by 100 pixel blocks, and the mean
background signal and associated noise was measured in each block. Simulated
blocks were then generated matching each of these, and then recombined to form
an overall simulated background of the same shape as the data.
A Ruby\footnote{\url{http://www.ruby-lang.org/}} program was written to simulate
the expected properties of objects we might observe. Objects were simulated at
random redshifts (over the range the observations might be expected to cover)
and pixel positions within the images. Based on the work of
\citet{LeDelliou06}, our observations were not expected to be sensitive to
continuum emission from ELGs, so this was not considered. Further, the ELGs
are spatially unresolved, so were simulated with a Gaussian point spread
function equal to the measured seeing. An emission line model was developed
based on the widths and profiles of high-$z$ Lyman $\alpha$ emitters based
chiefly on the $z \sim 4.5$ objects observed by \citet{Dawson04}.
Experimentation suggested that the results obtained were not sensitive to line
profile; velocity widths in the range 100--1000 km\,s$^{-1}$ were chosen
based on both \citet{Dawson04} and the more extreme example documented by
\citet{Tapken04}.
The effects of the instrument on the objects' detectability were then
considered before they were added to the background images. First a correction
for the order-blocking filter transmission was applied, using the position of
the object within the field to determine the observed wavelength and hence
filter transmission. The line profile was then multiplied by the transmission
profile of the etalon for the image under construction.
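As an illustrative sketch of this detectability calculation (the original simulations were implemented in Ruby; here we assume a Gaussian line profile, and the function names are ours):
\begin{verbatim}
# Hypothetical sketch: line profile x blocking filter x etalon band.
import numpy as np

def etalon_transmission(lam, lam_c, fwhm):
    # Lorentzian band, normalised to 1 at peak (Equation 3).
    return (fwhm**2 / 4.0) / ((lam - lam_c)**2 + fwhm**2 / 4.0)

def flux_in_image(line_centre, line_fwhm, line_flux,
                  filter_transmission, etalon_centre, etalon_fwhm):
    lam = np.linspace(line_centre - 50.0, line_centre + 50.0, 2000)
    sigma = line_fwhm / 2.3548  # FWHM -> Gaussian sigma
    profile = (line_flux / (sigma * np.sqrt(2.0 * np.pi)) *
               np.exp(-0.5 * ((lam - line_centre) / sigma) ** 2))
    attenuated = profile * filter_transmission(lam)
    return np.trapz(
        attenuated * etalon_transmission(lam, etalon_centre, etalon_fwhm),
        lam)
\end{verbatim}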
\subsection{Results of simulations}
Following the procedure above, simulations were run of all three fields. For
each data image, a total of 500 simulated images were constructed, each
containing 500 simulated sources.
\subsubsection{Detection parameters}
\label{sec:detpar}
Source extraction was run multiple times on each image with different
{\sc SExtractor} configuration parameters. In each case, the results were
compared with the catalogue of simulated objects in the image. The combination
of parameters that produced the greatest number of detections of known objects
combined with the smallest number of spurious detections of noise were then
used for the analysis of both the simulations and the observed data. These
parameters are listed in Table \ref{tab:sextractor}.
\begin{table}
\begin{center}
\caption{Optimal {\sc SExtractor} parameters determined by simulations and
used throughout this work.}\label{tab:sextractor}
\begin{tabular}{ccp{4.1cm}}
\hline
Parameter & Value & Description \\
\hline
{\sc detect\_minarea} & \phantom{0}6\phantom{.0} & Minimum number of pixels per detection. \\
{\sc detect\_thresh} & \phantom{0}1.3 & Detection threshold in $\sigma$ above local background. \\
{\sc back\_size} & 64\phantom{.0} & Size in pixels of mesh used for background estimation. \\
{\sc phot\_apertures} & \phantom{0}6\phantom{.0} & Aperture diameter (pixels). \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsubsection{Depths of fields}
\label{sec:dof}
As in the previous section, a source detection procedure was run on each
image and the results compared with the known simulation inputs. This time,
the fraction of the objects at each wavelength and magnitude which were
detected was recorded. The results are shown in Fig. \ref{fig:simresults}.
Note that these data can be recorded both in terms of the \textit{simulated} wavelength and magnitude and their \textit{detected} equivalents. For any
given pixel position in a field, an object can only be detected as peaking at
one of a limited range of wavelengths, since its peak will be seen to appear
at the wavelength of the image in which it occurs (of which there are at most
5). Hence, an object which is simulated with a very bright magnitude, but at a
wavelength far from the peak transmission of any of the filters, will be
detected with a somewhat dimmer magnitude at a wavelength corresponding to the
image in which it is brightest. Fig. \ref{fig:simresults} shows both the
simulated (on the left) and detected (on the right) quantities for each of
the three fields.
\section{Identification of candidate ELGs}
\label{sec:id}
\begin{table*}
\begin{center}
\caption{ELG candidates in the three fields observed. The AB magnitude given is that measured in the peak frame, with no correction for galactic extinction
or etalon transmission; the flux is calculated from that magnitude via Equation
\ref{eq:fluxinband}.}\label{tab:elgresults}
\begin{tabular}{lccccccc}
\hline
Field & ELG & \multicolumn{2}{c}{Position (J2000)} & Projected distance & Lyman $\alpha$ Peak & AB & Flux in band \\
& Id. & R.A. & Decl. & from Quasar (Mpc) & Wavelength (\AA) & mag. & (ergs cm$^{-2}$ s$^{-1} \times 10^{18}$)\\
\hline
MRC B1256 & A & \ra{12}{59}{23.2} & \dec{-24}{37}{32.9} & 1.428 & 3966 & 20.9 & 371 \\
& B & \ra{12}{59}{15.7} & \dec{-24}{37}{40.7} & 0.871 & 3966 & 21.1 & 293 \\
& C & \ra{12}{59}{02.7} & \dec{-24}{37}{15.1} & 1.257 & 3957 & 20.9 & 363 \\
& D & \ra{12}{59}{05.3} & \dec{-24}{37}{31.3} & 1.085 & 3960 & 20.7 & 424 \\
\\
MRC B2158 & A & \ra{22}{01}{26.0} & \dec{-20}{25}{08.0} & 0.263 & 3956 & 21.8 & 161 \\
& B & \ra{22}{01}{41.7} & \dec{-20}{24}{03.5} & 1.986 & 3971 & 21.7 & 192 \\
\\
BR B0019 & A & \ra{0}{21}{56.9} & \dec{-15}{04}{04.3} & 1.229 & 6673 & 22.5 & \phantom{0}37 \\
& B & \ra{0}{22}{03.8} & \dec{-15}{07}{41.2} & 0.898 & 6706 & 22.5 & \phantom{0}37 \\
& C & \ra{0}{22}{08.8} & \dec{-15}{06}{58.8} & 0.531 & 6705 & 22.0 & \phantom{0}57 \\
& D & \ra{0}{22}{08.8} & \dec{-15}{06}{56.3} & 0.515 & 6704 & 21.7 & \phantom{0}71 \\
& E & \ra{0}{21}{57.8} & \dec{-15}{06}{58.7} & 1.105 & 6697 & 22.7 & \phantom{0}31 \\
& F & \ra{0}{22}{14.5} & \dec{-15}{06}{42.6} & 0.748 & 6717 & 22.1 & \phantom{0}52 \\
& G & \ra{0}{22}{12.4} & \dec{-15}{06}{17.8} & 0.491 & 6716 & 22.1 & \phantom{0}51 \\
& H & \ra{0}{22}{12.7} & \dec{-15}{06}{01.4} & 0.471 & 6697 & 22.5 & \phantom{0}37 \\
& I & \ra{0}{22}{07.6} & \dec{-15}{05}{27.1} & 0.087 & 6694 & 22.4 & \phantom{0}39 \\
& J & \ra{0}{21}{58.6} & \dec{-15}{04}{56.2} & 0.940 & 6701 & 22.3 & \phantom{0}43 \\
& K & \ra{0}{22}{14.2} & \dec{-15}{04}{20.6} & 0.785 & 6680 & 22.6 & \phantom{0}32 \\
& L & \ra{0}{22}{14.8} & \dec{-15}{07}{22.1} & 0.939 & 6719 & 22.5 & \phantom{0}37 \\
& M & \ra{0}{22}{15.3} & \dec{-15}{06}{52.7} & 0.849 & 6716 & 22.2 & \phantom{0}48 \\
& N & \ra{0}{22}{11.5} & \dec{-15}{05}{04.1} & 0.405 & 6706 & 22.3 & \phantom{0}43 \\
& O & \ra{0}{22}{18.0} & \dec{-15}{04}{36.8} & 1.038 & 6694 & 22.4 & \phantom{0}39 \\
& P & \ra{0}{21}{53.9} & \dec{-15}{05}{58.2} & 1.351 & 6685 & 22.4 & \phantom{0}40 \\
& Q & \ra{0}{22}{13.9} & \dec{-15}{05}{08.8} & 0.597 & 6689 & 22.5 & \phantom{0}35 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics{fig3}
\caption{Relative positions of the ELG candidates detected in each of the
three fields. The dimensions of the plots indicate the size of the observed
fields. The quasars are located at the origin. The letters refer to the ELG
designations used throughout the text.}\label{fig:elgcandidates}
\end{center}
\end{figure}
{\sc SExtractor} was used with the parameters determined in Section
\ref{sec:detpar} and a detection threshold of 5$\sigma$ to build a catalogue
of sources for each image. Within each field, the catalogues from each image
were cross-matched: objects were associated by position, with a three pixel
threshold.
These observations are not deep enough to observe continuum flux from a
typical Lyman $\alpha$ emitting galaxy \citep{LeDelliou06}. Given the likely
range of line widths \citep{Dawson04, Tapken04}, we do not expect to observe
Lyman $\alpha$ emitters in more than two adjacent passbands. Objects which
were identified in either one or two bands were therefore flagged for further
investigation.
In order to minimise the risk of contamination by noisy artefacts, all
flagged objects were examined by eye, and those which appeared unphysical or corresponded to sites of corruption by, for example, heavy cosmic ray activity or charge trapping in the original images were rejected.
\subsection{MRC B1256-243}
Four candidate emission line galaxies were identified in the field of MRC
B1256-243. Details are given in Table \ref{tab:elgresults}, and their
locations are shown in Fig. 3(a). Thumbnail images of the
candidate galaxies from each field, together with the measured fluxes, are
shown in Fig. \ref{fig:1256objects}.
\subsection{MRC B2158-206}
Two candidate emission line galaxies were identified in the field of MRC
B2158-206. Details are given in Table \ref{tab:elgresults}, and their
locations are shown in Fig. 3(b). Thumbnail images of the
candidate galaxies from each field, together with the measured fluxes, are
shown in Fig. \ref{fig:2158objects}.
\subsection{BR B0019-1522}
Seventeen candidate emission line galaxies were identified in the field of BR
B0019-1522. Details are given in Table \ref{tab:elgresults}, and their
locations are shown in Fig. 3(c). Thumbnail images of the
candidate galaxies from each field, together with the measured fluxes, are
shown in Fig. \ref{fig:0019objects}.
\subsection{Contaminants}
This section briefly addresses the likelihood that our method might
incorrectly identify another sort of object as an ELG.
\subsubsection{Continuum objects}
As per Figs. \ref{fig:trans} and \ref{fig:simresults}, the sensitivity of
our instrument varies from image to image. Therefore, it is possible that a
flat-spectrum continuum object may be detected in some images but not others,
thereby appearing to be a potential ELG.
We use the results of Section \ref{sec:sim} to estimate the probability of
this occurring. Each of the 250,000 simulated objects was sorted into one of
3,600 bins by wavelength and magnitude (each bin covering 1 \AA{} and 0.1
magnitudes). It is then possible to calculate the completeness of the bin
(i.e. the fraction of simulated objects which were recovered). Each candidate
ELG is assigned to a bin, and we then check the corresponding bins in adjacent
images for completeness. A low completeness value in these bins indicates that
a flat-spectrum object may have been `lost'.
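A minimal sketch of this check (the binning follows the text; the completeness threshold and all names are our assumptions):
\begin{verbatim}
# Hypothetical sketch: completeness per (wavelength, magnitude) bin,
# then a lookup in the adjacent image's grid for each candidate ELG.
import numpy as np

def completeness_grid(sim_wl, sim_mag, recovered, wl_edges, mag_edges):
    total, _, _ = np.histogram2d(sim_wl, sim_mag,
                                 bins=[wl_edges, mag_edges])
    found, _, _ = np.histogram2d(sim_wl[recovered], sim_mag[recovered],
                                 bins=[wl_edges, mag_edges])
    with np.errstate(invalid="ignore"):
        return np.where(total > 0, found / total, 0.0)

def flat_spectrum_suspect(grid, wl_edges, mag_edges, wl, mag,
                          threshold=0.5):
    i = np.searchsorted(wl_edges, wl) - 1
    j = np.searchsorted(mag_edges, mag) - 1
    return grid[i, j] < threshold  # low completeness: may be 'lost'
\end{verbatim}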
This procedure calls into question four objects: A in the field of MRC
B2158-206, B in the field of MRC B1256-243 and E and K in the field of BR
B0019-1522. These sources were examined by eye, but there is no indication of
a faint detection in the crucial frame. They have not, therefore, been
excluded from this analysis.
\subsubsection{Lower redshift interlopers}
Another possibility is that emission lines from objects at lower redshift may appear in our observations. The lines which might be observed are listed in Table
\ref{tab:interlopers}.
\begin{table*}
\begin{center}
\caption{Potential low-redshift `interloper' emission lines, together with the
redshifts at which they appear and the estimated number observed in each of
the fields. The flux of each line relative to \mbox{H\,$\alpha$}{} in
a ``typical'' galaxy is given, based on \citet{Kennicutt92}.}\label{tab:interlopers}
\begin{tabular}{ccccccccccc}
\hline
Line & \AA & Flux & \multicolumn{2}{c}{MRC B2158-206} & \multicolumn{2}{c}{MRC B1256-243} & \multicolumn{2}{c}{BR B0019-1522} \\
& (rest) & ratio & $z$ & Number & $z$ & Number & $z$ & Number \\
\hline
\fline{O}{ii} & 3727 & $0.41\pm0.21$ & 0.065 & \phantom{$^*$}0.05\phantom{$^*$} & 0.060 & 0.02 & 0.803 & 1.93 \\
\mbox{H\,$\beta$} & 4860 & $0.14\pm0.06$ & - & - & - & - & 0.383 & 1.68 \\
\fline{O}{iii} & 5007 & $0.20\pm0.15$ & - & - & - & - & 0.342 & 1.41 \\
\mbox{H\,$\alpha$} & 6548 & $1.00\pm0.00$ & - & - & - & - & 0.027 & \phantom{$^*$}0.01\phantom{$^*$} \\
\fline{N}{ii} & 6583 & $0.43\pm0.16$ & - & - & - & - & 0.021 & \phantom{$^*$}0.01\phantom{$^*$} \\
\hline
\end{tabular}
\end{center}
\end{table*}
\citet{Cowie97} and \citet{Gallego95} provide number density counts for star
forming galaxies at a range of redshifts. Both adopt a \mbox{$H_0 =
50$ km\,s$^{-1}$\,Mpc$^{-1}$}, $\Omega_{\Lambda} = 0$, $\Omega_{\mathrm{M}} =
1$ cosmology, which we converted to match that used in this work (Section
\ref{sec:intro}). In addition, \citeauthor{Gallego95} assume a \citet{Scalo86}
IMF; \citeauthor{Cowie97} provide a conversion to a \citet{Salpeter55} IMF,
and it is these results we adopt in this work. Based on these, we can estimate
the number density of star forming galaxies along our line of sight: see
Fig. \ref{fig:sfgs}.
\begin{figure}
\begin{center}
\rotatebox{270}{\resizebox{!}{\columnwidth}{\includegraphics{fig4}}}
\caption{Variation of galaxy number density with star formation rate for a
range of redshifts. Based on data from \citet{Cowie97} and \citet{Gallego95}.}
\label{fig:sfgs}
\end{center}
\end{figure}
\citet{Kennicutt98} provides a conversion between star formation rate in a
galaxy and \mbox{H\,$\alpha$}{} luminosity; the ratios given in Table \ref{tab:interlopers}
make it possible to convert that into expected luminosities for the other
lines. After applying a correction for instrumental effects and galactic
extinction \citep{Schlegel98}, a locus of points in the magnitude-wavelength
completeness diagrams (Fig. \ref{fig:simresults}) on which each line at a
given redshift might be detected is determined. This locus is then integrated
to estimate the total volume over which the line might be observed at this
redshift. This procedure is then repeated along the full length of the curves
shown in Fig. \ref{fig:sfgs}. In this way, the total number of interlopers
which might be observed is estimated. The results are shown in Table
\ref{tab:interlopers}.
It is clear that the estimated number of interlopers is negligible in the case
of the two lower-redshift fields. However, it is possible that as many as five
of the candidate ELGs in the BR B0019-1522 field are, in fact, low redshift
interlopers. This could only be confirmed by further observations.
\section{Properties of candidate ELGs}
\label{sec:properties}
In this section, we consider the distribution of candidate ELGs around the
quasars to determine whether the quasar lies in an identifiable overdensity
relative to the field.
The small number of candidates around the lower-$z$ quasars renders a
meaningful statistical analysis of the individual fields unreliable. In an
attempt to mitigate this, and given the apparent similarity of the fields,
they are both considered as one unit in this section.
\begin{figure*}
\begin{center}
\includegraphics{fig5}
\caption{Distribution of ELG candidates around the quasars. On the left, the
projected distance seen on the sky for both the ELG candidates (boxes) and all
the objects observed (crosses); at right, the relative velocities.}
\label{fig:distribution}
\end{center}
\end{figure*}
The distribution of ELG candidates around the quasar is shown in both
projection on the sky (left) and velocity distribution (right) in Fig.
\ref{fig:distribution}. When calculating the projection on the sky, we have
normalised the total visible area on the sky in each distance bin. We also
plot the distribution of all objects detected by {\sc SExtractor} in the field
for comparison.
Based on these figures, there is little evidence of projected clustering in
the low-$z$ fields. However, there is a notably higher density of objects
within 1 Mpc (projected) of BR B0019-1522. This is consistent with what one
might expect from an examination of Fig. \ref{fig:elgcandidates}: note the
large number of objects to the east of the quasar in Fig. 3(c). It is also
in line with the scale lengths observed in clusters around other AGN
\citep{Venemans02, Bremer02, Barr04}.
There is no suggestion of clustering in velocity space in Fig.
\ref{fig:distribution}. In part, this may be due to the low number of
detections in the low-$z$ fields. In the field of BR B0019-1522, we note that
all candidates were observed as bluer than the quasar itself; this is
noteworthy, but not implausible given the wavelength range probed (6650--6740
\AA, with the quasar at 6722 \AA). Although the bluest velocity bins show a
lower number of total counts, this can be attributed to the reduced
instrumental sensitivity at the relevant wavelengths (see Fig. 2(c)).
The space density of galaxies in the three fields may also be estimated. As
alluded to in the previous section, the comoving volume being probed by our
measurements varies with wavelength and magnitude. Consider for example Fig.
2(a): a bright object--magnitude 19, say--may be detected at a range of
wavelengths, from around 3920 \AA{} to 4010 \AA. A fainter object at, for
instance, magnitude 22 is only detected if it lies within a much smaller
wavelength range: around 3940 \AA{} to 3960 \AA. Therefore, we define an
`accessible volume', $\mathcal{V}_n$, for each detected object $n$ within the
field. $\mathcal{V}_n$ is calculated by taking the locus of points in Fig.
\ref{fig:simresults} occupied by a source with the observed properties and
integrating over all wavelengths. The density is taken as $\rho =
1/\mathcal{V}_1 + 1/\mathcal{V}_2 + ... + 1/\mathcal{V}_n$. The results for
our fields are given in Table \ref{tab:density}.
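A minimal sketch of this estimator (the Poisson-style uncertainty is our assumption):
\begin{verbatim}
# Hypothetical sketch of the accessible-volume (1/V_n) density sum.
def number_density(volumes):
    # volumes: accessible volume V_n in Mpc^3 for each candidate ELG.
    rho = sum(1.0 / v for v in volumes)
    err = sum(1.0 / v ** 2 for v in volumes) ** 0.5
    return rho, err
\end{verbatim}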
\begin{table}
\begin{center}
\caption{Estimated space and star formation rate densities, together with the
total number of ELG candidates (\#), for each of the fields
observed. Note that our observations are valid only to an approximately
defined lower limit of star formation.}\label{tab:density}
\begin{tabular}{cccc}
\hline
Field & \# & Number density & SFR density \\
& & (Mpc$^{-3}\,\times\,10^4$) & (M$_\odot$\,yr$^{-1}$\,Mpc$^{-3}$) \\
\hline
MRC B1256 & \phantom{0}4 & $22.48 \pm 11.64$ & $0.0346 \pm 0.0174$ \\
MRC B2158 & \phantom{0}2 & $\phantom{0}9.09 \pm \phantom{0}6.52$ & $0.0070 \pm 0.0049$ \\
BR B0019 & 17 & $49.09 \pm 12.21$ & $0.0484 \pm 0.0117$ \\
\hline
\end{tabular}
\end{center}
\end{table}
It is also instructive to estimate the star formation rates found in these
fields. Based on \citet{Kennicutt94} combined with \citet{Brocklehurst71} and
\citet{Hu96}, we arrive at the relationship:
\begin{equation}
\mathrm{SFR}(\mathrm{M}_\odot\,\mathrm{yr^{-1}}) = 0.91 \times 10^{-42} L(\mathrm{Ly} \alpha) (\mathrm{erg\,s^{-1}})
\label{eq:sfr}
\end{equation}
It should be noted that \mbox{Ly $\alpha$}{} is a very poor indicator of star formation
rate. It is resonantly scattered by neutral hydrogen, and hence has a high
chance of absorption either before leaving the galaxy or by clouds in the
intergalactic medium \citep{Haiman99}. Further, \citet{VG93} argues that \mbox{Ly $\alpha$}{}
emission in starbursts is strongly dependent on the age of the burst,
rendering the calibration of Equation \ref{eq:sfr} unreliable from around
$10^7$ years after the burst start. Nevertheless, \mbox{Ly $\alpha$}{} is the only
diagnostic available to us, so we persist in these estimates with caution.
We take the star formation rate density as $\rho_{SFR} = SFR_1/\mathcal{V}_1 +
SFR_2/\mathcal{V}_2 + ... + SFR_n/\mathcal{V}_n$, where $SFR_n$ is the star
formation rate associated with ELG candidate $n$ as calculated by Equation
\ref{eq:sfr}. Recall from Section \ref{sec:photo} that the line fluxes are
systematically underestimated since objects will fall outside the peaks of the
etalon passbands. Making the approximation that objects are evenly spread in
wavelength around the etalon peaks, we apply a correction to the observed
magnitudes of 0.23 (in the low-$z$ fields) or 0.27 (BR B0019-1522 field) to
account for this. We correct the results for completeness based on Fig.
\ref{fig:simresults}: a single detection in an area with a low detection rate
is taken as representative of a larger population.
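The chain from corrected line flux to $\rho_{SFR}$ is then a one-line
computation per object; the short sketch below (Python, with illustrative
inputs rather than our catalogue values) makes the bookkeeping explicit.
\begin{verbatim}
# Sketch of the SFR density estimate (illustrative inputs only).
# L_lya are Ly-alpha line luminosities in erg/s; V are the matching
# accessible volumes in Mpc^3. The passband correction is applied
# as a flux scaling equivalent to brightening by dm magnitudes.
def sfr_from_lya(L):
    return 0.91e-42 * L        # the SFR calibration above, M_sun/yr

dm = 0.23                      # low-z fields (0.27 for BR B0019-1522)
flux_corr = 10 ** (0.4 * dm)

candidates = [(1.2e42, 1.8e3), (3.5e42, 2.2e3)]   # (L_lya, V_n)
rho_sfr = sum(sfr_from_lya(L * flux_corr) / V for L, V in candidates)
print(rho_sfr)                 # M_sun/yr/Mpc^3
\end{verbatim}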
The results are shown in Table \ref{tab:density}. Note that our observations
are sensitive to galaxies only down to some minimum level of star formation
(\sfr{9} in the case of MRC B2158-206 and BR B0019-1522; \sfr{25} in the case
of MRC B1256-243): there may be a fainter population which we do not probe.
It is noteworthy that the star formation rate in the field of MRC B1256-243 is
anomalously high, but the large uncertainties in the field and the higher
minimum detectable rate render this result questionable. The best-constrained
result is that for BR B0019-1522; our results there are broadly
similar to those reported by \citet{Venemans02} around the $z = 4.1$ radio
galaxy TN J1338-1942. In all three fields, the number of objects detected is
higher than that which might be expected in the absence of any clustering.
Based on \citet{Cowie97}, we might expect on average 0.86 galaxies in the
field of MRC B2158-206, 0.25 in that of MRC B1256-243, and 1.3 in that of BR
B0019-1522, while an extrapolation from the results of the LALA \citep[`Large
Area Lyman $\alpha$';][]{Rhoads00} survey suggests we should observe 1.1
objects in the field of MRC B2158-206, 0.8 in that of MRC B1256-243 and 2.1 in
that of BR B0019-1522 (assuming that the density of \mbox{Ly $\alpha$}{} emitters is similar
at $z \sim 2.2$ to that observed at $z \sim 4.5$).
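For reference, the overdensity factors implied by these numbers are simple
ratios; we tabulate them below (a trivial Python sketch using only the counts
quoted above).
\begin{verbatim}
# Observed ELG counts versus the blank-field expectations above.
observed = {'MRC B2158-206': 2, 'MRC B1256-243': 4, 'BR B0019-1522': 17}
cowie    = {'MRC B2158-206': 0.86, 'MRC B1256-243': 0.25,
            'BR B0019-1522': 1.3}
lala     = {'MRC B2158-206': 1.1, 'MRC B1256-243': 0.8,
            'BR B0019-1522': 2.1}

for f in observed:
    print(f, observed[f] / cowie[f], observed[f] / lala[f])
# BR B0019-1522 comes out a factor ~8-13 above the field expectation.
\end{verbatim}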
\section{Conclusions}
\label{sec:conclusion}
Until recently, it has proved difficult to find high-redshift clusters and,
indeed, there are very few known beyond $z \sim 1$. The detection of hot
X-ray emission from intracluster gas followed by optical imaging and/or
spectroscopic confirmation becomes inefficient for detecting more distant
clusters; a manifestly higher success rate is achieved by targeting the
vicinity of high redshift radio galaxies and quasars.
We have used tunable filter observations to identify a galaxy overdensity in
the field of BR B0019-1522, with a local number density an order of magnitude
higher than that which might be expected in the field. This is among the
highest-redshift clusters detected around a radio quiet quasar. We have also
identified potential overdensities in the fields of MRC B1256-243 and MRC
B2158-206, although deeper observations are required to confirm these
detections.
The current observations were made with the Taurus Tunable Filter, an
instrument which has now been decommissioned, on the 4-metre-class
Anglo-Australian Telescope. These observations have clearly demonstrated the
success of the tunable imaging technique. The prospects for further progress
in this area are strong, as the next generation of tunable filter instruments
are now available or becoming available on telescopes such as the GTC 10-m
\citep[OSIRIS;][]{Cepa00}, SOAR 4-m \citep[BTFI;][]{Taylor10}, SALT 11-m
\citep[PFIS;][]{Smith06}, NTT 3.5-m \citep[3D-NTT;][]{Marcelin08} and the
Magellan 6.5-m \citep[MMTF;][]{Veilleux10}.
With existing telescopes, it is very difficult to extract more information
than a few emission lines and broadband photometry for the host galaxies in
these high-redshift environments. More detailed spectral information will not
be possible until the next generation of extremely large telescopes or the
James Webb Space Telescope come on line. But there are other uses for these
observations: in particular, \citet{Bruns11} have shown that quasar
environments may act as a surrogate for studying the radiative suppression of
galaxy formation during the epoch of reionization. Interestingly, the UV
suppression reduces the star-forming galaxy counts by a factor of 2--3 but
does not suppress them altogether. The time is therefore ripe to further
develop this promising method of investigation in order to learn about the
occurrence of high-redshift, star forming groups and the impact on these
groups by quasar activity.
\bibliographystyle{mn2e}
Jackiw--Teitelboim (JT) gravity is a simple model of two-dimensional quantum gravity on backgrounds of constant curvature such as anti-de~Sitter spaces $AdS_2$ \cite{JACKIW,TEITELBOIM,MaldacenaAdS2,AlmheiriPolchinski,Jensen:2016pah,Engelsoy:2016xyb}. It consists of a real scalar field $\phi$ coupled to gravity with the Euclidean action on a Riemann surface $\surf$ being
\begin{equation} \label{eq:JTAction}
I_\text{JT}=-\frac{S_0}{2}\left(\frac{1}{2}\int_{\surf}\!\!\!d^2x \, \sqrt{g} R
+\int_{\partial\surf} \!\!\!\! dx \, \sqrt{h} K \right)
-\frac{1}{2}\int_{\surf}\!\!\! d^2x \, \sqrt{g}\phi(R+2)
+\int_{\partial\surf}\!\!\!\! dx \, \sqrt{h}\phi(K-1)\ ,
\end{equation}
where $R$ is the Ricci scalar, $g_{\mu\nu}$ the metric, $K$ is the trace of the extrinsic curvature at the boundary $\partial\surf$, and $h_{\mu\nu}$ is the boundary metric induced from $g_{\mu\nu}$. The sum of the first two terms is proportional to the Euler characteristic of the surface $\surf$; in a black hole context it represents the ground-state entropy, and in the full gravitational path integral it weights the contributions of the different geometries through the coupling~$S_0$. The third term sets the constraint of only considering hyperbolic Riemann surfaces
\begin{equation}\label{eq:HyperbolicCondition}
R(x)+2=0\ ,
\end{equation}
and the last term contains a Gibbons--Hawking--York boundary term together with a counterterm that ensures a finite result when removing the regularisation of the position of the $AdS_2$ boundary. This term captures the Schwarzian dynamics of reparametrisations at the boundary. JT~gravity has been used as a gravitational model in the $AdS_2/CFT_1$ correspondence and in a broader context it encapsulates the low-energy dynamics of near-extremal black holes \cite{Nayak:2018qej,Sarosi:2017ykf}. It can also be linked to the Sachdev–Ye–Kitaev model \cite{SYK1,SYK2} because its low-energy sector is described by the Schwarzian theory and in a certain limit the thermal partition functions agree \cite{RemarksSYK,SYK2}.
In the remarkable work \cite{SSS} Saad, Shenker and Stanford demonstrate that extending the gravitational sector to include geometries consisting of arbitrary number of boundaries and also arbitrary genera furnishes a partition function equivalent to a specific double-scaled Hermitian matrix theory. This duality can be stated as
\begin{align}\label{eq:JTMMDuality}
Z(\beta_1,\ldots,\beta_n) \mathrel{\widehat{=}}\langle \text{Tr} e^{-\beta_1 H} \ldots \text{Tr} e^{-\beta_n H} \rangle_{\text{MM}}\ .
\end{align}
Here the left hand side is the connected thermal partition function~$Z(\beta_1,\ldots,\beta_n)$ of JT~gravity for geometries with $n$ asymptotic boundary components characterised by their inverse temperatures $\beta_i$, $i=1,\ldots,n$. The right hand side is the corresponding correlator of the dual Hermitian matrix integral. Interestingly, these correlators enjoy an interpretation as observables in an ensemble of quantum mechanical systems whose random Hamiltonians~$H$ are given by the Hermitian matrices of the matrix model \cite{SSS}.\footnote{According to ref.~\cite{BlackHolesandRandomMatrices}, the intriguing appearance of an ensemble of quantum mechanical systems can also be argued for via the relationship of JT~gravity to the Sachdev–Ye–Kitaev model.} This duality is generalised in ref.~\cite{Stanford:2019vob}, where extensions of JT~gravity are associated to other matrix models \cite{Dyson:1962es,Altland:1997zz,Zirnbauer:1996zz}.
The arguments for the proposed duality in ref.~\cite{SSS} rely on two crucial facts: Firstly, as can be seen for the disk, the path integral of the Schwarzian theory localises \cite{WittenStanford}. Secondly, the contributions of Riemann surfaces of higher genera to the JT~gravity path integral reduce to a Schwarzian theory at each boundary component together with an integration over suitable moduli spaces of hyperbolic Riemann surfaces. The latter contributions give rise to Weil--Peterson volumes on the associated moduli spaces of stable curves that --- as proven in ref.~\cite{Eynard_Orantin_2007Volumes} --- obey the same recursion relations as appear in the context of the specific double-scaled Hermitian matrix integral, which in turn suggests the proposed correspondence~\eqref{eq:JTMMDuality}. The duality~\eqref{eq:JTMMDuality} as spelt out above is a priori established perturbatively, i.e.\ on the level of an asymptotic genus expansion. In addition, there are also non-perturbative contributions \cite{SSS}, and hence the matrix model can be viewed as a (non-unique) non-perturbative completion of the genus expansion of JT~gravity. A proposal to deal with potential non-perturbative instabilities is developed in refs.~\cite{CJ1,CJ2,CJ3}.
In this work we focus on the structure of deformations to JT~gravity and the resulting modifications to the thermal partition functions appearing on the left hand side of the duality~\eqref{eq:JTMMDuality}. A particular deformation to JT~gravity can be incorporated by adding a scalar potential $U(\phi)$ to the Lagrangian of the action~\eqref{eq:JTAction} of the form \cite{Maxfield3gravity,WittenDeformations}
\begin{equation}\label{eq:Dilatonpotential}
U(\phi) = 2 \epsilon \, e^{-(2\pi-\alpha)\phi} \ , \quad 0 < \alpha < \pi \ .
\end{equation}
This potential does not affect the asymptotic boundary conditions and the gravitational path integral can be evaluated perturbatively in the coupling $\epsilon$ \cite{WittenDeformations}. Carrying out the path integral over the scalar field $\phi$ at the perturbative order $\epsilon^k$ changes the constraint~\eqref{eq:HyperbolicCondition} to \cite{MertensDefects,Maxfield3gravity}
\begin{equation}\label{eq:HyperbolicConditionwithDefect}
R(x)+2= 2 \sum_{j=1}^{k}
(2\pi-\alpha) \, \delta^{(2)}(x-x_j) \ ,
\end{equation}
with a remaining integral of the positions $x_1,\ldots,x_k$ over the Riemann surface $\surf$. Thus the constraint~\eqref{eq:HyperbolicConditionwithDefect} at the given perturbative order $\epsilon^k$ with the two-dimensional $\delta$-distributions introduces on the hyperbolic surfaces $k$~conical singularities at the points $x_1,\ldots,x_k$ with identification angle $\alpha$. As a result, perturbatively the path integral of JT~gravity with the potential~\eqref{eq:Dilatonpotential} can be interpreted as a sum over all possible hyperbolic Riemann surfaces $\Sigma$ with any number of conical singularities with identification angles $\alpha$ at arbitrary positions on $\Sigma$. Furthermore, we can interpret the deformation~\eqref{eq:Dilatonpotential} as coupling JT~gravity to a gas of defects characterised by the coupling constant $\epsilon$ and the identification angle $\alpha$ \cite{Maxfield3gravity,WittenDeformations}. The structure can readily be generalised to an arbitrary finite number (possibly even to an infinite number or to a continuous family) of defect species with individual couplings $\epsilon_j$ and identification angles $\alpha_j$ \cite{Maxfield3gravity,WittenDeformations}, such that a more general class of deformations to JT~gravity can be realised.
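To exhibit the perturbative structure explicitly for a single defect species, note that the weight of the potential term expands as a Taylor series (schematically, suppressing the remaining JT~action and the normalisation conventions of refs.~\cite{Maxfield3gravity,WittenDeformations}),
\begin{equation}
e^{-\int_{\surf} d^2x\, \sqrt{g}\, U(\phi)}
= \sum_{k=0}^{+\infty} \frac{(-2\epsilon)^k}{k!} \prod_{j=1}^{k} \int_{\surf} d^2x_j\, \sqrt{g(x_j)}\, e^{-(2\pi-\alpha)\phi(x_j)} \ ,
\end{equation}
so that the term of order $\epsilon^k$ inserts $k$ exponentials of the dilaton, which act as linear sources for $\phi$ and, upon integrating out $\phi$, produce precisely the $\delta$-function sources in eq.~\eqref{eq:HyperbolicConditionwithDefect}.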
Instead of directly studying deformations to JT~gravity via scalar potentials of the type \eqref{eq:Dilatonpotential}, we use the connection to two-dimensional topological gravity \cite{WittenIntersection} and the related formulation in terms of moduli spaces of stable curves \cite{KontsevichIntersection,Mirzakhani}. Previously, this approach has been prominently employed in this context, for instance, in refs.~\cite{SSS,OkuyamaSakai1,OkuyamaSakai2,WittenDeformations,Alishahihaetal}. Upon identifying deformations to JT~gravity with solutions to the KdV~hierarchy (which play a central role in topological gravity, see e.g.~ref.~\cite{Itzykson:1992ya}) and using well-established matrix model techniques \cite{Gross1,Gross2,Douglas:1989ve,Brezin:1990rb}, we can study a rather general class of deformations to JT~gravity. From this perspective topological gravity and hence JT~gravity with deformations can be identified with certain minimal string theories and deformations thereof \cite{Douglas:1989ve,GinspargMooreLectures,Belavin:2008kv}. Already in ref.~\cite{SSS} it is observed that JT~gravity can be viewed as the large $p \rightarrow+\infty$ limit of the $(2,2p-1)$ minimal string theory with the associated couplings $t_k$ given by \cite{OkuyamaSakai1,CJ1}
\begin{equation} \label{eq:OSexpansionpoint}
t_0=t_1=0 \ , \quad
t_k=\gamma_k \quad \text{with} \quad
\gamma_k=\frac{(-1)^k}{(k-1)!} \quad \text{for} \quad k=2,3,\ldots \ .
\end{equation}
These values for the couplings $t_k$ relate to a specific solution to the above mentioned KdV~hierarchy. In this work we study deformations to JT~gravity by considering more general solutions to the KdV~hierarchy, which on the level of the couplings $t_k$ amounts to deforming them as
\begin{equation}\label{eq:shiftedexpansionpoint}
t_k=\gamma_k+\sdef_k \quad \text{for} \quad k=0,1,2,\ldots \ .
\end{equation}
For particular choices of $\sdef_k$ --- as established in refs.~\cite{Maxfield3gravity,WittenDeformations} and as discussed in detail in the main text --- this description realises JT~gravity interacting with a gas of defects as described by the scalar potential~\eqref{eq:Dilatonpotential} and generalisations thereof discussed in ref.~\cite{WittenDeformations}. Inspired by the work of Okuyama and Sakai we thoroughly investigate the relationship between general deformations $\sdef_k$ and the specific deformations that are attributed to the interaction of JT~gravity with a gas of defects.
Moreover, we turn to some applications of our general results. First of all, we analyse the low temperature behaviour of the calculated thermal partition functions using techniques developed in refs.~\cite{OkuyamaSakai1,OkuyamaSakai2}. At low temperatures the (asymptotic) genus expansion of the thermal partition function can be given an exact analytic expression \cite{OkuyamaSakai2,Okounkovnpointfunction}, because non-perturbative corrections are suppressed in the performed low temperature double scaling limit. This allows us to study in this low temperature regime Hawking--Page phase transitions and the features of spectral form factors as functions of the deformation parameters with the help of numerical methods. As a second application, we comment on a further instance of JT~gravity, which requires the inclusion of Riemann surfaces with conical singularities, namely the wavefunction of the universe for JT~gravity in de~Sitter space \cite{MaldacenadS,MaloneydS}. This striking connection relies on subtleties of the analytic continuation from sharp to blunt defects or equivalently from small identification angles to large identification angles.
The structure of the paper is as follows:
In Section~\ref{JTGravityDeformed JT GravityandTopological Gravity} we first set the stage for the forthcoming analysis and introduce well-established physical and mathematical tools to study correlation functions in topological gravity. Then, applying techniques developed in ref.~\cite{OkuyamaSakai1}, as a genus expansion we calculate for deformed theories of JT~gravity (asymptotic) thermal partition functions (with one or several asymptotic boundary components). The studied class of deformations is suitable to describe interactions of JT~gravity with defects.
In Section~\ref{Section:LowtemperatureExpansion} we turn to the low temperature expansion of the thermal partition function, which can be computed exactly at leading order in temperature \cite{OkuyamaSakai1,OkuyamaSakai2,Alishahihaetal}. For certain physical applications this analysis is more natural than the previously discussed asymptotic genus expansion because the expansion in temperature naturally sets an energy scale for the accessible states in the computed thermal partition functions.
Using the computed low energy limit of the partition functions for JT~gravity coupled to a gas of defects, we show in Section~\ref{Spectral Form Factor} that there is a Hawking--Page phase transition. We numerically compute the associated critical temperature as a function of the deficit coupling constant, and we also analyse the spectral form factor. We find that in the given low temperature approximation the time scale for the onset of the plateau exhibits a simple behaviour in terms of the deficit coupling, which conforms with the observed Hawking--Page phase transition.
In Section~\ref{ds} we make some basic comments on the connection between the wavefunction of the universe for JT~gravity on de~Sitter space~$dS_2$ and the Weil--Petersson volumes of the associated Riemann surfaces with conical singularities in the light of the recent work \cite{TuriaciBluntDefects}.
Finally, in Section~\ref{sec:concl} we present our conclusions, where we discuss our results and present some outlook for further investigations.
\smallskip
While completing this work, ref.~\cite{Okuyama:2021ytf} appeared, which has certain overlap with some of our discussions in Section~\ref{JTGravityDeformed JT GravityandTopological Gravity}.
\section{JT Gravity, Deformed JT Gravity and Topological Gravity}\label{JTGravityDeformed JT GravityandTopological Gravity}
In this section we aim to describe JT~gravity together with deformations in terms of two-dimensional topological gravity. The works~\cite{OkuyamaSakai1,OkuyamaSakai2} by Okuyama and Sakai establish a direct link between the partition functions of JT~gravity and correlation functions in topological gravity. Deforming JT~gravity from interactions with defects (as established in ref.~\cite{WittenDeformations,Maxfield3gravity}) yields another instance of two-dimensional topological gravity with modified coupling parameters. While we are indeed interested in JT~gravity coupled to a gas of defects, we study deformations to JT~gravity in a more general setting. By using the results of ref.~\cite{Itzykson:1992ya} we construct thermal partition functions for deformed theories of JT~gravity, which at any intermediate stage of their derivation can be specialised to particular deformed JT~gravity theories (such as JT~gravity interacting with defects). Our approach could offer a starting point towards a dictionary between specific values for the couplings in two-dimensional topological gravity and deformations attributed to scalar potentials added to the JT~gravity action, such as the potential~\eqref{eq:Dilatonpotential} for deformations arising from defect interactions.\footnote{Results in a similar vein of thought are reported in ref.~\cite{TuriaciBluntDefects} as well. See also ref.~\cite{Mertens:2020hbs} for a discussion along these lines from the minimal string theory perspective.}
In part this section uses and reviews some well-established mathematical tools from the intersection theory on the moduli spaces of stable curves to derive the thermal partition functions of deformed JT~gravity. The reader not interested in these derivations should skip these technical details and instead view this section as a collocation of expressions for thermal partition functions and related quantities, which are used in later sections of this work.
\subsection{Weil--Petersson Volumes of Hyperbolic Riemann Surfaces}\label{Weil--Petersson Volumes of Hyperbolic Riemann Surfaces}
To set the stage and to introduce the used notation, we first collect some mathematical preliminaries on the Weil--Petersson volumes of hyperbolic Riemann surfaces with geodesic boundary components and conical singularities from the perspective of intersection theory on the moduli spaces of stable curves.
Let $\mathcal{M}_{g,n}$ be the moduli space of smooth curves of genus $g$ with $n$ distinct marked points. By construction the moduli space $\mathcal{M}_{g,n}$ is not compact, as it contains neither the limiting curve with a handle degenerating to a nodal point nor the limit as two marked points collide. The Deligne--Mumford compactification $\overline{\mathcal{M}}_{g,n}$ includes the above mentioned limits in terms of stable curves with nodal singularities. The resulting moduli space of stable curves is well-defined as long as the parametrised curves with marked points do not admit any continuous automorphisms. That is to say, $\overline{\mathcal{M}}_{g,n}$ is defined for genus $g\ge 2$ and any number of marked points, for genus one with at least one marked point, and for genus zero with at least three marked points. The complex dimensions of these moduli spaces are given by
\begin{equation} \label{eq:dimM}
\dim_\mathbb{C} \overline{\mathcal{M}}_{g,n} = 3g - 3 + n \ .
\end{equation}
The moduli space of stable curves $\overline{\mathcal{M}}_{g,n}$ comes equipped with several natural cohomology classes. To each marked point $p_i$, $i=1,\ldots,n$, on the curve $C_g$ one associates at the point $p_i$ the complex cotangent line $T^*_{p_i} C_g$, which patches together to a line bundle~$\mathcal{L}_i$ on $\overline{\mathcal{M}}_{g,n}$. The first Chern class of this line bundle realises a cohomology class on $\overline{\mathcal{M}}_{g,n}$ denoted by
\begin{equation}
\psi_i = c_1(\mathcal{L}_i) \, \in \, H^2(\overline{\mathcal{M}}_{g,n},\mathbb{Q}) \ .
\end{equation}
The other cohomology class relevant for us is the first Miller--Morita--Mumford class $\kappa_1$, which arises in a similar fashion. Consider the forgetful map $\pi: \overline{\mathcal{M}}_{g,n+1} \to \overline{\mathcal{M}}_{g,n}$ that omits the $(n+1)$-th marked point. Then the cohomology class $\kappa_1$ is given by \cite{MR1486986,MR2482127}
\begin{equation} \label{eq:MMMcl}
\kappa_1 = \pi_*( c_1(\mathcal{L}_{n+1})^2 ) + \sum_{i=1}^n \psi_i \, \in \, H^2(\overline{\mathcal{M}}_{g,n},\mathbb{Q}) \ ,
\end{equation}
where the push-forward $\pi_*$ can heuristically be thought of as integrating over the fiber of the map~$\pi$. The class $\kappa_1$ is proportional to the Weil--Petersson K\"ahler form $\omega_\text{WP}$ \cite{MR727702}
\begin{equation} \label{eq:WPKahler}
\omega_\text{WP} = 2 \pi^2 \kappa_1 \ .
\end{equation}
Upon integrating such cohomology classes over $\overline{\mathcal{M}}_{g,n}$ we obtain (rational) intersection numbers that are collected in correlators. The correlators of particular interest to us are given by
\begin{equation} \label{topologicalgravitycorrelationfunctions}
\left\langle \kappa_1^\ell\tau_{d_1} \ldots \tau_{d_n} \right\rangle_{g,n} = \int_{\overline{\mathcal{M}}_{g,n}}\kappa_1^\ell \psi_1^{d_1} \ldots \psi_{n}^{d_n} \ ,
\quad \ell,d_1,\ldots,d_n \in \mathbb{Z}_{\ge 0} \ ,
\end{equation}
where the classes $\tau_{d_i}$ are the conventional abbreviations for $\psi_i^{d_i}$ arising from the $i$-th marked point. The defined correlators are only non-vanishing if the integrated class represents a (non-zero) top class of $\overline{\mathcal{M}}_{g,n}$, which together with eq.~\eqref{eq:dimM} amounts to the selection rule
\begin{equation} \label{selectionrules}
\left\langle \kappa_1^\ell\tau_{d_1} \ldots \tau_{d_n} \right\rangle_{g,n} \ne 0 \quad \Rightarrow \quad
\ell + d_1 + \ldots + d_n = 3g - 3 + n \ .
\end{equation}
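As a simple illustration, the standard values (see e.g.\ ref.~\cite{WittenIntersection})
\begin{equation}
\left\langle \tau_0^3 \right\rangle_{0,3} = 1 \ , \qquad
\left\langle \tau_1 \right\rangle_{1,1} = \frac{1}{24} \ ,
\end{equation}
are consistent with this rule, since $0+0+0 = 3\cdot 0 - 3 + 3$ and $1 = 3\cdot 1 - 3 + 1$.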
For these correlators we introduce the generating functions \cite{WittenIntersection}
\begin{equation} \label{generatingfunction}
F(\{t_k\}) = \sum_{g=0}^{+\infty} g_s^{2g} \left\langle e^{\sum_{d=0}^{\infty} t_d \tau_d} \right\rangle_g
=\sum_{g=0}^{+\infty} g_s^{2g} \sum_{\{n_d\}} \left(\prod_{d=0}^{\infty}\frac{t_d^{n_d}}{n_d !} \right) \left\langle \tau_0^{n_0} \tau_1^{n_1} \ldots \right\rangle_g \ ,
\end{equation}
and
\begin{multline}\label{eq:G}
G(s,\{t_k\}) = \sum_{g=0}^{+\infty} g_s^{2g} \left\langle e^{s \kappa_1 + \sum_{d=0}^{\infty} t_d \tau_d} \right\rangle_g
=\sum_{g=0}^{+\infty} \sum_{m=0}^{+\infty}\frac{g_s^{2g} s^m}{m!}\sum_{\{n_d\}} \left(\prod_{d=0}^{\infty} \frac{t_d^{n_d}}{n_d !} \right)
\left\langle \kappa_1^m \tau_0^{n_0} \tau_1^{n_1} \ldots \right\rangle_g \ ,
\end{multline}
in terms of the genus expansion parameter $g_s$ and the couplings $\{t_d\}$. Due to the relation~\eqref{eq:MMMcl} the two generating functions are not independent but instead are related as \cite{MR2379144,MR2482127,Dijkgraaf:2018vnm}
\begin{equation}\label{eq:gammak}
G(s,\{t_k\}) = F(\{t_k + \gamma_k\}) \ , \qquad \gamma_0 = \gamma_1 = 0 \ , \quad \gamma_k = \frac{(-1)^k}{(k-1)!} s^{k-1} \ .
\end{equation}
As the first Miller--Morita--Mumford class $\kappa_1$ is proportional to the Weil--Petersson K\"ahler form $\omega_\text{WP}$ (cf.\ eq.~\eqref{eq:WPKahler}), the function $G(2\pi^2,\{t_k=0\})$ evaluated at $t_k=0$ readily becomes the generating function of the Weil--Petersson volumes $V_g$ of the moduli space of genus $g$ curves (for $g\ge 2$) without any marked points, i.e.\
\begin{equation}
G(2\pi^2,\{t_k=0\}) = \sum_{g=2}^{+\infty} g_s^{2g} \int_{\overline{\mathcal{M}}_{g,0}} e^{\omega_\text{WP}} = \sum_{g=2}^{+\infty} g_s^{2g} \int_{\overline{\mathcal{M}}_{g,0}} \text{vol}_\text{WP}
= \sum_{g=2}^{+\infty}g_s^{2g} V_g \ .
\end{equation}
Here $\text{vol}_\text{WP}$ is the Weil--Petersson volume form of the $(3g-3)$-dimensional moduli space $\overline{\mathcal{M}}_{g,0}$.
As shown in the seminal work~\cite{MR2257394} by Mirzakhani, the Weil--Petersson volume of a hyperbolic Riemann surface of genus $g$ with $n$ geodesic boundary components of lengths $\vec b=(b_1,\ldots,b_n)$ reads in terms of the previously defined cohomology classes on $\overline{\mathcal{M}}_{g,n}$
\begin{equation}\label{WPVolumes}
V_{g,\vec{b}}=\int_{\overline{\mathcal{M}}_{g,n}}e^{\omega_\text{WP}+\frac{1}{2}\sum_{\ell=1}^n b_\ell^2\psi_\ell}
=\left\langle{e^{2\pi^2\kappa_1+\frac{1}{2}\sum_{\ell=1}^{n} b_\ell^2\psi_\ell}}\right\rangle_{g,n} \ .
\end{equation}
For hyperbolic Riemann surfaces with geodesic boundary components of uniform length $b$, using eq.~\eqref{topologicalgravitycorrelationfunctions} it is straightforward to verify that the volumes $V_{g,\left(b,\ldots,b\right)}$ are generated by
\begin{equation}
G(2\pi^2,\{t_k = \tfrac{b^{2k}}{2^k k!}\delta \}) = \sum_g g_s^{2g} \sum_{i=0}^{+\infty}
\frac{\delta^i}{i!} \, V_{g,(\smallunderbrace{b,\ldots,b}_{i\ \text{times}})} \ ,
\end{equation}
or upon rescaling all cohomology classes with a non-zero factor $\lambda$ we obtain with eq.~\eqref{eq:dimM} the generating function
\begin{equation}
G(2\pi^2\lambda,\{t_k = \tfrac{\lambda^k b^{2k}}{2^k k!}\delta \}) = \sum_g \frac{g_s^{2g}}{\lambda^3} \sum_{i=0}^{+\infty}
\frac{(\lambda\delta)^i}{i!} \, \lambda^{3g} V_{g,(\smallunderbrace{b,\ldots,b}_{i\ \text{times}})} \ .
\end{equation}
For this generating function of Weil--Petersson volumes (and similarly for all other generating functions of Weil--Petersson volumes to be defined in the following), the volumes $V_{g,(b,\ldots,b)}$ that are not in accord with the selection rule~\eqref{selectionrules} are set to zero.\footnote{This in particular implies that the Weil--Petersson volumes are only non-vanishing for stable curves, with the only exception being the Weil--Petersson volume $V_{1}$ for $g=1$ and $n=0$, which is either set to zero or to a constant, see for instance the discussion in ref.~\cite{WittenIntersection}. In this work, however, the volume $V_{1}$ is not relevant as we only consider Riemann surfaces with at least one boundary component.} Furthermore, for boundary components with $p$ distinct geodesic length $b_1,\ldots,b_p$, this generating function readily generalises to
\begin{equation}
G(2\pi^2\lambda,\{ t_k\! =\! \sum_{i=1}^p\tfrac{\lambda^kb_i^{2k}}{2^k k!}\delta_i \} )=
\sum_g \frac{g_s^{2g}}{\lambda^3} \!\!\!\sum_{i_1,\ldots,i_p = 0}^{+\infty} \left(\prod_{s=1}^{p} \frac{(\lambda\delta_s)^{i_s}}{i_s !} \right)
\lambda^{3g} V_{g,(\smallunderbrace{ b_1,\ldots,b_1}_{i_1\ \text{times}},\ldots ,\smallunderbrace{ b_p,\ldots,b_p}_{i_p\ \text{times}})} \ .
\end{equation}
Finally, a hyperbolic Riemann surface with a conical singularity with identification angle $\alpha$ can simply be obtained by replacing the argument $b$ of a boundary component by $i\alpha$ (for the identification angles in the range $0<\alpha_i<\pi$).\footnote{The identification angle $\alpha$ of a conical singularity corresponds to the deficit angle $2\pi - \alpha$ of the singularity.} Thus, the Weil--Petersson volume $V_{g,\vec b, \vec\alpha}$ of a hyperbolic Riemann surface with boundary components of geodesic lengths $\vec b=(b_1,\ldots,b_p)$ and together with conical singularities $\vec\alpha=(\alpha_1,\ldots,\alpha_q)$ is given by
\begin{equation}\label{eq:conicalWPvolumes}
V_{g,\vec b,\vec\alpha} = V_{g,(b_1 ,\ldots, b_p,i \alpha_1, \ldots, i \alpha_q)} \ .
\end{equation}
Moreover, the generating function for hyperbolic Riemann surfaces with boundary components of geodesic lengths $b_1,\ldots,b_p$ and conical singularities with identification angles $\alpha_1,\ldots,\alpha_q$ becomes in terms of the non-zero parameter $\lambda$
\begin{multline} \label{eq:GGeneral}
G(2\pi^2\lambda,\{ t_k =\sum_{i=1}^p\tfrac{\lambda^kb_i^{2k}}{2^k k!}\delta_i + \sum_{j=1}^q\tfrac{\lambda^k(-\alpha_j^2)^k}{2^k k!}\epsilon_j \} ) \\
= \!\sum_g \frac{g_s^{2g}}{\lambda^3} \!\!\!\!\!\sum_{\substack{i_1,\ldots,i_p = 0\\ j_1,\ldots,j_q = 0}}^{+\infty} \!\!
\! \left(\!\prod_{s=1}^{p} \frac{(\lambda\delta_s)^{i_s}}{i_s !}\!\prod_{t=1}^{q} \frac{(\lambda\epsilon_t)^{j_t}}{j_t !} \!\!\right)
\!\lambda^{3g}V_{g,(\smallunderbrace{ b_1,\ldots,b_1}_{i_1\,
\text{times}},\ldots ,\smallunderbrace{ b_p,\ldots,b_p}_{i_p\, \text{times}}),
(\smallunderbrace{\alpha_1,\ldots,\alpha_1}_{j_1\, \text{times}},\ldots ,\smallunderbrace{ \alpha_q,\ldots,\alpha_q}_{j_q\, \text{times}})}
.
\end{multline}
\subsection{Deformations of JT~Gravity from Minimal Strings} \label{sec:TopgravityMMstringsandMM}
Before delving into the technical computation of the thermal partition functions of JT~gravity with deformations, in this subsection we briefly spell out the connections among topological gravity, minimal string theories, and JT~gravity. This puts the forthcoming analysis into a broader context.
Saad, Shenker and Stanford already point out that standard JT~gravity relates to the large $p$ limit of the $(2,2p-1)$ minimal string theory \cite{SSS}. Such minimal string theories in turn enjoy a dual matrix model formulation \cite{Douglas:1989ve,KazakovMM,StaudacherMM}, which for finite $p$ comes with a finite number of coupling parameters. In the large $p$ limit, however, an infinite (but countable) number of couplings occur, which for standard JT~gravity are set to specific non-zero values. Furthermore, this infinite number of couplings relate to observables and their correlators in two-dimensional topological gravity, as introduced in the previous subsection.
In the following, as in ref.~\cite{OkuyamaSakai1}, using the connection to topological gravity we want to compute thermal partition functions as a function of this infinite number of couplings in order to describe JT~gravity and deformations thereof. In other words, instead of solely focussing on particular deformation backgrounds --- such as JT~gravity without deformations or JT~gravity interacting with a gas of defects --- we parametrise generic deformations to JT~gravity in terms of deformations of the $(2,2p-1)$ minimal string theories in the large $p$ limit, using the results of ref.~\cite{Itzykson:1992ya}.
Starting from a JT~gravity action formulation the values of the deformation parameters are ultimately determined from the constraints obtained from integrating out the scalar dilaton field. For instance, JT~gravity coupled to a gas of defects yields the constraint~\eqref{eq:HyperbolicConditionwithDefect}, which is dual to specific values of the topological gravity coupling parameters. For a given JT~gravity action functional --- such as JT~gravity interacting with defects --- we refer to coupling values that fulfil these constraints as on-shell couplings and couplings that deviate from this critical condition as off-shell couplings (adapting to a terminology introduced in ref.~\cite{OkuyamaSakai1}).
Turning this argument around, we can now ask whether specific values for these couplings correspond to a legitimate action functional of a deformed theory of JT~gravity. Intriguingly, as discussed in the following both JT~gravity and JT~gravity coupled to defects give rise to on-shell couplings that are governed by Bessel functions \cite{CJ1,OkuyamaSakai1,WittenDeformations,Maxfield3gravity}. The problem of establishing a dictionary between these deformation spaces raises the question to what extent other transcendental functions for on-shell couplings are linked to action functionals of deformed JT~gravity theories (see, e.g.\ ref.~\cite{Okuyama:2020qpm} for the realisation of JT~supergravity). For finite $p$ the $(2,2p-1)$ minimal string theories possess a finite-dimensional deformation space resulting from finitely many couplings~$t_k$. In the considered limit $p\to\infty$, the deformations $\sdef_k$ in eq.~\eqref{eq:shiftedexpansionpoint} can be characterised by their asymptotic behaviour for large $k$. The values for the couplings $t_k$ for undeformed JT~gravity are suppressed factorially (cf.~eq.~\eqref{eq:OSexpansionpoint}). For deformations arising from a gas of defects (at least for only finitely many types of defect species) the asymptotic behaviour of the couplings~$t_k$ for large $k$ remains the same. On the level of the action functional of JT~gravity such deformations give rise to a scalar potential~\eqref{eq:Dilatonpotential} that is exponentially suppressed for large positive values of the dilaton $\phi$. In general, we expect that the asymptotic behaviour of the scalar potential $U(\phi)$ for large $\phi$ relates to the asymptotic behaviour of the deformations $\sdef_k$ for large $k$.\footnote{Ref.~\cite{Mertens:2020hbs} makes an interesting proposal for a correspondence between a certain limit of Liouville theory coupled to matter and JT~gravity with a $\sinh(\phi)$-dilaton potential with a different asymptotic behaviour for $\phi\to+\infty$ (see also ref.~\cite{Kyono:2017pxs}).} Describing this duality beyond the discussed asymptotic growth behaviours seems a challenging task, which is beyond the scope of this work. Nevertheless, we hope that the description of generic deformations in the context of $(2,2p-1)$ minimal string theories in the large $p$ limit presented here proves useful from the JT~gravity perspective as well.
\subsection{JT Gravity Interacting with a Gas of Defects}\label{JT Partition Function and Specific Coupling Background}
We now study JT gravity interacting with a gas of defects, which is geometrically described in terms of Riemann surfaces with conical singularities \cite{WittenDeformations,Maxfield3gravity}. That is to say, we consider the partition function of JT gravity with contributions from hyperbolic Riemann surfaces with asymptotic boundary conditions together with an arbitrary number of conical singularities and at arbitrary genus. The relevant path integrals localise on the Weil--Petersson volumes of hyperbolic Riemann surfaces with geodesic boundary components and conical defects, folded with the path integral of the Schwarzian theory describing the one-dimensional action at the asymptotic boundaries \cite{SSS}. For a single asymptotic boundary component the resulting partition function reads \cite{WittenDeformations,Maxfield3gravity}
\begin{multline} \label{eq:JTPartitionFunctionDefects}
Z(\beta) = e^{S_0} Z^{\text{disk}}(\beta) +e^{S_0}\sum_{j=1}^{r}\epsilon_{j}Z^{\text{disk}}(\beta,\alpha_j)\\
+\sum_{g,n=0}^{\infty}e^{(1-2g)S_0}
\sum_{j_1,\ldots,j_{n}=1}^{r}\frac{\epsilon_{j_1}\cdots\epsilon_{j_{n}}}{n!}\int_{0}^{\infty}db\,b\,
Z^{\text{trumpet}}(\beta,b)V_{g,b,(\alpha_{j_1},\ldots,\alpha_{j_{n}})} \ .
\end{multline}
Here the parameters $\epsilon_j$, $j=1,\ldots,r$, are the coupling constants to the $r$ distinct defect types that are characterised by the identification angles $\alpha_j$ of their associated conical singularities on the hyperbolic Riemann surfaces. Furthermore, $\beta$ is the inverse temperature attributed to the configurations of wiggles at the asymptotic boundary of the hyperbolic Riemann surfaces. The distinct topologies of Riemann surfaces are weighted by the action $S_0$ that relates to the gravitational coupling $G_N$ as $G_N \sim 1/S_0$. Hence, the partition function is a non-perturbative expansion in the gravitational coupling $G_N$ of JT gravity \cite{SSS}. The first two terms in this expansion capture the contributions of disks with no conical singularities and a single conical singularity, respectively. The remaining topologies appear in the second line.\footnote{Due to the selection rules~\eqref{selectionrules} for non-vanishing Weil--Petersson volumes $V_{g,b,\vec a}$, the second line of eq.~\eqref{eq:JTPartitionFunctionDefects} does not contain a contribution from disks without any or with a single conical singularity.} The individual terms in this expansion are computed as \cite{WittenStanford,SSS,MertensDefects}
\begin{equation} \label{eq:BuildBlocks}
Z^\text{disk}(\beta)=\frac{\gamma^{\frac32}e^{\frac{2\pi^2\gamma}{\beta}}}{(2 \pi)^\frac12 \beta^{\frac32}}\ , \quad
Z^\text{disk}(\beta,\alpha_j)=\frac{\gamma^\frac12e^{\frac{\gamma \alpha_j^2}{2\beta}}}{(2 \pi\beta)^{\frac{1}{2}}} \ , \quad
Z^{\text{trumpet}}(\beta,b)=\frac{\gamma^\frac12e^{-\frac{\gamma b^2}{2\beta}}}{(2 \pi\beta)^{\frac{1}{2}}}\ ,
\end{equation}
where $\gamma$ is the coupling constant to the one-dimensional Schwarzian action.
First we observe that the summation over defects in eq.~\eqref{eq:JTPartitionFunctionDefects} can be rewritten as
\begin{equation}
\sum_{n=0}^{+\infty}\sum_{j_1,\ldots,j_{n}=1}^{r}\frac{\epsilon_{j_1}\cdots\epsilon_{j_{n}}}{n!}V_{g,b,(\alpha_{j_1},\ldots,\alpha_{j_{n}})}
= \sum_{n_1,\ldots,n_r = 0}^{+\infty} \left(\prod_{j=1}^r \frac{ \epsilon_j^{n_j}}{n_j !}\right)
V_{g,b,(\smallunderbrace{ \alpha_1,\ldots,\alpha_1}_{n_1\ \text{times}},\,.\,.\,.\,,\smallunderbrace{ \alpha_r,\ldots,\alpha_r}_{n_r\ \text{times}})} \ .
\end{equation}
Summed over all genera $g$ we readily express the volumes $V_{g,b,(\alpha_{j_1},\ldots,\alpha_{j_n})}$ in terms of the generating function~\eqref{eq:GGeneral} as
\begin{align}
\sum_{g,n=0}^{+\infty} g_s^{2g} \!\!\!\!\!\!\sum_{j_1,\ldots,j_{n}=1}^{r}\!\!\!\!\!\!
\frac{\epsilon_{j_1}\cdots\epsilon_{j_{n}}}{n!}\lambda^{3g} V_{g,b,(\alpha_{j_1},\ldots,\alpha_{j_{n}})} &=
\left. \lambda^2\frac{\partial}{\partial\delta}
G(2\pi^2\lambda,\{t_k =\tfrac{\lambda^kb^{2k}}{2^k k!}\delta + \sum_{j=1}^r\tfrac{\lambda^{k-1}(-\alpha_j^2)^k}{2^k k!}\epsilon_j \} ) \right|_{\delta=0}
\nonumber\\
&
\hspace*{-2em}= \sum_\ell \frac{b^{2\ell}\lambda^{\ell+2}}{2^\ell \ell!} \frac{\partial}{\partial{t_\ell}}
G(2\pi^2\lambda,\{t_k\! =\!\sum_{j=1}^r\tfrac{\lambda^{k-1}(-\alpha_j^2)^k}{2^k k!}\epsilon_j \} ) \ .
\label{eq:VAsGen1}
\end{align}
We insert this expression into eq.~\eqref{eq:JTPartitionFunctionDefects} with the relation
\begin{equation} \label{eq:Defgs}
e^{-S_0} = \lambda^\frac32 g_s \ ,
\end{equation}
and carry out the integration over the geodesic boundary lengths in eq.~\eqref{eq:JTPartitionFunctionDefects} using
\begin{equation}\label{eq:bintegration}
\int_{0}^{\infty}db\,b^{2n+1}\,e^{-\frac{\gamma b^2}{2\beta}} = \frac{n!}2 \left(\frac{2\beta}\gamma\right)^{n+1} \ .
\end{equation}
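This is a standard moment integral: the substitution $x=\gamma b^2/(2\beta)$, for which $b\,db = (\beta/\gamma)\,dx$ and $b^{2n} = (2\beta x/\gamma)^n$, reduces it to the Euler integral,
\begin{equation}
\int_{0}^{\infty}db\,b^{2n+1}\,e^{-\frac{\gamma b^2}{2\beta}}
= \frac12\left(\frac{2\beta}{\gamma}\right)^{n+1}\int_{0}^{\infty}dx\,x^{n}\,e^{-x}
= \frac{n!}2 \left(\frac{2\beta}\gamma\right)^{n+1} \ .
\end{equation}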
Then we arrive for the partition function $Z(\beta)$ at
\begin{equation} \label{eq:Zsingle1}
Z(\beta) = \frac{1}{\sqrt{2\pi} g_s} \!\left( \!\! \frac{\gamma}{\lambda \beta} \right)^\frac32
\!\left( e^{\frac{2\pi^2\gamma}\beta} \!\! + \frac\beta\gamma \sum_{j=1}^r \epsilon_j\,e^{\frac{\gamma \alpha_j^2}{2\beta}}
\!\! + \sum_{\ell=0}^{+\infty} \left( \frac{\lambda\beta}{\gamma}\right)^{\ell+2} \!\!\!\frac{\partial}{\partial t_\ell} G(2\pi^2\lambda,\{ t_k = \sdef_k \})
\! \!\right) ,
\end{equation}
with
\begin{equation} \label{eq:defGtilde}
\sdef_k = \sum_j \sdef_{k,j} \ , \qquad \sdef_{k,j} = \tfrac{\lambda^{k-1}(-\alpha_j^2)^k}{2^k k!}\epsilon_j \ .
\end{equation}
Note that only the last term of the partition function $Z(\beta)$ given in eq.~\eqref{eq:Zsingle1} is mapped to the topological correlators~\eqref{topologicalgravitycorrelationfunctions}, whereas the first two terms associated to disk topologies capture the semi-classical contributions to the partition function in the presence of a gas of defects.
It is straightforward to generalise the partition function $Z(\beta)$ to geometries with multiple asymptotic boundaries \cite{Maxfield3gravity,WittenDeformations}. For $m$ boundaries we define the partition function of connected hyperbolic Riemann surfaces by $Z(\beta_1,\ldots,\beta_m)$, where the inverse temperatures $\beta_1, \ldots, \beta_m$ describe the thermodynamics of the wiggles at the $m$ distinct asymptotic boundary components.
Similarly as for the partition function $Z(\beta)$ of a single asymptotic boundary, the partition function $Z(\beta_1,\beta_2)$ with two asymptotic boundaries splits into two pieces
\begin{equation} \label{eq:Zsplit}
Z(\beta_1,\beta_2) = Z(\beta_1,\beta_2)^\text{non-top.} + Z(\beta_1,\beta_2)^\text{top.} \ .
\end{equation}
The first term does not relate to topological correlators~\eqref{topologicalgravitycorrelationfunctions}, while the second term arises from an integral transformation of the Weil--Petersson volumes of hyperbolic Riemann surfaces with two geodesic boundary components that are computable in terms of topological correlators, cf. eqs.~\eqref{WPVolumes} and \eqref{eq:conicalWPvolumes}. The non-topological piece $Z(\beta_1,\beta_2)^\text{non-top.}$ receives only a contribution at genus zero from the topology of a cylinder (without any conical singularities). Using eqs.~\eqref{eq:BuildBlocks} and \eqref{eq:bintegration}, this cylindrical contribution is obtained by gluing two trumpets along their geodesic boundary components, as computed in ref.~\cite{SSS}
\begin{equation}
Z(\beta_1,\beta_2)^\text{non-top.}=\int\displaylimits_{0}^{\infty}db \,b\, Z^{\text{trumpet}}(\beta_1,b)\, Z^{\text{trumpet}}(\beta_2,b)
=\frac{\sqrt{\beta_1 \beta_2}}{2 \pi \beta_1 +2 \pi \beta_2 }\ .
\end{equation}
The selection rule~\eqref{selectionrules} implies that the partition functions $Z(\beta_1,\ldots,\beta_m)$ with $m>2$ receive only contributions of the topological type, i.e.\
\begin{equation}
Z(\beta_1,\ldots,\beta_m) = Z(\beta_1,\ldots,\beta_m)^\text{top.} \quad \text{for} \quad m>2 \ .
\end{equation}
For any $m\ge 1$ the topological part of the partition function $Z(\beta_1,\ldots,\beta_m)$ reads
\begin{multline} \label{eq:DefZk}
Z(\beta_1,\ldots,\beta_m)^\text{top.} =
\sum_{g,n=0}^{\infty}e^{(2-2g-m)S_0}
\sum_{j_1,\ldots,j_{n}=1}^{r}\frac{\epsilon_{j_1}\cdots\epsilon_{j_{n}}}{n!} \\
\cdot \prod_{i=1}^m\int_{0}^{\infty} \!\!\! db_i\,b_i\,
Z^{\text{trumpet}}(\beta_i,b_i)V_{g,(b_1,\ldots,b_m),(\alpha_{j_1},\ldots,\alpha_{j_{n}})} \ .
\end{multline}
Analogously to the formula~\eqref{eq:VAsGen1} for a single boundary component, we express the volumes $V_{g,(b_1,\ldots,b_m),(\alpha_{j_1},\ldots,\alpha_{j_n})}$ in terms of the generating function~\eqref{eq:GGeneral} as
\begin{multline}
\sum_{g,n} g_s^{2g} \!\! \sum_{j_1,\ldots,j_{n}=1}^{r} \lambda^{3g} \frac{\epsilon_{j_1}\cdots\epsilon_{j_{n}}}{n!}
V_{g,(b_1,\ldots,b_m),(\alpha_{j_1},\ldots,\alpha_{j_{n}})} \\
= \lambda^{3-m} \prod_{i=1}^m \left( \sum_{\ell=0}^{+\infty} \frac{b_i^{2\ell} \lambda^\ell}{2^\ell \ell!} \frac{\partial}{\partial t_\ell} \right)
G(2\pi^2\lambda,\{t_k = \sum_{j=1}^r\tfrac{\lambda^{k-1}(-\alpha_j^2)^k}{2^k k!}\epsilon_j \} ) \ .
\end{multline}
Inserting this expression into eq.~\eqref{eq:DefZk} and carrying out the integrals~\eqref{eq:bintegration}, we obtain
\begin{equation} \label{eq:Zmultitop}
Z(\beta_1,\ldots,\beta_m)^\text{top.} = \frac1{g_s^2} \mathcal{B}(\beta_1) \cdots \mathcal{B}(\beta_m) G(2\pi^2 \lambda,\{ t_k = \sdef_k \} )
\quad\text{for}\quad m \ge 1 \ ,
\end{equation}
with $\sdef_k$ as defined in eq.~\eqref{eq:defGtilde} and in terms of the differential operator
\begin{equation}
\mathcal{B}(\beta) = g_s \sqrt{ \frac{\lambda \beta}{2\pi\gamma} } \,
\sum_{\ell=0}^{+\infty} \left( \frac{\lambda\beta}{\gamma} \right)^\ell \frac{\partial}{\partial t_\ell} \ .
\end{equation}
It is shown in ref.~\cite{OkuyamaSakai2} that the differential operator $\mathcal{B}(\beta)$ creates an asymptotic boundary component at inverse temperature $\beta$. It is universal in the sense that without any modifications it also creates asymptotic boundary components in the presence of defects. The operator $\mathcal{B}(\beta)$ as a function of $\beta$ relates to the operator in ref.~\cite{Moore:1991ir}, which in the context of two-dimensional topological gravity creates a hole of specified boundary length in a surface. Therefore, we refer to $\mathcal{B}(\beta)$ as the boundary creation operator.
The obtained simple forms~\eqref{eq:Zsingle1} and \eqref{eq:Zmultitop} of the partition function $Z(\beta)$ and its multi-boundary generalisations $Z(\beta_1,\ldots,\beta_m)$ in the presence of a gas of defects have a nice interpretation from the topological gravity perspective. The Weil--Petersson volumes~\eqref{WPVolumes} are computed with the K\"ahler class $2\pi^2 \kappa_1$ on the moduli spaces $\overline{\mathcal{M}}_{g,n}$ \cite{MR2257394}. The generating function $G(2\pi^2\lambda,\{ t_k \})$ now expresses these volumes (as functions of the scaling and genus expansion parameters $\lambda$ and $g_s$) in terms of the shifted generating function $F(\{ t_k + \gamma_k \})$ of topological gravity according to eq.~\eqref{eq:gammak}. As explained in ref.~\cite{SSS,OkuyamaSakai1}, JT gravity can be interpreted as topological gravity with non-vanishing background parameters~$\{\gamma_k\}$. Including now a gas of defects (characterised by their couplings $\epsilon_j$ and identification angles $\alpha_j$) further deforms the background couplings $\{\gamma_k\}$. The leading order contribution arises from single-defect interactions while the higher order corrections are due to multi-defect interactions. These order-by-order contributions can be viewed as a Taylor expansion about the JT~gravity background parameters $\{\gamma_k\}$, which altogether sum up to the deformation $\{\gamma_k +\sdef_k\}$. Thus, JT~gravity interacting with a gas of defects yields yet other expansion points of the generating function $F(\{t_k\})$. It would be interesting to see if there are special expansion points that are singled out from the topological gravity point of view.
As in ref.~\cite{OkuyamaSakai1}, in the following we set the coupling $\gamma$ and the scaling parameter $\lambda$ to the convenient values
\begin{equation}
\lambda = \gamma = \frac1{2\pi^2} \ .
\end{equation}
Then the boundary creation operator $\mathcal{B}(\beta)$ and the background parameters $\sdef_k$ simplify to
\begin{equation} \label{eq:dshifts}
\mathcal{B}(\beta) = g_s \sqrt{ \frac{\beta}{2\pi} } \, \sum_{\ell=0}^{+\infty} \beta^\ell \frac{\partial}{\partial t_\ell} \ , \qquad
\sdef_k = \sum_j \left(-\frac{\alpha_j^2}{4\pi^2}\right)^k\frac{2\pi^2 \epsilon_j}{k!} \ ,
\end{equation}
and the partition functions become
\begin{equation} \label{eq:Zfuncs}
\begin{aligned}
Z(\beta) &= \frac{1}{\sqrt{2\pi} g_s \beta^\frac32}
\left( e^{\frac{1}\beta} + 2\pi^2\beta \sum_{j=1}^r \epsilon_j\,e^{\frac{\alpha_j^2}{4\pi^2\beta}} \right)
+ \frac1{g_s^2} \mathcal{B}(\beta) G(1,\{ t_k=\sdef_{k} \}) \ , \\
Z(\beta_1,\beta_2) &= \frac{\sqrt{\beta_1 \beta_2}}{2 \pi \beta_1 +2 \pi \beta_2 }
+\frac1{g_s^2} \mathcal{B}(\beta_1) \mathcal{B}(\beta_2) G(1,\{ t_k = \sdef_k \}) , \\[1.5ex]
Z(\beta_1,\ldots,\beta_m) &= \frac1{g_s^2} \mathcal{B}(\beta_1) \cdots \mathcal{B}(\beta_m) G(1,\{ t_k = \sdef_k \}) \quad \text{for}\quad m\ge 3\ ,
\end{aligned}
\end{equation}
where the first two partition functions receive both non-topological and topological contributions.
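To get a feeling for the size of the shifts $\sdef_k$ entering these expressions, they are easily tabulated; the following sketch (with an arbitrarily chosen defect coupling and identification angle, purely for illustration) exhibits their alternating signs and factorial suppression in $k$, in line with the discussion in Section~\ref{sec:TopgravityMMstringsandMM}.
\begin{verbatim}
# Shifts delta_k = (2*pi^2*eps/k!) * (-alpha^2/(4*pi^2))^k for a
# single defect species; eps and alpha below are illustrative.
import math

eps, alpha = 0.1, math.pi / 2

def delta_k(k):
    return 2 * math.pi**2 * eps \
        * (-alpha**2 / (4 * math.pi**2))**k / math.factorial(k)

for k in range(6):
    print(k, delta_k(k))   # alternating and factorially suppressed
\end{verbatim}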
\subsection{KdV Hierarchy and Off-Shell Partition Functions} \label{sec:KdV}
As conjectured by Witten~\cite{WittenIntersection} and proven by Kontsevich~\cite{KontsevichIntersection} the generating function $F(\{t_k\})$ of correlators in topological gravity defined in eq.~\eqref{generatingfunction} arises as a solution to the KdV hierarchy as follows. Let us define
\begin{align}\label{eq:Ftou}
u(\{t_k\}) = \frac{\partial^2}{\partial t_0^2}F(\{t_k\}) \ .
\end{align}
The function $u(\{t_k\})$ is a tau function to the KdV~hierarchy, i.e.\ it solves the system of graded partial differential equations
\begin{equation} \label{eq:GeneralizedKdV}
\partial_k u = \partial_0 \mathcal{R}_{k+1}(u,\partial_0u,\partial_0^2u,\ldots)\quad \text{with} \quad
\partial_k \equiv \frac{\partial}{\partial t_k} \ , \quad k=0,1,2,3,\ldots \ .
\end{equation}
Here $\mathcal{R}_k$, $k=1,2,3,\ldots$, are the Gelfand--Dikii polynomials \cite{Gelfand:1975rn}, which are polynomials in the derivatives $\partial_0^\ell u(\{t_k\})$, $\ell=0,1,2,\ldots$ of $u(\{t_k\})$, and depend on the parameter $g_s$. Together with the condition $\mathcal{R}_k(\{\partial_0^\ell u \equiv 0 \}) = 0$ they are defined with the initial polynomial $\mathcal{R}_1=u$ recursively as \cite{Gelfand:1975rn}
\begin{equation} \label{eq:generalKdvequation}
\partial_0 \mathcal{R}_{k+1}=\frac{1}{2k+1}\left(
2 u \left(\partial_0 \mathcal{R}_{k}\right) + \left(\partial_0 u\right) \mathcal{R}_{k} + \frac{g_s^2}{4}\partial_0^3 \mathcal{R}_{k} \right) \ .
\end{equation}
The first three Gelfand--Dikii polynomials read
\begin{equation} \label{eq:GD}
\mathcal{R}_1 = u \ , \quad
\mathcal{R}_{2}=\frac{u^2}{2}+\frac{g_s^2 }{12}\partial_0^2 u \ , \quad
\mathcal{R}_{3}=\frac{u^3}{3!}+\frac{g_s^2}{24}\left(2 u \partial_0^2 u + (\partial_0u)^2 \right) +\frac{g_s^4}{240} \partial_0^4u \ .
\end{equation}
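For instance, inserting $\mathcal{R}_1=u$ into the recursion~\eqref{eq:generalKdvequation} with $k=1$ gives
\begin{equation}
\partial_0 \mathcal{R}_{2}=\frac{1}{3}\left(2u\,\partial_0 u+(\partial_0 u)\,u+\frac{g_s^2}{4}\partial_0^3 u\right)
=\partial_0\left(\frac{u^2}{2}+\frac{g_s^2}{12}\partial_0^2 u\right) \ ,
\end{equation}
which integrates to the expression for $\mathcal{R}_2$ above, the integration constant being fixed by the condition $\mathcal{R}_2(\{\partial_0^\ell u \equiv 0\})=0$.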
The leading order term of the Gelfand--Dikii polynomials is given by
\begin{equation} \label{eq:Rk_u0}
\left. \mathcal{R}_k \right|_{g_s=0} = \frac{u^k}{k!} \ ,
\end{equation}
independent of any derivatives $\partial_0 u(\{t_k\})$, $\partial_0^2 u(\{t_k\})$, $\partial_0^3 u(\{t_k\})$, $\ldots$.
As the KdV hierarchy~\eqref{eq:GeneralizedKdV} depends only implicitly on the couplings $t_k$, the function $v(\{t_k\})= \partial_0^2 F(\{ t_k + \Delta t_k \})$ is a tau function for any set of constants $\{\Delta t_k\}$. In particular, a tau function arises from the generating function~$G(s,\{ t_k \})$ of Weil--Petersson volumes (cf.\ eq.~\eqref{eq:gammak}) and from the generating function $H(\{t_k\})$ of correlators on hyperbolic Riemann surfaces with conical singularities given by
\begin{equation} \label{eq:DefH}
H(\{ t_k \}) = G(1,\{t_k + \sdef_k \}) = F(\{t_k + \gamma_k + \sdef_k\}) \ ,
\end{equation}
in terms of the constants $\Delta t_k=\gamma_k+\sdef_k$, cf. eqs.~\eqref{eq:gammak} and \eqref{eq:dshifts}.
The particular tau function $u(\{t_k\})$ of topological gravity and hence the tau function $v(\{t_k\})$ with the shifted couplings obey the string equation \cite{Dijkgraaf:1990rs}
\begin{equation} \label{eq:stringeq}
\partial_0 u = 1 + \sum_{k=1}^{+\infty} t_k \, \partial_{k-1} u \ , \qquad
\partial_0 v = 1 + \sum_{k=1}^{+\infty} (t_k + \Delta t_k) \, \partial_{k-1} v \ .
\end{equation}
The string equation together with the KdV hierarchy determine unambiguously the tau functions $u(\{t_k\})$ and $v(\{t_k\})$ \cite{WittenIntersection}. The string equation can be viewed as the initial condition specifying a unique solution to the KdV hierarchy.
The partition functions $Z(\beta_1,\ldots,\beta_m)$ defined in eq.~\eqref{eq:Zfuncs} do not depend on the coupling parameters $\{t_k\}$ appearing in the definition of $H(\{t_k\})$. Instead the generating function $H(\{ t_k \})$ is evaluated at the specific values $t_k=0$ (corresponding to $t_k =\gamma_k + \sdef_k$ in terms of the generating functions $F(\{t_k\})$). We can define partition functions $Z^F(\{ t_k \}; \beta_1,\ldots,\beta_m)$ based on $F(\{ t_k \})$ or alternatively the partition functions $Z^H(\{ t_k \}; \beta_1,\ldots,\beta_m)$ based on $H(\{ t_k \})$ depending on $\{ t_k \}$ by generalising the topological part in eqs.~\eqref{eq:Zfuncs} to
\begin{equation} \label{eq:offshellPart}
\begin{aligned}
Z^F(\{ t_k \}; \beta_1,\ldots,\beta_m)^\text{top.} &= \frac{1}{g_s^2} \mathcal{B}(\beta_1) \cdots \mathcal{B}(\beta_m) F(\{ t_k \}) \ , \\
Z^H(\{ t_k \}; \beta_1,\ldots,\beta_m)^\text{top.} &= \frac{1}{g_s^2} \mathcal{B}(\beta_1) \cdots \mathcal{B}(\beta_m) H(\{ t_k \}) \ .
\end{aligned}
\end{equation}
Following ref.~\cite{OkuyamaSakai1} we refer to $Z^F(\{ t_k \}; \beta_1,\ldots,\beta_m)$ and $Z^H(\{ t_k \}; \beta_1,\ldots,\beta_m)$ as the off-shell partition functions, and upon specialising to suitable values for the couplings $\{t_k\}$ --- denoted as on-shell values --- we get back the result $Z(\beta_1,\ldots,\beta_m)$ referred to as the on-shell partition function, i.e.\
\begin{equation} \label{eq:onshellPart}
\begin{aligned}
Z(\beta_1,\ldots,\beta_m) &= Z^F(\{ t_k=\gamma_k + \sdef_k \}; \beta_1,\ldots,\beta_m) \ , \\
Z(\beta_1,\ldots,\beta_m) &= Z^H(\{ t_k = 0\}; \beta_1,\ldots,\beta_m) \ .
\end{aligned}
\end{equation}
These two classes of off-shell partition functions enjoy distinct interpretations. Whereas the off-shell partition function $Z^F(\{ t_k \}; \beta_1,\ldots,\beta_m)$ is defined in the setting of topological gravity in the context of intersection theory on the moduli spaces of stable curves \cite{WittenIntersection,KontsevichIntersection}, the partition functions $Z^H(\{ t_k \}; \beta_1,\ldots,\beta_m)$ directly relate to correlators on hyperbolic Riemann surfaces (possibly coupled to a gas of defects as described by the constants $\{ \sdef_k \}$) in the context of JT~gravity \cite{SSS,OkuyamaSakai1}. These two classes of off-shell partition functions are related as $Z^F(\{ \gamma_k + \sdef_k + t_k \}; \beta_1,\ldots,\beta_m)=Z^H(\{ t_k \}; \beta_1,\ldots,\beta_m)$.
Let us now determine the introduced off-shell partition functions explicitly. The tau function~\eqref{eq:Ftou} and the generating function~$F(\{t_k\})$ enjoy the genus expansion
\begin{equation}
u(\{t_k\}) = \sum_{\ell=0}^{+\infty} g_s^{2\ell} \, u_\ell (\{ t_k \}) \ , \qquad
F(\{t_k\}) = \sum_{\ell=0}^{+\infty} g_s^{2\ell} \, F_\ell(\{t_k \}) \ ,
\end{equation}
such that $u_g = \partial_0^2 F_g$. The KdV hierarchy~\eqref{eq:GeneralizedKdV} with eq.~\eqref{eq:Rk_u0} and the string equation~\eqref{eq:stringeq} imply for the genus zero contribution the partial differential equations
\begin{equation} \label{eq:u0diffeq}
\partial_k u_0 = \frac{\partial_0 u_0^{k+1}}{(k+1)!} \ , \qquad \partial_0 u_0 = 1 + \sum_{k=1}^{+\infty} t_k \,\partial_{k-1} u_0 \ .
\end{equation}
Defining the series
\begin{equation} \label{eq:DefI}
I_n(u_0,\{ t_k \}) = \sum_{k=0}^{+\infty} t_{k+n} \frac{u_0^k}{k!} \quad \text{for} \quad n=0,1,2,\ldots \ ,
\end{equation}
and using the partial differential equations~\eqref{eq:u0diffeq}, Itzykson and Zuber show for the genus zero part $u_0(\{ t_k \})$ of the tau function~$u(\{t_k\})$ the remarkable functional relation \cite{Itzykson:1992ya}
\begin{equation}
u_0 - I_0(u_0,\{t_k\}) = 0 \ .
\end{equation}
With the ansatz $u_0(\{t_k\}) = \sum_{N=0}^{+\infty} \sum_{{\sum n_k = N}} u_{0,\{n_k\}} t_0^{1-N+\sum k n_k} \left(t_1^{n_1} t_2^{n_2} \cdots\right)$ summed over non-negative integral sets $\{n_k\}$, one readily determines order-by-order the formal expansion in the coupling parameters $\{t_k\}$
\begin{equation} \label{eq:u0exp}
u_0(\{t_k\}) = t_0 + t_0 t_1 + \left( t_0 t_1^2 + \frac12 t_0^2 t_2 \right) +\left( t_0 t_1^3 + \frac32 t_0^2 t_1 t_2 + \frac16 t_0^3 t_3 \right) + \ldots \ .
\end{equation}
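The iteration is easily automated; a minimal computer-algebra sketch (ours, truncating to the couplings $t_0,\ldots,t_3$) reproduces the terms displayed above.
\begin{verbatim}
# Sketch: solve u0 = I0(u0,{t_k}) by fixed-point iteration with
# couplings truncated to t0..t3; each iteration adds one order.
import sympy as sp

t = sp.symbols('t0:4')

def I0(u):
    return t[0] + sum(t[k] * u**k / sp.factorial(k)
                      for k in range(1, 4))

u0 = sp.Integer(0)
for _ in range(4):
    u0 = sp.expand(I0(u0))

# keep monomials of total degree <= 3 in the couplings
low = sum(m for m in u0.as_ordered_terms()
          if sum(sp.degree(m, v) for v in t) <= 3)
print(low)   # t0 + t0*t1 + t0*t1**2 + t0**2*t2/2 + t0*t1**3
             #    + 3*t0**2*t1*t2/2 + t0**3*t3/6
\end{verbatim}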
Imposing the correct boundary conditions, the function $u_0$ integrates to \cite{Itzykson:1992ya}
\begin{equation} \label{eq:F0}
F_0(u_0,\{t_k\}) = \frac{u_0^3}{3!} - \sum_{k=0}^{+\infty} t_k \frac{u_0^{k+2}}{(k+2)k!} + \frac12 \sum_{k=0}^{+\infty} \frac{u_0^{k+1}}{k+1}
\sum_{n=0}^k \frac{t_n t_{k-n}}{n!(k-n)!} \ .
\end{equation}
Furthermore, observing that the functions~\eqref{eq:DefI} obey the differential identities
\begin{equation}
\partial_0 I_0 = \frac{1}{1-I_1} \ , \qquad \partial_0 I_k = \frac{I_{k+1}}{1-I_1} \quad \text{for} \quad k\ge 1 \ ,
\end{equation}
Itzykson and Zuber establish that the KdV~hierarchy implies at higher genus the finite non-trivial expansions \cite{Itzykson:1992ya}
\begin{equation} \label{eq:ug}
u_g = (1- I_1)^{g-1} \sum_{\sum_{k=2}^{3g} (k-1) \ell_k =3 g - 1} u_{g,\{\ell_k\}} \left( \frac{I_2}{(1-I_1)^2} \right)^{\ell_2} \cdot \ldots \cdot \left( \frac{I_{3g}}{(1-I_1)^{3g}} \right)^{\ell_{3g}} \ .
\end{equation}
Inserting this ansatz into the KdV hierarchy~\eqref{eq:GeneralizedKdV} (recursively in the genus) determines unambiguously the numerical coefficients $u_{g,\{\ell_k\}}$; for instance, up to genus $g=2$ we arrive at
\begin{align}
u_1 &= \frac1{12} \left( \frac{I_2}{(1-I_1)^2} \right)^2 + \frac1{24} \frac{I_3}{(1-I_1)^3} \ , \label{eq:u1} \\
u_2 &= (1-I_1) \left(\frac{49 I_2^5}{288 (1-I_1)^{10}}+\frac{11 I_3I_2^3}{36 (1-I_1)^9}
+\frac{7 I_4 I_2^2}{96(1-I_1)^8}+\frac{109 I_3^2 I_2}{1152 (1-I_1)^8}\right.\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad+\left.\frac{I_5 I_2}{90 (1-I_1)^7}+\frac{17 I_3 I_4}{960 (1-I_1)^7}+\frac{I_6}{1152 (1-I_1)^6}\right) \ . \label{eq:u2}
\end{align}
At genus one $u_1(\{t_k\})$ integrates to
\begin{equation} \label{eq:F1def}
F_1 = -\frac{1}{24}\log(1 - I_1) \ .
\end{equation}
The generating functions $F_g$ for $g>1$ enjoy yet again an expansion of the form \cite{Itzykson:1992ya}
\begin{equation} \label{eq:Fgstructure}
F_g = (1- I_1)^{g-1} \sum_{\sum_{k=2}^{3g-2} (k-1) \ell_k =3 g - 3} f_{g,\{\ell_k\}} \left( \frac{I_2}{(1-I_1)^2} \right)^{\ell_2} \cdot \ldots \cdot \left( \frac{I_{3g-2}}{(1-I_1)^{3g-2}} \right)^{\ell_{3g-2}} \ ,
\end{equation}
in terms of the finitely many coefficients $f_{g,\{\ell_k\}}$ (with the subscript $\{\ell_k\}=\{\ell_2,\ell_3,\ldots\}$). In particular, with eq.~\eqref{eq:u2} we find for $g=2$ the numerical coefficients
\begin{equation} \label{eq:CoeffF2}
f_{2,{\{3\}}}=\frac{7}{1440}\ , \quad f_{2,{\{1,1\}}}=\frac{29}{5760}\ , \quad f_{2,{\{0,0,1\}}}=\frac{1}{1152} \ ,
\end{equation}
and we arrive at
\begin{equation}
F_2 = \frac7{1440} \frac{I_2^3}{(1-I_1)^5}+\frac{29}{5760}\frac{I_2\,I_3}{(1-I_1)^4}+\frac1{1152} \frac{I_4}{(1-I_1)^3} \ .
\end{equation}
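As a consistency check of the quoted coefficients, the relation $u_g=\partial_0^2 F_g$ can be verified symbolically at genus one and two, using only the derivative rule $\partial_0 I_k = I_{k+1}/(1-I_1)$ from above. A minimal sketch (again assuming sympy, with the $I_n$ treated as independent symbols):
\begin{verbatim}
import sympy as sp

I1, I2, I3, I4, I5, I6, I7 = sp.symbols('I1:8')
succ = {I1: I2, I2: I3, I3: I4, I4: I5, I5: I6, I6: I7}

def d0(expr):
    # chain rule with the derivative rule d_0 I_k = I_{k+1}/(1 - I_1)
    return sum(sp.diff(expr, Ik) * In / (1 - I1) for Ik, In in succ.items())

a = 1/(1 - I1)
F1 = -sp.log(1 - I1)/24
F2 = (sp.Rational(7,1440)*I2**3*a**5 + sp.Rational(29,5760)*I2*I3*a**4
      + sp.Rational(1,1152)*I4*a**3)
u1 = I2**2*a**4/12 + I3*a**3/24
u2 = (1 - I1)*(sp.Rational(49,288)*I2**5*a**10 + sp.Rational(11,36)*I3*I2**3*a**9
      + sp.Rational(7,96)*I4*I2**2*a**8 + sp.Rational(109,1152)*I3**2*I2*a**8
      + sp.Rational(1,90)*I5*I2*a**7 + sp.Rational(17,960)*I3*I4*a**7
      + sp.Rational(1,1152)*I6*a**6)

print(sp.simplify(d0(d0(F1)) - u1), sp.simplify(d0(d0(F2)) - u2))  # 0 0
\end{verbatim}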
Thus, the strategy of Itzykson and Zuber --- expressing the tau function $u(\{t_k\})$ and hence the generating function $F(\{t_k\})$ in terms of the functions $I_n(u_0,\{t_k\})$ --- offers a very powerful method to compute the generating function $F(\{t_k\})$ order-by-order as a genus expansion \cite{Itzykson:1992ya}. Upon inserting the expression~\eqref{eq:u0exp} to the desired order, one can readily read off the correlators of topological gravity explicitly.
Solving the KdV hierarchy in terms of the functions $I_n$ allows us to derive a universal expression for the off-shell partition functions~\eqref{eq:offshellPart} with arbitrary shifts $\{ \Delta t_k \}$ in the coupling parameters $\{ t_k \}$. The defined off-shell partition functions~\eqref{eq:offshellPart} are derived from the generating function $F(F_0,\{ I_n \}) = F_0 + \sum_{g=1}^{+\infty} g_s^{2g} F_g(\{ I_n \})$, which --- if expressed in terms of $F_0(u_0(\{t_k\}),\{t_k\})$ and $I_n(u_0(\{t_k\}),\{t_k\})$, $n=1,2,3,\ldots$ --- depends on the couplings $\{t_k\}$ only implicitly. Computing the action of the boundary creation operators~\eqref{eq:dshifts} on the function $F_0$ yields
\begin{equation} \label{eq:BbetaF0}
\begin{aligned}
\mathcal{B}(\beta) F_0 &= \frac{g_s}{\sqrt{2\pi}\beta^\frac32} \left( e^{\beta I_0}\left(1 - \beta I_0\right)-1 +\sum_{k,\ell=0}^{+\infty} \frac{I_0^{k+\ell+1}}{k+\ell+1} \frac{\beta^{k+2}}{k!} \frac{t_\ell}{\ell!}\right) \ , \\
\mathcal{B}(\beta_1)\mathcal{B}(\beta_2) F_0 &= \frac{g_s^2 \sqrt{\beta_1\beta_2}}{2\pi \beta_1 +2 \pi \beta_2} \left( e^{(\beta_1+\beta_2)I_0} - 1 \right) \ ,
\end{aligned}
\end{equation}
whereas for $I_n$ we find
\begin{equation}
\mathcal{B}(\beta) I_0 = g_s\sqrt{\frac{\beta}{2\pi}} \frac{e^{\beta I_0}}{1-I_1} \ ,\qquad
\mathcal{B}(\beta) I_k = g_s\sqrt{\frac{\beta}{2\pi}} e^{\beta I_0} \left(\beta^k + \frac{I_{k+1}}{(1-I_1)} \right) \quad \text{for} \quad k\ge 1 \ .
\end{equation}
As a consequence of these derivative rules --- except for the leading genus zero contribution to the partition function with one asymptotic boundary --- the off-shell partition functions~\eqref{eq:offshellPart} are universally expressible in terms of the functions $I_n$, i.e.\
\begin{equation} \label{eq:Zuni}
\begin{aligned}
Z(\{\mathcal{B}(\beta) F_0,I_n\}; \beta)^{\text{top.}} & =\frac{1}{g_s^2} \mathcal{B}(\beta) F(\{ t_k \}) = \frac{1}{g_s^2} \mathcal{B}(\beta) F_0 +Z^{(g>0)}(\{I_n\}; \beta)^{\text{top.}} \ , \\
Z(\{I_n \}; \beta_1,\ldots,\beta_m )^\text{top.} &= \frac{1}{g_s^2} \mathcal{B}(\beta_1) \cdots \mathcal{B}(\beta_m) F(\{ t_k \}) \quad \text{for} \quad m>1 \ .
\end{aligned}
\end{equation}
In particular, the partition function with a single asymptotic boundary component enjoys the genus expansion
\begin{equation} \label{eq:Ztopgeneral}
Z(\{\mathcal{B}(\beta) F_0,I_n\}; \beta)^{\text{top.}} =
\frac{1}{g_s^2} \mathcal{B}(\beta) F_0 + \sqrt{\frac\beta{2\pi}} e^{\beta I_0}
\sum_{g=1}^{+\infty} g_s^{2g-1} (1-I_1)^{g-1} Z_g(\{ I_n \},\beta) \ ,
\end{equation}
where
\begin{equation} \label{eq:Zg1}
Z_1 = \frac{1}{24}\left(\frac\beta{1-I_1} + \frac{I_2}{(1-I_1)^2}\right) \ ,
\end{equation}
and for $g>1$
\begin{multline} \label{eq:Zguni}
Z_g
=\!\!\!\!\!\!\!\!\! \sum_{\sum_{k=2}^{3g-2} (k-1) \ell_k =3 g - 3} \!\!\!\!\!\!\!\!\!\!\!\!\! f_{g,\{\ell_k\}}
\sum_{s=2}^{3g-2} \ell_s \left(\frac{1+2s}{3(1-I_1)} \left( \beta + \frac{I_2}{1-I_1} \right)
+\frac{I_{s+1}}{I_s(1-I_1)} + \frac{\beta^s}{I_s} \right) \\
\cdot\left(\frac{I_2}{(1-I_1)^2} \right)^{\ell_2}\cdot\ldots \cdot \left( \frac{I_{3g-2}}{(1-I_1)^{3g-2}} \right)^{\ell_{3g-2}} ,
\end{multline}
in terms of the constants $f_{g,\{\ell_k\}}$ defined in eq.~\eqref{eq:Fgstructure}. With eq.~\eqref{eq:Zg1} and upon inserting the coefficients~\eqref{eq:CoeffF2} into $Z_2$, we find explicitly up to genus two
\begin{multline} \label{eq:Zgupto2}
Z^{(g>0)}(\{I_n\}; \beta)^{\text{top.}} =
\frac{g_s}{24} \sqrt{\frac{\beta}{2\pi}} e^{\beta I_0} \left( \frac\beta{1-I_1} + \frac{I_2}{(1-I_1)^2} \right) \\
+ \frac{g_s^3}{5760} \sqrt{\frac{\beta}{2\pi}} e^{\beta I_0}
\left(
\frac{5\beta^4}{(1-I_1)^4}
+\frac{29\beta^3 I_2+29\beta^2 I_3 + 15 \beta I_4+5I_5}{(1-I_1)^5} \hskip20ex \right.\\
\left. +\frac{84\beta^2 I_2^2+116\beta I_3 I_2 +44 I_4 I_2+29I_3^2}{(1-I_1)^6}
+\frac{20 I_2^2 (7\beta I_2 + 10 I_3)}{(1-I_1)^7}
+\frac{140 I_2^4}{(1-I_1)^8}
\right) \\
+ \ldots \ .
\end{multline}
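As a further internal cross-check, the genus one term of eq.~\eqref{eq:Zgupto2} also follows from acting with $\mathcal{B}(\beta)$ on $F_1$ of eq.~\eqref{eq:F1def} via the derivative rules stated above; a short sympy sketch:
\begin{verbatim}
import sympy as sp

beta, gs, I0, I1, I2 = sp.symbols('beta g_s I0 I1 I2')

pref = gs * sp.sqrt(beta/(2*sp.pi)) * sp.exp(beta*I0)
BI1 = pref * (beta + I2/(1 - I1))     # B(beta) I_1 from the derivative rules

F1 = -sp.log(1 - I1)/24
BF1 = sp.diff(F1, I1) * BI1           # chain rule: F_1 depends only on I_1

target = pref/24 * (beta/(1 - I1) + I2/(1 - I1)**2)
print(sp.simplify(BF1 - target))      # 0, the order g_s term of eq. (Zgupto2)
\end{verbatim}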
Similar formulas can be worked out for the universal partition functions with several asymptotic boundary components, namely \begin{equation}
Z(\{I_n \}; \beta_1,\ldots,\beta_m )^\text{top.}=
\prod_{i=1}^{m}\left(e^{\beta_iI_0}\sqrt{\frac{\beta_i}{2\pi}}\right)
\sum_{g=0}^{\infty}g_s^{2g+m-2}(1-I_1)^{g-1}Z_g(\{I_n\},\beta_1,\ldots,\beta_m) \ ,
\end{equation}
where
\begin{equation}
\mathcal{B}(\beta_1)\cdots\mathcal{B}(\beta_m) F_g=\frac{g_s^m\sqrt{\beta_1\cdots\beta_m}}{(2\pi)^\frac{m}2} e^{(\beta_1+\ldots+\beta_m)I_0}(1-I_1)^{g-1}Z_g(\{I_n\},\beta_1,\ldots,\beta_m) \ .
\end{equation}
In particular, for two asymptotic boundary components the leading order contributions are given by
\begin{multline}
Z(\{I_n \}; \beta_1,\beta_2) = \frac{ \sqrt{\beta_1\beta_2}}{2\pi \beta_1 +2 \pi \beta_2} e^{(\beta_1+\beta_2)I_0} \\
+
\frac{ g_s^2 \sqrt{\beta_1\beta_2}}{48\pi}e^{(\beta_1+\beta_2)I_0}
\left( \frac{\beta_1^2+\beta_1\beta_2+\beta_2^2}{(1-I_1)^2}
+\frac{2(\beta_1+\beta_2)I_2+I_3}{(1-I_1)^3}
+\frac{2I_2^2}{(1-I_1)^4} \right) + \ldots \ ,
\end{multline}
including the semi-classical contribution, cf.\ eq.~\eqref{eq:Zfuncs}. Thus, any of the off-shell or on-shell partition functions defined in eqs.~\eqref{eq:offshellPart} and \eqref{eq:onshellPart} can be obtained from the universal partition functions~\eqref{eq:Zuni} upon inserting $I_n(u_0(\{ t_k \}),\{ t_k \})$ with suitable values for the couplings $\{ t_k \}$. For instance, inserting $I_n(u_0( \{ t_k + \gamma_k +\sdef_k \}),\{ t_k + \gamma_k +\sdef_k \})$ we obtain the off-shell partition functions $Z^H(\{t_k\}; \beta_1,\ldots,\beta_m)$, whereas for $I_n(u_0( \{ \gamma_k +\sdef_k \}),\{ \gamma_k +\sdef_k \})$ we arrive at the on-shell partition functions $Z(\beta_1,\ldots,\beta_m)$. In the next section, we focus on the partition functions $Z(t_0,t_1;\beta_1,\ldots,\beta_m)$ studied in refs.~\cite{OkuyamaSakai1,OkuyamaSakai2}, where we assign on-shell values to the couplings $t_k$, $k=2,3,4,\ldots$, while keeping the first two couplings $t_0$ and $t_1$ off-shell \cite{ZografLargeGenusAsymptotics}.
While the presented genus expansion in the coupling $g_s \sim e^{-1/G_N}$ is non-perturbative in the gravitational coupling $G_N$ of JT gravity, it is perturbative in the dual matrix model formulation, where the expansion parameter $g_s$ describes quantum fluctuations about the classical energy density of states \cite{SSS,CJ1}. In fact, the discussed partition functions $Z(\{ I_n \}; \beta_1,\ldots,\beta_m)$ are divergent series in $g_s$ due to the factorial growth $(2g)!$ of the contributions at order $g_s^{2g}$ \cite{ZografLargeGenusAsymptotics,SSS}. Therefore, the partition functions $Z(\{ I_n \}; \beta_1,\ldots,\beta_m)$ are asymptotic series that require a non-perturbative completion arising from non-perturbative effects of the order $e^{-1/g_s}$. For further details on this issue and the possible emergence of non-perturbative instabilities, we refer the reader to refs.~\cite{SSS,CJ1} and the solutions proposed in refs.~\cite{CJ1,CJ2,CJ3}.
\subsection{Partition Functions with Leading Order Off-shell Couplings} \label{sec:TwoOffShell}
In the spirit of refs.~\cite{OkuyamaSakai1,OkuyamaSakai2} let us now consider the partition functions $Z(t_0,t_1;\beta_1,\ldots,\beta_m)$ with only the couplings $t_0$ and $t_1$ taken to be off-shell. Then the partition functions for JT~gravity coupled to a gas of defects are defined as
\begin{equation}
Z(t_0,t_1;\beta_1,\ldots,\beta_m) \equiv Z(\{t_0,t_1,t_{k\ge2} = \gamma_k + \sdef_k \};\beta_1,\ldots,\beta_m) \ ,
\end{equation}
where setting $t_0=\sdef_0$ and $t_1=\sdef_1$ yields the on-shell partition functions in all couplings. Analogously, we can define the function $u(t_0,t_1)$ and the generating function $F(t_0,t_1)$ obtained by evaluating the couplings $t_{k\ge 2}$ of the tau function $u(\{t_k\})$ and of the generating function $F(\{t_k\})$ at their on-shell values, i.e.\
\begin{equation}
u(t_0,t_1) = u(\{t_0,t_1,t_{k\ge2} = \gamma_k + \sdef_k \}) \ , \qquad
F(t_0,t_1) = F(\{t_0,t_1,t_{k\ge2} = \gamma_k + \sdef_k \}) \ ,
\end{equation}
with
\begin{equation}
u(t_0,t_1)=\partial_0^2F(t_0,t_1) \ .
\end{equation}
All these functions can respectively be obtained from their universal expressions~\eqref{eq:Zuni}, \eqref{eq:ug}, and \eqref{eq:Fgstructure} by inserting the on-shell values of the couplings $t_{k\ge 2}$ into the functions $I_n$. The function $u(t_0,t_1)$ fulfils the first partial differential equation of the KdV hierarchy~\eqref{eq:GeneralizedKdV}, which is just the non-linear KdV~equation itself, i.e.,
\begin{equation} \label{eq:KdVeq}
\partial_{1}u = u \, \partial_{0}u +\frac{g_s^2}{12} \, \partial_{0}^3 u \ .
\end{equation}
With $t_0$ and $t_1$ off-shell we observe that the function $I_1$ depends only on $t_1$ and $u_0\equiv I_0$, while $I_n$ for $n\ge 2$ are series in $u_0$ without an explicit dependence on $t_0$ and $t_1$. Therefore, it is convenient to introduce new (formal) variables $(y,t)$ given by \cite{ZografLargeGenusAsymptotics,OkuyamaSakai1}
\begin{equation} \label{eq:yt}
y=u_0 \ , \qquad t=1-I_1 \ .
\end{equation}
Since the functions $I_n$ for $n\ge 2$ depend only on $y$, we obtain from the universal tau function~\eqref{eq:ug} and the universal generating function~\eqref{eq:Fgstructure} the asymptotic series
\begin{equation}\label{eq:ugenusexpansion}
u(y,t)=y+\sum_{g=1}^{\infty}g_s^{2g} u_g(y,t)\ , \qquad
u_g(y,t)=\sum_{k=2g+1}^{5g-1}u_{g,k}(y)t^{-k} \ ,
\end{equation}
and
\begin{equation}\label{eq:Fgenusexpansion}
F(y,t)=F_0(y,t) - \frac{g_s^2}{24}\log t+ \sum_{g=2}^{\infty}g_s^{2 g}F_g(y,t) \ , \qquad
F_g(y,t)=\sum_{k=2g-1}^{5g-5}F_{g,k}(y)t^{-k}\ .
\end{equation}
The coefficient functions $u_g(y,t)$ (for $g\ge 1$) and $F_g(y,t)$ (for $g\ge 2$) are Laurent polynomials in the variable $t$, where the range for the powers of $t$ is a consequence of the restricted sums in eqs.~\eqref{eq:ug} and \eqref{eq:Fgstructure}. The degrees of these Laurent polynomials conform with the structure derived by Zograf for the specific on-shell couplings $t_k = \gamma_k$ for $k\ge 2$ \cite{ZografLargeGenusAsymptotics}. Furthermore, at genus one the logarithmic contribution to $F(y,t)$ arises from eq.~\eqref{eq:F1def}, whereas with eq.~\eqref{eq:F0} the genus zero contribution becomes
\begin{multline}
F_0(y,t) = \frac16 y^3 t^2 +\frac16 y^2t \sum_{k=2}^{+\infty} \frac{y^k(2k+5)(\gamma_k+\sdef_k)}{(k+2)(k+1)(k-2)!}
+ \frac16y \left( \sum_{k=2}^{+\infty} \frac{y^k(\gamma_k+\sdef_k)}{(k+1)(k-2)!}\right)^2 \\
+\sum_{k=4}^{+\infty} \frac{y^{k+1}}{3(k+1)(k+2)!} \sum_{n=2}^{k-2} \binom{k+4}{n+2}\binom{n}2\binom{k-n}2(\gamma_n+\sdef_n)(\gamma_{k-n}+\sdef_{k-n}) \ .
\end{multline}
Let us now turn to the partition function $Z(t_0,t_1;\beta)$ with a single asymptotic boundary. Since the couplings $t_{k\ge2}$ are taken on-shell, we cannot obtain $Z(t_0,t_1;\beta)$ by acting with the boundary creation operator $\mathcal{B}(\beta)$ on the generating function $F(t_0,t_1)$, because the boundary operator $\mathcal{B}(\beta)$ contains derivatives with respect to those parameters that have been fixed to their on-shell values. Thus, either we compute $Z(t_0,t_1;\beta)$ from the universal partition function~\eqref{eq:Zuni} or we determine a differential equation with $Z(t_0,t_1;\beta)$ as its solution. For the latter approach we follow the authors of ref.~\cite{OkuyamaSakai1}. Note that the partial derivatives $\partial_k$ for $k\ge 2$ appearing in the boundary operator $\mathcal{B}(\beta)$ can be rewritten in terms of the derivative $\partial_0$ due to the KdV hierarchy~\eqref{eq:GeneralizedKdV}, namely
\begin{equation} \label{eq:ZWrel}
\partial_0 Z(t_0,t_1;\beta) = \left.\frac1{g_s^2} \mathcal{B}(\beta) \partial_0 F(\{t_k\}) \right|_{\{t_{k\ge 2} = \gamma_k +\sdef_k \}}
= -\frac1{g_s\sqrt{2\pi\beta}}+W(t_0,t_1;\beta) \ ,
\end{equation}
with the definition
\begin{equation}
W(t_0,t_1;\beta) = \frac{1}{g_s\sqrt{2\pi\beta}} \sum_{\ell=0}^{+\infty} \beta^\ell \mathcal{R}_{\ell} \ ,
\end{equation}
in terms of the Gelfand--Dikii polynomials~\eqref{eq:GD} and $\mathcal{R}_0=1$. The key observation of ref.~\cite{OkuyamaSakai1} is now that the Gelfand--Dikii polynomials obey the non-trivial relation\footnote{This relation can be proven directly by induction with respect to the index $k$ of the Gelfand--Dikii polynomials $\mathcal{R}_k$. The induction step is performed by applying the recursion relation~\eqref{eq:generalKdvequation} of the Gelfand--Dikii polynomials.}
\begin{equation}
\partial_1 \mathcal{R}_k = u \, \partial_0\mathcal{R}_k + \frac{g_s^2}{12} \partial_0^3\mathcal{R}_k \ ,
\end{equation}
which immediately implies the differential equation
\begin{equation}
\partial_1 W(t_0,t_1;\beta) = u \, \partial_0W(t_0,t_1;\beta) + \frac{g_s^2}{12} \partial_0^3W(t_0,t_1;\beta) \ .
\end{equation}
The partition function $Z(t_0,t_1;\beta)$ can now be determined from this differential equation for $W(t_0,t_1;\beta)$. The function $W(t_0,t_1;\beta)$ is an interesting quantity by itself, see for instance the discussion in ref.~\cite{OkuyamaSakai1}.
Upon expressing the couplings $(t_0,t_1)$ in terms of the variables $(y,t)$ defined in eq.~\eqref{eq:yt}, the function $W(y,t;\beta)$ enjoys the asymptotic genus expansion
\begin{equation} \label{eq:Wgenusxpansion}
W(y,t;\,\beta)=\frac{e^{\beta y}}{\sqrt{2 \pi \beta}}\sum_{g=0}^{+\infty} g_s^{2 g-1}\,W_g(y,t;\beta)\ ,
\end{equation}
where --- due to the definition~$\mathcal{R}_0=1$ and due to the leading order behaviour~\eqref{eq:Rk_u0} of the Gelfand--Dikii polynomials --- the genus zero contribution reads
\begin{equation} \label{eq:Wg0}
W_0(y,t;\beta) = 1 \ .
\end{equation}
By inserting the variables~\eqref{eq:yt} into the $t_0$-derivative of the universal expressions~\eqref{eq:Zguni}, we find that the higher genus contributions $W_g(y,t;\beta)$ are polynomials in $t^{-1}$ with coefficient functions in terms of $y$ and $\beta$ of the form
\begin{equation}\label{eq:WLaurentpolynomial}
W_g(y,t;\beta)=\sum_{k= 2 g}^{5g-1}W_{g,k}(y;\beta) \, t^{-k} \quad \text{for} \quad g \ge 1 \ .
\end{equation}
Inserting the asymptotic expansion~\eqref{eq:Wgenusxpansion} into the above partial differential equation for $W(t_0,t_1;\beta)$ yields the differential recursion relation \cite{OkuyamaSakai1}
\begin{equation} \label{eq:DiffRec}
\partial_t W_g = - \sum_{h=0}^{g-1} u_{g-h} \nabla(\beta) W_h - \frac1{12} \nabla(\beta)^3 W_{g-1} \ ,
\end{equation}
with the linear differential operators
\begin{equation}
\nabla(\beta)
= \partial_0 + \frac\beta{t}
= \frac1t \left( -I_2 \partial_t + D_y \right)\ , \qquad
D_y =\partial_y + \beta \ .
\end{equation}
Furthermore, inserting the expansion~\eqref{eq:WLaurentpolynomial} into the differential recursion relation and carrying out a few steps of algebra yields recursion relations for the Laurent modes $W_{g,k}(y;\beta)$. With the initial genus zero contribution~\eqref{eq:Wg0} we arrive for genus $g=1$ at\footnote{Note that the polynomial structure~\eqref{eq:WLaurentpolynomial} of $W_g(y,t;\beta)$ fixes the constant of integration in the differential recursion relation~\eqref{eq:DiffRec} with respect to $t$ unambiguously.}
\begin{equation}
W_{1,k} = \frac{\beta}k u_{1,k} + \frac{\beta^3}{24}\delta_{k,2} + \frac1{36} (3 I_2 \beta^2 + I_3 \beta)\delta_{k,3} + \frac\beta{16} I_2^2\delta_{k,4} \quad
\text{for} \quad k=2,3,4 \ ,
\end{equation}
which explicitly becomes with eq.~\eqref{eq:u1}
\begin{equation}
W_1(y,t;\beta) = \frac{\beta^3}{24t^2} + \frac\beta{24t^3} \left( 2\beta I_2 + I_3 \right) + \frac\beta{12t^4} I_2^2 \ .
\end{equation}
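The displayed genus one result can be checked directly against the differential recursion~\eqref{eq:DiffRec}, using that $\partial_y I_n = I_{n+1}$, which follows from the definition~\eqref{eq:DefI} at fixed couplings $t_{k\ge2}$. A minimal sympy sketch:
\begin{verbatim}
import sympy as sp

y, t, beta = sp.symbols('y t beta')
I2, I3, I4 = sp.Function('I2')(y), sp.Function('I3')(y), sp.Function('I4')(y)
rules = {sp.Derivative(I2, y): I3, sp.Derivative(I3, y): I4}

def nabla(expr):
    # nabla(beta) = (1/t)(-I_2 d_t + d_y + beta), with d_y I_n = I_{n+1}
    return (-I2*sp.diff(expr, t) + sp.diff(expr, y).subs(rules) + beta*expr)/t

W0 = sp.Integer(1)
u1 = I2**2/(12*t**4) + I3/(24*t**3)
W1 = beta**3/(24*t**2) + beta*(2*beta*I2 + I3)/(24*t**3) + beta*I2**2/(12*t**4)

lhs = sp.diff(W1, t)
rhs = -u1*nabla(W0) - nabla(nabla(nabla(W0)))/12
print(sp.simplify(lhs - rhs))   # 0
\end{verbatim}
The analogous check at genus two proceeds with $u_2$ and the expression for $W_2$ given below.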
Furthermore, for $g \ge 2$ and $k=2g+1,\ldots, 5g-1$ we arrive at the lengthy but straightforwardly applicable recursion relation
\begin{multline}
W_{g,k} =
\sum_{h=1}^{g-1} \sum_{n=2h}^{5h-1}\left( \frac{n}{k} I_2 u_{g-h,k-n-1} W_{h,n} + \frac1k u_{g-h,k-n} D_y W_{h,n}\right)
+\frac\beta{k} u_{g,k} \\
+\frac1{12k} \Big[D_y^3 W_{g-1,k-2}
+\left(3(k-2) I_2 D_y^2 + (3k-8)I_3 D_y + (k-3) I_4\right) W_{g-1,k-3}\\
+\left(3(k^2-5k +5) I_2^2 D_y + (k-4)(3k-5) I_2 I_3 \right) W_{g-1,k-4}\\
+(k-5)(k-3)(k-1) I_2^3 W_{g-1,k-5} \Big] \ ,
\end{multline}
where we set $W_{h,n} \equiv 0$ for $n\not\in\{ 2h,\ldots,5h-1 \}$ and $u_{h,n} \equiv 0$ for $n\not\in\{2h+1,\ldots,5h-1\}$. In particular, for genus two we readily compute
\begin{multline}
W_2(y,t;\beta) =
\frac{\beta}{5760} \left(
\frac{5\beta^5}{t^4}
+\frac{44\beta^4 I_2+58\beta^3 I_3+44\beta^2 I_4+20\beta I_5+5 I_6}{t^5} \right. \\
+\frac{200\beta^3 I_2^2+400\beta^2 I_2 I_3+145\beta I_3^2+220\beta I_2I_4
+102 I_3 I_4+64 I_2 I_5}{t^6}\\
+\frac{5I_2 \left(112\beta^2 I_2^2+240\beta I_3 I_2+84 I_4 I_2+109 I_3^2\right)}{t^7}
\left.+\frac{20 I_2^3 \left(49\beta I_2+88 I_3\right)}{t^8}
+\frac{980I_2^5}{t^9} \right)
\ .
\end{multline}
With the help of these recursion formulas we are now in a position to deduce the partition function $Z(y,t;\beta)$ with one asymptotic boundary component as well. The general structure~\eqref{eq:Zguni} implies for the partition function the asymptotic series\footnote{Note that the newly introduced contributions $\tilde Z_g$ to the partition function differ from the definition of $Z_g$ given in eq.~\eqref{eq:Ztopgeneral} by a normalisation.}
\begin{equation}
Z(y,t;\beta) = \sum_{g=0}^{+\infty} g_s^{2g-1} \tilde Z_g(y,t;\beta) \ .
\end{equation}
The genus zero part splits into the semi-classical and topological contributions
\begin{equation}
\tilde Z_0(y,t;\beta) = \tilde Z_0(y,t;\beta)^\text{semi.}
+ \tilde Z_0(y,t;\beta)^\text{top.} \ ,
\end{equation}
where --- using eqs.~\eqref{eq:dshifts} and \eqref{eq:Zfuncs}--- the semi-classical part is given by
\begin{multline}
\tilde Z_0(y,t;\beta)^\text{semi.} = \frac{t(1+y\beta)}{\sqrt{2\pi} \beta^{\frac32}}
+ \frac1{\sqrt{2\pi\beta}} \sum_{k=2}^{+\infty} \frac{y^k (\gamma_k +\delta_k)}{k(k-2)!} \\
+\frac1{\sqrt{2\pi}\beta^{\frac32}} \sum_{k=2}^{+\infty} \frac{y^{k-1}(\gamma_k +\delta_k)}{(k-1)!}
+\frac1{\sqrt{2\pi\beta}} \sum_{k=2}^{+\infty} \frac{\delta_k + \gamma_k}{(-\beta)^k} \ ,
\end{multline}
and where --- according to eq.~\eqref{eq:BbetaF0} --- the topological part reads
\begin{multline}
\tilde Z_0(y,t;\beta)^\text{top.} = \frac{t}{\sqrt{2 \pi} \beta^{\frac32}} \left(e^{\beta y} - (1+y\beta) \right) - \frac{e^{\beta y}}{\sqrt{2\pi\beta}}\sum_{k=2}^{+\infty} \frac{y^k (\gamma_k +\sdef_k)}{k!}
-\frac1{\sqrt{2\pi\beta}} \sum_{k=2}^{+\infty} \frac{y^k (\gamma_k +\sdef_k)}{k(k-2)!} \\
+\frac{e^{\beta y}-1}{\sqrt{2 \pi} \beta^{\frac32}} \sum_{k=2}^{+\infty} \frac{y^{k-1} (\gamma_k +\sdef_k)}{(k-1)!}
+\frac1{\sqrt{2 \pi} \beta^{\frac32}} \sum_{k=0}^{+\infty} \sum_{\ell=2}^{+\infty} \frac{y^{k+\ell+1}\beta^{k+2}}{(k+\ell+1)!} \binom{k+\ell}{k} (\gamma_\ell+\sdef_\ell) \ .
\end{multline}
Therefore, the total genus zero contribution becomes
\begin{multline} \label{eq:Z0all}
\tilde Z_0(y,t;\beta) = \frac{e^{\beta y}}{\sqrt{2 \pi} \beta^{\frac32}} \left( t - \beta\sum_{k=2}^{+\infty} \frac{y^k (\gamma_k +\sdef_k)}{k!}
+\sum_{k=2}^{+\infty} \frac{y^{k-1} (\gamma_k +\sdef_k)}{(k-1)!} \right) \\
+\frac1{\sqrt{2\pi} \beta^{\frac32}}
\sum_{k=0}^{+\infty} \sum_{\ell=2}^{+\infty} \frac{y^{k+\ell+1}\beta^{k+2}}{(k+\ell+1)!} \binom{k+\ell}{k} (\gamma_\ell+\sdef_\ell)
+\frac{1}{\sqrt{2\pi\beta}}\sum_{k=2}^{+\infty} \frac{\delta_k + \gamma_k}{(-\beta)^k} \ .
\end{multline}
For the higher genus contributions we arrive with eq.~\eqref{eq:Zguni} at the polynomials in $t^{-1}$
\begin{equation} \label{eq:ZgPoly}
\tilde Z_g(y,t;\beta) = \frac{e^{\beta y}}{\sqrt{2 \pi} \beta^{\frac32}} \sum_{k=2g-1}^{5g-3} Z_{g,k}(y;\beta) t^{-k} \quad \text{for} \quad g \ge 1 \ .
\end{equation}
Thus, employing the derived recursion relations for $W_{g,k}(y;\beta)$ we can determine $\tilde Z_g(y,t;\beta)$ recursively upon integrating eq.~\eqref{eq:ZWrel}. Note that the constants of integration at each order in $g_s$ are unambiguously determined by the general structure~\eqref{eq:ZgPoly}. Explicitly, we find for genus one --- in agreement with eq.~\eqref{eq:Zg1} --- the result
\begin{equation} \label{eq:Z1onshell}
\tilde Z_1(y,t;\beta) = \frac{e^{\beta y}}{24\sqrt{2\pi\beta}} \left( \frac{\beta^2}t + \frac{\beta I_2}{t^2} \right) \ ,
\end{equation}
whereas for genus two --- in agreement with eq.~\eqref{eq:Zgupto2} --- we obtain
\begin{multline} \label{eq:Z2onshell}
\tilde Z_2(y,t;\beta) =
\frac{\sqrt{\beta} \, e^{\beta y}}{5760\sqrt{2\pi}} \left(
\frac{5\beta^4}{t^3}
+\frac{29\beta^3 I_2+29\beta^2 I_3 + 15 \beta I_4+5 I_5}{t^4} \right.\\
\left. +\frac{84\beta^2 I_2^2+116\beta I_3 I_2 +44 I_4 I_2+29 I_3^2}{t^5}
+\frac{20 I_2^2 (7\beta I_2 + 10 I_3)}{t^6}
+\frac{140 I_2^4}{t^7}
\right) \ .
\end{multline}
Let us give an alternative perspective on the partition function $Z(y,t; \beta)$ in terms of the associated Schr\"odinger problem \cite{Brezin:1990rb,Douglas:1989ve,Gross1}
\begin{equation}
\mathcal{H} \psi_E(t_0,t_1) = E \, \psi_E(t_0,t_1) \quad \text{with} \quad
\mathcal{H} = \hbar^2 \partial_0^2 + u(t_0,t_1) \ ,
\end{equation}
with $\hbar=\frac{g_s}{\sqrt{2}}$, the Hamilton operator $\mathcal{H}$, and the wavefunctions $\psi_E(t_0,t_1)$, which are eigenfunctions with energy eigenvalue~$E$. Here the partially on-shell tau function $u(t_0,t_1)$ becomes the potential of the Schr\"odinger equation, and the partition function can be written as
\begin{equation}
Z(y,t;\beta) = \int dE \, e^{-\beta E} \rho(E;y,t) \ ,
\end{equation}
in terms of the spectral density $\rho(E;y,t)$ of the energy eigenvalues of the Hamilton operator $\mathcal{H}$. This formulation offers a framework for a non-perturbative description in the genus expansion~$g_s$. However, since in our context the tau function $u(t_0,t_1)$ itself is only given as an asymptotic series in $g_s$, setting up the appropriate non-perturbatively exact Schr\"odinger problem is nevertheless a difficult task. This question has been discussed and analysed with numerical methods in refs.~\cite{CJ1,CJ2,CJ3}. Here we only focus on the leading order contribution at genus zero, which predicts the integral representation
\begin{equation}
\tilde Z_0(y,t;\beta) = \int dE \,e^{-\beta E} \rho_0(E;y,t) \ ,
\end{equation}
in terms of the genus zero spectral density $\rho_0(E;y,t)$. To verify this prediction explicitly, we first express the genus zero partition function~\eqref{eq:Z0all} as
\begin{equation}\label{eq:Ztilde0}
\tilde Z_0 = \frac{e^{\beta y}}{\sqrt{2\pi} \beta^{\frac32}} \left( t + J'(y)\right) - \frac{e^{\beta y}}{\sqrt{2\pi\beta}}J(y)
+ \sqrt{\frac{\beta}{2\pi}} \int_{-\infty}^y dv \,e^{v\beta} J(v) \ ,
\end{equation}
in terms of the function
\begin{equation} \label{eq:defJC}
J(y) = \sum_{k=2}^{+\infty} \frac{y^{k} (\gamma_k +\sdef_k)}{k!} \ .
\end{equation}
Here we assume that the function $J(v)$ is continuously differentiable in the interval $(-\infty,y)$, and that the stated integral (for $\beta > 0$) is finite. Performing an integration by parts and using the integral identities
\begin{equation}
\sqrt{\frac\pi\beta}e^{\beta z} = \int_{-z}^{+\infty} dE \frac{e^{-\beta E}}{\sqrt{E+z}} \ , \qquad
\frac{\sqrt{\pi}}{2 \beta^{\frac32}} e^{\beta z} = \int_{-z}^{+\infty} dE e^{-\beta E} \sqrt{E+z} \ ,
\end{equation}
we arrive at the expression
\begin{equation}
\tilde Z_0(y,t;\beta) = \int_{-y}^{+\infty} dE \,e^{-\beta E} \rho_0(E;y,t) \ ,
\end{equation}
in terms of the (genus zero) spectral density
\begin{equation} \label{eq:rho0}
\rho_0(E;y,t) = \frac{\sqrt{2}}{\pi} \sqrt{E+y} \left( t +J'(y) \right)
-\frac1{\sqrt{2}\pi} \int_{-y}^E dv \frac{J'(-v)}{\sqrt{E-v}} \ .
\end{equation}
The obtained result agrees with the expected structure of the partition function derived from the associated Schr\"odinger problem. Note that the function $\rho_0(E;y,t)$ admits an interpretation as a spectral density only if it is non-negative over the energy range $(-y,+\infty)$. The conditions $J'(y)\ge -t$ and $J'(v) \ge 0$ for $v\in (-y,+\infty)$ are sufficient to ensure a non-negative spectral density function (in the genus zero approximation). If these conditions are violated, we seemingly arrive at a negative function $\rho_0(E;y,t)$ for some energy ranges $E$ in the interval~$(-y,+\infty)$. However, as on the classical level a Hawking--Page like first order phase transition can be observed when varying the potential~\eqref{eq:Dilatonpotential} \cite{Witten:2020ert}, it might be expected that here too a phase transition occurs, preventing the aforementioned negativity of the spectral density function. In ref.~\cite{CJ4} this was only observed to be true for a specific class of models for which $U(0)=0$ with $U(\phi)$ as in eq.~\eqref{eq:Dilatonpotential}, while a larger class of models, namely those with $U(0) \neq 0$, are found to be both perturbatively and non-perturbatively unstable.
For energies $E$ close to the negative coupling $-y$ the calculated spectral density $\rho_0(E;y,t)$ behaves as
\begin{equation} \label{eq:GroundState}
\rho_0(E;y,t) = \frac{\sqrt{2}t}{\pi}\sqrt{E+y}+ \mathcal{O}(|E+y|^\frac32) \ .
\end{equation}
Therefore, we can interpret the negative coupling $-y$ as the (semi-classical) ground state energy of the Schr\"odinger problem. In particular, for JT~gravity in the absence of defects the on-shell value of $y$ becomes zero, and hence the ground state energy vanishes. Coupling JT~gravity to a gas of defects, however, yields a non-vanishing on-shell value for $y$ according to eqs.~\eqref{eq:dshifts} and \eqref{eq:yt}, which therefore results in a non-trivial shift of the ground state energy. This observation is in agreement with the results obtained in refs.~\cite{Maxfield3gravity,WittenDeformations}, and we come back to this point in the explicit example below and in Section~\ref{Section:LowtemperatureExpansion}.
\bigskip
Finally, let us illustrate the structure of the partially off-shell partition function $Z(y,t;\beta)$ for JT gravity interacting with a single defect type specified by the coupling~$\epsilon$ and identification angle~$\alpha$. Then --- according to eqs.~\eqref{eq:gammak} and \eqref{eq:dshifts} --- the on-shell couplings $t_k$ for $k\ge 2$ become
\begin{equation} \label{eq:onshelltk}
t_k = \frac{(-1)^k}{(k-1)!} + \left(-\frac{\alpha^2}{4\pi^2}\right)^k \frac{2\pi^2 \epsilon}{k!}
\quad \text{for} \quad k\ge 2 \ ,
\end{equation}
whereas the remaining unfixed couplings $t_0$ and $t_1$ acquire their on-shell values upon setting
\begin{equation}
\left.(t_0,t_1)\right|_\text{on-shell}= 2 \pi^2 \epsilon\left(1, -\frac{\alpha^2}{4\pi^2}\right) \ .
\end{equation}
The on-shell values of the variables $(y,t)$ defined in terms of $(t_0,t_1)$ in eq.~\eqref{eq:yt} are governed by the functional relations
\begin{equation} \label{eq:ytonshell}
\begin{aligned}
0 &= \left. -\sqrt{y} \,\BJ_{1}(2 \sqrt{y})\right|_\text{on-shell}
+ \left.(2\pi^2 \epsilon) \BJ_0\left( \frac{\alpha\sqrt{y}}\pi \right) \right|_\text{on-shell} \ , \\
\left. t \right|_\text{on-shell} &= \left. \BJ_{0}(2 \sqrt{y})\right|_\text{on-shell}
+ \left.(2\pi^2 \epsilon) \frac{\alpha}{2\pi\sqrt{y}} \BJ_1\left( \frac{\alpha\sqrt{y}}\pi \right) \right|_\text{on-shell} \ ,
\end{aligned}
\end{equation}
in terms of the Bessel functions $\BJ_\nu(x)$ of the first kind
\begin{equation} \label{eq:Bfk}
\BJ_\nu(x) = \left(\frac{x}2\right)^\nu \sum_{k=0}^{+\infty} \frac{(-1)^k}{\Gamma(\nu+k+1) \, k!} \left(\frac{x^2}4\right)^k \ , \quad
\BJ_{-n}(x)\equiv(-1)^n \BJ_n(x) \ \text{for integer $n$} \ .
\end{equation}
In the limit of vanishing defect interaction $\epsilon \to 0$ the functional relations~\eqref{eq:ytonshell} have the on-shell solution $\left.(y,t)\right|_\text{on-shell}=(0,1)$ in accord with ref.~\cite{OkuyamaSakai1}. Solving for $\left.(y,t)\right|_\text{on-shell}$ in the vicinity of $(0,1)$ for small $\epsilon$ with the implicit function theorem, we obtain for $(y,t)$ the on-shell expansion in the first few orders
\begin{equation}\label{eq:ytexpansionpoint}
\begin{aligned}
\left.y\right|_\text{on-shell}=&2\pi^2\epsilon+\pi^2\left(2\pi^2-\alpha^2\right)\epsilon^2+\frac{\pi^2(15\alpha^4-72\pi^2\alpha^2+80\pi^4)}{24} \epsilon^3+\ldots \ , \\
\left.t\right|_\text{on-shell} =& 1+\frac{\alpha^2-4\pi^2}{2} \epsilon-\frac{\alpha^4-8 \pi^2\alpha^2+8 \pi^4}{8} \epsilon^2 \\
&\qquad\qquad\qquad+\frac{21\alpha^6-216\pi^2\alpha^4+576\pi^4\alpha^2-448\pi^6}{288}\epsilon^3+\ldots \ .
\end{aligned}
\end{equation}
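These expansions can be cross-checked numerically by solving the functional relations~\eqref{eq:ytonshell} directly; a small sketch (assuming Python with the mpmath library, for illustrative parameter values):
\begin{verbatim}
import mpmath as mp

eps, alpha = mp.mpf('0.01'), mp.mpf('1.0')

def f(yv):   # first relation of eq. (ytonshell)
    s = mp.sqrt(yv)
    return -s*mp.besselj(1, 2*s) + 2*mp.pi**2*eps*mp.besselj(0, alpha*s/mp.pi)

y_os = mp.findroot(f, 2*mp.pi**2*eps)   # seeded with the leading-order value
s = mp.sqrt(y_os)
t_os = (mp.besselj(0, 2*s)
        + 2*mp.pi**2*eps*alpha/(2*mp.pi*s)*mp.besselj(1, alpha*s/mp.pi))

y_ser = 2*mp.pi**2*eps + mp.pi**2*(2*mp.pi**2 - alpha**2)*eps**2
t_ser = (1 + (alpha**2 - 4*mp.pi**2)/2*eps
         - (alpha**4 - 8*mp.pi**2*alpha**2 + 8*mp.pi**4)/8*eps**2)
print(y_os - y_ser, t_os - t_ser)   # residuals of order eps^3
\end{verbatim}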
According to eq.~\eqref{eq:GroundState} these on-shell values give rise to a non-vanishing ground state energy, which to leading order in $\epsilon$ reads
\begin{equation} \label{eq:GSenergy}
E_0 = - 2 \pi^2 \epsilon + \mathcal{O}(\epsilon^2) \ .
\end{equation}
Furthermore, inserting the on-shell couplings~\eqref{eq:onshelltk} into the functions $I_n$ for $n\ge2$ yields in terms of the Bessel function~\eqref{eq:Bfk} the expressions
\begin{equation} \label{eq:DefIOneBdry}
I_n(y) = \frac{(-1)^n}{(\sqrt{y})^{n-1}} \BJ_{n-1}(2 \sqrt{y}) + (2\pi^2 \epsilon) \left( - \frac{\alpha}{2\pi \sqrt{y}} \right)^n \BJ_n\left(\frac{\alpha \sqrt{y}}\pi\right)
\quad \text{for} \quad n\ge 2 \ .
\end{equation}
Similarly, the function~$J'(y)$ defined via eq.~\eqref{eq:defJC} becomes
\begin{equation} \label{eq:DefJpOneBdry}
J'(y) = 1+ 2 \pi^2 \epsilon \frac{\alpha^2}{4\pi^2} - \BJ_0(2\sqrt{y})
- (2\pi^2\epsilon) \frac{\alpha}{2\pi\sqrt{y}} \BJ_1\left(\frac{\alpha\sqrt{y}}\pi\right) \ .
\end{equation}
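As a quick sanity check, the derivative of the series~\eqref{eq:defJC} with the on-shell couplings~\eqref{eq:onshelltk} indeed resums to the closed Bessel form above; a short numerical sketch (again assuming mpmath, with illustrative values):
\begin{verbatim}
import mpmath as mp

eps, alpha, y = mp.mpf('0.1'), mp.mpf('1.3'), mp.mpf('0.7')

def t_k(k):   # on-shell couplings of eq. (onshelltk)
    return ((-1)**k/mp.factorial(k - 1)
            + (-alpha**2/(4*mp.pi**2))**k * 2*mp.pi**2*eps/mp.factorial(k))

# J'(y) as the series sum_{k>=2} y^(k-1) t_k/(k-1)! versus the closed form
series = mp.nsum(lambda k: y**(int(k) - 1)*t_k(int(k))/mp.factorial(int(k) - 1),
                 [2, mp.inf])
closed = (1 + eps*alpha**2/2 - mp.besselj(0, 2*mp.sqrt(y))
          - 2*mp.pi**2*eps*alpha/(2*mp.pi*mp.sqrt(y))
            * mp.besselj(1, alpha*mp.sqrt(y)/mp.pi))
print(series - closed)   # zero to working precision
\end{verbatim}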
Thus --- according to eq.~\eqref{eq:rho0} --- the genus zero contribution of the spectral density is given in terms of the Bessel functions $\BJ_0$ and $\BJ_1$ and the modified Bessel functions $\MBJ_0$ and $\MBJ_1$ by
\begin{equation} \label{eq:rho0offshell}
\rho_0(E;y,t) =
\frac1{\sqrt{2}\,\pi} \int_{-y}^E dv \frac{\MBJ_0(2\sqrt{v}) + (2\pi^2\epsilon) \frac{\alpha}{2\pi\sqrt{v}} \MBJ_1\left(\frac{\alpha\sqrt{v}}\pi\right)}{\sqrt{E-v}}
\end{equation}
with the modified Bessel functions defined as $\MBJ_\nu(x) = i^{-\nu} \BJ_{\nu}(i x)$. This result is in agreement with refs.~\cite{WittenDeformations,Maxfield3gravity}. Finally, upon inserting the expressions~\eqref{eq:DefIOneBdry} into the general genus one and genus two results~\eqref{eq:Z1onshell} and \eqref{eq:Z2onshell}, we arrive at $Z_1(y,t;\beta)$ and $Z_2(y,t;\beta)$ in terms of Bessel functions. Expanding these results to leading order in the coupling $\epsilon$ we respectively obtain
\begin{multline} \label{eq:Z1Z2}
\left.\tilde Z_{1}(y,t;\beta)\right|_{y=2 \pi^2 \epsilon + \mathcal{O}(\epsilon^2),
t=1+\frac{1}{2}(\alpha^2-4\pi^2)\epsilon+\mathcal{O}(\epsilon^2)}
=\frac{\beta^{\frac32}e^{2\pi^2\epsilon \beta}}{\sqrt{2\pi}} \left(\frac1{24}-\frac{\alpha ^2 \epsilon }{48}+\frac{\pi ^2 \epsilon }{12}\right) \\
+\frac{\beta^{\frac12}e^{2\pi^2\epsilon \beta}}{\sqrt{2\pi}} \left(\frac1{24}+\frac{\alpha ^4 \epsilon }{384 \pi ^2}-\frac{\alpha ^2 \epsilon }{24}+\frac{\pi ^2 \epsilon }{8}\right)
+ \mathcal{O}(\epsilon^2) \ ,
\end{multline}
and
\begin{equation}
\begin{aligned}
& \left.\tilde Z_{2}(y,t;\beta)\right|_{y=2 \pi^2 \epsilon + \mathcal{O}(\epsilon^2),
t=1+\frac{1}{2}(\alpha^2-4\pi^2)\epsilon+\mathcal{O}(\epsilon^2)}\\
&\quad =\frac{\beta^{\frac{9}{2}} e^{2\pi^2\epsilon \beta}}{\sqrt{2\pi}} \left(\frac{1}{1152}-\frac{\alpha ^2 \epsilon }{768}+\frac{\pi ^2 \epsilon }{192}\right)
+\frac{\beta^{\frac{7}{2}} e^{2\pi^2\epsilon \beta}}{\sqrt{2\pi}} \left(\frac{29}{5760}+\frac{29 \alpha ^4 \epsilon }{92160 \pi ^2}-\frac{29 \alpha ^2 \epsilon }{2880}
+\frac{203 \pi ^2 \epsilon }{5760}\right)\\
&\quad+\frac{\beta^{\frac{5}{2}} e^{2\pi^2\epsilon \beta}}{\sqrt{2\pi}} \left(\frac{139}{11520}-\frac{29 \alpha ^6 \epsilon }{1105920 \pi ^4}
+\frac{7 \alpha ^4 \epsilon }{3840 \pi ^2}-\frac{181 \alpha ^2 \epsilon }{5760}
+\frac{1697 \pi ^2 \epsilon }{17280}\right)\\
&\quad+\frac{\beta^{\frac{3}{2}} e^{2\pi^2\epsilon \beta}}{\sqrt{2\pi}} \left(\frac{449}{11520}+\frac{\alpha ^8 \epsilon }{1179648 \pi ^6}
-\frac{29 \alpha ^6 \epsilon }{276480 \pi ^4}+\frac{461 \alpha ^4 \epsilon }{46080 \pi ^2}
-\frac{77 \alpha ^2 \epsilon }{576}+\frac{5269\pi ^2 \epsilon }{13824}\right)\\
&\quad+\frac{\beta^{\frac{1}{2}} e^{2\pi^2\epsilon \beta}}{\sqrt{2\pi}} \Big(-\frac{137}{9216}-\frac{\alpha ^{10} \epsilon }{70778880 \pi ^8}+\frac{11 \alpha ^8 \epsilon }{4423680 \pi ^6}
-\frac{19 \alpha ^6 \epsilon }{122880 \pi ^4}-\frac{289 \alpha ^4 \epsilon }{138240 \pi ^2}\\
&\qquad\qquad\qquad\qquad\qquad
+\frac{1267 \alpha ^2 \epsilon }{27648}-\frac{3239 \pi ^2 \epsilon }{23040}\Big) + \mathcal{O}(\epsilon^2) \ .
\end{aligned}
\end{equation}
We observe that at every order in the inverse temperature $\beta$ there are contributions from the interaction with the defects already at linear order in the defect coupling $\epsilon$. Hence, the dynamics of JT~gravity is strongly influenced by the interaction with defects.
Finally, let us remark that the generalisation to multiple species of defects (with defect couplings $\epsilon_j$ and identification angles $\alpha_j$) is straightforward. Namely, the on-shell values of the couplings $(y,t)$ of eq.~\eqref{eq:ytexpansionpoint} generalise to
\begin{equation}
\begin{aligned}
\left. y \right|_\text{on-shell} &= 2 \pi^2 \sum_{j} \epsilon_{j}
+\sum_{j,k}\left( 2 \pi^4-\frac12\pi^2\alpha_j^2-\frac12\pi^2\alpha_k^2 \right) \epsilon_{j}\epsilon_{k}
+ \ldots \ , \\
\left. t \right|_\text{on-shell} &= 1+\sum_{j} \frac{\alpha_j^2 - 4 \pi^2}2 \epsilon_j
-\sum_{j,k} \frac{ \alpha_{k}^4+\alpha_j^4-8\pi^2 \left(\alpha_k^2+\alpha_j^2\right)+ 16\pi^4 }{16}\epsilon_{j}\epsilon_{k}
+ \ldots \ .
\end{aligned}
\end{equation}
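A quick symbolic check confirms that for a single defect species these double sums collapse to the coefficients of the expansion~\eqref{eq:ytexpansionpoint}, e.g., at quadratic order (a sympy sketch):
\begin{verbatim}
import sympy as sp

eps, alpha = sp.symbols('epsilon alpha')
# single species: the indices j = k coincide and the double sums collapse
y2 = (2*sp.pi**4 - sp.pi**2*alpha**2/2 - sp.pi**2*alpha**2/2)*eps**2
t2 = -(2*alpha**4 - 8*sp.pi**2*2*alpha**2 + 16*sp.pi**4)/16*eps**2

print(sp.simplify(y2 - sp.pi**2*(2*sp.pi**2 - alpha**2)*eps**2))                 # 0
print(sp.simplify(t2 + (alpha**4 - 8*sp.pi**2*alpha**2 + 8*sp.pi**4)/8*eps**2))  # 0
\end{verbatim}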
Furthermore, the functions~\eqref{eq:DefIOneBdry} and \eqref{eq:DefJpOneBdry} now become
\begin{equation}
\begin{aligned}\label{eq:InJprimeoneshellmanydefects}
I_n(y) &= \frac{(-1)^n}{(\sqrt{y})^{n-1}} \BJ_{n-1}(2 \sqrt{y})
+ 2\pi^2 \sum_j \epsilon_j \left( - \frac{\alpha_j}{2\pi \sqrt{y}} \right)^n \BJ_n\left(\frac{\alpha_j \sqrt{y}}\pi\right)
\quad \text{for} \quad n\ge 2 \ , \\
J'(y) &= 1+ 2 \pi^2 \sum_j \epsilon_j \frac{\alpha_j^2}{4\pi^2} - \BJ_0(2\sqrt{y})
- 2\pi^2 \sum_j \epsilon_j \frac{\alpha_j}{2\pi\sqrt{y}} \BJ_1\left(\frac{\alpha_j\sqrt{y}}\pi\right) \ .
\end{aligned}
\end{equation}
With these expressions at hand one can again readily compute order-by-order in the genus expansion parameter $g_s$ the partition function of JT~gravity coupled to several species of defects.
\section{Low Temperature Expansion}\label{Section:LowtemperatureExpansion}
So far the partition function has been organised as a genus expansion. That is to say, at any given genus different powers of the temperature $T$ contribute in combination with different powers of the defect couplings~$\epsilon_j$. The contributions to the thermal partition functions at each genus are multiplied by polynomials in the inverse temperature $\beta=1/T$. Hence, the magnitude of these polynomials is bounded for high temperatures, and the genus expansion in $g_s$ is sensible in the high temperature regime. However, this expansion breaks down in the low temperature limit $\beta\to+\infty$ unless we keep $g_s \beta^{3/2}$ fixed. Then the perturbative genus expansion remains finite and can be summed exactly \cite{OkuyamaSakai1,OkuyamaSakai2,Alishahihaetal}. This double scaling limit implies $g_s \to 0^+$ for the genus expansion parameter, and as a consequence the non-perturbative corrections of the type $\sim e^{-1/g_s}$ vanish in this limit.
To study the interaction of JT~gravity with a gas of defects in the described low temperature limit, the couplings~$\epsilon_j$ --- which are the characteristic energy scales of the defects, see, e.g., eq.~\eqref{eq:GSenergy} --- must be comparable to the low temperature scale $T$. Therefore, we additionally require that for $\beta \to +\infty$ the products $\beta \epsilon_j$ remain constant as well. This limit also implies that non-perturbative corrections of the type $\sim e^{-1/| \epsilon_j|}$ are exponentially suppressed.
\subsection{Low Temperature Limit}\label{LowTemperatureLimit}
Let us consider the low temperature expansion of JT~gravity coupled to a gas of defects of a single species type characterised by the defect coupling $\epsilon$ and the identification angle $\alpha$. To this end, we want to compute the partition functions $Z(\beta_1,\ldots,\beta_m)$ defined in eq.~\eqref{eq:Zfuncs} in the double scaling limit
\begin{equation} \label{eq:lowtemplimit}
\beta_i \to +\infty \quad \text{with} \quad g_s \beta_i^{3/2} = \text{const.} \ , \ \epsilon \beta_i = \text{const.} \quad \text{for all} \quad i=1,\ldots,m \ ,
\end{equation}
with distinct inverse temperatures $\beta_i$ for the individual boundary components.\footnote{In the absence of defects the low temperature limit of the partition function $Z(\beta_1,\beta_2)$ was previously derived in ref.~\cite{OkuyamaSakai2}. For the uniform limit $\beta \to +\infty$ with $\beta = \beta_1 = \ldots =\beta_m$ the low temperature limit of the partition functions together with defects was first reported in ref.~\cite{Alishahihaetal}.} The inverse temperatures of the boundary components are conveniently described in terms of the universal inverse temperature scale $\beta$ and the dimensionless constants
\begin{equation}
\mathfrak{b}_i = \frac{\beta_i}{\beta} \ .
\end{equation}
Then the above limit becomes $\beta \to+\infty$ for constant positive values $\mathfrak{b}_i$ while keeping $g_s \beta^{3/2}$ and $\epsilon\beta$ fixed.
In the limit~\eqref{eq:lowtemplimit} (the topological part of) the partition function of eq.~\eqref{eq:Zfuncs} becomes
\begin{equation} \label{eq:ZmLowTempExp}
\begin{aligned}
Z(\beta_1,&\ldots,\beta_m)^\text{top.} =
\frac1{g_s^2} \mathcal{B}(\beta_1) \cdots \mathcal{B}(\beta_m) G(1,\{ t_k = \sdef_k \}) \\
&= \sum_{g,n=0}^{+\infty} \frac{(g_s \beta^{\frac32})^{2g-2+m}(\epsilon\beta)^n}{(2\pi)^\frac{m}2} \\
&\quad \cdot\sum_{\ell_1,\ldots,\ell_m=0}^{+\infty}
\beta^{\ell_1 + \ldots +\ell_m-m-n-3g+3}
\left.
\mathfrak{b}_1^{\ell_1+\frac12}\cdots\mathfrak{b}_m^{\ell_m+\frac12}
\partial_{\ell_1}\cdots\partial_{\ell_m} G_{g,m+n}(\{t_k\}) \right|_{t_k =\sdef_k/\epsilon} \ ,
\end{aligned}
\end{equation}
with the generating function $G(1,\{ t_k\}) = \sum_{g,n} g_s^{2g} G_{g,n}(\{t_k\})$ decomposed into the contributions $G_{g,n}$ indexed by their genus $g$ and their number of marked points $n$. Imposing now the selection rule~\eqref{selectionrules} and inserting $\sdef_0= 2\pi^2 \epsilon$, we arrive at
\begin{multline} \label{eq:Zlowtemp}
Z(\beta_1,\ldots,\beta_m)^\text{top.} \\
= \sum_{g,n=0}^{+\infty}
\frac{(g_s \beta^{\frac32})^{2g-2+m}(2\pi^2\epsilon\beta)^n}{(2\pi)^\frac{m}2 \, n!}
\sum_{\ell_1,\ldots,\ell_m=0}^{+\infty}
\mathfrak{b}_1^{\ell_1+\frac12}\cdots\mathfrak{b}_m^{\ell_m+\frac12}
\left\langle \tau_0^n \tau_{\ell_1} \cdots \tau_{\ell_m} \right\rangle_g
+ \mathcal{O}(\beta^{-1})\ ,
\end{multline}
in terms of the non-vanishing correlators~\eqref{topologicalgravitycorrelationfunctions} on the moduli space of stable curves $\overline{\mathcal{M}}_{g,m+n}$ of genus $g$ with $m+n$ marked points.\footnote{The correction terms $\mathcal{O}(\beta^{-1})$ depend on the genus expansion parameter $g_s$ and the coupling $\epsilon$ in such a way that in the double scaling limit~\eqref{eq:lowtemplimit} they approach zero at least with the rate $\sim1/\beta$.} The string equation of topological correlators implies (except for the genus zero correlator $\left\langle \tau_0 \tau_0 \tau_0 \right\rangle_0 = 1$) \cite{WittenIntersection}
\begin{equation} \label{eq:topstringeq}
\left\langle \tau_0^n \tau_{\ell_1} \cdots \tau_{\ell_m} \right\rangle_g
= \sum_{p_1 +\ldots+ p_m = n} \frac{n!}{p_1! \cdots p_m!}
\left\langle \tau_{\ell_1-p_1} \cdots \tau_{\ell_m-p_m} \right\rangle_g \ ,
\end{equation}
where $\left\langle \tau_{a_1} \cdots \tau_{a_m} \right\rangle_g = 0$ if any $a_i$, $i=1,\ldots,m$, is negative.
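To make the bookkeeping explicit, the following sketch (in plain Python) implements the genus zero correlators $\langle\tau_{a_1}\cdots\tau_{a_n}\rangle_0=(n-3)!/(a_1!\cdots a_n!)$ and verifies an instance of the relation~\eqref{eq:topstringeq}:
\begin{verbatim}
from math import factorial
from itertools import product

def corr0(alphas):
    # <tau_{a_1}...tau_{a_n}>_0 = (n-3)!/(a_1!...a_n!) if sum a_i = n-3, else 0
    n = len(alphas)
    if n < 3 or any(a < 0 for a in alphas) or sum(alphas) != n - 3:
        return 0
    out = factorial(n - 3)
    for a in alphas:
        out //= factorial(a)
    return out

def lhs(n, ls):   # <tau_0^n tau_{l_1} ... tau_{l_m}>_0
    return corr0((0,)*n + tuple(ls))

def rhs(n, ls):   # right-hand side of eq. (topstringeq) at genus zero
    total = 0
    for ps in product(range(n + 1), repeat=len(ls)):
        if sum(ps) == n:
            coeff = factorial(n)
            for p in ps:
                coeff //= factorial(p)
            total += coeff*corr0(tuple(l - p for l, p in zip(ls, ps)))
    return total

print(lhs(2, (1, 1, 0)), rhs(2, (1, 1, 0)))   # 2 2
\end{verbatim}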
Following ref.~\cite{OkuyamaSakai2}, we express the low temperature limit by applying the results of ref.~\cite{Okounkovnpointfunction}. Namely, let us define the generating function~$\mathcal{F}$ of topological correlators with $m$ marked points as
\begin{equation}
\begin{aligned}
&\mathcal{F}(x) =\frac1{x^2}
+\sum_{\ell=0}^{+\infty} \sum_{g=1}^{+\infty} x^\ell \left\langle \tau_{\ell} \right\rangle_g \ ,\quad
\mathcal{F}(x_1,x_2) = \frac{1}{x_1+x_2}
+\sum_{\ell_1,\ell_2=0}^{+\infty} \sum_{g=1}^{+\infty} x_1^{\ell_1} x_2^{\ell_2}
\left\langle \tau_{\ell_1} \tau_{\ell_2} \right\rangle_g\ , \\
&\mathcal{F}(x_1,\ldots,x_m) =
\sum_{\ell_1,\ldots,\ell_m=0}^{+\infty} \sum_{g=0}^{+\infty} x_1^{\ell_1} \cdots x_m^{\ell_m}
\left\langle \tau_{\ell_1} \cdots \tau_{\ell_m} \right\rangle_g \quad \text{for} \quad m\ge 3 \ .
\end{aligned}
\end{equation}
Using these expressions with the string equation~\eqref{eq:topstringeq} and the genus zero formula $\left\langle\tau_{\alpha_1}\cdots\tau_{\alpha_n}\right\rangle_0=\frac{(n-3)!}{\alpha_1!\cdots\alpha_n!}$ (valid for $\sum_i \alpha_i = n-3$), we arrive from eq.~\eqref{eq:Zlowtemp} (for any $m\ge1$) at
\begin{equation} \label{eq:lowtemp}
Z(\beta_1,\ldots,\beta_m)
=\prod_{i=1}^m \sqrt{\frac{g_s^{\frac23}\beta_i}{2\pi}} \
e^{2\pi^2\epsilon\beta_i}
\mathcal{F}(g_s^{2/3} \beta_1,\ldots,g_s^{2/3} \beta_m)
+ \mathcal{O}(\beta^{-1}) \ ,
\end{equation}
because $Z(\beta_1,\ldots,\beta_m)^\text{top.} =Z(\beta_1,\ldots,\beta_m)$ for $m>2$ while the semi-classical terms of the partition functions $Z(\beta_1)$ and $Z(\beta_1,\beta_2)$ are included in the leading non-polynomial terms in $\mathcal{F}(x)$ and $\mathcal{F}(x_1,x_2)$, respectively.
For these generating functions Okounkov has developed a remarkable formula, spelt out in ref.~\cite{Okounkovnpointfunction}, namely
\begin{equation} \label{eq:defF2}
\mathcal{F}(x_1,\ldots,x_m) =
\frac{(2\pi)^{m/2}}{\sqrt{x_1\cdot\ldots\cdot x_m}} \mathcal{G}(2^{-1/3}x_1,\ldots,2^{-1/3}x_m) \ ,
\end{equation}
where
\begin{equation}
\mathcal{G}(x_1,\ldots,x_m) = \sum_{\alpha \in \Pi_m} \frac{(-1)^{ \ell(\alpha) +1}}{\ell(\alpha)}
\sum_{\sigma \in S_{\ell(\alpha)}} \mathcal{E}(\sigma(x_{\alpha})) \ .
\end{equation}
Here the first sum is taken over the partitions $\Pi_m$ of the set $\{1,\ldots,m\}$ with $m$ elements, where the individual partitions $\alpha$ are characterised by their length $\ell(\alpha)$. Furthermore, to each partition $\alpha$ of length $\ell(\alpha)$ is assigned a vector $x_\alpha$ of length $\ell(\alpha)$, where the individual entries of $x_\alpha$ are in turn given by sums of the variables $x_i$ indexed by the subsets in $\alpha$. For example, the partition $\alpha = \left\{ \{1,3,6\}, \{2\}, \{4,5\} \right\} \in \Pi_6$ of length $\ell(\alpha) = 3$ yields the vector $x_\alpha=(x_1+x_3+x_6,x_2,x_{4}+x_{5})$. The second sum runs over the permutations $\sigma$ in the symmetric group $S_{\ell(\alpha)}$ of size $\ell(\alpha)$, where $\sigma(x_\alpha)$ permutes the entries of the vector $x_\alpha$ of length $\ell(\alpha)$. Finally, the function $\mathcal{E}(x_1,\ldots,x_\ell)$ is defined as
\begin{equation}\label{eq:Definitionepsilon}
\mathcal{E}\left(x_1,\ldots,x_\ell\right)=\frac{1}{2^{\ell}\pi^{\ell/2}}\frac{
\text{e}^{\frac{1}{12}\sum_{i=1}^{\ell} x_i^3}}{\sqrt{x_1\cdot\ldots\cdot x_\ell}}
\int\displaylimits_{y_i\geq 0} d y_1 \cdots d y_\ell \;
\text{e}^{ -\sum_{i=1}^{\ell}\frac{\left(y_i-y_{i+1} \right)^2}{4 x_i}-\sum_{i=1}^{\ell}\frac{y_i+y_{i+1}}{2}x_i}\ ,
\end{equation}
with $y_{\ell+1} \equiv y_1$. For further details on the function~$\mathcal{G}(x_1,\ldots,x_m)$ see the original definitions in ref.~\cite{Okounkovnpointfunction}.
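To unpack the combinatorics, the following sketch assembles the sum defining $\mathcal{G}$ (assuming Python with sympy's \texttt{multiset\_partitions}; the function $\mathcal{E}$ is passed as a callable stub, since the numerical evaluation of the integral~\eqref{eq:Definitionepsilon} is a separate task):
\begin{verbatim}
from itertools import permutations
from fractions import Fraction
from sympy.utilities.iterables import multiset_partitions

def G(xs, E):
    # sum over set partitions alpha of {0,...,m-1}; for each partition the
    # vector x_alpha of block sums is summed over all its permutations
    total = 0
    for alpha in multiset_partitions(list(range(len(xs)))):
        l = len(alpha)
        x_alpha = [sum(xs[i] for i in block) for block in alpha]
        for sigma in permutations(x_alpha):
            total += (-1)**(l + 1)*Fraction(1, l)*E(sigma)
    return total

calls = []
G((1, 2, 3), lambda s: calls.append(s) or 0)
print(len(calls))   # 13 summands for m = 3: 1 + 3*2 + 6 permuted block sums
\end{verbatim}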
Using the integral formulation of the generating functions~$\mathcal{F}$, the partition function $Z(\beta)$ in the low temperature limit is calculated to be
\begin{equation} \label{eq:Z1lowtemp}
Z(\beta) = \frac{\text{e}^{\frac{g_s^2}{24}\beta^3+2\pi^2 \epsilon \beta}}{\sqrt{2\pi}\, g_s \beta^{\frac32}}
+ \mathcal{O}(\beta^{-1}) \ ,
\end{equation}
while the partition function $Z(\beta_1,\beta_2)$ becomes
\begin{equation}
Z(\beta_1,\beta_2) = \frac{\text{e}^{\frac{g_s^2}{24} (\beta_1+\beta_2)^3 + 2\pi^2 \epsilon(\beta_1 +\beta_2)}}
{\sqrt{2\pi} \, g_s (\beta_1 + \beta_2)^{\frac32}} \,
\operatorname{erf}(2^{-3/2} g_s \sqrt{\beta_1\beta_2(\beta_1 + \beta_2)} )
+ \mathcal{O}(\beta^{-1}) \ ,
\end{equation}
in terms of the error function
\begin{equation}
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x du \, e^{-u^2}
= \frac{2}{\sqrt{\pi}} \left( x - \frac{x^3}{3} + \frac{x^5}{10} - \ldots \, \right) \ .
\end{equation}
\subsection{Low Temperature Expansion Schemes} \label{OffshellExpansioninT}
The corrections $\mathcal{O}(\beta^{-1})$ to the low temperature limit in eq.~\eqref{eq:lowtemp} are perturbatively included order-by-order by evaluating the subleading terms of eq.~\eqref{eq:ZmLowTempExp}. For explicitness we focus on the partition function $Z(T)$ with a single boundary component with temperature $T \equiv \beta^{-1}$, and we want to study its low temperature corrections
\begin{equation} \label{eq:TExpScheme1}
Z(\epsilon; T) = \frac{T^{\frac32}\,\text{e}^{\frac{g_s^2}{24T^3}+\frac{2\pi^2 \epsilon}T}}{\sqrt{2\pi}\, g_s}
\mathcal{Z}_\epsilon(T)
\quad \text{where} \quad
\mathcal{Z}_\epsilon(T)
= \sum_{\ell=0}^{+\infty} T^\ell \, z_\ell( g_s\beta^{3/2},\epsilon\beta ) \ .
\end{equation}
The coefficient functions $z_\ell$ do not depend on the temperature $T$ in the applied double scaling limit~\eqref{eq:lowtemplimit}. By including these perturbative temperature corrections to all orders, the partition function becomes an asymptotic series in $T$, i.e., the series does not contain any non-perturbative corrections that vanish faster than any power of $T$ in the limit $T\to 0$.
Compared to the (asymptotic) genus expansion studied in detail in Section~\ref{JTGravityDeformed JT GravityandTopological Gravity}, the low temperature expansion~\eqref{eq:TExpScheme1} is more natural from a physics point of view, as for many physical problems one is interested in the result up to a certain energy scale. In particular, we see that by only taking the leading order contribution~\eqref{eq:Z1lowtemp} we can immediately read off the threshold energy~\eqref{eq:GSenergy}. Note, however, that since the coupling $\epsilon$ approaches zero in the low energy limit $T\to0$, the ground state energy and the subleading temperature corrections in the expansion~\eqref{eq:TExpScheme1} depend on the details of the chosen double scaling limit. The limit~\eqref{eq:lowtemp} is naturally adapted to the defect coupling $\epsilon$ and the genus expansion parameter $g_s$. However, alternatively we can study other low temperature limits, where other ratios between physical parameters and the temperature $T$ are kept constant. In the following, we refer to such different choices for the double scaling limits as distinct low temperature expansion schemes.
In addition to the scheme discussed in the previous subsection, we introduce the low temperature expansion scheme of ref.~\cite{OkuyamaSakai1}, which is naturally adapted to the variables $(y,t)$ defined in eq.~\eqref{eq:yt} by the double scaling limit
\begin{equation} \label{eq:lowtemplimityt}
\beta \to +\infty \quad \text{with} \quad \frac{g_s \beta^{\frac32}}{t} = \text{const.} \ , \ y \beta = \text{const.} \ .
\end{equation}
Solving eq.~\eqref{eq:yt} for small deformations $\sdef_k$, $k=1,2,3,\ldots$, away from pure JT~gravity yields for the coupling parameters $(y,t)$ appearing in the above limit the expansion
\begin{equation}
\begin{aligned}
y &= \sdef_0 + \frac12 (2 \sdef_0\sdef_1 -\sdef_0^2)+\ldots \ ,\\
t & = 1 - (\sdef_0 + \sdef_1) + (\sdef_0^2 -\sdef_0\sdef_1 -\sdef_0\sdef_2) + \ldots \ ,
\end{aligned}
\end{equation}
which at leading order for a single defect become $y =2\pi^2\epsilon + \mathcal{O}(\epsilon^2)$ and $t = 1 + \mathcal{O}(\epsilon)$ (cf.\ eq.~\eqref{eq:ytexpansionpoint}). This low temperature expansion scheme agrees at leading order in $\epsilon$ with the scheme~\eqref{eq:lowtemplimit}, and in particular, upon inserting the on-shell values for $(y,t)$ in the absence of defects, i.e., setting $\epsilon=0$ such that $(y,t)=(0,1)$, the two low temperature expansion schemes become the same.
In the latter scheme the (asymptotic) low temperature expansion of the partition function reads \cite{OkuyamaSakai1}
\begin{equation}\label{eq:ZT}
Z(y,t;T)=\frac{T^{\frac32}\,\text{e}^{\frac{g_s^2}{24t^2T^3}+\frac{y}{T}}}{\sqrt{2\pi}\, g_s} \mathcal{Z}_{y,t}(T)
\quad \text{where} \quad
\mathcal{Z}_{y,t}(T) =\sum_{\ell=0}^{+\infty} T^\ell z_\ell(y,t) \ ,
\end{equation}
where the coefficient functions $z_\ell(y,t)$ now differ from the coefficient functions $z_\ell(g_s\beta^{3/2},\epsilon\beta)$ in eq.~\eqref{eq:TExpScheme1} (even after inserting the functional relations among their respective arguments).\footnote{There is actually a subtlety here. While the coefficient functions $z_\ell(g_s\beta^{3/2},\epsilon\beta)$ are temperature independent in the double scaling limit~\eqref{eq:lowtemplimit}, the functions $z_\ell(y,t)$ are still temperature dependent in the limit~\eqref{eq:lowtemplimityt}. One can obviously define temperature independent coefficients in the latter case as well. However, as discussed in the following the coefficient functions $z_\ell(y,t)$ are conveniently computable and comparable with ref.~\cite{OkuyamaSakai1}. Truncating the infinite sum in $\mathcal{Z}_{y,t}$ at some finite value $\ell=N$ yields unambiguously the low temperature corrections up to order $T^N$ in the discussed expansion scheme (because the temperature dependence only gives rise to corrections at order $\mathcal{O}(T^{N+1})$).}
For completeness, let us briefly review the strategy of ref.~\cite{OkuyamaSakai1} to compute the coefficients $z_\ell(y,t)$. First of all, the coefficients $z_\ell(y,t)$ are conveniently determined from the low temperature expansion of the function $W(y,t;\beta)$ defined in eq.~\eqref{eq:Wgenusxpansion}. Using the ansatz
\begin{equation}\label{eq:WTansatz}
W(y,t;T)=\sqrt{\frac{T}{4 \pi}}\,\text{e}^{\frac{g_s^2}{24 t^2 T^3 }+\frac{y}{T}}\mathcal{W}_{y,t}(T)
\quad \text{where} \quad
\mathcal{W}_{y,t}(T)=\sum_{\ell=0}^{\infty}T^\ell w_\ell(y,t) \ ,
\end{equation}
together with equation~\eqref{eq:DiffRec} yields the differential equation
\begin{equation} \label{eq:WT}
\partial_t \mathcal{W}_{y,t}
=\frac{g_s^2}{12 t^3T^3}\mathcal{W}_{y,t}-\sum_{g=1}^{\infty} g_s^{2 g} u_g \nabla(T) \mathcal{W}_{y,t} +\frac{g_s^2}{12}\nabla(T)^3 \mathcal{W}_{y,t} \ ,
\end{equation}
in terms of the differential operator
\begin{equation}
\nabla(T)
=\partial_0+\frac{1}{t T}+ \frac{g_s^2 \, I_2}{12 \,t^4 \, T^3}
= \frac1t \left( -I_2 \partial_t + D_y\right)+ \frac{g_s^2 \, I_2}{12 \,t^4 \, T^3} \ , \qquad
D_y = \partial_y + \frac1T \ ,
\end{equation}
which then recursively determines the coefficient functions $w_\ell(y,t)$.\footnote{The first few coefficient functions $w_\ell$ are calculated and spelled out explicitly in ref.~\cite{OkuyamaSakai1}.} Finally, the relation~\eqref{eq:ZWrel} translates to
\begin{equation}
\mathcal{W}_{y,t} = T\, \nabla(T) \mathcal{Z}_{y,t} \ ,
\end{equation}
leading for the coefficient functions $z_\ell(y,t)$ to the recursion formula \cite{OkuyamaSakai1}
\begin{equation}\label{eq:zlwl}
z_\ell
= t \left(\ell! \, w_\ell-\ell \left(\nabla(T)- \frac{1}{t T} \right) z_{\ell-1}\right) \ .
\end{equation}
The first few coefficient functions $z_\ell$ are calculated to be
\begin{gather} \label{eq:zlsolutions}
z_0=t\ , \quad
z_1 =\left(1+\frac{g_s^4 }{240 t^4 T^{6} } \right) I_2 \ , \nonumber \\
z_2 =\left(\frac{7 g_s^4}{240 t^5 T^{6}}+\frac{g_s^6}{576 t^7 T^9}+\frac{g_s^8}{57600 t^9 T^{12}} \right) I_2^2
+\left(-2+\frac{g_s^2}{12 t^2 T^{3}}+\frac{g_s^4}{120 t^4 T^{6}}+\frac{g_s^6}{3360 t^6 T^9} \right) I_3\ .
\end{gather}
Let us point out some physical implications of the low temperature expansion scheme in the variables $(y,t)$. For the on-shell values~\eqref{eq:ytexpansionpoint} of $(y,t)$ for JT~gravity coupled to a gas of defects --- and compared to the expansion scheme~\eqref{eq:TExpScheme1} --- the low temperature expansion of the partition function depends on the identification angle $\alpha$ already at leading orders in the temperature~$T$. Namely, compared to the result~\eqref{eq:Z1lowtemp}, one finds upon inserting eq.~\eqref{eq:ytexpansionpoint} into the expansion~\eqref{eq:ZT}
\begin{equation}
Z(T) = \frac{(1+\frac{\alpha^2-4\pi^2}{2} \epsilon+\ldots)\,T^{\frac32}\,\text{e}^{\frac{g_s^2}{24 T^3}+\frac{g_s^2(4\pi^2-\alpha^2)}{24 T^3} \epsilon +\frac{2\pi^2}T \epsilon + \ldots }}{\sqrt{2\pi}\, g_s}
+ \mathcal{O}(T) \ ,
\end{equation}
where the dots `$\ldots$' indicate subleading terms in $\epsilon$ at order $\mathcal{O}(\epsilon^2)$.
The above analysis of the low temperature limit is general in the sense that we can consider other on-shell values for the couplings $(y,t)$ (and also for the couplings $t_k=\gamma_k+\sdef_k$ appearing implicitly in the expansion~\eqref{eq:ZT}). In particular, if we consider small deviations from the on-shell values $(y,t)=(0,1)$ (and small perturbations $\sdef_k$ for $k\ge 2$) of pure JT~gravity, we can study the low temperature expansions of deformations to pure JT~gravity together with their scheme dependence.
A particularly interesting example in this context is discussed in ref.~\cite{WittenDeformations}, which corresponds to coupling JT~gravity to a gas of defects with two defect species characterised by couplings of equal magnitude but opposite sign, $\epsilon_1 = -\epsilon_2 = \epsilon$, and by their respective identification angles $\alpha_1$ and $\alpha_2$. On the one hand, for the low temperature double scaling limit~\eqref{eq:lowtemplimit} we arrive at
\begin{equation}
Z(T) = \frac{T^{\frac32}\,\text{e}^{\frac{g_s^2}{24 T^3}}}{\sqrt{2\pi}\, g_s}
+ \mathcal{O}(T) \ ,
\end{equation}
which results in an expected vanishing threshold energy, cf.\ eq.~\eqref{eq:GSenergy}. On the other hand the double scaling limit~\eqref{eq:lowtemplimityt} yields
\begin{equation}
Z(T) = \frac{(1+\frac{(\alpha_1^2-\alpha_2^2)}{2} \epsilon+\ldots)\,T^{\frac32}\,\text{e}^{\frac{g_s^2}{24 T^3}+\frac{g_s^2(\alpha_2^2-\alpha_1^2)}{24 T^3} \epsilon + \ldots }}{\sqrt{2\pi}\, g_s}
+ \mathcal{O}(T) \ ,
\end{equation}
where a non-trivial dependence on the identification angles $\alpha_1$ and $\alpha_2$ now enters because the couplings $(y,t)$ govern the physical quantities that are kept constant in the double scaling limit~\eqref{eq:lowtemplimityt}.
\subsection{Low Temperature Expansion Schemes for Multiple Boundaries}
Finally, let us remark that the low temperature discussion of the previous subsection can be repeated with multiple boundary components in the same way. The low temperature expansion in this case is studied by Okuyama and Sakai in ref.~\cite{OkuyamaSakai2}.
As a preparation for Section~\ref{Spectral Form Factor}, we just record here the result of the low temperature limit for the partition function $Z(\beta_1,\beta_2)$ with two boundary components with inverse temperatures $\beta_1$ and $\beta_2$. Then the low temperature expansion scheme~\eqref{eq:lowtemplimityt} generalises to the double scaling limit
\begin{equation}
\beta_i \to +\infty \quad \text{with} \quad \frac{g_s \beta_i^{\frac32}}{t} = \text{const.} \ , \ y \beta_i = \text{const.}
\quad \text{for} \quad i=1,2 \ ,
\end{equation}
which yields the following result for the low temperature limit of the partition function $Z(\beta_1,\beta_2)$ \cite{OkuyamaSakai2}
\begin{equation} \label{eq:ZZT}
Z(y,t;\beta_1,\beta_2) = \frac{t\,\text{e}^{\frac{g_s^2(\beta_1+\beta_2)^3}{24 t^2}+y(\beta_1+\beta_2)}}{\sqrt{2\pi}\, g_s (\beta_1+\beta_2)^{\frac32}} \,
\operatorname{erf}\left(\frac{g_s}{2\sqrt{2}\,t}\sqrt{\beta_1\beta_2(\beta_1+\beta_2)} \right)
+\mathcal{O}(\beta_1^{-1},\beta_2^{-1}) \ .
\end{equation}
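For the numerical analysis in the next section it is convenient to have eq.~\eqref{eq:ZZT} available in code. The following minimal sketch in R (the helper names are our own choices) transcribes the closed-form expression, expressing the error function through the standard normal distribution function via $\operatorname{erf}(x)=2\Phi(x\sqrt{2})-1$:
\begin{verbatim}
## Minimal sketch: low temperature two-boundary partition function, eq. (ZZT).
## The helper name Z2 is an illustrative choice.
erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1

Z2 <- function(b1, b2, y, t, g_s) {
  t * exp(g_s^2 * (b1 + b2)^3 / (24 * t^2) + y * (b1 + b2)) /
    (sqrt(2 * pi) * g_s * (b1 + b2)^(3 / 2)) *
    erf(g_s / (2 * sqrt(2) * t) * sqrt(b1 * b2 * (b1 + b2)))
}
\end{verbatim}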
\section{Phase Transition and Spectral Form Factor}\label{Spectral Form Factor}
Using the low temperature limit of the partition functions $Z(y,t;\beta_1,\beta_2)$ and $Z(y,t;\beta)$ of the previous section and applying numerical methods, we study two well-established and related phenomena, namely the phase transition \cite{doubletrumpet,Engelhardt:2020qpv}, which exchanges the dominance between the connected and the disconnected geometries in the two boundary partition function, and the spectral form factor,\footnote{The spectral form factor was first introduced in the AdS/CFT context in ref.~\cite{Papadodimas:2015xma}.} which arises as a certain analytic continuation of the two-boundary partition function. In particular, we analyse how these quantities depend on the defect parameters.
\begin{figure}[t]
\centering
%
\includegraphics[scale=0.5]{twodisks2check.pdf}\hspace{20ex}
\includegraphics[scale=0.4]{Connected2check.pdf}
\caption{The left figure shows a disconnected geometry --- here illustrated in terms of two $AdS_2$ disks at genus zero --- that dominates the spectral form factor at early times $\tau$, whereas the right figure depicts a connected geometry with two boundaries --- shown is the double trumpet contribution --- that becomes dominant at late times $\tau$. }
\label{fig:SSFGeometries}
\end{figure}
\subsection*{Phase Transition}
There are two types of geometries that contribute to the two-point function. On the one hand there are geometries with two disconnected components, each with a single boundary component, and on the other hand there are connected geometries with two boundary components, as illustrated in fig.~\ref{fig:SSFGeometries} (where only the genus zero contributions are depicted for simplicity). At low temperatures we have, according to eqs.~\eqref{eq:ZT} and \eqref{eq:ZZT} (in the chosen low temperature expansion scheme), the following two quantities
\begin{equation}\label{eq:generaltwopointfunctions}
\begin{aligned}
Z(y,t;\beta)^2&=\frac{e^{2 y \beta} e^{\frac{g_s^2 \beta^3}{12 t^2}}}{2 \pi g_s^2 \beta^{3}}t^2
+ \mathcal{O}(\beta^{-1}) \ ,\\
Z(y,t;\beta,\beta) &=\frac{e^{2 y \beta} e^{\frac{\beta^3 g_s^2}{3 t^2}}}{4 \sqrt{ \pi} \beta^{3/2}g_s}\, t\, \operatorname{erf}\left(\frac{\beta^{3/2} g_s}{2 t}\right)
+ \mathcal{O}(\beta^{-1}) \ .
\end{aligned}
\end{equation}
Independently of the specific choices for the on-shell values of the parameters $(y,t)$, we can make some quite general comments. Taking the ratio of the two-point contributions in eq.~\eqref{eq:generaltwopointfunctions}, the dependence on the shift in energy given by $y$ drops out (at leading order in the temperature). Hence, the phase transition (and as a consequence also the spectral form factor introduced later) is determined by the off-shell parameter $t$. Explicitly analysing the ratio of the two contributions~\eqref{eq:generaltwopointfunctions} in the low temperature regime and introducing the dimensionless constant $c:={g_s\beta^{3/2}}/{t}$ yields the dimensionless (numerical) critical value $c_\text{crit.}$ for the phase transition according to
\begin{equation}
\frac{Z(y,t;\beta,\beta)}{Z(y,t;\beta)^2}=1 \ \Rightarrow\ \frac{1}{2} \sqrt{\pi } c e^{\frac{c^2}{4}} \text{erf}\left(\frac{c}{2}\right)=1\ \Rightarrow\
c_\text{crit.} \approx \pm 1.24013 \ .
\end{equation}
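The quoted critical value can be checked numerically; a minimal sketch in R (the bracketing interval passed to \texttt{uniroot} is an assumption) reads
\begin{verbatim}
## Numerical check: solve (1/2) sqrt(pi) c exp(c^2/4) erf(c/2) = 1 for c.
erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1
f   <- function(c) 0.5 * sqrt(pi) * c * exp(c^2 / 4) * erf(c / 2) - 1
uniroot(f, interval = c(0.5, 2))$root   # ~ 1.24013
\end{verbatim}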
Let us now focus on JT gravity with defects. This means that we take $(y,t)$ to their on-shell values~\eqref{eq:ytonshell} and that we work with the quantities in eq.~\eqref{eq:twopointfunctions}, where the on-shell values of $(y,t)$ are found numerically for a given set of $\epsilon$ and $\alpha$, i.e.
\begin{equation}\label{eq:twopointfunctions}
\begin{aligned}
Z(\beta)^2&=\left.\frac{e^{2 y \beta} e^{\frac{g_s^2 \beta^3}{12 t^2}}}{2 \pi g_s^2 \beta^3}t^2\right|_{y,t \text{ on-shell}}\,,\\
Z(\beta,\beta) &=\left.\frac{e^{2 y \beta} e^{\frac{\beta^3 g_s^2}{3 t^2}}}{4 \sqrt{ \pi} \beta^{3/2} g_s}\, t\, \text{erf}\left(\frac{\beta^{3/2} g_s}{2 t}\right)\right|_{y,t \text{ on-shell}}.
\end{aligned}
\end{equation}
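Given the critical constant $c_\text{crit.}$, the inverse temperature at the transition follows by inverting $c = g_s\beta^{3/2}/t$, i.e. $\beta_\text{crit.} = (c_\text{crit.}\, t/g_s)^{2/3}$. A minimal sketch in R (the values of $t$ and $g_s$ below are illustrative placeholders; in the defect case $t$ is the numerically determined on-shell value):
\begin{verbatim}
## Sketch: phase transition inverse temperature from c = g_s beta^{3/2} / t.
## The values of t_onshell and g_s are illustrative placeholders.
c_crit    <- 1.24013
t_onshell <- 1
g_s       <- 0.0027
beta_crit <- (c_crit * t_onshell / g_s)^(2 / 3)
beta_crit
\end{verbatim}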
\noindent Keeping the above in mind, we plot the connected and disconnected parts of the two-point function in fig.~\ref{fig:ConnVSDiscBETA}.
\noindent We can see that the general behaviour of JT~gravity in the absence of defects is reproduced: at high temperatures the disconnected geometry dominates, whereas at low temperatures the connected part constitutes the dominant contribution \cite{CJ3,OkuyamaSakai2}. This is the two-dimensional instantiation of a Hawking--Page phase transition \cite{WittenHawkingPage,MaldacenaAdS2}. However, we should also notice that, as shown in fig.\ \ref{fig:PhaseTransTempBETA}, for larger $\epsilon$ the phase transition occurs at a smaller value of $\beta$.
\begin{figure}[ht]
\centering
\includegraphics[scale=1.3]{connVSdiscBETAnew.pdf}
\caption{We plot the connected versus the disconnected geometry contributions of eq.~\eqref{eq:twopointfunctions}. The identification angle $\alpha$ is fixed to $\alpha=\frac{\pi}{2}$, the defect amplitude is $\epsilon=0.001$ and $g_s=0.0027$.
In the range of the plot we have a maximum relative error of ${\sim}4.2\%$, which measures the ratio of the terms ignored (of order $T^2$) over the terms kept in the low temperature expansion.}
\label{fig:ConnVSDiscBETA}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[scale=1.2]{PhaseTransTempBETAnew.pdf}
\caption{The phase transition temperature (the point for which $Z(\beta,\beta)=Z(\beta)^2$) as a function of the defect amplitude. The identification angle $\alpha$ is fixed to $\alpha=\frac{\pi}{2}$ and $g_s=0.0027$.}
\label{fig:PhaseTransTempBETA}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=1.3]{SFFnew.pdf}
\caption{Shown is the spectral form factor for different values of $\epsilon$ with $g_s=\frac{1}{4\cdot180^{3/2}}$, $\beta=180$, $\alpha=\pi/2$.}
\label{fig:SFFproper}
\end{figure}
\subsection*{Spectral Form Factor}
Now we come to the analysis of the spectral form factor $Z(\beta+i \tau,\beta-i \tau)$, which is a real function of the time $\tau$ defined via an analytic continuation of the two-point function $Z(\beta_1,\beta_2)$. The spectral form factor is essential in the analysis of quantum chaotic behaviour and plays an increasingly important role in the study of black hole physics \cite{BlackHolesandRandomMatrices}. For the case of JT gravity in the presence of defects the spectral form factor has not yet been analysed. The task is to understand the role of the parameter $\epsilon$.
For large classes of systems obeying quite general assumptions (such as the eigenstate thermalisation hypothesis \cite{PhysRevE.50.888, PhysRevA.43.2046}), one expects the spectral form factor to exhibit certain universal features. Early times are characterised by a decay and hence a ``slope'', followed by a rise and hence a ``ramp'', and lastly at late times we encounter a ``plateau'' with a fixed value given by the one-point function $Z(2\beta)$.\footnote{The ``plateau'' cannot be obtained if the perturbative series is truncated at some finite $g$. To render the asymptotic series convergent, non-perturbative contributions have to be taken into account \cite{doubletrumpet}. In the zero temperature/zero coupling limit considered in ref.~\cite{OkuyamaSakai1} and here, the perturbative series converges.} Let us define the normalised spectral form factor in the following manner
\begin{align}\label{eq:nSFF}
G(\beta,\tau):=\frac{ Z(\beta + i \tau,\beta - i \tau )}{Z(2 \beta)} =\text{erf}\left(\frac{\beta ^{3/2} g_s \sqrt{\frac{\tau ^2}{\beta ^2}+1}}{2t}\right)\,,
\end{align}
where we are normalising with respect to the contribution $Z(2 \beta)$ as this sets the height of the plateau. Due to the low temperature dominance of the connected contribution as outlined above, we would expect late times to be dominated by connected contributions. A closer look at eq.~\eqref{eq:generaltwopointfunctions} shows that this is guaranteed by the functional form of both expressions. We are only considering connected geometries in eq.~\eqref{eq:nSFF} as we are mainly interested in the ramp and plateau behaviour. We want to reiterate some statements of refs.~\cite{SSS,OkuyamaSakai2}, which help in understanding the importance of the corrections outlined in section \ref{LowTemperatureLimit}. The $g=0$ part of the two-boundary correlator only furnishes the ``ramp'' behaviour, as shown in ref.~\cite{doubletrumpet}. We can see that the approximation \eqref{eq:lowtemplimit} already allows for the creation of the plateau \cite{OkuyamaSakai2}. Furthermore, if we work in the limit \eqref{eq:lowtemplimityt}, both the phase transition and the spectral form factor become sensitive to the presence of defects.
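To make the ramp-to-plateau behaviour of eq.~\eqref{eq:nSFF} concrete, one may evaluate $G$ directly; in the following minimal R sketch all parameter values are illustrative and are not those underlying fig.~\ref{fig:SFFproper}:
\begin{verbatim}
## Sketch: normalised spectral form factor G(beta, tau) of eq. (nSFF).
## All parameter values below are illustrative.
erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1
G <- function(tau, beta, g_s, t)
  erf(beta^(3 / 2) * g_s * sqrt(tau^2 / beta^2 + 1) / (2 * t))

tau <- 10^seq(0, 5, length.out = 400)
plot(tau, G(tau, beta = 180, g_s = 1 / (4 * 180^(3 / 2)), t = 1),
     log = "x", type = "l", xlab = "tau", ylab = "G")  # ramp, then plateau at 1
\end{verbatim}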
We note that the transition from the ramp to the plateau now depends on $\epsilon$. More specifically, larger positive values of $\epsilon$ move it to earlier times, whereas negative values move it to later times, which mirrors the behaviour found for the phase transition.
We may also consider changes in the identification angle~$\alpha$ while keeping $\epsilon$ fixed for both the phase transition and the spectral form factor. While the dependence on $\alpha$ within the range $0\le\alpha<\pi$ can be studied straightforwardly with the methods presented here, it would be even more interesting to consider changes in $\alpha$ over the full range of identification angles. This could possibly be achieved by implementing the results of ref.~\cite{TuriaciBluntDefects}.
\section{Some Comments on two-dimensional de~Sitter space }\label{ds}
In both refs.~\cite{MaldacenadS,MaloneydS} a proposal is made for the application of the matrix model/JT~duality to two-dimensional de~Sitter space. The logic is the following: As Lorentzian de~Sitter space can be analytically continued to Euclidean Anti-de~Sitter space \cite{Maldacenadsviaads}, in the two-dimensional setting there should exist a map translating the results for the partition function of ref.~\cite{SSS} to the wavefunction of the universe~$\Psi$ at future infinity $\mathcal{I}^{+}$ and past infinity $\mathcal{I}^{-}$ \cite{MaldacenadS,MaloneydS}. For the semi-classical contribution it can be shown that the wavefunction can be mapped to the disk result via the identification
\begin{align}\label{eq:dsbeta}
\beta \rightarrow \begin{cases} -i \ell\,, & \text{future}\,,\\
\phantom{-}i \ell\,, & \text{past}\,,\end{cases}
\end{align}
where $\ell$ is the renormalised length of both the future and past circles. For higher genus contributions an approach was outlined in ref.~\cite{MaloneydS}, in which the boundary conditions inherited from de~Sitter space require the analytic continuation of the geodesic length $b\rightarrow i \alpha$ such that this instance of JT gravity requires the inclusion of surfaces with conical singularities. Sticking to the one-point function for the moment, following ref.~\cite{MaloneydS} the wave function on a single future boundary would be given by
\begin{equation}
\begin{aligned}\label{eq:dssingleboundary}
\Psi(\ell) &=\frac{(2 \pi^2)^{3/2}}{g_s} Z^{\text{disk}}\left( - i \ell \right)-\sum_{g=1}^{\infty}g_s^{2g-1}\int\displaylimits_{0}^{\infty}d\alpha\, \alpha\, \frac{e^{\frac{ i \alpha^2}{4 \pi^2 \ell}}}{2 \pi \sqrt{\pi} \sqrt{- i \ell}}V_{g,(i \alpha)}\\
&= \frac{(2 \pi^2)^{3/2}}{g_s} Z^{\text{disk}}\left( -i \ell \right)+\frac{1}{g_s^2}B\left(- i \ell\right)F(\{\gamma_k\})\ ,
\end{aligned}
\end{equation}
which would indeed correspond to $Z(-i \ell)$. In general this approach implies that the mere analytic continuation \eqref{eq:dsbeta} of the partition function of ref.~\cite{SSS} corresponds to the wave function $\Psi$, i.e.
\begin{equation}
\begin{aligned}\label{eq:noboundarywavefunction}
&\Psi_{\text{conn.}} \left(\ell_1,\ldots,\ell_{n_{+}},\ell_{n_{+}+1},\ldots,\ell_{n_{-}}\right)\\ \mathrel{\widehat{=}} &\left\langle\text{tr}\left(e^{i \ell_1 H}\right)\ldots\text{tr}\left(e^{i \ell_{n_+} H}\right)\text{tr}\left(e^{-i \ell_{n_{+}+1} H}\right)\ldots\text{tr}\left(e^{-i \ell_{n_{-}} H}\right) \right\rangle\ .
\end{aligned}
\end{equation}
However, as clearly stated in ref.~\cite{MaloneydS}, for eqs.~\eqref{eq:dssingleboundary} and \eqref{eq:noboundarywavefunction} to hold in full generality it is necessary that the conical volumes are obtained from a mere analytic continuation as in eq.~\eqref{eq:conicalWPvolumes}. This, however, is only established for $\alpha < \pi$, whereas the results of ref.~\cite{TuriaciBluntDefects} propose for general identification angles~$\alpha$ an implicit definition of Weil--Petersson volumes that goes beyond the analytic continuation prescription of eq.~\eqref{eq:conicalWPvolumes}. Hence, due to the integration range over $\alpha$ in eq.~\eqref{eq:dssingleboundary}, the naive analytic continuation of the individual volumes $V_{g,(b)}$ of the (asymptotic) thermal partition function possibly requires a further modification to the approach of ref.~\cite{MaloneydS} for the computation of the wavefunction $\Psi$.\footnote{It is also not immediately clear whether the path integral may be performed in the same manner as in ref.~\cite{MaloneydS} for the volumes of ref.~\cite{TuriaciBluntDefects}; see the comments in ref.~\cite{TuriaciBluntDefects}.} Moreover, the authors of ref.~\cite{MaloneydS} show that eq.~\eqref{eq:noboundarywavefunction} may be derived from the approach of ref.~\cite{Hartle:1983ai}, such that the wavefunction~$\Psi$ is also equivalent to the no-boundary wavefunction. Therefore, further investigation is required in order to understand to what extent the correspondence between the Hartle--Hawking construction of ref.~\cite{Hartle:1983ai} and the approach of ref.~\cite{MaloneydS} via continuation to Euclidean Anti-de~Sitter space holds at the non-perturbative level, and to what extent the validity of eq.~\eqref{eq:noboundarywavefunction} is guaranteed beyond the semi-classical level.\footnote{We would like to thank Joaquin Turiaci for valuable correspondence on these points.}
\section{Conclusion and Outlook} \label{sec:concl}
In this work we compute thermal partition functions of deformed JT~gravity theories from solutions to the KdV~hierarchy. These solutions govern the correlation functions of two-dimensional topological gravity, and --- similarly to ref.~\cite{OkuyamaSakai1} --- we describe both undeformed and a rather general class of deformed theories of JT~gravity in terms of solutions to the KdV~hierarchy. In refs.~\cite{Maxfield3gravity,WittenDeformations} deformations of JT~gravity are described by suitable scalar potentials that do not alter the asymptotic boundaries of the two-dimensional hyperbolic space-time geometries. It would be interesting to relate deformations arising from scalar potentials to solutions of the KdV~hierarchy in the topological gravity description. While we can identify certain classes of deformations in both formulations --- in particular those that arise from a gas of defects with a finite number of defect species --- it would be interesting to investigate whether these two approaches towards deformations of JT~gravity are actually in one-to-one correspondence. As both descriptions yield infinite-dimensional deformation spaces, a meaningful comparison of the two approaches to the deformation problem presumably requires a careful treatment using methods of functional analysis.
Interestingly, both standard JT~gravity and JT~gravity interacting with a finite number of defect species are governed by spectral densities given in terms of (modified) Bessel functions, whereas for more general deformations other transcendental functions occur. Therefore, it would be interesting to understand to what extent standard JT~gravity and JT~gravity interacting with a gas of defects are singled out from other solutions to the KdV~hierarchy. For instance, the Witten--Kontsevich tau-function relates to the free energy of two-dimensional topological gravity \cite{WittenIntersection, KontsevichIntersection} and the Br\'ezin--Gross--Witten tau-function describes JT~supergravity \cite{Okuyama:2020qpm}. Yet other tau-functions are discussed from the mathematical perspective in ref.~\cite{MR4222602}. As the connection between specific solutions to the KdV~hierarchy and two-dimensional gravitational theories does not seem to be arbitrary, a systematic investigation of tau-functions and the associated physical theories is an interesting idea to pursue.
As already addressed in ref.~\cite{SSS}, the discussed solutions to the KdV~hierarchy and the resulting thermal partition functions are asymptotic series in the genus expansion parameter~$g_s$, which only become analytic functions once non-perturbative effects are taken into account. Therefore, a challenging task is to derive solutions to the KdV~hierarchy that are analytic instead of just being asymptotic series in the parameter $g_s$. In refs.~\cite{Dalley:1991qg,Dalley:1991vr} a non-perturbative completion of the solutions to the KdV~hierarchy is proposed that has recently been applied to JT~gravity in an interesting series of works \cite{CJ1,CJ3,CJ4}. Both the results of ref.~\cite{OkuyamaSakai1} and our work furnish easy and systematic access to higher genus contributions, such that modern resurgence techniques could come into play to address non-perturbative effects in this context. Similar considerations in that direction are made in ref.~\cite{finitecutoffresurgence} for JT~gravity with a finite cutoff at the asymptotic space-time boundaries,\footnote{More work on JT gravity restricted to a finite $AdS_2$ subregion can be found in refs.~\cite{Gross:2019ach,Iliesiu:2020zld}. The general paradigm of finite cutoff $AdS/CFT$ was first explored in ref.~\cite{McGough:2016lol}.}
where a Borel resummation can be performed for the asymptotic series with respect to the cutoff parameter.
Applying the approach developed by Okuyama and Sakai \cite{OkuyamaSakai1,OkuyamaSakai2}, we compute in a certain low temperature limit the thermal partition functions (with one or more boundary components) for JT~gravity with deformations such as those arising from the presence of a gas of defects. In this limit the studied thermal partition functions become exact because non-perturbative corrections are suppressed. We determine the critical temperature of the Hawking--Page phase transition as a function of the defect parameters by analysing the two-boundary partition function with numerical methods. Depending on the sign of the defect coupling constant we find that the phase transition occurs at either higher or lower temperatures. The spectral form factor exhibits a similar behaviour, namely the time scale for the onset of the plateau is shifted to earlier or later times depending on the sign of the defect coupling. While we expect that this behaviour of the phase transition and the spectral form factor as a function of the defect parameters does not change upon including further subleading temperature corrections, it is nevertheless desirable to include further terms in the low temperature expansion in order to reliably analyse the Hawking--Page phase transition and the spectral form factor as a function of the defect parameters at higher temperature scales. JT~gravity in the presence of defects is linked to 3d gravity in the near-extremal limit, as reported in ref.~\cite{Maxfield3gravity}. It would be nice to understand and to interpret the changes in both the Hawking--Page phase transition and the spectral form factor more explicitly in that context.
We briefly comment on a possible matrix model/JT~gravity duality for two-dimensional de~Sitter backgrounds. Here we point out an apparent puzzle in light of the recent results of ref.~\cite{TuriaciBluntDefects}, which suggest that the Weil--Petersson volumes in the presence of conical singularities with large identification angles are in general not obtained via analytic continuation from surfaces with conical singularities with small identification angles. As a consequence, computing the wave function of the universe for JT~gravity on two-dimensional de~Sitter by use of analytic continuation techniques may only be an approximation. It is, however, still possible that upon going beyond the study of asymptotic series
the validity of this approach is nevertheless justified. We believe that this issue deserves further study.
\bigskip
\section*{Acknowledgements}
We would like to thank Alexander Belavin, Clifford Johnson, Joaquin Turiaci, Kefeng Liu, and Hao Xu for discussions and correspondence.
The work of H.J. is supported by the Cluster of Excellence ``Precision Physics, Fundamental Interactions and Structure of Matter'' (PRISMA+ --- EXC 2118/1).
The work of J.K.K. is supported in part by the Heising-Simons Foundation, the Simons Foundation, and National Science Foundation Grant No. NSF PHY-1748958. A.K. is supported by the ``Onassis Foundation" as an ``Onassis Scholar" (Scholarship ID: F ZO 030/2 (2020-2021)). S.F., J.K.K., and A.K.\ acknowledge support by the Bonn Cologne Graduate School of Physics and Astronomy (BCGS).
\newpage
\bibliographystyle{utphys}
\label{sec:intro}
In statistics, spatial models are useful when residuals exhibit correlation in space after accounting for known covariates in a regression-type setting. Spatial data has long been classified into three categories, namely geostatistical (point-level) data, lattice (area-level) data, and point-pattern data \citep{cressie2015}. Due to reasons such as measurement method constraints and privacy considerations, different types of spatial data may be collected in different settings. Depending on the data type, different statistical models that capture the residual spatial correlation are used. In a nutshell, (1) geostatistical data are commonly collected in environmental science. An example is rainfall at weather stations \citep{Kyriakidis2001}, where the exact geo-coordinates of each observation are known. The strength of dependency is a function of the distance separation between two locations. Kriging can be used to model a smooth surface. (2) Lattice data can be either gridded or irregularly aligned, and occur in the form of aggregated observations over areas. They are often collected in epidemiology, such as the disease prevalence of each district \citep{Chammartin2016}. Another source of lattice data is from measuring instruments, such as satellites, where the spatial resolution is intrinsically limited, resulting in gridded observations. Gaussian Markov random field (GMRF) models such as conditionally autoregressive models are typically used to capture the spatial dependency between neighboring areas. Finally, (3) there is point-pattern data, where the locations themselves are stochastic. They are used to model epidemiological data of case locations \citep{Gatrell1996} or other event locations such as epicenters of earthquakes \citep{Ogata1998}. One approach to model such data is using a Poisson process, where the intensity function may depend on observed covariates. Despite having different modeling strategies for different types of data, a common purpose of all the aforementioned statistical models is to capture the residual spatial dependency between different observations. A natural way to do this is using Gaussian processes to model continuous spatial surfaces, which represent the underlying scientific process that drives the response variables together with observed covariates.
There are already attempts to analyze lattice data and point-pattern data using Gaussian process-based models. For example, \cite{Kelsall2002} modeled aggregated disease counts using a Gaussian process approach. The authors derived analytical approximations to the area-level Poisson mean and produced continuous underlying relative risk functions using Markov chain Monte Carlo (MCMC). Point-pattern data can be linked to Gaussian processes through a log-Gaussian Cox process (LGCP) \citep{Moller1998}. Instead of having a fixed intensity at each location, the intensity is modeled as a log-Gaussian process, yielding a doubly stochastic Poisson process. In modern spatial statistics, researchers are dealing with increased heterogeneity in the structure of spatial data that is collected. Different data sources may contain overlapping information concerning the same research questions. Recently, several approaches in the literature have proposed to use Gaussian processes as a basis to fuse spatial data of different types. \cite{Moraga2017} proposed a model to analyze spatial data available as both geostatistical and lattice types, with the same set of covariates and a response variable observed at two different spatial resolutions. Their computation is made efficient using integrated nested Laplace approximations (INLA) \citep{Rue2009}. \cite{Wilson2018} extended the work by allowing non-Gaussian response variables. This opens up possibilities of modeling count data, which commonly occurs in epidemiological settings. \cite{Shi2017} proposed a fixed rank kriging-based fusion model to combine multiple lattice-type remote sensing datasets. Other works on spatial fusion models \citep{Berrocal2010,McMillan2010, Sahu2010} also implemented efficient algorithms for the specific model structures introduced in their applications. In general, spatial fusion models face a trade-off between flexibility and computational efficiency, i.e. a more flexible modeling structure comes with a higher computational cost. For example, as shown in \cite{Wilson2018}, their fusion model with normally-distributed response variables took an order of minutes using INLA to compute. A more flexible modeling structure for Poisson-distributed response took several weeks using Hamiltonian Monte Carlo-based inference method.
In this paper, we extend previous works in these two aspects. In terms of flexibility, our framework incorporates an additional data type, namely point-pattern data. To the best of our knowledge, this is the first spatial fusion framework that incorporates all three types of spatial data. We additionally allow arbitrary combinations of those three data types in multivariate settings. We propose a unifying framework that includes the features of several well-established models, such as linear models of co-regionalization (LMC) \citep{Wackernagel2003,MacNab2016}, spatial factor models \citep{Wang2003}, and shared component models \citep{Held2001}. In terms of computational cost, we implement a fully Bayesian-based approach using the Stan modeling language \citep{stan2017}. As a more efficient alternative, we offer an INLA-based implementation, which significantly reduces computation time from hours to minutes for thousands of observations. Last but not least, we benchmark the performance of these two implementations in terms of prediction and parameter estimation in simulation studies.
The rest of this paper is structured as follows: Section~\ref{sec:mod} introduces the unifying framework and explicitly show its link to existing spatial models. Section~\ref{sec:inf} discusses implementation strategies in Stan and INLA. Section~\ref{sec:sim} illustrates the framework using two simulated scenarios and an analysis on epidemiological datasets, as well as comparing the performance of our two implementations. Finally, we end with a summary followed by some discussion on identifiability problems and research outlook in Section~\ref{sec:dis}.
\section{Process-based Spatial Fusion Model}
\label{sec:mod}
\subsection{The Unifying Framework}
For $j=1,\dots, \ell$, we let $\boldsymbol{Y}_j(\cdot)$ denote the $j$th response variable with $n_j$ observations, with a conditional distribution that belongs to the exponential family. Each of the $\ell$ responses can take any of the following data types: i) geostatistical data, observed at locations $\boldsymbol{s}_j \in D \subseteq \mathcal{R}^2$; ii) lattice data observed at areas $\boldsymbol{a}_j \subset D$; or iii) point-pattern data that has been discretized to a regular fine grid containing mostly zeros or ones, observed at gridded locations $\boldsymbol{v}_j \in D$, where $\boldsymbol{Y}_j(\boldsymbol{v}_j)$ denotes the number of events in the grid cell containing $\boldsymbol{v}_j$. Further, we let $\boldsymbol{X}_j(\cdot)$ denote a full (column) rank $n_j \times p$ matrix of spatially-referenced covariates that are observed at the same spatial units as the corresponding response variables, and $\boldsymbol{\beta}_j$ denote a $p \times 1$ vector of fixed effect coefficients. We assume there is a $q\times 1$ vector of zero-mean, unit variance, independent latent Gaussian processes $\boldsymbol{w}(\cdot)$ with an $\ell \times q$ design matrix $\boldsymbol{Z}$, i.e. $\boldsymbol{Z}_j\boldsymbol{w}(\cdot)$ is the $j$th linear combination of Gaussian processes. Each Gaussian process is parameterized by its own covariance function. Finally, a non-linear operator $B_j(\cdot)$ subsets and aggregates some components of $\boldsymbol{Z}_j\boldsymbol{w}(\cdot)$ such that it matches the spatial resolution of the corresponding response variable. Overall, the framework can be formulated as
\begin{equation}
g_j(\mathbb{E}[\boldsymbol{Y}_j(\cdot) \vert \boldsymbol{\beta}_j, \boldsymbol{Z}_j, \boldsymbol{w}(\cdot)]) = \boldsymbol{X}_j(\cdot)\boldsymbol{\beta}_j + B_j(\boldsymbol{Z}_j\boldsymbol{w}(\cdot)),
\end{equation}
where $g_j(\cdot)$ is a link function that corresponds to the conditional distribution of $\boldsymbol{Y}_j(\cdot)$. Fig.~\ref{fig:abs} outlines a graphical formulation of the framework.
\begin{figure}
\includegraphics[width = \textwidth]{graphicalabs.pdf}
\caption{A graphical formulation of the spatial fusion model framework, consisting of multiple response variables of different type and multiple latent processes.}
\label{fig:abs}
\end{figure}
Change of support problems \citep{Gotway2002} arise when lattice data needs to be modeled. We only observe aggregated information while the underlying process is continuous. We employ a sampling-points approximation approach to stochastic integrals \citep{Gelfand2001,Fuentes2005} for aggregating latent processes. Let $\boldsymbol{s}'$ denote the set of all sampling points and each area contains $H$ sampling points. For the $i$th area $\boldsymbol{a}_{ji}$ in $j$th response, under a linear link function we obtain
\begin{equation}
\label{eq:linear}
w(\boldsymbol{a}_{ji}) = \vert \boldsymbol{a}_{ji} \vert^{-1} \int_{\boldsymbol{u}\in \boldsymbol{a}_{ji}}w(\boldsymbol{u})d\boldsymbol{u} \approx \frac{1}{H}\sum_{\boldsymbol{s}'\in \boldsymbol{a}_{ji}}w(\boldsymbol{s}').
\end{equation}
When a nonlinear link function is used for a response variable, the aggregation in \eqref{eq:linear} will result in ecological bias \citep{Greenland1992}. For a general link function $g_j(\cdot)$, we have the following approximation,
\begin{equation}
\label{eq:nonlinear}
w(\boldsymbol{a}_{ji}) = g_j\left(\vert \boldsymbol{a}_{ji} \vert^{-1} \int_{\boldsymbol{u}\in \boldsymbol{a}_{ji}} g_j^{-1}\left(w(\boldsymbol{u})\right)d\boldsymbol{u}\right) \approx g_j\left(\frac{1}{H}\sum_{\boldsymbol{s}'\in \boldsymbol{a}_{ji}}g_j^{-1}\left(w(\boldsymbol{s}')\right)\right).
\end{equation}
Typically, a small $H$ is chosen to balance the trade-off between computational efficiency and model accuracy \citep{Fuentes2005,Liu2011}.
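As a concrete illustration of the two aggregation rules, the following minimal R sketch averages hypothetical latent values at $H$ sampling points within one area, once under a linear link as in Eq.~\eqref{eq:linear} and once under a log link as in Eq.~\eqref{eq:nonlinear}:
\begin{verbatim}
## Minimal sketch of the sampling-points approximation.
## The latent values w_s at the H sampling points are hypothetical draws.
set.seed(1)
H   <- 5
w_s <- rnorm(H, mean = 0, sd = 0.7)

w_area_linear <- mean(w_s)            # linear link
w_area_log    <- log(mean(exp(w_s)))  # log link, accounts for ecological bias
c(linear = w_area_linear, log = w_area_log)
\end{verbatim}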
Although the latent processes have a continuous index, we work with a finite set of locations in practice.
The set of locations $\mathcal{U}$ to be modeled in the latent processes $\boldsymbol{w}(\cdot)$ comprise the locations where geostatistical data are observed, the locations of sampling points for lattice data and gridded locations for point-pattern data. The non-linear operator $B_j(\cdot)$ takes a different form for different data types. For geostatistical and point-pattern data, the non-linear operator $B_j(\cdot)$ subsets $\boldsymbol{Z}_j\boldsymbol{w}(\cdot)$ to the corresponding locations of the $j$th response variable. For lattice data and a linear link function, $B_j(\cdot)$ subsets $\boldsymbol{Z}_j\boldsymbol{w}(\cdot)$ to the sampling point locations and aggregates them to the corresponding areas by taking averages. With non-linear link functions, $B_j(\cdot)$ first applies an inverse link function and then aggregates.
\subsection{Link to Existing Models}
Our proposed unifying framework utilizes elements from existing literature and combines them to create a flexible yet efficient spatial fusion model framework. As a result, there are some strong links in terms of model structure between this framework and some established methods in spatial statistics. At the same time, they share the same potential identifiability issues.
In univariate settings, the unifying framework allows us to model each type of spatial data individually with a latent Gaussian process. When we have geostatistical data, it results in a geostatistical regression \citep{cressie2015}. With Poisson-distributed lattice data, we obtain a sampling-points approximation to the model used in \cite{Kelsall2002}, which is an alternative modeling strategy to the Besag--York--Mollé model \citep{BYM}. With point-pattern data, we obtain a discretized LGCP \citep{Moller1998}.
In multivariate geostatistical data settings, the design matrix $\boldsymbol{Z}$ plays a pivotal role in the identifiability of model parameters. When the number of independent Gaussian processes is less than the number of responses, i.e. $q < \ell$, we obtain a spatial factor model \citep{Wang2003}. The latent spatial factors are assumed to be zero-mean unit-variance Gaussian processes, such that $\boldsymbol{Z}$ controls the variance (partial sill) of the latent processes. When $q = \ell$, we obtain a general LMC framework \citep{Wackernagel2003,Schmidt2003}. A similar LMC framework also exists for lattice data \citep{MacNab2016}. Identifiability issues occur in the LMC since the number of latent values to be estimated in the latent processes is equal to the total number of observations in the response variables. Additional spatial hyper-parameters and fixed-effect coefficients also need to be estimated. For this reason, regularization is done via one of the following: 1) an empirical Bayes method that fixes some of the hyper-parameters; 2) choosing informative prior distributions in Bayesian models; or 3) using a lower triangular matrix for $\boldsymbol{Z}$ \citep{Schmidt2003}. In cases of $q > \ell$, we obtain a model structure similar to that of shared component models \citep{Held2001} for Gaussian processes, where multiple outcomes have their own latent spatial components plus some shared spatial components. In this setting, the values in $\boldsymbol{Z}$ need to be even further constrained to avoid identifiability issues \citep{Held2001}.
Our framework is also linked to other process-based spatial data fusion models, which combine geostatistical and lattice data types. When we let the response variables represent the same information but have different data types, we obtain the model presented in \cite{Wilson2018}, where an explicit relationship is used to link multiple response variables. If we further allow different information to be represented in the response variables, we reach the generalized spatial fusion model framework proposed in \cite{Wang2018}.
To the best of our knowledge, there is no existing approach or implementation that jointly models all three types of spatial data in a multivariate framework. With those links to the existing approaches, our framework extends them by combining different features and enhancing the overall flexibility of spatial fusion models.
\section{Model Implementations}
\label{sec:inf}
It is well known that fitting full Gaussian processes in Bayesian models is computationally expensive in both univariate and multivariate settings. Marginalized and conjugate Gaussian process models dramatically save computation time but they are only available when fitting geostatistical data with normally-distributed outcomes \citep{Banerjee2014,zhang2019}. There exist several approaches to reduce the computational burden, such as low rank \citep{Cressie2007, Banerjee2008, Stein2008} and sparse \citep{Furrer2006, Rue2009, Datta2016} methods. Some of those approaches are utilized in existing spatial fusion models. \cite{Shi2017} adapted the spatial basis function approach from fixed rank kriging \citep{Cressie2007}. \cite{Moraga2017} used integrated nested Laplace approximations \citep{Rue2009}. \cite{Wang2018} exploited the nearest neighbor Gaussian process (NNGP) \citep{Datta2016}. In this paper, we offer two efficient implementation strategies for the unifying spatial fusion model framework. The first strategy follows an adaptation of NNGP implementation in \cite{Wang2019}. The second strategy follows \cite{Wilson2018} to use INLA, with additional approximations for non-linear link functions.
\subsection{Implementation using NNGP}
Fitting full Gaussian processes in a Bayesian hierarchical model is costly; therefore, we seek efficient methods to speed up the inference. An NNGP implementation approximates a full Gaussian process by assuming that latent variables in the Gaussian process are conditionally independent given their neighborhood sets, hence introducing sparsity in the precision matrix. \cite{Datta2016} showed that it significantly reduces computation time in geostatistical models while yielding results close to a full Gaussian process-based inference. In our implementation, we let each latent spatial process $w(\cdot)$ follow an independent NNGP. Let $w_\mathcal{U}$ denote a latent process on the set of locations $\mathcal{U}$, then the NNGP likelihood according to \cite{Datta2016} can be written as
\begin{equation}
\label{eq:nngp}
p(w_\mathcal{U}) = \prod_{i=1}^{n_\mathcal{U}} \text{N}\Big(w(\boldsymbol{u}_i)\mid C_{\boldsymbol{u}_i, \mathcal{N}(\boldsymbol{u}_i)}C^{-1}_{\mathcal{N}(\boldsymbol{u}_i)}w_{\mathcal{N}(\boldsymbol{u}_i)}, C_{\boldsymbol{u}_i,\boldsymbol{u}_i}-C_{\boldsymbol{u}_i, \mathcal{N}(\boldsymbol{u}_i)}C^{-1}_{\mathcal{N}(\boldsymbol{u}_i)}C_{\boldsymbol{u}_i, \mathcal{N}(\boldsymbol{u}_i)}\Big),
\end{equation}
where $\mathcal{N}(\boldsymbol{u}_i)$ is the set of $\max(i-1,m)$ nearest neighbors from $\{\boldsymbol{u}_1, \boldsymbol{u}_2,\dots, \boldsymbol{u}_{i-1}\}$ for location $\boldsymbol{u}_i$ with a fixed constant $m$, $C_{\boldsymbol{u}_i, \mathcal{N}(\boldsymbol{u}_i)}$ is the cross-covariance matrix between the latent process $w(\boldsymbol{u}_i)$ and its neighbors $\mathcal{N}(\boldsymbol{u}_i)$, $C_{\mathcal{N}(\boldsymbol{u}_i)}$ is the covariance matrix of $w_{\mathcal{N}(\boldsymbol{u}_i)}$, and $C_{\boldsymbol{u}_i,\boldsymbol{u}_i}$ is the variance of $w(\boldsymbol{u}_i)$. The variance and covariance matrices are parameterized by spatial hyperparameters.
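To illustrate a single factor of the likelihood in Eq.~\eqref{eq:nngp}, the following minimal R sketch computes the conditional mean and variance of $w(\boldsymbol{u}_i)$ given its neighbors under an exponential covariance function; all locations, latent values and hyperparameter values are illustrative assumptions:
\begin{verbatim}
## Sketch: one conditional factor of the NNGP likelihood with an
## exponential covariance C(d) = sigma2 * exp(-d / phi).
sigma2 <- 0.5; phi <- 1; m <- 5
set.seed(2)
u_i  <- c(5, 5)                             # target location
N_ui <- matrix(runif(2 * m, 4, 6), m, 2)    # hypothetical neighbor coordinates
w_N  <- rnorm(m)                            # latent values at the neighbors

expcov <- function(D) sigma2 * exp(-D / phi)
C_NN <- expcov(as.matrix(dist(N_ui)))               # neighbor covariance
C_iN <- expcov(sqrt(colSums((t(N_ui) - u_i)^2)))    # cross-covariance

cond_mean <- drop(C_iN %*% solve(C_NN, w_N))
cond_var  <- sigma2 - drop(C_iN %*% solve(C_NN, C_iN))
c(mean = cond_mean, sd = sqrt(cond_var))
\end{verbatim}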
The full Bayesian hierarchical model is then implemented using Stan modeling language via the \textbf{rstan} package \citep{rstan2016} in R \citep{R2017}, consisting of likelihoods for each outcome variable and NNGP, as well as priors for fixed effect coefficients and spatial hyperparameters. Stan implements the No-U-Turn sampler \citep{Homan2014} based on Hamiltonian Monte Carlo, which provides efficient means of conducting full Bayesian inference for complex hierarchical structures.
\subsection{Implementation using INLA}
Although the computational efficiency can be improved by using NNGPs instead of full Gaussian processes, it is still not feasible to fit multiple latent processes with more than a few thousand locations in $\mathcal{U}$. Therefore, we implement an alternative strategy using INLA.
Over a fixed set of locations, a Gaussian process is equivalent to a Gaussian random field (GRF). \cite{Lindgren2011} established a connection between GRFs and GMRFs through a stochastic partial differential equation approach, where a GRF can be approximated by triangulating the spatial domain and using a weighted sum of basis functions as
\begin{equation}
\label{eq:inla}
w_\mathcal{U} \approx \sum_{k=1}^{m} r_k\phi_k,
\end{equation}
where $m$ is the number of points in the triangulation, $r_k$ are Gaussian distributed weights and ${\phi_k}$ are basis functions. The weights $\boldsymbol{r} = [r_1,r_2,\dots,r_m]$ form a GMRF with a sparse precision matrix, which makes computation efficient. In this approach, the covariance function must be a member of the Mat\'{e}rn family defined as
\begin{equation}
C_{\boldsymbol{u}_i, \boldsymbol{u}_j} = \frac{\sigma^2}{2^{\nu-1}\Gamma(\nu)}(\sqrt{2\nu}||\boldsymbol{u}_i - \boldsymbol{u}_j||/\phi)^\nu K_\nu (\sqrt{2\nu}||\boldsymbol{u}_i - \boldsymbol{u}_j||/\phi) ,
\end{equation}
where $||\boldsymbol{u}_i - \boldsymbol{u}_j||$ is the Euclidean distance between $\boldsymbol{u}_i$ and $\boldsymbol{u}_j$, $K_v$ is the modified Bessel function of second kind with order $\nu$, $\sigma^2$ is the partial sill and $\phi$ relates to the spatial range, with $\nu \in (0,1]$ being the smoothness parameter. The approximation in Eq.~\eqref{eq:inla} can be written as $w_\mathcal{U} \approx A\boldsymbol{r}$, where $A$ is a projection matrix that maps a GMRF defined on the triangulation mesh nodes to the observations' locations.
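In practice, the mesh, the SPDE model and the projection matrix can be set up with a few calls to the \textbf{INLA} package. The sketch below is for geostatistical locations only; the mesh settings are illustrative, and the PC prior arguments encode $P(\text{range}<2)=0.5$ and $P(\sigma>1.7)=0.05$, matching the priors used later in Section~\ref{sec:sim}:
\begin{verbatim}
## Sketch: SPDE model and projection matrix A for geostatistical locations.
## Mesh settings and prior values are illustrative.
library(INLA)
set.seed(2)
loc  <- matrix(runif(400, 0, 10), ncol = 2)
mesh <- inla.mesh.2d(loc = loc, max.edge = c(1, 2), cutoff = 0.3)
spde <- inla.spde2.pcmatern(mesh, alpha = 1.5,          # exponential covariance
                            prior.range = c(2, 0.5),    # P(range < 2)   = 0.5
                            prior.sigma = c(1.7, 0.05)) # P(sigma > 1.7) = 0.05
A <- inla.spde.make.A(mesh, loc = loc)  # maps GMRF weights r to w at loc
dim(A)                                  # n_locations x n_mesh_nodes
\end{verbatim}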
The key to implementing the spatial fusion models in INLA lies within the projection matrix, with a different structure required for each data type \citep{inlabook}. For geostatistical data, the $i$th row of the projection matrix corresponds to the $i$th location; it is filled with zeros except where 1) the location is on the $j$th vertex, in which case the $j$th column is 1, or 2) the location is within a triangulation area, in which case three cells get values based on barycentric weights from the three neighboring vertices of the triangulation. For lattice data, we construct a projection matrix that links the $i$th area with the mean value of the GRF at the mesh nodes which fall into the $i$th area. If the link function is linear, increasing the mesh density will increase the number of mesh nodes that fall into each area and hence better approximate the average. However, for non-linear link functions, it is preferable to have a less dense mesh due to Jensen's inequality \citep{jensen1906}, which states
\begin{equation}
\frac{1}{H}\sum_{\boldsymbol{s}'\in \boldsymbol{a}_{ji}}g_j^{-1}\left(w(\boldsymbol{s}')\right) \gtrapprox g_j^{-1}\left(\frac{1}{H}\sum_{\boldsymbol{s}'\in \boldsymbol{a}_{ji}}w(\boldsymbol{s}')\right)
\end{equation}
for the $i$th area in the $j$th response. The approximation is better when there is only a small number of mesh nodes within each area \citep{follestad2003}. Finally, for the point-pattern data, we use an augmentation approach by \cite{Simpson2016}, which avoids discretizing the spatial domain into grid cells. The projection matrix is built as an identity matrix with dimension equal to the total number of mesh nodes, row-bound with a projection matrix that is constructed on the observed locations in the same way as in the geostatistical case.
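For the lattice case described above, the averaging projection matrix can also be built by hand. In the minimal sketch below, the node-to-area assignment \texttt{area\_id} is hypothetical:
\begin{verbatim}
## Sketch: lattice projection matrix averaging mesh nodes within each area.
## area_id is a hypothetical assignment of mesh nodes to areas.
library(Matrix)
set.seed(3)
n_mesh <- 50; n_area <- 4
area_id <- sample(seq_len(n_area), n_mesh, replace = TRUE)

A_sum     <- sparseMatrix(i = area_id, j = seq_len(n_mesh), x = 1,
                          dims = c(n_area, n_mesh))
A_lattice <- Diagonal(x = 1 / rowSums(A_sum)) %*% A_sum  # row-wise averages
\end{verbatim}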
The final model fitting is done by stacking the projection matrices together and assigning appropriate priors using the \textbf{INLA} \citep{rINLA} package in R. Advances in \textbf{INLA} \citep{Martins2013} such as allowing multiple likelihoods and `copy' feature made this implementation possible.
\section{Illustrations}
\label{sec:sim}
In this section, we conduct two simulation studies and an analysis of epidemiological datasets to illustrate our proposed framework. All results are obtained in R version 3.5.0 \citep{R2017}, on a Linux server with 256GB of RAM and two Intel Xeon 6-core 2.5GHz processors. All R codes used in the simulation studies are provided in the supplementary material.
\subsection{Simulation Study One}
\label{subsec:sim1}
We are interested in modeling a single latent spatial process within a $[0,10]\times [0,10]$ square, using three spatial response variables, one from each type. First, we simulate a zero-mean GRF on densely uniformly distributed locations with a covariance matrix $C\left(\cdot,\cdot;\sigma^2, \phi\right)$. We then sub-sample 200 locations to obtain the latent process at observed locations. For lattice observations, we divide the square into 100 Voronoi cells and compute the aggregated GRF from all locations while accounting for ecological bias using Eq.~\eqref{eq:nonlinear}. In addition, we generate a covariate for the geostatistical and lattice responses by sampling from a standard normal distribution. Afterwards, we generate a normally-distributed geostatistical response at the same sampled locations and a Poisson-distributed lattice response for each area. For point-pattern observations, we simulate from the same GRF on a coarse 20$\times$20 grid, then exponentiate the values to obtain intensities at the grid cells; afterwards, we generate a Poisson point process using each intensity value multiplied by the cell area and an offset term as the final intensity. In summary, the response variables are generated according to
\begin{align}
\boldsymbol{Y}_1 \mid \boldsymbol{\beta}_1,\boldsymbol{w},\tau_1^2 &\sim \text{N}\left(\boldsymbol{X}_1 \boldsymbol{\beta}_1 + B_1(\boldsymbol{w}), \tau_1^2\boldsymbol{I}\right), \nonumber\\
\boldsymbol{Y}_2 \mid \boldsymbol{\beta}_2,\boldsymbol{w} &\sim \text{Pois}\left(\exp(\boldsymbol{X}_2 \boldsymbol{\beta}_2 + B_2(\boldsymbol{w}))\right), \nonumber\\
\boldsymbol{Y}_3 \mid \boldsymbol{w} &\sim \text{Pois}\left(A\exp(B_3(\boldsymbol{w}))\right).
\end{align}
In the simulation, we use an exponential covariance function ($\nu = 0.5$), i.e. $C(\boldsymbol{u}_i,\boldsymbol{u}_j;\sigma^2,\phi) = \sigma^2 \mathrm{exp}(-||\boldsymbol{u}_i-\boldsymbol{u}_j||/\phi)$. The influence of varying sample sizes and spatial hyperparameters on predictive performance was investigated by \cite{Wang2018}, therefore we only consider a single setup by setting $\sigma^2 = 0.5$ and $\phi = 1$. In addition, we set $\boldsymbol{\beta}_1 = (1, 5)^\top, \boldsymbol{\beta}_2 = (1, 1.5)^\top$ and $\tau^2 = 1$. $A$ is the product of grid cell area and an offset term which takes value 0.25.
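A condensed R sketch of this data-generating mechanism, reduced to the geostatistical response for brevity (sizes and object names are our own choices), reads:
\begin{verbatim}
## Condensed sketch of the simulation-one data generation
## (geostatistical response only; sizes and names are illustrative).
set.seed(4)
n     <- 200
loc   <- matrix(runif(2 * n, 0, 10), n, 2)
D     <- as.matrix(dist(loc))
Sigma <- 0.5 * exp(-D / 1)                       # sigma2 = 0.5, phi = 1
w     <- drop(crossprod(chol(Sigma), rnorm(n)))  # latent GRF draw

x  <- rnorm(n)                              # standard normal covariate
y1 <- 1 + 5 * x + w + rnorm(n, sd = 1)      # Y1 with beta1 = (1, 5), tau2 = 1
\end{verbatim}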
We consider seven different model specifications within our proposed framework: (i--iii) three univariate models using a single data type each, namely one of geostatistical, lattice and point-pattern data; (iv--vi) three fusion models using different combinations of two data types; and (vii) a multivariate fusion model combining all three response variables. In the Stan implementation, the intercepts and coefficients are assigned independent $\textrm{N}(0,5^2)$ priors. The variance parameters $\sigma^2$ and $\tau^2$ are assigned an inverse Gamma prior $\textrm{IG}(2,1)$, which has a mean of one and undefined variance. For the spatial decay $\phi$, a zero-truncated normal prior $\textrm{N}(1, 3^2)$ is assigned. We use $m=5$ nearest neighbors and $H=5$ sampling points randomly selected within each area. We run 4 chains of 2,000 iterations with 1,000 warm-up samples, without thinning for each model. Multiple chain convergence is checked with potential scale reduction factors \citep{Brooks1998}. For the INLA implementation, we use the penalized complexity (PC) prior for the Mat\'ern GRF \citep{Fuglstad2018} with $\alpha$ fixed at 1.5, corresponding to the exponential covariance function. In addition, we choose the median practical spatial range to be 2 (corresponding to a median of $\phi$ of 1) and the probability of $\sigma$ being greater than 1.7 to be 5\%, such that the allocated probability mass closely matches the priors in Stan. The rest of the priors in INLA are default options. The same data were modeled using both the Stan and INLA implementations for comparison. Additionally, the simulation is repeated 100 times for INLA. We leave out the Stan implementation in the repetition part due to its long computation time.
We chose an additional 1600 sites to evaluate the predictive performance of the models in terms of the root mean squared prediction error (RMSPE) under each scenario. The prediction sites are located at the centers of a $40\times 40$ grid that uniformly covers the sampling domain. Their predictive performance is shown in Figure~\ref{fig:sim1}. The first two Venn diagrams show the RMSPEs for the simulated scenario under different models with the Stan and INLA implementations. The last Venn diagram shows the average RMSPEs over 100 simulations for INLA. For both the Stan and INLA implementations, the RMSPEs are smaller in multivariate fusion models compared to univariate process-based models. The joint modeling of all three types of spatial data has the lowest RMSPE on the prediction of the latent process at unobserved locations. The Stan and INLA implementations produced comparable results, with the RMSPEs of Stan falling inside the ranges of the repeated INLA simulations.
\begin{figure}
\includegraphics[width = \textwidth]{venn3.jpeg}
\caption{Venn diagram of root mean squared prediction error for the unifying fusion framework fitted to each data type (and combinations thereof), using models implemented in Stan and INLA. Values in overlapping areas indicate results from models with multiple data types.}
\label{fig:sim1}
\end{figure}
\subsection{Simulation Study Two}
In the second simulation study, we focus on comparing the parameter estimates of a multivariate fusion model with three response variables of different types and two latent processes. We first simulate two independent zero-mean unit-variance GRFs on uniformly distributed locations in the $[0,100]\times [0,100]$ square, then compute the sub-sampled and aggregated GRF for each response variable in the same way as in simulation one. Each response depends on the latent processes via the design matrix
\begin{equation}
\boldsymbol{Z} = \begin{bmatrix} 1.2 & 0 \\ 0.5 & 1.2 \\ 0 & 1 \end{bmatrix}.
\end{equation}
The first (geostatistical) response variable depends only on the first latent process; the second (lattice) response variable depends on both latent processes; and the third (point-pattern) response variable depends only on the second latent process. The response variables are generated as follows,
\begin{align}
\boldsymbol{Y}_1 \mid \boldsymbol{\beta}_1,\boldsymbol{w},\tau_1^2 &\sim \text{N}\left(\boldsymbol{X}_1 \boldsymbol{\beta}_1 + B_1(\boldsymbol{Z}\boldsymbol{w}), \tau_1^2\boldsymbol{I}\right), \nonumber\\
\boldsymbol{Y}_2 \mid \boldsymbol{\beta}_2,\boldsymbol{w} &\sim \text{Pois}\left(\exp(\boldsymbol{X}_2 \boldsymbol{\beta}_2 + B_2(\boldsymbol{Z}\boldsymbol{w}))\right), \nonumber\\
\boldsymbol{Y}_3 \mid \boldsymbol{w} &\sim \text{Pois}\left(A\exp(B_3(\boldsymbol{Z}\boldsymbol{w}))\right),
\end{align}
where $\boldsymbol{Y}_1$ consists of 500 geostatistical observations, $\boldsymbol{Y}_2$ has 100 lattice observations and $\boldsymbol{Y}_3$ represents the number of events observed at each of 400 cells on a $20\times 20$ grid. In addition, we set $\boldsymbol{\beta}_1 = (3,5)^\top, \boldsymbol{\beta}_2 = (0.5,2)^\top, \phi_1 = 5$, $\phi_2 = 25$ and $\tau_1^2 = 0.5$. Since we have two latent processes in the simulation, using any of the univariate models or fusion models with two response variables can lead to identifiability problems. Hence, we estimate the parameters using the unifying spatial fusion model with all three responses. The model and prior specifications for both Stan and INLA are the same as in simulation one, except for the spatial range parameter. The prior for both $\phi_1$ and $\phi_2$ is a zero-truncated $\textrm{N}(10,10^2)$ in Stan and a PC prior with median practical spatial range $20$ in INLA (corresponding to a median of $\phi$ of 10).
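To make the role of the design matrix concrete, the following minimal R sketch combines two placeholder latent draws through $\boldsymbol{Z}$ to form the three latent linear predictor components (all values are illustrative):
\begin{verbatim}
## Sketch: combining two latent processes through the design matrix Z.
## The draws in w are placeholders for properly correlated GRF draws.
Z <- matrix(c(1.2, 0.5, 0,
              0,   1.2, 1), nrow = 3)  # columns: first and second process
set.seed(5)
n <- 100
w <- cbind(rnorm(n), rnorm(n))
eta <- w %*% t(Z)  # n x 3: the j-th column holds Z_j w at every location
\end{verbatim}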
The parameter estimates based on posterior medians and their 95\% posterior credible intervals for both implementations are summarized in Table~\ref{tab:par}. We obtained similar parameter estimates in both models. The PC prior in INLA penalizes complex structure in the GRF and hence tends to slightly over-estimate the range. The posterior medians of the fitted latent processes at locations with geostatistical observations are shown in Fig.~\ref{fig:sim2}. The root mean squared errors using Stan are 0.50 and 0.44 for the first and second latent processes, compared to 0.54 and 0.48 using INLA. The computation time for the Stan implementation of the fusion model is 1.9 hours, while it takes 11 minutes for INLA.
\begin{table}
\caption{Parameter estimates and their 95\% posterior credible intervals (95\% CI) from the unifying spatial fusion model in simulation two with both Stan and INLA implementation.}
\label{tab:par}
\begin{tabular}{@{\extracolsep{30pt}}cccccc@{}}
\hline \hline
& & \multicolumn{2}{c}{Stan} & \multicolumn{2}{c}{INLA} \\ \cline{3-4} \cline{5-6}
& True & Median & 95\% CI & Median & 95\% CI \\ \hline
$\beta_{10}$ & 3.00 & 2.78 & (2.43, 3.11) & 2.78 & (2.41, 3.13) \\
$\beta_{11}$ & 5.00 & 5.03 & (4.92, 5.13) & 5.01 & (4.91, 5.12) \\
$\beta_{20}$ & 0.50 & 0.56 & (0.24, 0.84) & 0.61 & (0.3, 0.89) \\
$\beta_{21}$ & 2.00 & 1.94 & (1.74, 2.16) & 1.92 & (1.77, 2.09) \\
$Z_1$ & 1.20 & 1.33 & (1.16, 1.53) & 1.23 & (1.08, 1.41) \\
$Z_{21}$ & 0.50 & 0.51 & (0.28, 0.75) & 0.65 & (0.46, 0.87) \\
$Z_{22}$ & 1.20 & 1.03 & (0.8, 1.37) & 0.96 & (0.69, 1.36) \\
$Z_3$ & 1.00 & 0.86 & (0.7, 1.11) & 0.82 & (0.61, 1.15) \\
$\phi_1$ & 5.00 & 5.44 & (3.85, 8.67) & 5.93 & (4.12, 8.48) \\
$\phi_2$ & 25.00 & 17.36 & (10.7, 28.88) & 27.60 & (17.86, 45.99) \\
$\tau^2$ & 0.50 & 0.47 & (0.27, 0.73) & 0.55 & (0.37, 0.8) \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width = \textwidth]{sim2.eps}
\caption{True versus fitted latent process at locations with geostatistical observation. Pearson's correlation coefficients $\rho$ are displayed.}
\label{fig:sim2}
\end{figure}
\subsection{Application to LuftiBus-SNC Dataset}
\label{subsec:app}
In spatial epidemiology, the joint analysis of multiple diseases with similar etiology allows us to separate underlying risk factors into shared and disease-specific components. In this analysis, we examine the disease-specific spatial risk surface of lung cancer and shared spatial risk components between lung cancer and respiratory disease.
Chronic lung disease contributes substantially to morbidity and mortality worldwide, with chronic obstructive pulmonary disease (COPD) being the third leading cause of death \citep{Lozano2012}. Forced expiratory volume in one second (FEV1) is a measure of the amount of air a person can exhale during a pulmonary test; it can be used to diagnose disease and predict respiratory-related mortality \citep{Menezes2014}. While respiratory disease and lung cancer share many common risk factors such as smoking and exposure to air pollution, it is of interest to examine the lung cancer-specific spatial risk component. It may provide insights into identifying risk factors that are solely associated with lung cancer.
Initiated as a health promotion campaign by Lunge Zurich \citep{luftibus} in Switzerland, the `LuftiBus' project collected lung function measurements including FEV1 and demographic information from local residents. The data from LuftiBus observed between 2003 and 2012 were deterministically linked with a census-based Swiss National Cohort (SNC) study, to obtain 44,071 people with demographic, health and environmental variables in Switzerland. More importantly, the linkage provides us with the residential location of individual participants.
For lattice data, we compute the expected cause-specific (respiratory and lung cancer, respectively) mortalities in each municipality, adjusted by 5-year age-group and gender using the SNC data. We assume there are two latent spatial risk surfaces, which are associated with respiratory disease and lung cancer. The first risk surface is shared between FEV1, respiratory mortality and lung cancer mortality, while the second is a lung cancer-specific risk surface. Typically with lattice data, multivariate conditional autoregressive models allow us to jointly analyze multiple responses and identify different latent components \citep{Jin2005}. However, municipal boundaries are artificial, and we argue that a continuous spatial surface is a more natural modeling assumption. Therefore, we use our process-based unifying framework to conduct the analysis. Another advantage is that it allows us to incorporate the rich FEV1 data from LuftiBus. The fusion model is structured as
\begin{align}
\boldsymbol{Y}_{\text{FEV1}} &\sim \text{N}(\beta_{1,0} + \beta_{1,1} \; X_\text{age} + \beta_{1,2} \; X_\text{gender} - Z_{11}\;w_1, \tau^2\boldsymbol{I}), \nonumber\\ \nonumber
\boldsymbol{Y}_{\text{resp}} &\sim \text{Pois}\bigl(E_{\text{resp}} \exp(\beta_{2,0} + {Z_{21}\; w_1})\bigr), \\ \nonumber
\boldsymbol{Y}_{\text{cancer}} &\sim \text{Pois}\bigl(E_{\text{cancer}} \exp(\beta_{3,0} + {Z_{31}\;w_1 + Z_{32}\; w_2})\bigr),
\end{align}
where $E_{\text{resp}}$ and $E_{\text{cancer}}$ are the expected cause-specific mortalities.
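For illustration, synthetic data can be generated directly from this fusion model. The following minimal Python sketch is ours: the sample size, covariates and parameter values are purely illustrative (not the fitted estimates), and the latent fields $w_1,w_2$ stand in for realizations of the spatial Gaussian processes.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes; w1, w2 stand in for realizations of the
# latent spatial Gaussian processes at n sites.
n = 500
w1, w2 = rng.normal(size=n), rng.normal(size=n)
x_age = rng.normal(size=n)         # standardized age (illustrative)
x_gender = rng.integers(0, 2, n)   # gender indicator
E_resp, E_cancer = np.full(n, 10.0), np.full(n, 5.0)

# Illustrative parameter values.
b10, b11, b12, b20, b30 = 4.7, 0.9, -0.04, -0.05, -0.1
Z11, Z21, Z31, Z32, tau2 = 0.09, 0.15, 0.18, 0.5, 0.27

# Gaussian FEV1 outcome, Poisson respiratory and lung cancer counts.
y_fev1 = rng.normal(b10 + b11*x_age + b12*x_gender - Z11*w1, np.sqrt(tau2))
y_resp = rng.poisson(E_resp * np.exp(b20 + Z21*w1))
y_cancer = rng.poisson(E_cancer * np.exp(b30 + Z31*w1 + Z32*w2))
\end{verbatim}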
More than 60\% of the FEV1 measurements in the linked dataset are located in the Canton of Zurich, therefore we restrict our analysis to the Canton of Zurich. In addition, we focus the analysis on people who are 40 years or older, which results in 16,160 geostatistical observations. Since we have a large number of observations in the FEV1 outcome, the number of locations at which the latent processes must be modeled is large, making the Stan implementation infeasible. We therefore conduct the analysis using the INLA implementation only. We use PC priors for the latent components with $\alpha = 1.5$, corresponding to an exponential covariance function, a median practical range of 1 km and a median $\sigma$ of 1. Figure~\ref{fig:dat} shows the locations of the geostatistical observations and the standardized mortality ratios for respiratory disease and lung cancer. Figure~\ref{fig:app} shows the transformed posterior estimates of the latent processes representing relative risk surfaces in the Canton of Zurich. The shared risk component between FEV1, respiratory mortality, and lung cancer mortality is highest in urban areas, with an effective range of 3.1 (95\% CI: 1.9, 5.2) km based on the exponential covariance function. The estimated relative risk, computed by exponentiating the latent process, varies between 0.72 and 1.59. Meanwhile, the high-risk areas of the lung cancer-specific component are scattered around the Canton of Zurich, mainly in the north and west regions, with an effective range of 1.2 (95\% CI: 0.3, 4.0) km. Its variability is smaller than that of the shared component, with values between 0.87 and 1.32, indicating a smoother risk surface. The lung cancer-specific risk component is informed by the lattice data $\boldsymbol{Y}_\text{cancer}$ only, hence it appears to have some block-wise structure compared to the shared component.
\begin{table}[]
\caption{Parameter estimates and their 95\% posterior credible intervals (95\% CI) for the LuftiBus-SNC dataset. $\phi_1$ and $\phi_2$ are in meters.}
\label{tab:case}
\begin{tabular}{@{\extracolsep{8pt}}ccccc@{}}
\hline \hline
Parameter & $\beta_{1,0}$ & $\beta_{1,1}$ & $\beta_{1,2}$ & $\beta_{2,0}$ \\ \hline
Median & 4.74 & 0.907 & -0.0375 & -0.0501 \\
95\% CI & (4.7, 4.79) & (0.891, 0.923) & (-0.0382, -0.0368) & (-0.101, -0.00127) \\ \hline\hline
Parameter & $\beta_{3,0}$ & $\tau^2$ & $\phi_1$ & $\phi_2$ \\ \hline
Median & -0.112 & 0.268 & 1020 & 389 \\
95\% CI & (-0.171, -0.056) & (0.262, 0.274) & (628, 1720) & (87.3, 1320) \\ \hline\hline
Parameter & $Z_{11}$ & $Z_{21}$ & $Z_{31}$ & $Z_{32}$ \\ \hline
Median & 0.0887 & 0.148 & 0.177 & 0.527 \\
95\% CI & (0.0223, 0.233) & (0.09, 0.257) & (0.0597, 0.449) & (0.242, 1.37) \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\includegraphics[width = \textwidth]{data.eps}
\caption{Data used in the fusion model. Left: locations of geostatistical observation. Middle: respiratory standardized mortality ratio. Right: lung cancer standardized mortality ratio.}
\label{fig:dat}
\end{figure}
\begin{figure}
\includegraphics[width = \textwidth]{app.eps}
\caption{Estimated spatial relative risk surfaces. Left: shared component between FEV1, respiratory mortality and lung cancer mortality. Right: lung cancer mortality-specific component.}
\label{fig:app}
\end{figure}
\section{Summary and Discussions}
\label{sec:dis}
We have proposed a unifying process-based statistical framework to handle spatial data fusion. The framework allows all three types of spatial data, namely geostatistical, lattice and point-pattern data, to be easily incorporated into a single multivariate spatial model. This framework contains theoretical and computational elements from several strands of the existing literature: the basis for modeling latent processes in Stan is NNGP \citep{Datta2016}, the first Bayesian implementation is based on Stan \citep{rstan2016}, the alternative implementation uses INLA \citep{Rue2009}, the sampling-point approximation approach for modeling lattice data is adopted from \cite{Fuentes2005}, and discretization \citep{Moller1998} is used for modeling point-pattern data in Stan while data augmentation \citep{Simpson2016} is used in INLA. We have combined all of these individual elements into this unifying framework. The framework extends existing flexible spatial fusion models \citep{Wang2018,Wilson2018} by making point-pattern data compatible as well, hence covering all three spatial data types. We have benchmarked our INLA implementation against full Bayesian inference in Stan, and observed comparable performance with significantly decreased computation time. In addition, we have shown in the first simulation study that it is advantageous to conduct multivariate analysis using multiple spatial datasets when they are available.
Identifiability issues arise when there is more than one latent spatial process in the fusion model. Similar concerns have been raised for other multivariate spatial models \citep{Ren2013,Held2001}. Since the model becomes invariant under certain orthogonal transformations, the design matrix $\boldsymbol{Z}$ is not identifiable. \cite{Held2001} proposed a specific constraint on the relationship among the individual elements of the design matrix. \cite{Ren2013} proposed constraining one element of each row of the design matrix to be strictly positive and ordering the spatial range parameters. The same constraints yield identifiable parameters in the INLA implementation but not in Stan. A distinction between our proposed framework and existing multivariate models is that we could potentially have at most one observation at any of the spatial locations even when we have three response variables. This makes identifying more than one latent process at each location problematic. Our INLA implementation does not directly model the latent variable parameters at the set of locations $\mathcal{U}$, but on the mesh vertices. One solution in the Stan implementation is to re-use the observed locations as sampling points and as locations representing grid cells of the LGCP whenever possible, so that the number of latent process parameters is reduced. Alternatively, certain elements of the design matrix can be constrained to zero based on expert knowledge, as done in our application to the LuftiBus-SNC dataset. When a model involves a Mat\'{e}rn covariance function with smoothness parameter greater than 1, our implementation in Stan can easily be adapted by modifying the likelihood expressions, while INLA models can still be used as an approximation.
The usage of our proposed framework is multifaceted. When analyzing spatial data, the interest sometimes lies in the latent spatial processes, which represent residual spatial correlation in the response variables after taking existing covariates into consideration. The result can be used for detecting spatial clusters of unexplained risk or shared scientific drivers for response variables, which warrant further investigation to identify those unknown drivers. When the interest is in predicting the response variable for a newly observed spatial unit, the fusion model improves the prediction of the latent processes, which in turn can improve the response variable prediction. Furthermore, the framework can be modified to use a one-dimensional Gaussian process in the latent components such that it applies beyond spatial data. For example, it can be used in time series modeling, where all the observations are in $\mathbb{R}$, and in some machine learning applications \citep{Rasmussen2005}.
Further research could be done on checking the compatibility of different data sources for spatial fusion modeling, i.e., whether overlapping information exists between different spatial datasets. Such information can help to inform the model structure, especially the design matrix $\boldsymbol{Z}$. Needless to say, the framework can also be extended to include temporal components.
\bigskip
\begin{center}
{\large\bf SUPPLEMENTARY MATERIAL}
\end{center}
All the R code used for the simulation studies is available in a separate file on the authors' website.
\bibliographystyle{chicago}
\section{Introduction}
The shallow water equations (SWE), also known as the Saint-Venant system, model a variety of geophysical flows, and a vast number of applications exist in the literature. See for instance \cite{leveque2011tsunami}, where numerical challenges in modelling inundation of small-scale coastal regions were analyzed. Hydrodynamic modelling of open-channel flows involves the solution of 1D shallow water systems \cite{vazquez1999improved}. See \cite{bellos1992experimental,khan2014modeling} for a list of experiments in channels with different conditions. The case of
channels with vertical walls and variable cross-sectional width is studied in \cite{balbas2009central}.
Direct problems for the above phenomena have been intensively studied during the last decades. Computing the corresponding solutions requires knowledge of the bed channel bathymetry, appropriate initial and boundary conditions, and possibly specific model parameters such as friction coefficients. The topography or bathymetry, needed especially in applications, may be obtained using experimental methods. Examples of these techniques for river bathymetry include interferometric synthetic aperture radar (SAR), digital photogrammetry \cite{Marks_K,Westaway} and airborne laser altimetry (LiDAR). Although these methods are the most widely used for these problems, they are expensive, time consuming, and not always available.
In this work, we focus on the signature that the bathymetry leaves on the perturbations in transient flows given by the shallow water equations in channels with vertical walls and uniform width. This leads to an alternative approach. Namely, the solution of an inverse problem to estimate the channel's bathymetry and the Manning's friction coefficient from appropriate measurements of available data.
Research in this area is active. Let us discuss some recent contributions relevant to the present work, such as those related to hyperbolic PDE-constrained problems and specifically the shallow water equations in channels. An optimal estimation of parameters in 1-D hyperbolic systems based on the adjoint method was proposed in \cite{Nguyen_etal_2016}. Applications to parameter estimation of a real hydrological system or overland flow with infiltration can be found in \cite{nguyen2016parameter} and \cite{nguyen2014optimal}, respectively. The problem of optimally determining the initial conditions in the shallow water equations was treated in \cite{Kevlahan_etal}, giving sufficient conditions for convergence to the true initial conditions. Monte-Carlo and gradient-based optimization methods used to calibrate the shallow water equations are studied in \cite{Lacasta_etal}.
Not only the topography in hydrological systems can be estimated. The Manning roughness coefficient has also been identified in the context of a complex channel network in \cite{ding2004identification,ding2005identification,ding2012optimal}. Furthermore, a general framework to deal with hyperbolic PDE-constrained optimization problems was presented in \cite{montecinos2019numerical}. A coupled system of the PDE-constrained problem and the adjoint formulation was presented, and conditions were provided to guarantee the existence of an optimal solution. A direct numerical approach to reconstruct river bed topography from free surface elevation data is presented in \cite{Gessese_etal_2011}. Then, in \cite{Gessese_etal_2013} the authors consider velocity measurements and assume a steady flow. Bathymetry imaging using depth-averaged quasi-steady velocity observations is carried out in \cite{Lee_etal}.
Optimal flood control in rivers and watersheds has been successfully investigated in the literature. Among the relevant techniques, adjoint sensitivity analysis (ASA) based on a variational principle has been applied to find the sensitivity of hydrodynamic variables with respect to control variables in one- and two-dimensional flow models \cite{Kawahara,Sandersa,Sandersb,Sandersc,Ding}. Variational data assimilation (VDA) and adjoint sensitivity analysis have been used for estimating unknown bathymetries of rivers and improving flood predictions by assimilating observed flow variables from measurements and satellite images. See \cite{Mazauric2003,Dimet2003,MAZAURIC2004403} for more details. A similar variational approach for Lagrangian data assimilation in rivers, applied to the identification of bottom topography and the estimation of initial water depth and velocity, was presented in \cite{Honnorata,Honnoratb,Honnoratc}. Specifically in \cite{Honnoratb}, taking into account that velocity measurements can be scarce and usually require complex human interventions, the authors introduce a method which incorporates Lagrangian data, extracted from video images, into the assimilation process. This method is applied by the authors to the identification of bed elevation and initial conditions for an academic configuration of a river hydraulic model based on the shallow water equations.
In \cite{P2Languer2005}, a four-dimensional variational data assimilation technique is presented as a tool to forecast floods. They modified the shallow-water equations to include a simplified sediment transport model. Their main purpose is to compute the thickness of the sediment layer bounded by the height of the solid bathymetry or bedrock and the mobile bathymetry composed of sediments.
We finally note that in \cite{P4Brisset2018}, given altimetry measurements, the identification capability of the time-varying inflow discharge and the Strickler coefficient (defined as a power-law in the water depth) of the 1D river Saint--Venant model is investigated. The bathymetry, however, is either provided or estimated from one in-situ measurement following \cite{GARAMBOIS2015103} and \cite{Gessese_etal_2011}. Estimation of the bathymetry from discrete surface velocity data is currently an important and active field of research.
In this work, we are concerned with the next natural case of practical interest, namely, that of recovering the bed bathymetry and the Manning's friction coefficient from prescribed transient velocity data in open channels with vertical walls and varying width. Estimating the bathymetry and the Manning's friction coefficient from transient velocity measurements is considerably more challenging than in a situation where the flow corresponds to a steady state. To our knowledge this problem has not been addressed in the literature in the case of channels with varying width. More precisely, using a cost functional subject to the shallow water equations, we formulate an algorithm capable of recovering the bathymetry from a set of point values of the fluid's velocity in the channel. Specifically, we formulate a cost functional which includes the SWE through Lagrange multipliers. Boundary and initial conditions are included. The constrained optimization problem is solved by a continuous descent method. Namely, the gradient is computed analytically by the adjoint state method, then discretized. Assuming Fr\'echet differentiability of the functional, we characterize, analyze and obtain an explicit expression for the gradient. It is perhaps a matter of taste, but we find a neater proof using the Fr\'{e}chet derivative instead of that of G\^{a}teaux.
The paper is organized as follows. In Section \ref{sec:MatMethods}, the shallow water model in channels with varying width, a constrained minimization approach, and the derivation of the analytic gradient are presented. The direct and adjoint equations are solved numerically with a second-order accurate Roe-type upwind scheme, whose details are presented in Section \ref{sec:NumMethods} together with the line search method. Section \ref{sec:TestProblems} is devoted to benchmark problems, where the numerical performance and efficiency of the method are studied. A numerical sensitivity analysis starts with a case where the observed velocity corresponds to a steady state (in the absence of friction) and the exact bottom bathymetry consists of a bump plus a sinusoidal perturbation. This simple setting is used to verify the reliability of the algorithm before transient flows with shock waves are considered in further tests. Finally, we invert both the bathymetry and the Manning's friction coefficient simultaneously in a transient flow, and we also present a numerical test involving wet-dry states motivated by experimental data. In all cases the approximated bathymetry is very accurate. Conclusions are presented in the last section. Details of the derivation of the gradient are left to Appendix \ref{sec:AppendixGradient}.
\section{Materials and methods}
\label{sec:MatMethods}
\subsection{The shallow water model}
\begin{figure}[h!]
\begin{center}
{\includegraphics[width=0.49\textwidth]{Fig1a.eps}}
{\includegraphics[width=0.4\textwidth]{Fig1b}}
\end{center}
\caption{\label{fig:SWSchematic} Schematic of the shallow water model. The left panel shows the flow profile along the channel. The right panel shows the top view.}
\end{figure}
The direct problem consists of the 1-D SWE for rectangular channels with varying width and friction, written as the hyperbolic balance law
\begin{equation}
\label{eq:SWSigma}
\begin{pmatrix}
\sigma h \\
\sigma hu
\end{pmatrix}_t +
\begin{pmatrix}
\sigma hu \\
\sigma hu^2 +\frac{g}{2}\sigma h^2
\end{pmatrix}_x =
\begin{pmatrix}
0 \\
\frac{1}{2}gh^2\sigma_x-g\sigma h B_x -gn^2\frac{\sigma h}{R^{4/3}}u\vert u\vert
\end{pmatrix}, \; \; a < x < b, \; \; t > 0.
\end{equation}
Here $h$ is the depth of the layer, $u$ the velocity, $B(x)$ the bathymetry, $\sigma(x)$ the channel's width at position $x$, $g=9.81 \text{ ms}^{-2}$ the acceleration of gravity, $n$ the Manning's friction coefficient, and $R= \frac{ \sigma h}{ \sigma + 2 h}$ the hydraulic radius. The hydraulic radius is the ratio of the wetted area to the wetted perimeter. See \cite{khan2014modeling} for more details on the friction term. Figure \ref{fig:SWSchematic} shows the schematic of the model. The velocity is in units of meters per second, while $x,h,\sigma$ are in units of meters and time in seconds. The friction coefficient is in units of $\text{ s}\text{ m}^{-1/3}$.
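For concreteness, the geometric and friction source terms of \eqref{eq:SWSigma} translate directly into code. The following Python sketch (pointwise array inputs are assumed; the function names are ours) evaluates the hydraulic radius and the momentum source:
\begin{verbatim}
import numpy as np

def hydraulic_radius(h, sigma):
    # R = (wetted area) / (wetted perimeter) for a rectangular
    # cross-section of width sigma and depth h.
    return sigma * h / (sigma + 2.0 * h)

def momentum_source(h, u, sigma, sigma_x, B_x, n, g=9.81):
    # Width term + bathymetry term + Manning friction term.
    R = hydraulic_radius(h, sigma)
    return (0.5 * g * h**2 * sigma_x - g * sigma * h * B_x
            - g * n**2 * sigma * h / R**(4.0 / 3.0) * u * np.abs(u))
\end{verbatim}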
The adjoint problem is solved backwards in time. In order to have a well-posed problem, appropriate initial and boundary conditions are imposed. The details are included in Section \ref{sec:TestProblems}.
\subsection{A constrained minimization approach and the analytic gradient}
There exists a variety of techniques to measure the velocity in open channels. See for instance \cite{bolognesi2017measurement} where river discharge estimations and measurements of velocity using an aircraft system are analyzed. The inverse problem to solve is stated as follows.
For simplicity, let us assume that the cross-sectional velocity $u$ is measured at the points $(x_j,t_k)$, $j=1,2,\ldots,N$, $k=1,2,\ldots,K$. Namely, $u(x_j,t_k)\approx \hat{u}_{j,k}$. The inverse problem of interest consists in estimating the bathymetry $B\equiv B(x)$ and the Manning's friction coefficient $n$, given this velocity data. To that end, let us introduce the least squares functional
\begin{equation}
\begin{array}{rcl}
J:L^2(a,b)\times \mathbb{R} & \rightarrow & \mathbb{R} \\
(B,n) & \mapsto & J(B,n),
\end{array}
\end{equation}
given by
\begin{equation}
J(B,n) = \frac{1}{2} \sum_{j,k} \left( u(x_j,t_k;B,n)-\hat u_{j,k} \right)^2.
\end{equation}
Our goal is to minimize $J$ constrained to $h,u$ solving the shallow water system \eqref{eq:SWSigma}.
The constrained optimization problem is solved by a continuous descent method. Namely, the gradient is computed analytically by the adjoint state method, then discretized. Let us define the (linear) observation operator
\begin{equation}
\label{eq:ObsOp}
\mathcal{M}u=\lbrace u(x_j,t_k) \rbrace\in\mathbb{R}^{N\times K}.
\end{equation}
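In discrete form, the observation operator and the misfit are straightforward to implement. A minimal Python sketch (assuming the velocity field is stored as a 2-D array indexed by grid point and time step, with measurement index sets jx and kt; the function names are ours) reads:
\begin{verbatim}
import numpy as np

def observe(u, jx, kt):
    # Observation operator M: restrict u (a 2-D array over the
    # space-time grid, u[space, time]) to the measurement indices.
    return u[np.ix_(jx, kt)]

def cost(u, u_hat, jx, kt):
    # J = 0.5 * sum_{j,k} (u(x_j, t_k) - u_hat_{j,k})^2
    return 0.5 * np.sum((observe(u, jx, kt) - u_hat)**2)
\end{verbatim}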
Let us also consider the Lagrangian
\begin{equation}
\begin{array}{l}
\mathcal L(B,n,h,u,\lambda,\mu) = \frac{1}{2}\Vert \mathcal{M}u-\hat{u}\Vert^2 + \\
\\
\left\langle
\left(
\begin{array}{c}
\lambda \\
\mu
\end{array}
\right),
\left(
\begin{array}{c}
(\sigma h)_t+(\sigma hu)_x \\
(\sigma hu)_t+(\sigma hu^2+\frac{g}{2}\sigma h^2)_x-\frac{g}{2}h^2\sigma_x+g\sigma h B_x +gn^2\frac{\sigma h}{R^{4/3}}u\vert u\vert
\end{array}
\right)
\right\rangle_{L^2(\Omega\times (0,T))},
\end{array}
\end{equation}
where $\Omega=(a,b)$, and $\lambda,\mu$ are Lagrange multipliers. Here $\langle \cdot, \cdot \rangle_{L^2(\Omega\times (0,T))}$ is the normalized inner product in $L^2(\Omega\times (0,T))$, given by
\[
\langle f, g \rangle_{L^2(\Omega\times (0,T))} = \frac{1}{(b-a) T} \int_0^T \int_a^b f(x,t) \overline{g(x,t)} dx dt.
\]
\bigskip
Assuming Fr\'{e}chet differentiability of $J$, the next result yields an expression for the gradient. In the expressions below, $\vert u\vert$ is regularized as $\sqrt{u^2+\varepsilon}$ for a small $\varepsilon>0$, so that the friction term is differentiable.
\begin{theorem}
\label{th:Lagrange}
Let $B$ and $n$ be given, and let $h,u$ solve the shallow water equations. Suppose that the velocity measurements are taken in space and time $(x,t)\in [a,b]\times [0,T]$ for $T>0.$ Furthermore, assume the Lagrange multipliers solve the adjoint equations
\begin{equation}
\label{eq:AdjointEqn}
\begin{array}{c}
\sigma \lambda_t+\sigma u\lambda_x
+\sigma u\mu_t+(\sigma u^2 +g\sigma h)\mu_x
+\left(gh\sigma_x-g\sigma B_x -
gn^2\left(\frac{1}{h}+\frac{2}{\sigma}\right)^{1/3}\left(2-\frac{1}{3}\frac{\sigma}{h}\right) u\sqrt{u^2+\varepsilon}\right) \mu
= 0 \\
\sigma h \mu_t+2\sigma hu\mu_x+\sigma h \lambda_x
-gn^2\frac{\sigma h}{R^{4/3}}\frac{2u^2+\varepsilon}{\sqrt{u^2+\varepsilon}}\mu
= \mathcal{M}^*(\mathcal{M}u-\hat{u}) ,
\end{array}
\end{equation}
with final and boundary conditions
\begin{equation}
\label{eq:LambdaInitCond}
\lambda(x,T)=0,\quad \mu(x,T)=0,\quad x\in(a,b),
\end{equation}
and
\begin{equation}
\label{eq:MuInitCond}
\lambda(b,t)=0,\quad \mu(b,t)=0,\quad t\in(0,T).
\end{equation}
Then the Fr\'{e}chet derivative of the functional $J$ is
\[
DJ(B,n)(\xi_1,\xi_2)= \langle\xi_1,-\int_0^T(g\sigma h\mu)_x\, dt\rangle\,
+ \langle\mu,2g \; n \frac{\sigma h}{R^{4/3}} u\sqrt{u^2+\varepsilon}\rangle\xi_2.
\]
Consequently
\[
\nabla J(B,n) = \left(
\begin{array}{c}
-\overline{(g \sigma h \mu )}_x \\
\langle\mu,2g \; n\frac{\sigma h}{R^{4/3}}u\sqrt{u^2+\varepsilon}\rangle
\end{array}
\right)
\]
\end{theorem}
\bigskip
Here, $\overline{(\cdot)}$ denotes the time average
\[
\bar f(x) = \frac{1}{T} \int_0^T f(x,t) dt.
\]
The proof of this theorem is left to Appendix \ref{sec:AppendixGradient}.
We note that if the direct problem is solved for times $t\in [0,T]$ with initial conditions at $t=0$ as specified in Section \ref{sec:TestProblems}, the adjoint problem has zero final conditions at time $t=T$ and the solution is found backwards in time from $t=T$ to $t=0$.
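Once the direct and adjoint solutions are available on the grid, the gradient of Theorem \ref{th:Lagrange} can be assembled by quadrature. The following Python sketch (the array shapes and the trapezoidal/central-difference quadrature choices are ours) computes both components:
\begin{verbatim}
import numpy as np

def gradient(h, u, mu, sigma, x, t, n, g=9.81, eps=1e-8):
    # h, u, mu have shape (nt, nx); sigma, x have length nx.
    T = t[-1] - t[0]
    # Bathymetry component: -d/dx of the time average of g*sigma*h*mu.
    avg = np.trapz(g * sigma * h * mu, t, axis=0) / T
    gradB = -np.gradient(avg, x)
    # Manning component: normalized space-time inner product.
    R = sigma * h / (sigma + 2.0 * h)
    f = mu * 2.0 * g * n * sigma * h / R**(4.0 / 3.0) * u * np.sqrt(u**2 + eps)
    gradn = np.trapz(np.trapz(f, x, axis=1), t) / ((x[-1] - x[0]) * T)
    return gradB, gradn
\end{verbatim}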
\section{Numerical Methods}
\label{sec:NumMethods}
\subsection{A Roe-type scheme for the hyperbolic systems \eqref{eq:SWSigma} and \eqref{eq:AdjointEqn}}
A wide variety of numerical schemes have been proposed to solve the shallow water equations. Such schemes use different approaches and satisfy desirable properties for different goals. In \cite{garcia2000numerical}, a Roe-type well-balanced numerical scheme is proposed. That is, it exactly preserves steady states at rest, adding accuracy when computing near steady-state flows. The central-upwind scheme presented in \cite{kurganov2007second} satisfies both the well-balanced and the positivity-preserving properties. That is, it recognizes steady states at rest and maintains the positivity of the layer's depth over time. Many other approaches have also been studied. We refer the interested reader to the above works and references therein for more information. \\
\noindent
{\bf The direct problem}
In quasilinear form, it is well known (see e.g. \cite{balbas2009central}) that system \eqref{eq:SWSigma} can be written as
\begin{equation}
\label{eq:MatrixA}
\vect W_t +
A
\vect W_x =
\vect S
\end{equation}
where
\begin{equation}
\label{eq:Variables}
\vect W =
\begin{pmatrix}
\sigma h \\
\sigma hu
\end{pmatrix},
A =
\begin{pmatrix}
0 & 1 \\
c^2-u^2 & 2 u
\end{pmatrix} ,
\text{ and }
\vect S =
\begin{pmatrix}
0 \\
-g \sigma h B_x + g h^2 \sigma_x -\frac{g n^2 \sigma h}{R^{4/3}} |u| u
\end{pmatrix}
\end{equation}
are the vector of conserved variables, the coefficient matrix, and the vector of source terms, respectively. The coefficient matrix has eigenvalues $\lambda_1 = u-c$, $\lambda_2 = u+c$, and corresponding eigenvectors $r_1 = (1, u-c )^T$ and $r_2 = (1,u+c)^T$, where $c=\sqrt{gh}$. As a result, system \eqref{eq:SWSigma} is conditionally hyperbolic provided $h > 0$. Hyperbolicity is lost when $h = 0$ in a dry state.
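The eigenstructure is easy to verify numerically. A small Python check (with illustrative values for $h$ and $u$) confirms that the eigenvalues of $A$ are $u \pm c$:
\begin{verbatim}
import numpy as np

def coefficient_matrix(h, u, g=9.81):
    # Quasilinear coefficient matrix A at a single state (h, u).
    return np.array([[0.0, 1.0], [g * h - u**2, 2.0 * u]])

h, u = 1.5, 0.8                     # illustrative wet state, h > 0
lam = np.linalg.eigvals(coefficient_matrix(h, u))
c = np.sqrt(9.81 * h)
assert np.allclose(np.sort(lam.real), [u - c, u + c])
\end{verbatim}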
Roe-type upwind schemes were first introduced in \cite{roe1981approximate}. The numerical scheme requires the computation of a Roe matrix $\bar A (\vect W_\ell, \vect W_r)$ for any left and right states $\vect W_\ell$ and $\vect W_r$. The flux $\vect F = \vect F(W,\sigma) = \left( \sigma hu , \sigma hu^2 +\frac{g}{2}\sigma h^2 \right)^T$ of the model in conservation form \eqref{eq:SWSigma} depends explicitly on the solution variables but also on the model parameter $\sigma$. For such a flux, the Roe matrix $\bar A(\vect W_\ell, \vect W_r)$ must satisfy $\bar A(\vect W_\ell, \vect W_r) \to A(\vect W)$ as $\vect W_\ell, \vect W_r \to \vect W$, it must have real eigenvalues with a complete set of eigenvectors, and
\begin{equation}
\Delta \vect F = \bar A(\vect W_\ell,\vect W_r) \Delta \vect W + \left( 0 , -g \bar{h}^2 \Delta \sigma/2 \right)^T,
\end{equation}
where $\Delta \vect F = \vect F(\vect W_r)-\vect F (\vect W_\ell)$, $\Delta \vect W = \vect W_r - \vect W_\ell$, $\Delta \sigma = \sigma_r - \sigma_\ell$, and $\bar{h}$ is an approximation of $h$ between the left and right states. One such matrix is given by the following Roe linearizations
\[
\bar u = \frac{\sqrt{\sigma_\ell h_\ell} \; u_\ell + \sqrt{\sigma_r h_r} \; u_r}{\sqrt{\sigma_\ell h_\ell}+\sqrt{\sigma_r h_r}},\; \bar h = \frac{\sqrt{\sigma_\ell}\; h_\ell + \sqrt{\sigma_r} \; h_r }{\sqrt{\sigma_\ell} + \sqrt{\sigma_r} } \text{, and } \bar c = \sqrt{g \bar h}.
\]
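These Roe linearizations translate directly into code; a minimal Python sketch for a pair of left/right states reads:
\begin{verbatim}
import numpy as np

def roe_averages(hl, ul, sl, hr, ur, sr, g=9.81):
    # Roe averages for the width-varying channel; sl, sr denote sigma.
    wl, wr = np.sqrt(sl * hl), np.sqrt(sr * hr)
    u_bar = (wl * ul + wr * ur) / (wl + wr)
    h_bar = (np.sqrt(sl) * hl + np.sqrt(sr) * hr) / (np.sqrt(sl) + np.sqrt(sr))
    return u_bar, h_bar, np.sqrt(g * h_bar)
\end{verbatim}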
In \cite{garcia2000numerical}, a Roe-type upwind scheme is derived with the aid of a convenient discretization of the source terms that balances the flux gradients for steady states at rest. That is, the numerical scheme is well balanced. See \cite{hubbard2000flux} for more details. To extend it here to the case of channels with varying width, one possible discretization of the source terms is given by
\[
\Delta x \; \overline{\vect S} =
\begin{pmatrix}
0 \\
-g \bar{\sigma} \bar h \Delta B + g \bar h^2 \Delta \sigma - \frac{g n^2 \bar{\sigma} \bar{h}}{\bar{R}^{4/3}} |\bar u | \bar u,
\end{pmatrix},
\]
where
\[
\bar \sigma = \sqrt{\sigma_\ell \sigma_r}, \text{ and } \bar R= \frac{\bar \sigma \bar h}{\bar \sigma + 2\bar h}, \Delta B = B_r-B_\ell, \Delta \sigma = \sigma_r-\sigma_\ell.
\]
Finite differences of the conserved variables and the linearized source terms are decomposed in terms of the eigenvectors $\bar{ \vect r}_{1} = (1,\bar u - \bar c )^T$, $\bar{ \vect r}_{2} = (1,\bar u + \bar c )^T$ as
\[
\Delta \vect W = \alpha_{1} \bar { \vect r}_{1 }+ \alpha_{2} \bar{\vect r}_{2}, \; \;
\Delta x \widehat{\vect S} = \beta_{1} \bar{ \vect r}_{1}+ \beta_{2} \bar{ \vect r}_{2},
\]
where
\[
\begin{array}{lcllcl}
\alpha_{1} & = & \frac{- \Delta (\sigma h u) + (\bar u + \bar c) \Delta (\sigma h )}{2 \bar c}, & \beta_{1} & = & \frac{\bar c \bar \sigma}{2} \Delta B -\frac{ \bar c \bar h}{2} \Delta \sigma + \frac{n^2 \bar c \bar \sigma}{2 \bar R^{4/3}} \; \Delta x \; |\bar u| \bar u, \\
\alpha_{2} & = & \frac{\Delta (\sigma h u) - (\bar u - \bar c) \Delta (\sigma h)}{2 \bar c}, & \beta_{2} & = & -\frac{\bar c \bar \sigma}{2} \Delta B + \frac{ \bar c \bar h}{2} \Delta \sigma - \frac{n^2 \bar c \bar \sigma}{2 \bar R^{4/3}} \; \Delta x \; |\bar u| \bar u.
\end{array}
\]
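The wave strengths can likewise be computed directly. The following Python sketch (scalar inputs per interface are assumed; the function name is ours) evaluates $\alpha_p$ and $\beta_p$ from the finite differences and the Roe averages:
\begin{verbatim}
import numpy as np

def wave_strengths(d_sh, d_shu, dB, dsigma, dx, ub, hb, cb, sb, n, g=9.81):
    # d_sh = Delta(sigma h), d_shu = Delta(sigma h u) at one interface;
    # ub, hb, cb, sb are the Roe averages, Rb the averaged hydraulic radius.
    Rb = sb * hb / (sb + 2.0 * hb)
    fric = n**2 * cb * sb / (2.0 * Rb**(4.0 / 3.0)) * dx * np.abs(ub) * ub
    a1 = (-d_shu + (ub + cb) * d_sh) / (2.0 * cb)
    a2 = ( d_shu - (ub - cb) * d_sh) / (2.0 * cb)
    b1 =  cb * sb / 2.0 * dB - cb * hb / 2.0 * dsigma + fric
    b2 = -cb * sb / 2.0 * dB + cb * hb / 2.0 * dsigma - fric
    return (a1, a2), (b1, b2)
\end{verbatim}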
More details on the implementation of the numerical scheme can be found in \cite{roe1987upwind}. For the sake of completeness, we include the details here. We denote by $\vect W_{j+\frac{1}{2}}$ the Roe averages between the states $\vect W_j$ and $\vect W_{j+1}$ in the domain with cells $I_{j}=[x_{j-\frac{1}{2}},x_{j+\frac{1}{2}}]$, $x_{j\pm \frac{1}{2}}= x_j \pm \Delta x/2$. The second order numerical scheme is given by
\begin{equation}
\label{eq:Upwind}
\begin{array}{lcl}
\vect W_j^{k+1} & = & \vect W_j^k - \frac{\Delta t}{\Delta x} \sum_{\lambda_{j-1/2,p}^k > 0} (\alpha_{j-1/2,p}^k \lambda_{j-1/2,p}^k-\beta_{j-1/2,p}^k )\; \bar{ \vect r}_{j-1/2,p}^k \\
& & -\frac{\Delta t}{\Delta x} \sum_{\lambda_{j+1/2,p}^k < 0} ( \alpha_{j+1/2,p}^k \lambda_{j+1/2,p}^k -\beta_{j+1/2,p}^k ) \; \bar{ \vect r}_{j+1/2,p}^k \\
& & -\sum_{p=1}^2 \frac{\Delta t}{\Delta x} \frac{1}{2} \phi \left( \theta_{j+1/2,p}^k \right) \left( \text{sign}(\nu_{j,p}^k) - \nu_{j,p}^k \right) \left( \alpha_{j+1/2,p}^k \lambda_{j+1/2,p}^k-\beta_{j+1/2,p}^k \right)\; \bar{ \vect r}_{j+1/2,p}^k \\
& & +\sum_{p=1}^2 \frac{\Delta t}{\Delta x} \frac{1}{2} \phi \left( \theta_{j-1/2,p}^k \right) \left( \text{sign}(\nu_{j-1,p}^k) - \nu_{j-1,p}^k \right) \left( \alpha_{j-1/2,p}^k \lambda_{j-1/2,p}^k-\beta_{j-1/2,p}^k \right)\; \bar{\vect r}_{j-1/2,p}^k .
\end{array}
\end{equation}
Here, $\phi(\theta) = \max( 0, \max (\min (2 \theta,1) , \min( \theta , 2 ) ) )$ is known as the superbee limiter function, and for each cell $I_j$, we define
\[
\theta_{j+1/2,p} = \frac{\alpha_{j,p}\lambda_{j,p}-\beta_{j,p}}{\alpha_{j',p}\lambda_{j',p}-\beta_{j',p}}, \; j' = j-\text{sign}(\lambda_{j,p}), \; \nu_{j,p} = \frac{\Delta t}{\Delta x}\lambda_{j,p}.
\]
The last two terms in equation \eqref{eq:Upwind} are the second order corrections. See \cite{leveque1992numerical} for more details, where the reader can also find the sonic entropy fix that is customary for Roe-type upwind schemes and that has also been implemented here. In the case where $\lambda_{j-1,p} < 0 < \lambda_{j,p}$, $\lambda_{j-1/2,p}$ in the first term of equation \eqref{eq:Upwind} is replaced by $\lambda_{j-1/2,p}^r = \lambda_{j,p} \; (\lambda_{j-1/2,p}-\lambda_{j-1,p})/(\lambda_{j,p}-\lambda_{j-1,p})$. Symmetrically, if $\lambda_{j,p} < 0 < \lambda_{j+1,p}$, then $\lambda_{j+1/2,p}$ in the second term of equation \eqref{eq:Upwind} is replaced by $\lambda_{j+1/2,p}^\ell = \lambda_{j,p} \; (\lambda_{j+1,p}-\lambda_{j+1/2,p})/(\lambda_{j+1,p}-\lambda_{j,p})$.\\
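For reference, the superbee limiter used in \eqref{eq:Upwind} reads, in Python:
\begin{verbatim}
import numpy as np

def superbee(theta):
    # phi(theta) = max(0, min(2*theta, 1), min(theta, 2))
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * theta, 1.0),
                                      np.minimum(theta, 2.0)))
\end{verbatim}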
\noindent
{\bf The adjoint problem}
The adjoint problem can be written as
\begin{dmath}
\begin{pmatrix}
\lambda \\
\mu
\end{pmatrix}_t
+
\begin{pmatrix}
0 & c^2 - u^2 \\
1 & 2 u
\end{pmatrix}
\begin{pmatrix}
\lambda \\
\mu
\end{pmatrix}_x
=
\begin{pmatrix}
-\frac{u}{\sigma h} \mathcal M^* (\mathcal M u - \hat u) + \left[ -\frac{g h}{\sigma} \sigma_x + g B_x +g n^2 u \left( \frac{ 2 h -\sigma/3 }{\sigma h \; R^{1/3}} \sqrt{u^2+\epsilon} - \frac{2 u^2+\epsilon}{R^{4/3} \sqrt{u^2+\epsilon}} \right)\right] \mu \\
\frac{1}{\sigma h} \mathcal M^* (\mathcal M u - \hat u) + \frac{g n^2}{ R^{4/3}} \frac{2u^2+\epsilon}{\sqrt{u^2 + \epsilon}} \mu
\end{pmatrix},
\end{dmath}
where $\hat u $ is the observed velocity in space and time. We note that $\lambda$ has units of velocity and $\mu$ is non-dimensional. On the other hand, $\mathcal M^* (\mathcal M u - \hat u)$ is given in units of squared velocity.
As noted in Theorem \ref{th:Lagrange}, the final conditions in equations \eqref{eq:LambdaInitCond} and \eqref{eq:MuInitCond}, are given at time $t=T$ and the solution is computed backwards in time from $t=T$ to $t=0$. In addition, we note that the coefficient matrix
\[
A^* =
\begin{pmatrix}
0 & c^2 - u^2 \\
1 & 2 u
\end{pmatrix}
\]
is the transpose of the coefficient matrix in the direct problem. The eigenvalues are the same and the corresponding eigenvectors are
\[
\vect r_1^* =
\begin{pmatrix}
-\bar u - \bar c \\
1
\end{pmatrix},
\; \text{ and } \;
\vect r_2^* =
\begin{pmatrix}
-\bar u + \bar c \\
1
\end{pmatrix}.
\]
Analogously to the direct problem, the finite difference of the solution variable $\Delta \vect W^* = (\Delta \lambda, \Delta \mu)^T$ and the linearized source terms
\begin{dmath}
\Delta x \; \vect S^*=
\begin{pmatrix}
-\frac{\bar u \; \Delta x \; \overline{ \mathcal M^* (\mathcal M u - \hat u)} }{\bar \sigma \bar h } + \left[ -\frac{g \bar h}{\bar \sigma} \Delta \sigma + g \Delta B +g n^2 \bar u \left( \frac{ 2 \bar h -\bar \sigma/3 }{\bar \sigma \bar h \; \bar R^{1/3}} \sqrt{\bar u^2+\epsilon} - \frac{2 \bar u^2+\epsilon}{\bar R^{4/3} \sqrt{\bar u^2+\epsilon}} \right)\right] \bar \mu \\
\frac{1}{\bar \sigma \bar h} \overline{ \mathcal M^* (\mathcal M u - \hat u)} + \frac{g n^2}{ \bar R^{4/3}} \frac{2 \bar u^2+\epsilon}{\sqrt{\bar u^2 + \epsilon}} \bar \mu
\end{pmatrix},
\end{dmath}
where
\[
\bar \mu = \frac{\mu_\ell + \mu_r}{2}, \; \; \overline{ \mathcal M^* ( \mathcal M u - \hat u )} = \frac{ \mathcal M^*( \mathcal M u_\ell - \hat u_\ell) + \mathcal M^*( \mathcal M u_r - \hat u_r )}{2}
\]
are decomposed as
\[
\Delta \vect W^* = \alpha_{1}^* \bar { \vect r}_{1}^*+ \alpha_{2}^* \bar{\vect r}_{2}^*, \; \;
\Delta x \widehat{\vect S}^* = \beta_{1}^* \bar{ \vect r}_{1}^* + \beta_{2}^* \bar{ \vect r}_{2}^*.
\]
Here, the coefficients in the decompositions are given by
\[
\begin{array}{lcllcl}
\alpha_1^* & = & \frac{(-\bar u + \bar c) \Delta \mu - \Delta \lambda}{2 \bar c}, & \beta_1^* & = & \frac{(-\bar u + \bar c) \bar S_2^* - \bar S_1^*}{2 \bar c}, \\ \\
\alpha_2^* & = & \frac{(\bar u + \bar c) \Delta \mu + \Delta \lambda}{2 \bar c}, & \beta_2^* & = & \frac{(\bar u + \bar c) \bar S_2^* + \bar S_1^*}{2 \bar c}.
\end{array}
\]
The corresponding numerical scheme for the adjoint problem solved backward in time is given by
\begin{equation}
\label{eq:UpwindAdjoint}
\begin{array}{lcl}
\vect W_j^{*,k} & = & \vect W_j^{*,k+1} + \frac{\Delta t}{\Delta x} \sum_{\lambda_{j-1/2,p}^{k+1} > 0} (\alpha_{j-1/2,p}^{*,k+1} \lambda_{j-1/2,p}^{k+1}-\beta_{j-1/2,p}^{*,{k+1}} )\; \vect r_{j-1/2,p}^{*,k+1} \\
& & +\frac{\Delta t}{\Delta x} \sum_{\lambda_{j+1/2,p}^{k+1} < 0} ( \alpha_{j+1/2,p}^{*,k+1} \lambda_{j+1/2,p}^{k+1} -\beta_{j+1/2,p}^{*,k+1} ) \; \vect r_{j+1/2,p}^{*,k+1} \\
& & -\sum_{p=1}^2 \frac{\Delta t}{\Delta x} \frac{1}{2} \phi \left( \theta_{j+1/2,p}^{*,k+1} \right) \left( \text{sign}(\nu_{j,p}^{k+1}) - \nu_{j,p}^{k+1} \right) \left( \alpha_{j+1/2,p}^{*,k+1} \lambda_{j+1/2,p}^{k+1}-\beta_{j+1/2,p}^{*,k+1} \right)\; \vect r_{j+1/2,p}^{*,k+1} \\
& & +\sum_{p=1}^2 \frac{\Delta t}{\Delta x} \frac{1}{2} \phi \left( \theta_{j-1/2,p}^{*,k+1} \right) \left( \text{sign}(\nu_{j-1,p}^{k+1}) - \nu_{j-1,p}^{k+1} \right) \left( \alpha_{j-1/2,p}^{*,k+1} \lambda_{j-1/2,p}^{k+1}-\beta_{j-1/2,p}^{*,k+1} \right)\; \vect r_{j-1/2,p}^{*,k+1},
\end{array}
\end{equation}
where
\[
\theta_{j+1/2,p}^* = \frac{\alpha_{j,p}^*\lambda_{j,p}-\beta_{j,p}^*}{\alpha_{j',p}^* \lambda_{j',p}-\beta_{j',p}^*}.
\]
\subsection{A line search method}
\label{sec:SearchMethod}
The bathymetry and the Manning's friction coefficient are inferred iteratively. We start with an initial guess, which in principle must not be too far from the target. At each step, one computes the gradient and advances in the steepest descent direction. The step size along the steepest direction is initially set empirically and then modulated to minimize the error. We continue iteratively until an error threshold is achieved. The algorithm is summarized as follows.\\
\noindent \textbf{Algorithm.} (Continuous descent)
\label{alg:descent}
Given a starting point $B_0$ and $n_0$, a convergence
tolerance $\epsilon$, and $k\leftarrow 0;$
\noindent
while $\left\Vert \nabla J(B _{k},n_k)\right\Vert >\epsilon ;$
Compute the steepest search direction
\begin{equation}
\label{eq:pk}
p_{k} = -\nabla J(B_k,n_k);
\end{equation}
Set
\begin{equation}
\label{eq:SteepestDescent}
B_{k+1} = B_{k}+\alpha_k p_{1,k}; \; \; \;
n_{k+1} = n_k+\alpha_k p_{2,k};
\end{equation}
$k\leftarrow k+1;$
\noindent
end (while)\\[4pt]
Since
\[
\nabla J(B,n) = \left(
\begin{array}{c}
-\overline{(g \sigma h \mu )}_x \\
\langle\mu,2g \; n\frac{\sigma h}{R^{4/3}}u\sqrt{u^2+\varepsilon}\rangle
\end{array}
\right),
\]
we have
\[
p_k=
\begin{pmatrix}
\overline{(g \sigma h \mu )}_x \\
-\langle\mu,2g \; n\frac{\sigma h}{R^{4/3}}u\sqrt{u^2+\varepsilon}\rangle
\end{pmatrix}.
\]
We recall that the overline denotes the time average. The steepest descent direction for the bathymetry is a function of $x$ only, and a constant for the Manning's friction coefficient. At each iteration step, once $p_k$ from equation \eqref{eq:pk} has been computed, we choose the coefficient $\alpha_k$ in equation \eqref{eq:SteepestDescent} as follows. One starts with an initial value ($\alpha_k = 0.5$ here) and computes $(B_{k+1},n_{k+1})$ according to equation \eqref{eq:SteepestDescent}. One then calculates the error in the velocity with that estimated bathymetry. If the error decreases when $\alpha_k$ is reduced by a certain factor ($0.8$ here), we keep reducing it until the error does not decrease anymore. \\
\noindent
{\bf Note:} When either the Manning's friction coefficient or the bathymetry are known, we can estimate the other parameter by considering only one of the entries in $p_k$.
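The complete iteration can be sketched as follows in Python. Here solve\_direct, solve\_adjoint, gradient and misfit are placeholders for the Roe solvers of this section, the gradient assembly of Theorem \ref{th:Lagrange} and the evaluation of the velocity misfit; the loop implements the step-size modulation described above.
\begin{verbatim}
import numpy as np

def descend(B, n, solve_direct, solve_adjoint, gradient, misfit,
            tol=1e-6, max_iter=100, alpha0=0.5, shrink=0.8):
    # Continuous descent with empirical step-size modulation.
    for _ in range(max_iter):
        h, u = solve_direct(B, n)
        lam, mu = solve_adjoint(B, n, h, u)
        gB, gn = gradient(h, u, mu, B, n)
        if max(np.max(np.abs(gB)), abs(gn)) < tol:
            break
        # Start from alpha0 and keep shrinking by the factor
        # 'shrink' while the velocity misfit keeps decreasing.
        alpha = alpha0
        err = misfit(B - alpha * gB, n - alpha * gn)
        while True:
            err_new = misfit(B - shrink * alpha * gB, n - shrink * alpha * gn)
            if err_new >= err:
                break
            alpha, err = shrink * alpha, err_new
        B, n = B - alpha * gB, n - alpha * gn
    return B, n
\end{verbatim}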
\section{Test problems}
\label{sec:TestProblems}
The above technique is numerically tested in this section. The direct problem often involves bathymetries consisting of a bump or a channel throat through which the fluid passes. Depending on the parameter regime, the flow may accelerate/decelerate and reduce/increase its cross-sectional wet area as it passes through the bump and/or throat. We consider here different situations to show the merits and robustness of the algorithm.\\
\noindent
{\bf Numerical setup and boundary conditions}\\
The inverse problem consists of inferring the bathymetry $B$ and the Manning's friction coefficient $n$ from the transient velocity $u(x,t)$. We assume that the velocity is observed at all spatial positions and at all times; that is, the velocity data is available at every grid point and time step. For the direct problem, we initially specify the total height $w$ and the velocity $u$. Such data is assumed to be known at $t=0$. At each step, the estimated bathymetry is used to obtain the initial depth $h=w-B$. For the adjoint problem, which is solved backwards in time, we impose zero Dirichlet final conditions.
At the left boundary, a discharge $Q_{\text{left}}$ and a surface elevation $w_{\text{left}}$ are specified at inflow and are extrapolated at outflow for the direct problem. An inflow/outflow at the left boundary occurs when the eigenvalues of the coefficient matrix are positive/negative. At the right boundary, a discharge $Q_{\text{right}}$ and a surface elevation $w_{\text{right}}$ are specified at inflow and extrapolated at outflow. An inflow/outflow at the right boundary occurs when the eigenvalues of the coefficient matrix are negative/positive. The adjoint problem is used to compute the gradient $\nabla J$. Zero Neumann boundary conditions are implemented for the adjoint variables $\lambda,\mu$. We assume we know the bathymetry elevation at the boundaries, and we prescribe them to be $B_{\text{in}}=B_{\text{out}}=0$ at both ends.
We quantify the error and the relative error with the $L^\infty$ norm, given by
\[
e = \sup_x |B_{\text{approx}}(x) - B_{\text{exact}}(x) | , \text{ and } e_{\text{rel}} = \frac{e}{\sup_x | B_{\text{exact}} (x)| },
\]
where $B_{\text{approx}}$ and $B_{\text{exact}}$ are the approximated and exact bathymetries.
The time window $[0,T]$ where both the direct and the adjoint problems are solved needs to be chosen carefully. The end time $T$ needs to be large enough to provide the information required to invert the problem. However, if $T$ is too large, it induces strong interactions with the boundary, where the bathymetry is prescribed. In any case, we have found that the bathymetry estimation is not very sensitive to the end time. We specify the end time $T$ in each numerical test.
\subsection{Bathymetry bump}
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[width=0.49 \textwidth]{Fig2a.eps}
\includegraphics[width=0.49 \textwidth]{Fig2b.eps}
\caption{\label{fig:BumpSinSubcr} Left panel: 3D view of the channel. Top right panel: Exact total height $w=h+B$ (blue solid line) and bathymetry (black solid line) are shown. Bottom right panel: Velocity as a function of $x$. The solution corresponds to a steady state with discharge $Q_{\text{in}} = 1$ at inflow (left boundary) and total height $w_{\text{out}} = 1.5$ at outflow (right boundary).}
\end{figure}
\end{center}
The synthetic data in this first numerical test is obtained with a particular choice of a bathymetry elevation and channel's geometry. The exact bathymetry to be estimated is given by
\begin{equation}
\label{eq:BumpBExact}
B_{\text{exact}}(x) =
\left\{
\begin{array}{lcl}
\frac{1}{2} \left( 1-\left( 4 \left( x-\frac{1}{2} \right) \right)^2 \right) + 0.02 \sin \left( 16 \pi \left( x - \frac{1}{4} \right) \right) & \text{ if } x \in [0.25 , 0.75]\\
0 & \text{ if } x \in [0,1] \setminus [0.25, 0.75].
\end{array}
\right.
\end{equation}
The channel's width is given by
\begin{equation}
\label{eq:sigma}
\sigma(x) = 2 \min \left( 2.4 ( x - 0.5)^2+0.35 , 0.5 \right).
\end{equation}
The 3D view of the channel is shown in the left panel of Figure \ref{fig:BumpSinSubcr}. Following \cite{khan2014modeling}, the Manning's friction coefficient is fixed to $n=0.009 \text{ s}\text{ m}^{-1/3}$ in this numerical test, and the bathymetry is the only model parameter to estimate.
We first test the algorithm in a simple setting. In particular, the velocity considered here corresponds to a subcritical smooth steady state (in the absence of friction). Smooth steady states are characterized by two invariants. Namely, the discharge $Q = \sigma hu$ and the energy $E=\frac{1}{2} u^2 + g(h+B)$ are both constant throughout the domain when the friction coefficient $n$ vanishes. One could use such invariants to estimate the bathymetry. However, the algorithm presented here is designed for transient flows as well, and the setting in this numerical experiment is meant to test the accuracy of the approximations of the bathymetry. Given the bathymetry $B$, a corresponding steady state may be computed by specifying two quantities. Here we specify the discharge $Q_{\text{in}}=1$ at inflow at the left boundary and the total height $w_{\text{out}}= B_{\text{out}}+h_{\text{out}}=1.5$ at outflow at the right boundary. Figure \ref{fig:BumpSinSubcr} shows the total height $w=h+B$ and the exact bathymetry $B$ in the top right panel, while the bottom right panel shows the exact velocity that is observed.
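Given $Q$, $E$, $\sigma$ and $B$, the depth of a smooth frictionless steady state solves a cubic equation; a minimal Python sketch (ours) that selects the subcritical branch reads:
\begin{verbatim}
import numpy as np

def subcritical_depth(Q, E, B, sigma, g=9.81):
    # From Q = sigma*h*u and E = u^2/2 + g*(h + B), eliminating u gives
    #   g*h^3 + (g*B - E)*h^2 + Q^2/(2*sigma^2) = 0.
    # The subcritical branch is the largest positive real root.
    roots = np.roots([g, g * B - E, 0.0, Q**2 / (2.0 * sigma**2)])
    real = roots[np.isreal(roots)].real
    return np.max(real[real > 0.0])
\end{verbatim}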
\begin{center}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49 \textwidth]{Fig3a.eps}
\includegraphics[width=0.49 \textwidth]{Fig3b.eps}\\
\caption{\label{fig:BumpSolution} The left panel shows the exact bathymetry (solid blue line) given by equation \eqref{eq:BumpBExact}, the initial guess $B_0 = 0$ (black dashed line) and the intermediate steps (dotted lines in different colors and mark sizes). The right panel shows the error $|B_n - B_{\text{exact}}|$ at steps $16,32,48$ and $96$.}
\end{figure}
\end{center}
In general, the initial guess needs to be close enough to the target in order to converge to the correct state. However, here we test the robustness of the algorithm by considering the initial guess $B_0 = 0$. The numerical results for the inverse problem are given in Figure \ref{fig:BumpSolution}. In the left panel, the exact solution is identified with the solid blue line, while the initial guess is denoted by the black dashed line. The final approximation is computed using 96 steps of the algorithm described in Section \ref{sec:SearchMethod} and a resolution of 200 grid points. The direct and adjoint problems are solved in the time window $[0,T]$ with $T=0.2$. In the panel, we show the estimation of the bathymetry at steps 0, 16, 32, 48, and 96. The final step is not easy to distinguish because it is very close to the target. The error is shown in the right panel. The maximum error is $e=8.1 \times 10^{-3}$, which corresponds to a relative error of $e_{\text{rel}} = 1.56 \%$. The maximum error is located near the left boundary. Away from that region, the error is reduced to $3.1 \times 10^{-3}$, which corresponds to a relative error of $0.59 \%$.
\subsection{A transient flow}
\label{sec:TestTransientFlow}
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[width=0.49 \textwidth]{Fig4a.eps}
\includegraphics[width=0.49 \textwidth]{Fig4b.eps}
\caption{\label{fig:BumpSinNonStat} Left panel: Exact bathymetry (blue solid line), the initial guess (black dashed line), and the intermediate steps in the algorithm (dotted lines). Right panel: Error in steps 1,2,3 and 45. }
\end{figure}
\end{center}
The ultimate goal in this work is to estimate the bathymetry in transient flows. To that end, the observed velocity in this numerical test is time dependent, and the synthetic data is constructed as follows. Given the bathymetry in equation \eqref{eq:BumpBExact}, the initial condition in the direct problem is the smooth supercritical steady state associated to the discharge $Q_{\text{steady}}=8$ and the total height $w_{\text{out}}= B_{\text{out}}+h_{\text{out}}=1$. However, the discharge imposed at the left boundary is $Q_{\text{in}} = 9.6$, which is $20\%$ higher than that corresponding to the steady state. The resulting transient flow consists of a perturbation of a steady state: a right-going shockwave propagates and passes through the bump in the bathymetry. This generates a time-dependent velocity which is used as the synthetic data in the adjoint problem.
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[width=0.49 \textwidth]{Fig5a.eps}
\includegraphics[width=0.49 \textwidth]{Fig5b.eps}\\
\includegraphics[width=0.49 \textwidth]{Fig5c.eps}
\includegraphics[width=0.49 \textwidth]{Fig5d.eps}\\
\includegraphics[width=0.49 \textwidth]{Fig5e.eps}
\includegraphics[width=0.49 \textwidth]{Fig5f.eps}
\caption{\label{fig:TransientFlow} Left column: Approximated (blue dashed line) and exact (red dashed line) total heights at times $t=0.0036, 0.0355$, and $0.08$ in descending order, for the transient flow of Section \ref{sec:TestTransientFlow}. The exact (solid black line) and approximated (dotted red line) bathymetries are also shown. The steady state height is included to highlight the difference compared to the transient flow (dotted black line). Right column: approximated (dotted red line) and exact (dashed blue line) velocities. }
\end{figure}
\end{center}
The Manning's friction coefficient is fixed to $n=0.009 \text{ s}\text{ m}^{-1/3}$ in this numerical test and the end time is $T=0.2$. Figure \ref{fig:BumpSinNonStat} shows the approximated bathymetry at steps 1, 2, 3, and 45 in the left panel. For completeness, the error is exhibited in the right panel. The maximum error in the last step (away from the left boundary) is $e=8.96 \times 10^{-4}$, which corresponds to a relative error of $e_{\text{rel}} = 0.17\%$. Flows in realistic applications may not be in steady state equilibrium, and estimating the bathymetry from transient velocity measurements is considerably more challenging than when the flow corresponds to a steady state. The present algorithm has proven to be efficient in those circumstances.
Figure \ref{fig:TransientFlow} shows the time evolution of the total height (left column) and velocity (right column) at times $t=0.0036, 0.0355$, and $0.08$. The total height is computed using both the exact and the approximated bathymetries. The exact total height is denoted by the dashed blue line, while the approximated solution is identified with the dashed red line. The steady state total height is also shown for reference (black dotted line). The exact and approximated topographies are denoted by the solid black and dotted red lines, respectively. The approximated solution is very accurate even in this time-dependent problem, and the differences are hard to distinguish.
\subsection{Bump in a channel with varying width, friction and discontinuous top surface}
\label{sec:BumpDiscSurf}
\begin{center}
\begin{figure}[h]
\centering
\includegraphics[width=0.48 \textwidth]{Fig6a.eps}
\includegraphics[width=0.50 \textwidth]{Fig6b.eps}
\caption{\label{fig:InverseCodeSigmaBumpDiscSurf} Top left panel: Exact bathymetry (blue solid line), the initial guess (black dashed line), and the intermediate steps in the algorithm (dotted lines). Bottom left panel: Error in steps 1,2,3,4,5 and 30. Right panel: 3D view of the channel (yellow surface), the exact bathymetry (black surface), and the initial surface elevation $w = B+h$ (blue surface). }
\end{figure}
\end{center}
Discontinuous weak solutions of hyperbolic systems like the shallow water equations may appear in finite time. The robustness of the algorithm is tested here by estimating the bathymetry from velocity data with discontinuities. The exact bathymetry is given by
\[
B(x) =
\left\{
\begin{array}{lcl}
\frac{1}{4} \left[ \cos \left( 10 \pi \left( x-\frac{1}{2} \right) \right) +1\right] & \text{ if } & 0.4 < x \le 0.6, \\
0 & & \text{otherwise,}
\end{array},
\right.
\]
and the width $\sigma$ is given by equation \eqref{eq:sigma}.
The computed state in this numerical test is a steady state with a shockwave in the absence of friction. However, here the Manning's friction coefficient is set to $n=0.009 \text{ s}\text{ m}^{-1/3}$, following \cite{khan2014modeling}. The initial heights at the left and right boundaries are $w_{\text{in}} = 1.1, w_{\text{out}} = 0.75$. An initial shock is placed at $x_{\text{shock}}=0.65$ with left and right states given by $w_{\text{left}} = 1.417, u_{\text{left}} = 8.085,$ and $w_{\text{right}} = 0.931, u_{\text{right}} = 12.198$. The energy is initially piecewise constant, satisfying $E_{\text{in}} = 46.59$ for $x\le x_{\text{shock}}$, and $E_{\text{out}} = 83.52$ for $x > x_{\text{shock}}$. This shockwave is stationary in the absence of friction. The right panel of Figure \ref{fig:InverseCodeSigmaBumpDiscSurf} shows the 3D view of the channel (yellow surface), the bathymetry $B$ (black surface), and the initial height's elevation $w$ (blue surface).
The top left panel of Figure \ref{fig:InverseCodeSigmaBumpDiscSurf} shows the estimated bathymetry given by the algorithm in Section \ref{sec:SearchMethod} at steps 1, 2, 3, 4, 5 and 30. Here we use a time window $[0,T]$ with $T=0.2$. The initial state is $B_0 =0$. This is significantly far from the target, which consists of a bump at the center of the domain. The first step already has a bump-like structure, with a small jump near the shockwave. At step 5, the bathymetry is close to the target, and at the final step 30, the approximation lies virtually on top of the exact bathymetry. The error as a function of $x$ is shown in the bottom left panel for different steps, where the convergence to the exact solution is evident. At step 30, the error is below $e=4.2 \times 10^{-3}$, which corresponds to a relative error of $e_{\text{rel}} = 0.85 \%$ of the maximum bathymetry's elevation. We note that the algorithm works well even in the presence of shockwaves and friction.
\subsection{Bathymetry and Manning's friction coefficient inversion}
\begin{center}
\begin{figure}[h!]
\centering
\includegraphics[width=0.49 \textwidth]{Fig7a.eps}
\includegraphics[width=0.49 \textwidth]{Fig7b.eps}
\caption{\label{fig:InverseCodeSigmaBumpManningDiscSurf} Top left panel: Exact bathymetry (blue solid line), the initial guess (black dashed line), and the intermediate steps in the algorithm (dotted lines). Bottom left panel: Error in steps 1,101,201,301,401 and 2000. Top right panel: The exact topography (solid blue line) and the exact total height (black dashed line) are displayed. Bottom right panel: The approximated Manning's friction coefficient is shown as a function of iteration step (solid blue line), and the exact coefficient is included for reference in the dashed red line.}
\end{figure}
\end{center}
In this test, we invert both the bathymetry and the Manning's friction coefficient simultaneously. Although the value of $n$ in Section \ref{sec:BumpDiscSurf} is realistic, it may not have a strong effect on the flow in the time window considered here. As a sensitivity test, we have increased the target value of the exact Manning's friction coefficient to $n_{\text{exact}}=0.027 \text{ s}\text{ m}^{-1/3}$, which is three times larger than the previous one. We chose a smaller time window with $T=0.02$. However, in this case it took 2000 iteration steps to converge to the exact solution with an error of $e=3.8 \times 10^{-3}$, which corresponds to a relative error of $e_{\text{rel}} = 0.77 \%$ of the maximum bathymetry's elevation.
Using the same symbols as in Figure \ref{fig:InverseCodeSigmaBumpDiscSurf}, the top left panel of Figure \ref{fig:InverseCodeSigmaBumpManningDiscSurf} shows the exact bathymetry, and the approximated bathymetry at the initial and intermediate steps 101,201,301, 401 and 2000. The error is shown in the bottom left panel. Although it took many more steps, the error at the final step is very small.
We can simultaneously estimate both the bathymetry's elevation and the Manning's friction coefficient with the algorithm in Section \ref{sec:SearchMethod}. The second component of the gradient $\nabla J$ contains the approximated friction coefficient $n$ as a factor. As a result, the initial value cannot be zero, because $n=0$ is an equilibrium value of the iteration. We set the initial value of the Manning's friction coefficient to $n_0 = 0.0027 = \frac{1}{10} n_{\text{exact}}$. The bottom right panel of Figure \ref{fig:InverseCodeSigmaBumpManningDiscSurf} shows the estimated Manning's friction coefficient as a function of step number. The estimated value is already close to the exact value after about 20 steps. We only show 200 steps to display the variations in the early steps; the plot for 2000 steps (not shown) shows convergence to the exact value.
The top right panel of Figure \ref{fig:InverseCodeSigmaBumpManningDiscSurf} shows the bathymetry $B$ and the initial surface elevation with a shockwave used in the present numerical test and the previous Section \ref{sec:BumpDiscSurf}. The algorithm provides very accurate results in flows with or without friction, steady or transient states, with initial guesses that are significantly far from the exact solutions.
\subsection{Manning's coefficient inversion in the presence of wet-dry states}
\label{sec:DamBreakReal}
The numerical test in this last section is motivated by laboratory experiments of dam breaks conducted in converging/diverging channels. See, for instance, Chapter 5 of \cite{khan2014modeling} for a list of experiments in channels with different bed slopes and different wet and dry conditions. The experiments in \cite{khan2014modeling}, Section 5.3.4, were taken from \cite{bellos1992experimental}. The channel has vertical walls and width variations along the $x$-axis, approximately given by the graph in the left panel of Figure \ref{fig:DamBreakRealCaseSigmaHeight}. The channel's length is 21.2 m, and its width is 1.4 m from 0 to 5 m, and from 16.8 to 21.2 m. The minimum width is 0.6 m at $x_m = 8.5 \text{ m}$.
\begin{figure}[h!]
\begin{center}
{\includegraphics[width=0.39 \textwidth]{Fig8a.eps}}
{\includegraphics[width=0.59 \textwidth]{Fig8b.eps}}
\end{center}
\caption{\label{fig:DamBreakRealCaseSigmaHeight} Left panel: Approximated channel's width as a function of $x$ (top view of the channel). The points $P_1$ and $P_2$ indicate the locations where the depth was measured in the corresponding experiment in \cite{khan2014modeling}. Right panel: Total height $w=h+B$ at time $t=4\text{ s}$ with initial conditions \eqref{eq:InitCondRealCase} and the boundary conditions below. }
\end{figure}
In the experiment, the flow is initially given by
\begin{equation}
\label{eq:InitCondRealCase}
u(x,t=0) = 0, \quad w(x,t=0) =
\left\{
\begin{array}{lcl}
0.3 \text{ m} & \text{ if } & x < x_m = 8.5 \text{ m},\\
H_{\text{out}} = 10^{-5} \text{ m} & & \text{ otherwise},
\end{array}
\end{array}
\right.
\end{equation}
which corresponds to a flow initially at rest, where the downstream part of the channel is dry (a small threshold value has been used). The gate is assumed to be removed instantaneously. The left boundary is a solid wall: we have used zero Dirichlet left boundary conditions for the velocity and Neumann left boundary conditions for the height. The right boundary extrapolates the data at outflow and imposes $H_{\text{out}}$ at inflow. Once the dam breaks, the flow evolves as illustrated in the right panel of Figure \ref{fig:DamBreakRealCaseSigmaHeight} at $t=4 \text{ s}$. The resolution here is $\Delta x = 21.2 \text{ m}/200$.
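For concreteness, the discrete initial state on a uniform grid can be set up as follows; this is a minimal sketch of \eqref{eq:InitCondRealCase} with the wet-dry threshold, not the full solver.
\begin{verbatim}
import numpy as np

L, N = 21.2, 200        # channel length [m] and number of cells
x_m, H_out = 8.5, 1e-5  # gate position [m] and dry-state threshold [m]
dx = L / N
x = (np.arange(N) + 0.5) * dx        # cell centers

w0 = np.where(x < x_m, 0.3, H_out)   # 0.3 m upstream, "dry" downstream
u0 = np.zeros(N)                     # flow initially at rest
hu0 = w0 * u0                        # discharge (B = 0, so h = w)
\end{verbatim}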
The bathymetry is flat ($B=0$), so we are interested in estimating the Manning's friction coefficient, which is unknown. Unfortunately, the depths at two locations ($P_1$ and $P_2$ in Figure \ref{fig:DamBreakRealCaseSigmaHeight}) are the only quantities reported in this experiment. In \cite{hernandez2016central}, it was found that $n=0.0084 \text{ s}\text{ m}^{-1/3}$ is a good approximation for the Manning's friction coefficient. Here we create synthetic velocity data based on this value and use those velocity measurements to approximate the friction coefficient. The purpose of this numerical test is to show that the algorithm works well even in the presence of wet-dry states, in connection with the above experiment.
\begin{figure}[h!]
\begin{center}
{\includegraphics[width=0.49 \textwidth]{Fig9a.eps}}
{\includegraphics[width=0.49 \textwidth]{Fig9b.eps}}
\end{center}
\caption{\label{fig:DamBreakRealCaseComp} Left panel: Convergence of the Manning's friction coefficient to the correct value. The approximations are plotted for different steps. Right panel: comparison between the experimental data and the numerical approximation of the water's height obtained by the present schemes at two particular locations in $x$, versus time. The left point is located at the left boundary $P_1 = 0$, and the right point is located at $P_2 = 18.5 \text{ m}$.}
\end{figure}
The approximations of the Manning's coefficient are shown for different steps in the left panel of Figure \ref{fig:DamBreakRealCaseComp}. One can observe that the Manning's coefficient is very close to the exact value after 20 steps of the algorithm. The approximated values were obtained using a time window $[0,T]$ with $T=4 \text{ s}$, before the flow reaches the boundaries.
In the experimental data in \cite{bellos1992experimental,khan2014modeling}, the height was measured in time at two particular locations: one at the left boundary $P_1 = 0$, and the other one near the right boundary $P_2 = 18.5 \text{ m}$. The right panel in Figure \ref{fig:DamBreakRealCaseComp} compares the experimental and numerical values. We observe a good agreement, especially at the location $P_2$ near the right boundary, for the entire simulation. The numerical approximation at $P_1$ is accurate during the first part of the simulation and overestimates the measured height during the second half. Boundary conditions and the adjustment of the Manning coefficient might affect the predictions.
\section{Conclusions}
In this work we formulated a constrained optimization problem to estimate the channel bed bathymetry and the Manning's friction coefficient from available data of the fluid's velocity. The continuous problem is first presented and analyzed. A quadratic cost functional, which incorporates the shallow water equations as constraints by means of Lagrange multipliers, is minimized using its Fr\'echet derivative. A continuous descent method is formulated to obtain the minimal solution. Both the direct and the adjoint systems are solved by a second-order Roe-type upwind numerical scheme; however, the algorithm works with any other efficient and robust numerical scheme. We estimate the bathymetry of transient flows as well as the Manning's friction coefficient. Several benchmark problems are presented in order to verify the numerical performance of the proposed method. A simple steady-state case is first formulated to verify the reliability of the algorithm before transient flows are treated. In this first case we considered a steady-state velocity and a bathymetry bump with a sinusoidal perturbation. In a second test we considered a transient flow consisting of a right-going perturbation of a steady state. Finally, we simultaneously estimated both the bathymetry and the Manning's friction coefficient in a channel with varying width and a discontinuous top surface, and a numerical test motivated by experimental data was presented to estimate the Manning's coefficient in the presence of wet-dry states. We obtained very accurate approximations of the bathymetry in all cases.
We have provided an algorithm that works very well even in transient flows in channels with vertical walls of varying width, discontinuous top surfaces, and even wet-dry states. The need for a good initial guess and for the empirical initial coefficient ($\alpha_k$) in the search direction are common limitations of approaches like the one presented here. However, we have shown that our algorithm is not very sensitive to those parameters, and we provided criteria to choose the best coefficient $\alpha_k$ together with conditions to stop the algorithm. Furthermore, our setting is flexible and may be adapted to estimate other parameters or systems. \\
\noindent
{\bf Acknowledgements:}
Research supported in part by grants UNAM-DGAPA-PAPIIT IN113019 \& Conacyt A1-S-17634.
\section{Introduction}
The absence of arbitrage is of fundamental interest in many areas of financial mathematics.
Our goal is to provide a systematic discussion for a financial market with one risky asset modeled via its discounted price process \(P = (P_t)_{t \in [0, T]}\), which we assume to be either the stochastic exponential of an It\^o process, i.e. to have dynamics
\begin{align}\label{eq: SEM}
\mathrm{d} P_t = P_t (b_t \mathrm{d} t + \sigma_t \mathrm{d} W_t),
\end{align}
or to be a positive diffusion with Markov switching, i.e. to have dynamics
\begin{align}\label{eq: MSM}
\mathrm{d} P_t = b(P_t, \xi_t) \mathrm{d} t + \sigma (P_t, \xi_t) \mathrm{d} W_t,
\end{align}
where \(\xi = (\xi_t)_{t \in [0, T]}\) is a continuous-time Markov chain and \(W = (W_t)_{t \in [0, T]}\) is a Brownian motion.
For semimartingale markets the classical concepts of no arbitrage are the notions of \emph{no free lunch with vanishing risk (NFLVR)} as defined by Delbaen and Schachermayer \cite{DelbaenSchachermayer94,DelbaenSchachermayer98} and \emph{no feasible free lunch with vanishing risk (NFFLVR)} as defined by Sin \cite{sin1996strictly}.
The difference between (NFLVR) and (NFFLVR) is captured by the concept of a \emph{financial bubble} in the sense of Cox and Hobson \cite{Cox2005}.
For our market it is well-known that (NFLVR) is equivalent to the existence of an \emph{equivalent local martingale measure (ELMM)}, see \cite{DelbaenSchachermayer98}, and that (NFFLVR) is equivalent to the existence of an \emph{equivalent martingale measure (EMM)}, see \cite{Cherny2007,sin1996strictly,Yan97}.
The no arbitrage condition used in the stochastic portfolio theory of Fernholz \cite{fernholz2002stochastic} is \emph{no relative arbitrage (NRA)}. In complete markets Fernholz and Karatzas \cite{fernholz2010} showed that (NRA) is equivalent to the existence of a \emph{strict martingale density (SMD)}.
A weaker concept is \emph{no unbounded profit with bounded risk (NUPBR)}, which is known to be equivalent to the existence of a \emph{strict local martingale density (SLMD)}, see \cite{Choulli1996}. (NUPBR) is considered to be the minimal notion needed for portfolio optimization, see \cite{Karatzas2007}.
The first findings of this article are integral tests for the existence and non-existence of SMDs, ELMMs and EMMs.
For \eqref{eq: SEM} the tests are formulated in terms of Markovian upper and lower bounds for the volatility coefficient \(\sigma\) and for \eqref{eq: MSM} the tests depend on \(x \mapsto \sigma (x, j)\) with \(j\) in the state space of the Markov chain \(\xi\).
The main novelty of our results is that they apply in the presence of multiple sources of risk.
Besides the Markov switching framework, this is for instance the case in diffusion models with a change point, which represent a change of the economic situation caused, for instance, by a sudden adjustment of the interest rates or the default of a major financial institution. In general, the question whether (NFLVR) and/or (NFFLVR) hold for a model with a change point is difficult, see \cite{FONTANA20143009} for some results in this direction. Our integral tests provide explicit criteria, which are easy to verify.
For many applications of the Markov switching model \eqref{eq: MSM} it is important to know how the change to an ELMM affects the dynamics of the Markov chain \(\xi\).
As a second contribution, we study this question from a general perspective for independent sources of risk modeled via martingale problems. In particular, we show that the \emph{minimal local martingale measure (MLMM)}, see \cite{doi:10.1111/j.1467-9965.1992.tb00027.x}, preserves the independence and the laws of the sources of risk. To our knowledge, this property has not been reported in the literature.
A third contribution of this article is a set of integral tests for the martingale property of certain stochastic exponentials driven by It\^o processes or switching diffusions. These characterizations are our key tools to study the absence of arbitrage.
We comment on related literature.
For continuous semimartingale models the absence of arbitrage has been studied by Criens \cite{criens17b}, Delbaen and Shirakawa \cite{Delbaen2002}, Lyasoff \cite{MAFI:MAFI530} and Mijatovi\'c and Urusov \cite{MU(2012)}. Criens, Delbaen and Shirakawa, and Mijatovi\'c and Urusov proved integral tests for the existence of SMDs, ELMMs and EMMs in diffusion frameworks. Our results can be viewed as generalizations to an It\^o process or Markov switching framework. For a model comparable to \eqref{eq: SEM}, Lyasoff proved that the existence of an ELMM is determined by the equivalence of a probability measure to the Wiener measure. The structure of this characterization is very different from that of our results.
In Section \ref{sec: comments} below we comment in more detail on the results in \cite{criens17b, Delbaen2002,MAFI:MAFI530,MU(2012)}.
The martingale property of stochastic exponentials is under frequent investigation. At this point we mention the articles of Blanchet and Ruf \cite{doi:10.1080/15326349.2015.1114891}, Cheridito et al. \cite{CFY}, Criens \cite{criens17b} and Kallsen and Muhle--Karbe \cite{KMK}. Criens used arguments based on Lyapunov functions and contradictions to verify the martingale property of certain stochastic exponentials in a multi-dimensional diffusion setting. We transfer these techniques to a general It\^o process setting.
Cheridito et al. and Kallsen and Muhle--Karbe related the martingale property of a stochastic exponential to an explosion probability via a method based on the concept of \emph{local uniqueness} as defined in \cite{JS}. This technique traces back to work of Jacod and M\'emin \cite{JM76} and Kabanov et al. \cite{KLS-LACOM1,KLS-LACOM2}.
We use a similar argument for the Markov switching setting. The main difficulties are the proofs of explosion criteria and local uniqueness.
Both approaches have a close relation to the work of Blanchet and Ruf, where a tightness criterion for the martingale property of non-negative local martingales has been proven. The connection between Lyapunov functions, explosion and tightness is for instance explained in \cite{doi:10.1080/17442508.2019.1657430}.
Let us also comment on related problems and extensions of our results: In case the discounted price process \(P\) is a positive It\^o process of the type
\[
\mathrm{d} P_t = b_t \mathrm{d} t + \sigma_t \mathrm{d} W_t,
\]
our results on the martingale property of stochastic exponentials can be used to obtain characterizations for no arbitrage with a similar structure as for the model \eqref{eq: MSM}.
Moreover, in case \(P\) is the stochastic exponential of a diffusion with Markovian switching, i.e.
\begin{align*}
\mathrm{d} P_t &= P_t \mathrm{d} S_t, \\
\mathrm{d} S_t &= b(S_t, \xi_t) \mathrm{d} t + \sigma (S_t, \xi_t) \mathrm{d} W_t,
\end{align*}
our martingale criteria yield conditions for no arbitrage with a similar structure as for \eqref{eq: SEM}.
It is also interesting to ask about multi-dimensional models. In this case, results in the spirit of \cite{criens17b} can be proven by similar arguments as used in this article. However, the conditions are rather complicated to formulate and space consuming. Therefore, we restrict ourselves to the one-dimensional case.
The article is structured as follows. In Section \ref{sec: MP SE} we give conditions for the martingale and strict local martingale property of certain stochastic exponentials.
In Section \ref{sec: Arb GM} we study the model \eqref{eq: SEM} and in Section \ref{sec: MSM} we study the model \eqref{eq: MSM}. In Section \ref{sec: modifying MLMM} we show that the MLMM preserves independence and laws for sources of risk and we explain how the MLMM can be modified to affect the law of an additional source of risk. The proofs are collected in the remaining sections.
\section{Martingale property of Stochastic Exponentials}\label{sec: MP SE}
Fix a finite time horizon \(0 < T < \infty\)
and let \((\Omega, \mathcal{F}, \mathbf{F}, \mathds{P})\) be a complete filtered probability space with right-continuous and complete filtration \(\mathbf{F} = (\mathcal{F}_t)_{t \in [0, T]}\). Moreover, fix a state space \(I \triangleq (l, r)\) with \(- \infty \leq l < r \leq + \infty\).
In the following two sections we provide conditions for the martingale and strict local martingale property of certain stochastic exponentials.
\subsection{The general case}\label{sec: GC}
Assume that \(S = (S_t)_{t \in [0, T]}\) is an \(I\)-valued It\^o process with deterministic initial value \(S_0 \in I\) and dynamics
\[
\mathrm{d} S_t = b_t \mathrm{d} t + \sigma_t \mathrm{d} W_t,
\]
where \(W = (W_t)_{t \in [0, T]}\) is a one-dimensional Brownian motion and \(b = (b_t)_{t \in [0, T]}\) and \(\sigma = (\sigma_t)_{t \in [0, T]}\) are real-valued progressively measurable processes. It is implicit that \(b\) and \(\sigma\) are such that the integrals are well-defined, i.e. a.s.
\[
\int_0^T \big(|b_s| + \sigma^2_s \big) \mathrm{d} s < \infty.
\]
We assume that \(\llambda \otimes \mathds{P}\)-a.e.
\(
\sigma \not = 0,
\)
which later corresponds to the assumption that we consider an asset price process with non-vanishing volatility.
Let \(c = (c_t)_{t \in [0, T]}\) be a real-valued progressively measurable process such that a.s.
\[
\int_0^T c^2_s \mathrm{d} s < \infty,
\]
and let \(N = (N_t)_{t \in [0, T]}\) be a local martingale such that a.s. \(\Delta N \geq - 1\) and \([N, W] = 0\).
We ask for conditions under which the non-negative local martingale
\begin{align}\label{eq:Z}
Z \triangleq \mathcal{E} \Big(N + \int_0^\cdot c_s \mathrm{d} W_s \Big),
\end{align}
is a true or a strict local martingale.
Here, \(\mathcal{E}\) denotes the stochastic exponential.
The structure of \(Z\) is very important in mathematical finance, because \(Z\) is the prototype of a strict local martingale density, see Lemma \ref{lem: decom} below.
Let \(\underline{a}, \overline{a} \colon I \to (0, \infty), \underline{u}, \overline{u} \colon I \to \mathbb{R}\) and \(\zeta \colon [0, T] \to \mathbb{R}_+\) be Borel functions such that \[\frac{1}{\overline{a}} + \frac{1}{\underline{a}} + |\overline{u}| + |\underline{u}| \in L^1_\textup{loc} (I), \qquad \zeta \in L^1([0, T]).\]
In case \((f, g)\) is one of the pairs \((\underline{u}, \underline{a}), (\underline{u}, \overline{a}), \dots\) we set
\begin{align}\label{eq: v1}
v(f, g)(x) \triangleq \int_{x_0}^x \exp \Big( - \int_{x_0}^y 2 f(z) \mathrm{d} z \Big) \int_{x_0}^y \frac{2 \exp \big(\int_{x_0}^u 2 f(z) \mathrm{d} z\big)}{g(u)} \mathrm{d} u \, \mathrm{d} y,\quad x \in I,
\end{align}
where \(x_0 \in I\) is fixed.
Let \(l_n \searrow l, r_n \nearrow r\) be sequences such that \(l < l_{n+1} < l_n < r_n < r_{n +1} < r\).
The first main result of this section is the following:
\begin{theorem}\label{theo: mart Ito}
Assume the following:
\begin{enumerate}
\item[\textup{(M1)}] The sequence \[
\tau_n \triangleq \inf (t \in [0, T] \colon S_t \not\in (l_n, r_n)), \quad n \in \mathbb{N},
\]
is a localizing sequence for \(Z\), i.e. \(Z_{\cdot \wedge \tau_n}\) is a martingale for every \(n \in \mathbb{N}\). We use the convention that \(\inf (\emptyset) \triangleq \infty\).
\item[\textup{(M2)}] For \(\llambda \otimes \mathds{P}\)-a.a. \((t, \omega) \in [0, T] \times \Omega\)
\begin{align*}
\sigma^2_t(\omega) &\leq \zeta (t) \overline{a}(S_t(\omega)),\\
\underline{u}(S_t(\omega)) \sigma_t^2 (\omega) &\leq b_t(\omega) + c_t(\omega) \sigma_t(\omega),\\
\overline{u}(S_t(\omega)) \sigma_t^2 (\omega) &\geq b_t(\omega) + c_t(\omega) \sigma_t(\omega).
\end{align*}
\item[\textup{(M3)}] \(
\lim_{x \nearrow r} v(\overline{u}, \overline{a})(x) = \lim_{x \searrow l} v(\underline{u}, \overline{a}) (x) = \infty.
\)
\end{enumerate}
Then, \(Z\) is a martingale.
\end{theorem}
The proof of this theorem is given in Section \ref{sec: pf}.
\begin{remark}
(M3) is independent of the choice of \(x_0\), see \cite[Problem 5.5.28]{KaraShre}.
\end{remark}
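In concrete models, condition (M3) can be checked numerically by evaluating \(v\) on a grid and observing whether the values grow without bound as \(x\) approaches the boundary. The following is a minimal sketch using cumulative trapezoidal rules; the grid size is an assumption to be refined per model.
\begin{verbatim}
import numpy as np

def v(f, g, x0, x, n=20001):
    # numerical evaluation of v(f, g)(x) from its definition via
    # cumulative trapezoidal rules on a uniform grid between x0 and x
    y = np.linspace(x0, x, n)
    two_f = 2.0 * f(y)
    dF = 0.5 * (two_f[1:] + two_f[:-1]) * np.diff(y)
    F = np.concatenate(([0.0], np.cumsum(dF)))      # int_{x0}^y 2 f(z) dz
    itg = 2.0 * np.exp(F) / g(y)
    dI = 0.5 * (itg[1:] + itg[:-1]) * np.diff(y)
    inner = np.concatenate(([0.0], np.cumsum(dI)))  # the inner integral
    outer = np.exp(-F) * inner
    return np.sum(0.5 * (outer[1:] + outer[:-1]) * np.diff(y))

# sanity check: for f = 0 and g = 1 one has v(x) = (x - x0)**2
print(v(lambda y: 0.0 * y, lambda y: 1.0 + 0.0 * y, x0=0.0, x=2.0))  # ~ 4.0
\end{verbatim}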
Next, we provide a counterpart to Theorem \ref{theo: mart Ito}.
Let \(\mathscr{H}\) be the set of all Borel functions \(h \colon \mathbb{R}_+ \to \mathbb{R}_+\) which are starting at zero, are strictly increasing and satisfy
\[
\int_0^\varepsilon \frac{\mathrm{d} z}{h^2(z)} = \infty \text{ for all } \varepsilon > 0,\]
and let \(\mathscr{K}\) be the set of all Borel functions \(\kappa \colon \mathbb{R}_+ \to \mathbb{R}_+\), which are starting at zero, are strictly increasing and concave and satisfy
\[
\int_0^\varepsilon \frac{\mathrm{d} z}{\kappa (z)} = \infty \text{ for all } \varepsilon > 0.\]
In case \((f, g)\) is one of the pairs \((\underline{u}, \underline{a}), (\underline{u}, \overline{a}), \dots\) we say that \((f, g)\) satisfies the \emph{Yamada--Watanabe (YW) conditions}, if for every \(n \in \mathbb{N}\) there exist \(h_n \in \mathscr{H}\) and \(\kappa_n \in \mathscr{K}\) such that for all \(x, y \in [l_n, r_n]\)
\begin{align*}
|g^\frac{1}{2} (x) - g^\frac{1}{2} (y)| &\leq h_n (|x - y|),\\ |g(x) f(x) - g(y) f(y)| &\leq \kappa_n (|x - y|).
\end{align*}
A typical choice is \(h_n(z) = \sqrt{z}\) and \(\kappa_n(z) = \textup{const. } z\), corresponding to locally H\"older-\(\frac{1}{2}\) continuous \(g^{\frac{1}{2}}\) and locally Lipschitz continuous \(g f\).
The second main result of this section is the following:
\begin{theorem}\label{theo: general SLM}
Assume one of the following conditions:
\begin{enumerate}
\item[\textup{(SL1)}]
The pair \((\underline{u}, \underline{a})\) satisfies the YW conditions, for \(\llambda \otimes \mathds{P}\)-a.a. \((t, \omega) \in [0, T] \times \Omega\)
\begin{equation}\label{eq: bounderies}\begin{split}
\underline{a}(S_t(\omega)) &\leq \sigma^2_t(\omega),\\
\underline{u}(S_t(\omega)) \sigma_t^2 (\omega) &\leq b_t(\omega) + c_t(\omega) \sigma_t(\omega),
\end{split}\end{equation}
and \(
\lim_{x \nearrow r} v(\underline{u}, \underline{a})(x) < \infty.
\)
\item[\textup{(SL2)}]
The pair \((\overline{u}, \underline{a})\) satisfies the YW conditions,
for \(\llambda \otimes \mathds{P}\)-a.a. \((t, \omega) \in [0, T] \times \Omega\)
\begin{align*}
\underline{a}(S_t(\omega)) &\leq \sigma^2_t(\omega),\\
\overline{u}(S_t(\omega)) \sigma_t^2 (\omega) &\geq b_t(\omega) + c_t(\omega) \sigma_t(\omega),
\end{align*}
and \(\lim_{x \searrow l} v(\overline{u}, \underline{a})(x) < \infty.\)
\end{enumerate}
Then, \(Z\) is a strict local martingale.
\end{theorem}
The proof of this theorem is given in Section \ref{sec: pf}.
In Section \ref{sec: dis mart theo} below we comment on the assumptions of Theorems \ref{theo: mart Ito} and \ref{theo: general SLM} and related literature.
\subsection{Markov switching case}\label{sec: MG MS}
In this section we consider a special case of the setting from Section \ref{sec: GC} and assume that \(S\) is a switching diffusion.
Before we introduce the setting in detail, we clarify terminology: A process is called a \emph{Feller--Markov chain} if it is a Markov chain which is a Feller process in the sense that the corresponding transition semigroup is a self-map on the space of continuous functions vanishing at infinity. For conditions implying that a Markov chain is Feller--Markov we refer to \cite{anderson2012continuous}. It is also important to stress that whenever we have fixed a filtration and a Markov chain, we presume that the Markov chain is Markovian for the given filtration.
All non-explained terminology for Markov chains, such as \emph{irreducible, recurrent,} etc., can be found in \cite{norris_1997}.
We assume that \(S = (S_t)_{t \in [0, T]}\) is an \(I\)-valued It\^o process with deterministic initial value \(S_0 \in I\) and dynamics
\begin{align}\label{eq: SD}
\mathrm{d} S_t = b(S_t, \xi_t) \mathrm{d} t + \sigma (S_t, \xi_t) \mathrm{d} W_t,
\end{align}
where \(W = (W_t)_{t \in [0, T]}\) is a one-dimensional Brownian motion, \(\xi = (\xi_t)_{t \in [0, T]}\) is a continuous-time irreducible Feller--Markov chain with state space \(J \triangleq \{1, \dots, N\}, 1 \leq N \leq \infty,\) and deterministic initial value \(j_0 \in J\), and \(b \colon I \times J \to \mathbb{R}\) and \(\sigma \colon I \times J \to \mathbb{R}\backslash \{0\}\) are Borel functions such that
\begin{align} \label{eq: int aspp}
\frac{1 + |b(\cdot, j)|}{\sigma^2(\cdot, j)} \in L^1_\textup{loc}(I) \text{ for all } j \in J.
\end{align}
It is implicit that the integrals in \eqref{eq: SD} are well-defined.
We allow \(N = \infty\), in which case \(J = \mathbb{N}\). A process of the type \eqref{eq: SD} is called a \emph{switching diffusion} and the elements of \(J\) are called \emph{regimes}.
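A minimal Euler--Maruyama sketch illustrates this model class for two regimes; the constant jump rate \(q\), the concrete coefficients and the first-order discretization are illustrative assumptions, not exact dynamics.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def simulate_switching_diffusion(b, sigma, q, s0, j0, T=1.0, n_steps=2000):
    # Euler--Maruyama for dS = b(S, xi) dt + sigma(S, xi) dW with a
    # two-regime chain xi switching at rate q
    dt = T / n_steps
    S, j = s0, j0
    path, regimes = [S], [j]
    for _ in range(n_steps):
        if rng.random() < q * dt:   # switch with probability ~ q dt
            j = 1 - j
        S += b(S, j) * dt + sigma(S, j) * np.sqrt(dt) * rng.standard_normal()
        path.append(S)
        regimes.append(j)
    return np.array(path), np.array(regimes)

# e.g. zero drift and a regime-dependent volatility level
path, regimes = simulate_switching_diffusion(
    b=lambda x, j: 0.0, sigma=lambda x, j: (0.1, 0.4)[j],
    q=1.0, s0=0.0, j0=0)
\end{verbatim}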
Let \(c \colon I \times J \to \mathbb{R}\) be a Borel function such that
\begin{align}\label{eq: loc inte assp}
\frac{c(\cdot, j)}{\sigma (\cdot, j)} \in L^2_\textup{loc}(I) \text{ for all } j \in J.
\end{align}
\begin{lemma}\label{lem: c finite}
Almost surely \(\int_0^T c^2(S_s, \xi_s) \mathrm{d} s < \infty\).
\end{lemma}
\begin{proof}
Set \(F \triangleq \{\xi_s \colon s \in [0, T]\}\),
\(m \triangleq \min_{s \in [0, T]} S_s\) and \(M \triangleq \max_{s \in [0, T]} S_s\).
Using that \(\xi\) only makes finitely many jumps in the finite time interval \([0, T]\), the occupation times formula for continuous semimartingales and \eqref{eq: loc inte assp}, we obtain a.s.
\begin{align*}
\int_0^T c^2 (S_s, \xi_s) \mathrm{d} s &= \int_0^T \Big( \frac{c (S_s, \xi_s)}{\sigma (S_s, \xi_s)} \Big)^2 \mathrm{d} [S, S]_s \\&\leq \sum_{j \in F} \int_0^T \Big( \frac{c (S_s, j)}{\sigma (S_s, j)} \Big)^2 \mathrm{d} [S, S]_s
\\&= \sum_{j \in F} \int_{m}^{M} \Big( \frac{c (x, j)}{\sigma (x, j)} \Big)^2 2L^S_T(x) \mathrm{d} x
\\&\leq \max_{y \in [m, M]} 2L_T^S(y) \sum_{j \in F} \int_{m}^{M} \Big( \frac{c (x, j)}{\sigma (x, j)} \Big)^2 \mathrm{d} x < \infty,
\end{align*}
where \(L^S\) denotes the local time of \(S\). The lemma is proven.
\end{proof}
We are interested in the martingale property of the non-negative local martingale
\[
Z \triangleq \mathcal{E} \Big(\int_0^\cdot c(S_s, \xi_s) \mathrm{d} W_s \Big).
\]
This definition coincides with \eqref{eq:Z} for the choices \(c = c(S, \xi)\) and \(N = 0\).
Before we state the main result of this section, we fix some notation.
Because \(L^2_\textup{loc} (I) \subset L^1_\textup{loc} (I)\), \eqref{eq: int aspp} and \eqref{eq: loc inte assp} imply that
\[
\frac{|b (\cdot, j) + c (\cdot, j) \sigma (\cdot, j)|}{\sigma^2(\cdot, j)} \in L^1_\textup{loc}(I) \text{ for all } j \in J.
\]
Thus, we can set
\[
v(x, j) \triangleq \int_{x_0}^x \exp \Big( - \int_{x_0}^y \frac{2(b + c \sigma)(z, j)}{\sigma^2(z, j)} \mathrm{d} z \Big) \int_{x_0}^y \frac{2\exp \big(\int_{x_0}^s \frac{2(b + c \sigma)(z, j)}{\sigma^2(z, j)} \mathrm{d} z\big)}{\sigma^2(s, j)} \mathrm{d} s \, \mathrm{d} y
\]
for \((x, j) \in I \times J\) and a fixed \(x_0 \in I\).
We say that \emph{\(\sigma\) satisfies the Engelbert--Schmidt (ES) conditions for \(j \in J\)} if one of the following holds:
\begin{enumerate}
\item[(ES1)] For every compact set \(K \subset I\) there are Borel functions \(f \colon K \to [0, \infty]\) and \(h \colon \mathbb{R} \to [0, \infty]\) and a constant \(c > 0\) such that the following properties are satisfied:
\begin{enumerate}
\item[(i)] \(\frac{f}{\sigma^2 (\cdot, j)} \in L^1 (K)\).
\item[(ii)] For every neighborhood \(U\) of the origin
\[
\int_U \frac{\mathrm{d} y}{h(y)} = \infty.
\]
\item[(iii)] For all \(x, x + y \in K, y \in (- c, c)\)
\[|\sigma (x + y, j) - \sigma (x, j)|^2 \leq f(x) h(y).\]
\end{enumerate}
\item[(ES2)] For every compact set \(K \subset I\) there are Borel functions \(g \colon K \to \mathbb{R}\) and \(h \colon \mathbb{R} \to [0, \infty]\) and a constant \(c > 0\) such that the following properties are satisfied:
\begin{enumerate}
\item[(i)] \(g\) is increasing.
\item[(ii)] For every neighborhood \(U\) of the origin
\[\int_U \frac{\mathrm{d} y}{h(y)} = \infty.\]
\item[(iii)] For all \(x, x + y \in K, y \in (- c, c)\backslash \{0\}\)
\[|\sigma (x + y, j) - \sigma (x, j)|^2 \leq h(y) \frac{|g(x + y) - g(x)|}{|y|}.\]
\item[(iv)] \(\inf_{x \in K} \sigma (x, j) > 0\).
\end{enumerate}
\end{enumerate}
We say that the Markov chain \(\xi\) is recurrent if it is a recurrent Markov chain when extended to the infinite time interval \(\mathbb{R}_+\).
The following theorem gives an almost complete answer to the question when \(Z\) is a true or strict local martingale. A proof is given in Section \ref{sec: pf MS}.
\begin{theorem}\phantomsection\label{theo: mart MS}
\begin{enumerate}
\item[\textup{(i)}] Suppose that \(c\) is bounded on compact subsets of \(I \times J\),
that \(\sigma\) satisfies the ES conditions for all \(j \in J\) and that
\begin{align}\label{eq: MS M integral test}
\lim_{x \nearrow r} v(x, j) = \lim_{x \searrow l} v(x, j) = \infty \text{ for all } j \in J.
\end{align}
Then, \(Z\) is a martingale.
\item[\textup{(ii)}] Assume that \(\xi\) is recurrent and that there exists a \(j \in J\) such that \(\sigma\) satisfies the ES conditions for \(j\)
and
\begin{align}\label{eq: explosive regime cond}
\lim_{x \nearrow r} v(x, j) < \infty \text{ or } \lim_{x \searrow l} v(x, j) < \infty.
\end{align}
Then, \(Z\) is a strict local martingale.
\end{enumerate}
\end{theorem}
\begin{remark} The proof of Theorem \ref{theo: mart MS} (ii) is based on a contradiction argument. In case \eqref{eq: explosive regime cond} holds and \(Z\) is a martingale, there exists an \(I\)-valued switching diffusion with an explosive regime \(j\). The recurrence of \(\xi\) ensures that this switching diffusion reaches the regime \(j\), which leads to a contradiction. In case the initial regime \(j_0\) is already explosive, more precisely if \(\sigma\) satisfies the ES conditions for \(j_0\) and \(\lim_{x \nearrow r} v(x, j_0) < \infty\) or \(\lim_{x \searrow l} v(x, j_0) < \infty\), the recurrence of \(\xi\) is not needed.
\end{remark}
Noting that \(\xi\) is recurrent in case \(N< \infty\), we obtain the following:
\begin{corollary}\label{coro: suff nece MS}
Suppose that \(c\) is bounded on compact subsets of \(I \times J\), that \(\sigma\) satisfies the ES conditions for all \(j \in J\) and that \(N < \infty\). Then,
\(Z\) is a martingale if and only if \eqref{eq: MS M integral test} holds.
\end{corollary}
\begin{proof}
If \(N < \infty\), the recurrence of \(\xi\) follows from \cite[Theorems 1.5.6, 3.4.1]{norris_1997}. Now, the claim is due to Theorem \ref{theo: mart MS}.
\end{proof}
In financial applications, \(N\) can be interpreted as the number of states of the business cycle and therefore \(N < \infty\) is a reasonable assumption.
\subsection{Comments on related literature}\label{sec: dis mart theo}
The martingale property of non-negative local martingales is under frequent investigation. We mention a few related works: A general semimartingale setting has been considered in \cite{criens2018EJP, J79, JS} and a diffusion and/or jump-diffusion setting has been studied in \cite{CFY,KMK, LS, MU(2012), RufSDE,Sin}.
To the best of our knowledge, for a general It\^o process or Markov switching setting Theorems \ref{theo: mart Ito}, \ref{theo: general SLM} and \ref{theo: mart MS} are the first results which provide integral tests for the martingale property of certain stochastic exponentials.
For the diffusion case
\[
\mathrm{d} S_t = b(S_t) \mathrm{d} t + \sigma (S_t) \mathrm{d} W_t,
\]
a complete characterization of the martingale property of the non-negative local martingale
\[
Z = \mathcal{E} \Big( \int_0^\cdot c(S_s) \mathrm{d} W_s \Big)
\]
has been proven in \cite{MU(2012)} under local integrability conditions. We stress that in \cite{MU(2012)} the diffusion \(S\) is allowed to explode, which is a feature not included in our framework.
Provided \(S\) is non-explosive, the main theorem of \cite{MU(2012)} shows that \(Z\) is a martingale if and only if
\[
\lim_{x \nearrow r} v(u, \sigma^2)(x) = \lim_{x \searrow l} v(u, \sigma^2)(x) = \infty,
\]
where
\(
u \triangleq \frac{b + c \sigma}{\sigma^2}
\)
and \(v\) is defined as in \eqref{eq: v1}.
The same characterization follows from Theorems \ref{theo: mart Ito} and \ref{theo: general SLM} combined, or from Corollary \ref{coro: suff nece MS}.
For the strict local martingale property we require that \(\sigma\) satisfies the ES conditions, which are not imposed in \cite{MU(2012)}.
The key idea underlying Theorems \ref{theo: mart Ito}, \ref{theo: general SLM} and \ref{theo: mart MS} is a local change of measure combined with either a Lyapunov-type argument (in case of Theorem \ref{theo: mart Ito}), a comparison with one-dimensional diffusions (in case of Theorem \ref{theo: general SLM}) or a local uniqueness property (in case of Theorem \ref{theo: mart MS}).
The idea of using a local change of measure is not new. It has for instance been used in \cite{CFY,criens17b, criens2018EJP,RufSDE,Sin}.
The Lyapunov and comparison arguments were inspired by \cite{criens17b}, where multi-dimensional diffusions have been studied. To use these ideas in our general setting, we prove a new Lyapunov condition for It\^o processes and we transport the comparison arguments from a multi-dimensional diffusion setting to a one-dimensional It\^o process framework, see Section \ref{sec: pf} below.
The idea of relating local uniqueness to the martingale property of a stochastic exponential traces back to \cite{JM76,KLS-LACOM1,KLS-LACOM2}. More recently, the method was used in \cite{CFY, criens17b, KMK, Sin}.
Although the terminology suggests the converse, local uniqueness is a strong version of uniqueness in law.
In the proof of Theorem \ref{theo: mart MS} we deduce local uniqueness from pathwise uniqueness by a Yamada--Watanabe-type argument.
\section{On the absence and existence of arbitrage}
Let \(0< T < \infty\) be a finite time horizon and let \((\Omega, \mathcal{F}, \mathbf{F}, \mathds{P})\) be a complete filtered probability space with right-continuous and complete filtration \(\mathbf{F} = (\mathcal{F}_t)_{t \in [0, T]}\).
We consider a financial market consisting of one risky asset with discounted price process \(P = (P_t)_{t \in [0, T]}\), which is assumed to be a positive continuous semimartingale with deterministic initial value.
Recall the following classical terminology: A probability measure \({\mathbb{Q}}\) is called an \emph{equivalent (local) martingale measure (E(L)MM)} if \({\mathbb{Q}} \sim \mathds{P}\) and \(P\) is a (local) \({\mathbb{Q}}\)-martingale. A strictly positive local \(\mathds{P}\)-martingale \(Z = (Z_t)_{t \in [0, T]}\) with \(Z_0 = 1\) is called a \emph{strict (local) martingale density (S(L)MD)} if \(ZP\) is a (local) \(\mathds{P}\)-martingale.
In the following we study existence and non-existence of SMDs, ELMMs and EMMs in case \(P\) is either the stochastic exponential of an It\^o process or a positive switching diffusion. In case \(P\) is a positive It\^o process or the stochastic exponential of a real-valued switching diffusion similar results can be deduced from the martingale criteria in Section \ref{sec: MP SE}.
\subsection{Stochastic exponential model}\label{sec: Arb GM}
Suppose that \(P\) is the stochastic exponential of the real-valued It\^o process \(S = (S_t)_{t \in [0, T]}\) with deterministic initial value \(S_0 \in \mathbb{R}\) and dynamics
\begin{align}\label{eq: S ito}
\mathrm{d} S_t = b_t \mathrm{d} t + \sigma_t \mathrm{d} W_t,
\end{align}
where \(W = (W_t)_{t \in [0, T]}\) is a one-dimensional Brownian motion and \(b = (b_t)_{t \in [0, T]}\) and \(\sigma = (\sigma_t)_{t \in [0, T]}\) are real-valued progressively measurable processes such that the stochastic integrals in \eqref{eq: S ito} are well-defined.
We assume that \(\llambda \otimes \mathds{P}\)-a.e.
\(
\sigma \not = 0,
\)
which corresponds to the assumption that \(P\) has a non-vanishing volatility.
\subsubsection{Absence of arbitrage}
In the following we study when a SMD, ELMM or EMM exists.
As a minimal condition we assume that (NUPBR) holds. This is equivalent to the existence of a \emph{market price of risk \(\theta = (\theta_t)_{t \in [0, T]}\)}, i.e. a real-valued progressively measurable process such that a.s.
\[
\int_0^T \theta^2_s \mathrm{d} s < \infty
\]
and
\begin{align} \label{eq: def MPR}
\llambda \otimes \mathds{P}\text{-a.e.} \quad b - \theta \sigma = 0.
\end{align}
We define the continuous local martingale
\begin{align}\label{eq: Z}
Z \triangleq \mathcal{E} \Big(- \int_0^\cdot \theta_s \mathrm{d} W_s \Big).
\end{align}
Integration by parts and \eqref{eq: def MPR} yield that
\begin{align}\label{eq: prod rule LM}
\mathrm{d} (Z_t P_t)
= Z_t P_t (\sigma_t - \theta_t) \mathrm{d} W_t,
\end{align}
which shows that \(ZP\) is a local martingale or, equivalently, that \(Z\) is a SLMD. We observe the following:
\begin{enumerate}
\item[\textup{(O1)}] If \(ZP\) is a martingale, then \(Z\) is a SMD by definition.
\item[\textup{(O2)}] If \(Z\) is a martingale, we can define a probability measure \({\mathbb{Q}}\) by the Radon--Nikodym derivative \(\frac{\mathrm{d} {\mathbb{Q}}}{\mathrm{d} \mathds{P}} \triangleq Z_T\) and \({\mathbb{Q}}\) is an ELMM by \eqref{eq: prod rule LM} and \cite[Proposition III.3.8]{JS}.
\item[\textup{(O3)}] If \(ZP\) and \(Z\) are martingales, then \({\mathbb{Q}}\) as defined in (O2) is an EMM by \cite[Proposition III.3.8]{JS}.
\end{enumerate}
In summary, to prove the existence of a SMD, ELMM and EMM we have to identify conditions for the martingale property of \(ZP\) and \(Z\).
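As a quick illustration of this program, one can take a toy model with bounded MPR, for which Novikov's condition already yields the martingale property of \(Z\), and confirm \({\mathds{E}}[Z_T] \approx 1\) by Monte Carlo; the concrete coefficients below are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 500, 20000
dt = T / n_steps

S = np.zeros(n_paths)
logZ = np.zeros(n_paths)
for _ in range(n_steps):
    b = np.sin(S)                   # bounded drift (illustrative)
    sigma = 1.0 + 0.5 * np.cos(S)   # volatility in [0.5, 1.5]
    theta = b / sigma               # market price of risk, bounded
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    logZ += -theta * dW - 0.5 * theta**2 * dt
    S += b * dt + sigma * dW

print(np.exp(logZ).mean())  # close to 1 up to Monte Carlo error
\end{verbatim}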
The following is the main result of this section:
\begin{theorem}\label{theo: main1 SEM}
Suppose the following:
\begin{enumerate}
\item[\textup{(L1)}] The sequence \begin{align}\label{eq: loc seq L1}
\tau_n \triangleq \inf (t \in [0, T] \colon |S_t| \geq n ), \quad n \in \mathbb{N},
\end{align} is a localizing sequence for \(Z\).
\item[\textup{(L2)}]
There are Borel functions \(\overline{a} \colon \mathbb{R} \to (0, \infty)\) and \(\zeta \colon [0, T] \to \mathbb{R}_+\) such that
\begin{align*}
\frac{1}{\overline{a}} \in L^1_\textup{loc}(\mathbb{R}), \quad \zeta \in L^1([0, T]),
\end{align*}
and \(\sigma^2_t(\omega) \leq \zeta(t) \overline{a} (S_t (\omega))\) for \(\llambda \otimes \mathds{P}\)-a.a. \((t, \omega) \in [0, T] \times \Omega\).
\end{enumerate}
Then, \(Z\) is a martingale, \({\mathbb{Q}}\) defined by \(\frac{\mathrm{d} {\mathbb{Q}}}{\mathrm{d} \mathds{P}} \triangleq Z_T\) is an ELMM and
\begin{align}\label{eq:def B}
B = W + \int_0^\cdot \theta_t \mathrm{d} t
\end{align}
is a \({\mathbb{Q}}\)-Brownian motion such that
\[
S = S_0 + \int_0^\cdot \sigma_t \mathrm{d} B_t.
\]
If in addition
\begin{align} \label{eq: cond SMD}
\int_1^\infty \frac{\mathrm{d} z}{\overline{a}(z)} = \infty,
\end{align}
then \({\mathbb{Q}}\) is an EMM and \(Z\) is a SMD.
\end{theorem}
\begin{proof}
We apply Theorem \ref{theo: mart Ito} with \(I \triangleq \mathbb{R}, l_n \triangleq - n, r_n \triangleq n\) and \(c \triangleq - \theta\). Note that (L1) equals (M1). Furthermore, set \(\underline{u} (x) \equiv \overline{u} (x) \triangleq 0\). Then, (L2) implies (M2), because \eqref{eq: def MPR} implies \(\llambda \otimes \mathds{P}\)-a.e. \(b + c \sigma = 0\). Finally, note that
\[
\int_{x_0}^{\pm \infty} \exp \Big(- 2 \int_{x_0}^x \underline{u} (y) \mathrm{d} y \Big) \mathrm{d} x = \int_{x_0}^{\pm \infty} \exp \Big(- 2 \int_{x_0}^x \overline{u} (y) \mathrm{d} y \Big) \mathrm{d} x = \pm \infty,
\]
which shows that (M3) holds due to \cite[Problem 5.5.27]{KaraShre}. We conclude that \(Z\) is a martingale and that \({\mathbb{Q}}\) is an ELMM by (O2).
Next, we assume that \eqref{eq: cond SMD} holds. We apply Theorem \ref{theo: mart Ito} with \(I \triangleq \mathbb{R}, l_n \triangleq -n, r_n \triangleq n\) and \(c \triangleq \sigma - \theta\) to show that the local martingale
\[
Z' \triangleq \frac{Z P}{P_0} = \mathcal{E} \Big( \int_0^\cdot (\sigma_s - \theta_s) \mathrm{d} W_s \Big)
\]
is a martingale. In this case, \({\mathbb{Q}}\) is an EMM and \(Z\) is a SMD by (O1) and (O3).
By (L1), the set \(\{Z_{\gamma\wedge \tau_n} \colon \gamma \ \textup{stopping time}\}\) is uniformly integrable (see \cite[Proposition I.1.47]{JS}). Thus,
\begin{align*}
\sup_{\gamma} {\mathds{E}}^\mathds{P} \big[ &Z'_{\gamma \wedge \tau_n} \mathds{1}_{\{Z'_{\gamma \wedge \tau_n} \geq K\}} \big] \\&\leq e^{|S_0| + n} \sup_{\gamma} {\mathds{E}}^\mathds{P} \big[ Z_{\gamma \wedge \tau_n} \mathds{1}_{\{Z_{\gamma \wedge \tau_n} \geq e^{- |S_0| - n}K\}} \big] \to 0 \text{ as } K \to \infty,
\end{align*}
where the \(\sup_{\gamma}\) is meant to be the supremum over all stopping times \(\gamma\).
Due to \cite[Proposition I.1.47]{JS}, we conclude that (M1) holds for \(Z'\).
Note that \eqref{eq: def MPR} implies that \(\llambda \otimes \mathds{P}\)-a.e. \(b + c \sigma = \sigma^2\). Thus, we set \(\underline{u} (x)\equiv \overline{u}(x) \triangleq 1\) and note that (L2) implies (M2) for \(Z'\). Using Fubini's theorem and \eqref{eq: cond SMD}, we obtain that
\begin{align*}
\lim_{x \nearrow \infty} v (1, \overline{a}) (x) &= 2 \int_{x_0}^\infty e^{- 2 y} \int_{x_0}^y \frac{e^{2u}}{\overline{a}(u)} \mathrm{d} u \, \mathrm{d} y
\\&= 2 \int_{x_0}^\infty \frac{e^{2u}}{\overline{a}(u)} \int_u^\infty e^{- 2y} \mathrm{d} y \, \mathrm{d} u
\\&= \int_{x_0}^\infty \frac{\mathrm{d} u}{\overline{a}(u)} = \infty.
\end{align*}
Because
\[
\int_{x_0}^{- \infty} \exp \Big( - 2 \int_{x_0}^x \mathrm{d} y \Big) \mathrm{d} x = - \infty,
\]
\cite[Problem 5.5.27]{KaraShre} yields that
\(\lim_{x \searrow - \infty} v(1, \overline{a}) (x) = \infty\). Hence, (M3) holds for \(Z'\). We conclude that \(Z'\) is a martingale and the proof is complete.
\end{proof}
In our setting there might exist several ELMMs and it is an important question which ELMM should be chosen for applications. The ELMM from Theorem \ref{theo: main1 SEM} is the \emph{minimal local martingale measure (MLMM)} as defined in \cite{doi:10.1111/j.1467-9965.1992.tb00027.x}.\footnote{In \cite{doi:10.1111/j.1467-9965.1992.tb00027.x} the MLMM has been called \emph{minimal martingale measure}. Because we distinguish between ELMMs and EMMs we adjust the terminology.} For financial interpretations of the MLMM we refer to \cite{doi:10.1111/j.1467-9965.1992.tb00027.x} and for a general overview on possible applications we refer to \cite{FS2010}.
In Theorem \ref{theo: indp preserving} below we discover a new property of the MLMM: The MLMM preserves independence and laws of sources of risk.
In the following paragraph we relate the assumptions (L1) and (L2) to so-called \emph{weakly equivalent local martingale measures (WELMM)} as introduced in \cite{Kardaras2010}.
We explain the connection from a general point of view under the assumptions that \(\mathcal{F} = \mathcal{F}_T\) and that (NUPBR) holds.
With slight abuse of notation, let \(Z = (Z_t)_{t \in [0, T]}\) be a SLMD with localizing sequence \((\tau_n)_{n \in \mathbb{N}}\). For every \(n \in \mathbb{N}\) we can define a probability measure \({\mathbb{Q}}^n\) by the Radon--Nikodym derivative \(\frac{\mathrm{d} {\mathbb{Q}}^n}{\mathrm{d} \mathds{P}} \triangleq Z_{T \wedge \tau_n}\). It is easy to see that \({\mathbb{Q}}^n\) is an ELMM for the stopped process \(P_{\cdot \wedge \tau_n}\). In other words, for every \(n \in \mathbb{N}\) the notion (NFLVR) holds for all admissible strategies which invest risklessly after \(\tau_n\).
Roughly speaking, this observation suggests that (NFLVR) holds in case we can take the limit \(n \to \infty\).
As explained in Section 2.4.2 of \cite{Kardaras2010}, Alaoglu's theorem yields that \(({\mathbb{Q}}^n)_{n \in \mathbb{N}}\) has an accumulation point \(\mathsf{Q}\) for the weak\(^*\) topology on the dual of \(L^\infty (\Omega, \mathcal{F}, \mathds{P})\), which is a finitely additive probability such that \(\mathsf{Q}(A) = 0\) for all \(A \in \mathcal{F}\) with \(\mathds{P}(A) = 0\), see the Appendix of \cite{Cvitanic2001}. We use the sans-serif typeface to highlight that \(\mathsf{Q}\) is not necessarily a probability measure, because it may fail to be countably additive. Note that \(\mathsf{Q} = {\mathbb{Q}}^n\) on \(\mathcal{F}_{\tau_n}\) for every \(n \in \mathbb{N}\).
Using this fact, it follows that for all \(A \in \mathcal{F}\) with \(\mathsf{Q}(A) = 0\) we also have \(\mathds{P}(A) = 0\),
which shows that \(\mathsf{Q}\) and \(\mathds{P}\) have the same null-sets. Indeed, if \(A \in \mathcal{F} = \mathcal{F}_T\) is such that \(\mathsf{Q}(A) = 0\), we have \(A \cap \{\tau_n> T\} \in \mathcal{F}_{\tau_n}\) and consequently
\[
{\mathbb{Q}}^n(A \cap \{\tau_n> T\}) = \mathsf{Q} (A \cap \{\tau_n > T\}) = 0
\]
for all \(n \in \mathbb{N}\).
This implies \(\mathds{P}(A \cap \{\tau_n > T\}) = 0\) and, because \(\mathds{P}\)-a.s. \(\tau_n \nearrow \infty\) as \(n \to \infty\), we conclude that \(\mathds{P}(A) = 0\).
Following \cite{Kardaras2010}, we call \(\mathsf{Q}\) a WELMM.
The main difference between WELMMs and ELMMs, and therefore between (NUPBR) and (NFLVR), is that a WELMM is not necessarily a measure.
The idea of condition (L1) is to identify a WELMM, which, as explained above, is a natural candidate for an ELMM. Assuming that \((\tau_n)_{n \in \mathbb{N}}\) is given by \eqref{eq: loc seq L1} means controlling the MPR via the size of the asset.
This assumption is reasonable from a modeling perspective, because, as explained by Lyasoff \cite[p. 488]{MAFI:MAFI530},
"excessively large expected instantaneous net returns from risky securities entail excessively large demands for money (to invest in such securities), which, in turn, means higher and higher interest rates, which, in turn, means lower and lower market price of risk".
In the diffusion settings of Mijatovi\'c and Urusov \cite{Mijatovic2012}, (L1) is equivalent to the local integrability condition \cite[Eq. 3.2]{Mijatovic2012} on the MPR, see \cite[Lemma 6.3]{MU(2012)}.
Condition (L2) takes care of the countable additivity of the candidate WELMM, which corresponds to problems arising as \(n \to \infty\). Indeed, \(\mathsf{Q}\) is countably additive if and only if
\begin{align}\label{eq: to show}
\limsup_{n \to \infty} \mathsf{Q} (\tau_n > T) = \limsup_{n \to \infty}{\mathbb{Q}}^n (\tau_n > T) = 1,
\end{align}
which is also the condition we check in the proof of Theorem \ref{theo: mart Ito}.
If \(\mathsf{Q}\) is countably additive, then \eqref{eq: to show} follows from the monotone convergence theorem and the fact that \(\mathds{P}\)-a.s. \(\tau_n \nearrow \infty\) as \(n \to \infty\). Conversely, assume that \eqref{eq: to show} holds. Let \((E_k)_{k \in \mathbb{N}} \subset \mathcal{F}\) be a decreasing sequence with \(\bigcap_{k \in \mathbb{N}} E_k = \emptyset\). Then, because \(E_k \in \mathcal{F} = \mathcal{F}_T\), we have \(E_k \cap \{\tau_n > T\} \in \mathcal{F}_{\tau_n}\), which yields that
\begin{align*}
\limsup_{k \to \infty} \mathsf{Q}(E_k) &\leq \mathsf{Q} (\tau_n \leq T) + \limsup_{k \to \infty} \mathsf{Q} (E_k \cap \{\tau_n > T\})
\\&= \mathsf{Q} (\tau_n \leq T) + \limsup_{k \to \infty} {\mathbb{Q}}^n (E_k \cap \{\tau_n > T\})
\\&= \mathsf{Q} (\tau_n \leq T) \to 0 \text{ with } n \to \infty.
\end{align*}
Thus, \(\mathsf{Q}\) is continuous at zero, which implies that it is countably additive.
\subsubsection{Existence of a financial bubble}
In Theorem \ref{theo: main1 SEM} we gave conditions for the existence of an ELMM. In this section, we derive a counterpart to \eqref{eq: cond SMD}, which implies the existence of a financial bubble in the sense of \cite{Cox2005}.
As we explain next, the question when a SMD exists is strongly connected to the question when a non-negative local martingale is a strict local martingale. We recall the following:
\begin{lemma}\label{lem: decom}
If \(Z\) is a SLMD, then there exists a market price of risk \(\theta = (\theta_t)_{t \in [0, T]}\) and a local martingale \(N = (N_t)_{t \in [0, T]}\) such that a.s. \(\Delta N > -1, [N, W] = 0\) and
\begin{align}\label{eq: decom Z}
Z = \mathcal{E} \Big(N - \int_0^\cdot \theta_s \mathrm{d} W_s \Big).
\end{align}
\end{lemma}
\begin{proof}
See \cite[Theorem 1]{doi:10.1080/07362999508809418}.
\end{proof}
In case \(Z\) is a SMD, \eqref{eq: decom Z} holds and
\begin{align}\label{eq: ZP}
Z P = P_0 \mathcal{E} \Big( N + \int_0^\cdot (\sigma_s-\theta_s) \mathrm{d} W_s \Big)
\end{align}
is a martingale by definition. If this is not the case, we have a contradiction and no SMD exists.
The following is the main result of this section:
\begin{theorem}\label{theo: no SMD gen}
Suppose there exists a Borel function \(\underline{a} \colon \mathbb{R}\to (0, \infty)\) such that \((1, \underline{a})\) satisfies the YW conditions (see Section \ref{sec: GC} for this terminology), \(\underline{a} (S_t (\omega)) \leq \sigma^2_t (\omega)\) for \(\llambda \otimes \mathds{P}\)-a.a. \((t, \omega) \in [0, T] \times \Omega\) and
\begin{align}\label{eq: cond no SMD}
\int_1^\infty \frac{\mathrm{d} z}{\underline{a}(z)} < \infty.
\end{align}
Then, no SMD exists.
\end{theorem}
\begin{proof}
We use Theorem \ref{theo: general SLM} with \(I \triangleq \mathbb{R}\) and
\(\underline{u} \triangleq 1\) to show that \(ZP\) as defined in \eqref{eq: ZP} is a strict local martingale. Because \(\theta\) is a MPR, \(\llambda \otimes \mathds{P}\)-a.e. \(b + (\sigma - \theta) \sigma = \sigma^2 = \underline{u}(S) \sigma^2\). Furthermore, Fubini's theorem and \eqref{eq: cond no SMD} yield that
\[
\lim_{x \nearrow \infty}v(1, \underline{a}) (x) =
\int_{x_0}^\infty \frac{\mathrm{d} z}{\underline{a}(z)} < \infty.
\]
Thus, condition (SL1) of Theorem \ref{theo: general SLM} holds and we conclude that \(ZP\) is a strict local martingale. Consequently, as explained above, no SMD exists.
\end{proof}
The conditions \eqref{eq: cond SMD} and \eqref{eq: cond no SMD} provide a test for the MLMM to be an EMM or not.
In a diffusion setting the conditions boil down to a single sufficient and necessary condition, which is also given in \cite[Proposition 5.2]{criens17b}.
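Numerically, the dichotomy between \eqref{eq: cond SMD} and \eqref{eq: cond no SMD} can be probed by integrating up to increasing cutoffs; the growth bounds below are illustrative.
\begin{verbatim}
from scipy.integrate import quad

def tail_integral(a, cutoff):
    # approximate int_1^cutoff dz / a(z); growth in the cutoff indicates
    # divergence (EMM side), stabilisation indicates a bubble
    val, _ = quad(lambda z: 1.0 / a(z), 1.0, cutoff, limit=200)
    return val

for cutoff in (1e2, 1e4, 1e6):
    print(tail_integral(lambda z: 1.0 + z, cutoff),     # ~ log(cutoff)
          tail_integral(lambda z: 1.0 + z**2, cutoff))  # approaches pi/4
\end{verbatim}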
\subsubsection{Example: Diffusion models with a change point}
Fontana et al. \cite{FONTANA20143009} study (NUPBR) and (NFLVR) for a model with a change point. The authors are interested in the influence of filtrations, which represent different levels of information. Under a weak form of the \(\mathcal{H}'\)-hypothesis the model can be included in our framework. More precisely, in this case \(S\) is of the form
\[
\mathrm{d} S_t = \mu_t \mathrm{d} t + \big(\sigma^{(1)}(t, S_t) \mathds{1}_{\{t \leq \tau\}} + \sigma^{(2)}(t, S_t) \mathds{1}_{\{t > \tau\}}\big) \mathrm{d} W_t,
\]
where \(\tau\) is a stopping time. The coefficient \(\sigma^{(i)}\) is assumed to be positive, continuous and Lipschitz continuous in the second variable uniformly in the first, see \cite[Condition I]{FONTANA20143009}.
Theorem \ref{theo: main1 SEM} provides local conditions for (NFLVR). For instance in the special cases described in \cite[Section 3.3]{FONTANA20143009}, Theorem \ref{theo: main1 SEM} yields that (NFLVR) always holds, because
\begin{align}\label{eq: mu assp Fea}
\mu_t = \mu^{(1)} (t, S_t) \mathds{1}_{\{t \leq \tau\}} + \mu^{(2)}(t, S_t) \mathds{1}_{\{t > \tau\}},
\end{align}
where \(\mu^{(i)}\) is locally bounded. This extends the observation from \cite{FONTANA20143009} that (NUPBR) holds in these cases.
Furthermore, if in addition to \eqref{eq: mu assp Fea} for \(i = 1, 2\)
\[
\big(\sigma^{(i)} (t, x)\big)^2 \leq \textup{const. } x, \quad (t, x) \in [0, T] \times [1, \infty),
\]
then even (NFFLVR) holds. The notion (NFFLVR) has not been studied in \cite{FONTANA20143009}.
\subsection{Diffusion model with Markov switching}\label{sec: MSM}
In this section, we assume that \(P\) is a positive continuous semimartingale with deterministic initial value \(P_0 \in (0, \infty)\) and dynamics
\[
\mathrm{d} P_t = b(P_t, \xi_t) \mathrm{d} t + \sigma (P_t, \xi_t) \mathrm{d} W_t,
\]
where \(W = (W_t)_{t \in [0, T]}\) is a one-dimensional Brownian motion, \(\xi = (\xi_t)_{t \in [0, T]}\) is a continuous-time irreducible Feller--Markov chain with state space \(J \triangleq \{1, \dots, N\}\), \(1 \leq N \leq \infty\), and deterministic initial value \(j_0 \in J\), and \(b \colon (0, \infty) \times J \to \mathbb{R}\) and \(\sigma \colon (0, \infty) \times J \to \mathbb{R}\backslash \{0\}\) are Borel functions such that
\begin{align*}
\frac{1 + |b(\cdot, j)|}{\sigma^2(\cdot, j)} \in L^1_\textup{loc}((0, \infty)) \text{ for all } j \in J.
\end{align*}
We can interpret \(N\) as the number of all possible states of the business cycle. The assumption of irreducibility means that we exclude all states of the business cycle which are not attainable from the initial state. We assume \(\xi\) to be a Feller process for technical reasons. In case \(N < \infty\), any Markov chain with values in \(J\) is a Feller process, because all real-valued functions on \(J\) are continuous and vanish at infinity.
Due to Lemma \ref{lem: indep MC BM} in the Appendix, the sources of risk \(\xi\) and \(W\) are independent.
The lemma even shows that it is not possible to model \(\xi\) and \(W\) as Markov processes for a superordinate filtration without their independence. This observation gives a novel interpretation for the independence assumption, which is typically interpreted as the price process being influenced by the business cycle and an additional independent source of risk represented by the driving Brownian motion.
\subsubsection{Absence and existence of arbitrage}\label{sec: existence MM MS}
We impose the following assumptions: the coefficient \(b\) is bounded on compact subsets of \((0, \infty) \times J\), \(\sigma^2\) is bounded away from zero on compact subsets of \((0, \infty) \times J\), and \(\sigma\) satisfies the ES conditions for all \(j \in J\), see Section \ref{sec: MG MS} for this terminology.
We define
\[
\theta(x, j) \triangleq \frac{b(x, j)}{\sigma (x, j)},
\]
which is a Borel map bounded on compact subsets of \((0, \infty) \times J\). The process \(\theta_t \triangleq \theta (P_t, \xi_t)\) is a MPR. We define the continuous local martingale \(Z\) as in \eqref{eq: Z}. Note that the observations (O1) -- (O3) in Section \ref{sec: Arb GM} also hold in this setting.
We call the E(L)MM \({\mathbb{Q}}\) with Radon--Nikodym derivative \(\frac{\mathrm{d} {\mathbb{Q}}}{\mathrm{d} \mathds{P}} = Z_T\) the \emph{minimal (local) martingale measure (M(L)MM)}.
The following theorem provides conditions for the existence of the M(L)MM and for \(Z\) to be a SMD.
\begin{theorem}\phantomsection\label{theo: main existence MS}
\begin{enumerate}
\item[\textup{(i)}] Assume that
\begin{align}\label{eq: MS iff}
\int_0^1 \frac{z}{\sigma^2(z, j)} \mathrm{d} z = \infty \text{ for all } j \in J.
\end{align}
Then, \(Z\) is a martingale and the probability measure \({\mathbb{Q}}\) defined by the Radon--Nikodym derivative \(\frac{\mathrm{d} {\mathbb{Q}}}{\mathrm{d} \mathds{P}} \triangleq Z_T\) is an ELMM. Moreover, \(B\) as defined in \eqref{eq:def B} is a \({\mathbb{Q}}\)-Brownian motion such that
\[P = P_0 + \int_0^\cdot \sigma (P_t, \xi_t) \mathrm{d} B_t.
\]
If in addition
\begin{align}\label{eq: SMD MS}
\int_1^\infty \frac{z}{\sigma^2(z, j)} \mathrm{d} z = \infty \text{ for all } j \in J,
\end{align}
then \({\mathbb{Q}}\) is an EMM.
\item[\textup{(ii)}] If \eqref{eq: SMD MS} holds, then \(Z\) is a SMD.
\end{enumerate}
\end{theorem}
\begin{proof}
The claim follows similar to the proof of Theorem \ref{theo: main1 SEM} when Theorem \ref{theo: mart MS} is used instead of Theorem \ref{theo: mart Ito}.
\end{proof}
Theorem \ref{theo: mart MS} suggests that in case \(\xi\) is recurrent, the conditions in Theorem \ref{theo: main existence MS} are sufficient and necessary. The following theorem makes this precise.
\begin{theorem}\label{theo: main non existence MS} Suppose that \(\xi\) is recurrent.
\begin{enumerate}
\item[\textup{(i)}]
If there exists a \(j \in J\) such that
\begin{align}\label{eq: no MLMM}
\int_0^1 \frac{z}{\sigma^2(z, j)} \mathrm{d} z < \infty,
\end{align}
then \(Z\) is a strict local martingale and the MLMM does not exist.
\item[\textup{(ii)}]
If there exists a \(j \in J\) such that
\begin{align}\label{eq: no MMM}
\int_1^\infty \frac{z}{\sigma^2 (z, j)} \mathrm{d} z < \infty,
\end{align}
then \(Z\) is no SMD. In particular, the MMM does not exist.
\end{enumerate}
\end{theorem}
\begin{proof}
The claim follows similar to the proof of Theorem \ref{theo: no SMD gen} when Theorem \ref{theo: mart MS} is used instead of Theorem \ref{theo: general SLM}.
\end{proof}
Recalling that in case \(N < \infty\) the Markov chain \(\xi\) is recurrent, we obtain the following:
\begin{corollary}\label{coro: MS MMM}
Suppose that \(N < \infty\). Then, the following hold:
\begin{enumerate}
\item[\textup{(a)}]
The MLMM exists if and only if \eqref{eq: MS iff} holds.
\item[\textup{(b)}]
The MMM exists if and only if \eqref{eq: MS iff} and \eqref{eq: SMD MS} hold.
\item[\textup{(c)}]
\(Z\) is a SMD if and only if \eqref{eq: SMD MS} holds.
\end{enumerate}
\end{corollary}
With \(N = 1\) we recover \cite[Corollary 3.4, Theorems 3.6 and 3.11]{Mijatovic2012}. Corollary \ref{coro: MS MMM} means that the M(L)MM exists if and only if the M(L)MM exists for all markets with fixed regimes.
We will see in the next section that in case one of the frozen markets allows arbitrage, it is not possible to find a risk-neutral market in which the business cycle has Markovian dynamics.
\subsubsection{Non-existence of structure preserving ELMMs and EMMs}
Let \(\mathcal{L}_{\textup{sp}}\) be the set of all ELMMs \({\mathbb{Q}}\) such that \(\xi\) is an irreducible recurrent Feller--Markov chain on \((\Omega, \mathcal{F}, \mathbf{F}, {\mathbb{Q}})\) and let \(\mathcal{M}_\textup{sp}\) be the set of all EMMs in \(\mathcal{L}_\textup{sp}\).
The main result of this section is the following:
\begin{theorem}\phantomsection\label{theo: NE MS}
\begin{enumerate}
\item[\textup{(i)}]
Suppose there exists a \(j \in J\) such that \eqref{eq: no MLMM} holds and \(\sigma\) satisfies the ES conditions for \(j\).
Then, \(\mathcal{L}_\textup{sp} = \emptyset\).
\item[\textup{(ii)}]
Suppose there exists a \(j \in J\) such that \eqref{eq: no MMM} holds and \(\sigma\) satisfies the ES conditions for \(j\).
Then, \(\mathcal{M}_\textup{sp} = \emptyset\).
\end{enumerate}
\end{theorem}
\begin{proof}
The result follows from the contradiction argument used in the proof of Theorem \ref{theo: general SLM}, where Theorem \ref{theo: existence Markov} has to be used instead of Theorem \ref{theo: 1D Feller p2}.
\end{proof}
In Section \ref{sec: modifying MLMM} we show that an equivalent change to the MLMM does not affect the Markov chain \(\xi\). Thus, Theorem \ref{theo: NE MS} generalizes Theorem \ref{theo: main non existence MS}.
\subsubsection{Example: Markov switching CEV model}
We consider a version of the CEV model (see \cite{Cox15}) with Markov switching.
Take \(\beta \colon J \to (0, \infty)\) and assume that
\[
\sigma (x, j) = x^{\beta(j)}, \quad (x, j) \in (0, \infty) \times J.
\]
Furthermore, assume that \(b \colon (0, \infty) \times J \to \mathbb{R}\) is locally bounded such that
\begin{align*}
\int_{1}^\infty \int_{1}^y \frac{\exp (- \int_s^y \frac{2b(z, j)}{z^{2 \beta(j)}} \mathrm{d} z)}{s^{2 \beta(j)}} \mathrm{d} s \, \mathrm{d} y
= \int_0^1 \int_y^1 \frac{\exp (- \int_s^y \frac{2b(z, j)}{z^{2 \beta(j)}} \mathrm{d} z)}{s^{2 \beta(j)}} \mathrm{d} s \, \mathrm{d} y = \infty
\end{align*}
for all \(j \in J\).
Then, the discounted asset price process \(P\) exists due to Theorem \ref{theo: existence Markov} below.
Let \(Z\) be defined as in \eqref{eq: Z} with \(\theta_t = \frac{b(P_t, \xi_t)}{\sigma(P_t, \xi_t)}\).
In case \(N< \infty\), Corollary \ref{coro: MS MMM} shows the following:
\begin{enumerate}
\item[\textup{(a)}] The MLMM exists if and only if \(\beta(j) \geq 1\) for all \(j \in J\).
\item[\textup{(b)}] The MMM exists if and only if \(\beta(j) = 1\) for all \(j \in J\).
\item[\textup{(c)}] \(Z\) is a SMD if and only if \(\beta (j) \leq 1\) for all \(j \in J\).
\end{enumerate}
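Indeed, (a) -- (c) follow from an elementary computation, which we record for the reader's convenience: inserting \(\sigma(z, j) = z^{\beta(j)}\) into the integral conditions gives, for each \(j \in J\),
\[
\int_0^1 \frac{z}{\sigma^2(z, j)} \mathrm{d} z = \int_0^1 z^{1 - 2 \beta(j)} \mathrm{d} z = \infty \iff \beta(j) \geq 1, \qquad \int_1^\infty z^{1 - 2\beta(j)} \mathrm{d} z = \infty \iff \beta(j) \leq 1,
\]
so that \eqref{eq: MS iff} amounts to (a), \eqref{eq: SMD MS} amounts to (c) and the combination of both amounts to (b).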
\subsection{Comments on related literature}\label{sec: comments}
For continuous semimartingale markets the existence and non-existence of SMDs, ELMMs and EMMs have been studied in \cite{criens17b,Delbaen2002, MAFI:MAFI530, MU(2012)}.
We comment on these works in more detail.
In \cite{Delbaen2002,MU(2012)} a one-dimensional diffusion framework has been considered. We discuss the results from \cite{MU(2012)} and refer to \cite[Remark 3.2]{MU(2012)} for comments on the relation between \cite{Delbaen2002} and \cite{MU(2012)}. In \cite{MU(2012)} it is assumed that the price process \(P = (P_t)_{t \in [0, T]}\) is a \([0, \infty)\)-valued diffusion such that
\[
\mathrm{d} P_t = b(P_t) \mathrm{d} t + \sigma (P_t) \mathrm{d} W_t, \quad P_0 \in (0, \infty),
\]
where \(b \colon (0, \infty) \to \mathbb{R}\) and \(\sigma \colon (0, \infty) \to \mathbb{R} \backslash \{0\}\) are Borel functions satisfying
\[
\frac{1 + |b|}{\sigma^2} \in L^1_\textup{loc} ((0, \infty)),
\]
see also \cite[Definition 5.5.20]{KaraShre}. In the following we assume that \(P\) cannot explode to zero. In \cite{MU(2012)} the notions (NFLVR) and (NFFLVR) are also studied in case \(P\) can explode to zero and (NFLVR), (NFFLVR) and (NRA) are further studied for the infinite time horizon. For the non-explosive case the results from \cite{MU(2012)} are as follows:
\begin{enumerate}
\item[(a)] (NFLVR) \(\Leftrightarrow\) \(\frac{b}{\sigma} \in L^2_\textup{loc} ((0, \infty))\) and \(\int_0^1 \frac{x}{\sigma^2(x)} \mathrm{d} x = \infty\), see \cite[Corollary 3.4]{MU(2012)}.
\item[(b)] (NFFLVR) \(\Leftrightarrow\) \(\frac{b}{\sigma} \in L^2_\textup{loc} ((0, \infty))\) and \(\int_0^1 \frac{x}{\sigma^2(x)} \mathrm{d} x = \int_1^\infty \frac{x}{\sigma^2 (x)} \mathrm{d} x = \infty\), see \cite[Theorem 3.6]{MU(2012)}.
\item[(c)] If \(\frac{b}{\sigma} \in L^2_\textup{loc}((0, \infty))\), then (NRA) \(\Leftrightarrow\) \(\int_1^\infty \frac{x}{\sigma^2 (x)} \mathrm{d} x = \infty\), see \cite[Theorem 3.11]{MU(2012)}.
\end{enumerate}
Applying Corollary \ref{coro: MS MMM} with \(N = 1\) shows versions of (a) -- (c) under slightly more restrictive regularity assumptions on \(b\) and \(\sigma\).
The novelty of Corollary \ref{coro: MS MMM}, or more generally of Theorems \ref{theo: main existence MS} and \ref{theo: main non existence MS}, is their scope of application.
A multi-dimensional diffusion setting has been studied in \cite{criens17b}. We explain the one-dimensional version: Assume that the price process \(P = (P_t)_{t \in [0, T]}\) is the stochastic exponential of
\[
\mathrm{d} S_t = b(S_t) \mathrm{d} t + \sigma (S_t) \mathrm{d} W_t,
\]
where \(b, \sigma \colon \mathbb{R} \to \mathbb{R}\) are locally bounded Borel functions such that \(\sigma^2\) is locally bounded away from zero. In this setting, \cite[Proposition 5.1]{criens17b} shows that (NFLVR) always holds and \cite[Proposition 5.2]{criens17b} implies that (NFFLVR) \(\Leftrightarrow\) (NRA) \(\Leftrightarrow\) \(\int_1^\infty \frac{\mathrm{d} x}{\sigma^2(x)} = \infty\).
Under slightly different regularity assumptions on \(b\) and \(\sigma\), the same observation follows from Theorems \ref{theo: main1 SEM} and \ref{theo: no SMD gen}.
The novelty of Theorems \ref{theo: main1 SEM} and \ref{theo: no SMD gen} is that no diffusion structure is needed. In particular, the coefficients \(b\) and \(\sigma\) are allowed to depend on the path of \(S\) or several sources of risk.
In \cite{criens17b} the main interest lies in the multi-dimensional setting.
We stress that it is possible to extend our results to a multi-dimensional framework. The type of conditions will be similar to those in \cite{criens17b}.
In \cite{MAFI:MAFI530} the price process \(P = (P_t)_{t \in [0, T]}\) is assumed to be the stochastic exponential of
\[
\mathrm{d} S_t = - \alpha (t, S, X) \theta_t \mathrm{d} t + \alpha (t, S, X) \mathrm{d} W_t,
\]
where \(X = (X_t)_{t \in [0, T]}\) is a continuous process, \(\alpha\) and \(\theta\) are suitable processes such that the integrals are well-defined and \(\llambda \otimes \mathds{P}\)-a.e. \(\alpha \not = 0\).
The process \(X\) is called \emph{information process}.
This setting is closely related to those from Section \ref{sec: Arb GM}.
Let \(\mathscr{W}\) be the Wiener measure and let \(\nu\) be the law of
\(
- \int_0^\cdot \theta_s \mathrm{d} s + W.
\)
The main result from \cite{MAFI:MAFI530} is the following: If a.s. \(\int_0^T \theta_s^2 \mathrm{d} s < \infty\), then (NFLVR) \(\Leftrightarrow \mathscr{W} \sim \nu\),
see \cite[Proposition 2.3]{MAFI:MAFI530}.
This result is of a very different nature than ours, which are intended to give easy-to-verify conditions for a large class of models.
\section{Modifying Minimal Local Martingale Measures}\label{sec: modifying MLMM}
In Section \ref{sec: existence MM MS} we established conditions for the existence of the minimal (local) martingale measure in a Markov switching framework. We ask the following two questions:
\begin{enumerate}
\item[1.] Does the MLMM change the dynamics of the Markov chain?
\item[2.] Is it possible to modify the MLMM such that the dynamics of the Markov chain are changed in a tractable manner?
\end{enumerate}
In this section we answer these questions from a general perspective under an independence assumption, which holds in our Markov switching framework.
\subsection{Martingale problems}
To characterize additional sources of risk in our financial market, we introduce a martingale problem.
Let \(J\) be a Polish space, define \(D(\mathbb{R}_+, J)\) to be the space of all c\`adl\`ag functions \(\mathbb{R}_+ \to J\) and \(\mathcal{D}\) to be the \(\sigma\)-field generated by the coordinate process \(X = (X_t)_{t \geq 0}\), i.e. \(X_t(\omega) = \omega(t)\) for \(\omega \in D(\mathbb{R}_+, J)\) and \(t \in \mathbb{R}_+\).
We equip \(D(\mathbb{R}_+, J)\) with the Skorokhod topology, which renders it into a Polish space.
It is well-known that \(\mathcal{D}\) is the Borel \(\sigma\)-field on \(D(\mathbb{R}_+, J)\). We refer to \cite{EK,JS} for more details.
Let \({\mathbf{D}}^o \triangleq (\mathcal{D}^o_t)_{t \geq 0}\) be the filtration induced by \(X\), i.e. \(\mathcal{D}^o_t \triangleq \sigma (X_s, s \in [0, t])\), and let \({\mathbf{D}} \triangleq (\mathcal{D}_t)_{t \geq 0}\) be its right-continuous version, i.e. \(\mathcal{D}_t \triangleq\bigcap_{s > t} \mathcal{D}^o_s\) for all \(t \in \mathbb{R}_+\).
Let \((B_n)_{n \in \mathbb{N}}\) be an increasing sequence of nonempty open sets in \(J\) such that \(\bigcup_{n \in \mathbb{N}}B_n = J\) and define
\begin{align}\label{eq: rhon}
\rho_n (\omega) \triangleq \inf\big(t \in \mathbb{R}_+ \colon \omega(t) \not \in B_n \textup{ or } \omega (t-) \not \in B_n\big),\quad \omega \in D(\mathbb{R}_+, J), n \in \mathbb{N}.
\end{align}
Due to \cite[Proposition 2.1.5]{EK}, \(\rho_n\) is a \({\mathbf{D}}^o\)-stopping time and, due to \cite[Problem 4.27]{EK}, \(\rho_n \nearrow \infty\) as \(n \to \infty\). We will use the sequence \((\rho_n)_{n \in \mathbb{N}}\) as a localizing sequence for test martingales of our martingale problem. We fix this sequence, because for some arguments we need a common localizing sequence consisting of \({\mathbf{D}}^o\)-stopping times.
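For instance, if \(J = \{1, \dots, N\}\) with \(1 \leq N \leq \infty\) is equipped with the discrete topology, a possible choice is \(B_n \triangleq \{1, \dots, \min(n, N)\}\), in which case \(\rho_n (\omega)\) is the first time the path \(\omega\) leaves \(\{1, \dots, \min(n, N)\}\); for \(N < \infty\) and \(n \geq N\) we then have \(\rho_n \equiv \infty\).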
The input data for our martingale problem is the following:
\begin{enumerate}
\item[(i)] A set \(A \subseteq C(J, \mathbb{R})\), where \(C(J, \mathbb{R})\) denotes the space of continuous functions \(J\to\mathbb{R}\).
\item[(ii)] A map \(L \colon A \to \mathcal{PM}\) such that for all \(f \in A, t \in \mathbb{R}_+\) and \(\omega \in D (\mathbb{R}_+, J)\)
\[
\int_0^{t} \big|Lf(\omega, s)\big| \mathrm{d} s < \infty,
\]
where \(\mathcal{PM}\) denotes the space of all \({\mathbf{D}}\)-progressively measurable processes.
\item[(iii)] An initial value \(j_0 \in J\).
\item[(iv)] A time horizon \(0 < T \leq \infty\).
\end{enumerate}
We use the convention that in case \(T = \infty\) the interval \([0, T]\) is identified with \(\mathbb{R}_+\).
\begin{definition}
\begin{enumerate}
\item[\textup{(i)}] Let \((\Omega^o, \mathcal{F}^o, \mathbf{F}^o, \mathds{P}^o)\) be a filtered probability space with right-continuous filtration \(\mathbf{F}^o = (\mathcal{F}^o_t)_{t \in [0, T]}\), supporting a c\`adl\`ag, adapted, \(J\)-valued process \(\xi = (\xi_t)_{t \in [0, T]}\).
We say that \(\xi\) is a \emph{solution process to the martingale problem \((A, L, j_0, T)\)}, if for all \(f \in A\) and \(n \in \mathbb{N}\) the process
\begin{align}\label{eq: pro MP}
M^{f, n} \triangleq f (\xi_{\cdot \wedge \rho_n (\xi)}) - f(\xi_0) - \int_0^{\cdot \wedge \rho_n (\xi)} L f (\xi, s) \mathrm{d} s
\end{align}
is a martingale, \(\mathds{P}^o (\xi_0 = j_0) = 1\) and for all \(t \in [0, T]\) there exists a constant \(C = C(f, n, t) > 0\) such that a.s. \(\sup_{s \in [0, t]} |M^{f, n}_s| \leq C\).
\item[\textup{(ii)}] We say that the \emph{martingale problem has a solution} if there exists a filtered probability space which supports a solution process.
\item[\textup{(iii)}] We say that the martingale problem satisfies \emph{uniqueness} if the laws (seen as Borel probability measures on \(D(\mathbb{R}_+, J)\)) of any two solution processes, possibly defined on different filtered probability spaces, coincide.
\item[\textup{(iv)}] If for all \(j_0 \in J\) the martingale problem \((A, L, j_0, T)\) has a solution and satisfies uniqueness, we call the martingale problem \((A, L, T)\) \emph{well-posed}.
\end{enumerate}
\end{definition}
Martingale problems were introduced by Stroock and Varadhan \cite{SV} in a diffusion setting. Martingale problems for semimartingales were studied in \cite{J79} and Markovian martingale problems with a Polish state space were studied in \cite{EK}. Our definition is unifying in the sense that it deals with non-Markovian processes and a Polish state space. Most of the conditions for existence and uniqueness given in \cite{EK, J79, SV} also apply to our setting.
\begin{example}[Martingale problem for Markov chains]\label{ex: xi2}
Suppose that \(J = \{1, \dots, N\}\) with \(1 \leq N \leq \infty\). We equip \(J\) with the discrete topology. Let \(\xi = (\xi_t)_{t \geq 0}\) be a Feller--Markov chain with initial value \(j_0 \in J\) and \(Q\)-matrix \(Q\).
Due to \cite[Theorem 5]{doi:10.1112/jlms/s2-5.2.267}, the generator \((\mathcal{L}, D(\mathcal{L}))\) of \(\xi\) is given by \(
\mathcal{L} = Q\) and
\(
D(\mathcal{L}) = \{f \in C_0(J) \colon Q f \in C_0(J)\},
\)
where \(C_0(J)\) denotes the space of all continuous functions \(J \to \mathbb{R}\) which are vanishing at infinity.
Due to Dynkin's formula (see \cite[Proposition VII.1.6]{RY}) the process \(\xi\) solves the martingale problem \((D(\mathcal{L}), \mathcal{L}, j_0, \infty)\) and, due to \cite[Theorem 3.33]{liggett2010continuous}, the martingale problem satisfies uniqueness.
Conversely, in case \(\xi\) is a solution process to the martingale problem \((D(\mathcal{L}), \mathcal{L}, j_0, \infty)\), where \((\mathcal{L}, D(\mathcal{L}))\) given as above is the generator of a Feller process, \(\xi\) is a Feller--Markov chain with \(Q\)-matrix \(Q\), see \cite[Theorem 3.4.2]{EK} and \cite[Theorem 3.33]{liggett2010continuous}.
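In particular, if \(N < \infty\), then \(J\) is compact, every function \(J \to \mathbb{R}\) vanishes at infinity and consequently \(D(\mathcal{L}) = C_0(J) = \mathbb{R}^N\), i.e. \(\mathcal{L}\) acts on all of \(\mathbb{R}^N\) by multiplication with the matrix \(Q\).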
\end{example}
\subsection{How to modify the MLMM}
Fix a finite time horizon \(0 < T < \infty\) and let \((\Omega, \mathcal{F}, \mathbf{F}, \mathds{P})\) be a complete filtered probability space with right-continuous and complete filtration \(\mathbf{F} = (\mathcal{F}_t)_{t \in [0, T]}\), which supports a solution process \(\xi = (\xi_t)_{t \in [0, T]}\) to the martingale problem \((A, L, j_0, T)\). Moreover, assume that the martingale problem \((A, L, j_0, T)\) satisfies uniqueness.
Let \(W = (W_t)_{t \in [0, T]}\) be a one-dimensional Brownian motion such that \(\sigma (W_t, t \in [0, T])\) and \(\sigma (\xi_t, t \in [0, T])\) are independent. We think of \(W\) and \(\xi\) as two independent sources of risk influencing the market.
The independence assumption is satisfied when \(\xi\) is a Feller--Markov chain, see Lemma \ref{lem: indep MC BM} in the Appendix.
In the following theorem we find a new property of the MLMM. To wit, we show that the MLMM preserves the independence of the sources of risk and their laws. Because the M(L)MM is often used for pricing, this observation is important for analytical and numerical computations. We prove the following theorem in Section \ref{sec: pf theo modi ind}.
\begin{theorem}\label{theo: indp preserving}
Let \(c = (c_t)_{t \in [0, T]}\) be a real-valued progressively measurable process such that a.s. \[\int_0^T c_s^2 \mathrm{d} s < \infty\] and define
\begin{align*}
Z &\triangleq \mathcal{E} \Big( \int_0^\cdot c_s \mathrm{d} W_s \Big), \quad
B \triangleq W - \int_0^\cdot c_s \mathrm{d} s.
\end{align*}
Suppose further that \(Z\) is a martingale and that the martingale problem \((A, L, j_0, T)\) satisfies uniqueness.
Define \({\mathbb{Q}}\) by the Radon--Nikodym derivative \(\frac{\mathrm{d} {\mathbb{Q}}}{\mathrm{d} \mathds{P}} \triangleq Z_T\).
Then, \(\sigma ( B_t, t \in [0, T])\) and \(\sigma (\xi_t, t \in [0,T])\) are \({\mathbb{Q}}\)-independent, \(B\) is a \({\mathbb{Q}}\)-Brownian motion and \(\xi\) is a solution process to the martingale problem \((A, L, j_0, T)\) on \((\Omega, \mathcal{F}, \mathbf{F}, {\mathbb{Q}})\).
\end{theorem}
Let us outline an important consequence of Theorem \ref{theo: indp preserving}:
If the MLMM exists, then its density is of the same type as \(Z\) in Theorem \ref{theo: indp preserving} and it follows that the joint law of the sources of risk remains unchanged by an equivalent change to the MLMM.
In particular, in the setting of Section \ref{sec: MG MS} this means that \(\xi\) stays a Markov chain after a change to the MLMM.
We ask further whether it is possible to modify the MLMM such that the law of \(\xi\) can be affected in a tractable manner. An answer to this question is provided by the next theorem.
A proof can be found in Section \ref{sec: pf mg JS}.
\begin{theorem}\label{theo: cdc}
Let \(f \in A\) be strictly positive and suppose that the process
\begin{align}\label{eq: mart to show bounded}
Z \triangleq \frac{f(\xi)}{f(j_0)} \exp \Big(- \int_0^\cdot \frac{Lf (\xi, s)}{f(\xi_s)} \mathrm{d} s \Big)
\end{align}
is a martingale.
Set \[
A^* \triangleq \big\{g \in A \colon fg \in A\big\},
\]
and
\[
L^* g \triangleq \frac{L (f g) - g L f}{f}.
\]
Suppose that
for every \(g \in A^*\) and \(n \in \mathbb{N}\) there exists a constant \(C = C(g, n) > 0\) such that a.s.
\[
\sup_{t \in [0, T]} \Big| g(\xi_{t \wedge \rho_n(\xi)}) - g(\xi_0) - \int_0^{t \wedge \rho_n(\xi)} L^* g(\xi, s) \mathrm{d} s \Big| \leq C.
\]
Define the probability measure \({\mathbb{Q}}\) by the Radon--Nikodym derivative
\(
\frac{\mathrm{d} {\mathbb{Q}}}{\mathrm{d} \mathds{P}} \triangleq Z_T.
\)
Then, \(\sigma(\xi_t, t \in [0, T])\) and \(\sigma(W_t, t \in [0, T])\) are \({\mathbb{Q}}\)-independent, \(W\) is a \({\mathbb{Q}}\)-Brownian motion and \(\xi\) is a solution process for the martingale problem \((A^*, L^*, j_0, T)\) on \((\Omega, \mathcal{F}, \mathbf{F}, {\mathbb{Q}})\).
\end{theorem}
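In the Markov chain setting of Example \ref{ex: xi2}, where \(Lf(\xi, s) = Qf(\xi_s)\), the density \eqref{eq: mart to show bounded} takes the familiar form
\[
Z_t = \frac{f(\xi_t)}{f(j_0)} \exp \Big( - \int_0^t \frac{(Qf)(\xi_s)}{f(\xi_s)} \mathrm{d} s \Big),
\]
an exponential change of measure for Markov chains, cf. \cite{palmowski2002}.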
\begin{remark}\label{rem: loc mart}
\begin{enumerate}
\item[\textup{(i)}]
For all \(\omega \in D(\mathbb{R}_+, J)\) and \(g \in A^*\)
\[
\int_0^T\Big(\Big|\frac{L f(\omega, s)}{f(\omega(s))}\Big| + \big| L^* g(\omega, s)\big| \Big) \mathrm{d} s < \infty,
\]
because \(f\) and \(g\) are continuous and the set \(\{\omega(t) \colon t \in [0, T]\} \subseteq J\) is relatively compact, see \cite[Problem 16, p. 152]{EK}.
Consequently, \(Z\) and the martingale problem \((A^*, L^*, j_0, T)\) are well-defined.
\item[\textup{(ii)}]
In view of \cite[Corollary 2.3.3]{EK}, the process \eqref{eq: mart to show bounded} is always a local martingale by the definition of the martingale problem.
\end{enumerate}
\end{remark}
We explain an application of Theorem \ref{theo: cdc}:
Suppose that the MLMM exists. Then, using the change of measure described in Theorem \ref{theo: cdc}, the MLMM can be modified further such that the law of \(\xi\) changes as described in the theorem, while the local martingale property of the price process is preserved. We stress that in this manner the MLMM induces a family of ELMMs, which is often infinite.
In a Markov switching framework with \(N < \infty\) the following proposition explains how the \(Q\)-matrix of the driving Feller--Markov chain can be changed.
\begin{proposition}\label{prop: com MC}
Suppose that \(J = \{1, \dots, N\}\) with \(N < \infty\) and
\[
L g(\omega, s) = Q g (\omega(s)), \quad \omega \in D(\mathbb{R}_+, J), s \in \mathbb{R}_+,
\]
for a \(Q\)-matrix \(Q = (q_{ij})_{i, j \in J}\) and all \(g \in A \triangleq \mathbb{R}^N\). Let \(f \in (0, \infty)^N\) and let \(A^*, L^*\) be as in Theorem \ref{theo: cdc}. Then, \(A^* = \mathbb{R}^N\) and
\[
L^* g(\omega, s) = Q^* g (\omega(s)), \quad g \in \mathbb{R}^N, \omega \in D(\mathbb{R}_+, J), s \in \mathbb{R}_+,
\]
for \(Q^* = (q^*_{ij})_{i, j \in J}\) with
\begin{align*}
q^*_{ij} \triangleq
\begin{cases}
q_{ij} \frac{f (j)}{f (i)},&i \not = j,\\
- \sum_{k \not = i} q_{ik} \frac{f(k)}{f(i)},& i = j.
\end{cases}
\end{align*}
\end{proposition}
\begin{proof}
See \cite[Proposition 5.1]{palmowski2002}.
\end{proof}
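For illustration, consider the two-state case \(N = 2\) with
\[
Q = \begin{pmatrix} - q_1 & q_1 \\ q_2 & - q_2 \end{pmatrix}, \quad q_1, q_2 > 0,
\]
and \(f \in (0, \infty)^2\). With \(c \triangleq f(2)/f(1)\), Proposition \ref{prop: com MC} yields
\[
Q^* = \begin{pmatrix} - c q_1 & c q_1 \\ q_2 / c & - q_2 / c \end{pmatrix},
\]
i.e. the change of measure speeds up the transitions out of the first state by the factor \(c\) and slows down the transitions out of the second state by the same factor.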
A useful criterion for the martingale property of \eqref{eq: mart to show bounded} is given by Theorem \ref{prop: mg f} below. We consider it as an extension of results from \cite{CFY, J79, SV}.
In the following \(X = (X_t)_{t \geq 0}\) denotes the coordinate process on \(D(\mathbb{R}_+,J)\).
\begin{definition} A set \(\widetilde{A} \subseteq A\) is called a \emph{determining set} for the martingale problem \((A, L, \infty)\) if for all \(j_0 \in J\) a Borel probability measure \(\mu\) on \(D(\mathbb{R}_+, J)\) is the law of a solution process to the martingale problem \((A, L, j_0, \infty)\) if and only if for all \(f \in \widetilde{A}\) and \(n \in \mathbb{N}\) the process
\[
f(X_{\cdot \wedge \rho_n}) - f(X_0) - \int_0^{\cdot \wedge \rho_n} Lf (X, s) \mathrm{d} s
\]
is a \(\mu\)-martingale and \(\mu(X_0 = j_0) = 1\).
\end{definition}
\begin{example}[Determining set for Feller--Markov chains]
Let \(J, A\) and \(L\) be as in Example \ref{ex: xi2}. Note that
\[
G \triangleq \big\{ (f, Qf) \colon f \in A\big\} \subset C_0(J) \times C_0 (J).
\]
Because \(C_0(J)\) equipped with the uniform metric is a separable metric space, \(G\) is a separable metric space when equipped with the taxicab uniform metric.
Hence, we find a countable set \(\widetilde{A} \subseteq A\) such that for each \((f, g) \in G\) there exists a sequence \((f_n)_{n \in \mathbb{N}} \subset \widetilde{A}\) with
\[
\|f_n - f\|_\infty + \|Q f_n - g\|_\infty \to 0 \quad \text{ as } n\to \infty.
\]
Due to \cite[Proposition 4.3.1]{EK}, \(\widetilde{A}\) is a determining set for the martingale problem \((A, L, \infty)\).
\end{example}
A proof for the following theorem can be found in Section \ref{sec: pf mg JS}.
\begin{theorem}\label{prop: mg f}
Let \(f, A^*\) and \(L^*\) be as in Theorem \ref{theo: cdc}. Moreover, assume there exists a countable determining set for the martingale problem \((A^*, L^*, \infty)\)
and that
\[
L^* g (\xi, t) = Kg(\xi_{t}), \quad g \in A^*, t \in \mathbb{R}_+,
\]
where \(K\) maps \(A^*\) into the space of Borel functions \(J \to \mathbb{R}\). Finally, assume that the martingale problem \((A^*, L^*, \infty)\) is well-posed and that \((\rho_n (\xi))_{n \in \mathbb{N}}\) is a localizing sequence for the local martingale \eqref{eq: mart to show bounded}, see Remark \ref{rem: loc mart}. Then, the process \eqref{eq: mart to show bounded} is a martingale.
\end{theorem}
Roughly speaking, this theorem shows that in Markovian settings we can modify the law of \(\xi\) whenever the martingale problem \((A^*, L^*, \infty)\) is well-posed.
\begin{remark}
The existence of a solution to the martingale problem \((A^*, L^*, j_0, T)\) is often necessary for the martingale property of \(Z\), see Theorem \ref{theo: cdc}.
\end{remark}
\section{Proof of Theorems \ref{theo: mart Ito} and \ref{theo: general SLM}}\label{sec: pf}
The following section is divided into three parts. In the first part we prove Lyapunov-type conditions for non-explosion of It\^o processes, in the second part we prove non-existence conditions for It\^o processes and in the third part we deduce Theorems \ref{theo: mart Ito} and \ref{theo: general SLM}.
\subsection{Criteria for non-explosion}
In this section we place ourselves in a version of the setting from Section \ref{sec: GC}.
Let \(I = (l, r)\) be as in Section \ref{sec: GC} and \((\Omega,\mathcal{F})\) be a measurable space which supports three real-valued processes \(S = (S_t)_{t \in [0, T]}, b = (b_t)_{t \in [0,T]}\) and \(\sigma = (\sigma_t)_{t \in [0, T]}\).
For every \(n \in \mathbb{N}\) we fix a probability measure \({\mathbb{Q}}^n\) and a right-continuous \({\mathbb{Q}}^n\)-complete filtration \(\mathbf{F}^n= (\mathcal{F}^n_t)_{t \in [0, T]}\) on \((\Omega, \mathcal{F})\) such that \(S, b\) and \(\sigma\) are \(\mathbf{F}^n\)-progressively measurable. We set \(\tau_n\) as in Theorem \ref{theo: mart Ito}, i.e.
\[
\tau_n = \inf(t \in [0, T] \colon S_t \not \in (l_n, r_n)),
\]
where \(l_n \searrow l, r_n \nearrow r\) are sequences such that \(l < l_{n+1} < l_n < r_n < r_{n +1} < r\).
Moreover, suppose that \({\mathbb{Q}}^n\)-a.s.
\[
\mathrm{d} S_{t \wedge \tau_n} = b_t \mathds{1}_{\{t \leq \tau_n\}} \mathrm{d} t + \sigma_t \mathds{1}_{\{t \leq \tau_n\}} \mathrm{d} W^n_t, \quad S_0 \in I,
\]
where \(W^n = (W^n_t)_{t \in [0, T]}\) is a Brownian motion on \((\Omega, \mathcal{F}, \mathbf{F}^n, {\mathbb{Q}}^n)\). It is implicit that the integrals are well-defined.
We also assume that
\begin{align}\label{eq: nondeg n}
\llambda \otimes {\mathbb{Q}}^n\text{-a.e. } \sigma \not = 0 \text{ for all } n \in \mathbb{N}
\end{align}
and we fix a Borel function \(\zeta \colon [0, T] \to \mathbb{R}_+\) such that \(\zeta \in L^1([0, T])\).
\subsubsection{A Lyapunov criterion}\label{sec: cond E}
In this section we give a Lyapunov-type condition for
\begin{align}\label{eq: cond qn}
\limsup_{n \to \infty} {\mathbb{Q}}^n(\tau_n = \infty) = 1. \end{align}
For \(f \in C^1(I, \mathbb{R})\) with locally absolutely continuous derivative, it is well-known that there exists a \(\llambda\)-null set \(N^f \subset I\) such that \(f\) has a second derivative \(f''\) on \(I \backslash N^f\). In this case, we set
\[
\mathcal{L} f \triangleq f'(S) b + \tfrac{1}{2} f'' (S) \mathds{1}_{I \backslash N^f}(S) \sigma^2.
\]
\begin{theorem}\label{theo: NE 1}
Let \(V \colon I \to (0, \infty)\) be differentiable with locally absolutely continuous derivative such that
\begin{align}\label{eq: minimum convergence}
\limsup_{n \to \infty} V(l_n) \wedge V(r_n) = \infty.
\end{align}
Suppose there exists a \(\llambda\)-null set \(N \subset I\) such that
\begin{equation}\label{ineq: Lyapunov2}\begin{split}
\mathcal{L}V (t) (\omega) \mathds{1}_{I \backslash N} (S_t(\omega)) \leq \zeta(t)&V(S_t(\omega)) \mathds{1}_{I \backslash N}(S_t(\omega)) \\&\text{ for } \llambda \otimes {\mathbb{Q}}^n\text{-a.a. } (t, \omega) \in [0, T] \times \Omega, \quad n \in \mathbb{N}.
\end{split}
\end{equation}
Then, \eqref{eq: cond qn} holds.
\end{theorem}
\begin{proof}
Let \(L^S\) be the local time of the continuous \({\mathbb{Q}}^n\)-semimartingale \(S_{\cdot \wedge \tau_n}\). The occupation times formula yields that \({\mathbb{Q}}^n\)-a.s.
\begin{align*}
\int_0^{\tau_n \wedge T} \mathds{1}_N(S_{s}) \sigma^2_s \mathrm{d} s = 2 \int_{- \infty}^\infty \mathds{1}_N(x) L^S_{T} (x) \mathrm{d} x = 0,
\end{align*}
which implies that \({\mathbb{Q}}^n\)-a.s. \(\llambda (\{t \in [0, \tau_n \wedge T] \colon S_{t} \in N\}) = 0\). We will use this fact in the following without further reference.
Set
\begin{align*}
U^n \triangleq \exp \Big(- \int_0^{\cdot \wedge \tau_n} \zeta(s) \mathrm{d} s\Big) V(S_{\cdot \wedge \tau_n}).
\end{align*}
Using a generalized version of It\^o's formula (see \cite[Lemma IV.45.9]{RW2}),
we obtain that
the process
\begin{equation*}\begin{split}
U^n + \int_0^{\cdot \wedge \tau_n} \exp \Big(- \int_0^{s} \zeta(z) \mathrm{d} z \Big)\big(\zeta(s) V(S_s) - \mathcal{L} V (s) \big) \mathrm{d} s
\end{split}
\end{equation*}
is a local \({\mathbb{Q}}^n\)-martingale.
We deduce from \eqref{ineq: Lyapunov2} and the fact that non-negative local martingales are supermartingales that \(U^n\) is a non-negative \({\mathbb{Q}}^n\)-supermartingale with \(U^n_0 = V(S_0)\).
W.l.o.g. we assume that \(S_0 \in (l_{1}, r_{1})\).
Note that for all \(n \in \mathbb{N}\) we have \({\mathbb{Q}}^n\)-a.s. \(S_{\tau_n} \in \{l_n, r_n\}\) on \(\{\tau_n \leq T\}\).
We conclude that for all \(n \in \mathbb{N}\)
\begin{align*}
{\mathbb{Q}}^n(\tau_n \leq T) \exp \Big(- \int_0^T \zeta(s) \mathrm{d} s \Big) (V(l_n) \wedge V(r_n))
&\leq {\mathds{E}}^{{\mathbb{Q}}^n} \big[ U^n_{\tau_n} \mathds{1}_{\{\tau_n \leq T\}} \big]
\\&\leq {\mathds{E}}^{{\mathbb{Q}}^n} \big[ U^n_{T}\big] \leq V(S_0).
\end{align*}
By \eqref{eq: minimum convergence} there exists a sequence \((n_k)_{k \in \mathbb{N}} \subset \mathbb{N}\) with \(n_k \to \infty\) as \(k \to \infty\) such that
\(
V(l_{n_k}) \wedge V(r_{n_k}) > 0\) for all \(k \in \mathbb{N}
\)
and
\(
\lim_{k \to \infty} V(l_{n_k}) \wedge V(r_{n_k}) = \infty.
\)
We deduce from
\[
0 \leq {\mathbb{Q}}^{n_k} (\tau_{n_k} \leq T) \leq V(S_0) \exp \Big(\int_0^T \zeta(s) \mathrm{d} s \Big) \frac{1}{V(l_{n_k}) \wedge V(r_{n_k})}
\]
that
\[
\lim_{k \to \infty} {\mathbb{Q}}^{n_k} (\tau_{n_k} \leq T) = 0.
\]
Because \(\{\tau_n \leq T\}^c = \{\tau_n = \infty\}\), we obtain
\[
1 = \lim_{k \to \infty} {\mathbb{Q}}^{n_k} (\tau_{n_k} = \infty) \leq \limsup_{n \to \infty} {\mathbb{Q}}^n(\tau_n = \infty) \leq 1,
\]
which implies \eqref{eq: cond qn}. The proof is complete.
\end{proof}
\subsubsection{An integral test}
Let \(\overline{a} \colon I \to (0, \infty)\) and \(\underline{u}, \overline{u} \colon I \to \mathbb{R}\) be Borel functions such that
\[
\frac{1}{\overline{a}} + |\underline{u}| + |\overline{u}| \in L^1_\textup{loc}(I).
\]
Recall from Section \ref{sec: MP SE} that in case \((f, g)\) is one of the pairs \((\underline{u}, \overline{a}), (\overline{u}, \overline{a})\) we set
\begin{align}\label{eq: v}
v(f, g)(x) = \int_{x_0}^x \exp \Big( - \int_{x_0}^y 2 f(z) \mathrm{d} z \Big) \int_{x_0}^y \frac{2 \exp (\int_{x_0}^u 2 f(z) \mathrm{d} z)}{g(u)} \mathrm{d} u \, \mathrm{d} y,\quad x \in I,
\end{align}
for a fixed \(x_0 \in I\).
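For orientation, let us relate \(v\) to the integral conditions from Section \ref{sec: existence MM MS} in the driftless case: if \(f \equiv 0\) and, say, \(I = (0, \infty)\), then Fubini's theorem yields
\[
\lim_{x \searrow 0} v(0, g)(x) = \int_0^{x_0} \int_y^{x_0} \frac{2}{g(u)} \mathrm{d} u \, \mathrm{d} y = \int_0^{x_0} \frac{2 u}{g(u)} \mathrm{d} u,
\]
which diverges if and only if \(\int_0^1 \frac{z}{g(z)} \mathrm{d} z = \infty\). For \(g = \sigma^2\) this is precisely the type of integral appearing in \eqref{eq: MS iff}.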
The main result of this section is the following:
\begin{theorem}\label{theo: 1D Feller}
Suppose that
\begin{align} \label{eq: U1 ass}
\lim_{x \nearrow r} v \left( \overline{u}, \overline{a}\right)(x) =
\lim_{x \searrow l} v\left(\underline{u}, \overline{a}\right)(x) =\infty.
\end{align}
Moreover, for all \(n \in \mathbb{N}\) assume that for \(\llambda \otimes {\mathbb{Q}}^n\)-a.a. \((t, \omega) \in [0, T] \times \Omega\)
\begin{equation}\label{eq: to hold}
\begin{split}
\sigma^2_t (\omega) &\leq \zeta (t) \overline{a} (S_t (\omega)), \\
b_t (\omega) &\leq \sigma_t^2 (\omega) \overline{u} (S_t(\omega)),\\
b_t (\omega) &\geq \sigma^2_t (\omega) \underline{u} (S_t(\omega)).
\end{split}
\end{equation}
Then, \eqref{eq: cond qn} holds.
\end{theorem}
\begin{proof}
Due to \cite[Lemma 5.5.26]{KaraShre}, there are differentiable functions \(U_1 \colon [x_0, r) \to [1, \infty)\) and \(U_2 \colon (l, x_0] \to [1, \infty)\) with locally absolutely continuous derivatives and a \(\llambda\)-null set \(N' \subset I\) such that \(U_1\) is increasing, \(U_2\) is decreasing, \(U_1 (x_0) = U_2(x_0) = 1, U'_1 (x_0) = U'_2(x_0) = 0\), for all \(x \in [x_0, r) \backslash N'\) and for all \(y \in (l, x_0] \backslash N'\)
\begin{align*}
\overline{a} (x) \left(\tfrac{1}{2} U_1'' (x) + \overline{u}(x) U_1' (x)\right) &= U_1 (x)\quad \textup{ and }\quad
\overline{a} (y) \left(\tfrac{1}{2} U_2'' (y) + \underline{u}(y) U_2' (y)\right)= U_2 (y),
\end{align*}
\(1 + v(\overline{u}, \overline{a}) \leq U_1\)
and \(1 + v(\underline{u}, \overline{a}) \leq U_2\).
We define
\begin{align*}
V \triangleq \begin{cases}
U_1,&\textup{ on } [x_0, r),
\\
U_2,&\textup{ on } (l,x_0],
\end{cases}
\end{align*}
which is a differentiable function with locally absolutely continuous derivative. In particular, \(V' \geq 0\) on \([x_0, r)\), \(V' \leq 0\) on \((l, x_0]\),
\(
\frac{1}{2} V'' + \underline{u} V' \geq 0
\)
on \((l, x_0] \backslash N'\) and \(
\frac{1}{2} V'' + \overline{u} V' \geq 0
\) on \([x_0, r)\backslash N'\).
Furthermore, \begin{align*}\lim_{x \nearrow r} V(x) = \lim_{x\searrow l} V(x) = \infty,\end{align*} due to the assumption \eqref{eq: U1 ass}.
Let \(\widetilde{N}\) be the set of all \((t, \omega) \in [0, T] \times \Omega\) such that \eqref{eq: to hold} holds.
For all \((t, \omega) \in \widetilde{N}\) with \(S_t (\omega) \in [x_0, r) \backslash N'\) we have
\begin{align*}
\mathcal{L}V (t) (\omega)
&= \tfrac{1}{2} \sigma^2_t(\omega) V'' (S_t(\omega)) + b_t(\omega) V' (S_t(\omega))
\\&\leq \sigma^2_t(\omega) \left( \tfrac{1}{2} V'' (S_t(\omega)) + \overline{u} (S_t(\omega)) V' (S_t(\omega))\right)
\\&\leq \zeta (t) \overline{a}(S_t(\omega)) \left(\tfrac{1}{2} V'' (S_t(\omega))+ \overline{u} (S_t(\omega))V' (S_t(\omega))\right)
= \zeta (t) V (S_t(\omega)).
\end{align*}
In the same manner we see that for all \((t, \omega) \in \widetilde{N}\) with \(S_t (\omega) \in (l, x_0] \backslash N'\)
\[
\mathcal{L} V(t) (\omega) \leq \zeta(t) V(S_t(\omega)).
\]
We conclude that \eqref{ineq: Lyapunov2} holds for \(N = N'\).
The claim follows from Theorem \ref{theo: NE 1}.
\end{proof}
\subsection{Criteria for non-existence}
In this section we give a converse to Theorem \ref{theo: 1D Feller}. As in Section \ref{sec: MP SE}, let \(I = (l, r)\) with \(- \infty \leq l < r \leq + \infty\) and let \(\underline{a} \colon I \to (0, \infty)\) and \(\underline{u}, \overline{u} \colon I \to \mathbb{R}\) be Borel functions such that
\[
\frac{1}{\underline{a}} + |\underline{u}| + |\overline{u}| \in L^1_\textup{loc}(I).
\]
If \((f, g)\) is one of the pairs \((\underline{u}, \underline{a}), (\overline{u}, \underline{a})\), we set \(v(f, g)\) as in \eqref{eq: v}.
Let \(0 < T < \infty\), \((\Omega, \mathcal{F})\) be a measurable space with right-continuous filtration \(\mathbf{F} = (\mathcal{F}_t)_{t \in [0, T]}\) and \(s_0 \in I\). Suppose that \((\Omega, \mathcal{F}, \mathbf{F})\) supports three progressively measurable processes \(S = (S_t)_{t \in [0, T]}, b = (b_t)_{t \in [0, T]}\) and \(\sigma = (\sigma_t)_{t \in [0, T]}\). We define \(\mathcal{I}\) to be the set of all pairs \(({\mathbb{Q}}, B)\) consisting of a probability measure \({\mathbb{Q}}\) on \((\Omega, \mathcal{F})\) and an \((\mathbf{F}, {\mathbb{Q}})\)-Brownian motion \(B = (B_t)_{t \in [0, T]}\) with the properties that \(S\) is \({\mathbb{Q}}\)-a.s. \(I\)-valued and
\[
\mathrm{d} S_t = b_t \mathrm{d} t + \sigma_t \mathrm{d} B_t,\quad S_0 = s_0,
\]
where it is implicit that the integrals are well-defined.
\begin{theorem} \phantomsection \label{theo: 1D Feller p2}
\begin{enumerate}
\item[\textup{(i)}]
Suppose that the pair \((\underline{u}, \underline{a})\) satisfies the YW conditions (see Section \ref{sec: GC} for this terminology) and
\begin{align*}
\lim_{x \nearrow r} v \left( \underline{u}, \underline{a}\right)(x) < \infty. \end{align*}
Then, there exists no pair \(({\mathbb{Q}}, B) \in \mathcal{I}\) such that for \(\llambda \otimes {\mathbb{Q}}\)-a.a. \((t, \omega) \in [0, T] \times \Omega\)
\begin{equation}\label{eq: contr 1}
\begin{split}
\underline{a} (S_t(\omega)) &\leq \sigma^2_t(\omega),\\
\underline{u} (S_t(\omega)) \sigma^2_t(\omega) &\leq b_t(\omega).
\end{split}
\end{equation}
\item[\textup{(ii)}]
Suppose that the pair \((\overline{u}, \underline{a})\) satisfies the YW conditions and
\[
\lim_{x \searrow l} v\left(\overline{u}, \underline{a}\right)(x) < \infty.
\]
Then, there exists no pair \(({\mathbb{Q}}, B)\in \mathcal{I}\) such that for \(\llambda \otimes {\mathbb{Q}}\)-a.a. \((t, \omega) \in [0, T] \times \Omega\)
\begin{equation}\label{eq: contr 2}
\begin{split}
\underline{a} (S_t(\omega)) &\leq \sigma^2_t(\omega),\\
\overline{u} (S_t(\omega)) \sigma^2_t(\omega) &\geq b_t(\omega).
\end{split}
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
\textbf{(i).}
We use a comparison and contradiction argument as in the proof of \cite[Theorem 4.1]{criens17b}.
For contradiction, assume that \(({\mathbb{Q}}, B)\in \mathcal{I}\) is such that \eqref{eq: contr 1} holds. W.l.o.g. we assume that \(\mathbf{F}\) is \({\mathbb{Q}}\)-complete. In the following we work on \((\Omega, \mathcal{F}, \mathbf{F}, {\mathbb{Q}})\).
Because \(\underline{a}\) is positive and continuous and a.s. \[\llambda (\{t \in [0, T] \colon \underline{a}(S_t) > \sigma^2_t\}) = 0, \qquad \int_0^T \sigma^2_s \mathrm{d} s < \infty,\] the function \[[0, T] \ni t \mapsto \int_0^t \frac{\sigma^2_s}{\underline{a}(S_s)} \mathrm{d} s\] is a.s. finite, continuous and strictly increasing, which implies that the same holds for the function
\[
\phi_t \triangleq \inf \Big( s \in [0, T] \colon \int_0^s \frac{\sigma^2_r}{\underline{a}(S_r)} \mathrm{d} r \geq t \Big),\quad t \in [0, T],
\]
see \cite[pp. 179 -- 180]{RY}.
Furthermore, we have a.s. \(\phi_t \leq t \text{ for all } t \in [0, T]\). We redefine \(\phi_t\) to be zero on the null sets where the previously mentioned properties fail.
Because \(\mathbf{F}\) is complete, this modification of \((\phi_t)_{t \in [0, T]}\) is an increasing and continuous family of finite stopping times.
Next, we set \(\mathbf{F}_\phi \triangleq (\mathcal{F}_{\phi_t})_{t \in [0, T]}\).
The following lemma follows from \cite[Propositions V.1.4, V.1.5]{RY}.
\begin{lemma}\label{lem: tc}
Suppose that \((H_t)_{t \in [0, T]}\) is progressively measurable. Then, the time-changed process \((H_{\phi_t})_{t \in [0, T]}\) is \(\mathbf{F}_\phi\)-progressively measurable and a.s.
\[
\int_0^t H_{\phi_s} \mathrm{d} s = \int_0^{\phi_t} \frac{H_s \sigma^2_s}{\underline{a}(S_s)} \mathrm{d} s, \quad t \in [0, T],
\]
provided the integrals are well-defined.
Moreover, the process \(B_\phi = (B_{\phi_t})_{t \in [0, T]}\) is a continuous local \(\mathbf{F}_\phi\)-martingale with a.s.
\(
[B_\phi, B_\phi]= \phi
\),
and if a.s. \(\int_0^T H^2_s \mathrm{d} s < \infty\) then also a.s. \(\int_0^T H_{\phi_s}^2 \mathrm{d} \phi_s < \infty\) and a.s.
\[
\int_0^t H_{\phi_s} \mathrm{d} B_{\phi_s} = \int_0^{\phi_t} H_s \mathrm{d} B_s, \quad t \in [0, T].
\]
\end{lemma}
We deduce from Lemma \ref{lem: tc}
that a.s.
\begin{align*}
\llambda \big(\big\{ t \in [0, T] \colon \underline{a}(S_{\phi_t}) &> \sigma^2_{\phi_t} \text{ or } \underline{u}(S_{\phi_t}) \sigma^2_{\phi_t} > b_{\phi_t}\big\}\big) \\&= \int_0^{\phi_T} \frac{\mathds{1}_{\{\underline{a}(S_s) > \sigma^2_s\} \cup \{\underline{u}(S_s) \sigma^2_s > b_s\}} \sigma^2_s}{\underline{a}(S_s)} \mathrm{d} s = 0.
\end{align*}
We will use this observation in the following without further reference.
Applying Lemma \ref{lem: tc} with
\[
H_t \triangleq \frac{\underline{a}(S_t)}{\sigma^2_t} \mathds{1}_{\{\sigma^2_t > 0\}}, \quad t \in [0, T],
\]
yields that a.s.
\begin{align}\label{eq: meas eq}
\mathrm{d} \phi_t = \frac{\underline{a}(S_{\phi_t})}{\sigma^2_{\phi_t}} \mathrm{d} t.
\end{align}
Using again Lemma \ref{lem: tc}, we obtain that a.s. for all \(t \in [0, T]\)
\begin{align*}
S_{\phi_t} &= S_{\phi_0} + \int_0^{\phi_t} b_{s} \mathrm{d} s + \int_0^{\phi_t} \sigma_s \mathrm{d} B_s
\\&= s_0 + \int_0^t \frac{b_{\phi_s}\underline{a}(S_{\phi_s})}{\sigma^2_{\phi_s}} \mathrm{d} s + \int_0^t \sigma_{\phi_s} \mathrm{d} B_{\phi_s}
\\&= s_0 + \int_0^t \frac{b_{\phi_s}\underline{a}(S_{\phi_s})}{\sigma^2_{\phi_s}} \mathrm{d} s + \int_0^t \underline{a}^\frac{1}{2} (S_{\phi_s}) \mathrm{d} B'_s,
\end{align*}
where
\[
B' \triangleq \int_0^\cdot \frac{\sigma_{\phi_s} \mathrm{d} B_{\phi_s}}{\underline{a}^\frac{1}{2} (S_{\phi_s})}.
\]
Due to Lemma \ref{lem: tc} and \eqref{eq: meas eq}, we obtain that a.s. for all \(t \in [0, T]\)
\begin{align*}
[B', B']_t &= \int_0^t \frac{\sigma^2_{\phi_s}}{\underline{a}(S_{\phi_s})} \mathrm{d} [B_{\phi}, B_{\phi}]_s \\&= \int_0^t \frac{\sigma^2_{\phi_s}}{\underline{a}(S_{\phi_s})} \mathrm{d} \phi_s
\\ &= \int_0^t \frac{\sigma^2_{\phi_s}}{\underline{a}(S_{\phi_s})} \frac{\underline{a}(S_{\phi_s})}{\sigma^2_{\phi_s}} \mathrm{d} s
= t.
\end{align*}
Consequently, \(B'\) is a continuous local \(\mathbf{F}_\phi\)-martingale with a.s. \([B', B']_t = t\) for \(t \in [0, T]\), i.e. an \(\mathbf{F}_\phi\)-Brownian motion due to L\'evy's characterization.
We summarize that
\begin{align*}
\mathrm{d} S_{\phi_t} = \underline{a} (S_{\phi_t}) \frac{b_{\phi_t}}{\sigma^2_{\phi_t}} \mathrm{d} t + \underline{a}^\frac{1}{2}(S_{\phi_t}) \mathrm{d} B'_t, \quad S_{\phi_0} = s_0.
\end{align*}
Using a standard extension of \((\Omega, \mathcal{F}, \mathbf{F}_{\phi}, {\mathbb{Q}})\) we can extend \((B'_t)_{t \in [0, T]}\) to a Brownian motion \((B'_t)_{t \geq 0}\), see, e.g., the proof of \cite[Theorem V.1.7]{RY}.
We will use the following terminology: When we say that \((V_t)_{t \geq 0}\) is a continuous \([l, r]\)-valued process we mean that all its paths are continuous in the \([l, r]\)-topology \emph{and} absorbed in \(\{l, r\}\), i.e. that \(V_t = V_{\tau(V)}\) for all \(t \geq \tau(V) \triangleq \inf(t \in \mathbb{R}_+ \colon V_t \not \in I)\). This convention is in line with \cite[Definition 5.5.20]{KaraShre}.
\begin{definition}\label{def: see}
Let \(\mu \colon I \to \mathbb{R}\) and \(v \colon I \to \mathbb{R}\) be Borel functions.
We say that an SDE
\begin{align}\label{eq: def SDE}
\mathrm{d} V_t = \mu(V_t) \mathrm{d} t + v(V_t) \mathrm{d} B^*_t,
\end{align}
where \((B^*_t)_{t \geq 0}\) is a one-dimensional Brownian motion,
satisfies \emph{strong existence and uniqueness up to explosion}, if on any complete probability space \((\Omega^o, \mathcal{F}^o, \mathds{P}^o)\) with complete right-continuous filtration \(\mathbf{F}^o = (\mathcal{F}^o_t)_{t \geq 0}\), which supports a Brownian motion \((B^*_t)_{t \geq 0}\) and an \(I\)-valued \(\mathcal{F}^o_0\)-measurable random variable \(\psi\), there exists an up to indistinguishability unique adapted continuous \([l, r]\)-valued process \((V_t)_{t \geq 0}\) such that a.s.
\[
V_{t \wedge \theta_n} = \psi + \int_0^{t \wedge \theta_n} \mu(V_s) \mathrm{d} s + \int_0^{t \wedge \theta_n} v(V_s) \mathrm{d} B^*_s, \quad t \geq 0, n \in \mathbb{N},
\]
where
\[
\theta_n \triangleq \inf(t \in \mathbb{R}_+ \colon V_t \not \in (l_n, r_n)), \quad n \in \mathbb{N}.
\]
It is implicit that the integrals are well-defined.
The process \((V_t)_{t \geq 0}\) is called the \emph{solution process to \eqref{eq: def SDE} with driver \((B^*_t)_{t \geq 0}\)}.
\end{definition}
Due to \cite[Remark 4.50 (2), Theorem 4.53]{MANA:MANA19911510111},
the SDE
\begin{align}\label{eq: some SDE}
\mathrm{d} V_t = \underline{a}(V_t) \underline{u}(V_t) \mathrm{d} t + \underline{a}^\frac{1}{2} (V_t) \mathrm{d} B^*_t
\end{align}
satisfies strong existence and uniqueness up to explosion.
Consequently, there exists a solution process \((Y_t)_{t \geq 0}\) to \eqref{eq: some SDE} with driver \((B'_t)_{t \geq 0}\).
The following lemma is proven after the proof of Theorem \ref{theo: 1D Feller p2} is complete.
\begin{lemma}\label{lem: order}
Almost surely \(Y_t \leq S_{\phi_t}\) for all \(t \leq T \wedge \tau(Y)\).
\end{lemma}
Because \((Y_t)_{t \geq 0}\) is regular due to \cite[Proposition 2.2]{mijatovic2012b} and \(\lim_{x \nearrow r} v \left( \underline{u}, \underline{a}\right)(x) < \infty\), we deduce from \cite[Proposition 2.12]{mijatovic2012b} and \cite[Theorem 1.1]{bruggeman2016} that \((Y_t)_{t \in [0, T]}\) reaches \(r\) with positive probability. Consequently, due to Lemma \ref{lem: order}, \((S_t)_{t \in [0, T]}\) reaches \(r\) with positive probability. This is a contradiction.
\textbf{(ii).}
For contradiction, assume that \(({\mathbb{Q}}, B)\in \mathcal{I}\) is such that \eqref{eq: contr 2} holds.
By the same arguments as in part (i), there exists a process \((Y_t)_{t \geq 0}\) such that
\[
\mathrm{d} Y_t = \underline{a}(Y_t) \overline{u} (Y_t) \mathrm{d} t + \underline{a}^\frac{1}{2}(Y_t) \mathrm{d} B'_t, \quad Y_0 = s_0,
\]
and a.s. \(S_{\phi_t} \leq Y_t\) for all \(t \leq T \wedge \tau(Y).\) Because \(\lim_{x \searrow l} v\left(\overline{u}, \underline{a}\right)(x) <\infty,\) the process \((Y_t)_{t \in [0, T]}\) reaches \(l\) with positive probability and again the pathwise ordering gives a contradiction.
\end{proof}
\noindent
\textit{Proof of Lemma \ref{lem: order}:}
There are functions \(h_n \in \mathscr{H}\) and \(\kappa_n \in \mathscr{K}\) such that for all \(x, y \in [l_n, r_n]\)
\begin{align*}
|\underline{a}^\frac{1}{2} (x) - \underline{a}^\frac{1}{2} (y)| &\leq h_n (|x - y|),\quad |\underline{a}(x) \underline{u}(x) - \underline{a}(y) \underline{u}(y)| \leq \kappa_n (|x - y|).
\end{align*}
We set \[\rho_n \triangleq \inf (t \in [0, T] \colon S_{\phi_t} \not \in (l_n, r_n) \textup{ or } Y_t \not \in (l_n, r_n)).\]
Note that for all \(t \in (0, T]\) we have
\[
\int_0^{t \wedge \rho_n} \frac{\mathrm{d} [Y - S_{\phi},Y - S_{\phi}]_s}{h^2_n(|Y_s -S_{\phi_s}|)} = \int_0^{t \wedge \rho_n} \frac{\big(\underline{a}^\frac{1}{2} (Y_s) - \underline{a}^\frac{1}{2} (S_{\phi_s})\big)^2}{h^2_n(|Y_s - S_{\phi_s}|)} \mathrm{d} s \leq \int_0^t \mathrm{d} s = t.
\]
Thus, \cite[Lemma IX.3.3]{RY} implies that the local time of \(Y_{\cdot \wedge \rho_n}- S_{\phi_{\cdot \wedge \rho_n}}\) in the origin is a.s. zero.
We deduce from Tanaka's formula that a.s.
\[
(Y_{t \wedge \rho_n} - S_{\phi_{t \wedge \rho_n}})^+ = \int_0^{t \wedge \rho_n} \mathds{1}_{\{Y_s - S_{\phi_s} > 0\}} \mathrm{d} (Y_s - S_{\phi_s}), \quad t \in [0, T].
\]
Taking expectation, using the martingale property of the Brownian part of \(Y_{\cdot \wedge \rho_n} - S_{\phi_{\cdot \wedge \rho_n}}\) and Jensen's inequality yields that for all \(t \in [0, T]\)
\begin{align*}
{\mathds{E}}^{\mathbb{Q}} \big[ (Y_{t \wedge \rho_n} - S_{\phi_{t \wedge \rho_n}})^+\big] &= {\mathds{E}}^{\mathbb{Q}}\Big[ \int_0^{t \wedge \rho_n} \mathds{1}_{\{Y_s - S_{\phi_s} > 0\}} \Big( \underline{a} (Y_s) \underline{u}(Y_s) - \underline{a}(S_{\phi_s}) \frac{b_{\phi_s}}{\sigma^2_{\phi_s}}\Big) \mathrm{d} s \Big]
\\&\leq {\mathds{E}}^{\mathbb{Q}}\Big[ \int_0^{t \wedge \rho_n} \mathds{1}_{\{Y_s - S_{\phi_s} > 0\}} \big| \underline{a} (Y_s) \underline{u}(Y_s) - \underline{a}(S_{\phi_s}) \underline{u}(S_{\phi_s})\big| \mathrm{d} s \Big]
\\&\leq {\mathds{E}}^{\mathbb{Q}}\Big[ \int_0^{t \wedge \rho_n} \mathds{1}_{\{Y_s - S_{\phi_s} > 0\}} \kappa_n (|Y_s - S_{\phi_s}|) \mathrm{d} s \Big]
\\&\leq \int_0^t {\mathds{E}}^{\mathbb{Q}}\big[ \kappa_n ((Y_{s \wedge \rho_n} - S_{\phi_{s \wedge \rho_n}})^+)\big] \mathrm{d} s
\\&\leq \int_0^t \kappa_n \big( {\mathds{E}}^{\mathbb{Q}}\big[(Y_{s \wedge \rho_n} - S_{\phi_{s \wedge \rho_n}})^+\big] \big) \mathrm{d} s.
\end{align*}
Finally, Bihari's lemma (see \cite[Lemma E.2]{criens17b}) yields that for all \(t \in [0, T]\)
\[
{\mathds{E}}^{\mathbb{Q}} \big[ (Y_{t \wedge \rho_n} - S_{\phi_{t \wedge \rho_n}})^+\big] = 0.
\]
Consequently, due to the continuous paths of \(Y\) and \(S_\phi\), the claim follows.
\qed
\subsection{Proof of Theorem \ref{theo: mart Ito}}
Because non-negative local martingales are supermartingales, \(Z\) is a martingale if and only if \({\mathds{E}}^\mathds{P}[Z_T] = 1\).
By (M1), we can define \({\mathbb{Q}}^n\) by the Radon--Nikodym derivative \(\frac{\mathrm{d} {\mathbb{Q}}^n}{\mathrm{d} \mathds{P}} = Z_{T \wedge \tau_n}\).
We note that the assumption \(\llambda \otimes \mathds{P}\)-a.e. \(\sigma \not = 0\) implies \eqref{eq: nondeg n}.
Due to Girsanov's theorem, there exists a \({\mathbb{Q}}^n\)-Brownian motion \(B^n = (B^n_t)_{t \in [0, T]}\) such that
\[
\mathrm{d} S_{t \wedge \tau_n} = (b_t + c_t \sigma_t) \mathds{1}_{\{t \leq \tau_n\}} \mathrm{d} t + \sigma_t \mathds{1}_{\{t \leq \tau_n\}} \mathrm{d} B^n_t.
\]
The monotone convergence theorem yields that
\begin{align*}
{\mathds{E}}^\mathds{P} \big[ Z_T\big] &= \limsup_{n \to \infty} {\mathds{E}}^\mathds{P} \big[ Z_T \mathds{1}_{\{\tau_n = \infty\}}\big]
\\&= \limsup_{n \to \infty} {\mathbb{Q}}^n(\tau_n = \infty).
\end{align*}
In view of (M2) and (M3), Theorem \ref{theo: 1D Feller} yields that
\[
\limsup_{n \to \infty} {\mathbb{Q}}^n (\tau_n = \infty) = 1.
\]
Thus, \({\mathds{E}}^\mathds{P}[Z_T] = 1\) and the proof is complete.
\qed
\subsection{Proof of Theorem \ref{theo: general SLM}}
For contradiction, assume that \((Z_t)_{t \in [0, T]}\) is a martingale.
Define a probability measure \({\mathbb{Q}}\) by the Radon--Nikodym derivative \(\frac{\mathrm{d} {\mathbb{Q}}}{\mathrm{d} \mathds{P}} \triangleq Z_T\). By Girsanov's theorem, there exists a \({\mathbb{Q}}\)-Brownian motion \(B = (B_t)_{t \in [0, T]}\) such that
\[
\mathrm{d} S_t = (b_t + c_t \sigma_t) \mathrm{d} t + \sigma_t \mathrm{d} B_t.
\]
Consequently, in case (SL1) holds we obtain a contradiction to part (i) of Theorem \ref{theo: 1D Feller p2} and in case (SL2) holds we obtain a contradiction to part (ii) of Theorem \ref{theo: 1D Feller p2}. The proof is complete.
\qed
\section{Proof of Theorem \ref{theo: mart MS}}\label{sec: pf MS}
The section is split into two parts: First, we prove existence, non-existence and local uniqueness for switching diffusions and second, we deduce Theorem \ref{theo: mart MS}.
\subsection{Existence and non-existence criteria}\label{sec: jump type}
As in Section \ref{sec: MG MS}, let \(I = (l, r)\) with \(- \infty \leq l < r \leq + \infty\) and \(J = \{1, \dots, N\}\) with \(1 \leq N \leq \infty\). Moreover, let \(u \colon I \times J \to \mathbb{R}\) and \(\sigma \colon I \times J \to \mathbb{R}\backslash \{0\}\) be Borel functions such that
\begin{align}\label{eq: ES cond}
\frac{1 + |u(\cdot, j)|}{\sigma^2 (\cdot, j)} \in L^1_\textup{loc} (I) \text{ for all } j \in J.
\end{align}
We fix \(x_0 \in I\) and set
\[
v(x, j) \triangleq \int_{x_0}^x \exp \left( - \int_{x_0}^y \frac{2u(z, j)}{\sigma^2(z, j)} \mathrm{d} z \right) \int_{x_0}^y \frac{2 \exp (\int_{x_0}^s \frac{2u(z, j)}{\sigma^2(z, j)} \mathrm{d} z)}{\sigma^2(s, j)} \mathrm{d} s \, \mathrm{d} y
\]
for \((x, j) \in I \times J\).
Let \((\Omega, \mathcal{F}, \mathbf{F}, \mathds{P})\) be a filtered complete probability space with a right-continuous and complete filtration \(\mathbf{F} = (\mathcal{F}_t)_{t \geq 0}\), which supports a Brownian motion \(W = (W_t)_{t \geq 0}\), a \(J\)-valued irreducible continuous-time Feller--Markov chain \(\xi = (\xi_t)_{t \geq 0}\) and an \(I\)-valued \(\mathcal{F}_0\)-measurable random variable \(\phi\).
The main result of this section is the following:
\begin{theorem}\phantomsection\label{theo: existence Markov}
\begin{enumerate}
\item[\textup{(i)}] Suppose that \(\sigma\) satisfies the ES conditions for all \(j \in J\) (see Section \ref{sec: MG MS} for this terminology) and
that \begin{align}\label{eq: MC FT} \lim_{x \searrow l} v(x, j) = \lim_{x \nearrow r} v(x, j) = \infty \text{ for all } j \in J.\end{align} Then, there exists an adapted \(I\)-valued continuous process \((Y_t)_{t \geq 0}\) such that
\begin{align}\label{eq: SDE MDP}
Y = \phi + \int_0^\cdot u(Y_s, \xi_s) \mathrm{d} s + \int_0^\cdot \sigma (Y_s, \xi_s) \mathrm{d} W_s,
\end{align}
where it is implicit that the integrals are well-defined.
\item[\textup{(ii)}]
Assume there exists a \(j \in J\) such that \(\sigma\) satisfies the ES conditions for \(j\)
and
\[\lim_{x \searrow l} v(x, j) < \infty \text{ or } \lim_{x \nearrow r} v(x, j) < \infty.\]
Let \(0 < T \leq \infty\) be a time horizon. If \(\xi\) is recurrent, then there exists no adapted \(I\)-valued continuous process \(Y = (Y_t)_{t \in [0, T]}\) such that \eqref{eq: SDE MDP} holds.
\end{enumerate}
\end{theorem}
\begin{proof}
The case \(N = 1\) concerns classical diffusions for which all claims are known, see \cite{bruggeman2016,MANA:MANA19911510111, KaraShre} for details. We prove the claim under the assumption \(N \geq 2\).
\textbf{(i).}
We define the jump times of \(\xi\) inductively by
\[
\gamma_0 \triangleq 0, \quad \gamma_n \triangleq \inf(t \geq \gamma_{n-1} \colon \xi_t \not = \xi_{\gamma_{n-1}}), \quad n \in \mathbb{N}.
\]
Because \(\xi\) is irreducible, we have a.s. \(\gamma_n < \infty\) (see \cite[Theorem 10.19]{Kallenberg}) and a.s. \(\gamma_n - \gamma_{n-1} > 0\) for all \(n \in \mathbb{N}\).
We follow the idea from the proof of \cite[Theorem IV.9.1]{IW} and construct the process \(Y\) explicitly from solutions to the SDEs
\begin{align}\label{eq: SDE 1}
\mathrm{d} X^j_t = u(X^j_t, j) \mathrm{d} t + \sigma (X^j_t, j) \mathrm{d} W'_t,
\end{align}
where \(W' = (W'_t)_{t \geq 0}\) is a Brownian motion.
For the construction we require a strong existence and uniqueness property, which we explain next.
Fix \(j \in J\). It follows from \cite[Remark 4.50 (2), Theorem 4.53]{MANA:MANA19911510111} and Feller's test for explosion (see \cite[Theorem 5.5.29]{KaraShre}) that the SDE \eqref{eq: SDE 1} has a weak solution and that it satisfies pathwise uniqueness for all deterministic initial values. We conclude from \cite[Theorem 18.14]{Kallenberg} that there exists a Borel function \(F^{j} \colon I \times C(\mathbb{R}_+, \mathbb{R}) \to C(\mathbb{R}_+, I)\) such that for any one-dimensional Brownian motion \(W' = (W'_t)_{t \geq 0}\) and any \(I\)-valued random variable \(\psi\), which is independent of \(\sigma(W'_t, t \in \mathbb{R}_+)\), the process \(X^j = F^{j}(\psi, W')\) is a solution process to \eqref{eq: SDE 1} with \(X^j_0 = \psi\), which is adapted to the completion of the natural filtration of \(W'\) and \(\psi\), see \cite[Definition 5.2.1]{KaraShre}. The function \(F^j\) is independent of the law of \(\psi\) and universally adapted (see \cite[p. 346]{Kallenberg} for a definition).
Set \(W^n \triangleq W_{\cdot + \gamma_n} - W_{\gamma_n}\). Due to \cite[Proposition V.1.5]{RY} and L\'evy's characterization, \(W^n\) is a Brownian motion for the filtration \(\mathbf{F}^n \triangleq (\mathcal{F}_{t + \gamma_n})_{t \geq 0}\).
In particular, \(W^n\) is independent of \(\mathcal{F}_{\gamma_n}\).
By induction, define \begin{align*}
Y^0 &\triangleq \sum_{j \in J} F^{j}(\phi, W) \mathds{1}_{\{\xi_0 = j\}}, \\ Y^n &\triangleq \sum_{j \in J} F^{j}(Y^{n-1}_{\gamma_n - \gamma_{n-1}}, W^n) \mathds{1}_{\{\xi_{\gamma_n} = j\}}, \quad n \in \mathbb{N}.
\end{align*}
Moreover, set
\[
Y_t \triangleq \sum_{n = 0}^\infty Y^n_{t - \gamma_n} \mathds{1}_{\{\gamma_n \leq t < \gamma_{n+1}\}},\quad t \in \mathbb{R}_+.
\]
The process \(Y\) is \(I\)-valued and continuous and, as we explain next, it is also \(\mathbf{F}\)-adapted.
Define \(H_t \triangleq Y^n_{t - \gamma_n} \mathds{1}_{\{\gamma_n < t\}}\).
We claim that \((H_t)_{t \geq 0}\) is \(\mathbf{F}\)-progressively measurable. Because \(t \mapsto Y^n_{t - \gamma_n} \mathds{1}_{\{\gamma_n < t\}}\) is left-continuous and \(s \mapsto Y^n_{t - s} \mathds{1}_{\{s < t\}}\) is right-continuous, an approximation argument shows that it suffices to explain that \((h_t)_{t \geq 0} \triangleq (Y^n_{t - \zeta} \mathds{1}_{\{\zeta < t\}})_{t \geq 0}\) is \(\mathbf{F}\)-adapted for any \(\mathbf{F}\)-stopping time \(\zeta\) which takes values in the countable set \(2^{-m} \overline{\mathbb{N}}\) for some \(m \in \mathbb{N}\) and satisfies \(\zeta \geq \gamma_n\).
Let \(G \in \mathcal{B}(\mathbb{R})\) and set \(N_{m, t} \triangleq 2^{-m} \overline{\mathbb{N}}\cap [0, t)\). We have
\[
\{h^m_t \in G\} = \Big(\bigcup_{k \in N_{m, t}} \big( \{h^m_t \in G\} \cap \{\zeta = k\}\big)\Big) \cup \big(\{0 \in G\} \cap \{\zeta \geq t\}\big) \in \mathcal{F}_t.
\]
Here, we use that \(\{Y^n_{t - k} \in G\} \in \mathcal{F}_{t - k + \gamma_n} \subseteq \mathcal{F}_{t - k + \zeta}\) and that \(\mathcal{F}_{t - k + \zeta} \cap \{\zeta = k\} \in \mathcal{F}_t\). Thus, \((H_t)_{t \geq 0}\) is \(\mathbf{F}\)-progressively measurable and consequently \((Y_t)_{t \geq 0}\) is \(\mathbf{F}\)-adapted.
We note that
\[
\gamma_n - \gamma_{n - 1} = \inf(t \in \mathbb{R}_+ \colon \xi_{t + \gamma_{n - 1}} \not = \xi_{\gamma_{n-1}}),
\]
which is an \(\mathbf{F}^{n - 1}\)-stopping time. Thus, \(Y^{n-1}_{\gamma_n - \gamma_{n - 1}}\) is \(\mathcal{F}_{\gamma_n}\)-measurable and therefore independent of \(\sigma(W^n_t, t \in \mathbb{R}_+)\). This yields that the process \(X^{n, j} \triangleq F^{j} (Y^{n - 1}_{\gamma_n - \gamma_{n-1}}, W^n)\) satisfies
\begin{align*}
\mathrm{d} X^{n, j}_t &= u(X^{n, j}_t, j)\, \mathrm{d} t + \sigma (X^{n, j}_t, j)\, \mathrm{d} W^n_t, \quad X^{n, j}_0 = Y^{n-1}_{\gamma_n - \gamma_{n -1}}.
\end{align*}
Thus, due to classical rules for time-changed stochastic integrals (see \cite[Propositions V.1.4, V.1.5]{RY}), a.s. for \(t \in [\gamma_n, \gamma_{n+1}]\) on \(\{\xi_{\gamma_n} = j\}\) we have
\begin{align}
Y^n_{t - \gamma_n}
&= Y^{n-1}_{\gamma_n - \gamma_{n - 1}} + \int_0^{t - \gamma_n} u(X^{n, j}_s, j)\, \mathrm{d} s+ \int_0^{t - \gamma_n} \sigma(X^{n, j}_s, j)\, \mathrm{d} W^n_s \nonumber
\\&= Y^{n-1}_{\gamma_n - \gamma_{n-1}} + \int_{\gamma_n}^{t} u(Y^n_{s - \gamma_n}, j)\, \mathrm{d} s + \int_{\gamma_n}^t \sigma (Y^n_{s - \gamma_n}, j)\, \mathrm{d} W_s \nonumber
\\&= Y^{n-1}_{\gamma_n - \gamma_{n-1}} + \int_{\gamma_n}^{t} u(Y_s, \xi_{s})\, \mathrm{d} s + \int_{\gamma_n}^t \sigma (Y_s, \xi_{s})\, \mathrm{d} W_s. \nonumber
\end{align}
By induction, a.s. for \(t \in [\gamma_n, \gamma_{n + 1}]\)
\[
Y^n_{t - \gamma_n} = \phi + \int_0^t u(Y_s, \xi_{s})\, \mathrm{d} s + \int_0^t \sigma (Y_s, \xi_{s})\, \mathrm{d} W_s.
\]
Therefore, the process \(Y\) satisfies the SDE
\[
\mathrm{d} Y_t = u(Y_t, \xi_{t})\, \mathrm{d} t + \sigma(Y_t, \xi_{t})\, \mathrm{d} W_t,\quad Y_0 = \phi,
\]
and the proof of (i) is complete.
\textbf{(ii).}
For contradiction, assume that \(Y\) satisfies \eqref{eq: SDE MDP}.
Let \(j \in J\) be such that \(\lim_{x \searrow l} v(x, j) < \infty\) or \(\lim_{x \nearrow r} v(x, j) < \infty\).
We define
\[
\delta \triangleq \inf (t \in \mathbb{R}_+ \colon \xi_t = j ), \quad \zeta \triangleq \inf(t \geq \delta \colon \xi_t \not = j).
\]
Because \(\xi\) is recurrent, we have a.s. \(\delta < \infty\), see \cite[Theorem 1.5.7]{norris_1997}.
Due to the strong Markov property of \(\xi\) and \cite[Lemma 10.18]{Kallenberg}, for all \(G \in \mathcal{B}(\mathbb{R}_+)\) it holds that
\begin{align}\label{eq: exp}
\mathds{P} (\zeta - \delta \in G) = - \int_G q_{jj} e^{q_{jj} x}\, \mathrm{d} x,
\end{align}
where \(q_{jj} < 0\) is the \(j\)-th diagonal element of the \(Q\)-matrix of \(\xi\).
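For orientation, if \(\xi\) were a two-state chain, i.e. \(J = \{1, 2\}\) with
\[
Q = \begin{pmatrix} - \lambda & \lambda \\ \mu & - \mu \end{pmatrix}, \quad \lambda, \mu > 0,
\]
then for \(j = 1\) we have \(q_{jj} = - \lambda\) and \eqref{eq: exp} recovers the classical exponential holding time property: \(\mathds{P}(\zeta - \delta \in G) = \int_G \lambda e^{- \lambda x}\, \mathrm{d} x\).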
Recall our convention that we say a process \(V = (V_t)_{t \geq 0}\) is continuous and \([l, r]\)-valued if all paths are continuous in the \([l, r]\)-topology \emph{and} absorbed in \(\{l, r\}\), i.e. \(V_t = V_{\tau(V)}\) for all \(t \geq \tau(V) \triangleq \inf(t \in \mathbb{R}_+ \colon V_t \not \in I)\).
It follows from \cite[Remark 4.50 (2), Theorem 4.53]{MANA:MANA19911510111} that the SDE \eqref{eq: SDE 1}
satisfies strong existence and uniqueness up to explosion in the sense of Definition \ref{def: see}.
Consequently, there exists a continuous \([l, r]\)-valued process \(X = (X_t)_{t \geq 0}\) such that
\begin{align}\label{eq: SDE e}
\mathrm{d} X_t = u(X_t, j)\, \mathrm{d} t + \sigma (X_t, j)\, \mathrm{d} W^\delta_t,\quad X_0 = Y_{\delta \wedge T},
\end{align}
where \(W^\delta \triangleq W_{\cdot + \delta \wedge T} - W_{\delta \wedge T}\) is a Brownian motion for the filtration \(\mathbf{F}^\delta \triangleq (\mathcal{F}_{t + \delta \wedge T})_{t \geq 0}\).
We prove the following lemma after the proof of (ii) is complete.
\begin{lemma}\label{lem: comparison}
Almost surely \(Y_{t + \delta} = X_t\) for all \(0 \leq t \leq \zeta - \delta\) on \(\{\zeta \leq T\}\).
\end{lemma}
Because on \(\{\tau (X) < \infty\}\) we have \(X_{\tau(X)} \not \in I\), Lemma \ref{lem: comparison} implies that
\begin{equation}\label{eq: long comp} \begin{split}
\mathds{P} (\tau (X) \leq \zeta - \delta, \zeta \leq T) = 0.
\end{split}\end{equation}
The proof of the following lemma is given after the proof of (ii) is complete.
\begin{lemma}\label{lem: loc pathwise uniqueness}
Suppose that the SDE \eqref{eq: def SDE} satisfies strong existence and uniqueness up to explosion. Let \(\psi\) be an \(I\)-valued \(\mathcal{F}_0\)-measurable random variable and let \((V_t)_{t \geq 0}\) be the solution process to \eqref{eq: def SDE} with driver \(W\) and initial value \(\psi\) and let \(\tau\) be a stopping time. Then, all adapted \(I\)-valued continuous processes \((U_t)_{t \geq 0}\) with
\[
\mathrm{d} U_t = \mu(U_t)\mathds{1}_{\{t \leq \tau\}}\, \mathrm{d} t + v (U_t)\mathds{1}_{\{t \leq \tau\}}\, \mathrm{d} W_t, \quad U_0 = \psi,
\]
are indistinguishable from \((V_{t \wedge \tau})_{t \geq 0}\).
\end{lemma}
Let \(l_n \searrow l, r_n \nearrow r\) be sequences such that \(l < l_{n+1} < l_n < r_n < r_{n +1} < r\) and set for a function \(\alpha \colon \mathbb{R}_+ \to [l, r]\)
\[
\tau_n (\alpha) \triangleq \inf(t \in \mathbb{R}_+ \colon \alpha(t) \not \in (l_n, r_n)).
\]
We conclude from Lemma \ref{lem: loc pathwise uniqueness} and Galmarino's test (see \cite[Lemma III.2.43]{JS}) that for all \(n \in \mathbb{N}\) the SDE
\begin{align}\label{eq: stopped sde}
\mathrm{d} X^j_t = u(X^j_t, j) \mathds{1}_{\{t \leq \tau_n(X^j) \}}\, \mathrm{d} t + \sigma (X^j_t,j) \mathds{1}_{\{t \leq \tau_n(X^j)\}}\, \mathrm{d} W_t,
\end{align}
satisfies weak existence and pathwise uniqueness in the usual sense, see \cite[Definitions 5.3.1, 5.3.2]{KaraShre}.
Thus, due to \cite[Theorem 18.14]{Kallenberg}, there exists a Borel function \(F^n \colon \mathbb{R} \times C(\mathbb{R}_+, \mathbb{R}) \to C(\mathbb{R}_+, I)\) such that whenever \(X^j\) solves \eqref{eq: stopped sde} with driver \(W= (W_t)_{t \geq 0}\) and (possibly stochastic) initial value \(X^j_0\), then a.s. \(X^j = F^n(X^j_0, W)\).
Lemma \ref{lem: loc pathwise uniqueness} and Galmarino's test yield that a.s.
\begin{align}\label{eq: gal 1}
\tau_n (X) = \tau_n(F^n (Y_{\delta \wedge T}, W^\delta)).
\end{align}
Because strong existence and uniqueness up to explosion holds for the SDE \eqref{eq: SDE 1}, for a.a. \(\omega \in \Omega\) there exists
an \(\mathbf{F}^\delta\)-adapted continuous \([l, r]\)-valued process \(Y^\omega = (Y^\omega_t)_{t \geq 0}\) such that
\[
\mathrm{d} Y^\omega_t = u(Y^\omega_t, j)\, \mathrm{d} t + \sigma (Y^\omega_t, j)\, \mathrm{d} W^\delta_t, \quad Y_0^\omega = Y_{\delta(\omega) \wedge T} (\omega) \in I.
\]
We stress that the initial value \(Y_{\delta(\omega) \wedge T}(\omega)\) is deterministic.
Lemma \ref{lem: loc pathwise uniqueness} and Galmarino's test yield that a.s.
\begin{align}\label{eq: gal2}
\tau_n (Y^\omega) = \tau_n (F^n(Y_{\delta (\omega) \wedge T} (\omega), W^\delta)).
\end{align}
We prove the following lemma after the proof of (ii) is complete.
\begin{lemma} \label{lem: ind}
For all \(G \in \mathcal{B}(\mathbb{R}_+)\) we have a.s. \[\mathds{P}(\zeta - \delta \in G | \mathcal{F}_{\delta \wedge T}, \sigma(W^\delta_t, t \in \mathbb{R}_+)) = - \int_G q_{jj} e^{q_{jj} x}\, \mathrm{d} x.\]
\end{lemma}
Using \eqref{eq: long comp}, the monotone convergence theorem and then \eqref{eq: gal 1}, we obtain that
\begin{align*}
0 &= \lim_{n \to \infty} \mathds{P} (\tau_n (X) \leq \zeta - \delta, \zeta \leq T)
\\&= \lim_{n \to \infty} \mathds{P}(\tau_n (F^n (Y_{\delta \wedge T}, W^\delta)) \leq \zeta - \delta, \zeta - \delta + \delta \leq T),
\intertext{using \cite[Theorem 5.4]{Kallenberg} and Lemma \ref{lem: ind} we further obtain that}
&= \lim_{n \to \infty} \int_0^T \mathds{P}(\tau_n(F^n(Y_{\delta \wedge T}, W^\delta)) \leq s, s + \delta \leq T) (- q_{jj}) e^{q_{jj} s}\, \mathrm{d} s
\\&= \lim_{n \to \infty} \int_{0}^T {\mathds{E}}^{\mathds{P}} \big[\mathds{P} (\tau_n (F^n (Y_{\delta \wedge T}, W^\delta)) \leq s| \mathcal{F}_{\delta \wedge T}) \mathds{1}_{\{s + \delta \leq T\}}\big] (- q_{jj}) e^{q_{jj} s}\, \mathrm{d} s,
\intertext{which, due to \cite[Theorem 5.4]{Kallenberg} and the independence of \(W^\delta\) and \(\mathcal{F}_{\delta \wedge T}\), equals}
&= \lim_{n \to \infty} \int_{0}^T \int_\Omega \mathds{P}(\tau_n (F^n (Y_{\delta (\omega) \wedge T} (\omega), W^\delta)) \leq s) \mathds{1}_{\{s + \delta (\omega) \leq T\}}\, \mathds{P}(\mathrm{d} \omega) (-q_{jj})e^{q_{jj} s}\, \mathrm{d} s,
\intertext{and finally, with \eqref{eq: gal2} and the monotone convergence theorem, we obtain}
&= \lim_{n \to \infty} \int_{0}^T \int_\Omega \mathds{P}(\tau_n (Y^\omega) \leq s) \mathds{1}_{\{s + \delta (\omega) \leq T\}}\, \mathds{P}(\mathrm{d} \omega) (- q_{jj}) e^{q_{jj} s}\, \mathrm{d} s
\\&= \int_{0}^T \int_\Omega \mathds{P}(\tau (Y^\omega) \leq s) \mathds{1}_{\{s + \delta (\omega) \leq T\}}\, \mathds{P}(\mathrm{d} \omega) (- q_{jj}) e^{q_{jj} s}\, \mathrm{d} s.
\end{align*}
Due to Feller's test for explosion (see \cite[Theorem 5.5.29]{KaraShre}), \(Y^\omega\) reaches \(l\) or \(r\) in finite time with positive probability. In fact, because \(Y^\omega\) is regular due to \cite[Proposition 2.2]{mijatovic2012b}, \cite[Theorem 1.1]{bruggeman2016} implies that \(Y^\omega\) even reaches \(l\) or \(r\) arbitrarily fast with positive probability, i.e. \(\mathds{P}(\tau (Y^\omega) \leq \varepsilon) > 0\) for all \(\varepsilon > 0\). Consequently, the identity
\[
\int_{0}^T \int_\Omega \mathds{P}(\tau (Y^\omega) \leq s) \mathds{1}_{\{s + \delta (\omega) \leq T\}}\, \mathds{P}(\mathrm{d} \omega) (- q_{jj}) e^{q_{jj} s}\, \mathrm{d} s = 0
\]
implies that for \(\llambda\)-a.a. \(s \in (0, T)\) we have \(\mathds{P}(\delta \leq T - s) = 0\).
However, because \(\xi\) is irreducible, we have \(\mathds{P}(\xi_t= j) > 0\) for all \(t > 0\). This is a contradiction and the proof of (ii) is complete.
\end{proof}
\noindent
\textit{Proof of Lemma \ref{lem: comparison}:}
Define
\(
\iota \triangleq \zeta \wedge T - \delta \wedge T
\).
Note that for all \(t \in \mathbb{R}_+\)
\begin{align*}
\{\iota \leq t\} = \{\zeta \leq t + \delta \wedge T\} \in \mathcal{F}_{t + \delta \wedge T},
\end{align*}
which shows that \(\iota\) is an \(\mathbf{F}^\delta\)-stopping time.
Moreover, we have for all \(s, t \in \mathbb{R}_+\)
\begin{align*}
\{s \wedge \iota + \delta \wedge T \leq t\} = \big(\{s + \delta &\wedge T \leq t\} \cap \overbrace{\{s + \delta \wedge T \leq \zeta \wedge T\}}^{\in \mathcal{F}_{s + \delta \wedge T}} \big) \\&\cup \big(\{\zeta \wedge T \leq t\} \cap \underbrace{\{s + \delta \wedge T > \zeta \wedge T\}}_{\in \mathcal{F}_{\zeta \wedge T}}\big) \in \mathcal{F}_t.
\end{align*}
Thus, the random time \(s \wedge \iota + \delta \wedge T\) is an \(\mathbf{F}\)-stopping time.
We deduce from classical rules for time-changed stochastic integrals that a.s. for all \(t \in \mathbb{R}_+\)
\begin{align*}
Y_{t \wedge \iota + \delta \wedge T} &= \phi + \int_0^{t \wedge \iota + \delta \wedge T} u(Y_s, \xi_{s})\, \mathrm{d} s + \int_0^{t \wedge \iota + \delta \wedge T} \sigma(Y_s, \xi_{s})\, \mathrm{d} W_s
\\&= Y_{\delta \wedge T}+ \int_{0}^{t} u(Y_{s \wedge \iota + \delta \wedge T}, j) \mathds{1}_{\{s \leq \iota\}}\, \mathrm{d} s + \int_0^{t} \sigma(Y_{s \wedge \iota + \delta \wedge T}, j) \mathds{1}_{\{s \leq \iota\}}\, \mathrm{d} W^\delta_s.
\end{align*}
Because the SDE \eqref{eq: SDE 1} satisfies strong existence and uniqueness up to explosion,
Lemma \ref{lem: loc pathwise uniqueness} implies that a.s. \(Y_{t \wedge \iota + \delta \wedge T} = X_{t \wedge \iota}\) for all \(t \in \mathbb{R}_+\). On \(\{\zeta \leq T\} \subseteq \{\delta \leq T\}\) we have \(\iota = \zeta - \delta\) and the claim follows.
\qed\\\\
\noindent
\textit{Proof of Lemma \ref{lem: loc pathwise uniqueness}:}
Due to localization, we can assume that \(\tau\) is finite.
By \cite[Proposition V.1.5]{RY} and L\'evy's characterization, the process
\[
\widehat{W}_t \triangleq W_{t + \tau} - W_\tau,\quad t \in \mathbb{R}_+,
\] is an \((\mathcal{F}_{t + \tau})_{t \geq 0}\)-Brownian motion.
Due to the strong existence and uniqueness hypothesis, there exists a solution process \(O = (O_t)_{t \geq 0}\) to the SDE
\[
\mathrm{d} O_t = \mu(O_t)\, \mathrm{d} t + v(O_t)\, \mathrm{d} \widehat{W}_t, \quad O_0 = U_\tau.
\]
We set
\[
Z_t \triangleq \begin{cases} U_t,&t \leq \tau,\\
O_{t - \tau},&t>\tau.
\end{cases}
\]
The process \(Z\) has continuous paths and similar arguments as used in the proof of Theorem \ref{theo: existence Markov} (i) show that it is \(\mathbf{F}\)-adapted.
Let
\[
\theta^Z_n \triangleq \inf(t \in \mathbb{R}_+ \colon Z_t \not \in (l_n, r_n)).
\]
On \(\{\tau \geq t \wedge \theta^Z_n\}\) we have a.s.
\begin{align*}
Z_{t \wedge \theta^Z_n}
&= \psi + \int_0^{t \wedge \theta^Z_n} \mu(Z_s)\, \mathrm{d} s + \int_0^{t \wedge \theta^Z_n}v (Z_s)\, \mathrm{d} W_s.
\end{align*}
Next, we discuss what happens on the set \(\{\tau < t \wedge \theta^Z_n\}\).
Set \[\theta^O_n \triangleq \inf (t \in \mathbb{R}_+ \colon O_t \not \in (l_n, r_n)).\]
On \(\{\tau < \theta^Z_n\}\) we have a.s.
\(
\theta^Z_n = \theta^O_n + \tau.
\)
Moreover, note that
\[
t \wedge (\theta^O_n + \tau) - \tau = \begin{cases} \theta^O_n,& \textup{ if }\theta^O_n + \tau \leq t,\\
t - \tau,&\textup{ if } t \leq \theta^O_n + \tau.
\end{cases}
\]
Thus, \(t \wedge (\theta^O_n + \tau) - \tau \leq \theta^O_n\).
Classical rules for time-changed stochastic integrals yield that on \(\{\tau < t \wedge \theta^Z_n\}\) a.s.
\begin{equation*}
\begin{split}
Z_{t \wedge \theta^Z_n}
&= Z_\tau + \int_0^{t \wedge \theta^Z_n - \tau} \mu(O_s)\, \mathrm{d} s + \int_0^{t \wedge \theta^Z_n - \tau} v (O_s)\, \mathrm{d} \widehat{W}_s
\\&= Z_\tau + \int_\tau^{t \wedge \theta^Z_n} \mu(O_{s - \tau})\, \mathrm{d} s + \int_{\tau}^{t \wedge \theta^Z_n} v (O_{s - \tau})\, \mathrm{d} W_s
\\&= \psi + \int_0^{t \wedge \theta^Z_n} \mu(Z_s)\, \mathrm{d} s + \int_0^{t \wedge \theta^Z_n} v (Z_s)\, \mathrm{d} W_s.
\end{split}
\end{equation*}
We conclude that \(Z\) is a solution process of the SDE \eqref{eq: def SDE} with driver \(W\) and initial value \(\psi\).
By the strong existence and uniqueness hypothesis, we conclude that a.s. \(Z = V\). The definition of \(Z\) implies the claim.
\qed
\\\\\noindent
\textit{Proof of Lemma \ref{lem: ind}:}
Denote the Wiener measure with initial value \(x \in \mathbb{R}\) by \(\mathscr{W}_x\) and by \(\mu_j\) the law of a Feller--Markov chain with the same \(Q\)-matrix as \(\xi\) and initial value \(j \in J\). Let \(\mathcal{C}\) be the \(\sigma\)-field on \(C(\mathbb{R}_+, \mathbb{R})\) generated by the coordinate process.
We deduce from Lemma \ref{lem: indep MC BM} in the Appendix and \cite[Proposition 4.1.5, Theorems 4.4.2, 4.4.6]{EK} that \((j, x) \mapsto (\mu_j \otimes \mathscr{W}_x) (F)\) is Borel for every \(F \in \mathcal{D} \otimes \mathcal{C}\) and that the process \((\xi, W)\) is a strong Markov process in the following sense: For all \(F \in \mathcal{D} \otimes \mathcal{C}\) and every a.s. finite stopping time \(\theta\) we have a.s.
\[
\mathds{P} ((\xi_{\cdot + \theta}, W_{\cdot + \theta}) \in F |\mathcal{F}_\theta) = (\mu_{\xi_\theta} \otimes \mathscr{W}_{W_\theta}) (F).
\]
For all \(A \in \mathcal{D}\) and \(F \in \mathcal{C}\) the strong Markov properties of \(\xi, W\) and \((\xi, W)\) imply that a.s.
\begin{align*}
\mathds{P}(\xi_{\cdot + \delta \wedge T} \in A&, W_{\cdot + \delta \wedge T} \in F|\mathcal{F}_{\delta \wedge T})\\ &= \mu_{\xi_{\delta \wedge T}}(A)\ \mathscr{W}_{W_{\delta \wedge T}}(F)
\\&= \mathds{P}(\xi_{\cdot + \delta \wedge T} \in A|\mathcal{F}_{\delta \wedge T})\mathds{P}(W_{\cdot + \delta \wedge T} \in F|\mathcal{F}_{\delta \wedge T}).
\end{align*}
This implies that \(\sigma(\zeta - \delta)\) and \(\sigma(W^\delta_t, t \in \mathbb{R}_+)\) are independent given \(\mathcal{F}_{\delta \wedge T}\). Now, \cite[Proposition 5.6]{Kallenberg} yields that a.s.
\[
\mathds{P}(\zeta - \delta \in G |\mathcal{F}_{\delta \wedge T}, \sigma(W^\delta_t, t \in \mathbb{R}_+)) = \mathds{P}(\zeta - \delta \in G |\mathcal{F}_{\delta \wedge T}).
\]
By the strong Markov property of \(\xi\) and \eqref{eq: exp}, we have for \(F \in \mathcal{F}_\delta\)
\begin{align*}
\mathds{P}(\zeta - \delta \in G, F) =
- \int_G q_{jj} e^{q_{jj} x}\, \mathrm{d} x\ \mathds{P}(F).
\end{align*}
The proof is complete.
\qed
\subsection{Local uniqueness}\label{sec: LU}
For the space of continuous functions from \(\mathbb{R}_+\) into \(I\) or \(\mathbb{R}\), we denote by \(\mathcal{C}\) the \(\sigma\)-field generated by the coordinate process.
Moreover, we denote by \(\mathbf{C}^o \triangleq (\mathcal{C}^o_t)_{t \geq 0}\) the filtration generated by the corresponding coordinate process and by \(\mathbf{C} \triangleq (\mathcal{C}_t)_{t \geq 0}\) its right-continuous version. The image space will be clear from the context.
Let \[\rho \colon C(\mathbb{R}_+, I) \times D (\mathbb{R}_+, J) \to [0, \infty]\] be a \(\mathbf{C}^o \otimes \mathbf{D}^o\)-stopping time. An example for \(\rho\) is
\[
\tau (\alpha, \omega) \triangleq \inf (t \in \mathbb{R}_+ \colon \alpha (t) \not \in U \text{ or } \omega(t) \not \in V),
\]
where \(U \subseteq I\) and \(V \subseteq J\) are open:
\begin{lemma}\label{lem: gamma nst}
\(\tau\) is a \(\mathbf{C}^o \otimes \mathbf{D}^o\)-stopping time.
\end{lemma}
\begin{proof}
See \cite[Proposition I.4.5]{RY} and \cite[Proposition 2.1.5]{EK}.
\end{proof}
Let \(u \colon I \times J \to \mathbb{R}\) and \(\sigma \colon I \times J \to \mathbb{R}\backslash \{0\}\) be Borel functions such that \eqref{eq: ES cond} holds, \(\sigma\) satisfies \eqref{eq: MC FT} and the ES conditions for all \(j \in J\) (see Section \ref{sec: MG MS} for this terminology). In other words, we ask that the conditions from part (i) of Theorem \ref{theo: existence Markov} hold.
For \(i = 1, 2\), let \((\Omega^i, \mathcal{F}^i, \mathbf{F}^i, \mathds{P}^i)\) be a filtered probability space with right-continuous complete filtration \(\mathbf{F}^i = (\mathcal{F}^i_t)_{t \geq 0}\). Let \(W^i = (W^i_t)_{t \geq 0}\) be a one-dimensional Brownian motion, \(\xi^i = (\xi^i_t)_{t \geq 0}\) be a \(J\)-valued irreducible Feller--Markov chain with \(Q\)-matrix \(Q\) and \(\xi^i_0 = j_0 \in J\), and let \(X^i = (X^i_t)_{t \geq 0}\) be an adapted continuous \(I\)-valued process such that
\[
\mathrm{d} X^i_{t \wedge \rho(X^i, \xi^i)} = u(X^i_t, \xi^i_t) \mathds{1}_{\{t \leq \rho (X^i, \xi^i)\}}\, \mathrm{d} t + \sigma (X^i_t, \xi^i_t) \mathds{1}_{\{t \leq \rho (X^i, \xi^i)\}}\, \mathrm{d} W^i_t, \quad X^i_0 = y_0 \in I.
\]
It is implicit that the stochastic integrals are well-defined. We stress that \(\xi^1\) and \(\xi^2\) have the same law, because they have the same \(Q\)-matrix, see Example \ref{ex: xi2}.
The main observation of this section is the following:
\begin{theorem}\label{theo: LU}
\(\mathds{P}^1 \circ (X^1_{\cdot \wedge \rho(X^1, \xi^1)}, \xi^1)^{-1} = \mathds{P}^2 \circ (X^2_{\cdot \wedge \rho(X^2, \xi^2)}, \xi^2)^{-1}\).
\end{theorem}
\begin{proof}
We follow the Yamada--Watanabe-type idea used in \cite{J80}.
Define
\begin{align*}
\Omega^* &\triangleq C(\mathbb{R}_+, I) \times C(\mathbb{R}_+, I) \times D(\mathbb{R}_+, J) \times C(\mathbb{R}_+, \mathbb{R}),\\
\mathcal{F}^* & \triangleq \mathcal{C} \otimes \mathcal{C} \otimes \mathcal{D} \otimes \mathcal{C},
\end{align*}
and for \(i =1, 2\)
\begin{align*}
Y^i &\colon \Omega^* \to C(\mathbb{R}_+, I), &\hspace{-3.4cm}Y^i(\omega^1, \omega^2, \omega^3, \omega^4) = \omega^i,\\
Z^1 &\colon \Omega^* \to D(\mathbb{R}_+, J), &\hspace{-3cm}Z^1(\omega^1, \omega^2, \omega^3, \omega^4) = \omega^3,\\
Z^2 &\colon \Omega^* \to C(\mathbb{R}_+, \mathbb{R}), &\hspace{-3cm}Z^2(\omega^1, \omega^2, \omega^3, \omega^4) = \omega^4.
\end{align*}
Denote the Wiener measure by \(\mathscr{W}\) and denote the unique law of \(\xi^i\) by \(\mu\). Due to Lemma \ref{lem: indep MC BM} in the Appendix, we have
\[
\mathds{P}^i \circ (\xi^i, W^i)^{-1} = \mu \otimes \mathscr{W}.
\]
When the space of continuous functions is equipped with the local uniform topology, it is a Polish space and the corresponding Borel \(\sigma\)-field is generated by the coordinate process.
Thus, there exist regular conditional probabilities
\[
Q^i \colon D(\mathbb{R}_+, J) \times C(\mathbb{R}_+, \mathbb{R}) \times \mathcal{C} \to [0, 1]\] such that
\begin{align*}
\mathds{P}^i (X^i \in \mathrm{d} \omega^1, \xi^i \in \mathrm{d} \omega^2, W^i \in \mathrm{d} \omega^3) &= Q^i(\omega^2, \omega^3, \mathrm{d} \omega^1)\, \mu(\mathrm{d} \omega^2)\, \mathscr{W} (\mathrm{d} \omega^3).
\end{align*}
We define a probability measure \({\mathbb{Q}}\) on \((\Omega^*, \mathcal{F}^*)\) by
\begin{align*}
{\mathbb{Q}} (\mathrm{d} \omega^1 \times \mathrm{d} \omega^2 \times \mathrm{d} \omega^3 \times \mathrm{d} \omega^4) \triangleq Q^1(\omega^3, \omega^4, \mathrm{d} \omega^1)\, Q^2(\omega^3, \omega^4, \mathrm{d} \omega^2)\, \mu(\mathrm{d} \omega^3)\, \mathscr{W}(\mathrm{d} \omega^4).
\end{align*}
With abuse of notation, denote the \({\mathbb{Q}}\)-completion of \(\mathcal{F}^*\) again by \(\mathcal{F}^*\) and denote by \(\mathcal{F}^*_t\) the \({\mathbb{Q}}\)-completion of
\begin{align*}
\bigcap_{s > t} \left(\mathcal{C}_s\otimes \mathcal{C}_s\otimes \mathcal{D}_s \otimes \mathcal{C}_s \right), \quad t \in \mathbb{R}_+.
\end{align*}
From now on we consider \((\Omega^*, \mathcal{F}^*, \mathbf{F}^* = (\mathcal{F}^*_t)_{t \geq 0}, {\mathbb{Q}})\) as underlying filtered probability space.
In view of \cite[Propositions 4.6, 5.6]{J80},
for all \(A \in \mathcal{C}_{t}\) the map
\(
\omega \mapsto Q^i (\omega, A)
\)
is measurable w.r.t. the \(\mu \otimes \mathscr{W}\)-completion of \(\bigcap_{s > t} (\mathcal{D}^o_s \otimes \mathcal{C}^o_s)\). In other words, \cite[Hypothesis 10.43]{J79} is satisfied and we deduce from \cite[Lemmata 2.7, 2.9]{J80}, \cite[Proposition 10.46]{J79} and L\'evy's characterization that \(Z^1\) is a Markov chain with \(Q\)-matrix \(Q\), \(Z^2\) is a Brownian motion
and
\begin{align*}
\mathrm{d} Y_{t \wedge \rho(Y^i, Z^1)}^i = u(Y_t^i, Z^1_{t})\mathds{1}_{\{t \leq \rho(Y^i, Z^1)\}}\, \mathrm{d} t
+ \sigma(Y_t^i, Z^1_{t})\mathds{1}_{\{t \leq \rho(Y^i, Z^1)\}}\, \mathrm{d} Z^2_t,\quad Y^i_0 = y_0.
\end{align*}
The proof of the following lemma is given after the proof of Theorem \ref{theo: LU} is complete.
\begin{lemma}\label{loc pathwise uniqueness}
Almost surely \(Y^1_{\cdot \wedge \rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1)} = Y^2_{\cdot \wedge \rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1)}\).
\end{lemma}
Due to Galmarino's test, this implies a.s. \(\rho(Y^1, Z^1) = \rho(Y^2, Z^1)\). Thus, a.s. \(Y^1_{\cdot \wedge \rho(Y^1, Z^1)} = Y^2_{\cdot \wedge \rho(Y^2, Z^1)}\) and
the claim follows from the definition of \({\mathbb{Q}}\).
\end{proof}
\noindent
\textit{Proof of Lemma \ref{loc pathwise uniqueness}:}
\textbf{Step 1:} Due to localization, we can assume that \(\rho (Y^1, Z^1) \wedge \rho(Y^2, Z^1)\) is finite.
Recall the following fact (see \cite[Proposition III.3.5]{RY}): If \((Z_t)_{t \geq 0}\) is a Feller--Markov chain for the right-continuous filtration \(\mathbf{G}= (\mathcal{G}_t)_{t \geq 0}\) and \(\gamma\) is a finite \(\mathbf{G}\)-stopping time, then \((Z_{t + \gamma})_{t \geq 0}\) is a Feller--Markov chain for the filtration \((\mathcal{G}_{t + \gamma})_{t \geq 0}\) and both chains have the same \(Q\)-matrix.
Due to Theorem \ref{theo: cdc} (i), for \(i = 1, 2\) there exists a process \((O^i_t)_{t \geq 0}\) defined by
\[
\mathrm{d} O^i_t = u(O^i_t, Z^1_{t + \rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1)})\, \mathrm{d} t + \sigma (O^i_t, Z^1_{t + \rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1)})\, \mathrm{d} W^\rho_t,
\]
where
\[
W^\rho_t \triangleq Z^2_{t + \rho (Y^1, Z^1) \wedge \rho(Y^2, Z^1)} - Z^2_{\rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1)},\quad t \in \mathbb{R}_+,
\]
with initial value \(O^i_0 = Y^i_{\rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1)}\).
Now, set
\[
V^i_t \triangleq \begin{cases}
Y^i_t,&t \leq \rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1),\\
O^i_{t - \rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1)},&t > \rho(Y^1, Z^1) \wedge \rho(Y^2, Z^1).
\end{cases}
\]
As in the proof of Lemma \ref{lem: loc pathwise uniqueness}, we deduce from classical rules for time-changed stochastic integrals that
\begin{align}\label{eq: global eq}
\mathrm{d} V^i_t = u(V^i_t, Z^1_t)\, \mathrm{d} t + \sigma(V^i_t, Z^1_t)\, \mathrm{d} Z^2_t, \quad V^i_0 = y_0,
\end{align}
i.e. that \(V^1\) and \(V^2\) are global solutions.
Thus, it remains to show a version of pathwise uniqueness for the global equation \eqref{eq: global eq}.
\textbf{Step 2:} We use induction. Let \((\zeta_n)_{n \in \mathbb{N}}\) be the stopping times
\[
\zeta_1 \triangleq \inf(t \in \mathbb{R}_+ \colon Z^1_t \not = Z^1_0),\quad \zeta_n \triangleq \inf (t \geq \zeta_{n-1} \colon Z^1_{t} \not = Z^1_{\zeta_{n-1}}),\quad n \geq 2.
\]
We stress that \(\zeta_n \nearrow \infty\) as \(n \to \infty\).
Almost surely on \(\{t \leq \zeta_1\}\) we have
\begin{align*}
V^i_t &= y_0 + \int_0^t u(V^i_s, j_0)\, \mathrm{d} s + \int_0^t \sigma(V^i_s, j_0)\, \mathrm{d} Z^2_s,\quad i = 1,2.
\end{align*}
Recalling that under the assumptions from Theorem \ref{theo: existence Markov} (i) the SDE \eqref{eq: SDE 1} satisfies strong existence and uniqueness (up to explosion), we deduce from Lemma \ref{lem: loc pathwise uniqueness} that a.s. \(V^1_t = V^2_t\) for all \(t \leq \zeta_1\).
In case \(N = 1\), we have \(\zeta_1 = \infty\) and the proof is complete. In the following, we assume that \(N \geq 2\) in which case a.s. \(\zeta_n < \infty\) for all \(n \in \mathbb{N}\).
Suppose that \(n \in \mathbb{N}\) is such that a.s. \(V^1_t = V^2_t\) for all \(t \leq \zeta_{n}\). Using classical rules for time-changed stochastic integrals, we obtain that a.s. on \(\{t \leq \zeta_{n + 1} - \zeta_n\} \cap \{Z^1_{\zeta_n} = j\}\)
\begin{align*}
V^i_{t + \zeta_n} &= V^i_{\zeta_n} + \int_{\zeta_n}^{t + \zeta_n} u(V^i_s, j)\, \mathrm{d} s + \int_{\zeta_n}^{t + \zeta_n} \sigma(V^i_s, j)\, \mathrm{d} Z^2_s
\\&= V^i_{\zeta_n} + \int_{0}^{t} u(V^i_{s + \zeta_n}, j)\, \mathrm{d} s + \int_{0}^{t} \sigma(V^i_{s + \zeta_n}, j)\, \mathrm{d} W^n_s,
\end{align*}
where
\[
W^n_t \triangleq Z^2_{t + \zeta_n} - Z^2_{\zeta_n},\quad t \in \mathbb{R}_+.
\]
We conclude again from Lemma \ref{lem: loc pathwise uniqueness} that a.s. \(V^1_{t + \zeta_n} = V^2_{t + \zeta_n}\) for all \(t \leq \zeta_{n + 1} - \zeta_n\). Consequently, a.s. \(V^1_t = V^2_t\) for all \(t \leq \zeta_{n +1}\) and our claim follows.
\qed
\subsection{Proof of Theorem \ref{theo: mart MS}}
\textbf{(i).} Recall that \(J = \{1, \dots, N\}\) with \(1 \leq N \leq \infty\).
For \(n \in \mathbb{N}\) define
\[
\tau_n \triangleq \inf (t \in [0, T] \colon S_t \not\in (l_n, r_n) \text{ or } \xi_t \geq n \wedge N).
\]
Because \(c\) is assumed to be bounded on compact subsets of \(I \times J\), Novikov's condition implies that \((\tau_n)_{n \in \mathbb{N}}\) is a localizing sequence for \(Z\).
We define \({\mathbb{Q}}^n\) by the Radon--Nikodym derivative \(\frac{\mathrm{d} {\mathbb{Q}}^n}{\mathrm{d} \mathds{P}} \triangleq Z_{T \wedge \tau_n}\). By Girsanov's theorem,
\[
B^n \triangleq W - \int_0^{\cdot \wedge \tau_n} c (S_s, \xi_s)\, \mathrm{d} s
\]
is a \({\mathbb{Q}}^n\)-Brownian motion such that
\[
\mathrm{d} S_{t \wedge \tau_n} = (b (S_t, \xi_t) + c (S_t, \xi_t) \sigma (S_t, \xi_t)) \mathds{1}_{\{t \leq \tau_n\}}\, \mathrm{d} t + \sigma (S_t, \xi_t) \mathds{1}_{\{t \leq \tau_n\}}\, \mathrm{d} B^n_t.
\]
We deduce from Lemma \ref{lem: indep MC BM}, Example \ref{ex: xi2} and Theorem \ref{theo: indp preserving} that under \({\mathbb{Q}}^n\) the process \(\xi\) remains a Feller--Markov chain with unchanged \(Q\)-matrix. W.l.o.g. we extend \(W, \xi\) and \(\mathbf{F}\) to the infinite time interval \(\mathbb{R}_+\).
Applying Theorem \ref{theo: existence Markov} with \(u \triangleq b + c \sigma\) yields that on \((\Omega, \mathcal{F}, \mathbf{F}, \mathds{P})\) there exists an adapted continuous \(I\)-valued process \(X = (X_t)_{t \geq 0}\) such that
\[
\mathrm{d} X_t = (b(X_t, \xi_t) + c(X_t, \xi_t) \sigma (X_t, \xi_t))\, \mathrm{d} t + \sigma (X_t, \xi_t)\, \mathrm{d} W_t, \quad X_0 = S_0.
\]
We set
\[
\rho_n \triangleq \inf (t \in [0, T] \colon X_t \not\in (l_n, r_n) \text{ or } \xi_t \geq n \wedge N).
\]
It follows from Lemma \ref{lem: gamma nst} and Theorem \ref{theo: LU} that
\[
\mathds{P} \circ (X_{\cdot \wedge \rho_n}, \xi)^{-1} = {\mathbb{Q}}^n \circ (S_{\cdot \wedge \tau_n}, \xi)^{-1}.
\]
Consequently, using Galmarino's test, we obtain that
\[
\lim_{n \to \infty} {\mathbb{Q}}^n (\tau_n = \infty) = \lim_{n \to \infty} \mathds{P} (\rho_n = \infty) = 1.
\]
Now, it follows as in the proof of Theorem \ref{theo: mart Ito} that \(Z\) is a martingale.
\textbf{(ii).} This result follows similarly to Theorem \ref{theo: general SLM}, where Theorem \ref{theo: existence Markov} has to be used instead of Theorem \ref{theo: 1D Feller p2}. We omit the details.
\qed
\section{Proof of Theorem \ref{theo: indp preserving}}\label{sec: pf theo modi ind}
\textbf{Step 1. }
Let \(g \in A\) and set
\begin{align}\label{eq: Mg}
M^g_t \triangleq g(\xi_t) - g(\xi_0)- \int_0^t L g(\xi, s)\, \mathrm{d} s, \quad t \in [0, T].
\end{align}
Due to the definition of the martingale problem \((A, L, T)\), the process \(M^g\) is a local martingale with localizing sequence \((\rho_n (\xi))_{n \in \mathbb{N}}\).
Thus, the quadratic variation process \([M^g, W]\) is well-defined. Our first step is to show that a.s. \([M^g, W] = 0\).
We explain that \(WM^g\) is a local martingale for the completed right-continuous version of the natural filtration of \(\xi\) and \(W\). Let \(0 \leq s < t \leq T\), \(G \in \sigma (W_r, r \in [0, s]) \triangleq \mathcal{W}_s\) and \(F \in \sigma (\xi_r, r \in [0, s]) \triangleq \mathcal{E}_s\). The independence assumption yields that
\begin{align*}
{\mathds{E}}^\mathds{P}\big[W_t M^g_{t \wedge \rho_m (\xi)} \mathds{1}_{G \cap F}\big] &= {\mathds{E}}^\mathds{P}\big[W_t \mathds{1}_G\big] {\mathds{E}}^\mathds{P}\big[M^g_{t \wedge \rho_m (\xi)} \mathds{1}_F\big] \\&= {\mathds{E}}^\mathds{P}\big[W_s \mathds{1}_G\big] {\mathds{E}}^\mathds{P}\big[M^g_{s \wedge \rho_m(\xi)} \mathds{1}_F\big] \\&= {\mathds{E}}^\mathds{P}\big[W_s M^g_{s \wedge \rho_m(\xi)} \mathds{1}_{G \cap F}\big].
\end{align*}
By a monotone class argument, we have
\[
{\mathds{E}}^\mathds{P}\big[W_t M^g_{t \wedge \rho_m(\xi)} \mathds{1}_B\big] = {\mathds{E}}^\mathds{P}\big[W_s M^g_{s \wedge \rho_m(\xi)} \mathds{1}_B\big]
\]
for all \(B \in \mathcal{W}_s \vee \mathcal{E}_s\). Due to the downwards theorem (see \cite[Theorem II.51.1]{RW1}), the process \(W M^g_{\cdot \wedge \rho_m(\xi)}\) is a martingale for the completed right-continuous version \(\mathbf{G} \triangleq (\mathcal{G}_t)_{t \in [0, T]}\) of \((\mathcal{W}_t \vee \mathcal{E}_t)_{t \in [0, T]}\). Consequently, because \(\rho_m (\xi)\nearrow \infty\) as \(m \to \infty\), \(W M^g\) is a local \(\mathbf{G}\)-martingale.
By the tower rule, also \(W\) and \(M^g\) are local \(\mathbf{G}\)-martingales.
Integration by parts implies that
\begin{align*}
[W,M^g] &= WM^g - \int_0^\cdot W_{s}\, \mathrm{d} M^g_s - \int_0^\cdot M^g_{s-}\, \mathrm{d} W_s,
\end{align*}
where the stochastic integrals are defined as local \(\mathbf{G}\)-martingales.
Here, we use that \([W, M^g]\) can be defined independently of the filtration.
We deduce that the process \([W, M^g]\) is a continuous local \(\mathbf{G}\)-martingale of finite variation and hence a.s. \([W, M^g] = 0\).
\textbf{Step 2.} In this step we identify the laws of \(B\) and \(\xi\) under \({\mathbb{Q}}\).
Clearly, \(B\) is a \({\mathbb{Q}}\)-Brownian motion due to Girsanov's theorem.
Next, we show that on \((\Omega, \mathcal{F}, \mathbf{F}, {\mathbb{Q}})\) the process \(\xi\) is a solution process for the martingale problem \((A, L, T)\).
By Step 1 and Girsanov's theorem, the process
\[
M^g - \int_0^\cdot \frac{\mathrm{d} [Z, M^g]_s}{Z_{s}} = M^g - \int_0^\cdot \theta_s\, \mathrm{d} [W, M^g]_s
= M^g
\]
is a local \({\mathbb{Q}}\)-martingale.
The equivalence \({\mathbb{Q}}\sim \mathds{P}\) implies that \({\mathbb{Q}}(\xi_0 = j_0) = 1\) and that \(M^g_{\cdot \wedge \rho_n(\xi)}\) is \({\mathbb{Q}}\)-a.s. bounded.
Thus, the claim follows.
\textbf{Step 3.} We prove \({\mathbb{Q}}\)-independence of \(B\) and \(\xi\) borrowing an idea from \cite[Theorem 4.10.1]{EK}.
We define \(C^2_b(\mathbb{R})\) to be the set of all bounded twice continuously differentiable functions \(\mathbb{R} \to \mathbb{R}\) with bounded first and second derivative.
Suppose that \(f \in C^2_b(\mathbb{R})\) with \(\inf_{x \in \mathbb{R}} f(x) > 0\) and define
\[
K^f_t \triangleq f(B_t) \exp \Big( - \frac{1}{2} \int_0^t \frac{f''(B_s)}{f(B_s)}\, \mathrm{d} s \Big),\quad t \in [0, T].
\]
By It\^o's formula, we have
\begin{align*}
\mathrm{d} K^f_t &= \exp \Big(- \frac{1}{2} \int_0^t \frac{f''(B_s)}{f(B_s)}\, \mathrm{d} s \Big) \big(\mathrm{d} f(B_t) - \tfrac{1}{2} f''(B_t)\, \mathrm{d} t \big)
\\&= \exp \Big(- \frac{1}{2} \int_0^t \frac{f''(B_s)}{f(B_s)}\, \mathrm{d} s \Big)f'(B_t)\, \mathrm{d} B_t.
\end{align*}
Thus, \(K^f\) is a \({\mathbb{Q}}\)-martingale, as it is a bounded local \({\mathbb{Q}}\)-martingale.
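As a concrete illustration (not needed for the argument), take \(f(x) = 2 + \sin (x)\), so that \(\inf_{x \in \mathbb{R}} f (x) = 1 > 0\) and \(f'' = - \sin\). The computation above yields
\[
\mathrm{d} K^f_t = \exp \Big( \frac{1}{2} \int_0^t \frac{\sin (B_s)}{2 + \sin (B_s)}\, \mathrm{d} s \Big) \cos (B_t)\, \mathrm{d} B_t,
\]
and \(0 \leq K^f_t \leq 3 e^{T/6}\) on \([0, T]\), so \(K^f\) is a bounded local \({\mathbb{Q}}\)-martingale and hence a \({\mathbb{Q}}\)-martingale.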
Recall that the quadratic variation process is not affected by an equivalent change of measure.
By Step 1, \({\mathbb{Q}}\)-a.s. \([B, M^g] = 0\).
Due to integration by parts, we obtain that
\begin{align*}
\mathrm{d} (K^f_t M^g_{t}) &= K^f_t\, \mathrm{d} M^g_{t} + M^g_{t -}\, \mathrm{d} K^f_t + \mathrm{d} [K^f, M^g]_t
\\&= K^f_t\, \mathrm{d} M^g_{t} + M^g_{t-}\, \mathrm{d} K^f_t,
\end{align*}
which implies that \(K^f M^g_{\cdot \wedge \rho_m(\xi)}\) is a \({\mathbb{Q}}\)-martingale, as it is a bounded local \({\mathbb{Q}}\)-martingale.
Let \(\zeta\) be a stopping time such that \(\zeta \leq T\) and set
\[
\widetilde{{\mathbb{Q}}} (G) \triangleq \frac{{\mathds{E}}^{\mathbb{Q}}\big[ \mathds{1}_{G} K^f_\zeta\big]}{{\mathds{E}}^{\mathbb{Q}}\big[K^f_\zeta\big]},\quad G \in \mathcal{F}.
\]
Because \(K^f M^g_{\cdot \wedge \rho_m(\xi)}, K^f\) and \(M^g_{\cdot \wedge \rho_m(\xi)}\) are \({\mathbb{Q}}\)-martingales (see also Step 2), the optional stopping theorem implies that for all stopping times \(\psi \leq T\)
\begin{align*}
{\mathds{E}}^{\widetilde{{\mathbb{Q}}}} \big[ M^g_{\psi \wedge \rho_m(\xi)} \big] &= \frac{{\mathds{E}}^{\mathbb{Q}} \big[M^g_{\psi \wedge \rho_m(\xi)} K^f_\zeta\big]}{{\mathds{E}}^{\mathbb{Q}} \big[K^f_\zeta\big]}
= 0.
\end{align*}
Consequently, by \cite[Proposition II.1.4]{RY}, \(M^g_{\cdot \wedge \rho_m(\xi)}\) is a \(\widetilde{{\mathbb{Q}}}\)-martingale. Because \(\widetilde{{\mathbb{Q}}} \sim {\mathbb{Q}}\), this implies that on \((\Omega, \mathcal{F}, \mathbf{F}, \widetilde{{\mathbb{Q}}})\) the process \(\xi\) is a solution process for the martingale problem \((A, L, T)\).
The uniqueness assumption for the martingale problem \((A, L, j_0, T)\) implies that
\begin{align}\label{eq: conclusion}
\widetilde{{\mathbb{Q}}} (\Gamma) = {\mathbb{Q}}(\Gamma)
\end{align}
for all
\[
\Gamma \triangleq \big\{\xi_{t_1} \in G_1, \dots, \xi_{t_n} \in G_n\big\},
\]
where \(G_1, \dots, G_n \in \mathcal{B}(J)\) and \(0 \leq t_1 < \dots < t_n \leq T\).
We fix \(\Gamma\) such that \({\mathbb{Q}}(\Gamma) > 0\) and define
\[
\widehat{{\mathbb{Q}}} (F) \triangleq \frac{{\mathds{E}}^{\mathbb{Q}} \big[ \mathds{1}_F \mathds{1}_\Gamma\big]}{{\mathbb{Q}}(\Gamma)}, \quad F \in \mathcal{F}.
\]
Using the definition of \(\widetilde{{\mathbb{Q}}}\), \eqref{eq: conclusion}, the fact that \(K^f\) is a \({\mathbb{Q}}\)-martingale and the optional stopping theorem, we obtain
\begin{align*}
{\mathds{E}}^{\widehat{{\mathbb{Q}}}} \big[K^f_\zeta\big] = \frac{{\mathds{E}}^{\mathbb{Q}} \big[K^f_\zeta \mathds{1}_\Gamma\big]}{{\mathbb{Q}}(\Gamma)} = \frac{\widetilde{{\mathbb{Q}}}(\Gamma) {\mathds{E}}^{\mathbb{Q}} \big[K^f_\zeta\big]}{{\mathbb{Q}}(\Gamma)} = {\mathds{E}}^{\mathbb{Q}} \big[K^f_\zeta\big] = f(0).
\end{align*}
Because \(\zeta\) was arbitrary, we conclude that \(K^f\) is a \(\widehat{{\mathbb{Q}}}\)-martingale. Furthermore, \(\widehat{{\mathbb{Q}}}(B_0 = 0) = 1\) follows from the fact that \(B\) is a \({\mathbb{Q}}\)-Brownian motion.
Finally, due to \cite[Proposition 4.3.3]{EK}, the process \(B\) is a \(\widehat{{\mathbb{Q}}}\)-Brownian motion. We conclude that
\[
\widehat{{\mathbb{Q}}} \big(B_{s_1} \in F_1, \dots, B_{s_k} \in F_k\big) = {\mathbb{Q}} \big(B_{s_1} \in F_1, \dots, B_{s_k} \in F_k\big),
\]
for all \(F_1, \dots, F_k \in \mathcal{B}(\mathbb{R})\) and \(0 \leq s_1 < \dots < s_k \leq T\).
By the definition of \(\widehat{{\mathbb{Q}}}\), we have proven that
\begin{align*}
{\mathbb{Q}}\big(B_{s_1} &\in F_1, \dots, B_{s_k} \in F_k, \xi_{t_1} \in G_1, \dots, \xi_{t_m} \in G_m\big) \\&= {\mathbb{Q}} \big(B_{s_1} \in F_1, \dots, B_{s_k} \in F_k\big) {\mathbb{Q}}\big(\xi_{t_1} \in G_1, \dots, \xi_{t_m} \in G_m\big),
\end{align*}
which implies that the \(\sigma\)-fields \(\sigma(\xi_t, t \in [0, T])\) and \(\sigma(B_t, t \in [0, T])\) are \({\mathbb{Q}}\)-independent. The proof is complete.
\qed
\section{Proof of Theorem \ref{theo: cdc}}\label{sec: pf mg JS}
Because \(\sigma(\xi_t, t \in [0, T])\) and \(\sigma(W_t, t \in [0, T])\) are assumed to be \(\mathds{P}\)-independent, it follows as in the proof of Theorem \ref{theo: indp preserving} that a.s. \([Z, W] = 0\). Thus, Girsanov's theorem implies that \(W\) is a \({\mathbb{Q}}\)-Brownian motion.
Take \(0 \leq s_1 < \dots < s_m \leq T\), \(0 \leq t_1 < \dots < t_n \leq T\), \((G_k)_{k \leq m} \subset \mathcal{B}(J)\) and \((F_k)_{k \leq n} \subset \mathcal{B}(\mathbb{R})\), and set
\begin{align*}
\Gamma_1 &\triangleq \big\{\xi_{s_1} \in G_1, \dots, \xi_{s_m} \in G_m\big\},\\
\Gamma_2 &\triangleq \big\{W_{t_1} \in F_1, \dots, W_{t_n} \in F_n\big\}.
\end{align*}
The \(\mathds{P}\)-independence of \(\sigma(\xi_t, t \in [0, T])\) and \(\sigma(W_t, t \in [0, T])\) and the uniqueness of the Wiener measure yield that
\begin{align*}
{\mathbb{Q}}(\Gamma_1 \cap \Gamma_2) &= {\mathds{E}}^\mathds{P} \big[ Z_T \mathds{1}_{\Gamma_1 \cap \Gamma_2}\big]
\\&= {\mathds{E}}^\mathds{P} \big[Z_T \mathds{1}_{\Gamma_1} \big] \mathds{P}(\Gamma_2)
\\&= {\mathbb{Q}}(\Gamma_1) {\mathbb{Q}}(\Gamma_2).
\end{align*}
We conclude that \(\sigma(\xi_t, t \in [0, T])\) and \(\sigma(W_t, t \in [0, T])\) are \({\mathbb{Q}}\)-independent.
For \(g \in A^*\) we set
\begin{align*}
M^g_t &\triangleq g(\xi_t) - g(\xi_0) - \int_0^t L^* g(\xi, s)\, \mathrm{d} s, \quad t \in [0, T],\\
K^{f}_t &\triangleq f(\xi_t) - f(\xi_0)- \int_0^t Lf(\xi, s)\, \mathrm{d} s, \quad t \in [0, T],\\
K^{fg}_t &\triangleq f(\xi_t) g(\xi_t) - f(\xi_0) g(\xi_0)- \int_0^t L(fg)(\xi, s)\, \mathrm{d} s, \quad t \in [0, T].
\end{align*}
The processes \(K^f\) and \(K^{fg}\) are local \(\mathds{P}\)-martingales.
We set
\[
V_t \triangleq \frac{1}{f(\xi_0)} \exp \Big(- \int_0^t \frac{Lf (\xi, s)}{f(\xi_s)}\, \mathrm{d} s \Big),\quad t \in [0, T].
\]
Integration by parts implies that
\begin{align*}
\mathrm{d} Z_t &= V_t \Big(\mathrm{d} f(\xi_t) - f(\xi_{t}) \frac{Lf (\xi, t)}{f(\xi_t)}\, \mathrm{d} t\Big) = V_t\, \mathrm{d} K^f_t.
\end{align*}
Using again integration by parts and the identity \(L^* g = \frac{1}{f} (L (fg) - g Lf)\) yields
\begin{align*}
\mathrm{d} (Z_t M^g_t) &= Z_{t-}\, \mathrm{d} M^g_t + M^g_{t-}\, \mathrm{d} Z_t + \mathrm{d} [Z, M^g]_t
\\&= V_t \Big( f(\xi_{t-})\, \mathrm{d} M^g_t + M^g_{t-}\, \mathrm{d} K^f_t + \mathrm{d} [f(\xi), g(\xi)]_t \Big)
\\&= V_t \Big( f(\xi_{t-})\, \mathrm{d} g(\xi_t) - f(\xi_{t-}) L^* g(\xi, t)\, \mathrm{d} t + g(\xi_{t-})\, \mathrm{d} f(\xi_t) \\&\qquad - g(\xi_{t-}) L f(\xi, t)\, \mathrm{d} t - \Big(g(\xi_0) + \int_0^t L^* g(\xi, s)\, \mathrm{d} s \Big)\mathrm{d} K^f_t + \mathrm{d} [f(\xi), g(\xi)]_t \Big)
\\&= V_t \Big( \mathrm{d} \big((fg)(\xi_t)\big) - L (f g)(\xi, t)\, \mathrm{d} t - \Big(g(\xi_0) + \int_0^t L^* g(\xi, s)\, \mathrm{d} s \Big)\mathrm{d} K^f_t \Big)
\\&= V_t \Big( \mathrm{d} K^{fg}_t - \Big(g (\xi_0) + \int_0^t L^* g(\xi, s)\, \mathrm{d} s \Big)\, \mathrm{d} K^f_t \Big).
\end{align*}
We conclude that \(Z M^g\) is a local \(\mathds{P}\)-martingale and it follows from \cite[Proposition III.3.8]{JS} that \(M^g\) is a local \({\mathbb{Q}}\)-martingale.
Due to the equivalence \({\mathbb{Q}} \sim \mathds{P}\), we conclude that on \((\Omega, \mathcal{F}, \mathbf{F}, {\mathbb{Q}})\) the process \(\xi\) is a solution process to the martingale problem \((A^*, L^*, j_0, T)\).
\qed
\section{Proof of Theorem \ref{prop: mg f}}
Let \((X_t)_{t \geq 0}\) be the coordinate process on \(D (\mathbb{R}_+, J)\) and denote
\[
M^f_t \triangleq \frac{f(X_t)}{f(j_0)} \exp \Big(- \int_0^t \frac{L f(X, s)}{f(X_{s})}\, \mathrm{d} s\Big), \quad t \in [0, T].
\]
Define by \(\mu \triangleq \mathds{P} \circ \xi^{-1}\) a Borel probability measure on \(D (\mathbb{R}_+, J)\).
We have to show that \[
{\mathds{E}}^\mu\big[M^f_{T}\big] = 1.\]
It follows from \cite[Lemma 2.9]{J79} that \(M^f\) is a local \(\mu\)-martingale with localizing sequence \((\rho_n)_{n \in \mathbb{N}}\).
For all \(n \in \mathbb{N}\), define a Borel probability measure \(\mu_n\) on \(D (\mathbb{R}_+, J)\) via the Radon--Nikodym derivative \[\frac{\mathrm{d} \mu_n}{\mathrm{d} \mu} = M^f_{T \wedge \rho_n}.\]
The following lemma is proven after the proof of Theorem \ref{prop: mg f} is complete.
\begin{lemma}\label{lem:luJS}
Let \(\mu^*\) be the unique law of a solution process to the martingale problem \((A^*, L^*, j_0, \infty)\).
For all \(n \in \mathbb{N}\) we have \(\mu_n = \mu^*\) on \(\mathcal{D}^o_{T \wedge \rho_n}\).
\end{lemma}
Recalling that \(\{\rho_n > T\} \in \mathcal{D}^o_{T \wedge \rho_n}\), Lemma \ref{lem:luJS} implies that
\[
{\mathds{E}}^\mu \big[M^f_T\big] = \lim_{n \to \infty} {\mathds{E}}^\mu \big[M^f_{T \wedge \rho_n} \mathds{1}_{\{\rho_n > T\}}\big] = \lim_{n \to \infty} \mu^*(\rho_n > T) = 1.
\]
This completes the proof. \qed
\\\\
\noindent
\textit{Proof of Lemma \ref{lem:luJS}:}
We adapt the proof of \cite[Theorem III.2.40]{JS}.
To simplify our notation, we set \(\rho \triangleq T \wedge \rho_n\).
We denote by \(\mu_j\) the unique law of a solution process to the martingale problem \((A^*, L^*, j, \infty)\).
\textbf{Step 1.} We show that \(j \mapsto \mu_j (G)\) is Borel for all \(G \in \mathcal{D}\), following the strategy outlined in \cite[Exercise 6.7.4]{SV}.
Recall that we assume that \(A^*\) contains a countable determining set \(\widetilde{A}\).
Let \(\mathcal{P}\) be the space of Borel probability measures on \(D(\mathbb{R}_+, J)\) equipped with the topology of convergence in distribution.
Note that a \({\mathbf{D}}^o\)-adapted process is a \({\mathbf{D}}\)-martingale if and only if it is a \({\mathbf{D}}^o\)-martingale. The implication \(\Rightarrow\) follows from the downward theorem (see \cite[Theorem II.51.1]{RW1}) and the implication \(\Leftarrow\) follows from the tower rule. For \(g \in \widetilde{A}\) set
\[
K^g_{t} \triangleq g(X_{t}) - g(X_0) - \int_0^{t} K g(X_{s})\, \mathrm{d} s, \quad t \in \mathbb{R}_+.\]
Define \(\mathcal{I}\) to be the set of all \(\nu \in \mathcal{P}\) such that \(\nu \circ X_0^{-1} = \delta_j\) for some \(j \in J\). Moreover, let \(\mathcal{M}\) be the set of all \(\nu \in \mathcal{P}\) such that
\[
{\mathds{E}}^\nu \big[ (K^g_{t \wedge \rho_m} - K^g_{s \wedge \rho_m})\mathds{1}_G\big] = 0,
\]
for all \(g \in \widetilde{A}\), all rational \(s < t, m \in \mathbb{N}\) and \(G\) in a countable determining class of \(\mathcal{D}^o_s\).
By the uniqueness assumption, \(\{\mu_j, j \in J\} = \mathcal{I} \cap \mathcal{M}\). Because the set \(\{\delta_j, j \in J\}\) is Borel due to \cite[Theorem 8.3.7]{cohn13} and \(\nu \mapsto \nu \circ X_0^{-1}\) is continuous, \(\mathcal{I}\) is Borel.
The set \(\mathcal{M}\) is Borel due to \cite[Theorem 15.13]{aliprantis2013infinite}.
We conclude that \(\{\mu_j, j \in J\}\) is Borel.
Let \(\Phi \colon \{\mu_j, j \in J\} \to J\) be defined by \(\Phi(\mu_j) = j\) for all \(j \in J\). We note that \(\Phi\) is a continuous injection. Thus, the inverse map \(\Phi^{-1}\) is Borel due to Kuratowski's theorem (\cite[Proposition 8.3.5]{cohn13}). This means that \(j \mapsto \mu_j(G)\) is Borel for all \(G \in \mathcal{D}\).
\textbf{Step 2.} Because \(\mu_n \sim \mu\), we have \(\mu_n(X_0 = j_0) = 1\).
As in the proof of Theorem \ref{theo: cdc}, we see that for all \(g \in A^*\) the process \(K^g_{\cdot \wedge \rho}\)
is a \(\mu_n\)-martingale.
\textbf{Step 3.} For every \(t \in \mathbb{R}_+\) we denote by \(\theta_t \colon D(\mathbb{R}_+, J) \to D(\mathbb{R}_+, J)\) the shift operator given by \(\theta_t \omega (s) = \omega(t + s)\).
Recalling that \(\rho\) is bounded, we deduce from \cite[Lemma III.2.44]{JS} that \[\mathcal{D}^o_\rho \vee \theta^{-1}_\rho (\mathcal{D}) = \mathcal{D}.\]
Hence, we can associate to each \(G \in \mathcal{D}\) a (not necessarily unique) \(G' \in \mathcal{D}^o_{\rho} \otimes \mathcal{D}\) such that
\[
G = \big\{\omega \in D \colon (\omega, \theta_{\rho(\omega)}\omega) \in G'\big\}.
\]
We define
\[
\nu (G) \triangleq \int \mu_n(\mathrm{d} \omega)\, \mu_{\omega(\rho(\omega))} (\mathrm{d} \omega^*)\, \mathds{1}_{G'} (\omega, \omega^*).
\]
It follows from \cite[Lemma III.2.47]{JS} that \(\nu\) is a probability measure, i.e. that \(\nu\) is defined unambiguously.
Our goal is to show that \(\nu\) solves the martingale problem \((A^*, L^*, j_0, \infty)\).
To provide some intuition, \(\nu\) is the law of
\[
\begin{cases} X^1_t,& t < \rho (X^1),\\
X^2_{t - \rho (X^1)}, &t \geq \rho (X^1),
\end{cases}
\]
in case \(X^1\) is sampled according to \(\mu_n\) and \(X^2\) is sampled according to \(\mu_{j}\) with \(j = X^1_{\rho (X^1)}\). In other words, we extend \(\mu_n\) to a solution of the global martingale problem.
For \(G \in \mathcal{D}^o_0\) we can choose \(G' = G \times D (\mathbb{R}_+, J)\). Consequently,
\[
\nu(X_0 = j_0) = \mu_n(X_0 = j_0) = 1.
\]
Let \(\psi\) be a bounded \({\mathbf{D}}^o\)-stopping time and fix \(m \in \mathbb{N}\).
For \(\omega, \alpha \in D(\mathbb{R}_+, J)\) and \(t \in \mathbb{R}_+\) we set
\[
z(\omega, \alpha) (t) \triangleq \begin{cases} \omega(t),&t < \rho(\omega),\\
\alpha(t - \rho(\omega)),&t \geq \rho(\omega),\end{cases}
\]
and
\[
V(\omega, \alpha) \triangleq \begin{cases} \big((\psi \wedge \rho_m) \vee \rho - \rho\big) (z(\omega, \alpha)), &\alpha (0) = \omega (\rho(\omega)),\\
0,&\text{otherwise}.\end{cases}
\]
Due to \cite[Theorem IV.103]{DellacherieMeyer78} the map
\(V\) is \(\mathcal{D}^o_\rho \otimes \mathcal{D}\)-measurable and \(V(\omega, \cdot)\) is a \({\mathbf{D}}^o\)-stopping time for all \(\omega \in D(\mathbb{R}_+, J)\). Furthermore, it is evident from the definition that
\[
(\psi \wedge \rho_m) (\omega) \vee \rho(\omega) = \rho(\omega) + V(\omega, \theta_{\rho(\omega)} \omega)
\]
for \(\omega \in D (\mathbb{R}_+, J)\).
For all \(\omega \in \{\rho< \psi \wedge \rho_m\} \in \mathcal{D}_\rho^o\) and \(\alpha \in D(\mathbb{R}_+, J)\) with \(\alpha (0) = \omega(\rho(\omega))\) we have
\(
V(\omega, \alpha)
\leq \rho_m(\alpha).
\)
Note further that for \(\omega \in \{\rho < \psi \wedge \rho_m\}\)
\begin{align*}
K^g_{V(\omega, \theta_{\rho(\omega)} \omega)} (\theta_{\rho(\omega)}\omega) &= K^g_{(\psi \wedge \rho_m)(\omega) - \rho(\omega)} (\theta_{\rho(\omega)} \omega ) \\&= g(\omega((\psi \wedge \rho_m)(\omega))) - g(\omega(\rho(\omega))) - \int_{\rho(\omega)}^{(\psi \wedge \rho_m)(\omega)} Kg (\omega(s))\, \mathrm{d} s
\\&= K^g_{(\psi \wedge \rho_m)(\omega)}(\omega) - K^g_{\rho(\omega)}(\omega).
\end{align*}
Because \(K^g_{\cdot \wedge \rho}\) is a \(\mu_n\)-martingale, we have
\[
{\mathds{E}}^\nu \big[ K^g_{\rho \wedge \psi \wedge \rho_m}\big] = {\mathds{E}}^{\mu_n} \big[K^g_{\rho \wedge \psi\wedge \rho_m}\big] = 0,
\]
due to the optional stopping theorem.
Therefore, we obtain
\begin{align*}
{\mathds{E}}^\nu \big[ K^g_{\psi \wedge \rho_m} \big] &= {\mathds{E}}^\nu \big[ K^g_{\psi \wedge \rho_m} - K^g_{\rho \wedge \psi \wedge \rho_m}\big]
\\&= {\mathds{E}}^\nu \big[ \big(K^g_{\psi \wedge \rho_m} - K^g_{\rho}\big) \mathds{1}_{\{\rho < \psi \wedge \rho_m\}}\big]
\\&= {\mathds{E}}^\nu\big[K^g_{V (\cdot, \theta_\rho)} ( \theta_\rho ) \mathds{1}_{\{\rho < \psi \wedge \rho_m\}}\big]
\\&=\int \mu_n(\mathrm{d} \omega) {\mathds{E}}^{\mu_{\omega(\rho(\omega))}} \big[ K^g_{V (\omega, \cdot) \wedge \rho_m}\big] \mathds{1}_{\{\rho(\omega) < (\psi \wedge \rho_m)(\omega)\}} = 0,
\end{align*}
again due to the optional stopping theorem (recall that \(V(\omega, \cdot)\) is bounded and that \(K^g_{\cdot \wedge \rho_m}\) is a \(\mu_j\)-martingale for all \(j \in J\)). We conclude from \cite[Proposition II.1.4]{RY} that \(K^g_{\cdot \wedge \rho_m}\) is a \(\nu\)-martingale
and hence that under \(\nu\) the coordinate process \((X_t)_{t \geq 0}\) solves the martingale problem \((A^*, L^*, j_0, \infty)\). The uniqueness assumption implies that \(\nu = \mu^*\). Because also for \(G \in \mathcal{D}^o_\rho\) we can choose \(G' = G \times D(\mathbb{R}_+, J)\), we obtain that
\(
\mu^* (G) = \nu(G) = \mu_n(G).
\)
The proof is complete.
\qed
\section{Introduction}
Let $\nn$, $\nn_0$, $\zz$ and $\cc$ denote the sets of
positive integers, nonnegative integers, integers and complex numbers, respectively. For $n\in\nn$ we set
$\sigma(n)= \sum_{ d \mid n } d$, where $d$ runs through the positive divisors of $n$.
If $n\not \in\nn$ we set $\sigma(n)=0$.
For $a_1, a_2, a_3, a_4 \in \nn$, and $n \in \nn_0$
we define
\begin{align*}
N(a_1,a_2,a_3,a_4 ;n):={\rm card}\{(x_1,x_2,x_3,x_4)\in \zz^4 \mid n= a_1x_1^2 +a_2x_2^2+a_3x_3^2+ a_4 x_4^2 \}.
\end{align*}
It is a classical result of Jacobi \cite {jacobi, williamsBook} that
\begin{align*}
N(1,1,1,1;n)=8\sigma(n) - 32 \sigma (n/4).
\end{align*}
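As a quick numerical check, the representations of \(4\) as a sum of four squares are the \(8\) vectors obtained from \((\pm 2, 0, 0, 0)\) by permutation and the \(16\) vectors \((\pm 1, \pm 1, \pm 1, \pm 1)\), so that
\begin{align*}
N(1,1,1,1;4) = 24 = 8 \cdot 7 - 32 \cdot 1 = 8 \sigma(4) - 32 \sigma(1).
\end{align*}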
Formulas for $N(a_1, a_2, a_3, a_4;n)$ for the quaternary quadratic forms
\begin{align*}
(a_1,a_2,a_3,a_4)= (1,1,1,2), (1,1,2,2), (1,2,2,2), (1,1,1,5), (1,1,5,5), (1,5,5,5)
\end{align*}
are in the literature,
see for example \cite{alaca2007quaternary, alaca2007nineteen, ASLW-2009, K1155, L1122, L1112-1222, L1115, L1555, L1155, Lomadse, W1112-1222}.
There are twenty-six quaternary quadratic forms
$ a_1x_1^2 +a_2x_2^2+a_3x_3^2+ a_4 x_4^2 $, where $a_1, a_2, a_3, a_4 \in \{1,2,5,10\}$, $a_1\leq a_2\leq a_3\leq a_4$ and
$\gcd (a_1,a_2,a_3,a_4)=1$, see Table 2.1.
In this paper, we determine an explicit formula for $N(a_1,a_2,a_3,a_4 ;n)$ for each of these quaternary
forms in a uniform manner. We use a modular forms approach.
For $q \in \cc$ with $|q|<1$, Ramanujan's theta function $\varphi (q)$ is defined by
\begin{align*}
\varphi (q) = \sum_{n=-\infty}^{\infty} q^{n^2}.
\end{align*}
For $a_1, a_2,a_3,a_4 \in \nn$ we have
\begin{align}
\sum_{n=1}^{\infty} N(a_1,a_2,a_3,a_4;n) q^n=\varphi(q^{a_1}) \varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4}).
\end{align}
The Dedekind eta function $\eta (z)$ is the holomorphic function defined on the upper half plane $\hh = \{ z \in \cc \mid \mbox{\rm Im}(z) >0 \}$
by
\begin{align*}
\eta (z) = e^{\pi i z/12} \prod_{n=1}^{\infty} (1-e^{2\pi inz}).
\end{align*}
Throughout the remainder of the paper we take $q=q(z):=e^{2\pi i z}$ with $z\in \hh$.
Thus we can express $\eta (z)$ as
\begin{align}
\eta (z) = q^{1/24} \prod_{n=1}^{\infty} (1-q^{n}).
\end{align}
An eta quotient is defined to be a finite product of the form
\begin{align*}
f(z) = \prod_{\delta } \eta^{r_{\delta}} ( \delta z),
\end{align*}
where $\delta$ runs through a finite set of positive integers and the exponents $r_{\delta}$ are non-zero integers.
It is known (see for example \cite[Corollary 1.3.4]{Berndt}) that
\begin{align}
\varphi(q) = \frac{\eta^5(2z)}{\eta^2(z) \eta^2(4z)}.
\end{align}
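This identity is easily checked on initial terms: the powers of \(q^{1/24}\) cancel, since \(5 \cdot 2 - 2 \cdot 1 - 2 \cdot 4 = 0\), and expanding the resulting product gives
\begin{align*}
\prod_{n=1}^{\infty} \frac{(1-q^{2n})^5}{(1-q^{n})^2 (1-q^{4n})^2} = 1 + 2q + 2q^4 + \cdots,
\end{align*}
in agreement with \(\varphi (q) = 1 + 2q + 2q^4 + 2q^9 + \cdots\).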
\section{Modular spaces $M_2(\Gamma_0(40),\chi_i)$ with $i\in\{0,1,2,3\}$}
For $n\in \nn$ and Dirichlet characters $\chi$ and $\psi$ we define $\displaystyle \sigma_{\chi,\psi}(n)$ by
\begin{align}
\sigma_{\chi,\psi }(n) :=\sum_{1 \leq m|n}\psi(m)\chi(n/m)m.
\end{align}
If $n \not\in \nn$ we set $\sigma_{\chi,\psi }(n)=0$. Let $\chi_0$ denote the trivial character, that is $\chi_0 (n) =1$ for all $n \in \zz$.
Hence $\sigma_{\chi_0, \chi_0}(n) $ coincides with the sum of divisors function $\sigma (n)$. Let $N\in\nn$. The modular subgroup $\Gamma_0(N)$ is defined by
\begin{align*}
\Gamma_0(N) = \left\{ \left(
\begin{array}{cc}
a & b \\
c & d
\end{array}
\right) \Big | \; a,b,c,d\in \zz ,~ ad-bc = 1,~c \equiv 0 \pmd {N}
\right\} .
\end{align*}
Let $\chi$ be a Dirichlet character of modulus dividing $N$ and let $k\in \zz$.
We write $M_k(\Gamma_0(N),\chi)$ to denote the space of modular forms of weight $k$ with multiplier system $\chi$ for $\Gamma_0(N)$, and $E_{k}(\Gamma_0(N),\chi)$ and $S_{k}(\Gamma_0(N),\chi)$ to denote the subspaces of Eisenstein forms and cusp forms of $M_{k}(\Gamma_0(N),\chi)$, respectively. If $\chi =\chi_0$, then we write $M_k(\Gamma_0(N))$ for $ M_k(\Gamma_0(N),\chi_0)$, and $S_k(\Gamma_0(N))$ for $S_k(\Gamma_0(N),\chi_0)$.
It is known (see for example \cite[p. 83]{stein}) that
\begin{align}
M_{k}(\Gamma_0(N),\chi)=E_{k}(\Gamma_0(N),\chi)\oplus S_{k}(\Gamma_0(N),\chi).
\end{align}
For $n \in \zz$ we define three Dirichlet characters by
\begin{align}
\chi_1 ( n)=\dqu{5}{n},~\chi_2 (n) =\dqu{8}{n},~\chi_3 (n) =\dqu{40}{n}.
\end{align}
We define the Eisenstein series
\begin{align}
& L(q):=E_{\chi_0,\chi_0}(q)=-\frac{1}{24}+ \sum_{n=1}^{\infty} \sigma(n) q^n,\\
& E_{\chi_0,\chi_1}(q)=-\frac{1}{5}+\sum_{n=1}^{\infty} \sigma_{\chi_0, \chi_1}(n) q^n,~
E_{\chi_1,\chi_0}(q)=\sum_{n=1}^{\infty} \sigma_{\chi_1, \chi_0}(n) q^n,\\
&E_{\chi_0,\chi_2}(q)=-\frac{1}{2}+\sum_{n=1}^{\infty} \sigma_{\chi_0, \chi_2}(n) q^n,~
E_{\chi_2,\chi_0}(q)=\sum_{n=1}^{\infty} \sigma_{\chi_2, \chi_0}(n) q^n,\\
&E_{\chi_0,\chi_3}(q)=-7+\sum_{n=1}^{\infty} \sigma_{\chi_0, \chi_3}(n) q^n, ~
E_{\chi_3,\chi_0}(q)=\sum_{n=1}^{\infty} \sigma_{\chi_3, \chi_0}(n) q^n,\\
& E_{\chi_1,\chi_2}(q)=\sum_{n=1}^{\infty} \sigma_{\chi_1, \chi_2}(n) q^n,~\hspace{10mm}
E_{\chi_2,\chi_1}(q)=\sum_{n=1}^{\infty} \sigma_{\chi_2, \chi_1}(n) q^n.
\end{align}
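For concreteness, the twisted sums (2.1) and the characters (2.3) are easy
to tabulate. In the sketch below the Kronecker symbols $\dqu{5}{\cdot}$ and
$\dqu{8}{\cdot}$ are realized as the quadratic characters of modulus $5$
and $8$; this identification matches the standard conventions but is our
assumption, not a statement from the references:
\begin{verbatim}
def chi0(n):                        # trivial character
    return 1

def chi1(n):                        # (5/n), periodic mod 5
    return [0, 1, -1, -1, 1][n % 5]

def chi2(n):                        # (8/n), periodic mod 8
    return [0, 1, 0, -1, 0, -1, 0, 1][n % 8]

def chi3(n):                        # (40/n) = (5/n)(8/n)
    return chi1(n) * chi2(n)

def sigma_twisted(chi, psi, n):
    # sigma_{chi,psi}(n) = sum over m|n of psi(m) chi(n/m) m; 0 off NN
    if n != int(n) or n < 1:
        return 0
    n = int(n)
    return sum(psi(m) * chi(n // m) * m
               for m in range(1, n + 1) if n % m == 0)

# sigma_{chi0,chi0} is the ordinary sum-of-divisors function:
assert sigma_twisted(chi0, chi0, 12) == 28
# first few q-coefficients of E_{chi1,chi0}(q):
print([sigma_twisted(chi1, chi0, n) for n in range(1, 6)])
\end{verbatim}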
We use the following lemma to determine if certain eta quotients are modular forms.
See \cite[p. 174]{Gordon}, \cite[Corollary 2.3, p. 37]{Kohler}, \cite[Theorem 5.7, p. 101]{Kilford}, \cite{Ligozat} and \cite[Theorem 1.64]{ono}.
\begin{lemma} {\rm \bf (Ligozat)}
Let $N\in\nn$ and $f(z)=\ds \prod_{1\leq \delta \mid N} \eta^{r_{\delta}}(\delta z)$ be an eta quotient.
Let $s=\ds \prod_{ 1\leq \delta \mid N} \delta^{|r_{\delta}|}$ and $\ds k = \frac{1}{2} \sum_{1 \leq \delta \mid N} r_{\delta}$.
Suppose that the following conditions are satisfied:\\
{\em (L1)~} $\ds \sum_{ 1\leq \delta \mid N} \delta \cdot r_{\delta} \equiv 0 \smod {24}$,\\
{\em (L2)~} $\ds \sum_{ 1 \leq \delta \mid N} \frac{N}{\delta} \cdot r_{\delta} \equiv 0 \smod {24}$,\\
{\em (L3)~} $\ds \sum_{1 \leq \delta \mid N} \frac{ \gcd (d, \delta)^2 \cdot r_{\delta} }{\delta} \geq 0 $ for each
positive divisor $d$ of $N$, \\
{\em (L4)~} $k$ is an integer.
\noindent
Then $f(z) \in M_k(\Gamma_0(N),\chi)$, where the character $\chi$ is given by $\chi (m) = \ds \dqu{(-1)^ks}{m}$.
{\em (L3)$'$~} In addition to the above conditions, if the inequality in {\em(L3)} is strict for each positive divisor $d$ of $N$,
then $f(z) \in S_k(\Gamma_0(N), \chi)$.
\end{lemma}
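The conditions of the lemma are mechanical to check. The sketch below tests
(L1)--(L4) and the strict form of (L3) for the eta quotient
$\eta^2(2z)\eta^2(10z)$ on $\Gamma_0(40)$, which is the cusp form $A_1(q)$
introduced in (2.13) below; the code is ours and purely illustrative:
\begin{verbatim}
from fractions import Fraction
from math import gcd

N = 40
r = {2: 2, 10: 2}                  # exponents r_delta of eta(delta z)
divisors = [d for d in range(1, N + 1) if N % d == 0]

L1 = sum(d * e for d, e in r.items()) % 24 == 0
L2 = sum((N // d) * e for d, e in r.items()) % 24 == 0
order = lambda d: sum(Fraction(gcd(d, delta) ** 2 * e, delta)
                      for delta, e in r.items())
L3 = all(order(d) >= 0 for d in divisors)
L3strict = all(order(d) > 0 for d in divisors)
k = Fraction(sum(r.values()), 2)
L4 = k.denominator == 1

# s = 2^2 * 10^2 = 400 = 20^2, so chi(m) = ((-1)^k 400 / m) is trivial,
# consistent with A_1(q) lying in S_2(Gamma_0(40))
print(L1, L2, L3, L4, k, L3strict)   # True True True True 2 True
\end{verbatim}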
In Table 2.1, we group our twenty-six quaternary forms $(a_1, a_2, a_3, a_4)$
according to the modular space $M_2(\Gamma_0(40),\chi)$ to which
$\varphi(q^{a_1})\varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4})$ belongs.
\begin{center}
Table 2.1\\[1mm]
\begin{tabular}{|l|l|l|l|} \hline
$M_2(\Gamma_0(40))$ & $ M_2(\Gamma_0(40),\chi_1)$ & $M_2(\Gamma_0(40),\chi_2)$ & $M_2(\Gamma_0(40),\chi_3)$\\
\hline
$(1,1,1,1) \checkmark$ & $(1,1,1,5)\checkmark$ & $(1,1,1,2)\checkmark$ & $(1,1,1,10)$ \\
$(1,1,2,2)\checkmark$ & $(1,1,2,10)\ast$ & $(1,1,5,10)$ &$(1,1,2,5)\ast$ \\
$(1,1,5,5)\checkmark$ &$(1,2,2,5)\ast$ & $(1,2,2,2)\checkmark$ & $(1,2,2,10)$ \\
$(1,1,10,10)$ & $(1,5,5,5)\checkmark$ & $(1,2,5,5)$ & $(1,5,5,10)$ \\
$(1,2,5,10)\ast$ & $(1,5,10,10)$ & $(1,2,10,10)$ & $(1,10,10,10)$ \\
$(2,2,5,5)$ & $(2,5,5,10)$ & $(2,2,5,10)$ & $(2,2,2,5)$ \\
& & & $(2,5,5,5)$ \\
& & & $(2,5,10,10)$ \\
\hline
\end{tabular}
\end{center}
Formulas for $N(a_1,a_2,a_3,a_4;n)$ for the forms with a checkmark ($\checkmark$) in Table 2.1 are known.
Of the remaining nineteen forms, four are universal and identified with an asterisk ($\ast$).
We deduce from \cite[Sec. 6.1, p. 93]{stein} that
\begin{align}
\dim(E_2(\Gamma_0(40)))=7,~\dim(S_2(\Gamma_0(40)))=3.
\end{align}
We also deduce from \cite[Sec. 6.3, p. 98]{stein} that
\begin{align}
&\dim(E_2(\Gamma_0(40),\chi_1))=8,~\dim(S_2(\Gamma_0(40),\chi_1))=2, \\
&\dim(E_2(\Gamma_0(40),\chi_2))=4,~\dim(S_2(\Gamma_0(40),\chi_2))=4, \\
&\dim(E_2(\Gamma_0(40),\chi_3))=4,~\dim(S_2(\Gamma_0(40),\chi_3))=4.
\end{align}
\begin{theorem}
Let $\chi_1, \chi_2, \chi_3$ be as in {\em (2.3)}.
If $(a_1,a_2,a_3,a_4)$ is in the first, second, third or fourth column of {\em Table 2.1}, then
\begin{align*}
&\varphi(q^{a_1})\varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4}) \in M_2(\Gamma_0(40)),\\
&\varphi(q^{a_1})\varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4}) \in M_2(\Gamma_0(40),\chi_1),\\
&\varphi(q^{a_1})\varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4}) \in M_2(\Gamma_0(40),\chi_2),\\
&\varphi(q^{a_1})\varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4}) \in M_2(\Gamma_0(40),\chi_3),
\end{align*}
respectively.
\end{theorem}
{\bf Proof.}
The assertion directly follows from (1.3) and Lemma 2.1.
\eop
\vspace{1mm}
Let $n\in\nn$. We define the eta quotients $A_k(q)$, $B_k(q)$, $C_k(q)$, $D_k(q)$
and integers $a_k(n)$, $b_k(n)$, $c_k(n)$, $d_k(n)$ as follows:
\begin{align}
&A_1(q)=\displaystyle\sum_{n=1}^{\infty} a_1(n)q^n = \eta^2(2z)\eta^2({10}z),\\
&A_2(q)=A_1(q^2)=\displaystyle\sum_{n=1}^{\infty} a_2(n)q^n = \eta^2(4z)\eta^2({20z}), \\
&A_3(q)=\displaystyle\sum_{n=1}^{\infty} a_3(n)q^n = \frac{\eta^5(4z)\eta({10}z)\eta^2({40}z)}{\eta(2z)\eta^2(8z)\eta({20}z)},\\
&B_1(q)=\sum_{n=1}^{\infty}b_1(n)q^n=\frac{\eta(2z) \eta^4(20 z)}{\eta(10z)},\\
&B_2(q)=\sum_{n=1}^{\infty}b_2(n)q^n=\frac{\eta^4(4z) \eta(10 z)}{\eta(2z)}, \\
&C_1(q)=\sum_{n=1}^{\infty}c_1(n)q^n=\frac{\eta^2(z) \eta(8 z)\eta^2(10 z)\eta(40 z)}{\eta(2z)\eta(20z)},\\
&C_2(q)=\sum_{n=1}^{\infty}c_2(n)q^n=\frac{\eta(z) \eta(5z)\eta^2(8z)\eta^2(20z)}{\eta(4z)\eta(10z)},\\
&C_3(q)=\sum_{n=1}^{\infty}c_3(n)q^n=\frac{\eta^6(2z) \eta(10z)\eta^2(40z)}{\eta^2(z)\eta^2(4z)\eta(20z)},\\
&C_4(q)=\sum_{n=1}^{\infty}c_4(n)q^n=\frac{\eta^6(4z) \eta^2(5z)\eta(20z)}{\eta^2(2z)\eta^2(8z)\eta(10z)},\\
&D_1(q)=\sum_{n=1}^{\infty}d_1(n)q^n=\frac{\eta^2(z) \eta^6(4 z)\eta(20 z)}{\eta^3(2z)\eta^2(8z)},\\
&D_2(q)=\sum_{n=1}^{\infty}d_2(n)q^n=\frac{\eta^2(5z) \eta(8z)\eta(10z)\eta(40z)}{\eta(20z)},\\
&D_3(q)=\sum_{n=1}^{\infty}d_3(n)q^n=\frac{\eta(z) \eta(5z)\eta(20z)\eta^2(40z)}{\eta(10z)},\\
&D_4(q)=\sum_{n=1}^{\infty}d_4(n)q^n=\frac{\eta(z) \eta(4z)\eta(5z)\eta^2(8z)}{\eta(2z)}.
\end{align}
\begin{theorem}
Let $\chi_1, \chi_2, \chi_3$ be as in {\em (2.3)}. Then
\begin{align*}
&\{A_1(q), A_2(q), A_3(q)\}, \hspace{12mm}
\{B_1(q), B_2(q)\}, \\
&\{C_1 (q), C_2 (q), C_3 (q), C_4 (q) \}, ~~
\{D_1(q), D_2(q), D_3(q), D_4(q)\}
\end{align*}
are bases for $S_2(\Gamma_0(40))$, $S_2(\Gamma_0(40),\chi_1)$, $S_2(\Gamma_0(40),\chi_2)$ and $S_2(\Gamma_0(40),\chi_3)$, respectively.
\end{theorem}
{\bf Proof.} The set $\{A_1(q), A_2(q), A_3(q)\}$ is linearly independent over $\cc$.
By Lemma 2.1, we have $A_k(q)\in S_2(\Gamma_0(40))$ for $k=1,2,3$. The assertion now follows from (2.9).
Similarly, the remaining three assertions follow from (2.10), (2.11), (2.12) and Lemma 2.1.
\eop
\begin{theorem}
Let $\chi_0$ be the trivial character and $\chi_1, \chi_2, \chi_3$ be as in {\em(2.3)}. Then
\begin{align*}
&\{L(q)-tL(q^t) \mid t=2,4,5,8,10,20,40 \} ,\\
&\{E_{\chi_0,\chi_1}(q^t), E_{\chi_1,\chi_0}(q^t) \mid t=1,2,4,8\}, \\
&\{E_{\chi_0,\chi_2}(q^t), E_{\chi_2,\chi_0}(q^t) \mid t=1,5\}, \\
&\{E_{\chi_0,\chi_3}(q), E_{\chi_1,\chi_2}(q), E_{\chi_2,\chi_1}(q), E_{\chi_3,\chi_0}(q)\}
\end{align*}
are bases for $E_2(\Gamma_0(40))$, $ E_2(\Gamma_0(40),\chi_1)$, $ E_2(\Gamma_0(40),\chi_2)$ and $ E_2(\Gamma_0(40),\chi_3)$, respectively.
\end{theorem}
{\bf Proof.}
The assertions follow from \cite[Theorem 5.9]{stein} with
$\chi=\psi=\chi_0$;
$\epsilon=\chi_1$ and $\chi, \psi \in \{\chi_0, \chi_1\}$;
$\epsilon=\chi_2$ and $\chi, \psi \in \{\chi_0, \chi_2\}$;
$\epsilon=\chi_3$ and $\chi, \psi \in \{\chi_0, \chi_1, \chi_2, \chi_3\}$, respectively.
\eop
\begin{theorem}
Let $\chi_0$ be the trivial character and $\chi_1, \chi_2, \chi_3$ be as in {\em(2.3)}. Then
\begin{align*}
&\{L(q)-tL(q^t)\mid t=2,4,5,8,10,20,40\} \cup \{A_1(q), A_2(q), A_3(q)\}, \\
&\{E_{\chi_0,\chi_1}(q^t), E_{\chi_1,\chi_0}(q^t) \mid t=1,2,4,8\}\cup\{B_1(q),B_2(q)\},\\
&\{E_{\chi_0,\chi_2}(q^t), E_{\chi_2,\chi_0}(q^t) \mid t=1,5\}\cup \{C_k(q)\mid k=1,2,3,4\}, \\
&\{E_{\chi_0,\chi_3}(q), E_{\chi_1,\chi_2}(q) ,E_{\chi_2,\chi_1}(q),E_{\chi_3,\chi_0}(q)\}\cup\{ D_k(q) \mid k=1,2,3,4 \}
\end{align*}
are bases for $M_2(\Gamma_0(40))$, $M_2(\Gamma_0(40),\chi_1)$, $M_2(\Gamma_0(40),\chi_2)$, $M_2(\Gamma_0(40),\chi_3)$, respectively.
\end{theorem}
{\bf Proof.} The assertions follow from (2.2), Theorems 2.2 and 2.3.
\eop
\vspace{1mm}
We now give four theorems (Theorems 2.5--2.8) from which the theorems of Section 3 (Theorems 3.1--3.4) follow.
\begin{theorem}
We have
\begin{align*}
&\begin{aligned}
\varphi^4 (q) =& 8L(q) -32L(q^4),
\end{aligned}\\
&\begin{aligned}
\varphi^2 (q) \varphi^2 (q^2)=& 4 L(q) - 4 L(q^2) +8 L(q^4) - 32 L(q^{8}) ,
\end{aligned}\\
&\begin{aligned}
\varphi^2 (q) \varphi^2 (q^5)=& \frac{4}{3}L(q) - \frac{16}{3}L(q^4) + \frac{20}{3}L(q^5) - \frac{80}{3}L(q^{20})
+ \frac{8}{3}A_1(q),
\end{aligned}\\
&\begin{aligned}
\varphi^2 (q) \varphi^2 (q^{10})=& \frac{2}{3}L(q) - \frac{2}{3}L(q^2) + \frac{4}{3}L(q^4) + \frac{10}{3}L(q^5)
- \frac{16}{3}L(q^8) - \frac{10}{3} L(q^{10}) \\
& +\frac{20}{3}L(q^{20}) -\frac{80}{3}L(q^{40})+\frac{10}{3} A_1(q) + \frac{8}{3}A_2(q) +4 A_3(q),
\end{aligned}\\
&\begin{aligned}
\varphi (q)\varphi (q^2)\varphi (q^5)\varphi (q^{10})=& L(q) - L(q^2) -2 L(q^4) -5L(q^5) +8 L(q^8) \\
&+5 L(q^{10}) +10 L(q^{20}) -40L(q^{40}) + A_1(q) + 2A_3(q),
\end{aligned}\\
&\begin{aligned}
\varphi^2 (q^2) \varphi^2 (q^5)=& \frac{2}{3}L(q) - \frac{2}{3}L(q^2) + \frac{4}{3}L(q^4) + \frac{10}{3}L(q^5)
- \frac{16}{3}L(q^8) - \frac{10}{3} L(q^{10}) \\
&+\frac{20}{3}L(q^{20}) -\frac{80}{3}L(q^{40}) -\frac{2}{3} A_1(q) + \frac{8}{3}A_2(q) -4 A_3(q).
\end{aligned}
\end{align*}
\end{theorem}
{\bf Proof.} Let $(a_1,a_2,a_3,a_4)$ be any of the quaternary quadratic forms listed in the first column of Table 2.1. By Theorem 2.1
we have $\varphi(q^{a_1})\varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4}) \in M_2(\Gamma_0(40))$.
By Theorem 2.4, $\varphi(q^{a_1})\varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4})$ must be a linear combination of $L(q)-tL(q^t)$ ($t=2,4,5,8,10,20,40$)
and $A_k(q)$ ($k\in\{1,2,3\}$), namely
\begin{align}
\varphi(q^{a_1})\varphi(q^{a_2})\varphi(q^{a_3})\varphi(q^{a_4}) =& x_1( L(q) -2L(q^2))+x_2 ( L(q) -4L(q^4)) \nonumber\\
&+ x_3( L(q) -5L(q^5)) + x_4 ( L(q) -8L(q^8)) \nonumber \\
& + x_5 ( L(q) -10L(q^{10})) + x_6 ( L(q) -20L(q^{20})) \\
& +x_7 ( L(q) -40L(q^{40})) +y_1 A_1(q) + y_2A_2(q) + y_3 A_3(q). \nonumber
\end{align}
We only prove the last equation in the theorem as the others can be proven similarly. Let $(a_1,a_2,a_3,a_4)=(2,2,5,5)$.
Appealing to \cite[Theorem 3.13]{Kilford}, we find that the Sturm bound for the modular space $M_2(\Gamma_0(40))$ is $12$.
So, equating the coefficients of $q^{n}$ for $0\leq n\leq 12$ on both sides of (2.26),
we find a system of linear equations with the unknowns $x_i$ ($1\leq i\leq 7$), $y_1$, $y_2$ and $y_3$.
Using MAPLE we solve the system and find that
\begin{eqnarray*}
x_1=x_5=\frac{1}{3}, x_2=x_6=-\frac{1}{3}, x_3=y_1=-\frac{2}{3}, x_4=x_7=\frac{2}{3}, y_2=\frac{8}{3}, y_3=-4.
\end{eqnarray*}
Substituting these values back in (2.26), and with the obvious simplifications, we find the asserted equation.
\eop
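The computation behind this proof is easy to reproduce without MAPLE. The
sketch below expands both sides of the last identity of Theorem 2.5 as
exact rational $q$-series up to the Sturm bound and checks that the
coefficients found above make the two sides agree:
\begin{verbatim}
from fractions import Fraction

P = 13                 # compare coefficients of q^0, ..., q^12

def mul(f, g):
    h = [Fraction(0)] * P
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if i + j < P:
                    h[i + j] += a * b
    return h

def inverse(f):
    # power-series inverse mod q^P (requires f[0] != 0)
    g = [Fraction(0)] * P
    g[0] = 1 / f[0]
    for n in range(1, P):
        g[n] = -sum(f[k] * g[n - k] for k in range(1, n + 1)) / f[0]
    return g

def eta_quotient(r):
    # q^{sum(delta r_delta)/24} prod_{delta,n} (1 - q^{delta n})^{r_delta}
    s = [Fraction(0)] * P
    s[sum(d * e for d, e in r.items()) // 24] = Fraction(1)
    for d, e in r.items():
        for n in range(1, P):
            f = [Fraction(0)] * P
            f[0] = Fraction(1)
            if d * n < P:
                f[d * n] = Fraction(-1)
            g = f if e > 0 else inverse(f)
            for _ in range(abs(e)):
                s = mul(s, g)
    return s

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def L(t):
    # q-expansion of L(q^t) = -1/24 + sum_n sigma(n) q^{tn}
    f = [Fraction(0)] * P
    f[0] = Fraction(-1, 24)
    for n in range(t, P, t):
        f[n] = Fraction(sigma(n // t))
    return f

def phi(a):
    # q-expansion of Ramanujan's theta function phi(q^a)
    f = [Fraction(0)] * P
    x = 0
    while a * x * x < P:
        f[a * x * x] += 2 if x else 1
        x += 1
    return f

A1 = eta_quotient({2: 2, 10: 2})
A2 = eta_quotient({4: 2, 20: 2})
A3 = eta_quotient({4: 5, 10: 1, 40: 2, 2: -1, 8: -2, 20: -1})
L1 = L(1)

x = {2: Fraction(1, 3), 4: Fraction(-1, 3), 5: Fraction(-2, 3),
     8: Fraction(2, 3), 10: Fraction(1, 3), 20: Fraction(-1, 3),
     40: Fraction(2, 3)}
lhs = mul(mul(phi(2), phi(2)), mul(phi(5), phi(5)))
rhs = [sum(x[t] * (L1[n] - t * L(t)[n]) for t in x)
       + Fraction(-2, 3) * A1[n] + Fraction(8, 3) * A2[n] - 4 * A3[n]
       for n in range(P)]
assert lhs == rhs
\end{verbatim}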
\vspace{2mm}
\begin{corollary} Let $n\in\nn$. We have
\begin{align*}
N(1,1,10,10;n)=N(2,2,5,5;n) \mbox{ if } n\equiv 0 \pmdd 2.
\end{align*}
\end{corollary}
{\bf Proof.} From Theorem 2.5, we have
\begin{align}
\varphi^2(q)\varphi^2(q^{10})-\varphi^2(q^2)\varphi^2(q^5)=4A_1(q)+8A_3(q).
\end{align}
It is clear from (1.2), (2.13) and (2.15) that
\begin{align}
a_1(n)= a_3(n)=0 \mbox{ if } n\equiv 0 \pmdd 2.
\end{align}
The assertion now follows from (1.1), (2.27) and (2.28).
\eop
\vspace{2mm}
Similarly to Theorem 2.5, Theorems 2.6--2.8 follow from Theorems 2.1 and 2.4.
\begin{theorem}
Let $\chi_0$ be the trivial character and $\chi_1$ be as in {\em(2.3)}. Then
\begin{align*}
&\begin{aligned}
\varphi^3 (q) \varphi (q^5)= & E_{\chi_0,\chi_1}(q) -2E_{\chi_0,\chi_1}(q^2) -4 E_{\chi_0,\chi_1}(q^4) + 5E_{\chi_1,\chi_0}(q) \\
& + 10E_{\chi_1,\chi_0}(q^2) -20E_{\chi_1,\chi_0}(q^4),
\end{aligned}\\
&\begin{aligned}
\varphi^2 (q)\varphi (q^2)\varphi (q^{10})= &-\frac{1}{2}E_{\chi_0,\chi_1}(q) +\frac{1}{2} E_{\chi_0,\chi_1}(q^2) - E_{\chi_0,\chi_1}(q^4) \\
&-4 E_{\chi_0,\chi_1}(q^8) +\frac{5}{2}E_{\chi_1,\chi_0}(q)+\frac{5}{2}E_{\chi_1,\chi_0}(q^2) \\
& + 5E_{\chi_1,\chi_0}(q^4) -20E_{\chi_1,\chi_0}(q^8)+ 2B_2(q),
\end{aligned}\\
&\begin{aligned}
\varphi(q)\varphi ^2(q^2)\varphi (q^5)= & \frac{1}{2}E_{\chi_0,\chi_1}(q) -\frac{1}{2} E_{\chi_0,\chi_1}(q^2) - E_{\chi_0,\chi_1}(q^4) \\
& -4 E_{\chi_0,\chi_1}(q^8) +\frac{5}{2}E_{\chi_1,\chi_0}(q)+\frac{5}{2}E_{\chi_1,\chi_0}(q^2)\\
& - 5E_{\chi_1,\chi_0}(q^4)+20E_{\chi_1,\chi_0}(q^8)+5 B_1(q) -B_2(q),
\end{aligned}\\
&\begin{aligned}
\varphi (q) \varphi^3 (q^5)=& E_{\chi_0,\chi_1}(q) -2 E_{\chi_0,\chi_1}(q^2) -4 E_{\chi_0,\chi_1}(q^4) + E_{\chi_1,\chi_0}(q) \\
&+ 2E_{\chi_1,\chi_0}(q^2) -4E_{\chi_1,\chi_0}(q^4),
\end{aligned}\\
&\begin{aligned}
\varphi(q)\varphi (q^5)\varphi^2 (q^{10})=&\frac{1}{2}E_{\chi_0,\chi_1}(q) -\frac{1}{2} E_{\chi_0,\chi_1}(q^2) - E_{\chi_0,\chi_1}(q^4) \\
& -4E_{\chi_0,\chi_1}(q^8)+\frac{1}{2}E_{\chi_1,\chi_0}(q)+\frac{1}{2}E_{\chi_1,\chi_0}(q^2)\\
& - E_{\chi_1,\chi_0}(q^4)+4E_{\chi_1,\chi_0}(q^8)- B_1(q) +B_2(q),
\end{aligned}\\
&\begin{aligned}
\varphi(q^2)\varphi^2 (q^5)\varphi (q^{10})=&-\frac{1}{2}E_{\chi_0,\chi_1}(q) +\frac{1}{2} E_{\chi_0,\chi_1}(q^2) - E_{\chi_0,\chi_1}(q^4)\\
& -4E_{\chi_0,\chi_1}(q^8) +\frac{1}{2}E_{\chi_1,\chi_0}(q)+\frac{1}{2}E_{\chi_1,\chi_0}(q^2)\\
&+ E_{\chi_1,\chi_0}(q^4) -4E_{\chi_1,\chi_0}(q^8)-2 B_1(q).
\end{aligned}
\end{align*}
\end{theorem}
\begin{theorem}
Let $\chi_0$ be the trivial character and $\chi_2$ be as in {\em (2.3)}. Then
\begin{align*}
&\begin{aligned}
\varphi^3(q) \varphi (q^2) =& -2 E_{\chi_0,\chi_2}(q) + 8 E_{\chi_2,\chi_0}(q) ,
\end{aligned}\\
&\begin{aligned}
\varphi^2(q) \varphi (q^5) \varphi (q^{10})
=& \frac{2}{13}\big(2 E_{\chi_0,\chi_2}(q) - 15E_{\chi_0,\chi_2}(q^5) +8 E_{\chi_2,\chi_0}(q) + 60E_{\chi_2,\chi_0}(q^5)\big) \\
&+ \frac{8}{13}\big(6 C_1(q) - 4C_2(q) - 3 C_3(q) + 4 C_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi(q) \varphi^3 (q^2) =& -2 E_{\chi_0,\chi_2}(q) + 4 E_{\chi_2,\chi_0}(q) ,
&\end{aligned}\\
&\begin{aligned}
\varphi(q) \varphi (q^2) \varphi^2 (q^5)=& \frac{2}{13}\big(-3 E_{\chi_0,\chi_2}(q) -10E_{\chi_0,\chi_2}(q^5) +12E_{\chi_2,\chi_0}(q) \big)\\
& -\frac{80}{13}E_{\chi_2,\chi_0}(q^5) + \frac{8}{13}\big(-2 C_2(q) - 5 C_3(q) + C_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi(q) \varphi (q^2) \varphi^2 (q^{10})=&\frac{2}{13}\big(-3 E_{\chi_0,\chi_2}(q) -10E_{\chi_0,\chi_2}(q^5) +6 E_{\chi_2,\chi_0}(q)\big)\\
&-\frac{40}{13}E_{\chi_2,\chi_0}(q^5) + \frac{4}{13}\big(2 C_1(q) - 2 C_3(q) + 5 C_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi^2(q^2) \varphi (q^5) \varphi (q^{10})
=& \frac{2}{13}\big(2 E_{\chi_0,\chi_2}(q) -15E_{\chi_0,\chi_2}(q^5) +4 E_{\chi_2,\chi_0}(q) +30E_{\chi_2,\chi_0}(q^5)\big) \\
& + \frac{4}{13}\big(-4 C_1(q) + 12 C_2(q) + 8 C_3(q) - 3 C_4(q)\big).
\end{aligned}
\end{align*}
\end{theorem}
\begin{theorem}
Let $\chi_0$ be the trivial character and $\chi_1, \chi_2, \chi_3$ be as in {\em(2.3)}. Then
\begin{align*}
&\begin{aligned}
\varphi^3 (q)\varphi (q^{10})= & \frac{1}{7} \big(-E_{\chi_0,\chi_3}(q) - 5E_{\chi_1,\chi_2}(q) +4 E_{\chi_2,\chi_1}(q)
+20E_{\chi_3,\chi_0}(q)\big) \\
&+\frac{4}{7}\big(-3D_1(q)+15D_2(q) -15 D_3(q)+ 9 D_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi^2(q)\varphi (q^2)\varphi (q^5)=&\frac{1}{7}\big(- E_{\chi_0,\chi_3}(q) +5E_{\chi_1,\chi_2}(q) - 4 E_{\chi_2,\chi_1}(q) +20 E_{\chi_3,\chi_0}(q)\big)\\
& + \frac{8}{7}\big(-D_1(q) + 2D_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi(q)\varphi^2 (q^2)\varphi (q^{10})=& \frac{1}{7}\big(- E_{\chi_0,\chi_3}(q) -5E_{\chi_1,\chi_2}(q) +2 E_{\chi_2,\chi_1}(q) +10E_{\chi_3,\chi_0}(q)\big) \\
& +\frac{4}{7}\big(D_1(q)+ 5 D_2(q) + 5 D_3(q)+D_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi(q)\varphi^2 (q^5)\varphi (q^{10})=& \frac{1}{7} \big(-E_{\chi_0,\chi_3}(q) -E_{\chi_1,\chi_2}(q) +4 E_{\chi_2,\chi_1}(q) + 4 E_{\chi_3,\chi_0}(q)\big) \\
& +\frac{8}{7}\big(D_2(q) -D_3(q)+D_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi (q)\varphi^3 (q^{10})=& - \frac{1}{7} \big(E_{\chi_0,\chi_3}(q) + E_{\chi_1,\chi_2}(q) -2 E_{\chi_2,\chi_1}(q) - 2E_{\chi_3,\chi_0}(q) \big) \\
&+\frac{12}{7}\big(D_2(q) +D_3(q)+D_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi^3 (q^2) \varphi (q^5)=& \frac{1}{7}\big(- E_{\chi_0,\chi_3}(q) + 5E_{\chi_1,\chi_2}(q) - 2E_{\chi_2,\chi_1}(q) + 10E_{\chi_3,\chi_0}(q)
-12D_1(q)\big) ,
\end{aligned}\\
&\begin{aligned}
\varphi (q^2) \varphi^3 (q^5)=& \frac{1}{7}\big(- E_{\chi_0,\chi_3}(q) +E_{\chi_1,\chi_2}(q) -4 E_{\chi_2,\chi_1}(q)
+4E_{\chi_3,\chi_0}(q) \big) \\
&-\frac{12}{7}\big(D_1(q)+D_2(q) + 3D_3(q) -D_4(q)\big),
\end{aligned}\\
&\begin{aligned}
\varphi(q^2)\varphi (q^5)\varphi^2 (q^{10})=& \frac{1}{7}\big(- E_{\chi_0,\chi_3}(q) + E_{\chi_1,\chi_2}(q) - 2E_{\chi_2,\chi_1}(q) + 2 E_{\chi_3,\chi_0}(q)\big)\\
&+\frac{4}{7}\big(-D_1(q)+D_2(q) - 3D_3(q)+D_4(q)\big).
\end{aligned}
\end{align*}
\end{theorem}
\section{Main results}
\begin{theorem}
Let $n \in \nn$. We have
\begin{align*}
&\begin{aligned}
N(1,1,5,5;n) = & \frac{4}{3}\sigma(n) - \frac{16}{3} \sigma (n/4) + \frac{20}{3} \sigma (n/5)
-\frac{80}{3}\sigma(n/20) + \frac{8}{3}a_1(n),
\end{aligned}\\
&\begin{aligned}
N(1,1,10,10;n) = & \frac{2}{3}\sigma(n) - \frac{2}{3} \sigma (n/2) + \frac{4}{3} \sigma (n/4) + \frac{10}{3} \sigma (n/5) - \frac{16}{3}\sigma(n/8) \\ &- \frac{10}{3}\sigma(n/10) + \frac{20}{3}\sigma(n/20) - \frac{80}{3}\sigma(n/40) + \frac{10}{3}a_1(n) \\ &+ \frac{8}{3}a_2(n) +4a_3(n),
\end{aligned}\\
&\begin{aligned}
N(1,2,5,10;n) = & \sigma(n) - \sigma (n/2) -2 \sigma (n/4) - 5 \sigma (n/5) + 8 \sigma(n/8) \\
& + 5\sigma(n/10) + 10\sigma(n/20) - 40\sigma(n/40) + a_1(n) + 2a_3(n),
\end{aligned}\\
&\begin{aligned}
N(2,2,5,5;n) = & \frac{2}{3}\sigma(n) - \frac{2}{3} \sigma (n/2) + \frac{4}{3} \sigma (n/4) + \frac{10}{3} \sigma (n/5) - \frac{16}{3}\sigma(n/8) \\
&- \frac{10}{3}\sigma(n/10) +\frac{20}{3}\sigma(n/20) - \frac{80}{3}\sigma(n/40) - \frac{2}{3}a_1(n) \\&+ \frac{8}{3}a_2(n) -4a_3(n).
\end{aligned}
\end{align*}
\end{theorem}
{\bf Proof.} The assertions follow from (1.1), (2.4) and Theorem 2.5.
\eop
\begin{theorem}
Let $n \in \nn$. Let $\sigma_{\chi_i, \chi_j}(n)$ be as in {\em(2.1)} for $ i, j \in \{0,1\}$. We have
\begin{align*}
&\begin{aligned}
N(1,1,1,5;n)= &\sigma_{\chi_0,\chi_1}(n) -2\sigma_{\chi_0,\chi_1}(n/2) -4 \sigma_{\chi_0,\chi_1}(n/4) \\
& + 5\sigma_{\chi_1,\chi_0}(n)+10\sigma_{\chi_1,\chi_0}(n/2)- 20\sigma_{\chi_1,\chi_0}(n/4),
\end{aligned}\\
&\begin{aligned}
N(1,1,2,10;n) = & -\frac{1}{2}\sigma_{\chi_0,\chi_1}(n) +\frac{1}{2} \sigma_{\chi_0,\chi_1}(n/2) - \sigma_{\chi_0,\chi_1}(n/4)
-4 \sigma_{\chi_0,\chi_1}(n/8) \\
&+\frac{5}{2}\sigma_{\chi_1,\chi_0}(n)+\frac{5}{2}\sigma_{\chi_1,\chi_0}(n/2)+ 5\sigma_{\chi_1,\chi_0}(n/4)\\
&-20\sigma_{\chi_1,\chi_0}(n/8) + 2b_2(n),
\end{aligned}\\
&\begin{aligned}
N(1,2,2,5;n)= &\frac{1}{2}\sigma_{\chi_0,\chi_1}(n) -\frac{1}{2} \sigma_{\chi_0,\chi_1}(n/2) - \sigma_{\chi_0,\chi_1}(n/4)
-4 \sigma_{\chi_0,\chi_1}(n/8) \\
& +\frac{5}{2}\sigma_{\chi_1,\chi_0}(n)+\frac{5}{2}\sigma_{\chi_1,\chi_0}(n/2)- 5\sigma_{\chi_1,\chi_0}(n/4)
+20\sigma_{\chi_1,\chi_0}(n/8) \\
&+5 b_1(n) - b_2(n),
\end{aligned}\\
&\begin{aligned}
N(1,5,5,5;n)= &\sigma_{\chi_0,\chi_1}(n) -2\sigma_{\chi_0,\chi_1}(n/2) -4 \sigma_{\chi_0,\chi_1}(n/4) \\
& + \sigma_{\chi_1,\chi_0}(n)+2\sigma_{\chi_1,\chi_0}(n/2)- 4\sigma_{\chi_1,\chi_0}(n/4),
\end{aligned}\\
&\begin{aligned}
N(1,5,10,10;n)= &\frac{1}{2}\sigma_{\chi_0,\chi_1}(n) -\frac{1}{2} \sigma_{\chi_0,\chi_1}(n/2) - \sigma_{\chi_0,\chi_1}(n/4)
-4\sigma_{\chi_0,\chi_1}(n/8) \\& +\frac{1}{2}\sigma_{\chi_1,\chi_0}(n)+\frac{1}{2}\sigma_{\chi_1,\chi_0}(n/2)- \sigma_{\chi_1,\chi_0}(n/4)
+4\sigma_{\chi_1,\chi_0}(n/8) \\&- b_1(n) +b_2(n),
\end {aligned}\\
&\begin{aligned}
N(2,5,5,10;n)=&-\frac{1}{2}\sigma_{\chi_0,\chi_1}(n) +\frac{1}{2} \sigma_{\chi_0,\chi_1}(n/2) - \sigma_{\chi_0,\chi_1}(n/4)
-4\sigma_{\chi_0,\chi_1}(n/8) \\
&+\frac{1}{2}\sigma_{\chi_1,\chi_0}(n)+\frac{1}{2}\sigma_{\chi_1,\chi_0}(n/2)+ \sigma_{\chi_1,\chi_0}(n/4)\\
& -4\sigma_{\chi_1,\chi_0}(n/8) -2 b_1(n).
\end {aligned}
\end{align*}
\end{theorem}
{\bf Proof.} The assertions follow from (1.1), (2.5) and Theorem 2.6.
\eop
\begin{theorem}
Let $n \in \nn$. Let $\sigma_{\chi_i, \chi_j}(n)$ be as in {\em(2.1)} for $ i, j \in \{0,2\}$. Then
\begin{align*}
&\begin{aligned}
N(1, 1, 1,2;n) =&-2 \sigma_{\chi_0,\chi_2}(n) +8 \sigma_{\chi_2,\chi_0}(n),
\end{aligned}\\
&\begin{aligned}
N(1,1,5,10 ;n) = & \frac{2}{13}\big(2 \sigma_{\chi_0,\chi_2}(n) -15\sigma_{\chi_0,\chi_2}(n/5) +8\sigma_{\chi_2,\chi_0}(n) + 60\sigma_{\chi_2,\chi_0}(n/5)\big) \\&
+ \frac{8}{13}\big(6 c_1(n) - 4 c_2(n) - 3 c_3(n) + 4 c_4(n)\big),
\end{aligned}\\
&\begin{aligned}
N(1, 2, 2,2;n) =&-2 \sigma_{\chi_0,\chi_2}(n) +4 \sigma_{\chi_2,\chi_0}(n),
\end{aligned}\\
&\begin{aligned}
N(1, 2, 5,5;n) =&-\frac{2}{13}\big(3 \sigma_{\chi_0,\chi_2}(n) +10\sigma_{\chi_0,\chi_2}(n/5) - 12 \sigma_{\chi_2,\chi_0}(n) +40\sigma_{\chi_2,\chi_0}(n/5) \big) \\
& + \frac{8}{13}\big(-2 c_2(n) - 5 c_3(n) + c_4(n)\big),
\end{aligned}\\
&\begin{aligned}
N(1,2,10,10;n) = &\frac{2}{13}\big(-3 \sigma_{\chi_0,\chi_2}(n) -10\sigma_{\chi_0,\chi_2}(n/5) +6\sigma_{\chi_2,\chi_0}(n) - 20\sigma_{\chi_2,\chi_0}(n/5)\big)\\
&+ \frac{4}{13}\big(2 c_1(n) - 2 c_3(n) + 5c_4(n)\big),
\end{aligned}\\
&\begin{aligned}
N(2,2,5,10;n)= &\frac{2}{13}\big(2 \sigma_{\chi_0,\chi_2}(n) -15\sigma_{\chi_0,\chi_2}(n/5) +4 \sigma_{\chi_2,\chi_0}(n) + 30\sigma_{\chi_2,\chi_0}(n/5)\big)\\&
+ \frac{4}{13}\big(-4 c_1(n) + 12 c_2(n) + 8c_3(n) - 3 c_4(n)\big).
\end {aligned}
\end{align*}
\end{theorem}
{\bf Proof.} The assertions follow from (1.1), (2.6) and Theorem 2.7.
\eop
\begin{theorem}
Let $n \in \nn$. Let $\sigma_{\chi_i, \chi_j}(n)$ be as in {\em(2.1)} for $ i, j \in \{0,1,2,3\}$. Then
\begin{align*}
&\begin{aligned}
N(1,1,1,10;n)=& \frac{1}{7}\big(- \sigma_{\chi_0,\chi_3}(n) -5\sigma_{\chi_1,\chi_2}(n) +4 \sigma_{\chi_2,\chi_1}(n) +20\sigma_{\chi_3,\chi_0}(n)\big)
\\&+\frac{4}{7}\big(-3d_1(n) +15d_2(n) -15 d_3(n)+ 9d_4(n)\big),
\end{aligned} \\
&\begin{aligned}
N(1,1,2,5;n)= & \frac{1}{7}\big(- \sigma_{\chi_0,\chi_3}(n) +5\sigma_{\chi_1,\chi_2}(n) - 4 \sigma_{\chi_2,\chi_1}(n) + 20\sigma_{\chi_3,\chi_0}(n)\big)
\\&+\frac{8}{7}\big(-d_1(n) + 2 d_4(n)\big),
\end{aligned}\\
&\begin{aligned}
N(1,2,2,10;n)= & \frac{1}{7}\big(- \sigma_{\chi_0,\chi_3}(n) -5\sigma_{\chi_1,\chi_2}(n) +2 \sigma_{\chi_2,\chi_1}(n) +10 \sigma_{\chi_3,\chi_0}(n) \big)
\\&+\frac{4}{7}\big(d_1(n) + 5d_2(n) + 5 d_3(n)+ d_4(n)\big),
\end{aligned}\\
&\begin{aligned}
N(1,5,5,10;n)= &\frac{1}{7}\big(- \sigma_{\chi_0,\chi_3}(n) -\sigma_{\chi_1,\chi_2}(n) +4 \sigma_{\chi_2,\chi_1}(n)
+4\sigma_{\chi_3,\chi_0}(n)\big) \\
&+\frac{8}{7}\big(d_2(n) -d_3(n)+d_4(n)\big),
\end{aligned}\\
&\begin{aligned}
N(1,10,10,10;n) = &\frac{1}{7}\big(- \sigma_{\chi_0,\chi_3}(n) -\sigma_{\chi_1,\chi_2}(n) +2 \sigma_{\chi_2,\chi_1}(n) +2\sigma_{\chi_3,\chi_0}(n)\big) \\
& +\frac{12}{7}\big(d_2(n) +d_3(n)+d_4(n)\big),
\end{aligned}\\
&\begin{aligned}
N(2,2,2,5 ;n) = & \frac{1}{7}\big(- \sigma_{\chi_0,\chi_3}(n) +5\sigma_{\chi_1,\chi_2}(n) -2 \sigma_{\chi_2,\chi_1}(n) +10\sigma_{\chi_3,\chi_0}(n) \big)\\
&-\frac{12}{7}d_1(n),
\end{aligned}\\
&\begin{aligned}
N(2,5,5,5;n) =& \frac{1}{7}\big(- \sigma_{\chi_0,\chi_3}(n) +\sigma_{\chi_1,\chi_2}(n) -4 \sigma_{\chi_2,\chi_1}(n)
+4\sigma_{\chi_3,\chi_0}(n)\big) \\
&+\frac{12}{7}\big(-d_1(n) -d_2(n) -3d_3(n)+d_4(n)\big),
\end{aligned}\\
&\begin{aligned}
N(2,5,10,10;n)= &\frac{1}{7}\big(- \sigma_{\chi_0,\chi_3}(n) +\sigma_{\chi_1,\chi_2}(n) -2 \sigma_{\chi_2,\chi_1}(n) +2\sigma_{\chi_3,\chi_0}(n)\big) \\
&+\frac{4}{7}\big(-d_1(n) +d_2(n) -3d_3(n)+d_4(n)\big).
\end{aligned}
\end{align*}
\end{theorem}
{\bf Proof.} The assertions follow from (1.1), (2.7), (2.8) and Theorem 2.8.
\eop
\section{Remarks}
\begin{remark}
{\rm
Replacing $q$ by $-q$ in $\varphi^3 (q) \varphi (q^5)$ in Theorem 2.6, we have
\begin{align}
\varphi^3 (-q) \varphi (-q^5) =& E_{\chi_0,\chi_1}(-q) -2E_{\chi_0,\chi_1}(q^2) -4 E_{\chi_0,\chi_1}(q^4) \nonumber \\
& + 5E_{\chi_1,\chi_0}(-q) + 10E_{\chi_1,\chi_0}(q^2) -20E_{\chi_1,\chi_0}(q^4).
\end{align}
Appealing to Theorem 2.3, we obtain
\begin{align}
&E_{\chi_0 , \chi_1} (-q) = - E_{\chi_0 , \chi_1} (q) - 2E_{\chi_0 , \chi_1} (q^2) + 4 E_{\chi_0 , \chi_1} (q^4), \\
&E_{\chi_1 , \chi_0} (-q) = - E_{\chi_1 , \chi_0} (q) + 2E_{\chi_1 , \chi_0} (q^2) + 4 E_{\chi_1 , \chi_0} (q^4) .
\end{align}
Substituting (4.2) and (4.3) in (4.1), we obtain
\begin{align}
&\varphi^3 (-q) \varphi (-q^5) =-E_{\chi_0,\chi_1}(q) -4E_{\chi_0,\chi_1}(q^2) - 5 E_{\chi_1,\chi_0}(q) + 20 E_{\chi_1,\chi_0}(q^2).
\end{align}
It can easily be seen that
\begin{align}
&-E_{\chi_0,\chi_1}(q) -4E_{\chi_0,\chi_1}(q^2) = 1 + \sum_{n=1}^{\infty} \Big( \sum_{d\mid n} (-1)^d \dqu{5}{d} d \Big) q^n, \\
&- E_{\chi_1,\chi_0}(q) + 4 E_{\chi_1,\chi_0}(q^2) = \sum_{n=1}^{\infty} \Big( \sum_{d\mid n} (-1)^d \dqu{5}{n/d} d \Big) q^n.
\end{align}
Now, appealing to (1.1) and (4.4)--(4.6), we obtain
\begin{align*}
&\sum_{n=0}^{\infty} N(1,1,1,5;n) (-q)^n
=\varphi^3 (-q) \varphi (-q^5) \nonumber \\
&\hspace{20mm}= 1 + \sum_{n=1}^{\infty} \Big( \sum_{d\mid n} (-1)^d \dqu{5}{d} d \Big) q^n
+ 5 \sum_{n=1}^{\infty} \Big( \sum_{d\mid n} (-1)^d \dqu{5}{n/d} d \Big) q^n,
\end{align*}
from which we deduce
\begin{align*}
N(1,1,1,5;n)= \sum_{d\mid n} (-1)^{n+d} \dqu{5}{d} d + 5 \sum_{d\mid n} (-1)^{n+d} \dqu{5}{n/d} d,
\end{align*}
which agrees with known results, see for example \cite[Theorem 5.1]{alaca2007quaternary}.
Similarly, one can show that our formula for $N(1,5,5,5;n)$ given in Theorem 3.2 agrees with the result in
\cite[Theorem 6.1]{alaca2007quaternary}.
}
\end{remark}
\begin{remark}
{\rm
Appealing to Lemma 2.1 and Theorem 2.3, we obtain the following identities:
\begin{align*}
&L(q)-4L(q^4)=\frac{1}{8}\, \frac{ \eta^{20} (2z) }{\eta^8 (z) \eta^8(4z)} ,\\
& E_{\chi_0,\chi_1}(q) =-\frac{1}{5} \, \frac{ \eta^5 (z) }{\eta (5z)} ,\\
& E_{\chi_1,\chi_0}(q) = \frac{ \eta^5 (5z) }{\eta (z)} ,\\
& E_{\chi_0,\chi_2}(q) =- \frac{1}{2} \, \frac{ \eta^2(z) \eta(2z) \eta^3 (4z)}{\eta^2 (8z)} ,\\
& E_{\chi_2,\chi_0}(q) = \frac{ \eta^3(2z) \eta(4z) \eta^2 (8z)}{\eta^2 (z)} ,\\
& E_{\chi_0,\chi_1}(q) +4E_{\chi_0,\chi_1}(q^2)=- \frac{ \eta (z) \eta^2 (2z) \eta^3 (5z)}{\eta^2 (10z)} ,\\
& E_{\chi_1,\chi_0}(q) +E_{\chi_1,\chi_0}(q^2)= \frac{ \eta^3 (2z) \eta^2 (5z) \eta(10z)}{\eta^2(z)} ,\\
& E_{\chi_1,\chi_0}(q) - 4E_{\chi_1,\chi_0}(q^2)= \frac{ \eta^3 (z) \eta (5z) \eta ^2 (10z)}{\eta^2 (2z)} , \\
& E_{\chi_0,\chi_2}(q) - 2E_{\chi_2,\chi_0}(q)= -\frac{1}{2}\, \frac{ \eta^{13} (4z) }{\eta^2 (z) \eta(2z) \eta^6 (8z)},\\
& E_{\chi_0,\chi_2}(q) - 4E_{\chi_2,\chi_0}(q)= -\frac{1}{2}\, \frac{ \eta^{13} (2z) }{\eta^6 (z) \eta(4z) \eta^2 (8z)}, \\
& E_{\chi_0,\chi_1}(q) -2E_{\chi_0,\chi_1}(q^2)-4E_{\chi_0,\chi_1}(q^4)= \frac{ \eta^5 (2z) \eta^7 (10z)}{\eta(z)\eta(4z) \eta^3 (5z) \eta^3 (20z)} ,\\
& E_{\chi_1,\chi_0}(q) +2E_{\chi_1,\chi_0}(q^2)-4E_{\chi_1,\chi_0}(q^4)= \frac{ \eta^7 (2z) \eta^5 (10z)}{\eta^3(z)\eta^3(4z) \eta (5z) \eta (20z)}.
\end{align*}
}
\end{remark}
\begin{remark}
{\rm
Set $a:=\varphi(q)$, $b:=\varphi(q^2)$, $c:=\varphi(q^5)$ and $d:=\varphi(q^{10})$.
We obtain the following identities from Theorem 2.8:
\begin{align*}
&ad(-a^2-b^2+5c^2-5d^2)+bc(5a^2-8b^2-5c^2+10d^2)=12D_1(q),\\
&ad(2a^2-b^2-4c^2+d^2)+bc(-a^2+b^2-5c^2+7d^2)=24D_2(q),\\
&ad(-a^2+2b^2-7c^2+10d^2)+bc(-a^2+4b^2+c^2-8d^2)=48D_3(q),\\
&ad(a^2-8b^2-5c^2+20d^2)+bc(7a^2-10b^2+5c^2-10d^2)=48D_4(q).
\end{align*}
}
\end{remark}
\begin{remark}
{\rm
It would be interesting to determine general formulas for the number of representations of a positive integer $n$ by the quaternary quadratic forms with coefficients in $\{1, p, q, pq\}$,
where $p$ and $q$ are distinct prime numbers.
The case when $p=2$ and $q=7$ is treated in \cite{Ayse-Jamilah}. \\
}
\end{remark}
\noindent{\bf Acknowledgements.}
The research of the first author was supported by a Discovery Grant
from the Natural Sciences and Engineering Research Council of Canada (RGPIN-418029-2013).
|
1,116,691,497,878 | arxiv | \section{Introduction}
ESA's Gaia mission has been in routine operations for two years. The first
intermediate data release (GDR1) occurred in September 2016 and is based on the
first~14 months of data gathered by the spacecraft. Such a time baseline is
insufficient to reliably disentangle proper motion and parallax, so the primary
data product of GDR1 is a map of around one billion stars to V~$\sim20$
with mean epoch positions and single passband photometry but with neither
proper motions nor parallaxes. However, by using the Tycho data from the precursor Hipparcos
mission it is possible to anchor the positions for over~2 million brighter stars
(V~$<11.5$) at a mean early epoch around~1991 and hence solve for proper motion
and parallax. This yields trigonometric distances for $20\times$ as many stars as in the
Hipparcos catalogue with similar precision~\citep{2015A&A...574A.115M}. These
Tycho--Gaia Astrometric Solution (TGAS) sources are a bonus component of GDR1. Data are released to
the community via a central Science Archive system and associated partner Data
Centers (e.g.~CDS) with extensive exploitation facilities and
documentation\footnote{\url{http://archives.esac.esa.int/gaia/}}.
\section{White Dwarfs in Hipparcos and Tycho}
There are~10 or so white dwarfs (WDs) in the Hipparcos catalogue (which is at best complete to V~$\sim9$)
with parallaxes measured at a 5$\sigma$ level. The number of WDs present in the
Tycho catalogue (90\% complete to V~$\sim11.5$) is larger of course, so TGAS will
deliver in itself a major step forward for WD science via the addition of
precision distances (and therefore WD masses, radii, etc). However the TGAS WDs
will be biased towards the hot, bright end of the WD luminosity function simply
because of the bright magnitude limit of that catalogue coupled with the extreme
faintness of cooler WDs.
\section{White Dwarfs in common proper motion with Tycho stars}
One method for expanding significantly the sample of cool WDs with measured
distances in GDR1 is to find those with high proper motion using existing
ground--based catalogues, e.g. the digitised optical all--sky Schmidt photographic
surveys, and associate them with Tycho stars via common proper motion. This can
be done with usefully high confidence given a sufficiently high lower proper
motion cut such that in any given part of the sky the likelihood of finding by
chance two nearby stars with the same high proper motion is negligibly small.
Figure~\ref{fig:rpm} illustrates this using the SuperCOSMOS Science
Archive\footnote{\url{http://ssa.roe.ac.uk}} RECONS proper motion survey plus
lower proper motion supplement~\citep{2004AJ....128..437H,2011MNRAS.417...93R}
down to a lower proper motion
limit of~80~mas/yr. Wide binary candidates with separations of up to~16.7 arcmin
(1000~arcsec, corresponding to maximal separations of order 100,000~AU at 100~pc)
can be identified in this way and their position in reduced proper motion /
colour space shows some of them to be likely cool WDs. In Figure~\ref{fig:example} the image and
adjacent thumbnails show one example, an R~=~19.7 object with R--I~=~1.3 having
a proper motion within 1.5$\sigma$ of that of TYC~2734--750--1 (central in the main
picture) which is more than~8 magnitudes brighter in~R. Using common proper
motion it will be possible to infer accurately the distances to a large sample
of cool WDs prior to the availability of direct trigonometric parallaxes in
GDR2.
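In practice the selection reduces to a pair of cuts plus a consistency test
on the proper motions. A minimal sketch, with illustrative names and the
survey values quoted above ($\mu>80$~mas/yr, separations below
1000~arcsec, proper motions agreeing to within 1.5$\sigma$; the exact
statistic used for the 1.5$\sigma$ agreement is our assumption, with the
per-component errors added in quadrature):
\begin{verbatim}
import math

def is_cpm_pair(mu1, mu2, sig1, sig2, sep_arcsec,
                mu_min=80.0, nsigma=1.5, sep_max=1000.0):
    # mu1, mu2: (mu_RA, mu_Dec) in mas/yr; sig1, sig2: 1-sigma errors
    if math.hypot(*mu1) < mu_min or sep_arcsec > sep_max:
        return False
    chi = [(a - b) / math.hypot(s, t)
           for a, b, s, t in zip(mu1, mu2, sig1, sig2)]
    return math.hypot(*chi) < nsigma
\end{verbatim}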
\articlefigure[scale=0.8,clip=true,viewport=0 0 418 500]{Hambly_N_Fig1.pdf}{fig:rpm}
{Reduced proper motion diagram for Tycho2 catalogue stars (blue) and 1.5$\sigma$
common proper motion objects from SuperCOSMOS (red). The usual dwarf, subdwarf
and WD locii trace from the upper left to the lower right in this RPM--colour space.
A search radius of~1000 arcsec was used along with a lower proper motion cut of~80~mas/yr.
All the red objects share common proper motion with a Tycho star.}
\articlefigure[scale=0.4,angle=270]{Hambly_N_Fig2.pdf}{fig:example}
{Example of a candidate wide binary identified via common proper motion between
TYC~2734--750--1 (R~=~11.3) and a cool WD (R~=~19.7; R--I~=~1.3). The wide angle
image is~30 arcmin on each side while the thumbnails are~1 arcmin. Thumbnails on
the left are from the POSS--E plate scan while those on the right are from the
2nd epoch red survey plate (exposed some~40 years later).}
\section{White dwarfs in GDR2 and subsequent releases}
Of course the major leap forward for all fields of WD study will
come with GDR2 onwards, with the availability of full five--parameter
astrometric solutions and (some) radial velocities~\citep{2007ASPC..372..139J}. Even then full 6D kinematics
will not be available for the majority of isolated cool WDs because of the
unknown and generally unmeasurable radial velocity component. Moreover accurate
model--independent ages are difficult to establish for such objects which leads
to, for example, poor coverage in the Initial--Final Mass Relation. If, however,
a cool WD can be
associated with a normal star via common proper motion then the distance and
radial velocity will nearly always be more accurately measured (or indeed may
only be measurable) for the system using the brighter component. If the system
can be identified kinematically with a cluster, association or moving group of
known age and metallicity etc.\ then so much the better and hence the technique
of `benchmarking' systems~\citep{2013EPJWC..4706002G,2012ApJ...746..144Z} identified via common proper motion will
remain a valuable tool for WD science in the Gaia era.
An interesting aspect of this work is in the implementation details of the
CPM search algorithm. Catalogue pairing is, of course, a solved problem in
computer science via `plane sweep' algorithms that deliver true $O(N\log M)$
performance~(see~\citealt{2005ASPC..347..346D} and references therein) for
small search radii. For the larger search radii needed in this application,
we have further refined the implementation using the `zoned join' technique
developed by~\cite{gray} where the input catalogue(s) are first split into
separate Declination zones of extent equal to the maximum search radius
required to identify CPM wide binaries (in this case 1000~arcsec). This is
particularly important when scaling up to a self--join of a billion--row
scale catalogue, as will be the case when GDR2 is released.
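A minimal sketch of the zoned join is given below (illustrative names; RA
wrap-around, the in-zone sweep in RA, and spherical geometry are all
omitted for brevity):
\begin{verbatim}
from collections import defaultdict
import math

def zoned_pairs(ra, dec, radius_arcsec=1000.0):
    # all pairs (i, j), i < j, separated by less than the search radius;
    # zones are one radius tall, so a source need only be compared with
    # its own zone and the two neighbouring zones
    r = radius_arcsec / 3600.0
    zid = [int((d + 90.0) // r) for d in dec]
    zones = defaultdict(list)
    for i, z in enumerate(zid):
        zones[z].append(i)
    pairs = []
    for i in range(len(ra)):
        for z in (zid[i] - 1, zid[i], zid[i] + 1):
            for j in zones[z]:
                if j <= i:
                    continue
                dra = (ra[i] - ra[j]) * math.cos(math.radians(dec[i]))
                if math.hypot(dra, dec[i] - dec[j]) <= r:
                    pairs.append((i, j))
    return pairs
\end{verbatim}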
\acknowledgements This research has made use of data obtained from the SuperCOSMOS Science Archive, prepared and hosted by the Wide Field Astronomy Unit, Institute for Astronomy, University of Edinburgh, which is funded by the UK Science and Technology Facilities Council.
|
1,116,691,497,879 | arxiv | \section{Introduction}
\label{sec_intro}
Active Galactic Nuclei (AGN) are among the most energetic phenomena known in the universe. They can be identified with various observational techniques in different bands. Radio selection is sensitive to the powerful jets of luminous radio-loud AGN. X-ray emission has strong penetrating power and suffers little dilution from the host galaxies, and is thus the most efficient way of identifying low-luminosity AGN \citep[e.g.,][]{Xue2016,Luo2017}. Mid-IR can identify both obscured and un-obscured AGN \citep[e.g.,][]{Stern2012}. However, different bands may represent different phases of AGN evolution, and an AGN sample identified in a single band can be contaminated by strong starbursts. Deep UV/optical observations, especially spectroscopic ones, remain a unique window for studying the evolution of AGN.
Traditional optical spectroscopic surveys of AGN usually select targets based on photometric observations, i.e., detections in broad band imaging with point-like morphologies to distinguish them from extended nearby galaxies, and blue colors to characterize the power-law continuum shape \citep[e.g.,][]{Richards2006}. This photometric selection biases the AGN sample: it fails to include the potential AGN population with optically faint host galaxies and high equivalent widths (EWs). These optically faint AGN could either be intrinsically less massive, which could make them outliers in the super-massive black hole (SMBH) - host relation \citep[see][for a review of the ``co-evolution'' between SMBH and their host galaxies]{Kormendy2013}, or
red and obscured AGN.
Whether the co-evolution stands for all SMBHs and their host galaxies remains an open question. Early works in the 1990s reported the discovery of ``naked'' quasars with no host galaxies \citep[e.g.,][]{Bahcall1994}. However, these ``naked'' quasars were later found to be hosted by normal elliptical galaxies with improved smoothing of the HST images \citep{McLure1999,McIntosh1999}. Simulations have suggested another special class of SMBHs that may not follow the SMBH-host correlation: ejected SMBHs \citep{Loeb2007,Haiman2009,Ricarte2021}. In a gas-rich merger, the remnant SMBH of a binary could be ejected carrying an accretion disk. The remnant of an SMBH binary with similar masses could recoil at speeds of thousands of $\>{\rm km}\,{\rm s}^{-1}$. The ejected SMBH could traverse a considerable distance from the merged galaxy and be observed as an off-centered quasar if it happens to pass through a dense molecular cloud.
Red and obscured AGN are very important candidates for understanding the early quasar phases, galaxy quenching, and the enrichment of the intergalactic medium.
Interactions between galaxies can help remove the angular momentum of the cold gas in the outskirts of galaxies. The inflowing gas can fuel the star formation in the galaxies, feed the central SMBH, and trigger AGN. The systems are then dusty, gaseous, and usually observed as red and obscured AGN. The radiation from the AGN could in turn power feedback and affect the evolution of their host galaxies. Strong outflows can clear up the gas reservoir in the host galaxies, shut down the star formation, enrich the environments, and make the AGN visible in the optical. Outflows are widely invoked to explain the co-evolution between SMBHs and their host galaxies, the lack of luminous quasars in the luminosity function, and the quenching of star formation in galaxies.
Spatially extended ionized gas and powerful outflow winds have been observed in a few high-redshift obscured quasars \citep{Cai2017,Fluetsch2021,Vayner2021,Lau2022}. These sources were pre-identified in large surveys and followed up with hours of observation time for spatially resolved spectra on the Large Binocular Telescope/Medium-Dispersion Grating Spectroscopy, Keck/Keck Cosmic Web Imager, Very Large Telescope/Multi Unit Spectroscopic Explorer, etc.
In order to search for AGN with high EW, we need a spectroscopic survey that does not require continuum imaging pre-selection, such as the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX, \citealt{Gebhardt2021}). HETDEX is a spectroscopic survey with no photometric pre-selection (magnitude/color/morphology). All sources within the footprint of the survey are observed with a set of 78 Integral Field Units (IFUs) consisting of 34,944 fibers and an 18-minute exposure. HETDEX enables spectroscopic detection of the AGN hosted by galaxies that may be fainter than the detection limit of the corresponding photometric observations. Additionally, there is no need to perform follow-up observations of the extended sources on other instruments; the spatially resolved spectra can be obtained directly.
In this paper, we introduce an AGN (HETDEX J115031.93+504850.4, shortened to J1150+5048 in this paper) with extremely high EW ($\rm EW_{Ly\alpha+N\,V,rest} \gtrsim 921\,\AA$) at $z\sim2.24$ from the HETDEX survey. Section \ref{sec_data} briefly summarizes the first AGN catalog of the HETDEX survey.
In Section \ref{sec_info}, we present the basic information of J1150+5048, why it is selected for study, and its detailed spatially resolved properties with narrow-band flux maps. We discuss the possible explanations for this high EW AGN in Section \ref{sec_discuss}. We summarize our discovery in Section \ref{sec_summary}.
\section{The HETDEX AGN Catalog}
\label{sec_data}
HETDEX \citep{Gebhardt2021} is an ongoing spectroscopic survey (3500 \AA\ - 5500 \AA) on the upgraded 10-m Hobby-Eberly Telescope (HET, \citealt{Hill2021}). It uses the Visible Integral field Replicable Unit Spectrograph \citep[VIRUS;][]{Hill2021} to record spectra of every object falling within its field of view. A typical exposure contains 34,944 spectra, most of which capture ``blank'' sky. The primary goal of this survey is to measure the large-scale structure at $z\sim3$ using $\rm Ly\alpha$ emitters (LAEs) as tracers. The HETDEX survey is expected to be active from 2017 to 2024, and eventually will cover 540 deg$^2$ with a filling factor of 1 in 4.6.
The first AGN catalog of the HETDEX survey is presented in \cite{Liu2022} (Paper I\null). Here we briefly summarize the sample identification. AGN candidates are identified by requiring at least two significant AGN emission lines, such as the $\rm Ly\alpha$ and \ion{C}{4} $\lambda1549$ line pair, or with a single broad emission line with FWHM$>$1000~km\,s$^{-1}$, free of any pre-selection based on imaging (magnitude, morphology, or color). Each candidate AGN is then confirmed by visual inspection. This catalog contains 5,322 AGN, covering an effective sky area of 30.61 deg$^2$ and a redshift range of $0.25<z<4.32$. Measurements from the overlap regions with the Hyper Suprime-Cam (HSC) imager of the Subaru telescope from the HSC-HETDEX joint survey (HSC-DEX; $5\sigma$ depth is $r\sim25$ mag; S15A, S17A, S18A, PI: A. Schulze, and S19B, PI: S. Mukae) and the HSC Subaru Strategic Program (HSC-SSP; $5\sigma$ depth is $r\sim26$ mag; \citealt{Aihara2019}) show that the median $r$-band magnitude of our AGN catalog is 21.6 mag, with 34\% of the objects having $r > 22.5$. Approximately 2.6\% of the HETDEX AGN are not detected at $>5\sigma$ confidence.
\section{J1150+5048}
\label{sec_info}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figure/ew.pdf}
\caption{Distributions of the rest-frame EW of the Ly$\alpha+$\ion{N}{5} $\lambda1241$ emission. HETDEX AGN with continuum detection ($>1\sigma$) are shown in the red histogram. HETDEX AGN without continuum detection ($<1\sigma$) are shown in the green histogram. SDSS quasars are indicated by purple histogram. The number of SDSS quasars in each bin is divided by 100 for presentation purposes. $\rm EW_{(Ly\alpha+N\,V),rest}$ of J1150+5048 is marked by the downward triangle.}
\label{f_ew}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figure/spec.pdf}
\caption{The HETDEX spectrum of J1150+5048 at
the HETDEX coordinate in Table \ref{t_info}.
Black data points with error bars are the observed spectrum shown in the rest-frame. The red line is our best fit to the spectrum. The green shaded areas indicate the continuum windows used in the continuum fit. The cyan dashed line is our best-fit continuum model. The four panels in the bottom row are the continuum subtracted local areas of the four emission lines.}
\label{f_spec}
\end{figure*}
\begin{table*}
\centering
\setlength{\tabcolsep}{9pt}
\begin{tabular}{|c|c|c|c|c|c|}
\hline\hline
\multicolumn{6}{c}{Coordinate} \\ \hline
\multicolumn{2}{|c|}{ (R.A., Dec.) (HETDEX)} & \multicolumn{2}{c|}{ (R.A., Dec.) (WISE)} & \multicolumn{2}{c|}{redshift} \\ \hline
\multicolumn{2}{|c|}{ (177.633030, 50.813999)} & \multicolumn{2}{c|}{ (177.632964, 50.814293)} & \multicolumn{2}{c|}{2.24} \\ \hline
\hline
\multicolumn{6}{c}{Rest-frame equivalent width (\AA)} \\ \hline
$\rm EW_{Ly\alpha}$ & $\rm EW_{N\,V}$ & $\rm EW_{Ly\alpha+N\,V}$ & $\rm EW_{Si\,IV+O\,IV]}$ & $\rm EW_{C\,IV}$ & $\rm EW_{He\,II}$ \\ \hline
$>$747 & $>$174 & $>$921 & -- & $>$177 & $>$86 \\ \hline
\hline
\multicolumn{6}{c}{Broad-band Photometry} \\ \hline
$g_{\rm AB}$ & $r_{\rm AB}$ & $\rm W1_{Vega}$ & $\rm W2_{Vega}$ & $\rm W3_{Vega}$ & $\rm W4_{Vega}$ \\ \hline
$>$24.23 & 24.57$\pm$0.13 & 17.50$\pm$0.14 & 16.82$\pm$0.29 & 12.47$\pm$0.38 & $>$8.93 \\ \hline
\end{tabular}
\caption{Basic information for J1150+5048}
\begin{tablenotes}[flushleft]
\scriptsize
\item 1) The Rest-frame EWs are all lower limits, because the continuum level in the [1275, 1290]\,\AA\ window at $(1.4\pm8)\times10^{-18}\, erg\,s^{-1}cm^{-2}\text{\AA}^{-1}$ used in the calculation of EWs is a non-detection.
\item 2) $g_{\text{AB}}$ is measured from the HETDEX spectrum (see Davis et al. in preparation for more details) at the HETDEX coordinate. The continuum of J1150+5048 is a non-detection in $g$-band. 24.23 is $1\sigma$ lower limit.
\item 3) $r_{\text{AB}}$ is measured from HSC-DEX at the WISE pointing. The $5\sigma$ depth of this field is $r=25.12$ (AB mag).
\item 4) W1-W4 are taken from the ALLWISE catalog \citep{Cutri2014}. W1, W2, W3, and W4 are the filters at 3.4, 4.6, 12, and 22 \micron, respectively. W4 is a non-detection.
\end{tablenotes}
\label{t_info}
\end{table*}
Figure \ref{f_ew} shows the distribution of the rest-frame EW of the Ly$\alpha+$\ion{N}{5} $\lambda1241$ emission of the HETDEX AGN with continuum detection ($>1\sigma$; red), that of the HETDEX AGN without continuum detection ($<1\sigma$; green), and that of the latest SDSS quasar catalog (purple; \citealt{Paris2018,Rakshit2020}). As with the SDSS quasars, the number of the HETDEX AGN detected with continuum decreases with $\rm EW_{(Ly\alpha+N\,V),rest}$. The green histogram shows the distribution of the lower limits of $\rm EW_{(Ly\alpha+N\,V),rest}$ for the HETDEX AGN not detected with continuum. In this paper, we study J1150+5048 as a representative case of the high EW AGN population with no continuum detection, as it is detected with significant emission lines ($>7\sigma$) at Ly$\alpha+$\ion{N}{5} $\lambda1241$ and \ion{C}{4} $\lambda1549$, and a moderate emission line ($\sim4\sigma$) at \ion{He}{2} $\lambda1640$ (Figure \ref{f_spec}). Additionally, it has very deep $r$-band imaging from HSC-DEX. It is a type-II AGN with narrow lines ($\rm FWHM<1000\,\>{\rm km}\,{\rm s}^{-1}$). The fitting of the spectrum is detailed in Paper I. We fit a power-law continuum to the wavelength windows highlighted by the green shaded areas in Figure \ref{f_spec}. The continuum-subtracted emission lines are then fit with two Gaussian profiles if there is a significant broad component; otherwise, a single narrow Gaussian profile is fit for each emission line.
Table \ref{t_info} lists the basic information of J1150+5048.
The EWs are measured directly from the ratio between the line flux and the best-fit continuum at the lines from the spectrum in Figure \ref{f_spec}. The continuum level is $<1\sigma$ in the spectrum, so the measured EWs are only lower limits. The lower limit of the rest-frame EW of the Ly$\alpha+$\ion{N}{5} $\lambda1241$ emission of J1150+5048 is 921\,\AA. This is significantly higher than the typical $\rm EW_{Ly\alpha+N\,V}$ of $\sim$100\,\AA, where the number of SDSS quasars peaks in Figure \ref{f_ew}.
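The limits arise mechanically: when the continuum is undetected, dividing
the line flux by the $1\sigma$ continuum level gives a lower limit on the
EW. A minimal sketch (the line flux below is a placeholder chosen only to
be consistent with the quoted limit, not a measured value):
\begin{verbatim}
def rest_ew_limit(line_flux, cont, cont_err, z):
    # EW = F_line / f_cont; with the continuum below 1 sigma, the
    # 1-sigma level in the denominator turns the EW into a lower limit
    level = cont if cont > cont_err else cont_err
    return line_flux / level / (1.0 + z)   # observed -> rest frame

# Table 1 continuum window: (1.4 +/- 8) x 1e-18 erg/s/cm^2/A at z = 2.24;
# a Ly-alpha + N V line flux of ~2.4e-14 erg/s/cm^2 (placeholder) then
# gives EW_rest > ~920 A
print(rest_ew_limit(2.4e-14, 1.4e-18, 8e-18, 2.24))
\end{verbatim}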
\begin{figure*}
\centering
\includegraphics[width=0.38\textwidth]{figure/hsc.pdf}
\includegraphics[width=0.38\textwidth]{figure/hsc2.pdf}\\
\includegraphics[width=0.46\textwidth]{figure/lya.pdf}\\
\includegraphics[width=0.46\textwidth]{figure/civ.pdf}
\includegraphics[width=0.46\textwidth]{figure/heii.pdf}\\
\caption{Top panels: The $r$-band image and the expanded version from HSC-DEX. The pointing of the IR detection in the WISE survey is marked with an orange square. The green diamond marks another HSC-DEX $r$-band detection (22.5 mag) that is covered by the HETDEX fibers in this field.
Middle panel: The narrow-band image of the $\rm Ly\alpha$ emission line derived from the HETDEX data. The small blue plus signs mark the positions of all HETDEX fibers in this field. The cyan circle marks the density peak of the Ly$\alpha$ line, which is also the recorded HETDEX coordinate in Table \ref{t_info}. The magenta triangle is a random position chosen to demonstrate the spatial changes of the Ly$\alpha$ line profiles in Figure \ref{f_BR}. Bottom panels: The narrow band images of the \ion{C}{4} $\lambda1549$ (left) and \ion{He}{2} $\lambda1640$ (right) emission lines. The seeing of the HETDEX observation is $1\farcs6$ (FWHM). All images are centered at the coordinate of WISE (continuum center) in Table \ref{t_info}.}
\label{f_image}
\end{figure*}
Figure \ref{f_image} displays the $r$-band cutout from HSC-DEX in the upper panels and the narrow band images ($\pm 20$\,\AA) of the $\rm Ly\alpha$, \ion{C}{4} $\lambda1549$, \ion{He}{2} $\lambda1640$ emission lines from HETDEX in the remaining three panels. Only spatial pixels with the signal-to-noise ratio of emission lines greater than 1 are used to generate the narrow band images.
The flux from the $\rm Ly\alpha$ emission line region is highly spatially resolved in J1150+5048 as shown by the middle panel of Figure \ref{f_image}. The seeing of this HETDEX observation is $1\farcs6$ (FWHM). The Ly$\alpha$ emission is clearly more extended than the other two emission lines. It spans a region of $\sim10\arcsec$ (85 kpc) in diameter.
The recorded R.A. and Dec. of HETDEX in Table \ref{t_info} is the coordinate where the $\rm Ly\alpha$ line flux is highest (the cyan circle in Figure \ref{f_image} and Figure \ref{f_BR}). The recorded R.A. and Dec. of WISE in Table \ref{t_info} is the coordinate of the continuum detection, taken from the WISE catalog \citep{Cutri2014}. All images in Figure \ref{f_image} and Figure \ref{f_BR} are centered at the WISE coordinate. The offset between the emission-line center (HETDEX coordinate) and the continuum center (WISE coordinate) is $1\farcs1$.
The emission lines of J1150+5048 are strongly bimodal with a red peak at $z\sim2.249$ and a blue peak at $z\sim2.236$ as shown in Figure \ref{f_BR}. The recorded redshift of $z\sim2.24$ in Table \ref{t_info} is a combination of the red peak and the blue peak.
J1150+5048 is a non-detection ($<1\sigma$) in $g$-band measured from the HETDEX spectrum at the HETDEX coordinate. It has an IR detection in the WISE catalog \citep{Cutri2014} with the W4 band (22 \micron) being a non-detection. When measuring at the pointing of the IR detection from HSC-DEX, its $r$-band magnitude is 24.57. The $\rm 5\sigma$ limiting magnitude of this field is $r=25.12$.
The spatial distribution of the $r$-band flux of J1150+5048 (top panels in Figure \ref{f_image}) is strikingly different from that of typical 24-25 mag sources (see the first two examples of Figure 12 in Paper I): it is so diffuse that it might be dominated by a spatially extended emission line, such as \ion{C}{3}] $\lambda 1909$, rather than the true continuum.
J1150+5048 is a non-detection in the Faint Images of the Radio Sky at Twenty-Centimeters survey (FIRST, \citealt{Becker1995}), while it has a radio detection at the location of the WISE source at $S_{\text{150\,MHz}} = 2\,\text{mJy}$ from the LOw-Frequency ARray (LOFAR) Two-metre Sky Survey (LoTSS) \citep{Shimwell2022}.
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{figure/specs.pdf}\\
\includegraphics[width=0.48\textwidth]{figure/blue_map.pdf}
\includegraphics[width=0.48\textwidth]{figure/red_map.pdf}\\
\caption{Upper two rows: The spectra of Ly$\alpha$ at the four positions marked by the cyan circle (Ly$\alpha$ density peak), the orange square (the WISE pointing), the magenta triangle (a random position), and the green diamond (another $r$-band detection) in the narrow-band image of the Ly$\alpha$ emission line in Figure \ref{f_image}. Black data points with error bars are the observed spectra. Our best-fit double-Gaussian model is shown by the green curve. The blue and red solid curves are the best-fit blue component and red component. The blue and red vertical lines mark the best-fit central wavelengths of the two peaks. Bottom panels: The surface density map of the spectral decomposed blue peak and red peak of Ly$\alpha$. The orange square again marks the position of the WISE detection. The blue and the red crosses show the positions where the blue peak and the red peak have the highest line flux respectively.}
\label{f_BR}
\end{figure*}
Figure \ref{f_BR} presents the $\rm Ly\alpha$ line profiles at the four representative positions marked in the Ly$\alpha$ narrow band image of Figure \ref{f_image} in the upper two rows. The $\rm Ly\alpha$ emission line clearly has two distinctive peaks. We decompose the $\rm Ly\alpha$ emission line into a blue peak and a red peak with a double-Gaussian model. The velocity offset between the two peaks ($\sim 1100\,\>{\rm km}\,{\rm s}^{-1}$) does not change significantly with location. The bottom two panels show the surface density maps of the decomposed blue peak and red peak, respectively. The blue cross and the red cross mark the positions where the flux is highest for the blue peak and the red peak. The separation between the two centers is $\sim1\farcs2$ (10.1 kpc). For most of the spatial pixels, the blue peak is weaker than the red peak. This asymmetry is evidence of outflows, as the near side of the $\rm Ly\alpha$ emission is resonantly scattered by an optically thick medium. The \ion{C}{4} $\lambda1549$ and \ion{He}{2} $\lambda1640$ emission lines also display double-peaked profiles; however, the two lines are not sufficiently strong for spatial decomposition as was done for Ly$\alpha$.
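The per-spaxel decomposition is a standard double-Gaussian fit. A minimal
sketch (names are illustrative; the initial guesses use the two redshifts
quoted above):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sig):
    return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2)

def two_gauss(x, a1, m1, s1, a2, m2, s2):
    return gauss(x, a1, m1, s1) + gauss(x, a2, m2, s2)

def decompose(wave, flux, err, z_blue=2.236, z_red=2.249, lam0=1215.67):
    # initial guesses: one Gaussian at each of the two observed redshifts
    p0 = [flux.max(), lam0 * (1 + z_blue), 3.0,
          flux.max(), lam0 * (1 + z_red), 3.0]
    popt, pcov = curve_fit(two_gauss, wave, flux, p0=p0, sigma=err)
    return popt     # (amplitude, centre, width) of the blue and red peaks
\end{verbatim}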
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{figure/line_ratio_civ.pdf}
\includegraphics[width=0.48\textwidth]{figure/line_ratio_heii.pdf}\\
\caption{Left: The line flux ratio map of the \ion{C}{4} $\lambda1549$ emission over the $\rm Ly\alpha$ emission. Only spatial pixels with
both emission lines detected at $>1\sigma$ level are used in the map.
Labels are the same with the ones in the narrow-band images in Figure \ref{f_image}. Right: Similar plot with the left panel, but for the line ratio of the \ion{He}{2} $\lambda1640$ emission over the $\rm Ly\alpha$ emission.}
\label{f_ratio}
\end{figure*}
Figure \ref{f_ratio} shows the line flux ratio of the \ion{C}{4} $\lambda1549$ emission over the $\rm Ly\alpha$ emission in the left panel and that of the \ion{He}{2} $\lambda1640$ emission over the $\rm Ly\alpha$ emission in the right panel. The line ratio of \ion{C}{4} $\lambda1549/\rm Ly\alpha$ ranges from 0.1 to 0.3. The line ratio of \ion{He}{2} $\lambda1640/\rm Ly\alpha$ ranges from 0.11 to 0.13. The combination of the \ion{C}{4} $\lambda1549/$\ion{He}{2} $\lambda1640$ ratio and the \ion{C}{3}] $\lambda1909/$\ion{C}{4} $\lambda1549$ ratio would provide diagnostics for the ionization levels and the metallicities given quasar photoionization models \citep[e.g.][]{Guo2020}. Unfortunately, the \ion{C}{3}] $\lambda1909$ emission is out of the HETDEX wavelength range. \cite{Lau2022} suggested that the ionization parameter can be assumed to be $\lg U\sim-1$. The \ion{C}{4} $\lambda1549/$\ion{He}{2} $\lambda1640$ ratio of $\sim1$ at the $\rm Ly\alpha$ density peak (shown by the cyan circle) would suggest a metallicity of $\rm\sim Z_{\sun}$. The spatially integrated line ratio of $\sim3$ out to $\sim3\arcsec$ ($\sim$ 20\,kpc) corresponds to a metallicity of $\rm\sim 0.5\,Z_{\sun}$.
This metallicity enrichment suggests centrally driven outflows into the host galaxy.
\section{Discussion}
\label{sec_discuss}
There are several possibilities that can produce the high EW of J1150+5048; we discuss three possible explanations in this section.
\subsection{Collapsing Protogiant Elliptical Galaxy}
\label{sec_collap}
\cite{Adams2009} studied the famous radio-loud AGN B2 0902+34 at $z=3.4$ with the VIRUS prototype on the 2.7-m Harlan J. Smith Telescope. B2 0902+34 also has a bimodal Ly$\alpha$ emission line profile, and its Ly$\alpha$ emission is extended with a radius of $\sim$ 50 kpc. The observed data were successfully reproduced with a model of a collapsing protogiant elliptical galaxy ($\rm \ge10^{12}\ M_\sun$). $z\sim2$ is a common era of galaxy formation. Massive galaxies and their formation have many signatures, most of which include strong emission in the radio and infrared. B2 0902+34 is detected at 300 mJy in the FIRST survey. J1150+5048 is also covered by FIRST, but there is no radio excess within $\pm 30\arcsec$ of the AGN. The radio detection of J1150+5048 in LoTSS at $S_{\text{150\,MHz}} = 2\,\text{mJy}$ is also too weak compared to B2 0902+34. These facts may rule out the possibility of a collapsing protogiant elliptical galaxy.
\subsection{Off-centered SMBH}
\label{sec_naked}
SMBHs can be ejected after a merger event \citep{Loeb2007,Haiman2009,Ricarte2021}. If they happen to fall into a gas-rich environment, the ejected SMBH can irradiate the inter-galactic medium (IGM) in its vicinity and appear as a naked BH with no host galaxy. The narrow band emission line flux maps in Figure \ref{f_image} show that J1150+5048 might be related to the other $r$-band detection covered by the HETDEX fibers, marked by the green diamond, which has $r_{\text{AB}}=22.5\,\text{mag}$. It has weak emission lines at a similar redshift to J1150+5048, with a velocity offset of $\sim -600\,\>{\rm km}\,{\rm s}^{-1}$, as shown by the spectra in Figure \ref{f_BR}. If the green diamond is the original host and the WISE detection is the SMBH, then the separation between the two is $8\farcs6$ (72 kpc).
\subsection{Extremely Red Quasar with Strong Winds}
\label{sec_ulirg}
The emission in the optical broad-band imaging could be heavily obscured by dust in the outskirts of the host galaxy.
Extremely red quasars (ERQs) were first identified in \cite{Ross2015} with $r_{\text{AB}}-\text{W4}_{\text{Vega}}>14\,\text{mag}$, i.e. $r_{\text{AB}}-\text{W4}_{\text{AB}}>7.38\,\text{mag}$. They were revisited by \cite{Hamann2017} with the definition of $i_{\text{AB}}-\text{W3}_{\text{AB}}\ge4.6\,\text{mag}$ and a rest-frame $\rm EW_{CIV}\ge100\,\AA$. Although J1150+5048 is a non-detection in the W4 band and lacks $i$-band observations, the color of $r_{\text{AB}}-\text{W3}_{\text{AB}}=6.93\,\text{mag}$ and the high $\rm EW_{CIV}>177\,\AA$ indicate that J1150+5048 probably belongs to the ERQ population.
The current photometric data are not sufficient to break the degeneracy among various models of the spectral energy distribution. Photometric observations in more bands, such as the $K$ band and the $z$ band, might further help confirm whether J1150+5048 is an ERQ.
Many ERQs are found in large-scale overdensities. We checked the full emission-line catalog of the HETDEX survey (Cooper et al. in preparation) and found only one weak $\rm Ly\alpha$ emitter candidate within $\pm5\arcmin$ (2.5\,Mpc) at $z=2.24\pm0.03$, down to the detection limit of HETDEX at $g\sim24.5$ mag. J1150+5048 is therefore probably not in a large-scale overdensity.
Besides J1150+5048, we found that the other high-EW AGN in our catalog are all narrow-line AGN with $\rm FWHM\sim1000\,\>{\rm km}\,{\rm s}^{-1}$, corresponding to a velocity dispersion of $\sim400\,\>{\rm km}\,{\rm s}^{-1}$. \cite{Lau2022} also found that the $\rm Ly\alpha$ halo of the ERQ they studied is kinematically quiet, with a velocity dispersion of $\sim300\,\>{\rm km}\,{\rm s}^{-1}$.
A significant fraction of our AGN sample has extended emission-line regions.
\cite{Ouchi2020} collected $\rm Ly\alpha$ emitters with measurements of their diffuse $\rm Ly\alpha$ emission and found that the radii of diffuse $\rm Ly\alpha$ emitters are correlated with their $\rm Ly\alpha$ luminosities. The $\rm Ly\alpha$ luminosities of their sample range from $\rm\sim10^{42}\,erg\,s^{-1}$ to $\rm\sim10^{45}\,erg\,s^{-1}$.
The spatially integrated $\rm Ly\alpha$ luminosity of J1150+5048 is $\rm10^{43.4}\,erg\,s^{-1}$. The $\rm Ly\alpha$ emission of J1150+5048 extends to $r\sim85$ kpc. J1150+5048 lies well within the scattered region of the correlation between the radius and the $\rm Ly\alpha$ luminosity in Figure 13 of \cite{Ouchi2020}.
The extended emission-line region can be explained either by centrally driven outflows or by inflows from the circumgalactic medium. The blue peak of the $\rm Ly\alpha$ emission is always weaker than the red peak. This suggests that the near side is more heavily scattered than the far side, and that the gas flows are outflows rather than inflows.
If the direction of the outflows is perpendicular to the line of sight, the separation between the blue peak and the red peak ($\sim 1100\,\>{\rm km}\,{\rm s}^{-1}$) would not change significantly with position, as was found in Section \ref{sec_info}.
We estimate the outflow mass $M_{\text{outflow}}$ and the outflow rate $\dot{M}_{\text{outflow}}$ of J1150+5048 following Equations 5 and 6 in \cite{Fluetsch2021}. However, the H$\rm\alpha$ emission is not covered by the wavelength range of the HETDEX spectrum. We therefore make the simple assumption that the luminosity of the $\rm Ly\alpha$ emission is around 1.5 times that of the H$\rm\alpha$ emission, following \cite{Allen1982}. The electron density $n_e$ should be carefully calculated from doublets such as the [\ion{O}{2}] $\lambda3726$/[\ion{O}{2}] $\lambda3729$ ratio and the [\ion{S}{2}] $\lambda6717$/[\ion{S}{2}] $\lambda6731$ ratio, but these emission lines are again outside the wavelength coverage of HETDEX. We therefore take the typical $n_e$ of outflows, $\rm\sim500\,cm^{-3}$, from \cite{Fluetsch2021}. With these two assumptions, the outflow mass and the outflow rate of J1150+5048 are $\lg(M_{\text{outflow}}/\text{M}_{\sun})\sim8$ and $\dot{M}_{\text{outflow}}\sim1.5\,\rm M_{\sun}\,yr^{-1}$.
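As a sanity check of the arithmetic behind these numbers, the assumed line-ratio conversion gives
\begin{equation*}
L_{\rm H\alpha} \approx L_{\rm Ly\alpha}/1.5 = 10^{43.4}\,{\rm erg\,s^{-1}}/1.5 \approx 10^{43.2}\,{\rm erg\,s^{-1}},
\end{equation*}
and inserting this together with $n_e\rm\sim500\,cm^{-3}$ into Equations 5 and 6 of \cite{Fluetsch2021} yields the quoted values.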
As shown in Figure \ref{f_ew}, there are many other high-EW AGN in our catalog, although some are missing $r$-band imaging, and some have no significant emission lines detected besides $\rm Ly\alpha$. By the time the HETDEX survey is complete, we expect the final AGN sample to be about five times larger than the current one. We will systematically study all AGN with high EWs. We expect that many of these high-EW AGN are similar to J1150+5048, with red colors and bipolar outflows. It will be interesting to study the outflow properties, such as the outflow mass $M_{\text{outflow}}$ and the outflow rate $\dot{M}_{\text{outflow}}$, as a function of luminosity, reddening, and redshift.
\section{Summary}
\label{sec_summary}
We have identified an AGN (J1150+5048) with extremely high EW at $z\sim2.24$, with strong Ly$\alpha$, \ion{C}{4} $\lambda1549$, and \ion{He}{2} $\lambda1640$ emission lines and an undetected continuum in the HETDEX spectrum. The measured $\rm EW_{Ly\alpha+N\,V,rest}=921\,\AA$ is a lower limit on its rest-frame line strength at Ly$\alpha$. Extended emission is measured at $r=24.57$ in the deep $r$-band image ($r_{5\sigma}=25.12$) from the HSC-DEX survey. It has an IR detection in the WISE catalog, but it is a non-detection in the W4 band. The Ly$\alpha$ emission line is significantly extended in the narrow-band image, spanning $\sim$ 10\arcsec\ in diameter. The line profile of Ly$\alpha$ is strongly bimodal. The decomposed blue and red peaks are separated from each other by $1\farcs2$, and the line-of-sight velocity offset between the two peaks is $\sim 1100 \>{\rm km}\,{\rm s}^{-1}$.
Further statistical studies of the full high-EW AGN sample are needed to understand the relation among this sample, the ERQs, type-II AGN, and AGN with diffuse ionized gas. These would provide key information for understanding early quasar phases, galaxy quenching, and the enrichment of the intergalactic medium. The HETDEX survey is very efficient for such studies because spatially resolved information is obtained simultaneously as the emission-line sources are identified in the 18-min spectroscopic exposures, with no pre-selection based on imaging.
\acknowledgments
HETDEX is led by the University of Texas at Austin McDonald Observatory and Department of Astronomy with participation from the Ludwig-Maximilians-Universit\"at M\"unchen, Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), Leibniz-Institut f\"ur Astrophysik Potsdam (AIP), Texas A\&M University, The Pennsylvania State University, Institut f\"ur Astrophysik G\"ottingen, The University of Oxford, Max-Planck-Institut f\"ur Astrophysik (MPA), The University of Tokyo, and Missouri University of Science and Technology. In addition to Institutional support, HETDEX is funded by the National Science Foundation (grant AST-0926815), the State of Texas, the US Air Force (AFRL FA9451-04-2-0355), and generous support from private individuals and foundations.
The Hobby-Eberly Telescope (HET) is a joint project of the University of Texas at Austin, the Pennsylvania State University, Ludwig-Maximilians-Universit\"at M\"unchen, and Georg-August-Universit\"at G\"ottingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly.
The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing high performance computing, visualization, and storage resources that have contributed to the research results reported within this paper. URL: http://www.tacc.utexas.edu
In many practical applications one is concerned with predicting a quantity $Y$ (target or
label) from a set of features $\bm{X}$, i.e. one needs to estimate the conditional
$p(Y|\bm{X})$ of the joint probability density distribution $p(Y,\bm{X})$ when the values
$\vec{x}$ of the feature variables $\bm{X}$ are observed. In supervised machine learning,
one uses e.g. neural networks like NeuroBayes \cite{NeuroBayes} to build a model for the
conditional, i.e. $\tilde{p}(Y|\bm{X},\vec{\theta})$, where $\vec{\theta}$ is a set of
parameters learned from the training data and represents all observed knowledge of the
joint probability distribution $p(Y,\bm{X})$.
Although the predictions obtained from the machine learning model are in general very
accurate, the exact path by which an individual prediction was calculated is typically not
observable in complex ensemble or deep learning models. Being able to explain how
individual predictions and decisions were made can be a mandatory legal requirement in
many sectors such as finance and insurance and is highly desirable in others such as
medicine or retail to build trust in the machine learning model.
In addition, most machine learning algorithms struggle to learn rare events, as they are
not representative of the bulk of the data and are often over-regularized, even though in
practical applications these effects may play a major role.
In order to address both the explainability and handling of rare effects, a novel machine
learning algorithm called "Cyclic Boosting" is proposed, which is able to learn a model
$\tilde{p}(Y|\bm{X},\vec{\theta})$ efficiently and accurately, while making it possible to
follow precisely how individual predictions were made.
As will become apparent below, Cyclic Boosting can be categorized as a generalized
additive model \cite{GAM}, where the target $Y$ belongs to the family of exponential
distributions such as the Poisson, Gaussian, or Bernoulli distribution, together with a
suitable link function. As such, the Cyclic Boosting algorithm can be used in the
following three scenarios:
\begin{itemize}
\item{Multiplicative regression mode: $Y \in [0, \infty)$}
\item{Additive regression mode: $Y \in (-\infty, \infty)$}
\item{Classification mode: $Y \in [0, 1]$}
\end{itemize}
In the following, the multiplicative regression mode is described in detail first; for
the other two modes, only the necessary modifications are highlighted afterwards.
\section{Literature Review}
Reaching human-level performance using machine learning techniques in specific
applications has also increased the interest in interpretability of algorithm-driven
decisions in recent years. However, due to the opaque nature of most of the complex models
it remains largely unexplained how these decisions are reached.
Several approaches exist to address this situation, e.g. Molnar \cite{molnar2019} gives a
good overview. In general, one can either use a complex black box model and subsequently
apply model-agnostic interpretation tools or build an explainable model.
For black box models, the importance of individual input features can be determined
using e.g. Shapley's game-theoretic approach \cite{Shapeley1953, SHAP} or permutation
importance \cite{Breiman2001, eli5}. Google's {\em What-If} \cite{GoogleWhatIf} makes it possible to
probe the behavior of a machine learning model if certain aspects or inputs are changed.
Partial dependency plots \cite{friedman2001} can be used to visualize the relationship
between the target and selected features and illustrate if e.g. the dependency is
non-linear or monotonous. Multiple visualization techniques have been developed in
particular to understand the way deep neural networks process images, such as partial
occlusion \cite{Zeiler2013} or saliency maps \cite{Simonyan2013}. These approaches are
undoubtedly invaluable tools to understand the inner workings of black box models as well
as the correlation between input features and predictions. However, they cannot address
the black box character on a fundamental level and therefore don't allow for fully
explainable models.
Surrogate models can be used to build explainable models out of black box models. For
instance, LIME \cite{lime} uses linear models to derive an explainable model which is
faithful locally around each prediction. While these predictions are locally explainable,
the model itself remains a black box model.
The simplest fully explainable model is the linear or logistic regression where the target
is represented by a sum of linear features and a Gaussian noise term
$\epsilon$: $y = \sum_i \alpha_i x_i + \epsilon$. Once all coefficients $\alpha_i$ are
determined from data, each prediction can be evaluated in terms of the features and
their coefficients. However, this approach suffers from many short-comings, e.g. all
features are assumed to be linear with constant variance, as well as independent from each
other. Consequently, this simple approach is rarely sufficient in practical
applications.
General Linear Models (GLM) \cite{GLM} extend this approach and are useful for a wider
range of models. However, they still retain their linear character. More generally,
General Additive Models (GAM) \cite{GAM} replace the linear term with a function
$f_i(x_i)$ which allows for the modeling of non-linear effects. In general, GAMs are
described by $g(E[y]) = \beta_0 + \sum f_j(x_j)$, where $g$ is called the link function
and $f_j$ is some function which operates on the features $x_j$. In case of GLMs, $f$ is
constrained to be linear. While GAMs are not as easily interpretable as a simple linear
regression, they retain most of the benefits while making it possible to model the complex relationships
found in concrete application scenarios. Recently, Microsoft released {\em Interprete}
\cite{Interprete}, which includes a GAM with pairwise interactions for each feature
variable \cite{GAM_Microsoft}, i.e.
$g(E[y]) = \beta_0 + \sum_j f_j(x_j) + \sum_{i \ne j} f_{ij}(x_i,x_j)$.
Considering other supervised learning approaches, single decision trees \cite{Breiman2001}
are fully explainable, but are often not sufficiently performant in practical
applications. While extending them to ensemble methods by means of e.g. bagging or boosting
techniques can improve the performance of tree-based methods significantly, it greatly
reduces the explainability of the models as well. The same holds for support vector
machines (SVM) \cite{Boser1992}: For a linear kernel, the SVM weights define the
hyperplane which separates two classes. In the case of a low-dimensional feature space, this can
be used to gain insights into the relative importance of the input variables. However, in
the case of high-dimensional spaces this becomes more difficult, and more general non-linear
kernels do not allow for easy interpretation of the SVM decision boundaries. Artificial
neural networks are typically considered as black box models due to their non-linear
transfer function modifying the output of individual neurons.
\section{The Cyclic Boosting algorithm}
\subsection{General approach}
The main idea behind Cyclic Boosting is that each individual feature $X_j$ from
$\bm{X} = (X_1, X_2, \ldots , X_p)$ contributes in a specific way to the prediction of the
target $\hat{Y}$. If all contributions can be calculated on a granular level, each
prediction $\hat{y_i}$ for a given observation $i$ can be transparently interpreted by
analyzing how much each feature $X_j$ for the observed values $x_{j,i}$ contributes to the
prediction.
To achieve the required granularity, each feature $X_j$ is first binned appropriately:
Categorical features retain their original categories, whereas continuous features are
discretized such that each bin has the same width (equidistant binning) or contains the
approximately the same number of observations. In the following, bins are denoted by $b^k_j$,
i.e. bin $k = 1,..., n$ for feature $X_j$. During the training of the supervised machine
learning model, each feature, with its various bins, is considered in turn, and an
appropriate modification to the prediction $\hat{Y}$ of the target $Y$ is calculated. This
process is repeated iteratively until a stopping criterion is met, e.g. the maximum number
of iterations or no further improvement of an error metric such as the mean absolute
deviation (MAD) or mean squared error (MSE).
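As an illustration of the binning step, the following is a minimal Python sketch of the two discretization schemes for a continuous feature (the helper below is our own illustration, not part of any reference implementation):
\begin{verbatim}
import numpy as np

def bin_feature(x, n_bins=100, scheme="equal_count"):
    """Map a continuous feature to bin indices 0..n_bins-1."""
    if scheme == "equidistant":   # bins of equal width
        edges = np.linspace(x.min(), x.max(), n_bins + 1)
    else:                         # ~ same number of observations per bin
        edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    # digitizing against the interior edges gives indices 0..n_bins-1
    return np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
\end{verbatim}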
\subsection{Multiplicative regression mode}
\label{multiregmode}
In the multiplicative regression mode of Cyclic Boosting, the target variable is in the
range $Y \in [0,\infty)$. The predicted values of the target variable, denoted by
$\hat{y_i}$, are calculated from given observations $\vec{x}_i$ of a set of feature
variables $\bm{X}$ in the following way.
\begin{equation} \label{product}
\hat{y}_i = \mu \cdot \prod \limits_{j=1}^p f^k_j \quad \text{with}\; k=\{ x_{j,i} \in b^k_j\}
\end{equation}
Here, $f^k_j$ are the model parameters for each feature $j$ and bin $k$. For any concrete
observation $i$, the index $k$ of the bin is determined by the observation of $x_{j,i}$
and the subsequent look-up into which bin this observation falls. The global average $\mu$
is calculated from all observed target values $y$ taken across the entire training data.
If one assumes that the target variable $Y$ is generated as the mean of a Poisson (or more
general, negative binomial) distribution and the logarithm $\ln$ is used as the link
function, eqn. \ref{product} can be inferred from the structure of a generalized additive
model by applying the inverse link function.
The model parameters $f^k_j$ are determined from the training data according to the
following meta-algorithm:
\begin{enumerate}
\item{Calculate the global average $\mu$ from all observed $y$ across all bins $k$ and
features $j$.}
\item{Initialize the factors $f^k_j \leftarrow 1$}
\item{Cyclically iterate through features $j = 1,...,p $ and calculate in turn for each
bin $k$ the partial factors $g$ and corresponding aggregated factors $f$, where indices
$t$ (current iteration) and $\tau$ (current or preceding iteration) refer to iterations of
full feature cycles as the training of the algorithm progresses:
\begin{equation} \label{factors}
g^k_{j,t} = \frac{\sum \limits_{x_{j,i} \in b^k_j} y_i}{\sum \limits_{x_{j,i} \in b^k_j} \hat{y}_{i,\tau}}
\;\; \mathrm{where} \; \; f^k_{j,t} = \prod \limits_{s=1}^t g^k_{j,s}
\end{equation}
\noindent
This means $g$ is the factor by which $f_{t-1}$ is multiplied in each iteration. Here,
$\hat{y}_\tau$ is calculated according to eqn. \ref{product} with the current values of
the aggregated factors $f$:
\begin{equation} \label{factors3}
\hat{y}_{i,\tau} = \mu \cdot \prod \limits_{j=1}^p f^k_{j,\tau}
\end{equation}
\noindent
To be precise, the determination of $g^k_{j,t}$ for a specific feature $j$ employs
$f^k_{j,t-1}$ in the calculation of $\hat{y}$. For the factors of all other features, the
newest available values are used, i.e., depending on the sequence of features in the
algorithm, either from the current ($\tau=t$) or the preceding iteration ($\tau=t-1$).
}
\item{Quit when stopping criteria are met at the end of a full feature cycle.}
\end{enumerate}
\noindent
This iterative cyclic optimization corresponds to a coordinate descent algorithm
\cite{Wright2015} with a boosting-like update of the factors $f$ and intrinsically
supports the modeling of hierarchical causal dependencies in the data by means of choosing
an appropriate feature sequence.
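To make the meta-algorithm above concrete, the following is a minimal Python sketch of the multiplicative regression mode (our own simplified illustration: no learning rate, regularization, or smoothing; \texttt{X} is an integer array of pre-binned features, e.g. produced by the binning step above, and \texttt{n\_bins[j]} is the number of bins of feature $j$):
\begin{verbatim}
import numpy as np

def fit_multiplicative(X, y, n_bins, n_iter=10):
    p = X.shape[1]
    mu = y.mean()                          # global average
    f = [np.ones(b) for b in n_bins]       # factors f_j^k, initialized to 1

    def predict(X):
        yhat = np.full(X.shape[0], mu)
        for j in range(X.shape[1]):
            yhat *= f[j][X[:, j]]          # look up factor of observed bin
        return yhat

    for _ in range(n_iter):                # full feature cycles
        for j in range(p):                 # cyclic coordinate descent
            yhat = predict(X)              # newest available factors
            num = np.bincount(X[:, j], weights=y, minlength=n_bins[j])
            den = np.bincount(X[:, j], weights=yhat, minlength=n_bins[j])
            f[j] *= num / np.maximum(den, 1e-12)  # boosting-like update g
    return mu, f
\end{verbatim}
The learning rate discussed next and the regularization of sec. \ref{conjugates} would enter in the update of the last line of the loop.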
In order to increase robustness of the optimization and, if desired, reduce dependency on
the sequence of features, a learning rate $\eta$ can be added to the calculation of the
factors $f$ in eqn. \ref{factors} (with $\ln$ as link function):
\begin{equation}
\ln(\tilde g^k_{j,t}) = \eta_t \cdot \ln(g^k_{j,t}) \;\; \mathrm{where} \; \; \eta_t \in (0,1]
\end{equation}
\noindent
Here, $\eta$ is chosen as a small value at the beginning of the training ($t=1$) and is
then increased after each full feature cycle $t$ according to a linear or logistic
function until it reaches $\eta = 1$ for the maximal number of iterations, hence
$\tilde g^k_j \to g^k_j$ as the algorithm converges.
If the values $y$ follow a Poisson distribution, the Cyclic Boosting algorithm corresponds
to optimizing $\sum_i \frac{(y_i/\hat{y}_{i,\tau} - g^k_j)^2}{\sigma_i^2}$, i.e. $\chi^2$,
with $\sigma_i^2 = y_i/\hat{y}_{i,\tau}$ for all observations $i$ in each bin $k$ of
feature $j$. Since the bins of each feature variable are considered independently of each
other, the optimization is performed locally in each bin $b^k_j$. This has the benefit
that rare events can be learned effectively by the algorithm. While most machine learning
algorithms tend to over-regularize these effects, especially when they are far away from
the bulk of the respective distribution of observed feature variables $X_j$, choosing a
suitable binning makes it possible to treat rare observations separately from the bulk of the
distribution of observed feature variables and hence to obtain accurate predictions even in
this case. However, the potentially low numbers of observations in such bins increase the
need for regularization methods in order to avoid learning wrong or spurious relationships
from data, i.e. reduce the risk of overfitting.
Although the cyclic consideration of all variables already accounts for correlations
between the different features, the learning of correlations between specific features can
be further improved by adding composed features with multi-dimensional binning, e.g.
built out of two or three of the original features. An example of this is shown later
in figure \ref{fig:store_td_2D}.
The binned feature-wise optimization of the Cyclic Boosting method also naturally
accommodates sample weights. As an example, this can be used to put more
emphasis on the most recent past when predicting a target available as time series data.
Such a procedure can help to improve the forecast quality in case of trends or other
temporal changes in the data.
Owing to its straightforward structure based on fundamental arithmetic operations, Cyclic
Boosting can be trained efficiently on a large amount of data and parallelization of the
algorithm is possible without major obstacles.
\subsection{Regularization}
\label{conjugates}
The factors $f^k_j$ are iteratively updated according to eqn. \ref{factors}, where the
update rule has the form $g = \alpha / \beta$. As the Gamma distribution is the maximum
entropy probability distribution for a random variable $\xi$ for which
$E[\xi] = \alpha / \beta$ is fixed and greater than zero, the Gamma distribution is
assumed as a prior for the distribution of the factors $f^k_j$ in each bin $k$ of feature
$j$. Furthermore, the numerator and denominator of eqn. \ref{factors} have the form of the
maximum likelihood estimator for an i.i.d. random variable following a Poisson (or more
general negative binomial) distribution. These considerations motivate the description of
the individual contributions, i.e. the factors, to the prediction of a target variable
$Y \in [0,\infty)$ as conjugate distributions, the Gamma distribution being the conjugate
prior to the Poisson (or more general negative binomial) likelihood. Eqn. \ref{factors}
can hence be written as:
\begin{equation} \label{posteriors}
g^k_j = \frac{\alpha^k_j}{\beta^k_j}
\end{equation}
with
\begin{equation}
\alpha^k_j = \alpha_{\text{prior}} + \sum \limits_{x_{j,i} \in b^k_j} y_i \;\; \mathrm{and} \; \;
\beta^k_j = \beta_{\text{prior}} + \sum \limits_{x_{j,i} \in b^k_j} \hat{y}_i
\end{equation}
\noindent
The numerical values of the parameters of the prior Gamma distribution are chosen such
that the median of the Gamma distribution is 1, i.e. $\alpha_{\text{prior}} = 2$,
$\beta_{\text{prior}} = 1.67834$.
The definition of the factors in eqn. \ref{posteriors} exploits the fact that the mean of
the Gamma distribution can be expressed as $\alpha / \beta$. Instead, one could also
choose the median, which is generally a more robust point estimator and not as sensitive
to outliers as the mean.
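A small sketch of the regularized update for a single bin, including the median alternative (the prior parameters are the values quoted above; scipy is used only for the median of the posterior Gamma distribution):
\begin{verbatim}
from scipy.stats import gamma

ALPHA_PRIOR, BETA_PRIOR = 2.0, 1.67834  # prior Gamma with median ~ 1

def regularized_factor(sum_y, sum_yhat, use_median=False):
    a = ALPHA_PRIOR + sum_y        # posterior shape
    b = BETA_PRIOR + sum_yhat      # posterior rate
    if use_median:                 # more robust point estimator
        return gamma(a, scale=1.0 / b).median()
    return a / b                   # posterior mean
\end{verbatim}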
\subsection{Smoothing}
\label{smoothing}
In most realistic applications, the observed data will be noisy and subject to statistical
fluctuations, assuming that missing, incomplete or wrong data have already been corrected
and common best practices for improving data quality have been observed. Regularizing the
factors $f^k_j$ across bins $k$ for each feature $j$ will therefore improve the numerical
stability of the algorithm training. For categorical features, the factors in each
category can be regularized by determining appropriate Bayesian {\em a priori}
probabilities for each occurrence of the specific category of feature variable $X_j$. For
continuous features, smoothing functions such as splines or a suitable base of orthogonal
polynomials can be applied, which is equivalent to applying a low-pass filter to remove
high-frequency noise.
It should be noted that the range of the factors needs to be transformed from $(0, \infty)$
to $(-\infty, \infty)$ before these smoothing approaches can be applied. This can be
achieved by taking the logarithm of the factors, i.e. $f^{\prime k}_j = \ln(f^k_j)$. In
order to be able to fit a smoothing function to the factors, the uncertainties
$\sigma_{f^{\prime k}_j}$ of each factor $f^\prime$ in each bin $k$ for feature $j$ can be
estimated from moment matching of the Gamma distribution to the log-normal distribution,
i.e. assuming that the uncertainties follow a Gaussian distribution after the logarithmic
transformation has been applied. This means the variance of the Gamma distribution is set
equal to the variance of the log-normal distribution:
\begin{equation}
\frac{\alpha}{\beta^2} = (e^{\sigma^2} - 1) \cdot e^{2(\mu + \frac{\sigma^2}{2})}
\end{equation}
\noindent
The mean of the log-normal distribution is then substituted by the mean of the Gamma
distribution: $e^{\mu + \frac{\sigma^2}{2}} = {\alpha}/{\beta}$.
\noindent
And finally, this leads to the following formula for the uncertainties:
\begin{equation}
\sigma^2_{f^{\prime k}_j} = \log (1 + \alpha^k_j) - \log (\alpha^k_j)
\end{equation}
After the smoothing of the factors has been performed, the factors are transformed back to
the original range (i.e. $(-\infty, \infty) \to (0, \infty)$) by applying the exponential
function as the inverse of the natural logarithm.
\subsection{Additive regression mode}
In the additive regression mode, with the range of the target variable being
$Y \in (-\infty, \infty)$, the formulae are modified such that:
\begin{equation} \label{summands}
\hat{y}_i = \mu + \sum \limits_{j=1}^p f^k_j \quad \text{with}\; k=\{ x_{j,i} \in b^k_j\}
\end{equation}
\begin{equation}
f^k_{j,t} = \sum \limits_{s=1}^t g^k_{j,s} \;\; \mathrm{and} \;\; g^k_{j,t} = \sum \limits_{x_{j,i} \in b^k_j} y_i - \sum \limits_{x_{j,i} \in b^k_j} \hat{y}_{i,\tau}
\end{equation}
In this case, the conjugate prior for the individual contributions to the prediction,
i.e. the summands, is a Gaussian distribution. Therefore, no transformation is needed
before smoothing.
\subsection{Classification mode}
In the case of (binary) classification, one aims to identify whether a given observation
$i$ belongs to a certain class or not. Hence the range of the target variable is in
$[0,1]$, which can be interpreted as the probability $p_i$ that this observation belongs
to the class ($p_i \to 1$) or doesn't belong to the class ($p_i \to 0$). In practical
applications, a suitable cut-off has to be defined which separates the two cases.
Noting that the odds, i.e. the ratio $\frac {p_i}{1-p_i}$, has the range $[0, \infty)$,
the same approach as the multiplicative regression mode can be used:
\begin{equation} \label{odds}
\frac{\hat{p}_i}{1 - \hat{p}_i} = \mu \cdot \prod \limits_{j=1}^p f^k_j \quad \text{with}\; k=\{ x_{j,i} \in b^k_j\}
\end{equation}
Instead of a Gamma distribution, the conjugate prior for the factors is now a Beta distribution,
due to the binary nature of the setting, and the corresponding likelihood is a Bernoulli
distribution. Choosing $\alpha_{\text{prior}} = 1.001$ and $\beta_{\text{prior}} = 1.001$
results in a uniform Beta distribution for the prior that drops sharply to zero at either
end of the interval $[0,1]$, which is helpful to avoid overconfidence with extreme
predictions. The parameters of the posterior Beta distribution are then calculated as:
\begin{equation} \label{alpha}
\alpha^k_j = \alpha_{\text{prior}} + \sum \limits_{x_{j,i} \in b^k_j} y_i \;\; \mathrm{and} \;\;
\beta^k_j = \beta_{\text{prior}} + \sum \limits_{x_{j,i} \in b^k_j} 1 - y_i
\end{equation}
The factors and their uncertainties are in turn estimated from the mean (or median) and
variance of this Beta distribution, similar to the approach taken for the multiplicative
regression mode.
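In the same spirit, the factor of a single bin in classification mode can be sketched as follows (our own illustration without sample weights; the normalization by the global odds $\mu$ is omitted):
\begin{verbatim}
ALPHA_PRIOR = BETA_PRIOR = 1.001   # near-uniform Beta prior

def odds_factor(sum_y, n_obs):
    """sum_y: number of class-1 samples in the bin, n_obs: bin size."""
    a = ALPHA_PRIOR + sum_y             # posterior Beta parameters
    b = BETA_PRIOR + (n_obs - sum_y)
    p = a / (a + b)                     # posterior mean probability
    return p / (1.0 - p)                # back to the odds scale [0, inf)
\end{verbatim}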
The performance of the algorithm can be improved by the inclusion of sample weights
according to the following scheme:
\begin{equation}
w_i =
\begin{cases}
1 - \hat{p}_i, & \text{if}\ y_i = 1 \\
\hat{p}_i, & \text{if}\ y_i = 0
\end{cases}
\end{equation}
Similar to the approach taken in boosting, i.e. the combination of several weak learners
into a strong one, this definition forces the training process to put more emphasis on
observations that have been misclassified in the current state of the algorithm. Eqn.
\ref{alpha} then reads:
\begin{equation}
\alpha^k_j = \alpha_{\text{prior}} + \frac{\sum \limits_{x_{j,i} \in b^k_j} w_i \cdot y_i}{\sum \limits_{x_{j,i} \in b^k_j} w_i}
\end{equation}
\begin{equation}
\beta^k_j = \beta_{\text{prior}} + \frac{\sum \limits_{x_{j,i} \in b^k_j} w_i \cdot (1 - y_i)}{\sum \limits_{x_{j,i} \in b^k_j} w_i}
\end{equation}
Like in the multiplicative regression mode, the logarithm is then used to transform the
range $(0, \infty)$ to $(-\infty, \infty)$ and in turn the same approach to regularization
and smoothing can be taken.
\section{Example: Demand Forecasting}
\label{demandforecasting}
A very useful application of Cyclic Boosting's multiplicative regression mode is to
forecast the future demand of individual products sold in a retail location. Demand is
influenced by promotions, price changes, rebates, coupons, and even cannibalization
effects within the assortment range. Furthermore, customer behavior is not uniform but
varies throughout the week and is influenced by seasonal effects and the local weather, as
well as many other contributing factors. Hence, even though demand generally follows a
negative binomial distribution \cite{Ehrenberg1959}, the exact values of the parameters
are specific to a single product to be sold on a specific day in a specific location or
sales channel and depend on the wide range of frequently changing influencing factors
mentioned above.
Cyclic Boosting makes it possible to efficiently calculate all relevant parameters to model the
demand of individual products, taking a wide range of influencing factors into account,
while at the same time allowing the operational business to track and understand how each
individual prediction was made.
\subsection{Data and algorithm training}
We use data from a Kaggle online competition \cite{kaggle_data} to demonstrate demand
forecasting with Cyclic Boosting and showcase its properties. The data set consists of the
fields date, store, item, and sales, the latter being the target to predict. There are
five years of historical data, from beginning of 2013 until end of 2017, for 10 different
stores and 50 different items.
Besides store and item, we include several features describing trend and seasonality,
namely the number of days since the beginning of 2013 as a linear trend, as well as day of week, day of year,
month, and week of month. A list of all used features (one- and two-dimensional) can be
found in the legend of fig. \ref{fig:factors}. For example, two-dimensional features
including the variable "item" allow to learn characteristics of time series of individual
products.
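For illustration, the date-derived features can be built along the following lines (a pandas sketch; the column names follow the Kaggle data set, and the week-of-month convention is one possible choice):
\begin{verbatim}
import pandas as pd

df = pd.read_csv("train.csv", parse_dates=["date"])
df["td"] = (df["date"] - pd.Timestamp("2013-01-01")).dt.days  # trend
df["dayofweek"] = df["date"].dt.dayofweek
df["dayofyear"] = df["date"].dt.dayofyear
df["month"] = df["date"].dt.month
df["week_of_month"] = (df["date"].dt.day - 1) // 7            # 0..4
# "store" and "item" are used directly as categorical features
\end{verbatim}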
As an example, fig. \ref{fig:item} shows a detailed analysis of the factors for the feature
variable "item". Shown are the mean values of the prediction $\hat{y}$ after completion of
the training as well as the observed, true values $y$ in each bin divided by the global
mean. This visualization directly indicates possible deviations from the optimal fit
results in the different bins. Here, no significant deviations are present across the
whole range of values. Furthermore, the smoothed values of the factors, i.e. the actual
fitted parameters of the model, are shown. These differ from the mean values of the target
and prediction in the different bins divided by the global mean due to correlations with
other features.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{item}
\caption{\label{fig:item} Analysis of the feature variable "item" after the final
iteration. The shown data points indicate mean values of prediction $\hat{y}$ and ground
truth $y$ for each bin divided by the global mean (prediction hardly visible due to good
agreement). Note that the smoothed factors shown here are influenced by correlations to
all other features in the model as well.}
\end{center}
\end{figure}
An example for a two-dimensional feature combination, namely "store" and trend "td", is
shown in fig. \ref{fig:store_td_2D}. The upper left-hand plot shows a binned,
two-dimensional, color-coded visualization of the deviations between final predictions
and truth. The lower left-hand plot shows the smoothed values of the two-dimensional
factors, again visualized by means of color-coding. Here, one of the features is
categorical ("store") and the other one continuous ("td"), and the two-dimensional
smoothing is performed by means of grouping by the categorical feature dimension and
smoothing the continuous one. An alternative for two-dimensional smoothing in the case of
two continuous features is to perform a truncated singular-value decomposition.
The two right-hand plots show the two corresponding marginal smoothed factor distributions
for the mean of the respective other dimension (solid red) and its individual categories
(transparent blue) as well as the marginal distributions for final predictions and
observed (true) values.
These examples show how Cyclic Boosting supports model development in terms of feature
engineering by means of analysis and individual preprocessing.
\begin{figure}
\begin{center}
\includegraphics[scale=0.3]{store_td_2D}
\caption{\label{fig:store_td_2D} Analysis of the two-dimensional combination of the
features "store" and "td" after the final iteration. The upper left-hand plot shows the
two-dimensional deviations between prediction $\hat{y}$ and ground truth $y$, the lower
left-hand plot the smoothed factors, and the right-hand plots $\hat{y}$ and $y$ for the
two marginal distributions.}
\end{center}
\end{figure}
\subsection{Explanation of the predictions}
As stated above, one of the advantages of the Cyclic Boosting algorithm is that each
individual prediction can be interpreted and related to the feature variables used as
input, as shown in fig. \ref{fig:factors} for three different predictions $\hat{y_i}$. A
value of $f^k_j = 1$ implies that the importance of this particular feature is neutral
compared to the others, the strength of the deviation $f^k_j \neq 1$ indicates how
important a given feature is for the individual prediction. As the figure illustrates, the
importance of the individual features, from which the final prediction is calculated, can
vary significantly from one observation to the next.
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{factors}
\caption{\label{fig:factors} Illustration of the individual factors $f^k_j$ from which
the prediction $\hat{y_i}$ is calculated for three individual observations (displayed on
top of each other).}
\end{center}
\end{figure}
\subsection{Results}
The published results of the competition correspond to a test period from the beginning of
January to the end of March 2018 \cite{kaggle_data}. However, the true observed values in this
test period are not publicly available. Since the main aim of this example is to
demonstrate that the Cyclic Boosting algorithm achieves at least comparable performance to
other machine learning approaches while retaining the benefit of fully explainable
predictions, the data until the end of 2016 were used for training the model and the first
three months of 2017 were taken as an independent test sample. This reproduces the
conditions of the competition as closely as possible and allows for a like-for-like
comparison of the published results and scores with the results obtained from this model.
Using the observed sales in the first three months of 2017 and comparing these to the
predicted values, the approach using Cyclic Boosting results in a symmetric mean absolute
percentage error of $\mathrm{SMAPE} \approx 13.20\%$. The same approach with a training
period until end of 2015 and prediction of the first three months of 2016 yields
$\mathrm{SMAPE} \approx 13.57\%$. The winning models of the competition have scores of
$\mathrm{SMAPE} \approx 13.84\%$ and $\mathrm{SMAPE} \approx 12.58\%$ for $34\%$ and
$66\%$ of the data set for the first three months in 2018. Given the upward trend of sales
and the larger training data set, we expect the $\mathrm{SMAPE}$ of our model in 2018 to
be at least comparable to the winning method. One can therefore conclude that Cyclic
Boosting can compete with other available algorithms in terms of forecast quality while
retaining full explainability of the individual predictions.
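For reference, the error metric used in these comparisons can be computed as follows (a sketch of the common SMAPE convention, with the $0/0$ case defined as $0$):
\begin{verbatim}
import numpy as np

def smape(y_true, y_pred):
    num = np.abs(y_pred - y_true)
    den = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    ratio = np.where(den == 0, 0.0, num / np.maximum(den, 1e-12))
    return 100.0 * ratio.mean()
\end{verbatim}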
In order to demonstrate the robustness of Cyclic Boosting against variations of the number
of feature bins, which can be considered hyperparameters, we both halved and doubled
the number of bins (from the default value of 100) for the two continuous features used,
which changed the $\mathrm{SMAPE}$ values only in the fourth decimal place.
As an additional remark, this simulated data set includes no information on prices,
promotions, or product hierarchy, and it shows no dependency on events like holidays or
on exogenous variables like weather, for which the full potential of Cyclic Boosting
would come to fruition.
\section{Discussion: Causality}
\label{causality}
Causal dependencies play an instrumental role in any data generation process, and reflecting the underlying causal structure of the data, at least partially, in a supervised machine learning model is therefore beneficial for the generalization of the model and crucial for the interpretability of the learned correlations in terms of causal what-if scenarios for potential interventions \cite{PearlCausality,PearlML}.
\subsection{Individual causal effects}
\label{causalinf}
The estimation of individual causal effects, suffering from the problem of counterfactuals, needs a method capable of generalizations by means of sample attributes, which is one of the key strengths of machine learning. In the sense of potential outcomes \cite{Rubin}, a previously trained supervised machine learning model can be used to predict what-if scenarios for different values of one of its features.
Instead of the detour via two separate predictions, a direct prediction of absolute individual causal effects can be achieved with a machine learning algorithm capable of processing negative sample weights by subtracting one of the two potential outcome or what-if groups statistically from the other in the training. We call this technique statistical background subtraction. Due to its intrinsic feature binning, Cyclic Boosting in its additive regression mode (with adapted summand uncertainties) is ideally suited for this purpose: The external weights reflecting the respective membership in the two what-if groups ($+1$ and $-1$ in the case of certain memberships, positive and negative real values in the case of statistically estimated memberships \cite{Pivk_2005,PhysRevD.84.012003,PhysRevD.86.032007}) are simply applied to the sums over all samples in each feature bin in eqn. \ref{factors}, corresponding to the filling of the feature summand histograms. The accordingly weighted global average $\mu$ in eqn. \ref{summands} then corresponds to the average causal effect.
As an example, we look at the prediction of the individual causal effects of personalized coupons on customer demand (in terms of revenue) from the perspective of the retailer, and consider the simple case where unconfounded data from past random coupon assignments are available for the model training (see sec. \ref{confounding} for a discussion of confounding). Each training sample then represents an individual customer, the values $+1$ and $-1$ are used as sample weights for customers that did or did not receive a coupon, respectively, and the target is the corresponding revenue from that customer in some defined time period, e.g., a week. The resulting predictions of this model then correspond to the absolute individual causal effects of coupon sending on revenue. Note that this effect can be either positive or negative, an example of the latter being customers that would also have bought without a coupon, at a higher price.
Another benefit of the proposed combination of machine learning with statistical background subtraction, namely the focus of the model training on the causal effect to be learned, can be seen in our couponing example when considering that most of the customers just ignore a coupon completely, for instance by not showing any demand in the time period at hand, no matter whether they received a coupon or not. Due to the random coupon sending, these customers are present to the same extent in both groups (with weights $+1$ and $-1$, respectively) of the data set used for the training, and are thus, thanks to the statistical background subtraction, effectively ignored in the model. In turn, the model can focus on learning the causal effect of the intervention on the target, rather than mainly learning the general dependencies of the target.
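The effect of the signed weights can be illustrated for a single feature bin with the following toy sketch (per-group normalization is added here for readability; in the algorithm itself, the weights enter the histogram sums of eqn. \ref{factors} directly):
\begin{verbatim}
import numpy as np

def bin_causal_effect(y, treated, bin_idx, k):
    """treated: boolean array marking the two what-if groups."""
    m = bin_idx == k
    n_t = max((m & treated).sum(), 1)
    n_c = max((m & ~treated).sum(), 1)
    # +1/n_t for the treated group, -1/n_c for the control group, so
    # the weighted sum is mean(y|treated) - mean(y|control) in bin k
    w = np.where(treated, 1.0 / n_t, -1.0 / n_c)
    return float(np.sum(w[m] * y[m]))
\end{verbatim}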
\subsection{Confounding}
\label{confounding}
There is one issue though with this approach of predicting individual causal effects with machine learning: Since the dependencies exploited by machine learning methods are merely statistical, the underlying causal structures of the data are not necessarily reflected in the learned model, because confounding effects can lead to spurious correlations overlaying the causal dependencies both between the different features and between these and the target.
The safest and most direct way to get rid of any confounding affecting an examined causal effect is via random assignment of the variable representing the cause, also known as randomized controlled trials \cite{RCT}. If random assignment is not possible in a study, or one is left with purely observational rather than interventional data, a statistical method is needed to avoid confounding of the data used for the training of the machine learning model; one example is independence weighting by means of inverse propensity scores \cite{propensity}. For this, each input sample for the training is weighted by the inverse of the corresponding propensity score value, which can, for example, be calculated by a separate machine learning model trained on the observational values of the variable representing the cause and containing all potential confounders as features.
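A minimal sketch of such inverse-propensity weighting with scikit-learn (the choice of a logistic propensity model and the clipping threshold are illustrative assumptions):
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def inverse_propensity_weights(X_conf, treated, clip=0.01):
    """X_conf: potential confounders; treated: observed 0/1 cause."""
    model = LogisticRegression(max_iter=1000).fit(X_conf, treated)
    p = np.clip(model.predict_proba(X_conf)[:, 1], clip, 1.0 - clip)
    # weight each sample by the inverse probability of its observed group
    return np.where(treated == 1, 1.0 / p, 1.0 / (1.0 - p))
\end{verbatim}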
Rather than eliminating confounding in the data, Cyclic Boosting also makes it possible to impose causal assumptions during the model training process itself, either via exploiting the order of features, as mentioned in sec. \ref{multiregmode}, or by utilizing feature-specific smoothing functions over the factors for the different bins, in order to restrict the learning of the dependency between the target and potentially confounding or confounded features to defined parametric forms, e.g., monotonous functions. For potential confounders, the idea is to force the model to describe specific causal effects by other features, namely the true causes. For example, yearly seasonality in a demand forecasting model could be smoothed by a sinusoidal curve, leaving the description of distinct peaking structures to other features like holidays or promotions. Group-by smoothing can be used for two-dimensional features consisting of a confounder and an interventional variable to stratify the confounders. The restriction of interventional features to specific functional forms, for example an exponential price-demand elasticity in retail demand forecasting, can also help to extrapolate beyond the range of the observations in the training data.
Besides the use case of causal inference described in sec. \ref{causalinf}, such incorporation of causal assumptions into a supervised machine learning model is also beneficial in terms of general forecasting quality, because it can improve the generalizability of the learned model. For a discussion of temporal confounding in time series forecasting see \cite{wick2021demand}.
\section{Conclusion}
A new machine learning algorithm, called Cyclic Boosting, was presented, which can be
categorized as a generalized additive model with a cyclic coordinate descent optimization
featuring a boosting-like update of parameters.
Cyclic Boosting addresses the challenge of prediction explainability on a fundamental
level: Rather than relying on black box approaches, individual predictions $\hat{y}_i$
for single observations should not only be accurate but also explainable. Each prediction
calculated using the Cyclic Boosting algorithm can be explained in terms of the strength
of each feature variable contributing to the prediction.
Furthermore, Cyclic Boosting facilitates (multi-dimensional) feature engineering and
enables the modeling of hierarchical causal dependencies and the prediction of rare
effects in the data. Thereby, overfitting is effectively avoided by means of
regularization and smoothing extensions.
\section*{}
Published at ICMLA 2019.
\noindent
\copyright 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
\noindent
Compared to the ICMLA paper (v2), we added (in v3) the discussion about causality in sec. \ref{causality}.
One of the most fundamental problems in number theory is to give formulas for
the number of points on a variety over a finite field. In the simplest cases,
the number of points over ${\mathbb F}_p$ can be expressed in terms of powers of $p$
and Artin symbols describing the decomposition of $p$ in number fields; however,
this is very far from being sufficient in general. Most famously, it is
now known that isogeny classes of elliptic curves over ${\mathbb Q}$ of conductor $N$
are in bijection with newforms of level $N$ with integer coefficients;
the correspondence takes a form with Hecke eigenvalues $a_p$ to an elliptic
curve with $p+1-a_p$ points over ${\mathbb F}_p$.
In higher dimensions, we cannot expect such a statement to hold for all
varieties of a given deformation type, except in very special situations.
Nevertheless, we would like to find varieties for which the number of points
can be expressed in terms of $p$, Artin symbols, and the coefficients of
modular forms. Perhaps the most interesting case is that in which only a
single eigenform of weight $n>2$ is needed and the dimension of the variety
is $n-1$.
In dimension greater than $1$, the most natural candidates for this property
are the {\em Calabi-Yau varieties}, which are defined as follows:
\begin{defn}\label{calabi-yau} A {\em Calabi-Yau variety} is a smooth and
simply connected variety $V$ of dimension $d$ satisfying
$K_V \cong {\mathcal O}_V$ and $H^i(K_V) = 0$ for $0 < i < d$. If
$H^{d-1}({\mathcal T_V}) = 0$, where ${\mathcal T_V}$ denotes the tangent
bundle of $V$, then $V$ is {\em rigid}; this condition can also be written
as $H^{d-1,1}(V) = 0$. If $V$ is a
limit of Calabi-Yau varieties and has a Calabi-Yau resolution of
singularities, then $V$ is a {\em singular Calabi-Yau variety}.
\end{defn}
In particular, a Calabi-Yau variety of dimension $2$ is a K3 surface.
The Hecke eigenforms of weight $3$ up to twist correspond to imaginary
quadratic fields with class group of exponent dividing $2$ by
\cite[Theorem 2.4]{schutt}.
Such fields are known with at most one exception, which is excluded by
the generalized Riemann hypothesis; the list can be found
in \cite{elkies-schutt}. Elkies and Sch\" utt proved the following:
\begin{thm}\cite[Theorem 1]{elkies-schutt}\label{e-s}
Let $f$ be a Hecke eigenform of weight $3$ from the list
with eigenvalues $a_p$. Then there is a K3 surface $S_f$
such that, for all but finitely many $p$, we have
$\#S_f({\mathbb F}_p) = p^2 + c(p)p + 1 + a_p$, where $c(p)$ is a linear combination of
Artin symbols.
\end{thm}
Much less is known for forms of weight greater than $3$. The following
was shown by Dieulefait-Manoharmayum and Gouv\^ea-Yui:
\begin{thm}[\cite{dieu-man},\cite{gouvea-yui}] Let $V$ be a smooth rigid
Calabi-Yau threefold over ${\mathbb Q}$. Then $V$ has $p^3 + f(p)(p^2+p) + 1 - a_p$
points over ${\mathbb F}_p$,
where $f(p)$ is expressed in terms of Artin symbols and the $a_p$ are
coefficients of a Hecke eigenform of weight $4$.
\end{thm}
Many examples are worked out in detail in \cite{meyer} and elsewhere,
but it is not known
whether every Hecke eigenform can be realized by a rigid Calabi-Yau threefold
in this way, nor whether there are finitely or infinitely many rational
Hecke eigenforms of weight $4$.
In higher dimension, there are almost no examples. If the dimension of the
space of cusp forms of weight $k$ and level $N$ is $1$, then the Kuga-Sato
construction \cite{deligne} gives a Calabi-Yau variety realizing the form.
Ahlgren \cite{ahlgren} in effect studies the case $k = 6, N = 4$,
and Paranjape and Ramakrishnan \cite{p-r} consider several others.
In addition, Frechette, Ono, and Papanikolas have
shown \cite{fop} how to construct varieties that realize the forms of levels
$2, 4, 8$ in arbitrary weight; however, in general the dimension of these
spaces is not $1$ and we do not obtain a Calabi-Yau variety.
Roberts has conjectured \cite[Conjecture 1.1]{r} that up to twist there
are only finitely many newforms with rational coefficients and not of CM type
(complex multiplication, i.e., for which there is an integer $N$ such that
$a_p = 0$ for all primes $p$ with $\qr{N}{p} = -1$) for $k \ge 6$ and none for
$k \ge 52$.
\begin{remark}\label{rem:strongly-rigid}
In general the condition of rigidity is insufficient for the arithmetic
applications. Even if $V$ is a rigid
Calabi-Yau fivefold, its point counts over ${\mathbb F}_p$ may not be expressible in
terms of modular forms; for example, it is possible that $h^{3,2}(V) > 0$
(recall the notation $h^{i,j}$ for $\dim H^{i,j}$),
and that $H^5_{{\hbox{\' et}}}(V,{\mathbb Z}_\ell)$ is irreducible of dimension $>2$. However,
we expect that if $V$ is a rigid Calabi-Yau fivefold that also satisfies the
conditions $h^{2,1}(V) = h^{3,1}(V) = h^{3,2}(V) = 0$, then the number of
${\mathbb F}_p$-points of $V$ can be described by a formula like those above.
\end{remark}
\begin{defn}\label{defn:strongly-rigid}
Let $V$ be a Calabi-Yau variety such that $h^{i,j}(V) = 0$ for all $i,j$
except with $i = j$ or $\{i,j\} = \{0,\dim V\}$, and, if $\dim V$ is even,
satisfying the additional condition that $H_{\hbox{\' et}}^{\dim V}(V,{\mathbb Z}_\ell)$ splits
as a direct sum of Galois representations
$(H^{\dim V,0} \oplus H^{0,\dim V}) \oplus H^{\dim V/2,\dim V/2}$. Then we say that
$V$ is {\em strongly rigid} (and we expect that the number of points of
$V$ over ${{\mathbb F}_p}$ can be expressed
in terms of powers of $p$, Artin symbols, and coefficients of a single
rational eigenform of weight $\dim V + 1$).
\end{defn}
The main goal of this paper is to work out two examples of fivefolds. One
realizes the newform of weight $6$ and level $8$; the other, the newform
of weight $6$ and level $32$ with complex multiplication.
Both are double covers
of $\P^5$ branched along a union of $12$ hyperplanes; however, the methods are
somewhat different.
In the first example, which bears a close resemblance to certain rigid
Calabi-Yau threefolds, we will show that the double cover of $\P^5$ branched
along the hyperplanes $x_i = 0, x_i+x_{i+1} = 0$ is modular of level $8$ by
finding a fibration by quotients of products of Kummer surfaces closely
related to the construction of \cite{fop}. It does not seem to follow
directly from known results that this variety is a singular Calabi-Yau,
but we intend to prove this in \cite{ingalls-logan} and the modularity does
not depend on it in any case. We will also use an idea of Burek \cite{burek}
to construct a quotient of the variety that appears to be a strongly rigid
Calabi-Yau.
Similarly, the second example is also a double cover of $\P^5$ branched along
the union of $12$ hyperplanes. It will be proved modular by exhibiting a
fibration
by quotients of products of K3 surfaces $(K \times L_\alpha)/\sigma_1$, where
$\sigma_1$ is an involution and $K$ is the same for all fibres.
In fact $K$ has Picard
number $20$ and realizes the CM newform of weight $3$ and level $16$ with
quadratic character. We will
then show that the total space of the $L_\alpha$ is birational to
$(K \times E)/\sigma_2$, where $E$ is an elliptic curve with complex
multiplication by ${\mathbb Z}[i]$. With this done, it is easy to express the number
of points in terms of the coefficients of CM newforms associated to powers of
the same character. For this example, we will again use Burek's idea to
construct what appears to be a strongly rigid Calabi-Yau quotient.
These do not exhaust the examples of apparently modular double covers of
$\P^5$ that we have discovered. For example, we rediscover Ahlgren's fivefold
discussed in \cite{ahlgren}. In addition, we have found two more unions of
$12$ hyperplanes, not projectively equivalent to the
first one or to each other, such that the double cover has a Calabi-Yau
resolution that appears to be strongly rigid and to realize the form of level
$8$. One of them is notable for its large symmetry group, with the
symmetric group on $5$ symbols acting faithfully on the set of $12$
hyperplanes; the other, for
admitting a fibration in quotients of products of K3 surfaces that is similar
but distinctly different to that in the first example discussed here. This
appears to point to a previously undiscovered identity of hypergeometric
functions. Although these examples certainly have resolutions of singularities
that are Calabi-Yau varieties, it is not easy to prove that the associated
Galois representations are as expected. They and other examples
will be studied in future work.
\section{Notation}\label{sec:notation}
We start by introducing some notation that will apply throughout the paper.
\begin{defn}\label{notation}
We will often work in $\P^5({\mathbb Q})$. Whenever we are in a projective space
of dimension $5$, the variables will be denoted by $x_0, \dots, x_5$,
with the usual understanding that $x_i = x_{i+6}$. We will also use
weighted projective space with weights $6,1,1,1,1,1,1$, this being the
natural home for double covers of $\P^5$ with branch locus of degree $12$.
The variables there will be $t, x_0, \dots, x_5$, and a map from
$\P(6,1,1,1,1,1,1)$ to $\P^5$ will always be given by omitting $t$.
At times we will use
other projective spaces, referring to their variables as $y_0, \dots, y_m$
or $z_0, \dots, z_m$.
\end{defn}
\begin{defn} Let $V$ be a variety over a ring with a chosen map to ${\mathbb F}_p$.
We write $[V]_p$ for the number of points of $V$ base changed to ${\mathbb F}_p$.
By abuse of language we will also use this notation when $V$ is defined over
${\mathbb Q}$ by equations
with coefficients whose denominators are not multiples of $p$.
\end{defn}
\begin{defn} We use $\phi$ to denote the quadratic character modulo a prime
$p$ (we will never be considering more than one prime at a time, so this
will not lead to ambiguity).
\end{defn}
\section{The first example: level $8$}\label{sec:first}
Our first example of a modular fivefold of level $8$ is the double cover
$F_1$ of $\P^5$ defined by the equation
$$t^2 = \prod_{i=0}^5 x_i (x_i + x_{i+1}).$$
This equation is reminiscent of the first arrangement of $8$
hyperplanes in $\P^3$ given in the table of \cite[{p.{} 68}]{meyer}, which
likewise defines a variety that is modular of level~$8$.
\begin{remark}\label{aut-f1}
The group of automorphisms of the double cover $F_1 \to \P^5$
is of order $24$: it is generated by the maps
$$(t:x_i) \to (-t:x_i), \quad (t:x_i) \to (t:x_{i+1}), \quad
(t:x_i) \to (t:x_{5-i}).$$
This is checked by verifying that the only automorphisms of the dual
$\check \P^5$ preserving the $12$ points corresponding to the $12$
hyperplanes are the obvious ones.
\end{remark}
We will prove its modularity in the following precise form:
\begin{thm}\label{main-first}
Let $a_p, b_p$ be the Hecke eigenvalues for the unique newforms of level
$8$ and weight $6, 4$ respectively. Then
\begin{equation}\label{f1-formula}
[F_1]_p = p^5 + p^4 + p^3 + p^2 + p + 1 - a_p - (b_p + \phi(-1) p) p.
\end{equation}
\end{thm}
\subsection{Proof of modularity}\label{proof-mod-f1}
We will prove this theorem by means of a fibration by quotients of products of
two K3
surfaces. We will relate these to the Kummer surface of the square of an
elliptic curve and use this to express the number of points in terms of
hypergeometric functions over a finite field, thus reducing to results
of \cite{fop}, especially Theorem 1.1.
The statements in this section are proved in full; however, since it is easy to
make mistakes in such computations, they are also verified numerically in the
file {\tt code-8.mag} in \cite{magma-scripts}.
\begin{defn} Let $\pi$ be the rational map $F_1 \to \P^1$ defined by
$(x_0:x_3)$.
\end{defn}
The justification for this definition is that if we set $x_0 = \lambda, x_3 = 1$
in the equations of the $12$ hyperplanes along which
the double cover $F_1 \to \P^5$ is branched, we can divide them into
two sets of $6$ equations, one depending only on $\lambda, x_1, x_2$,
the other only on $\lambda, x_4, x_5$.
\begin{defn}\label{def:kla-lla}
Let $K_\lambda, L_\lambda$ be the surfaces in weighted projective space
$\P(3,1,1,1)$ defined by the equations
\begin{equation}
\label{kla-lla}
\begin{split}
v^2 &= (\lambda+1) z_0z_1z_2(\lambda z_0+z_1)(z_1+z_2)(z_0+z_2), \\
w^2 &= \lambda(\lambda+1)y_0y_1y_2(y_0+y_1)(\lambda y_0+y_2)(y_1+y_2)\\
\end{split}
\end{equation}
and let
$A_\lambda, B_\lambda$ be the affine patches $z_0 \ne 0, y_0 \ne 0$.
(The twist by $\lambda(\lambda+1)$
is made to facilitate the comparison with a Kummer surface, as we will
see soon.) Let $F_\lambda$ be the affine patch of the fibre of $\pi$ above
$(\lambda:1)$ where $x_3 \ne 0$.
\end{defn}
\begin{prop}\label{fib-is-prod} For $\lambda \in {\mathbb F}_p$ with $\lambda \ne 0, -1$,
the fibre of $\pi$ at $(\lambda:1)$ is birationally equivalent
to $(K_\lambda \times L_\lambda)/\sigma$, where $\sigma$ is the automorphism
$$((v:z_0:z_1:z_2),(w:y_0:y_1:y_2)) \to ((-v:z_0:z_1:z_2),(-w:y_0:y_1:y_2)).$$
\end{prop}
\begin{proof} As above, if we substitute $x_0 = \lambda, x_3 = 1$ in the equations
for $F_1$, the variables separate and the $12$ linear forms can be expressed
as $6$ in $x_1, x_2$ and $6$ in $x_4, x_5$. Thus there is an obvious map
$\tau$ of degree $2$ from the product of the two double covers of $\mathbb{A}^2$
branched along these loci to the fibre of $\pi$, and
$\tau \circ \sigma = \tau$. The result follows by taking the projective
closure.
\end{proof}
We now show how to count points on such quotients of products of double covers.
\begin{lemma}\label{count-q-dc}
Let $p$ be an odd prime. For $1 \le i \le n$, let $S_i$ be an affine or
projective space over ${\mathbb F}_p$ and let $V_i$ be the hypersurface defined by
$f_i = 0$ in $S_i$, where the degree of $f_i$ is required to be even if $S_i$ is
projective. Let $v_{i,+}, v_{i,0}, v_{i,-}$ be the number of points of $S_i$
where $f_i$ is a nonzero square, $0$, or a nonsquare respectively.
Let $D_{f_i}$ be the double covers of $S_i$ defined by $s_i^2 - f_i = 0$,
and let $D_V = (D_{f_1} \times \dots \times D_{f_n})/\sigma$,
where $\sigma$ is the involution that
negates all the $s_i$ and fixes all coordinates of the $S_i$. Then
$[D_{V}]_p = \prod_{i=1}^n [S_i]_p + \prod_{i=1}^n(v_{i,+}-v_{i,-})$.
\end{lemma}
\begin{proof}
A point $(P_1,\dots,P_n)$ of $S_1 \times \dots \times S_n$ lies
under $1+\phi(\prod_{i=1}^n f_i(P_i))$ points of $D_V$ (this value of $\phi$ is
well-defined by our assumption on $\deg f_i$).
But $\prod_{i=1}^n f_i(P_i)$ is a nonzero square if and only if all
$\phi(f_i(P_i))$ are nonzero and an even number of them are $-1$, and is a
nonsquare if and only if all $\phi(f_i(P_i))$ are nonzero and an odd number
of them are $-1$.
Let $P, M, Z$ be the sets of sequences $\epsilon = (\epsilon_1,\dots,\epsilon_n)$
of $n$ symbols from $\{+,0,-\}$ whose product is $1, -1, 0$
respectively. To restate the above, we have
$[D_{V}]_p = 2 \sum_{\epsilon \in P} \prod_{i=1}^n v_{i,\epsilon_i} + \sum_{\epsilon \in Z} \prod_{i=1}^n v_{i, \epsilon_i} $.
On the other hand, $\prod_{i=1}^n [S_i]_p = \prod_{i=1}^n (v_{i,+}+v_{i,0}+v_{i,-})$, so
$$ [D_{V}]_p - \prod_{i=1}^n [S_i]_p = \sum_{\epsilon \in P} \prod_{i=1}^n v_{i,\epsilon_i} - \sum_{\epsilon \in M} \prod_{i=1}^n v_{i,\epsilon_i} = \prod_{i=1}^n (v_{i,+} - v_{i,-}).$$
\end{proof}
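The identity can also be tested numerically. The following Python sketch is
ours and hypothetical (it is independent of the Magma scripts in
\cite{magma-scripts}); it checks the case $n = 2$ with $S_1 = S_2 = \mathbb{A}^2$
and randomly chosen quadratics $f_i$, counting the fibres of $D_V$ as in the
first sentence of the proof.
\begin{verbatim}
# Sketch: test Lemma count-q-dc for n = 2 with S_1 = S_2 = A^2 over F_p.
import random

def phi(a, p):
    # quadratic character on F_p, with phi(0) = 0
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def values(c, p):
    # values of a dense quadratic f(x, y) at all points of A^2(F_p)
    return [(c[0] + c[1]*x + c[2]*y + c[3]*x*x + c[4]*x*y + c[5]*y*y) % p
            for x in range(p) for y in range(p)]

for p in [3, 5, 7, 11]:
    for _ in range(3):
        v1 = values([random.randrange(p) for _ in range(6)], p)
        v2 = values([random.randrange(p) for _ in range(6)], p)
        # [D_V]_p: the point (P_1, P_2) lies under 1 + phi(f_1(P_1) f_2(P_2))
        direct = sum(1 + phi(a * b, p) for a in v1 for b in v2)
        d1 = sum(phi(a, p) for a in v1)   # v_{1,+} - v_{1,-}
        d2 = sum(phi(b, p) for b in v2)   # v_{2,+} - v_{2,-}
        assert direct == p**4 + d1 * d2   # the formula of the lemma
print("Lemma verified in the sampled cases")
\end{verbatim}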
\begin{prop}\label{count-prod}
For all odd primes $p$ and all $\lambda \in {\mathbb F}_p$ with $\lambda \ne 0, -1$ we have
$[F_\lambda]_p = [A_\lambda]_p[B_\lambda]_p - p^2[A_\lambda]_p - p^2[B_\lambda]_p + 2p^4$ and
$[K_\lambda]_p - [A_\lambda]_p = [L_\lambda]_p - [B_\lambda]_p = p+1$.
\end{prop}
\begin{proof}
The first statement can be proved from Lemma \ref{count-q-dc}. As an alternative we show how to prove it by means of character sums. We have
\begin{equation}
\label{char-sums}
\begin{split}
[F_\lambda]_p &= \sum_{y_1, y_2, y_4, y_5 \in {\mathbb F}_p} 1+\phi(\lambda(\lambda+1) y_1y_2(\lambda+y_1)(y_1+y_2)(y_2+1) \times\\
&\qquad\qquad\qquad\qquad (\lambda+1)y_4y_5(1+y_4)(y_4+y_5)(y_5+\lambda)),\\
[A_\lambda]_p &= \sum_{y_1,y_2 \in {\mathbb F}_p} 1 + \phi((\lambda+1) y_1y_2(\lambda+y_1)(y_1+y_2)(y_2+1)),\\
[B_\lambda]_p &= \sum_{y_4,y_5 \in {\mathbb F}_p} 1 + \phi(\lambda(\lambda+1)y_4y_5(1+y_4)(y_4+y_5)(y_5+\lambda)).\\
\end{split}
\end{equation}
Thus
\begin{equation}
\label{f-ab-decomp}
\begin{split}
[F_\lambda]_p &= [A_\lambda]_p[B_\lambda]_p
- \sum_{y_1,y_2 \in {\mathbb F}_p}\phi((\lambda+1) y_1y_2(\lambda+y_1)(y_1+y_2)(y_2+1))\\
&\qquad - \sum_{y_4,y_5 \in {\mathbb F}_p}\phi(\lambda(\lambda+1)y_4y_5(1+y_4)(y_4+y_5)(y_5+\lambda))\\
&= [A_\lambda]_p[B_\lambda]_p - p^2([A_\lambda]_p-p^2) - p^2([B_\lambda]_p-p^2)\\ \end{split}
\end{equation}
as claimed.
The other statements are obvious since the points of $K_\lambda \setminus A_\lambda$
and $L_\lambda \setminus B_\lambda$ are exactly the projective points with
$z_0 = v = 0$.
\end{proof}
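For small $p$, both statements can be checked directly from the definitions.
Here is a hypothetical Python sketch of ours, which computes $[F_\lambda]_p$ from
the product of the $12$ linear forms at $x_0 = \lambda, x_3 = 1$ (this differs
from the integrand displayed above by the square $(\lambda+1)^2$, which does
not affect $\phi$) and the other counts from Definition \ref{def:kla-lla}.
\begin{verbatim}
# Sketch: test Proposition count-prod for small p and lambda != 0, -1.
def phi(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def P2(p):   # representatives of the points of P^2(F_p)
    return ([(1, a, b) for a in range(p) for b in range(p)]
            + [(0, 1, b) for b in range(p)] + [(0, 0, 1)])

for p in [5, 7]:
    for lam in range(1, p - 1):
        gK = lambda z0, z1, z2: (lam+1)*z0*z1*z2*(lam*z0+z1)*(z1+z2)*(z0+z2)
        A = sum(1 + phi(gK(1, z1, z2), p)
                for z1 in range(p) for z2 in range(p))
        B = sum(1 + phi(lam*(lam+1)*y4*y5*(1+y4)*(y4+y5)*(y5+lam), p)
                for y4 in range(p) for y5 in range(p))
        F = sum(1 + phi(lam*y1*y2*(lam+y1)*(y1+y2)*(y2+1)
                        * y4*y5*(1+y4)*(y4+y5)*(y5+lam), p)
                for y1 in range(p) for y2 in range(p)
                for y4 in range(p) for y5 in range(p))
        assert F == A*B - p*p*A - p*p*B + 2*p**4
        K = sum(1 + phi(gK(*P), p) for P in P2(p))
        assert K - A == p + 1
print("Proposition verified for p = 5, 7")
\end{verbatim}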
We now study $K_\lambda$ and $L_\lambda$. By exchanging $z_1, z_2$ we see that
they are quadratic twists of each other by $\lambda$. In other words,
we have
$$[K_\lambda]_p - (p^2+p+1) = \phi(\lambda)([L_\lambda]_p - (p^2+p+1)).$$ It thus suffices
to consider $K_\lambda$.
\begin{defn}\label{e-lambda} For $\lambda \ne 0, -1$, let
$E_\lambda$ be the elliptic curve defined by
$y^2 = x^3 -2x^2 + \frac{\lambda}{\lambda+1}x$. Let $\mathcal{K}_\lambda$ be the Kummer surface
of $E_\lambda \times E_\lambda$: it has a singular model in weighted projective
space $\P(3,1,1,1)$ defined by
$$v^2 = \prod_{i=0}^1 (z_i^3-2z_i^2z_2 + \frac{\lambda}{\lambda+1}z_iz_2^2).$$
Let $a_{\lambda,p} = p+1-\#E_\lambda({\mathbb F}_p)$, and let $a_{0,p} = 0$.
\end{defn}
\begin{prop}\label{count-kum} Let $\lambda \in {\mathbb F}_p$ with $\lambda \ne 0, -1$. The minimal
desingularization $\widetilde \mathcal{K}_\lambda$ of $\mathcal{K}_\lambda$ has
$p^2+\left(12+6\phi(\lambda)\right)p+1+a_{\lambda,p}^2$ points
over ${\mathbb F}_p$.
\end{prop}
\begin{proof} Consider the elliptic fibration defined by $(z_0:z_2)$.
It has four $\tilde D_4$ fibres at $(1:0)$ and $(\alpha:1)$, where
the $\alpha$ are the $x$-coordinates of the $2$-torsion points of $E_\lambda$;
these are all of the bad fibres.
If $\lambda \in {\mathbb F}_p^2$, then these fibres and all of their components are
rational and thus contribute $4(5p+1)$ points. If not, then only two of
them are rational and only three components of each, so we obtain
$2(3p+1)$ points.
We now turn to the good fibres. If $x_0$ is the $x$-coordinate of two points
of $E_\lambda$, then the fibre at $(x_0:1)$ is isomorphic to $E_\lambda$ and therefore
has $p+1-a_{\lambda,p}$ points. If not, it is isomorphic to the quadratic twist
of $E_\lambda$, so it has $p+1+a_{\lambda,p}$ points. If $\lambda$ is a square, there are
$(p-3-a_{\lambda,p})/2$ of the first type and $(p-3+a_{\lambda,p})/2$ of the second type
(because the total number of points is $p+1-a_{\lambda,p}$ and $4$ of them have
$x$-coordinates with exactly $1$ point). Thus $\mathcal{K}_\lambda$ has
$4(5p+1)+((p-3-a_{\lambda,p})(p+1-a_{\lambda,p})+(p-3+a_{\lambda,p})(p+1+a_{\lambda,p}))/2 = p^2+18p+1+a_{\lambda,p}^2$
points. Likewise, if $\lambda$ is not a square, there are $p^2+6p+1+a_{\lambda,p}^2$
points.
\end{proof}
\begin{remark} This could alternatively be proved by means of \' etale
cohomology. The point is that
$\mathop{\rm Sym}^2 H^1_{\hbox{\' et}}(E, {\mathbb Z}_p)$ is a component of
$H^2_{\hbox{\' et}}(\widetilde \mathcal{K}_\lambda,{\mathbb Z}_p)$ whose complement is
generated by the fundamental classes of the exceptional divisors above the
$2$-torsion points of $E \times E$ and by the images of
$E \times \{0\}, \{0\} \times E$, and the diagonal. Thus, if $\alpha, \beta$
are the eigenvalues of Frobenius on $H^1_{\hbox{\' et}}(E,{\mathbb Z}_p)$, those on
$\mathop{\rm Sym}^2$ are $\alpha^2, \alpha \beta = p, \beta^2$. On the complement,
the eigenvalue $p$ occurs $19$ times if the curve has full level-$2$
structure, and if not the multiplicities of $p, -p$ are $13, 6$.
\end{remark}
\begin{prop}\label{fib-on-kum} $\widetilde \mathcal{K}_\lambda$ admits an elliptic fibration
whose general fibre is isomorphic to the elliptic curve defined by
$$y^2 = x^3 + \frac{4t-2}{(\lambda+1)t(\lambda t+1)}x^2 +
\frac{1}{((\lambda+1)(\lambda t+1)t)^2} x$$
and that has singular fibres of types $I_4^*, I_1^*, I_0^*, I_1$.
\end{prop}
\begin{proof}
We define the fibration by the equations
$$[z_0 z_1/\lambda - z_2^2/(\lambda+1):z_0^2 - 2z_0z_2 + \lambda z_2^2/(\lambda+1)].$$
In Magma \cite{magma}
it is routine to define the general fibre of this map, express it
as an elliptic curve, and show that it has the desired properties.
\end{proof}
\begin{prop}\label{fib-on-k} The minimal desingularization $\widetilde K_\lambda$ of
$K_\lambda$ admits a fibration in curves of genus $1$
whose general fibre is $2$-isogenous to that of the fibration on
$\mathcal{K}_\lambda$ introduced in Proposition \ref{fib-on-kum}. The singular fibres
of this fibration corresponding to those of that fibration are of types
$I_2^*, I_2^*, I_0^*, I_2$, and all components of the
reducible fibres are defined over the field to which $\lambda$ belongs.
\end{prop}
\begin{proof} The desired fibration is defined by $(z_0:z_1)$.
Again, it is a simple matter to show that the general fibre is isomorphic
to the elliptic curve defined by
$$y^2 = x^3 + \frac{t-2}{t(\lambda+1)(\lambda t+1)}x^2 +
\frac{1-t}{((\lambda+1)(\lambda t+1)t)^2}x,$$ that its bad fibres are as stated,
and that the quotient map by the subgroup of order $2$ generated by
$(\frac{1-t}{(\lambda+1)(\lambda t^2+t)}:0:1)$ is the desired isogeny.
\end{proof}
\begin{thm}\label{count-kl} Suppose as before that $\lambda \ne 0, -1$.
Then $[K_\lambda]_p = p^2 + 1 + a_{\lambda,p}^2$.
\end{thm}
\begin{proof}
We compare the numbers of points by means of the fibrations of Propositions
\ref{fib-on-kum}, \ref{fib-on-k}.
Let $\delta$ be $-1$ if the tangent directions at the singularity of the
$I_1$ fibre of the fibration on $\widetilde \mathcal{K}_\lambda$ are rational
and $1$ otherwise.
Then this fibre has $p+1+\delta$
points. The two components of the $I_2$ fibre on $\widetilde K_\lambda$ meet in two points
defined over the same field as the tangent directions of the singularity,
so their union has $2p+1+\delta$ points.
All $21$ components of the reducible fibres on $\widetilde K_\lambda$ are defined over the
ground field. When $\lambda \in {\mathbb F}_p^2$ the same is true for the $20$ components
of reducible fibres of $\widetilde \mathcal{K}_\lambda$, but
otherwise only $8$ of them are: the double curve and two of the tails in the
$I_0^*$, the two double curves and two of the tails in the $I_1^*$, and
the central component of the $I_4^*$.
Thus, when $\lambda \in {\mathbb F}_p^2$, there are $21p+4+\delta$ points on singular
fibres of the fibration on $\widetilde \mathcal{K}_\lambda$, and likewise $21p+4+\delta$ points on
singular fibres of the fibration on $\widetilde K_\lambda$. The good fibres have
the same number of points on each, because corresponding fibres are related
by an isogeny. Hence $\widetilde K_\lambda$ has the same number of points as
$\widetilde \mathcal{K}_\lambda$.
Similarly, when $\lambda \notin {\mathbb F}_p^2$, there are $9p+4+\delta$ points on
singular fibres on $\widetilde \mathcal{K}_\lambda$, so $\widetilde K_\lambda$ has $12p$ more points
than $\widetilde \mathcal{K}_\lambda$. We conclude, in view of Proposition \ref{count-kum},
that $\widetilde K_\lambda$ has $p^2+18p+1+a_{\lambda,p}^2$ points.
To finish the proof, we note that for generic $\lambda$, the singular
subscheme of $K_\lambda$ has degree $18$, and that all components of the
resolutions are defined over the base field
(again, this is an easy computation). The singular subscheme is unaltered
by specializations that do not cause additional pairs of lines in the
ramification locus to meet: the only $\lambda$ for which such coincidences
occur are $0, -1$, which are not permitted. Hence $\widetilde K_\lambda$ has
$18p$ more points than $K_\lambda$, and the result follows.
\end{proof}
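The theorem can be tested numerically against naive counts; the following
hypothetical Python sketch (ours, not part of the Magma verification) does
this for all admissible $\lambda$ and $p \le 13$.
\begin{verbatim}
# Sketch: test [K_lambda]_p = p^2 + 1 + a_{lambda,p}^2.
def phi(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def a_lam(p, lam):
    # trace of Frobenius of E_lambda: y^2 = x^3 - 2x^2 + (lam/(lam+1)) x
    c = lam * pow(lam + 1, p - 2, p) % p
    return p + 1 - (1 + sum(1 + phi(x**3 - 2*x*x + c*x, p)
                            for x in range(p)))

for p in [5, 7, 11, 13]:
    pts = ([(1, a, b) for a in range(p) for b in range(p)]
           + [(0, 1, b) for b in range(p)] + [(0, 0, 1)])
    for lam in range(1, p - 1):            # lambda != 0, -1 in F_p
        K = sum(1 + phi((lam+1)*z0*z1*z2*(lam*z0+z1)*(z1+z2)*(z0+z2), p)
                for (z0, z1, z2) in pts)
        assert K == p*p + 1 + a_lam(p, lam)**2
print("Theorem verified for p <= 13")
\end{verbatim}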
\begin{cor}\label{count-ll} Let $\lambda \ne 0, -1$ as before. Then
$[L_\lambda]_p = p^2 + p + 1 + \phi(\lambda) (a_{\lambda,p}^2-p)$.
\end{cor}
\begin{proof} This follows immediately from the fact that $L_\lambda$ is the twist
of the double cover $K_\lambda \to \P^2$ by $\lambda$. If $\lambda$ is a square the two
surfaces have the same number of points; if not, the sum is twice the number
of points of $\P^2$.
\end{proof}
Combining this with Proposition \ref{count-prod} (together with the
observation that $[F_0]_p = p^4$, because $x_0=0$ is one of the hyperplanes and
so there is one point for each point of $\mathbb{A}^4({\mathbb F}_p)$), we immediately obtain
the following:
\begin{cor}\label{count-f}
Let $\lambda \ne -1$. Then $F_\lambda$, the affine patch of the fibre of
$\pi$ above $(\lambda:1)$,
has $p^4 + \phi(\lambda) (a_{\lambda,p}^2-p)^2$ points.
\end{cor}
Neither the equation $y^2 = x^3 - 2x^2 + \frac{\lambda}{\lambda+1}x$ nor any twist gives
an elliptic curve when $\lambda = -1$, so we have to change the formula somewhat.
However, the separation of the variables still allows us to write
$[F_\lambda]_p$ in terms of a K3 surface and its quadratic twist by $\lambda$.
\begin{defn}\label{kmin1}
Let $K_{-1}, L_{-1}$ be the surfaces defined by
$$ v^2 = z_0z_1z_2(-z_0+z_1)(z_1+z_2)(z_0+z_2), \quad
v^2 = -z_0z_1z_2(z_0+z_1)(-z_0+z_2)(z_1+z_2),$$
and $A_{-1}, B_{-1}$ the affine patches $z_0 \ne 0$.
Let $a_{-1,p} = p+1-[E]_p$, where $E$ is the elliptic curve with affine
equation $y^2 = x^3 - x$.
\end{defn}
As before, by exchanging $z_1, z_2$ we see that $K_{-1}, L_{-1}$ are quadratic
twists of each other by $-1$. On the other hand, the map $(v:-z_0:z_1:z_2)$
is an isomorphism $K_{-1} \to L_{-1}$.
\begin{prop}\label{count-m1}
For all odd primes $p$ we have
$[K_{-1}]_p = [L_{-1}]_p = p^2 - \phi(-1) p + 1 + a_{-1,p}^2$.
Further, $F_{-1}$ has
$p^4 + (2p-a_{-1,p}^2)^2$ points for $p \equiv 1 \bmod 4$
and $p^4$ for $p \equiv 3 \bmod 4$.
\end{prop}
\begin{proof}
For $p \equiv 3 \bmod 4$, we have stated above that $K_{-1}, L_{-1}$ are
isomorphic to their twists by $-1$, which is not a square in ${\mathbb F}_p$, so
the number of points is $p^2+p+1$. This is as claimed, since $a_{-1,p} = 0$
for such $p$.
In the case $p \equiv 1 \bmod 4$, the
argument is very similar to that given above. The two surfaces are
isomorphic, so we only consider $K_{-1}$. Again we begin with the
fibration $(z_0:z_1)$, for which the general fibre is defined by
\begin{equation}
y^2 = x^3 + (-t^3+t)x^2 + (t^5-2t^4+t^3)x
\end{equation}
and which has three fibres of type $I_2^*$ (since the $I_0^*$ and $I_2$ of the generic
case come together). All components of the singular fibres
are rational. We consider the quotient of this elliptic curve by
$(0:0)$, obtaining a curve defined by
\begin{equation}
y^2 = x^3 + (2t^3-2t)x^2 + (t(t-1)^2)^2 x.
\end{equation}
This curve has an $I_4^*$ fibre at $1$ and $I_1^*$ at $0, \infty$; the action
of Galois on all three is trivial (here we use that $p \equiv 1 \bmod 4$).
Let $\widetilde S_{-1}$ be the K3 surface given
by the minimal desingularization of this curve. Then as in the proof of
Theorem \ref{count-kl} we have
$[\widetilde K_{-1}]_p = [\widetilde S_{-1}]_p$.
On the other hand, we consider the Kummer surface $\mathcal{K}_{-1}$ of
$E_{-1} \times E_{-1}$, where $E_{-1}$ is defined by $y^2 = x^3 - x$. It can
be defined in $\P(3,1,1,1)$ by $v^2 = (z_0^3-z_0z_2^2)(z_1^3-z_1z_2^2)$.
The map defined by
\begin{equation}
\begin{split}
&((z_0+z_2)(4z_0^2z_1 + z_0z_1^2 - z_1^2z_2 - z_0z_2^2 -
2z_1z_2^2 - z_2^3)/4:\\
&\quad (z_0z_1 - z_0z_2 - z_1z_2 - z_2^2)(3z_0z_1 - z_0z_2 - z_1z_2 - z_2^2)/3)\\
\end{split}
\end{equation}
induces an elliptic fibration on the minimal desingularization whose general
fibre is isomorphic to that above. Thus $[\widetilde \mathcal{K}_{-1}]_p = [\widetilde S_{-1}]_p$.
But as before $[\widetilde \mathcal{K}_{-1}]_p = p^2 + 18p + 1 + a_{-1,p}^2$. Since
the singular subscheme of $K_{-1}$ has degree $19$, and all the exceptional
curves are defined over ${\mathbb F}_p$, we have
$$[K_{-1}]_p = [\widetilde K_{-1}]_p -19p = [\widetilde \mathcal{K}_{-1}]_p - 19p = p^2 - p + 1 + a_{-1,p}^2$$
as claimed.
The count of points on $F_{-1}$ follows from this by Proposition
\ref{count-prod} as in Corollary \ref{count-f}. (Although the relation
between $K_\lambda, A_\lambda$ and $L_\lambda, B_\lambda$ is only stated for $\lambda \ne -1$,
it clearly applies when $\lambda = -1$ as well.) Alternatively, we may use
Proposition \ref{fib-is-prod} together with Lemma \ref{count-q-dc}.
Let $k_+, k_0, k_-$ be the number
of points of the affine patch $z_0 \ne 0$ of $\P^2$ where the branch function
of $K_{-1}$ is a nonzero square, $0$, or a nonsquare respectively: then
$[F_{-1}]_p = p^4 + (k_+-k_-)^2.$ The first
statement of the proposition says that $k_+ - k_- = a_{-1,p}^2 - 2p$; since
only the square of this quantity enters, the result follows.
\end{proof}
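Both statements can again be tested numerically; a hypothetical Python
sketch of ours:
\begin{verbatim}
# Sketch: test Proposition count-m1 for small odd p.
def phi(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

for p in [5, 7, 11, 13]:
    a = p + 1 - (1 + sum(1 + phi(x**3 - x, p) for x in range(p)))
    pts = ([(1, u, v) for u in range(p) for v in range(p)]
           + [(0, 1, v) for v in range(p)] + [(0, 0, 1)])
    K = sum(1 + phi(z0*z1*z2*(-z0+z1)*(z1+z2)*(z0+z2), p)
            for (z0, z1, z2) in pts)
    assert K == p*p - phi(-1, p)*p + 1 + a*a
    # [F_{-1}]_p from the product of the 12 linear forms at x_0 = -1, x_3 = 1
    F = sum(1 + phi(-y1*y2*(y1-1)*(y1+y2)*(y2+1)
                    * y4*y5*(1+y4)*(y4+y5)*(y5-1), p)
            for y1 in range(p) for y2 in range(p)
            for y4 in range(p) for y5 in range(p))
    assert F == p**4 + ((2*p - a*a)**2 if p % 4 == 1 else 0)
print("Proposition verified for p <= 13")
\end{verbatim}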
We now recall the notation for hypergeometric functions over finite fields
from~\cite{fop}.
\begin{defn}[{\cite[(2.2), (2.4), (1.1), (1.3)]{fop}}]\label{fop-notation}
Let ${{}_3E_2}(\lambda)$ be the elliptic curve defined by
$y^2 = (x-1)(x^2+\lambda)$ and, for $\lambda \in {\mathbb F}_p$ with
$\lambda^2 \ne -\lambda$, let ${{}_3A_2}(p,\lambda)$ be the trace of
Frobenius of ${{}_3E_2}(\lambda)$ over ${\mathbb F}_p$. In addition, for characters
$A$ and $B$ on ${\mathbb F}_p$, let $\binom{A}{B}$ be the normalized Jacobi sum
$\frac{1}{p}\sum_{x \in {\mathbb F}_p} A(x) \bar B(x-1)$, where the bar denotes the
complex conjugate.
Let $\phi$ be the quadratic character on ${\mathbb F}_p$ and let
${{}_3F_2}(\lambda) = \frac{p}{p-1} \sum_\chi {\binom{\phi \chi}{\chi}}^3 \chi(\lambda)$,
where the sum runs over all characters $\chi$ of ${\mathbb F}_p$.
\end{defn}
We restate the basic relation between ${{}_3F_2}$ and ${{}_3A_2}$.
\begin{thm}[{\cite[Theorem 4.3 (2), Theorem 4.4 (2)]{fop}}]
$${{}_3F_2}\left(1+\frac{1}{\lambda}\right) = \frac{\phi(-\lambda)({{}_3A_2}(p,\lambda)^2-p)}{p^2}$$
for $\lambda \in {\mathbb F}_p$ with $\lambda \ne 0, -1$. In addition, if
$p \equiv 1 \bmod 4$ we have ${{}_3F_2}(1) = \frac{4a^2-2p}{p^2}$ where $a$ is
an odd integer such that $p - a^2$ is a square, and if $p \equiv 3 \bmod 4$
then ${{}_3F_2}(1) = 0$.
\end{thm}
To relate our notation to that of \cite{fop} requires a simple statement
about elliptic curves.
\begin{prop}\label{same-curve} For $\lambda \ne 0, -1$ we have
${{}_3A_2}(p,\frac{-1}{\lambda+1})^2 = a_{\lambda,p}^2$. Equivalently, we have
${{}_3A_2}(p,\mu)^2 = a_{-(1+\frac{1}{\mu}),p}^2$ for $\mu \ne 0, -1$.
\end{prop}
\begin{proof} Replacing $x$ by $x+1$ in the equation
$y^2 = (x-1)(x^2-\frac{1}{\lambda+1})$ defining an elliptic curve whose trace
of Frobenius is ${{}_3A_2}(p,\frac{-1}{\lambda+1})$ gives a quadratic
twist of the elliptic
curve $y^2 = x^3-2x^2+\frac{\lambda}{\lambda+1}x$. This is the elliptic curve whose
trace is $a_{\lambda,p}$, so the two have the same trace up to sign.
\end{proof}
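A hypothetical Python sketch of ours, testing the proposition by naive
point counts of the two curves:
\begin{verbatim}
# Sketch: test 3A2(p, -1/(lam+1))^2 = a_{lam,p}^2.
def phi(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def trace(p, f):
    # trace of Frobenius of the elliptic curve y^2 = f(x)
    return p + 1 - (1 + sum(1 + phi(f(x), p) for x in range(p)))

for p in [5, 7, 11, 13, 17]:
    for lam in range(1, p - 1):                   # lam != 0, -1
        inv = pow(lam + 1, p - 2, p)              # 1/(lam+1) mod p
        t1 = trace(p, lambda x: (x - 1)*(x*x - inv))      # 3E2(-1/(lam+1))
        t2 = trace(p, lambda x: x**3 - 2*x*x + lam*inv*x) # E_lambda
        assert t1*t1 == t2*t2
print("Proposition verified for p <= 17")
\end{verbatim}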
Combining these two statements (taking $\mu = 1/(\lambda-1)$ in the second) gives
\begin{equation}\label{f-a}
{{}_3F_2}(\lambda) = \phi(1-\lambda)\frac{(a_{-\lambda,p}^2-p)}{p^2}
\end{equation}
for $\lambda \ne 0, 1$; only the square of this identity will be used below.
We are now ready to prove Theorem \ref{main-first}. For simplicity we will
only write out the proof in the case of $p \equiv 3 \bmod 4$; the case
$p \equiv 1 \bmod 4$ is very similar but requires slightly more work
to keep track of the $\lambda = -1$ terms.
\begin{proof}
With $p \equiv 3 \bmod 4$, the formula
in Theorem \ref{main-first} becomes
$$\sum_{i=0}^5 p^i - a_p - pb_p + p^2$$ (recall that $a_p, b_p$ are the Hecke
eigenvalues for the newforms of level $8$ and weight $6, 4$ respectively).
By \cite[Theorem 1.1]{fop}, for $p \equiv 3 \bmod 4$ we have
$$a_p = -p^4 \sum_{\lambda=2}^{p-1}\phi(-\lambda){{}_3F_2}(\lambda)^2 + p^2 - pb_p,$$ so we need
to show that
\begin{equation}\label{eqn:f1-formula-f32}
[F_1]_p = \sum_{i=0}^5 p^i + p^4 \sum_{\lambda=2}^{p-1}\phi(-\lambda){{}_3F_2}(\lambda)^2.
\end{equation}
We count the points on $F_1$ by means of the fibration $\pi$.
The hyperplane $x_3 = 0$ has
$p^4+p^3+p^2+p+1$ points, in bijection with those of the hyperplane $x_3 = 0$
in $\P^4$. The affine patch $x_3 \ne 0$ of the fibre at $0$ has $p^4$ points.
So, by Corollary \ref{count-f} and Proposition \ref{count-m1},
the total number of points is
$$\sum_{i=0}^4 p^i + 2p^4 + \sum_{\lambda = 1}^{p-2} \left(p^4 + \phi(\lambda)(a_{\lambda,p}^2-p)^2\right).$$
Combining the $p^4$ terms, changing $\lambda$ to $-\lambda$ in the summation, and
using (\ref{f-a}) shows that this is the same as the expression
(\ref{eqn:f1-formula-f32}).
This completes the proof of Theorem \ref{main-first} in the case
$p \equiv 3 \bmod 4$. As already mentioned, the proof for
$p \equiv 1 \bmod 4$ is very similar, requiring the use of
the familiar fact \cite[Theorem 18.5]{ireland-rosen}
that $a_{-1,p} = \pm 2a$, where as
before $a$ is the positive odd integer such that $p - a^2$ is a square.
\end{proof}
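The fibrewise count assembled in this proof can be confirmed by brute force
for small $p \equiv 3 \bmod 4$. A hypothetical Python sketch of ours (the case
$p = 11$ takes a few seconds):
\begin{verbatim}
# Sketch: for p = 3 mod 4, compare a naive count of [F_1]_p with
# sum_{i<=4} p^i + 2p^4 + sum_{lam=1}^{p-2} (p^4 + phi(lam)(a_{lam,p}^2-p)^2).
from itertools import product

def phi(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def a_lam(p, lam):   # trace of Frobenius of E_lambda
    c = lam * pow(lam + 1, p - 2, p) % p
    return p + 1 - (1 + sum(1 + phi(x**3 - 2*x*x + c*x, p)
                            for x in range(p)))

def brute_F1(p):
    total = 0
    for xs in product(range(p), repeat=6):
        if not any(xs):
            continue
        f = 1
        for i in range(6):
            f = f * xs[i] * (xs[i] + xs[(i + 1) % 6]) % p
        total += 1 + phi(f, p)
    return total // (p - 1)   # each point of P^5 has p - 1 representatives

for p in [7, 11]:
    rhs = (sum(p**i for i in range(5)) + 2*p**4
           + sum(p**4 + phi(lam, p)*(a_lam(p, lam)**2 - p)**2
                 for lam in range(1, p - 1)))
    assert brute_F1(p) == rhs
print("fibrewise count of [F_1]_p verified for p = 7, 11")
\end{verbatim}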
Having shown that $F_1$ is modular, we now consider the question of whether it
is birationally equivalent to a Calabi-Yau fivefold.
\begin{defn}\label{near-pencil}
Let $\{D_i\}_{i=1}^n$ be a set of smooth divisors on a variety $V$ and
let $S \subseteq \{1,\dots,n\}$ be a nonempty subset such that
$\cap_{i \in S} D_i \not \subset D_j$ for all $j \notin S$.
As in \cite[Section 5]{cynk-hulek}, we say that an intersection
$\cap_{i \in S} D_i$ is {\em near-pencil} if there is a single element
$s \in S$ such that $\cap_{i \in S} D_i \ne \cap_{i \in S \setminus \{s\}} D_i$.
\end{defn}
Note in particular that $\cap_{i \in S} D_i$ is automatically near-pencil if
$\#S \le 2$. For another example, if $V = \P^n$, the $D_i$ are hyperplanes,
and the equation defining $D_1$ involves a variable not mentioned in any
other $D_i$, then every intersection $\cap_{i \in S} D_i$ with $1 \in S$ is
near-pencil.
Cynk and Hulek show:
\begin{prop}[{\cite[Proposition 5.6]{cynk-hulek}}] Let $V$ be a smooth variety
with smooth divisors $D_1, \dots, D_n$ such that the sum of
the Picard classes of the $D_i$ is divisible by $2$ in $\mathop{\rm Pic} V$. For
$S \subset \{1,\dots,n\}$, let $C_S = \cap_{i \in S} D_i$. Suppose that, for
all nonempty $S$ with $C_S \ne C_{S \cup \{i\}}$ for all $i \notin S$, either
$C_S$ is near-pencil or $\lfloor \#S/2 \rfloor = {\mathop{\rm codim}}\, C_S - 1$. Then the
double cover of $V$ branched along the union of the $D_i$ admits a crepant
resolution.
\end{prop}
We refer to the given condition on $S$ or $C_S$ as the
{\em Cynk-Hulek criterion}. In addition, if the condition holds for the
intersection of every subset of the $D_i$ of cardinality greater than $1$,
we will say that the set of $D_i$ satisfies the Cynk-Hulek criterion.
To discuss $F_1$, we do not need to describe the resolution in detail.
It suffices to observe that all but one subset of
the $12$ hyperplanes satisfies the Cynk-Hulek criterion: the exception is the
intersection of the hyperplanes $x_i + x_{i+1} = 0$, which consists of the
single point $(-1:1:-1:1:-1:1)$ and is not near-pencil (the intersection of
any five of the six hyperplanes is the same).
Combining this with the result of Cynk and Hulek just above, we conclude that
if the singularity of $F_1$ at $(0:-1:1:-1:1:-1:1)$ admits a crepant resolution,
then so does $F_1$, and this resolution $\tilde F_1$
would be a Calabi-Yau fivefold. We intend to prove this in forthcoming joint
work with Colin Ingalls \cite{ingalls-logan}.
\begin{remark}\label{resol-cohom}
The form of $[F_1]_p$ suggests that $\tilde F_1$
is not strongly rigid (Definition \ref{defn:strongly-rigid}): we expect that
$h^{3,2} = h^{3,1} = h^{2,1} = 0$ but that $h^{4,1} = 1$, with the Galois
representation on $H^5_{\hbox{\' et}}(\tilde F_1,{\mathbb Z}_p)$ being reducible with components
$H^{5,0} \oplus H^{0,5}, H^{4,1} \oplus H^{1,4}$. Unfortunately we cannot be
certain that such a resolution exists.
\end{remark}
\subsection{Constructing a rigid Calabi-Yau fivefold from $F_1$}\label{subsec:rigid-f1}
Following a method of Burek \cite{burek}, we will attempt to construct a
strongly rigid Calabi-Yau fivefold realizing the same newform of weight $6$
and level $8$ as a quotient of $F_1$. In particular, we will consider the
quotients $Q_1, Q_2, Q_3$ of $F_1$ by representatives $\iota_1, \iota_2,
\iota_3$ of each of the three conjugacy classes of involutions in the $D_6$
that acts on the set of components of the branch locus of the map
$F_1 \to \P^5$. All of these involutions commute with the map that
exchanges the sheets of the double cover, so the quotients are still double
covers of a quotient of $\P^5$: let these quotients be $R_1, R_2, R_3$.
For computational verification of the assertions of this section we refer
to {\tt quotient-level8.mag} in \cite{magma-scripts}.
First, we examine the central element $\iota_1: x_i \to x_{i+3}$.
The differential $D$ on $\P^5$ given by
\begin{equation}\label{hol-diff-p5}
\frac{x_0^5}{\prod_{i=1}^5 x_i} \bigwedge_{i=1}^5 d(x_i/x_0)
\end{equation}
(\cite[Remark III.7.1.1]{h}) has divisor
$-\sum H_i$, where $H_i$ is the hyperplane $x_i = 0$.
Pulling it back to $F_1$, we get a differential whose divisor is
$R - 2\sum H_i$, where $R$ is the ramification locus. This is the divisor
of $t/\prod_{i=0}^5 x_i$, so we obtain a differential
\begin{equation}\label{hol-diff-f1}
\frac{x_0^6}{t} \bigwedge_{i=1}^5 d(x_i/x_0)
\end{equation}
on $F_1$ whose divisor is trivial.
Assuming (as will be shown in \cite{ingalls-logan}) that $F_1$
admits a crepant resolution of singularities, we may pull this back to
a differential on the resolution with the same property.
To see that $\iota_1$ pulls this back to its negative, note that
$t/\prod_{i=0}^5 x_i$ is invariant under $\iota_1$, while
exchanging two variables changes the sign of $D$ (this is clear if $x_0$ is
not one of the two variables; if it is, replace this expression for $D$ by a
similar one with a different variable singled out). Since $\iota_1$ gives
an odd permutation, it acts as $-1$ on the pullback of $D$ to $F_1$ and
we do not expect to see
the form of weight $6$ in the cohomology of the quotient.
The invariant ring for the action of $\iota_1$ on $\P^5$ is generated by the
polynomials $x_i+x_{i+3}, x_i^2 + x_{i+3}^2, x_ix_j + x_{i+3}x_{j+3}$ for
$0 \le i < j \le 2$. Thus we may view the quotient map $\P^5 \to R_1$
as a map to a subvariety of $\P(1,1,1,2,2,2,2,2,2)$. None of the $12$
hyperplanes is fixed by the involution, so we obtain $6$ branch divisors,
each of which is defined by an equation of degree~$2$.
\begin{prop}\label{prop:count-q-r-small-p}
For $p$ an odd prime less than $20$, both
$Q_1$ and $R_1$ have $\sum_{i=0}^5 p^i$ points over ${\mathbb F}_p$.
\end{prop}
\begin{proof} First find single polynomials $s_i$ defining each
of the branch components as a subvariety of $R_1$; then, for each $p$,
enumerate the ${\mathbb F}_p$-points of $R_1$, evaluate the product of the $s_i$ at
each, and sum the Kronecker symbols to obtain the point count of $Q_1$.
All of this is easily done in Magma when $p$ is small.
\end{proof}
\begin{remark} Of course we expect this statement to hold for all odd
primes $p$.
\end{remark}
Next we consider $\iota_2: x_i \to x_{5-i}$. As an involution of $\P^5$
this is conjugate to $\iota_1$, and again it gives an odd permutation of the
variables and hence acts as $-1$ on the presumed
$H^{5,0}$, but it is different as an automorphism of
$F_1$. Indeed, it fixes $2$ of the $12$ components of the branch locus,
so we get $7$ branch divisors, of which $5$ have degree $2$ and $2$ have
degree $1$. Again we find that $[R_2]_p = \sum_{i=0}^5 p^i$, but this time
$[Q_2]_p = \sum_{i=0}^5 p^i - pb_p$ for $p < 20$, where as before $b_p$ is
the Hecke eigenvalue for the newform of weight $4$ and level $8$. This
suggests that $\iota_2$ acts as $+1$ on $H^{4,1} \oplus H^{1,4}$ and as
$-1$ on $H^{5,0} \oplus H^{0,5}$.
\begin{conj}\label{conj:q2}
$[Q_2]_p = \sum_{i=0}^5 p^i - pb_p$ for all odd $p$.
\end{conj}
Since $\iota_3 = \iota_1 \iota_2$, we therefore expect that $\iota_3$
acts as $+1$ on $H^{5,0} \oplus H^{0,5}$ and as $-1$ on $H^{4,1} \oplus H^{1,4}$.
This time the ring of invariants is generated by
$x_0, x_1+x_5, x_2+x_4, x_3, x_1^2+x_5^2, x_2^2+x_4^2,x_1x_2+x_4x_5$, so the
quotient map from $\P^5$ goes to $\P(1,1,1,1,2,2,2)$; the image is in fact
a hypersurface $H$ of (weighted) degree $4$. Two of the $12$ components of
the branch locus are fixed by $\iota_3$ and map to divisors in $H$ cut out
by equations of degree $1$; the other $10$ are exchanged in pairs and map
to divisors defined by equations of degree $2$. Numerically this suggests
a Calabi-Yau variety: the canonical divisor of $H$ would be
${\mathcal O}(-4 \cdot 1 - 3 \cdot 2 + 4) = {\mathcal O}(-6)$, while the branch locus has class
${\mathcal O}(12) = -2K_H$, but such a calculation is suspect in light of the large
singular locus of $H$ and the branch divisors. Our calculations lead
us to the following conjecture:
\begin{conj}\label{conj:q3}
$[R_3]_p = \sum_{i=0}^5 p^i$ and
$[Q_3]_p = \sum_{i=0}^5 p^i - a_p - \phi(-1)p^2$ for all odd $p$,
where as before $a_p$ is the eigenvalue of $T_p$ on the newform of
weight $6$ and level $8$.
\end{conj}
Finally, we use \cite[Theorem 2]{kollar-larsen} to show that if $F_1$ has a
Calabi-Yau resolution, then $Q_3$ has a resolution of Kodaira dimension $0$
(presumably Calabi-Yau).
\begin{prop}\label{prop:age-1}
The age {\rm (\cite[Definition 1]{kollar-larsen})} of $\iota_3$
acting on the tangent space of every fixed point is $1$.
\end{prop}
\begin{proof}
Viewed as an automorphism
of $\P^5$, the fixed locus of $\iota_3$ consists of the linear subspaces
$x_0 - x_2 = x_3 - x_5 = 0$ and $x_0 + x_2 = x_1 = x_3 + x_5 = x_4 = 0$.
On the first of these, we may take $x_0, x_2, x_3, x_4, x_5$ as a system of
local parameters, even on the double cover. Then $\iota_3$ exchanges the
tangent vectors in the
$x_0$ and $x_2$ directions, and likewise $x_3$ and $x_5$, while fixing
$x_4$; thus its age is $1$ there.
On the second, we have $t = 0$ on the double cover, so $t$ must be taken
among our local parameters, and we must blow up $x_1 = x_4 = 0$. Then
we take $x_0, x_1, x_3, x_4, t$ as our local parameters. It is clear that
tangent vectors in the $x_1, x_4, t$ directions are fixed by $\iota_3$.
As for $x_0$, such a tangent vector is described by the infinitely near
point $(0:x_0+\epsilon:x_1:-x_0:x_3:x_4:-x_3)$ where $\epsilon^2 = 0$,
which goes by the involution to $(0:-x_0:x_1:x_0+\epsilon:x_3:x_4:-x_3)$.
Now $(-x_0:x_0+\epsilon) = (x_0-\epsilon:-x_0)$, so the corresponding
diagonal entry of the matrix giving the action is $-1$; similarly for a
tangent vector in the $x_3$ direction. Thus the action of $\iota_3$ has
trace $1$ on the $5$-dimensional tangent space, and so the $-1$-eigenspace
has dimension $2$ and the age is $1$ as before.
\end{proof}
In particular $\iota_3$ satisfies the global Reid-Tai criterion, and so
by \cite[Theorem 2]{kollar-larsen} the quotient has Kodaira dimension $0$.
We therefore conjecture:
\begin{conj}\label{conj:rigid-8} $Q_3$ admits a strongly rigid resolution of
singularities for which the representation
on $H^5$ coincides with that obtained from the newform of weight $6$ and
level $8$ up to semisimplification.
\end{conj}
\section{The second example: level $32$}\label{sec:32}
In this section we will consider the fivefold $V_{32}$ defined by the equation
\begin{equation}\label{fivefold-32}
t^2 = \left(\prod_{i=0}^5 x_i\right)(x_0+x_1)(x_3+x_5)(x_2+x_4+x_5)(x_0+x_2-x_4)(x_1-x_2+x_4)(x_2-x_3+x_4).
\end{equation}
We will show that it realizes the newform of weight $6$ and level $32$ that
has complex multiplication by ${\mathbb Q}(i)$. We will use the following notation:
\begin{defn} For $j \in \{2,4,6\}$, let $m_j$ be the unique newform of weight
$j$ and level $32$ that has complex multiplication by ${\mathbb Q}(i)$. Let
$m_3$ be the newform of weight $3$ and level $16$ whose Nebentypus is the
Dirichlet character $\left(\frac{-1}{\cdot}\right)$. For
$j \in \{2,3,4,6\}$
and $p$ prime, let $a_{j,p}$ be the eigenvalue of $m_j$ for the Hecke
operator $T_p$.
\end{defn}
We will prove:
\begin{thm}\label{count-32}
$[V_{32}]_p = \sum_{i=0}^5 p^i - a_{6,p} - pa_{4,p} - 2p^2a_{2,p}$.
\end{thm}
\subsection{Proof of modularity}\label{subsec:modularity-32}
As in Section \ref{proof-mod-f1}, the assertions of this section are
verified numerically in the file {\tt code-32.mag} in \cite{magma-scripts}.
We begin with a standard observation on the modular forms that are used in
the proof of Theorem \ref{count-32}.
\begin{remark}\label{rem:cm-mfs}
Since the $m_i$ are modular forms with complex multiplication by
${\mathbb Q}(i)$,
their Fourier coefficients may be described in terms of Hecke characters
of this field \cite{ribet}.
In particular, if $p \equiv 2,3 \bmod 4$, then
all $a_{j,p}$ are equal to $0$. For $p \equiv 1 \bmod 4$, let $a_p, b_p$
be such that $a_p^2 + b_p^2 = p$ and $a_p + b_p i \equiv 1 \bmod 2+2i$
(this determines $a_p$ uniquely and $b_p$ up to sign). Then
$a_{j,p} = \mathop{\rm tr} (a_p + b_p i)^{j-1}$ for $j \in \{2,3,4,6\}$.
\end{remark}
\begin{lemma}\label{coef-mf} For $p \equiv 1 \bmod 4$ we have
$a_{3,p} = a_{2,p}^2 - 2p,
a_{4,p} = a_{2,p}(a_{3,p}-p),
a_{6,p} = a_{4,p}a_{3,p} - p^2 a_{2,p}$.
\end{lemma}
\begin{proof}
For the second identity
$$\begin{aligned}
&a_{4,p} = \mathop{\rm tr}(a_p+b_p i)^3 \cr
&= 2a_p^3 - 6a_pb_p^2\cr
&= a_{2,p}(a_p^2 - 3b_p^2)\cr
&= a_{2,p}(2a_p^2 - 2b_p^2 - p).\cr
\end{aligned}$$
But $a_{3,p} = \mathop{\rm tr}(a_p+b_pi)^2 = 2a_p^2 - 2b_p^2$, so the second identity
follows. The proofs of the other two are similar.
\end{proof}
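Granting that $a_{2,p}$ is the trace of Frobenius of the elliptic curve
$y^2 = x^3 - x$ of conductor $32$ (as is shown in the proof of Proposition
\ref{count-k} below), the lemma makes Theorem \ref{count-32} testable by brute
force. A hypothetical Python sketch of ours (the case $p = 13$ takes about a
minute in pure Python):
\begin{verbatim}
# Sketch: test Theorem count-32 for small odd p; a_{4,p}, a_{6,p} are
# derived from a_{2,p} via Lemma coef-mf, and all vanish for p = 3 mod 4.
from itertools import product

def phi(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

def branch(x, p):   # product of the 12 linear forms defining V_32
    x0, x1, x2, x3, x4, x5 = x
    r = 1
    for f in [x0, x1, x2, x3, x4, x5, x0+x1, x3+x5, x2+x4+x5,
              x0+x2-x4, x1-x2+x4, x2-x3+x4]:
        r = r * f % p
    return r

for p in [3, 5, 7, 13]:
    a2 = p + 1 - (1 + sum(1 + phi(x**3 - x, p) for x in range(p)))
    if p % 4 == 1:
        a3 = a2*a2 - 2*p
        a4 = a2*(a3 - p)
        a6 = a4*a3 - p*p*a2
    else:
        a4 = a6 = 0
    count = sum(1 + phi(branch(x, p), p)
                for x in product(range(p), repeat=6) if any(x)) // (p - 1)
    assert count == sum(p**i for i in range(6)) - a6 - p*a4 - 2*p*p*a2
print("Theorem verified for p in {3, 5, 7, 13}")
\end{verbatim}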
We will prove Theorem \ref{count-32} in a manner suggested by the lemma.
Namely, we
will start by writing a fibration on $V_{32}$ whose fibres are quotients of
products of two K3 surfaces, one of which is always the K3 surface $K$ of
Picard rank $20$ and discriminant $-4$, while the other varies in a family.
This expresses $V_{32}$ as birationally equivalent to a quotient of the
product of this K3 surface with a threefold. In turn, we will use a fibration
on the threefold to relate it to $K \times E_{32}$, where $E_{32}$ is
an elliptic curve of conductor $32$. Applying Lemma \ref{coef-mf} will then
complete the proof.
\begin{remark} Since $V_{32}$ satisfies the Cynk-Hulek criterion, it admits
a crepant resolution by a Calabi-Yau fivefold. The form of the formula
for the number of points suggests that this resolution has
$h^{5,0} = h^{4,1} = 1, h^{3,2} = 2$, and $h^{i,j} = 0$ unless
$i = j$ or $i+j = 5$.
This is explained by the birational description just above. Indeed,
$H^5$ of the resolution arises from $H^1(E) \otimes H^2_T(K)^{\otimes 2}$,
where $H^2_T$ is the transcendental lattice $H^2(K)/\mathop{\rm Pic} K$.
Thus, for example, $H^{3,2}$ of the resolution matches
$$(H^{1,0}(E) \otimes H^{2,0}(K) \otimes H^{0,2}(K)) \oplus (H^{1,0}(E) \otimes H^{0,2}(K) \otimes H^{2,0}(K))$$
and has dimension $2$. This will be
explained more precisely in Remark \ref{cohom-v32}.
\end{remark}
As in Section \ref{sec:first}, we begin by partitioning
the twelve hyperplanes in the branch locus into two sets of six, each
set intersecting in a line. In particular, the set of linear forms
$\{x_3,x_5,x_0+x_1,x_3+x_5,x_2+x_4+x_5,x_2-x_3+x_4\}$ spans the space
generated by $x_2+x_4,x_0+x_1,x_3,x_5$, while its complement in the set of
$12$ linear forms defining components of the branch locus spans
$\langle x_0+x_1,x_2+x_4,x_0,x_2 \rangle$.
\begin{defn}\label{fib-v32}
Define a rational map $\pi: V_{32} \dashrightarrow \P^1$ by $(x_0+x_1:x_2+x_4)$. We
will also view $\pi$ as a map $\P^5 \dashrightarrow \P^1$.
\end{defn}
As before, the general fibre is a quotient of the product of two K3 surfaces.
As in Definition \ref{def:kla-lla}, we describe the first of these by
writing the linear form $ax_3 + bx_5 + c(x_0+x_1) + d(x_2+x_4)$ as
$ax + by + (c\lambda+d)z$. This gives six linear forms
$x,y,\lambda z,x+y,y+z,-x+z$ from which we obtain a K3 surface
$k_\lambda$ defined by the equation obtained by setting $t^2$ equal to their
product. Similarly, for the other six we write
$ax_0 + bx_2 + c(x_0+x_1) + d(x_2+x_4)$ as $ax + by + (c\lambda+d)z$, obtaining
$x,-x+\lambda z,y,-y+z,x+2y-z,-x-2y+(\lambda+1)z$ and define a K3 surface
$\ell_\lambda$ by setting $u^2$ equal to their product.
However, we are only interested in
$(k_\lambda \times \ell_\lambda)/\sigma$, where $\sigma$ is the involution that changes
the signs of $t, u$. This does not change if we replace $\lambda z$ by $z$
in the definition of $k_\lambda$ and $y$ by $\lambda y$ in that of $\ell_\lambda$.
\begin{defn}\label{def:kla-lla-new} Let $K_\lambda, L_\lambda$ be the surfaces
defined by
$K_\lambda: t^2 = xyz(x+y)(y+z)(-x+z),
L_\lambda: t^2 = \lambda x(-x+\lambda z)y(-y+z)(x+2y-z)(-x-2y+(\lambda+1)z)$.
\end{defn}
Now, $K = K_{\lambda}$ is independent of $\lambda$ and
$(K_\lambda \times L_\lambda)/\sigma \cong (k_\lambda \times \ell_\lambda)/\sigma$.
\begin{prop}\label{count-k} $[K]_p = p^2+p+1+a_{3,p}$.
\end{prop}
\begin{proof}
Observe that $K$ is the same surface
as $K_{-1}$ (Definition \ref{kmin1}) up to a change of variables. We showed
in Proposition \ref{count-m1} that $[K]_p = p^2 - \phi(-1) p + 1 + a_{-1,p}^2$.
Since $y^2 = x^3 - x$ is the unique elliptic curve of conductor $32$
up to isogeny and $a_{-1,p}$ is the trace of Frobenius at $p$ for this
curve, we see that $a_{-1,p} = a_{2,p}$. In light of Lemma \ref{coef-mf},
this implies our claim for $p \equiv 1 \bmod 4$. For $p \equiv 2, 3 \bmod 4$,
both sides are equal to $p^2+p+1$.
\end{proof}
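As before, the count can be tested numerically (a hypothetical Python sketch
of ours, with $a_{3,p}$ obtained from $a_{2,p}$ via Lemma \ref{coef-mf}):
\begin{verbatim}
# Sketch: test [K]_p = p^2 + p + 1 + a_{3,p}.
def phi(a, p):
    a %= p
    return 0 if a == 0 else (1 if pow(a, (p - 1) // 2, p) == 1 else -1)

for p in [5, 7, 11, 13, 17]:
    a2 = p + 1 - (1 + sum(1 + phi(x**3 - x, p) for x in range(p)))
    a3 = a2*a2 - 2*p if p % 4 == 1 else 0
    pts = ([(1, a, b) for a in range(p) for b in range(p)]
           + [(0, 1, b) for b in range(p)] + [(0, 0, 1)])
    K = sum(1 + phi(x*y*z*(x+y)*(y+z)*(-x+z), p) for (x, y, z) in pts)
    assert K == p*p + p + 1 + a3
print("Proposition verified for p <= 17")
\end{verbatim}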
\begin{remark}\label{count-k-branch} The observation that the branch locus
of $K$ has $6p-5$ points over ${\mathbb F}_p$ will also be helpful later.
\end{remark}
It is not so easy to give a useful formula for $[L_\lambda]_p$. The Picard number
of $L_\lambda$ is generically $19$: this can be seen by constructing an elliptic
fibration on it with two bad fibres each of type $D_4, A_3, A_1$ and a
section. In fact, the Picard lattice of $L_\lambda$ is a sublattice of index
$2$ of that of the Kummer surface of $E \times E$.
\begin{remark}\label{lla-is-twist}
It appears that $L_\lambda$ is isogenous to the Kummer surface of
$E_\lambda \times E_\lambda^\sigma$, where $E_\lambda$ is an elliptic curve with
$j$-invariant
$(-4\lambda^2+16)^3/\lambda^4$ and $\sigma$ is its quadratic twist by $-\lambda^3+\lambda$.
However, this observation leads to an unnecessarily
complicated method of counting the ${\mathbb F}_p$-points on $V_{32}$.
\end{remark}
We consider the total space $\L$ of the family of the $L_\lambda$ inside
$\P(3,1,1,1) \times \P^1$. In other words, we regard $\lambda$ as the ratio of
the two coordinates of $\P^1$.
Let the coordinates on this space be $t,z_0,z_1,z_2,u,v$: then $\L$ is
defined by the equation
$$t^2v^3 = uz_0(-vz_0+uz_2)z_1(-z_1+z_2)(z_0+2z_1-z_2)(-vz_0-2vz_1+(u+v)z_2).$$
We define a map $\rho: \L \dashrightarrow \P^1$ by $(2z_1-z_2:z_2)$.
It is easily checked that the base scheme consists of two rational
curves that meet in a single point; it thus has $2p+1$ points mod
$p$ for all $p$.
\begin{prop}\label{fibres-rho}
Let $x \in {\mathbb F}_p$ with $x^3 - x \ne 0$
and let $\rho_x$ be the fibre of $\rho$ at $(x:1)$. Then
$\rho_x$ has $p^2+4p+1+\phi(x^3-x) a_{3,p}$ points.
The fibres of $\rho$ at $0, 1, -1, \infty$ have
$p^2 + 3p + 1, 2p^2+2p+1, 2p^2+2p+1, 2p^2+2p+1$ points respectively.
\end{prop}
\begin{proof}
We consider the affine patch of $\rho_x$ where $z_2, v$ are nonzero.
We may view this patch of $\rho_x$
as being inside $\mathbb{A}^3$, which in turn we think of as the affine patch
of $\P(3,1,1,1)$ where the last coordinate is nonzero. The projective
closure is defined by the equation
$$t^2 = \frac{-x^2+1}{4}z_0z_1z_2(z_0+xz_2)(z_0-z_1)(z_0-z_1+xz_2).$$
Replacing $z_2$ by $z_2/x$ and rescaling $t$ by $2x$ (which multiplies the
right-hand side by the square $4x^2$) converts this to
\begin{equation}\label{threefold-e32}
t^2 = (-x^3+x)z_0z_1z_2(z_0+z_2)(z_0-z_1)(z_0-z_1+z_2),
\end{equation}
and replacing $z_0-z_1$ by $z_1$ converts $z_1$ to $z_0-z_1$ and hence
changes this to the equation for $K$, twisted by $-x^3+x$,
up to the order of variables.
Note further that if $p \equiv 1 \bmod 4$ then
$\phi(x^3-x) = \phi(-x^3+x)$, and if $p \equiv 3 \bmod 4$ then
$a_{3,p} = 0$, so we may replace $\phi(-x^3+x)$ by $\phi(x^3-x)$ in
both cases. With these observations, the proof for the general fibres
reduces to routine bookkeeping.
As for the bad fibres of $\rho$, the fibre at $1$ consists of two
components, one supported at $t = z_1 - z_2 = 0$ and one at
$z_1 - z_2 = v = 0$. The total number of points is
$p^2+2p+1+p^2+p+1-(p+1) = 2p^2 + 2p + 1$; similarly for $\rho_{-1}$
with $z_1 - z_2$ changed to $z_1$.
The fibre at $\infty$ has components at
$t^2v + u(z_0z_1(z_0+2z_1))^2 = z_2 = 0$ and $z_2 = v = 0$. To count the
points on the first of these, note that for fixed $(t:z_0:z_1)$ and
$z_2 = 0$ we get one solution for $(u:v)$ unless $t$ and
$z_0z_1(z_0+2z_1)$ are both $0$, in which case we get $p+1$. Thus the total is
$p^2+p+1+3p = p^2 + 4p + 1$. The two components intersect along
$z_2 = v = z_0z_1(z_0+2z_1) = 0$, that is to say, at $3p+1$ points. The
second component has $p^2+p+1$ points, so the total is $2p^2 + 2p + 1$.
Finally, the fibre at $0$ is defined in $\P(3,1,1,1) \times \P^1$ by
$$4t^2v^3 = uz_0^2 z_2^2(uz_2 - vz_0)^2, \quad 2z_1 - z_2 = 0.$$
We count its points with the help of the projection to the second factor $\P^1$.
It is readily checked that the fibre at $0$ is supported on a smooth
rational
curve, giving $p+1$ points, and that the fibre at $\infty$ is supported on
two smooth rational curves that meet at $(1:0:0:0)$, giving $2p+1$ points.
The fibre at $(\alpha:1)$ consists of two rational curves that meet at
$3$ rational points. The components are defined over ${\mathbb F}_p(\sqrt{\alpha})$,
so we find $p+1+(p-2)\phi(\alpha)$ points. The map has no base scheme,
and $\phi(\alpha)$ is $+1$ and $-1$ equally often, so the total number of
points is $p(p+1)+2p+1 = p^2 + 3p + 1$ as claimed.
\end{proof}
\begin{cor}\label{cor:bir-v32}
$V_{32}$ is birationally equivalent to
$(K \times ((K \times E_{32})/\sigma_1))/\sigma_2$ for appropriate
involutions $\sigma_1, \sigma_2$.
\end{cor}
\begin{proof} We already know that $V_{32}$ is birational to
$(K \times \L)/\sigma$, so the only new information is that $\L$ is
birational to $(K \times E_{32})/\sigma_1$. This follows from the
equation (\ref{threefold-e32}) for the fibre of $\rho_x$.
\end{proof}
This will be used in Section \ref{subsec:rigid-v32}
to construct a candidate for a rigid Calabi-Yau quotient of $V_{32}$.
It is also instructive to compare to Lemma \ref{count-q-dc}.
Indeed, for suitable models we have $e_{32,+} - e_{32,-} = a_{2,p}$ and
$k_+ - k_- = a_{3,p}$, where $e_{32,\pm}$ and $k_\pm$ are as in
Lemma \ref{count-q-dc}. However, this does not immediately
imply the simple formula of Theorem \ref{count-32}, because the birational
equivalence contracts and expands many subvarieties.
\begin{cor}\label{count-script-l}
The total space $\L$ has $p^3+6p^2-3p+1-a_{4,p}-pa_{2,p}$ points over ${\mathbb F}_p$.
\end{cor}
\begin{proof} By the proposition, there are $(p-3-a_{2,p})/2$ fibres with
$p^2+4p+1+a_{3,p}$ points and $(p-3+a_{2,p})/2$ with $p^2+4p+1-a_{3,p}$ points,
in addition to the points of the bad fibres. In light of the $2p+1$
points of the base scheme, the total number of points is then
$$\begin{aligned}
&\frac{1}{2} ((p-3-a_{2,p})(p^2+4p+1+a_{3,p}) + (p-3+a_{2,p})(p^2+4p+1-a_{3,p}))\cr
&\quad + p^2+3p+1+3(2p^2+2p+1)-p(2p+1).\cr
\end{aligned}$$
Simplifying and applying Lemma \ref{coef-mf} gives this result.
\end{proof}
We use this together with Lemma \ref{count-q-dc} to count the
${\mathbb F}_p$-points of $V_{32}$.
\begin{defn}
Let $K_+, K_0, K_-$ be the sets of points of $\P^2({\mathbb F}_p)$ at which
the right-hand side of the equation defining $K$ has Kronecker symbol
$1, 0, -1$ respectively, and let $k_+, k_0, k_-$ be their cardinalities.
Similarly, define
$L_{+,\lambda}, L_{0,\lambda}, L_{-,\lambda}, \ell_{+,\lambda}, \ell_{0,\lambda}, \ell_{-,\lambda}$
using the equation for $L_\lambda$.
\end{defn}
In these terms,
we may rephrase Proposition \ref{count-k} and the following remark as
saying that $k_0 = 6p-5, k_+ = (p^2-5p+6+a_{3,p})/2, k_- = (p^2-5p+6-a_{3,p})/2$.
We also note that $\ell_{0,\lambda} = p^2+p+1$ for $\lambda = 0$, while it is
$6p-7$ for $\lambda = \pm 1$ and $6p-9$ for other values of $\lambda$ (the
difference is that two sets of three lines in the branch locus are concurrent
for $\pm 1$ but not on other fibres).
It follows from Lemma \ref{count-q-dc} that $(K \times L_\lambda)/\sigma$ has
$(p^2+p+1)^2 + (k_+-k_-)(\ell_{+,\lambda} - \ell_{-,\lambda})$ points for $\lambda \in {\mathbb F}_p$.
On the other hand, we may use the birational equivalence of this with the
fibre of $\pi$ at $\lambda$ to count the points on the fibre. In the following,
let $y_0,y_1,y_2,z_0,z_1,z_2$ be coordinates on $\P^2 \times \P^2$.
\begin{prop}\label{prop:match-fibre-p2p2}
Fix $\lambda \in {\mathbb F}_p^*$, and let $\mu$ be the rational map from the hyperplane
in $\P^5$ defined by $x_0+x_1 = \lambda(x_2+x_4)$ to $\P^2 \times \P^2$ given by
$((x_3:x_5:x_2+x_4),(x_0:x_2:x_2+x_4))$. Then $\mu$ induces a rational map
from the fibre of $\pi$ at $(\lambda:1)$ to $(K \times L_\lambda)/\sigma$.
Further, it induces a bijection between the sets of ${\mathbb F}_p$-points of these
schemes, with
the following exceptions:
\begin{enumerate}
\item Points of the fibre of $\pi$ with $x_0 = x_2 = x_4 = 0$, or
with $x_3 = x_5 = x_2+x_4 = 0$, do not correspond to any point of
$(K \times L_\lambda)/\sigma$.
\item Points of the fibre of $\pi$ with $x_2 + x_4 = 0$, but
for which $(x_0:x_2), (x_1:x_4), (x_3:x_5)$ are well-defined points of
$\P^1$, correspond $(p-1)$-to-$1$ to points of
$(K \times L_\lambda)/\sigma$ above the point
with coordinates $((x_3:x_5:0),(x_0:x_2:0))$.
\item Points of $(K \times L_\lambda)/\sigma$ with $y_2 = 0, z_2 \ne 0$, or with
$y_2 \ne 0, z_2 = 0$, do not correspond to any point of the fibre of
$\pi$.
\end{enumerate}
\end{prop}
\begin{proof}
As discussed above, the map matches the branch loci of the two double
covers, so there is a rational map from the fibre of $\pi$ to
$(K \times L_\lambda)/\sigma$ as described. In case 1, we would obtain
a point whose coordinates in one $\P^2$ are $(0:0:0)$. In case 2,
if $x_2 + x_4 = 0$, then clearly we obtain the point
$((x_3:x_5:0),(x_0:x_2:0))$, and this is unchanged by rescaling $(x_0:x_2)$
by an element of ${\mathbb F}_p^*$. On the other hand, if $x_0 = x_2 = 0$, then
we have already dealt with this point in case 1, and similarly for the
other two pairs.
Finally, points with $y_2 = 0, z_2 \ne 0$ cannot be obtained from $\mu$,
because $x_2+x_4$ cannot both be $0$ and not be $0$.
In the other direction, we have an inverse rational map from
$\P^2 \times \P^2$ to the hyperplane. On the affine patch
$y_2 = z_2 = 1$, it is given by
$((y_0,y_1),(z_0,z_1)) \to (z_0:\lambda-z_0:z_1:y_0:1-z_1:y_1)$. Points not
on this affine patch are accounted for in cases 2 and 3 above.
\end{proof}
\begin{cor}\label{cor:count-diff}
The double cover $(K \times L_\lambda)/\sigma$ of $\P^2 \times \P^2$ has
$p(p+1)^2+(p-2)\phi(\lambda)(k_+-k_-)$ more points than the
fibre of $\pi$ at $\lambda$.
\end{cor}
\begin{proof}
We consider the three cases. Case 1 describes two disjoint sets of $p+1$
points in the fibre of $\lambda$,
so $2p+2$ in total. In case 2 we contract $(p+1)^2(p-1)$ points to
$(p+1)^2$.
To understand the third case, note that one of the linear forms defining
$K$ is the third coordinate, so all points with $y_2 = 0$ give one point
on $K$ and there are $(p+1)p^2$ missed
points for which the third coordinate of the point giving $K$ is $0$.
On the other hand, setting the
third coordinate to $0$ in the linear forms defining $L_\lambda$ gives
$\pm t_0, \lambda t_1, -t_1, \pm (t_0+2t_1)$. Thus the product is $0$ for $3$
points and $\lambda$ times a nonzero square for the other $p-2$.
Where the product is $0$, we have $p^2$ points.
In the double cover we get one point for each point in $K_0$ (cf.{}
Remark \ref{count-k-branch}) and two points for each point of
$K_+$ or $K_-$, depending on whether $\lambda$ is a square.
This contributes
$p^2(p+1) + (p-2)(k_+-k_-)$ points if $\phi(\lambda) = 1$ and
$p^2(p+1) + (p-2)(k_--k_+)$ points if $\phi(\lambda) = -1$ to the
excess of $[(K\times L_\lambda)/\sigma]_p$ over the number of points of the fibre.
In total, then, the excess is
$$p^2(p+1) + p^2(p+1) + (p-2)\phi(\lambda)(k_+-k_-) + (p+1)^2 - (2p+2) - (p+1)^2(p-1),$$
where the term $(p+1)^2$ accounts for the image points in case 2.
This simplifies to the formula asserted.
\end{proof}
These calculations are not valid for $\lambda = 0, \infty$. However, it is
easy to see that for both of these the fibre has $p^4+p^3+p^2+p+1$ points.
Finally, the base locus of $\pi$ is defined by $x_0 + x_1 = x_2 + x_4 = 0$.
On this locus the product of linear forms is $0$, so it has
$p^3 + p^2 + p + 1$ points and we must subtract $p$ times this from the
total number of points on the fibres to obtain the correct point count for
$V_{32}$.
We now assemble all of the ingredients: the comparison of the fibres
in Proposition \ref{prop:match-fibre-p2p2} and its
Corollary \ref{cor:count-diff}, the remarks on special fibres and
the base locus just above, the count of points on $\L$ from Proposition
\ref{fibres-rho}, and the relations of coefficients of modular forms of
Lemma \ref{coef-mf}.
By routine calculation, we obtain
$[V_{32}]_p = \sum_{i=0}^5 p^i - a_{6,p} - pa_{4,p} - 2p^2a_{2,p}$ as
claimed. This completes the proof of Theorem \ref{count-32}.
\subsection{Construction of a rigid Calabi-Yau fivefold of level 32}\label{subsec:rigid-v32}
As in Section \ref{subsec:rigid-f1}, we will use the method of \cite{burek}
to construct a candidate for a rigid Calabi-Yau fivefold of level $32$,
and again we refer to {\tt quotient-level32.mag} in
\cite{magma-scripts} for verifications.
Let $G_{64}$ be the group of projective automorphisms of the configuration
of $12$ hyperplanes used to construct $V_{32}$, let $C_2$ be the cyclic group
of order $2$, and let $Z_G$ be the centre of a group $G$.
The group $G_{64}$ has order $64$ and is isomorphic to
$C_2 \times G_{32}$, where $Z_{G_{32}} \cong C_2^2$ and $G_{32}$ fits into an
exact sequence $1 \to Z_{G_{32}} \to G_{32} \to C_2^3 \to 1$.
We study the quotients of $V_{32}$ by elements of order $2$ with
characteristic polynomial $(x-1)^4(x+1)^2$ in $G_{64}$ as in
Section \ref{subsec:rigid-f1}.
We concentrate on two elements of $G_{64}$: namely
$\alpha_1$, taking $(x_0:x_1:x_2:x_3:x_4:x_5)$ to $(x_1:x_0:x_4 :x_3:x_2:x_5)$,
and
$\alpha_2$, defined by $(x_0:x_1:x_2:x_3:x_4:x_5) \to (-x_1:-x_0:x_2:-x_5:x_4:-x_3)$.
(Note that $\alpha_2 \in Z_{G_{64}}$.) In this case, the Cynk-Hulek criterion
is satisfied, so we know that the differential
\begin{equation}\label{hol-diff-v32}
D' = \frac{x_5^6}{t} \bigwedge_{i=0}^4 d(x_i/x_5)
\end{equation}
(cf.{} (\ref{hol-diff-f1})) pulls back to a generator of $H^{5,0}$ on the
quotient.
Now, $\alpha_1$ gives an even permutation and therefore does
not change the sign of
$\frac{x_5^5}{\prod_{i=0}^4 x_i} \wedge_{i=0}^4 d(x_i/x_5)$. In addition, it
fixes $\prod_{i=0}^5 x_i/t$, and so it fixes $D'$.
To verify the invariance for $\alpha_2$, we use the alternative form
$-\frac{x_2^6}{t} \wedge_{\stackrel{i=0}{i \ne 2}}^5 d(x_i/x_2)$ for $D'$.
Negating any variable changes the sign of this, and it is invariant under
even permutations that fix $x_2$. So it is fixed by $\alpha_2$.
When we consider the quotient $V_{32}/\alpha_1$, we find that the
images of all of the branch divisors are defined by a single polynomial,
and so it is easy to write down the branch function on $\P^5/\alpha_1$
(that is, the function whose square root gives the double cover
$V_{32}/\alpha_1 \to \P^5/\alpha_1$).
On the other hand, for $V_{32}/\alpha_2$, all but two of the branch divisors,
as well as the union of the two that are not, are defined by single
polynomials. In this case, it is again easy to write down the branch
function. For both of these, the quotient $\P^5/\alpha_i$ is a hypersurface
of weighted degree $4$ in $\P(1,1,1,1,2,2,2)$ as previously.
There are other involutions such that exactly one branch divisor on the
quotient is not defined by a single polynomial. We would have to be more
careful in this situation; however, it does not arise in this paper.
Counting points on the double covers, we are led to the following conjecture.
\begin{conj}\label{conj:count-mod-a1} For all primes $p>2$ we have
$[V_{32}/\alpha_1]_p = \sum_{i=0}^5 p^i - a_{6,p} - p^2 a_{2,p}$
and $[V_{32}/\alpha_2]_p = \sum_{i=0}^5 p^i - a_{6,p} - pa_{4,p}$.
\end{conj}
Accordingly we expect that $\alpha_1$ acts as $-1$ on
$H^{4,1}(\tilde V_{32}) \oplus H^{1,4}(\tilde V_{32})$ and has eigenvalues
$1,-1$ on $H^{3,2}(\tilde V_{32}) \oplus H^{2,3}(\tilde V_{32})$, while
$\alpha_2$ acts as $+1$ on $H^{4,1} \oplus H^{1,4}$ and as $-1$ on
$H^{3,2} \oplus H^{2,3}$; this also confirms our observation that both act as
$+1$ on $H^{5,0}$, which also applies to $H^{0,5}$.
Thus $\alpha_1 \alpha_2$ should satisfy the same description as $\alpha_1$,
except that the eigenspaces of $\pm 1$ for $H^{3,2}$ and $H^{2,3}$ are reversed;
this is consistent with calculations that find that
$[V_{32}/\alpha_1\alpha_2]_p = [V_{32}/\alpha_1]_p$ for small $p$ and with
the fact that $\alpha_1\alpha_2$ is conjugate to $\alpha_1$ in $G_{64}$.
In particular, the $+1$ eigenspace of $\langle \alpha_1,\alpha_2\rangle$ on
$H^5$ should be neither more nor less than $H^{5,0} \oplus H^{0,5}$, and
we expect that the number of ${\mathbb F}_p$-points on the quotient should be
expressible in terms of powers of $p$, Artin symbols, and $a_{6,p}$ only.
\begin{remark}\label{cohom-v32}
Although $H^5(K \times K \times E)$ is much larger, the classes coming
from elements of $\mathop{\rm Pic} K$ do not survive in the quotient by the involutions
$\sigma_1, \sigma_2$ (for notation see Corollary \ref{cor:bir-v32}).
This is because they are fixed by these involutions,
while $H^1(E)$ and the transcendental part of one $H^2(K)$ are negated by
$\sigma_1$. Thus $H^3(\L)$ has dimension $4$ and is negated by
$\sigma_2$ (which acts as $-1$ on $H^1(E)$ and $+1$ on the same $H^2(K)$),
while the other $H^2(K)$ is negated by $\sigma_2$ as well. So we find
$8$ for the dimension of $H^5$ of a resolution of $V_{32}$, and
$H^{5,0}$ arises from $H^{1,0}(E) \otimes H^{2,0}(K) \otimes H^{2,0}(K)$,
etc. In particular $H^{3,2} \oplus H^{2,3}$ comes from
$H^1(E) \otimes (H^{2,0}(K) \otimes H^{0,2}(K))^2$, which is isomorphic to
the sum of two copies of $H^1(E)$ twisted by $2$, which is why we see
$2p^2a_{2,p}$ in Theorem \ref{count-32}. Similarly $H^{4,1} \oplus H^{1,4}$
corresponds to the Hecke character $\chi^4 \bar \chi$ and its conjugate.
Since $\chi \bar \chi$ takes $p$ to $p$ when $p$ is a prime congruent to
$1 \bmod 4$, we obtain a $pa_{4,p}$ term.
\end{remark}
Thus let $G_4 = \langle \alpha_1, \alpha_2 \rangle$.
We consider the ring of invariants of $G_4$.
It is generated by polynomials of degree $1,1,2,2,2,2,2,2,4$, and we use these
to define a map to the corresponding weighted projective space and find the
image. The branch locus has $5$ orbits of size $2$ and one of size $1$
under the group; the images of the components in orbits of size $2$ are
defined by a single polynomial, as is the union of the two of size $1$.
As before we are able to compute the number of ${\mathbb F}_p$-points of
$[V_{32}/G_4]$ for small $p$, finding it to be
$\sum_{i=0}^5 p^i - a_{6,p}$ for $p < 20$.
This suggests the following conjecture:
\begin{conj}\label{rigid-32}
$[V_{32}/G_4]_p = \sum_{i=0}^5 p^i - a_{6,p}$ for all $p>2$.
Further, $[V_{32}/G_4]$ has a strongly rigid Calabi-Yau resolution.
\end{conj}
\section*{Acknowledgments}
I would like to thank Colin Ingalls, Ken Ono, Owen Patashnick, Rachel Pries,
and Don Zagier for enlightening discussions.
Turbulence is characterized by chaotic motions in a fluid
\citep{rempel04, he05, chian06, chian07, chian10} that lead to
diffusion of matter and dissipation of kinetic energy. It is to be
stressed, however, that not all chaotic motions in a fluid may be called
``turbulent''. Because of its chaotic nature,
turbulence can only be studied and modelled in terms of statistical quantities:
the long-term local properties of a turbulent fluid, although deterministic, are unpredictable.
For nearly incompressible and unmagnetized fluids, the temporal evolution of the
fluid velocity field is given by the Navier-Stokes equation:
\begin{equation}
\frac{\partial {\bf u}({\bf x},t)}{\partial t} + {\bf u}({\bf x},t) \cdot {\bf \nabla u}({\bf x},t) = - \frac{{\bf \nabla}p({\bf x},t)}{\rho({\bf x},t)}+\nu \nabla^2 {\bf u}({\bf x},t)+{\bf F}({\bf x},t),
\label{eq1}
\end{equation}
\noindent
where ${\bf u}({\bf x},t)$ represents the velocity field, $p$ the pressure,
$\nu$ the kinematic viscosity, and ${\bf F}$ an external force
normalized by the local density. $\rho$ is the gas mass density and is set
constant in the incompressible case (with ${\bf \nabla \cdot u}=0$). Even in
this simplified mathematical description, the fluid dynamics is far from
trivial. Equation~\ref{eq1} is non-linear, as seen from the advective term on
the left-hand side, and non-local through the pressure term, in the sense that
the local properties of the fluid are coupled to all other regions. The
incompressibility condition results in an infinite sound speed, and in the
instantaneous propagation of any perturbation throughout the fluid.
\citet{burg39} modeled the time evolution of a simplified version of the
Navier-Stokes equation obtained by setting $\nabla p = 0$. This equation has exact
solutions, which may sound appealing, but it results in non-universal
``turbulence''. Nevertheless, Burgers turbulence models have gained increasing
interest due to their ability to describe the statistics of shock-induced
structures, among many other applications \cite[see review by][]{bec07}.
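As a minimal illustration (not taken from the works cited above), the sketch
below integrates the one-dimensional viscous Burgers equation on a periodic
grid and shows how a smooth velocity perturbation steepens into a shock-like
front; the grid size, viscosity and time step are arbitrary choices for the example.
\begin{verbatim}
import numpy as np

# 1D viscous Burgers equation, du/dt + u du/dx = nu d2u/dx2,
# on a periodic grid (all parameters are illustrative only).
N, L, nu, dt = 512, 2.0*np.pi, 1e-2, 1e-3
dx = L / N
x = np.linspace(0.0, L, N, endpoint=False)
u = np.sin(x)                       # smooth initial condition

for _ in range(2000):               # integrate up to t = 2
    f = 0.5 * u**2                  # conservative form of u du/dx
    dfdx = (np.roll(f, -1) - np.roll(f, 1)) / (2.0*dx)
    d2u = (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2
    u = u - dt*dfdx + dt*nu*d2u

# the velocity gradient is now strongly peaked at the shock position
print("max |du/dx| =", np.abs(np.gradient(u, dx)).max())
\end{verbatim}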
In the full Navier-Stokes equation, perturbations in ${\bf u}({\bf x},t)$ are
expected to have their distribution changed due to non-linear terms. These
instabilities may drive local vorticity and result in the fragmentation of large
amplitude eddies into smaller ones, creating a turbulent pattern. As imagined
by \citet{ric22}, {\it big whirls have little whirls that feed on their
velocity, and little whirls have lesser whirls, and so on to viscosity}. This
statement represents one of the first conceptual descriptions of the energy
cascade in turbulent flows. The shear drives unstable motions at large scales,
which are broken and fragmented into smaller
vortices, down to the smallest scales where they are damped, e.g. due to
viscosity. In an incompressible viscous fluid this damping scale is
that at which the timescale for
viscous damping is of the order of the turnover dynamical time.
At that scale, the eddy kinetic energy is transferred to internal energy due to viscosity.
Turbulence naturally develops over a larger range of scales if the
viscosity is small, i.e. for a large Reynolds number ($Re = UL/\nu \gg 1$),
where $U$ is the characteristic velocity injected at the lengthscale $L$.
\citet[hereafter K41]{kol41} realized that it would be possible to
solve the Navier-Stokes equation for a turbulent flow if ${\bf u}({\bf
x},t)$ is considered a stochastic distribution. One of the key
assumptions in the K41 theory is that the energy transfer rate $\epsilon$
should be constant at all scales. It is defined as $\epsilon \simeq \delta$
u_l^2/\tau_l$, where $\delta u_l$ is the velocity fluctuation
amplitude at lengthscale $l$, and $\tau_l = \tau_{\rm eddy} =
l/\delta u_l$ its dynamical timescale\footnote{Note that we
distinguish $\tau_l$ and $\tau_{\rm eddy}$ here, since $\tau_l$
represents the timescale for energy transfer at scale $l$, while
$\tau_{\rm eddy}$ is the eddy turnover timescale. In the K41 theory
both timescales are the same, but this is not true in general,
e.g. in some magnetized cases}. Therefore, one obtains:
\begin{equation}
\delta u_l \simeq (\epsilon l)^\frac{1}{3}.
\label{eq2}
\end{equation}
Equation~\ref{eq2} means that turbulence can be modeled
by scaling laws. This would be true within the so called
{\it inertial range of scales}, i.e. the scales where the energy transfer
rate is constant, generally between the energy injection and the
viscous damping scales. The velocity power spectrum $P_u(k)$ is
defined\footnote{The power spectrum is defined as the one
dimensional spectrum in Fourier space while the energy spectrum,
generally defined as $E_u(k) =k^2P_u(k)$ is the three-dimensional
spectrum. For the sake of simplicity, we use the term {\it power
spectrum} to represent the latter.} here by $\int_{k=1/l}^\infty
P_u(k')dk'= \delta u_l^2$, from which we obtain the standard Kolmogorov
power spectrum for the velocity field:
\begin{equation}
P_u(k) \propto \epsilon^{2/3} k^{-5/3}.
\end{equation}
In other words, it is possible to reinterpret Kolmogorov's idea in Fourier space
in terms of non-linear interactions between similar wavenumbers. The theory relies
on the so-called {\it locality} of the non-linear wave-wave interactions, i.e.
only similar wavenumbers, $k=2\pi/\lambda$, interact appreciably and drive the
energy cascade towards smaller scales \cite{kra65a}. From the spectral form of
the Navier-Stokes equation, the three-wave interactions follow the selection
rule $k_3=k_1+k_2$. The extrema are found at $k_3\rightarrow0$ and $k_1=k_2$;
the latter, resulting in $k_3=2k_1$, is the locality assumed in Kolmogorov's theory.
The theory also predicts the scaling laws for the moments of velocity spatial
increments, known as {\it velocity structure functions}, defined as:
\begin{equation}
S_p(l)=\left\langle \left\{\left[ \textbf{u}\left( \textbf{r} + \textbf{l} \right) - \textbf{u} \left( \textbf{r} \right) \right] \cdot \textbf{l}/l \right\} ^p \right\rangle,
\end{equation}
\noindent
where $p$ is a positive integer representing the moment order and ${\bf l}$ is
the spatial increment vector. In incompressible fluids, if the turbulence is
considered {\bf \it homogeneous}, {\bf \it isotropic} and {\bf \it
self-similar}, i.e. scale invariant, then:
\begin{equation}
S_p(l)=C(p) \epsilon^{p/3} l^{p/3} ,
\end{equation}
\noindent
where $C(p)$ was initially assumed by Kolmogorov to be a universal constant for each order $p$.
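These scalings are easy to verify numerically. The sketch below (our own
illustration, with arbitrary parameters) synthesizes a one-dimensional
Gaussian random field with a $k^{-5/3}$ power spectrum and measures its
second-order structure function; since a random-phase field is self-similar
and free of intermittency, the fitted exponent recovers $\zeta(2) = 2/3$,
consistent with a constant $C(p)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 2**16
k = np.fft.rfftfreq(N, d=1.0/N)       # integer wavenumbers 0..N/2
amp = np.zeros_like(k)
amp[1:] = k[1:]**(-5.0/6.0)           # |u_k|^2 ~ k^{-5/3}
phase = rng.uniform(0.0, 2.0*np.pi, k.size)
u = np.fft.irfft(amp * np.exp(1j*phase), n=N)

lags = np.unique(np.logspace(0.5, 3.5, 12).astype(int))
S2 = [np.mean((np.roll(u, -l) - u)**2) for l in lags]
slope = np.polyfit(np.log(lags), np.log(S2), 1)[0]
print("S_2 exponent:", slope)         # ~ 2/3 for a k^{-5/3} field
\end{verbatim}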
One of the main successes of the Kolmogorov-Obukhov turbulence theory is
the explanation of
the empirical determination of the diffusion coefficient by
\citet{ric26}, done more than a decade before K41. The diffusion coefficient is
related to the time evolution of the separation between Lagrangian points (e.g.
particles dragged by the flow) in a turbulent medium. The probability
distribution function $\Phi$ of pairs of points separated by a
distance ${\bf r}$ may be described as:
\begin{equation}
\frac{\partial \Phi \left({\bf r},t\right)}{\partial t} = \frac{1}{r^2} \frac{\partial}{\partial r} r^2 K(r) \frac{\partial \Phi \left({\bf r},t\right)}{\partial r},
\label{eq6}
\end{equation}
\noindent
where $K(r)$ represents the diffusion coefficient. It is easy to determine,
from dimensional analysis, that if $\dot{r} = u(r) \propto r^{1/3}$ as in the
Kolmogorov scaling, the diffusion coefficient for the inertial range will be
$K(r) = k_0 \epsilon^{1/3} r^{4/3}$, the scaling proposed by \citet{ric26}.
This diffusion coefficient for the inertial range substituted in
Equation~\ref{eq6} then results in:
\begin{equation}
\Phi \left({\bf r},t\right) = \frac{A}{(k_0 \epsilon^{1/3} t)^{9/2}} \exp\left(-\frac{9r^{2/3}}{4k_0\epsilon^{1/3}t} \right),
\end{equation}
\noindent
where $A$ is a normalization coefficient. The Richardson
distribution is therefore non-Gaussian. Several experiments and numerical
models have shown the validity of the turbulent diffusion scaling \cite{ell96,
fun98, zov94, bof02}, which has also recently been used in predictions of
stochastic magnetic reconnection\footnote{this term refers to the magnetic reconnection induced by turbulent motions near the current sheet - the layer separating fields with oppositely directed components - resulting in reconnection rates that depend on the stochastic motions of the fluid.} \cite{laz12}.
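The $t^3$ growth of the pair separation implied by this diffusion coefficient
is straightforward to reproduce. The sketch below (arbitrary units, our own
illustration) integrates $\dot{r} = \delta u(r) \simeq (\epsilon r)^{1/3}$ and
fits the logarithmic slope of $r^2$ versus time, recovering the Richardson law
$r^2 \propto \epsilon t^3$.
\begin{verbatim}
import numpy as np

eps, dt, nsteps = 1.0, 1e-4, 200000
r = 1e-3                              # initial pair separation
ts, rs = [], []
for i in range(1, nsteps + 1):
    r += dt * (eps * r)**(1.0/3.0)    # dr/dt ~ (eps r)^{1/3}
    if i % 1000 == 0:
        ts.append(i * dt); rs.append(r)
ts, rs = np.array(ts), np.array(rs)
slope = np.polyfit(np.log(ts), np.log(rs**2), 1)[0]
print("d log(r^2) / d log(t) =", slope)   # -> 3 (Richardson law)
\end{verbatim}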
This theory of turbulence has been quite successful in reproducing most of the
experimental data, and there is a flourishing literature with hundreds of works available,
e.g. \citet{arm95, lea98, bal05, koga07, bou09, chian09, che10, sah10, chian11,
cha12, hur12, miranda13}, just to mention a few. Naturally, many authors criticized
the assumption of a constant $C(p)$ in Kolmogorov's initial theory, given the breakdown of self-similarity
at small scales and the possible non-universality of turbulence (given its
``memory'' of the energy injection). These criticisms were later addressed in
the Kolmogorov-Obukhov turbulence theory \cite{kol62, obu62}, including the
effects of {\it intermittency}. Intermittency results from rare and large local fluctuations
in the velocity field which break the similarity condition \cite{fri95}.
One of the effects of intermittency is observed in the
probability distribution function (PDF) of velocity longitudinal increments $\delta u_l = [{\bf
u}({\bf r}+{\bf l})-{\bf u}({\bf r})]\cdot \hat{l}$, which shows large deviations
from the Gaussian distribution at small scales, with large amplitude tails and
peaked distributions at $\delta u_l \sim 0$ (see Figure~\ref{fig1}).
\citet{kra91} pointed out that sharp shocks could, for instance, result in
more regions with smooth fluid flows and also more regions with sharp transitions
in velocities, compared to the standard picture of the self-similar K41 turbulence.
We would then expect non-Gaussian PDFs at both small and large scales.
\begin{figure}[t]
\vspace*{2mm}
\begin{center}
\includegraphics[width=8.3cm]{fig1.eps}
\end{center}
\caption{PDF of velocity increments as a function of the lag length $\left|{\bf
l}\right|$, from small (top) to large scales (bottom) \cite[extracted
from][]{wil10}. The non-Gaussianity is clear for velocity increments at small
scales. \label{fig1}}
\end{figure}
Many authors attempted to theoretically determine the scalings of turbulence
with intermittency. One of the most successful approaches is the multifractal
description for the energy dissipation field proposed by \citet{she94}. This
theory results in $S_p(l) \propto l^{\zeta(p)}$, with:
\begin{equation}
\zeta(p)=\frac{p}{3}\left(1-\frac{2}{3}\right)+(3-D')\left[1-\left(1-\frac{2}{3(3-D')} \right)^{p/3} \right],
\end{equation}
\noindent
where $D'$ represents the dimensionality of the dissipation structures.
In the
Kolmogorov-Obukhov theory, the structures of highest dissipation are filamentary, thus better
described by $D' \sim 1$, while recent numerical simulations reveal a dominance
of two-dimensional intermittent structures at small scales
\cite[e.g.][]{moi04,kow07a,kow07b,bol12}, which is also supported by
experimental data \cite[e.g.][]{fre03, the07}. Multifractal analysis
of Voyager 1 and 2 {\it in situ} data
has also revealed intermittent features
of the magnetic turbulence in the solar wind and at the termination shock \citep{macek08,macek11,macek12}.
On the theoretical side, \citet{bir13} derived a
statistical solution of the stochastic Navier-Stokes equation from the
linear Kolmogorov-Hopf differential equation, accounting for the She-Lev\^eque
intermittency corrections. His results satisfactorily reproduce the PDFs built
from observations and numerical simulations of turbulent flows.
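For reference, the generalized exponents of Equation 8 are easily tabulated.
The short sketch below (our own illustration) compares $\zeta(p)$ for
filamentary ($D' = 1$) and sheet-like ($D' = 2$) dissipative structures
against the self-similar K41 value $p/3$; by construction, $\zeta(3) = 1$ in
all cases.
\begin{verbatim}
def zeta(p, Dp):
    """Generalized She-Leveque exponents for dissipative
    structures of dimension Dp (Equation 8)."""
    C = 3.0 - Dp                      # codimension of the structures
    return p/9.0 + C*(1.0 - (1.0 - 2.0/(3.0*C))**(p/3.0))

for p in range(1, 7):
    print(p, p/3.0, zeta(p, 1.0), zeta(p, 2.0))
# K41 (p/3) vs filaments (D'=1) vs sheets (D'=2)
\end{verbatim}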
Compressibility and coupling between magnetic fields and the plasma
flow - both present in the dynamics of the interstellar medium
(ISM) - make the description of the interstellar turbulence even more
complex.
\subsection{Supersonic turbulence}
\label{sec1.1}
Compressible plasmas are of great interest in astrophysics, and particularly in the case
of interstellar turbulence.
Compressibility in turbulent flows results in the formation of a
hierarchy of density structures, viewed as dense cores nested in less
dense regions, which are in turn embedded in low density regions and so on. Such a
hierarchical structure was described by \citet{von51} as:
\begin{equation}
\frac{\rho_\nu}{\rho_{\nu-1}} = \left(\frac{l_\nu}{l_{\nu-1}} \right)^{-3\alpha},
\end{equation}
\noindent
where $\rho_\nu$ represents the average density of a structure at hierarchical
level $\nu$, at a lengthscale $l$, and $\alpha$ the compressibility degree, assumed to be the
same at each level. The dimensionality of the system is then given by $D'=3-3\alpha$.
Therefore, the average mass within each substructure must follow the relation
$M_l \propto l^{3-3\alpha}$.
The density hierarchy as described above must then be coupled to the local
turbulent motions. The energy density transfer rate must now be rewritten as
$\epsilon_l = \rho_l \delta u_l^3/l$ to account for the density changes at
different scales \cite{lig55}. If, once again, one assumes the
constancy of the energy transfer rate across scales within the inertial range
\cite{fle96}, one obtains the scaling of the amplitude of the velocity fluctuations:
\begin{equation}
\delta u_l \propto l^{\frac{1}{3}+\alpha},
\end{equation}
\noindent
and the velocity power spectrum is then given by:
\begin{equation}
P_u(k) \propto k^{-5/3 -2\alpha}.
\end{equation}
Note that for stationary energy distribution solutions in compressible
turbulence, $\alpha > 0$, which results in steeper velocity power spectra compared to
the standard K41 scaling. The density power spectrum, on the other hand,
instead of following the velocity field as a passive scalar would do, presents a distinct
power spectrum given by:
\begin{equation}
P_\rho(k) \propto k^{6\alpha-1},
\end{equation}
\noindent
i.e. for $\alpha \sim 1/6$, the power spectrum of the density field becomes flat
in the inertial range.
One of the most striking results of the hierarchical model for the density field
in compressible turbulence is its ability to recover the standard Kolmogorov
scalings for the density weighted velocity field ${\bf v} \equiv \rho^{1/3} {\bf
u}$ \cite{fle96}.
Numerical simulations of compressible turbulence have confirmed the scalings
described above for $\alpha \simeq 0.15$ \cite{kri07, kow07b}, close to $\alpha
= 1/6$ for which the density power spectrum becomes flat. The velocity power
spectrum, on the other hand, becomes $P_u(k) \propto k^{-2}$. Remarkably, this is
the same slope obtained for Burgers turbulence, despite the different
framework of that theory.
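These compressible scalings can be summarized in a short helper (a sketch of
Equations 10-12, with the compressibility degree $\alpha$ as the only input),
printed below for the incompressible limit, for $\alpha = 1/6$, and for the
value $\alpha \simeq 0.15$ found in the simulations mentioned above.
\begin{verbatim}
def fleck_slopes(alpha):
    """Spectral slopes from Fleck's model (Equations 10-12)."""
    vel = -5.0/3.0 - 2.0*alpha        # velocity spectrum P_u(k)
    rho = 6.0*alpha - 1.0             # density spectrum P_rho(k)
    disp = 1.0/3.0 + alpha            # delta u_l ~ l^(1/3 + alpha)
    return vel, rho, disp

for a in (0.0, 1.0/6.0, 0.15):
    print(a, fleck_slopes(a))
# alpha ~ 0.15 gives P_u ~ k^-1.97 (close to Burgers' k^-2)
# and a nearly flat density spectrum
\end{verbatim}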
\subsection{Magnetized turbulence}
Magnetic fields introduce further complexity in the plasma
dynamics that can be described by the magneto-hydrodynamic (MHD)
equations in the fluid approximation and assuming perfect coupling between the field and
the plasma:
\begin{eqnarray}
\frac{\partial {\bf u}}{\partial t} + {\bf u} \cdot {\bf \nabla u} = - \frac{{\bf \nabla}p}{\rho}+ \nu \nabla^2 {\bf u}+\frac{\left( {\bf \nabla \times B} \right) \times {\bf B}}{4\pi\rho}+{\bf F},
\label{eq13}
\end{eqnarray}
\begin{equation}
\frac{\partial {\bf B}}{\partial t} = {\bf \nabla \times} \left( {\bf u}{\bf \times B} \right)+\eta \nabla^2{\bf B},
\label{eq14}
\end{equation}
\noindent
where ${\bf B}$ is the magnetic field and $\eta$ the plasma resistivity
($\eta =0$ for ideal plasmas).
Let us first consider an external uniform magnetic field $B_0$.
Any perturbation in the fluid velocity field will be coupled to the magnetic
field. The magnetic tension/pressure results in a decrease of the non-linear
growth of perturbations, but only of those perpendicular to the magnetic field
lines. This complex coupling between the flow and magnetic field makes the
modelling of turbulence in magnetized plasmas an interesting task\footnote{More
details on MHD turbulence may be found in \citet{bis03}}.
\subsubsection{The Iroshnikov-Kraichnan model}
A useful simplification of the equations above is made by
considering ${\bf B}={\bf B}_0+{\bf \delta B}$, and using the
Els\"{a}sser variable ${\bf z^\pm}={\bf
u}\pm{\bf \delta \breve{B}}$, where $\breve{B} = B/(4\pi \rho)^{1/2}$. This has
been independently derived by \citet{iro63} and \citet[]{kra65a, kra65b} (IK
hereafter). From this change of variables, Eqs~\ref{eq13} and \ref{eq14} result
in \cite[see][]{sch07}:
\begin{equation}
\frac{\partial {\bf z^\pm}}{\partial t} \mp v_A \nabla_{||}{\bf z^\pm}+{\bf z^\mp} \cdot {\bf \nabla z^\pm} = -{\bf \nabla} p + \frac{\nu + \eta}{2} \nabla^2 {\bf z^\pm} + \frac{\nu - \eta}{2} \nabla^2 {\bf z^\mp} +{\bf F},
\end{equation}
\noindent
where $v_A = B_0/\sqrt{4 \pi \rho}$ is the Alfv\'en velocity
and $\nabla_{||}$ is the spatial derivative parallel to
the direction of the mean magnetic field.
In their model, Iroshnikov and Kraichnan proposed that incompressible magnetized turbulence
results from the non-linear interactions of counter propagating waves packets.
The timescale for the two wave packets
to cross each other is of order of the Alfv\'en
time $\tau_A \sim l_{||}/v_A$, where $l_{||}$ is the lengthscale of the wave packet
parallel to the mean magnetic field. In their phenomenological description of the MHD
turbulence, the interactions between the wave packets
are {\bf weak}, i.e. $|{\bf z}^\pm|
\ll \breve{B}_0$ or the field perturbations are much smaller than $B_0$.
Notice that, superimposed on the magnetic fluctuations, the fluid is also perturbed
and the dynamical timescale of a fluid ``eddy'' is $\tau_{\rm eddy}
\equiv l/ \delta u_l$. The different wave modes (mechanical and magnetic
perturbations) thus interact with each other.
For the interaction between modes to be weak the Alfv\'en time must be much
smaller than the dynamical timescale, i.e.
$\tau_A \ll \tau_{\rm eddy}$. The non-linear decay of the wave packets
in such weak interactions, and subsequently the turbulent cascade, can only
occur after several interactions. Since interactions are random, the wave packet
amplitude changes in a random walk fashion, i.e. $N = (\tau_l/\tau_A)^{1/2}$
interactions are needed for the wave packet to significantly change. At the same time,
$N$ is also defined by the number of crossings in a decay timescale
$N = \tau_l / \tau_{\rm eddy}$, which results in:
\begin{equation}
\tau_l \sim \frac{\tau_{\rm eddy}^2}{\tau_A} \sim \frac{l^2 v_A}{l_{||} \delta u_l^2}.
\end{equation}
Therefore, the energy transfer time at scale $l$ is longer than the eddy turnover
time by a large factor and, as expected, the non-linear cascade proceeds much more slowly.
The second major assumption in the IK theory of weak turbulence
is its isotropy, i.e. $l_{||} \sim
l$. Substituting this scaling into the relation $\epsilon = \delta u_l^2/\tau_l$, one obtains:
\begin{equation}
\delta u_l \sim (\epsilon v_A)^{1/4} l^{1/4}\\ {\rm and}\\
P_u(k) \sim (\epsilon v_A)^{1/2} k^{-3/2}.
\end{equation}
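The slow-down of the weak cascade is easily quantified. The sketch below
(toy numbers chosen only for illustration) evaluates $\tau_A$,
$\tau_{\rm eddy}$ and $\tau_l = \tau_{\rm eddy}^2/\tau_A$ for a sub-Alfv\'enic
wave packet, showing that the energy transfer requires of the order of $10^2$
wave-packet interactions in this example.
\begin{verbatim}
# toy numbers for a weak, sub-Alfvenic wave packet (illustrative only)
l, l_par = 1.0, 1.0             # isotropic IK packets: l_par ~ l
v_A, du = 10.0, 0.1             # delta u << v_A
tau_A = l_par / v_A             # wave crossing (Alfven) time = 0.1
tau_eddy = l / du               # eddy turnover time          = 10
tau_l = tau_eddy**2 / tau_A     # cascade (transfer) time     = 1000
N = tau_l / tau_eddy            # number of interactions      = 100
print(tau_A, tau_eddy, tau_l, N)
\end{verbatim}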
There is evidence for an IK cascade in the solar wind and
interplanetary medium \cite[e.g.][]{bam08, ng10}. However, many observations of
the solar wind turbulence also suggest a more Kolmogorov-like turbulence, i.e.
$\propto k^{-5/3}$ \cite[e.g. the early studies of \citeauthor{col68},
\citeyear{col68}; \citeauthor{mat82}, \citeyear{mat82}; and the more recent
papers by][]{ale08, chian09, sah10, li11, chian11, koz12, hel13}. It is possible though
that a mix of both cascades may occur, as pointed out by e.g. \citet{sal09} and
\citet{ale13}, who found K41 and IK cascades for the magnetic and
velocity field fluctuations, respectively. Moreover, most of these data also
reveal the solar wind turbulence to be highly anisotropic (i.e. $\delta u_l^{||}
\neq \delta u_l^{\perp}$) with respect to the local magnetic field \citep{hor08,hor12}.
As pointed by \citet{gol01}, one of the main issues raised by the solar
wind is {\it why is the power spectrum of this anisotropic, compressible,
magnetofluid often Kolmogorov-like?}
\subsubsection{The Goldreich-Sridhar model}
\begin{figure*}[ht]
\vspace*{2mm}
\begin{center}
\includegraphics[width=5cm]{fig2a.eps}
\includegraphics[width=5cm]{fig2b.eps}
\includegraphics[width=5cm]{fig2c.eps}
\includegraphics[width=5cm]{fig2d.eps}
\includegraphics[width=5cm]{fig2e.eps}
\includegraphics[width=5cm]{fig2f.eps}
\end{center}
\caption{Spectra and second-order structure function anisotropy of the velocity
dispersion ($\delta v$) of the different
wave modes in MHD turbulence. The Alfv\'en and slow modes present a K41 power
cascade and strong anisotropy of the velocity dispersion at small scales, while fast waves present an IK
cascade and are basically isotropic at all scales. Data from a $1024^3$ isothermal, sub-Alfv\'enic and
subsonic turbulence model. \label{fig2}}
\end{figure*}
\citet{mar90} remarked that if, instead of an Alfv\'en time, the timescale for
the waves to non-linearly interact with each other was the regular eddy turnover
time, i.e. $\tau_l \simeq \tau_{\rm eddy} \sim l_{||}/\delta u_l$,
one would get a K41 cascade for the
magnetized turbulence. This would be true also for the case of strong turbulence,
$|{\bf z}^\pm| > \breve{B}_0$. The isotropy condition was retained, however, which raised a
problem, since most of the observational data
mentioned above reveal strongly anisotropic turbulence.
\citet[GS95 hereafter]{gol95} proposed a turbulent model based on anisotropic
fluctuations, with strong coupling between the wave modes. Strictly speaking
the GS95 model assumes a critical balance between mechanical and
Alfv\'enic modes in such a way that $l_{\perp}/\delta u_l \simeq
l_{||}/v_A$.
Therefore:
\begin{equation}
l_{||} \sim v_A \epsilon^{-1/3} l_{\perp}^{2/3}\\ {\rm and}\\
P_u(k) \propto k^{-5/3}.
\label{eq18}
\end{equation}
From Eq.~\ref{eq18}, not only is the magnetized turbulence anisotropic, but it is also
{\it local}, in the sense that the anisotropy is measured in the reference frame
of the local magnetic field. Such an anisotropy is expected to occur in both
the velocity dispersion ($\delta v$) and the wave vectors ${\bf k}$, though it
is easier to observe velocity dispersion anisotropies in the interstellar medium, as
discussed below.
Therefore, statistically, a large number of eddies
with local fields randomly distributed in space results in zero average
anisotropy (even at small scales). In strongly magnetized cases though, the
anisotropy is more clearly detected in experiments and observations.
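A short sketch (our own illustration, in units where $\epsilon = v_A = 1$)
makes the critical balance of Eq.~\ref{eq18} explicit: combining
$\delta u_l \simeq (\epsilon l_\perp)^{1/3}$ with
$l_\perp/\delta u_l \simeq l_{||}/v_A$ yields $l_{||} \propto l_\perp^{2/3}$,
i.e. eddies become progressively more elongated along the local field
at small scales.
\begin{verbatim}
import numpy as np

eps, v_A = 1.0, 1.0                    # arbitrary units
l_perp = np.logspace(-4, 0, 5)
du = (eps * l_perp)**(1.0/3.0)         # perpendicular K41-like cascade
l_par = v_A * l_perp / du              # critical balance
print(l_par / l_perp**(2.0/3.0))       # constant: l_par ~ l_perp^{2/3}
print(l_par / l_perp)                  # anisotropy grows at small scales
\end{verbatim}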
Several direct numerical simulations of magnetized turbulence in a
quasi-incompressible regime have been performed in the past decade. Many
numerical experiments reveal that MHD turbulence indeed has a large part of its
energy cascade close to a K41 distribution. However, as shown by
\citet{cho02a, cho03, cho02b} and \citet{kow10}, the decomposition of the
different modes in MHD turbulence actually reveals that, although Alfv\'en and slow
modes behave as K41 type of turbulence and are anisotropic, the fast modes are
isotropic and follow IK statistics (see Figure~\ref{fig2}).
Effects of imbalanced (or cross-helicity) turbulence in the cascade and
statistics of the local fields have also been addressed in the past few years
\citep[][and references therein]{lit07, ber08, ber10, wic11, mar13}. Imbalanced turbulence occurs when waves
traveling in opposite directions along the mean magnetic field are of unequal
amplitudes, i.e. carry different energy fluxes to small length scales, so that
${\bf z}_l^+/{\bf z}_l^- \neq 1$ and ${\epsilon}_l^+/{\epsilon}_l^- \neq 1$.
The imbalance may arise in MHD turbulence since the interaction timescales
between the waves ${\bf z}_l^+$ and ${\bf z}_l^-$ are different, and the cascade
generally occurs faster for ${\bf z}_l^-$. This is understood because the number
of interactions ($N$) is much larger for counter-propagating wave packets, resulting in
${\epsilon}_l^+/{\epsilon}_l^- > 1$.
In such a scenario, numerical simulations
show that the anisotropy is not equal for the different wave modes.
Locality of scales for wave-wave interactions has also been the subject of recent
studies in turbulence \citep{car06, ale07, min08, alu09, ber10}. Magnetic
fields are responsible for long-range interactions, through the Lorentz force acting
over the whole fluid frozen to the field. Therefore, different wavelengths may interact with each
other non-linearly. Bi-spectra of fluctuations of density are discussed in
\citet{bur09}, and the non-local interactions appear to be important in MHD
and supersonic turbulence models. A similar approach is used for studying the
non-local interactions of Els\"asser modes \cite{cho10}, resulting in a
substantial fraction of non-local interactions in MHD turbulence. The role of
the non-local interactions in the turbulent cascade is still not clear though.
Turbulence in magnetized collisionless plasmas has also been studied
in the past few years \cite[e.g.][and others]{hel06, sch08, bal09} in order to
determine the role of collisionless plasma instabilities on the dynamics of
plasma turbulence. Simulations of \citet{kow11, san13} reveal that the statistics
are still dominantly Kolmogorov-like, though strong asymmetries may also arise at
small scales due to instabilities (firehose, mirror and cyclotron).
\section{Signatures of a turbulent ISM}
In the previous section some theoretical aspects of
turbulence have been presented. Its direct comparison to the
dynamics of the interstellar gas is not trivial, as we discuss in
the following. However, we will present here some observational
evidence for a turbulent ISM, and discuss the possible turbulent
regimes that may be inferred from it.
The recognition of a turbulent interstellar medium dates back to
the 1950's, with the work of \citet{von51} on the
spatial distribution of dense structures in the plane of the sky. He
recognized the hierarchy of structures and suggested its turbulent
origin. The identification of turbulent motions
followed shortly after, from measurements of velocity dispersions
\citep{vonH51}. Later on, the observational and
theoretical support for a turbulence-dominated ISM has grown considerably
\cite[see reviews by][and references therein]{elm04, mac04, hen12}, causing
a major shift in the understanding of the nature of the ISM, from a
thermal pressure dominated system, as thought before, to a very dynamic
multi-phase system.
\subsection{Density distributions}
As mentioned above, one of the main signatures of the turbulent
character of the ISM is related to the density distribution of its contents. Up to now,
tracers of the gas density distributions of the ISM at large scales have been dominantly indirect\footnote{{\it in situ} data have been obtained in the nearby interstellar plasma
by Voyager 1 \citep{gurnett13}, though no direct study of the local turbulence has been discussed yet.}. They rely on spectral lines and continuum emissions from the different phases
of the ISM: the hot and fully ionized (HIM),
the warm and fully/partially ionized medium (WIM/WNM), and the
cold weakly ionized (CNM). Since these emissions are integrated along lines of sight and projected
onto the plane of the sky, sophisticated inversion methods have to be implemented.
Statistical analyses of the
temporal and spatial variability of these emission fluxes are therefore
the readily accessible observational techniques for studying interstellar turbulence.
\begin{figure}[ht]
\vspace*{2mm}
\begin{center}
\includegraphics[width=8cm]{fig3.eps}
\end{center}
\caption{Power spectrum of density along the line-of-sight from different data
sets, with the dashed line as a reference for a Kolmogorov-like spectrum
($k^{-11/3}$) \cite[extracted from][]{arm95}. \label{fig3}}
\end{figure}
With hydrogen being the most abundant element in the Universe, the $\lambda$ 21cm line of neutral
hydrogen is a key diagnostic. Its line integrated emission is proportional to the bulk of the
hydrogen column density, since its opacity remains low over most of the ISM.
Statistics of the HI intensity spatial distributions have therefore been used
to probe interstellar turbulence but the results are far from homogeneous.
\citet{gre93} studied the
power-law of the spatial power spectrum of the HI emission from different
fields in our Galaxy. He obtained power spectra with slopes between -2.2 and
-2.8 at a scale range between 35 and 200~pc. From the HI 21 cm absorption
towards Cas A, \citet{roy09} derived a power law with index -2.7,
consistent with Kolmogorov turbulence
in the diffuse interstellar medium.
However, \cite{miv03} find an impressive power-law in the nearby ISM at high galactic latitude
with a single slope of -3.6 over two orders of magnitude in scales (between 0.1 and 25~pc).
Similar studies have been performed since then, including other density tracers
such as the CO and $^{13}$CO line
emission of molecular clouds and power-laws have also been inferred \cite[e.g.][]{ben01, hil08}.
A review of the scatter of the power-law slopes measured is given in \cite{hen12}.
The scatter of the slope values is certainly affected by projection effects:
one would expect a 2D power spectrum $k^{-8/3}$ for an intrinsic Kolmogorov scaling. However,
the integration along lines of sight crossing often large amounts of turbulent ISM
with different properties tends to blur
such a simple law. Moreover, the different tracers originate in truly different phases
of the ISM with varying amounts of small scale structure that may affect the power spectrum
of the density distributions (i.e. in many cases, like supersonic turbulence,
density fluctuations are not simply advected by turbulence as passive scalars,
see \citet[e.g.][]{aud05}).
Indeed, as seen in Fig. 10 of \cite{hen12}, many studies
(including the power spectrum of the dust thermal emission) give power-law indices
close to -2.7. One cannot, however, simply presume that a Kolmogorov-like cascade
operates in the ISM, with scalings given by Equations 2 and 3. Even though compressibility
seems to have little effect on the statistics of the ISM, except for small scales ($\sim$ pc scales) and
cold and dense regions, magnetization effects may be important, as we discuss further below.
\citet{arm95} used another tracer of density fluctuations, the scintillation of the
background radiation (i.e. changes in the refraction index due to the turbulent motions in the
ionized components of the ISM) in order to obtain the density spectrum along the line-of-sight.
As a complementary method, fluctuations of the
Faraday rotation measurements (RM) in the plane of sky are
also used to estimate density fluctuations (once the magnetic field is known)
on the line-of-sight \cite[e.g.][]{min96}. The combined data provide the
density fluctuations along the line-of-sight, but for different lengthscales, as
seen in Figure~\ref{fig3}. The turbulence probed by both methods
(scintillations and RM) presents a most impressive spectrum, with a single
Kolmogorov-like slope across more than ten orders of magnitude in wavenumber.
Similar works have been done for external galaxies. Turbulence has
been characterized based on similar techniques for the Small
Magellanic Cloud \cite[see][]{sta99, sta01, che08, bur10} and revealed
spatial variations of HI morphology. \citet{dut13} calculated the HI intensity
fluctuation power spectrum for a sample of 18 spiral galaxies and found slopes
in the range of -1.9 to -1.5. Shallower spectra, compared to K41, could be evidence for
two-dimensional eddy dominated turbulence at scales larger than the disk thicknesses.
\subsection{Velocity fields}
\subsubsection{Direct statistical analysis}
Spectral lines of several species observed with high spectral resolution may be used to
infer the turbulence velocity distributions in the different phases of the ISM, such
as hydrogen lines (mostly) and some ions for the diffuse ISM
\cite[e.g.][]{bow08}, and molecular spectral lines ($^{12}$CO and $^{13}$CO in
most surveys) for the molecular clouds. The early surveys of \citet{lar81} and
\citet{sol87} revealed the universal line-width and mass distribution scalings
among molecular clouds. Notably, both works
pointed to a velocity dispersion relation $\sigma_v \propto l^{\alpha_\nu}$, with
$\alpha_\nu \sim 0.5$ (see Figure~\ref{fig4}, left). Many similar studies
were carried out to study the velocity distribution in molecular clouds, such as
the work by \citet{gol08, yos10, qia12, hey12} in the Taurus Molecular Cloud;
\citet{gus06} and \citet{liu12} for the Orion Complex, and many others.
More recent studies confirmed the same scaling relation
although with slopes varying
significantly \citep{hey04, qia12}. \citet{qia12} for instance used
the variance of the velocity difference of cores in molecular clouds, instead of
the line width, and obtained $\alpha_\nu \sim 0.7$. On the other hand, massive cores
are known to exhibit shallower slopes compared to what is frequently assumed
(i.e. $\alpha_\nu < 0.5$).
\begin{figure*}[ht]
\vspace*{2mm}
\begin{center}
\includegraphics[width=16cm]{fig4.eps}
\end{center}
\caption{Velocity dispersion relations from different surveys, \citet{hey04}
(Left panel) and compilation from several surveys done by \citet{bal11}
(Right panel). As the latter authors point out, while large CO clouds from
the survey by \citet{hey09} exhibit the typical Larson relationship, denser structures
show larger velocity dispersions. This fact has been interpreted by those authors as
due to gravity in collapsing cores, while \citet{fal10b} argued for projection effects and compressibility. \label{fig4}}
\end{figure*}
Recently, \citet{bal11} compiled different observational surveys and concluded
that, while in general terms the typical CO clouds observed by
\citet{hey09} lie close to Larson's relation, this is clearly not the case for
the dense and massive cores, which exhibit large velocity dispersions for their
relatively small sizes (Figure 4). Those authors propose that the large dispersions
observed at small scales are related to increased velocities as the clouds
become gravitationally bound. However, the increased dispersion at small scales had
already been reported, based on numerical simulations, in \citet{fal10}
without self-gravitating objects. For these authors the large dispersion
observed at small scales is an intrinsic feature of the turbulent gas.
The broad dispersion
of the scaling relation indicates a turbulent regime dominated by compressible
motion at small scales, as discussed in Section 1.1, though regular incompressible
turbulence dominates at larger scales. Compressibility, as
described in Fleck's model (see Equation 10), naturally gives larger slopes for
the dispersion relation, with a value of $\alpha \sim 0.16$ favoured by observations.
It is
not clear though what is the actual role of gravity in the statistics of the molecular
cloud emissions.
At the large scale end of the cascade, the apparent uniqueness of the scaling of the velocity dispersion
with size scale suggests a universal source (or mixture of sources)
of energy for the molecular gas turbulence in our
Galaxy. \cite{che10} presented statistical analysis of high-latitude HI
turbulence in the Milky Way based on the velocity coordinate spectrum (VCS)
technique. They found a velocity power spectrum $P_u(k) \propto k^{-3.8}$
and an injection scale of $\sim 140 \pm 80$pc. The slightly steeper slope, compared to
K41, can be the result of shock-dominated (compressible)
turbulence, with averaged sonic Mach numbers $\sim 7 - 8$ (see
Section~\ref{sec1.1} above).
Two-point statistics are also used but, since in situ measurements are not yet available, one
easily accessible observable turns out to be the variations in the plane-of-the-sky of the
line-of-sight centroid velocity of spectral lines. \citet{lis96}
showed that they trace the plane-of-the-sky projection of the vorticity.
Using a sample of about one million independent CO spectra in a diffuse field,
\citet{hil09} identified, on statistical grounds, the ensemble of positions
at which vorticity departs from a Gaussian distribution. These form coherent elongated
structures at the parsec-scale that are found to harbor sub-structures of
the most intense velocity shears down to the milliparsec scale \cite{fal09}.
These coherent structures are proposed to be the manifestations of the intermittency
of turbulent dissipation in diffuse molecular clouds (see the review of \cite{hen12}),
which may be compared to Equation 8 above.
\citet{li08} studied the scaling relations of the velocity dispersions from
different neutral and ionized molecular species, namely HCN and HCO$^+$, in the
region of M17. As occurs in many other star-forming regions, the ionized
molecules systematically present smaller velocity dispersions compared to the
neutral ones. Such a difference arises as turbulent energies dissipate differently
for the species due to ambipolar diffusion. \citet{fal10b} showed that the
dispersion for ions is typically smaller than that for the neutral
species basically due to the damping of the ion turbulence at the
ambipolar diffusion scales ($\simeq 0.01$pc).
The direct comparison between statistics of observational data and the theory
must be done with caution. Column density projections, i.e.
emission maps, are influenced by projection effects. Different
structures projected onto the same line-of-sight, but decorrelated at a given
lengthscale, may be observed as a single structure in the projected emission map.
Some deconvolution is possible though, once the velocity profile is known with high
spectral resolution.
\subsubsection{Indirect access to the velocity field via maser emission}
The low surface brightness of the above tracers
and projection effects make the direct analysis of
turbulent flows in the ISM difficult. Maser spots, which are bright point sources
transported by turbulence as passive scalars (because they are tiny, low-mass structures),
turn out to be powerful tracers of the turbulent velocity field.
Maser radiation in molecular lines appears in dense regions
where population inversion can be generated by radiative pumping, for instance
in the dense
molecular gas of star-forming regions (SFRs) associated with ultra-compact HII
regions, embedded IR sources, hot molecular cores, Herbig-Haro objects, and
outflows \citep{lit74,rei81,eli92,lo05}. Maser emissions are often
characterized by high brightness temperatures and high-degrees of polarization.
Intense maser emission is detected in the molecular lines of hydroxyl
(OH), water (H$_2$O), silicon monoxide (SiO), ammonia (NH$_3$), methanol
(CH$_3$OH), among others.
\citet{wal84} used the {\it Very Long Baseline Interferometry} ($VLBI$) maps of
the H$_2$O maser source in W49N to demonstrate that both two-point velocity
increments and two-point spatial correlation functions exhibit power-law
dependencies on the maser spot separation, which is indicative of a turbulent
flow. \citet{gwi94} performed statistical analysis of VLBI data for
W49N to confirm the power-law dependence of the velocity dispersion and spatial
density of masing spots on spatial scale, and interpreted this observation as
evidence of turbulence. \citet{ima02} reported sub-milliarcsec structures of
H$_2$O masers in W3 IRS 5.
A cluster of maser spots (emission spots in
individual velocity channels) displays velocity gradients
and complicated spatial structure. Two-point spatial correlation functions
for the spots can be fitted by the same power laws in two very different spatial
ranges. The Doppler-velocity difference of the spots as a function of spot separation
increases as expected in Kolmogorov-type turbulence. \citet{str02} used VLBI data to
investigate the geometry and statistical properties of the velocity field traced
by H$_2$O masers in five star-forming regions. In all sources the angular
distribution of the H$_2$O maser spots shows approximate self-similarity
over almost 4 orders of magnitude in scale. The lower order structure
functions for the line-of-sight component of the velocity field can be fitted by
power laws, with the exponents close to the Kolmogorov value.
Similar results were also obtained for other
regions \cite[e.g.][]{ric05, str07, usc10}.
\subsection{Turbulent magnetic fields}
The magnetic field in our Galaxy is modelled as a superposition of
different components: i) a large-scale field, following a spiral structure
in the plane of the galactic disk and extending high above the plane into the Galactic halo,
and ii) a complex component of
locally disturbed magnetic fields, which are related to molecular clouds and
star formation regions. The spiral pattern in the disk aligns with
the spiral arms \cite[e.g.][]{han06}. This is
expected since the shear of gaseous motion around the center of the
galaxy stretches the field lines in this direction \cite[see review of mean
field dynamo by][]{bec96}.
There are four main methods to study the fluctuations in the ISM magnetic field,
namely dust polarization (both in thermal emission in the far-infrared
(FIR) and in absorption in the visible and near-IR), the Zeeman
effect in spectral lines, Faraday rotation, and the polarization of the synchrotron
emission. Polarized synchrotron
emission can also be mapped in order to
provide the geometry of the field lines in the plane of the sky. Faraday
rotation and synchrotron polarization measurements excel in probing the magnetic
field of the diffuse ionized medium of the ISM, i.e. they are excellent tools to
study the large scale fields of galaxies in general. More extensive reviews both
on magnetic fields in star formation regions and galactic
scale magnetic fields are given in \citet{cru12} and
\citet{han06}, respectively.
As mentioned earlier, synchrotron emission polarization can
be used for mapping the large scale structure of the magnetic fields in galaxies
\cite[see review by][]{bec09}. The fields traced by the polarized synchrotron
emission present intensities of the order of $\sim 10 - 15\mu$G.
However, the synchrotron emission probes the ionized medium
only, which is less useful in determining the
turbulence properties of the star formation regions, dominated
by the dense and neutral components of the ISM. Therefore, a magnetic field with
intensity $\sim 10 - 15\mu$G is supposed to thread most of the galactic disk,
except the dense regions of the arms where the local properties of
the plasma and stellar feedback may dramatically change the field properties.
\citet{opp12} compiled an extensive catalog of Faraday rotation measure
(RM) data of compact extragalactic polarized radio sources in order to
study the angular distribution of the all-sky RMs. The authors found
an angular power spectrum $P(k) \propto k^{-2.17}$ for the Faraday
depth, which is given by the line-of-sight integral of the product of the magnetic field
component $B_{\rm LOS}$ and the electron number density $n_e$. The
combination of the RM and polarization vectors of the synchrotron emission
allows to reconstruct the three-dimensional structure of
galactic magnetic fields. Such angular fluctuations of the Faraday depth are
thought to be related to the turbulent ISM. However, the
relationship between the fluctuations of the RM and the local fluctuations of
electron density and magnetic fields is not clear yet. This, for
instance, is an interesting subject for further comparisons
with simulations (as in \citet{gae11}).
Possibly the most direct method for estimating the magnetic field intensity in
the dense and cold ISM relies on the detection of the Zeeman effect \cite[see][for
details]{rob08}. For instance, \citet{sar02} detected and
studied the Zeeman effect in H$_{2}$O masers in several SFRs and
determined line-of-sight magnetic field strengths ranging from 13 to 49 mG.
They found a close equilibrium between the magnetic field energy and
turbulent kinetic energy in masing regions.
\citet{alv12} showed that shock-induced H$_{2}$O masers are important
magnetic-field tracers of very high-density gas in low-mass protostellar core
IRAS 16293-2422. They investigated whether the collapsing regime of this source
is controlled by magnetic fields or other factors such as turbulence, and
concluded that the magnetic field pressure derived from data is comparable to
the ram pressure of the outflow dynamics. This indicates that the magnetic
field is energetically important for the dynamical evolution of the
protostellar core.
Due to its brightness, maser emission is well suited for probing
magnetic fields, but masing regions are rare and limited in extent.
The Zeeman effect in non-masing regions has been detected for
HI, OH, and CN lines for which the turbulent
broadening is typically larger than the Zeeman splitting in frequency.
The compilation by \citet{cru99} and recent CN
Zeeman observations in SFRs by \citet{falgarone08} show that the
turbulent motions within the SFRs and molecular clouds are supersonic but
sub-Alfv\'enic. The {\it upper limit} magnetic field intensity scales
with density, estimated from a Bayesian analysis, as
$B \propto n^\kappa$, with $\kappa \sim 0.47$ \citep{cru10}. Collapsed
structures along the mean field would produce $\kappa \rightarrow 0$, while
shock compressions perpendicular to the field lines result in $\kappa
\rightarrow 1$. The observed relation with $\kappa \sim 0.47$ is expected, for instance,
in Alfv\'enic perturbations and is in agreement
with MHD simulations \cite[e.g.][]{bur09}. It was also claimed in
that work that, despite their relative importance in the overall dynamics of
clouds, the uniform magnetic fields in these clouds are in general not strong
enough to prevent gravitational collapse based on the mass-to-flux ($M/\Phi$)
ratios observed. Other major compilations of Zeeman measurements in molecular
clouds are given, for example, in \citet{bou01} and \citet{tro08}
with similar results.
\subsubsection{Polarization maps of molecular clouds}
Radiation may be polarized due to a preferred direction for emission/absorption
by aspherical dust grains, as well as by some molecules and atoms. The ISM is
known to be populated by a complex distribution of grain sizes and shapes.
Depending on its composition, an aspherical rotating dust particle may align
with the magnetic field lines. The orientation of the polarization of
radiation is then linked to the orientation of the magnetic field itself
\cite[see review by][]{laz07}. A wealth of observational data has been made available
in the past decade, on both absorption and emission dust polarization
\cite[e.g.][]{heiles00,chapman11}.
The strengths of magnetic fields can be
estimated from polarization maps by the \citet{cha53} (CF) technique. The CF
method is based on the assumption that the magnetic and turbulent kinetic
pressures are the dominant ones within the cloud, and that the fluid motions are
coupled to the magnetic field lines. In this sense, any perturbation from the
fluid turbulence will result in a change in the orientation of the field lines.
Major improvements on the CF technique are given e.g. by \citet{fal08},
\citet{hild09} and \citet{hou09}. If the velocity dispersion $\delta v_{\rm
los}$ is known, e.g. from spectral lines, the mean magnetic field in the plane
of sky can be estimated as \citep{fal08}:
\begin{equation}
B_{\rm sky}^{\rm uniform} \simeq \delta v_{\rm los} \left(4 \pi \langle\rho\rangle \right)^{1/2} \left[\tan\left(\delta \phi \right)\right]^{-1},
\end{equation}
\noindent
where $\delta \phi$ represents the dispersion in the polarization angle.
From the equation above, the ratio $\delta B/B_{\rm sky}$ - assumed to be
$\sim \tan \left(\delta \phi\right)$ - is directly related to
the Alfv\'enic Mach number of the turbulence. Notice that the
dependence of the projected $\delta B/B_{\rm sky}$ with the actual 3D MHD
turbulence may be removed from higher order statistical analysis, as proposed in
\citet{fal08}.
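As a numerical illustration of the method (with hypothetical input values,
not taken from any particular observation), the sketch below applies the
expression above to a dense region with $n_{\rm H_2} \sim 10^4$ cm$^{-3}$,
$\delta v_{\rm los} \sim 1$ km s$^{-1}$ and $\delta\phi \sim 10^\circ$.
\begin{verbatim}
import numpy as np

# hypothetical input values, for illustration only
mu_m = 2.33 * 1.67e-24              # mean molecular mass [g]
rho = 1.0e4 * mu_m                  # mass density for n ~ 1e4 cm^-3
dv_los = 1.0e5                      # 1 km/s in cm/s
dphi = np.deg2rad(10.0)             # polarization-angle dispersion

B_sky = dv_los * np.sqrt(4.0*np.pi*rho) / np.tan(dphi)
print(B_sky * 1e6, "microGauss")    # a few hundred microgauss
\end{verbatim}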
The left image of Figure 5 presents the polarization map of the Musca Dark Cloud \citep{per04}
at optical wavelengths, as a result of dust absorption. Vectors represent the magnetic field
orientation. The filamentary morphology of the dark cloud is perpendicular to the external field,
which is very uniform, indicating a sub-Alfv\'enic turbulent regime. The right-hand side of Figure 5
shows the polarization map overplotted on the column density projection of a 3D MHD numerical simulation of
sub-Alfv\'enic turbulence \citep{fal08}. Such comparisons between MHD numerical simulations and measurements
of magnetic fields in the ISM are important in unveiling the physics of MHD turbulence and its role in other
phenomena such as star formation.
Spatial dispersion of magnetic fields in molecular clouds from polarization maps
may be used to characterize the
power spectrum of magnetized turbulence in the inertial and dissipation ranges.
\citet{hou11} found a power law inertial range for the magnetic field spatial
distribution that is $\propto k^{-2.9 \pm 0.9}$, and a cutoff at scales $\sim
0.009$pc, which is claimed by the authors to be related to the ambipolar
diffusion scales.
\begin{figure*}[ht]
\vspace*{2mm}
\begin{center}
\includegraphics[width=7cm]{fig5a.eps}
\includegraphics[width=8.5cm]{fig5b.eps}
\end{center}
\caption{Left: optical polarization map of the Musca Dark Cloud \cite[extracted from][]{per04}. Right: simulated polarization map from a three-dimensional simulation of MHD sub-Alfv\'enic turbulence \cite[extracted from][]{fal08}. \label{fig5}}
\end{figure*}
Again, as in the modelling of the velocity statistics,
gravity is claimed to interfere in the statistics of the
observed polarization maps \citep{koc12a, koc12b}. Gradients in emission
towards the cores of molecular clouds have been shown to be associated with
gradients in the polarization angles. A transition from a magnetically
subcritical to a supercritical state\footnote{i.e. the system becomes
supercritical once gravity overcomes the magnetic pressure.}
could then explain the trend, and this
technique could provide an independent way to estimate the local
magnetic force compared to gravity.
\citet{hey12} showed that the turbulence in the densest regions of the Taurus
molecular cloud is super-Alfv\'enic, while the reverse is true in the surrounding lower density
medium, threaded by a strong magnetic field. This observational
result is in agreement with the transition expected between
scales as dense structures are formed, e.g. by shocks, in a supersonic but
sub-Alfv\'enic large scale turbulence \cite[see discussion in][]{fal08, hey08, bur09}.
Similar to the synchrotron radiation case, by combining
dust polarization maps with Zeeman measurements in molecular clouds one can
determine the three-dimensional structure of the magnetic field. \citet{poi13}
recently succeeded in testing this approach for a number of objects of the
SCUBA Polarimeter Legacy (SCUPOL) data catalog. The authors were able to determine the orientation of the
mean field with respect to the line-of-sight, as well as to estimate the
turbulence regime within several molecular clouds. The authors also claimed
that all observed clouds seem to present a universal large scale turbulence that
is supersonic ($M_s \sim 6 - 8$) and sub-Alfv\'enic ($M_a
\sim 0.5 - 0.9$), at scales as large as 50pc.
In terms of comparing these data with basic theories of magnetized turbulence,
most observations point towards a magnetically dominated turbulence at
scales larger than a few tenths of a parsec. \citet{hey08} also showed one of the
first evidence of anisotropic turbulence in molecular clouds, with respect to the
large-scale magnetic field orientation. The observations of the Taurus Molecular Cloud
revealed a significant anisotropy in the velocity dispersion ($\delta v$),
which is larger for lags perpendicular to the mean large-scale field lines.
Even though a Goldreich-Sridhar similarity relation is not obtained, the anisotropy
observed is a strong indication of strong coupling between MHD wave modes in the
interstellar turbulence, as predicted by the GS95 model.
We could extrapolate a bit and say that a GS95 model
combined with fractal density distributions, as given in \citet{fle96},
is favoured for the ISM turbulence based on current observations.
\section{Origins of interstellar turbulence}
Surveys of different atomic and molecular line emissions have shown us
that the diffuse ISM is turbulent at scales $>150$pc, with $\delta v \geq 50$km
s$^{-1}$. This results in an energy transfer rate per unit volume\footnote{this
estimate is at least one order of magnitude larger than that of \citet{mac04},
since these authors considered a lower injection
velocity at the largest scales ($\delta v_L = 10$km s$^{-1}$).} of $\epsilon
\simeq m_H n_H \delta v_L^3/L \sim 10^{-25} - 10^{-24}$erg cm$^{-3}$ s$^{-1}$.
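This order-of-magnitude estimate is simple to reproduce; the sketch below
uses the fiducial values $n_H \sim 1$ cm$^{-3}$, $\delta v_L = 50$ km
s$^{-1}$ and $L = 150$ pc (our assumed inputs).
\begin{verbatim}
m_H = 1.67e-24                # proton mass [g]
n_H = 1.0                     # number density [cm^-3]
dv = 50.0e5                   # 50 km/s in cm/s
L = 150.0 * 3.086e18          # 150 pc in cm
eps = m_H * n_H * dv**3 / L
print(eps, "erg cm^-3 s^-1")  # ~ 4.5e-25, within the quoted range
\end{verbatim}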
\citet{bru09} estimated the driving scales of turbulence for
molecular clouds by comparing observed and synthetic CO
velocity dispersions from numerical simulations. They found that only models
with large-scale sources of turbulence, such as supernova-driven outflows (SNe) and galactic
dynamics, fit the observed data well.
Supernovae have been claimed as main turbulence drivers by many authors
\cite[e.g.][]{nor96, mac04, avi04, avi05, jou06, hil12}. They certainly
correspond to an important driving mechanism for turbulence in starburst regions
and small galaxies \cite[e.g.][]{fal10, rui13}. However, their impact on galactic
turbulence, in a more generalized sense, is still a matter of debate.
One issue is that numerical simulations of SNe driven turbulence
create superbubbles that are far too hot and diffuse \cite[see][]{avi05, jou06,
hil12}. Other critical arguments disfavour SNe as a main
driver mechanism as well, at least for our Galaxy. \citet{zha01} analysed the CO
emission lines from the Carina Complex and obtained a turbulent energy flux per
unit mass cascading over scales of $\sim 10^{-7}$(km s$^{-1}$)$^2$
yr$^{-1}$, which could not
be explained by stellar feedback, but is in rough agreement
with the injection rate of energy from the gravitational interaction of
the ISM gas and the galactic spiral arms. \citet{san07} also showed that HI mapping of
our Galaxy is consistent with a turbulence injection rate that is not directly
related to the star formation rate, but is about constant with respect to
the galactocentric radius. Also, the correlation lengths related to SNe
turbulence are strongly dependent on local properties (such as local density and
temperature) \cite[see][]{lea09}. Such local dependence also occurs with
respect to the height above the galactic plane, since SNe
energy is easily released outwards \cite[e.g.][]{mel09, mara13}. The
universality of the observed properties of turbulence in our Galaxy,
together with the extremely large
injection scales ($>100$pc), suggests a Galactic-scale driving
source, which is later amplified, as a second-order effect, by local stellar feedback.
\cite{qia12}, for instance,
obtained similar core and ambient turbulent statistics, which suggests that
molecular cores condense from more diffuse gas, and that there is
little (if any) additional energy input from star formation into the more
diffuse gas.
Models of turbulence driven by galactic dynamics, such as by
velocity shears in the galactic disk, were proposed long ago \cite[e.g.][]{fle81}.
Instabilities such as the magneto-rotational instability have also been proposed
\cite[e.g.][]{sel99, kim02}. Interactions between the arms of the Galaxy and
the disk gas also generate perturbations, as large as $20$km s$^{-1}$
\citep{gom02}, that could explain most of the injection of energy into turbulent
motions.
It is not clear yet which of these mechanisms (SNe or galactic
dynamics) is more important for the observed turbulence in the ISM.
Certainly, it is a promising subject for
studies in the upcoming years, on both the theoretical and observational sides.
\conclusions
In this work, we briefly reviewed part of the current understanding of
incompressible, compressible and/or magnetized turbulence, which can be applied to
characterize the interstellar medium. There is a
vast literature available on each of these topics, and a complete review of turbulence is beyond
the scope of this work. We discussed the recent theoretical
improvements made on the modelling and characterization of the different
turbulent regimes. Multifractal descriptions, statistics of probability distribution functions, and spectral
analysis are just a few of the tools currently employed to characterize spatial
and temporal variations of plasma properties associated with turbulent motions.
Phenomenological descriptions of turbulence in Fourier space, such as that of
Kolmogorov-Obukhov, are particularly simple and still very useful for the diagnostics
of interstellar turbulence. Since the scaling relations of these theories for compressible,
incompressible and magnetized turbulence differ from one another, observations
can be used to determine the turbulent regime of the ISM.
Spectroscopy has long been used to probe the velocity distributions along the line of sight.
The observed amplitudes of the turbulent motions indicate that the ISM
transitions from a supersonic turbulent regime at scales of tens to hundreds of parsecs,
at which the turbulence is driven, to a subsonic regime at smaller scales. The scales at which the turbulence is
subsonic depend on the ``phase'' of the ISM plasma. Dense molecular clouds present
lower temperatures, which result in
subsonic turbulence only at very small scales ($\ll 1$pc). The warmer
and more diffuse media, such as the warm neutral medium and the warm diffuse medium,
present subsonic flows at scales of a few parsecs due to their larger local sound speeds.
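As a concrete illustration (an order-of-magnitude sketch assuming a Larson-type scaling, not a result quoted from the literature reviewed here): with $\sigma_v(\ell)\simeq 1\,{\rm km\,s^{-1}}\,(\ell/1\,{\rm pc})^{1/2}$ and an isothermal sound speed $c_s\simeq 0.2$ km s$^{-1}$ for molecular gas at $T\simeq 10$ K, the sonic scale defined by $\sigma_v(\ell_s)=c_s$ is $\ell_s\simeq (0.2)^2\,{\rm pc}\simeq 0.04$ pc $\ll 1$ pc, while the much larger sound speeds of the warm phases push $\ell_s$ up to parsec scales.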
It is interesting to mention that this transition is deeply related to the
origin of the dense molecular
clouds. These objects originate either from the large-scale compressible motions
of the gas \citep[e.g.][]{will00} or, at small scales, from other mechanisms, such as
thermal instabilities. Current observations favour the former, given the clouds' lengthscales
and internal dynamics \citep[see][]{poi13}.
Spatial gas distributions over the plane of the sky are also provided observationally. The
filamentary structure observed reveals a turbulent regime dominated by compressible motions, at
least at most of the observed scales. Observations also reveal a magnetized ISM.
All these ingredients combined result in compressible and magnetized ISM turbulence,
challenging theorists to provide a phenomenological description
of the combined effects of supersonic flows and strong magnetic fields. Despite
the good agreement between observations and the Goldreich-Sridhar model for magnetized
turbulence and Fleck's model for compressible turbulence (regarding spectral slopes, scaling
relations and multifractal analyses applied to emission maps), a complete unified theory
is yet to be developed.
One of the major problems in comparing statistics of observed quantities to
theories of turbulence lies in projection effects.
Observations are spatially limited in the sense
that all statistics are done either along the line of sight (e.g.
scintillation, velocity dispersion from spectral lines, Faraday rotation)
or in the plane-of-the-sky. In addition, even the plane-of-sky maps are related to
integrated quantities (e.g. emission lines, column
density, Stokes parameters for the polarization maps). One
must therefore be careful when comparing these with theories
of three dimensional turbulence.
Other effects may also make the direct comparison between theory and
observations challenging.
Self-gravity of dense gas and stellar feedback, for instance, have been neglected in this paper.
These processes provide extra sources of energy and momentum, but are not
easily linked to the turbulent cascade. Despite their obvious importance for the
process of e.g. star formation, their role in the statistics of the turbulence is not
completely clear. Naturally, fragmentation and clumping would be enhanced if self-gravity
were considered \citep{vaz96,bal11,cho11}; however, its role in the cascade itself and in
intermittency is unknown.
Future theoretical studies will likely focus on understanding
the combined effects of processes such as magnetic fields, gravity, compressibility and radiation on
the energy transfer among scales. The formation of coherent structures, and how their statistics
relate to the bulk of the fluid, are vital for theories of star formation.
New data are also expected in the upcoming years. Although the
Herschel mission ended in early 2013, its data are not yet fully explored.
Other major observational facilities, such as
the Planck\footnote{The Planck mission's main goal is to observe the cosmic microwave background emission for cosmological purposes; however, the foreground is the ISM, and proper modelling of the ISM structure and magnetic fields will be mandatory.} satellite and the Atacama Large Millimeter Array (ALMA), will provide complementary data at radio and microwave frequencies with very high sensitivity, therefore going ``deeper'' than
other instruments have reached. Also, \citet{gurnett13}
recently presented the first {\it in situ} measurements of the interstellar plasma, as
Voyager 1 has crossed the heliopause and started to probe the nearby interstellar plasma. This opens
new possibilities for studying interstellar turbulence locally. It is clear that the future of
interstellar turbulence science is going to be very exciting.
\begin{acknowledgements}
DFG thanks the European Research Council (ADG-2011 ECOGAL), and
Brazilian agencies CNPq (no. 300382/2008-1), CAPES (3400-13-1)
and FAPESP (no.2011/12909-8) for financial support. GK acknowledges support
from the FAPESP grant no. 2013/04073-2. ACLC thanks the European Commission for the award of a Marie Curie International Incoming Fellowship, CNPq for support, and the Paris Observatory for its kind hospitality.
\end{acknowledgements}
\section{introduction}
The study of electron transport through single-molecule junctions (SMJs) has attracted great research interest in recent years.\cite{Nitzan,Galperin} One of the prominent features of molecular junctions is the essential role played by the vibrational degrees of freedom (phonons or vibrons). \cite{Smit,Leturcq,Park,Sapmaz} The coupling between the tunneling electron and the phonons in SMJs gives rise to a variety of interesting phenomena, e.g., a strong current suppression at low bias voltage, termed Franck-Condon blockade, observed in experiments on suspended carbon nanotube quantum dots, \cite{Leturcq} side peaks of the differential conductance due to inelastic tunneling of electrons accompanied by the emission or absorption of vibrons, \cite{Park} and the ubiquitous appearance of negative differential conductance in measured current-voltage characteristics. \cite{Sapmaz}
The theory of electron tunneling through a junction with vibrational modes was pioneered by the works of Glazman {\it et al.}\cite{Glazman} and Wingreen {\it et al.},\cite{Wingreen} in which analytical results for the transmission probability were obtained within a single-particle approximation. In the last decade, a number of different methods and approximations have been utilized to address this problem. \cite{Flensberg,Zhu,Mitra,ZChen,Avriller} In the high-temperature case, the sequential tunneling of electrons through the SMJ and also cotunneling effects were investigated within the rate-equation approach, \cite{Mitra,Koch} and giant Fano factors due to the avalanche-like transport of electrons were predicted in the nonlinear transport regime. \cite{Koch} In the low-temperature and weak electron-phonon interaction (EPI) case, interesting phonon effects on the current-voltage and shot-noise characteristics of this system were predicted within perturbation theory using the nonequilibrium Green's function (GF) method. \cite{Egger,Schmidt,Haupt,Entin-Wohlman}
The study of the strong coupling regime at low temperature is a more challenging problem. Based on a polaron transformation of the Hamiltonian, Galperin {\it et al}. \cite{Galperin2006a,Galperin2006b} proposed a self-consistent perturbation theory to treat the strong coupling regime by using the equation of motion method of the nonequilibrium
GF. H\"artle {\it et al}. \cite{Hartle} extended this method to systems with several vibrational modes, and studied nonequilibrium vibrational effects as well as quantum interference effects on the $I$-$V$ characteristics. Recently, the full counting statistics of the current and the zero-frequency shot noise of SMJs in the strong EPI regime were also addressed. \cite{Maier,Dong2013}
In addition to the dc characteristics, the ac response and finite-frequency noise properties\cite{Blanter} of SMJs are also important for their potential applications in future quantum devices. Recent experiments have addressed the dynamical properties of semiconductor quantum dots in the high-frequency regime. \cite{Gabelli,Frey} With the rapid progress in the field of molecular electronics, one can expect that the ac response properties of SMJs will be probed by experiments in the near future. However, the ac properties of SMJs or quantum dots coupled with a vibrational mode have been addressed only in a few theoretical works\cite{Ueda,Armour} in the weak coupling regime. To a large extent, the ac properties and transient dynamics\cite{Riwar,Tahir} of SMJs have not been extensively investigated and still remain elusive.
In this paper, we present the results of our investigation of the finite-frequency current noise spectra and the ac conductance of the SMJ system within the nonequilibrium GF formalism. \cite{Haug} A self-consistent perturbation theory for calculating the current and the current fluctuations in the strong EPI regime is formulated. By using the equation of motion approach, we first show that the previous self-consistent field theory \cite{Galperin2006a,Galperin2006b} can be improved by noting that the self-energy of the phonons in the SMJ is determined by the current fluctuations in this system. Then, we derive an analytical expression for the nonsymmetrized noise spectra \cite{Billangeon} by applying functional derivatives to the current formula. Consequently, the ac conductance can be obtained from the nonsymmetrized noise spectra by using the out-of-equilibrium fluctuation-dissipation theorem.\cite{Safi} In our numerical calculations, we find pronounced phonon effects on the absorption and emission spectra in the source- and drain-side leads and also on the ac conductance. By taking into account the phonon self-energy and relaxation effects, the effects of phonon heating \cite{Mitra,Entin-Wohlman2010,Urban} are considered when the SMJ is driven to a nonequilibrium state by a large bias voltage. We show that the unequilibrated phonons lead to a general suppression of the current and also smear out some inelastic tunneling features in the $I$-$V$ curve and the nonsymmetrized noise spectra, but negative contributions to the zero-frequency shot noise from inelastic tunneling processes remain evident at intermediate values of the EPI strength.
The organization of the paper is as follows: In Sec. II, the model Hamiltonian and the generic current formula are given for the system in the presence of external time-dependent measuring fields. In Sec. III, we derive the self-consistent equations for the nonequilibrium GFs of the phonon and the electron. In Sec. IV, we give analytical expressions for the current-current correlation functions and also the ac conductance. In Sec. V, some numerical calculations of the current noise spectra and the ac conductance are presented. Section VI is devoted to concluding remarks.
\section { model }
We consider a molecular quantum dot with only one energy level (with energy $\epsilon_d$) involved in the electron tunneling process, which is also coupled to a single vibrational mode (phonon) of the molecule with frequency $\omega_0$ and EPI strength $g_{ep}$. Electrons can tunnel between the molecular quantum dot and the left and right electron leads. Hence, this system is described by the Anderson-Holstein model:
\bn
H&=&\sum_{k\eta}\epsilon_{k\eta}c^\dagger_{k\eta}c_{k\eta}+
\epsilon_d d^\dagger d +\omega_0 a^\dagger a + g_{ep} d^\dagger d(a^\dagger+a) \nonumber\\
&&+\sum_{k\eta}\left [ \gamma_\eta e^{i \lambda_\eta
(t)}c^\dagger_{k\eta}d+{\rm H.c.}\right ]\;,
\en
where $\eta=L,R$ denotes the left and right leads, and $\lambda_\eta(t)$ is an external gauge potential (measuring field) coupled to the tunnel current from the lead $\eta$ to the molecular junction. We will show below that the measuring field $\lambda_\eta(t)$ provides a convenient tool \cite{Gogolin,Ding2013} for calculating various correlation functions of currents within the Schwinger-Keldysh formulation. It should be noted that we neglect the on-site Coulomb interaction in the molecular QD and consider a spinless electron model. We apply the canonical transformation
\bq
\tilde H= e^S H e^{-S}\;,
\eq
with $S=g d^\dagger d(a^\dagger-a)$ and the dimensionless parameter $g=g_{ep}/\omega_0$. The transformed Hamiltonian becomes
\bn
\tilde H&=&\sum_{k\eta}\epsilon_{k\eta}c^\dagger_{k\eta}c_{k\eta}+
\tilde\epsilon_d d^\dagger d +\omega_0 a^\dagger a \nonumber\\
&+&\sum_{k\eta}\left [ \gamma_\eta e^{i \lambda_\eta
(t)}c^\dagger_{k\eta}d X+\gamma^*_\eta e^{-i \lambda_\eta(t)}X^\dagger d^\dagger c_{k\eta} \right ]\;,
\en
with the phonon shift operators $X=e^{g(a-a^\dagger)}$, $X^\dagger=e^{-g(a-a^\dagger)}$ and the renormalized energy level $\tilde\epsilon_d=\epsilon_d-g_{ep}^2/\omega_0$. In the transformed Hamiltonian, the direct coupling between the electron and the phonon is eliminated, but the dot-lead tunneling amplitude is modified by the phonon shift operator $X$, which is responsible for the observation of the Franck-Condon steps in the current-voltage characteristics of this SMJ.
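For completeness, we quote how this (Lang-Firsov-type) transformation acts on the elementary operators, which can be checked directly from the Baker-Campbell-Hausdorff expansion:
\bq
e^{S}de^{-S}=dX\;,\qquad e^{S}ae^{-S}=a-g\,d^{\dagger}d\;.
\eq
Using $(d^\dagger d)^2=d^\dagger d$, the phonon and interaction terms then combine into $\omega_0 a^\dagger a-g^2\omega_0\, d^\dagger d$, which is the origin of the polaron shift in $\tilde\epsilon_d$.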
The electric current from the lead $\eta$ to the molecular dot, $I_\eta(t)=-e\langle {\frac {d N_\eta} {dt}}\rangle$, is given by
\bq
I_\eta(t)={\frac {i e}{\hbar} }\sum_{k}\left [ \gamma_\eta e^{i \lambda_\eta
(t)}\langle c^\dagger_{k\eta}d X\rangle-\gamma^*_\eta e^{-i \lambda_\eta(t)}\langle X^\dagger d^\dagger c_{k\eta}\rangle
\right ]\;.
\eq
It is convenient to introduce the composite electron operators $\tilde d\equiv d X$ and $\tilde d^\dagger\equiv X^\dagger d^\dagger$, and the corresponding contour-ordered GF $\tilde G_d(t,t')=-i\langle T_c \tilde d(t) \tilde d^\dagger(t')\rangle$.
Then, by the equation of motion method, one can express the current as an integration on the closed-time contour over a combination of the GFs of the composite operator and the leads' operators, as follows:
\bn
I_\eta(t)&=&{\frac { e}{\hbar} }\sum_{k}|\gamma_\eta|^2\int dt_1\left [e^{i \phi_\eta(t,t_1)} \tilde G_d(t,t_1)
g_{k\eta}(t_1,t)\right. \nonumber\\
&&\left. - e^{-i \phi_\eta(t,t_1)}g_{k\eta}(t,t_1)\tilde G_d(t_1,t)\right ]\;,
\en
with the phase factor $\phi_\eta(t,t_1)=\lambda_\eta(t)-\lambda_\eta(t_1)$, and
\bq
\tilde G_d(t,t')=-i\langle T_c d(t)X(t)X^\dagger(t')d^\dagger(t')\rangle \;,
\eq
$g_{k\eta}(t,t')$ being the bare GF of the lead.
\section{self-consistent perturbation theory}
In this section, we outline a self-consistent perturbation procedure for studying the out-of-equilibrium phonon and electron dynamics in this SMJ. It is based on the self-consistent perturbation theory previously suggested by Galperin {\it et al.}, \cite{Galperin2006a,Galperin2006b} but we find that the treatment of the phonon dynamics can be improved, as shown in the following.
First, we derive a generic relation between the phonon dynamics and the current fluctuations in this system. To study the phonon dynamics, one introduces the phonon momentum and displacement operators
\bq
P_a=-i(a-a^\dagger), \hspace{1cm} Q_a=a+a^\dagger\;,
\eq
and also the contour-ordered GF
\bq
D_{P_a}(t,t')=-i\langle T_c P_a(t)P_a(t')\rangle\;.
\eq
To obtain the self-consistent equation for the phonon GF $D_{P_a}$, one first derives the equations of motion in the Heisenberg picture for the phonon momentum and displacement operators $P_a$ and $Q_a$, respectively:
\bq
i {\frac {\partial P_a(t)} {\partial t}}=-i\omega_0 Q_a(t)\;,
\eq
\bq
i {\frac {\partial Q_a(t)} {\partial t}}=i\omega_0P_a(t)+2ig\sum_\eta j_\eta(t)\;,
\eq
where the particle current operator
\bq
j_\eta(t)=\frac {i} {\hbar} \sum_k \left [ \gamma_\eta e^{i \lambda_\eta
(t)} c^\dagger_{k\eta}d X-\gamma^*_\eta e^{-i \lambda_\eta(t)} X^\dagger d^\dagger c_{k\eta}
\right ]\;,
\eq
differs from the charge current operator only by the electron charge constant ($I_\eta=ej_\eta$).
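Combining Eqs. (9) and (10) yields the equation of a harmonic oscillator driven by the tunneling current,
\bq
\left( \frac {\partial^2} {\partial t^2} +\omega_0^2 \right) P_a(t)=-2g\omega_0\sum_\eta j_\eta(t)\;,
\eq
which motivates the definition of the inverse free propagator introduced next.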
Then, we introduce the differential operator
\bq
{D_{P_a}^{(0){-1}}}=-{\frac {1} {2\omega_0}}\left [ \frac {\partial^2} {\partial t^2} +\omega_0^2 \right ]\;,
\eq
and apply it to the GF $D_{P_a}$ in Eq. (8), which gives rise to
\bq
{D_{P_a}^{(0){-1}}}D_{P_a}(t,t')=\delta(t,t')-ig\sum_\eta \langle T_c j_\eta(t) P_a(t')\rangle\;.
\eq
Next, applying the operator ${D_{P_a}^{(0){-1}}}$ again to the above equation from the right (with respect to the time variable
$t'$), we obtain the equation of motion for $D_{P_a}$ as follows:
\bn
&&{D_{P_a}^{(0){-1}}}D_{P_a}(t,t'){D_{P_a}^{(0){-1}}}=\delta(t,t'){D_{P_a}^{(0){-1}}} \nonumber\\
&&\hspace*{1cm} -ig^2\sum_{\eta\eta'} S^p_{\eta\eta'}(t,t')\;,
\en
where $ S^p_{\eta\eta'}(t,t')=\langle T_c \delta j_\eta(t)\delta j_{\eta'}(t')\rangle $ is the particle current correlation function between the leads $\eta$ and $\eta'$. It should be noted that we have restricted our consideration to the steady state, and used the current conservation condition for a steady state, $\sum_\eta \langle j_\eta(t)\rangle=0$.
The above differential equation can be rewritten as an integral equation
\bn
&&D_{P_a}(t,t')=D_{P_a}^{(0)}(t,t')\nonumber\\
&&+\int dt_1 dt_2 D_{P_a}^{(0)}(t,t_1)\Pi(t_1,t_2)D_{P_a}^{(0)}(t_2,t')\;,
\en
with the self-energy term given by
\bn
\Pi(t_1,t_2)&=&-ig^2\sum_{\eta\eta'} S^p_{\eta\eta'}(t_1,t_2)
\nonumber\\
&=&-ig^2\sum_{\eta\eta'}\langle T_c \delta j_\eta(t_1)\delta j_{\eta'}(t_2)\rangle\;.
\en
As in Refs. [\onlinecite{Galperin2006a,Hartle}], we make the approximation of replacing the bare GF $D_{P_a}^{(0)}(t_2,t')$ on the right-hand side of the above integral equation by the full phonon GF $D_{P_a}(t_2,t')$, which gives the closed form of the Dyson equation for the phonon GF:
\bn
&&D_{P_a}(t,t')=D_{P_a}^{(0)}(t,t')\nonumber\\
&&+\int dt_1 dt_2 D_{P_a}^{(0)}(t,t_1)\Pi(t_1,t_2)D_{P_a}(t_2,t')\;.
\en
The above equation indicates that the self-energy for the GF of the phonon momentum operator is generated from the current fluctuations through the molecular junction.
If one can express $S^p_{\eta\eta'}$ in terms of the phonon GF $D_{P_a}$ and the electron GF $G_c$, then a self-consistent calculation procedure can be carried out. We discuss how to obtain an analytical expression for $S^p_{\eta\eta'}$ in the next section. It should be noted that the self-energy term in Eq. (16) has contributions not only from the current correlations in each lead separately but also from the cross correlations between the left and the right leads.
It is interesting to observe that the retarded (or advanced) and lesser (or greater) GFs of the phonon momentum operator are directly related to those
of the current operators in this system; for instance,
\bq
D_{P_a}^{r}(\omega)=D_{P_a}^{(0),r}(\omega)+ D_{P_a}^{(0),r}(\omega)\Pi^{r}(\omega)D_{P_a}^{r}(\omega)\;,
\eq
\bq
D_{P_a}^{<}(\omega)= D_{P_a}^{r}(\omega)\Pi^{<}(\omega)D_{P_a}^{a}(\omega)\;.
\eq
Therefore, in an out-of-equilibrium steady state, we can study the damping and the occupation number of the phonons via the emission and absorption parts of the current noise spectra in the leads. This reflects the coupling and the energy-flow balance between the electron and phonon degrees of freedom in this nonequilibrium system.
Using the usual non-crossing approximation, one can decouple the electron and phonon dynamics:
\bq
\tilde G_d(t,t')\approx G_c(t,t')K(t,t')\;,
\eq
where
\bq
G_c(t,t')=-i\langle T_c d(t)d^\dagger(t')\rangle\;,
\eq
\bq
K(t,t')=\langle T_c X(t)X^\dagger(t')\rangle\;.
\eq
This decoupling is similar to the Born-Oppenheimer adiabatic approximation in the study of the electron-nuclear dynamics of
molecular systems. One can understand it as follows: the motion of the electron is influenced by the time-dependent potential fluctuations generated by the phonon mode in the molecule, and the correlation function $K$ accounts for the correlations of this potential at different times. The correlation function $K$ can then be expressed in terms of the phonon GF by using the second-order cumulant expansion\cite{Galperin2006a}
\bq
K(t,t')=\exp\{-g^2[\langle P_a^2\rangle-iD_{P_a}(t,t')]\}\;.
\eq
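To make the physical content of $K$ explicit, it is instructive to evaluate it for the bare equilibrium phonon GF at zero temperature (a standard result, quoted here for illustration): the greater component then reduces to
\bq
K^{>}(t)=e^{-g^{2}\left(1-e^{-i\omega_{0}t}\right)}=e^{-g^{2}}\sum_{l=0}^{\infty}\frac{g^{2l}}{l!}\,e^{-il\omega_{0}t}\;,
\eq
i.e., a series of phonon sidebands at integer multiples of $\omega_0$ weighted by the Franck-Condon factors $e^{-g^2}g^{2l}/l!$. These weights control the step heights of the $I$-$V$ curves and noise spectra discussed in Sec. V.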
Based on the transformed Hamiltonian of Eq. (3), applying the equation of motion method together with the decoupling approximation in Eq. (20), it is straightforward to see that the electron GF $G_c(t,t')$ satisfies the non-equilibrium Dyson equation
\bq
(i{\frac \partial {\partial t}}-\tilde\epsilon_d)G_c(t,t')=\delta(t,t')+\int dt_1 \Sigma_c(t,t_1)G_c(t_1,t')
\eq
with the self-energy
\bq
\Sigma_c(t,t')=\sum_{k \eta} |\gamma_\eta|^2 g_{k\eta}(t,t')K(t',t) e^{-i\phi_\eta(t,t')}\;.
\eq
We introduce a quantity
\bq
\Sigma^{(0)}_\eta(t,t')=\bar\Sigma^{(0)}_\eta(t,t')e^{-i\phi_\eta(t,t')}\;,
\eq
with $\bar\Sigma^{(0)}_\eta(t,t')=\sum_{k} |\gamma_\eta|^2 g_{k\eta}(t,t')$; then
the expression for the self-energy can be written as
\bq
\Sigma_c(t,t')=\sum_{\eta}\Sigma^{(0)}_\eta(t,t')K(t',t)\;.
\eq
This notation simplifies our formal derivation of the current and current-fluctuation formulas in the next section. One sees that once the correlation function $K$ is calculated, it is straightforward to obtain the self-energy $\Sigma_c$ and the electron GF $G_c$.
\section{ Current fluctuation and the ac conductance}
In this section, we derive analytical expressions for the current fluctuations and the ac conductance of this molecular junction in the framework of the self-consistent perturbation theory. By using the definition of the self-energy $\Sigma^{(0)}_\eta$ in Eq. (26), it is easy to observe that the current from the lead $\eta$ to the molecule given by Eq. (5) can be rewritten as
\bq
I_\eta(t)={\frac {e} {\hbar}}\int dt_1 \left [ \tilde G_d(t, t_1)\Sigma_\eta^{(0)}(t_1, t)-\Sigma_\eta^{(0)}(t, t_1)\tilde G_d(t_1, t) \right ]\;.
\eq
Using the operational rules given by Langreth for contour integration, \cite{Haug,Langreth} one can show that this formula is equivalent to the current formula obtained by Jauho {\it et al.} for electron transport through a quantum dot,\cite{Jauho} except that the quantum-dot GF $\tilde G_d$ here is that of the composite operator.
The current formula Eq. (28) can be simply expressed as
\bq
I_\eta(t)=-i{\frac e \hbar} \int dt_1 dt_2
\tilde G_d(t_1,t_2)\Gamma^{(0)}_\eta(t_2,t_1;t)\;,
\eq
where we introduce the bare current vertex function $\Gamma^{(0)}_\eta(t_2,t_1;t)$ given by a functional derivative of $\Sigma^{(0)}_\eta(t_2, t_1)$ with respect to $\lambda_\eta (t)$
\begin{eqnarray}
\Gamma^{(0)}_\eta(t_2,t_1;t)&=&{\frac {\delta\Sigma_\eta^{(0)}(t_2,t_1)} {\delta\lambda_\eta(t)}}
\nonumber\\
&=&i\left [\delta(t_1,t)-\delta(t_2,t)\right ]\Sigma^{(0)}_\eta(t_2,t_1)\;.
\end{eqnarray}
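Note that the second line follows directly from the explicit phase factor in the definition of $\Sigma^{(0)}_\eta$ in Eq. (26): writing $\Sigma^{(0)}_\eta(t_2,t_1)=\bar\Sigma^{(0)}_\eta(t_2,t_1)\,e^{-i[\lambda_\eta(t_2)-\lambda_\eta(t_1)]}$ and using $\delta\lambda_\eta(t_i)/\delta\lambda_\eta(t)=\delta(t_i,t)$, the functional derivative simply pulls down the factor $-i[\delta(t_2,t)-\delta(t_1,t)]=i[\delta(t_1,t)-\delta(t_2,t)]$.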
Using the decoupling approximation in Eq. (20), we can rewrite the current formula as
\bq
I_\eta(t)=-i{\frac e \hbar} \int dt_1 dt_2
G_c(t_1,t_2)K(t_1,t_2)\Gamma^{(0)}_\eta(t_2,t_1;t)\;.
\eq
Comparing the above current formula with that of the non-interacting resonant tunneling model, one finds that the latter is quite similar but lacks the correlation function $K(t_1, t_2)$. We may thus interpret the effects of the vibrational mode on the electron tunneling in this system as a renormalization of the current vertex function, together with the induced self-energy in the GF $G_c$.
Now we are ready to calculate the current-current correlation functions on the closed-time contour.
These correlation functions can be obtained by functional differentiation of $I_\eta(t)$ with respect to $\lambda_{\eta'}(t')$:
\bq
S_{\eta\eta'}(t,t')\equiv \langle T_c\delta I_\eta(t)\delta I_{\eta'}(t')\rangle
=i e\frac {\delta I_\eta(t)}
{\delta\lambda_{\eta'}(t')}\;.
\eq
By using the following identity: $\frac {\delta G_c(t_1,t_2)} {\delta\lambda_{\eta'}(t')}=\int dt_3 dt_4 G_c(t_1,t_3)\frac {\delta \Sigma_c(t_3, t_4)}{\delta\lambda_{\eta'}(t')} G_c(t_4,t_2) $, and making the approximation $\frac {\delta K(t_1,t_2)} {\delta \lambda_{\eta'}(t')}\approx 0$, which corresponds to neglecting the influence of the external measuring potential $\lambda_{\eta'}$ on the phonon dynamics, it is straightforward to obtain a generic expression for the current correlation function\cite{Ding2013}
\begin{widetext}
\bn
S_{\eta\eta'}(t,t')&=&{\frac {e^2} \hbar}\delta_{\eta\eta'}\left [G_c(t,t')K(t,t')\Sigma^{(0)}_\eta(t',t)
+\Sigma^{(0)}_\eta(t,t')K(t',t)G_c(t',t)\right ]
\nonumber\\
&&+{\frac {e^2} \hbar}\int dt_1 dt_2 dt_3 dt_4
\left [G_c(t_1,t_2)\Gamma^{(0)}_{\eta'}(t_2,t_3;t')K(t_3,t_2)G_c(t_3,t_4)K(t_1,t_4)\Gamma_{\eta}^{(0)}(t_4,t_1;t)\right ]\;.
\en
The particle current correlation function $S^p_{\eta\eta'}$, which appears in the self-energy of the phonon GF in the previous section, is given by the relation $S^p_{\eta\eta'}=S_{\eta\eta'}/e^2$.
By using Eq. (27) and denoting the self-energy term $\Sigma_\eta(t,t')=\Sigma^{(0)}_\eta(t,t')K(t',t)$, we can write the
current correlation function more explicitly as
\bn
S_{\eta\eta'}(t,t')&=&{\frac {e^2} \hbar}\delta_{\eta\eta'}\left [G_c(t,t')\Sigma_\eta(t',t)
+\Sigma_\eta(t,t')G_c(t',t)\right ]
\nonumber\\
&&+{\frac {e^2} \hbar}\int dt_1 dt_2 \left [-G_c(t,t_1)\Sigma_{\eta'}(t_1,t')G_c(t',t_2)\Sigma_\eta(t_2,t)
-\Sigma_\eta(t,t_1)G_c(t_1,t')\Sigma_{\eta'}(t',t_2)G_c(t_2,t)\right.\nonumber\\
&&\left.+G_c(t,t')\Sigma_{\eta'}(t',t_1)G_c(t_1,t_2)\Sigma_\eta(t_2,t)
+\Sigma_\eta(t,t_1)G_c(t_1,t_2)\Sigma_{\eta'}(t_2,t')G_c(t',t) \right ]\;.
\en
Among the various current correlation functions, the correlation function for the current noise is of particular interest, since the frequency-dependent noise spectrum of the current contains information about the intrinsic dynamics of this quantum dot system. In a steady state without external time-dependent potentials, the current fluctuations can be characterized by the nonsymmetrized noise spectra $S^>_{\eta\eta'}(\omega)$ and $S^<_{\eta\eta'}(\omega)$, which are given by the Fourier transforms of the correlation functions of the current operators: $S^>_{\eta\eta'}(t,t')\equiv\langle \delta I_\eta(t)\delta
I_{\eta'}(t')\rangle$ and $S^<_{\eta\eta'}(t,t')\equiv\langle \delta I_{\eta'}(t')\delta I_{\eta}(t)\rangle$, respectively. Therefore, we have
\bq
S^>_{\eta\eta'}(\omega)=\int dt e^{i\omega(t-t')}\langle \delta I_\eta(t)\delta
I_{\eta'}(t')\rangle\;,
\eq
and the symmetry relation $S^<_{\eta\eta'}(\omega)=S^>_{\eta'\eta}(-\omega)$.
It should be noticed that the positive-frequency part of $S^>_{\eta\eta'}(\omega)$ contains the information about the absorption spectrum of this system, while the negative-frequency part corresponds to the emission spectrum. In many theoretical works, the symmetrized correlation function for the current noise, $\tilde S_{\eta\eta'}(t,t') \equiv S^{>}_{\eta\eta'}(t,t')+S^{<}_{\eta\eta'}(t,t')$, is investigated. The symmetrized noise power is given by the sum of the two nonsymmetrized noise spectra: $\tilde S_{\eta\eta'}(\omega)=S^{>}_{\eta\eta'}(\omega)+S^{<}_{\eta\eta'}(\omega)$.
Recent experiments based on on-chip quantum detectors \cite{Billangeon} have shown that the nonsymmetrized noise spectra, i.e., the absorption and emission parts of the noise, are easier to measure; hence we concentrate our investigation on the nonsymmetrized noise spectra of this system in this work.
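We also recall that in thermal equilibrium the two spectra are tied by the detailed-balance (Kubo-Martin-Schwinger) relation, which in the present conventions reads
\bq
S^<_{\eta\eta'}(\omega)=e^{-\omega/k_B T}\,S^>_{\eta\eta'}(\omega)\;,
\eq
so that at zero temperature the emission part vanishes and only the absorption noise at positive frequencies survives, as will be seen in the equilibrium spectra of Sec. V.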
The $S^>_{\eta\eta'}(t,t')$ is obtained straightforwardly from Eq. (34) by using Langreth's analytical continuation rules.\cite{Langreth} In the absence of an external ac potential, we can transform it to frequency space and express it explicitly in terms of the Green's functions of the molecular quantum dot as
\bn
S^>_{\eta\eta'}(\omega)&=&{\frac {e^2} \hbar}\int {\frac {d\omega_1} {2\pi}}\bigg \{ \delta_{\eta\eta'}
\left [ G^>_c(\omega_1+\omega)\bar\Sigma^<_\eta(\omega_1)+\bar\Sigma^>_\eta(\omega_1+\omega)G^<_c(\omega_1)\right ]
- [G_c\bar\Sigma_{\eta'}]^>(\omega_1+\omega)[G_c\bar\Sigma_\eta]^<(\omega_1)\nonumber\\
& &-[\bar\Sigma_\eta G_c]^>(\omega_1+\omega)[\bar\Sigma_{\eta'}G_c]^<(\omega_1)+ G^>_c(\omega_1+\omega)[\bar\Sigma_{\eta'}G_c\bar\Sigma_\eta]^<(\omega_1)+[\bar\Sigma_\eta G_c\bar\Sigma_{\eta'}]^>(\omega_1+\omega)G_c^<(\omega_1) \bigg \}\;,
\en
where the self-energy term $\bar\Sigma_\eta(\omega)$ is the Fourier transform of $\bar\Sigma_\eta(t,t')=\bar\Sigma_\eta^{(0)}(t,t')K(t',t)$. It should be pointed out that the notation used here for the various products of GFs and self-energy terms is the same as in Ref. \onlinecite{Galperin2006b}, but the above current noise formula is different, since the self-energy term $\bar\Sigma_\eta(t,t')$ here is dressed by the phonon correlation function $K$, rather than being the bare one induced solely by the hybridization between the leads and the dot.
\end{widetext}
The ac response properties of the SMJ can be probed by measuring the change of the current in the lead $\eta$ in response to an applied
ac potential in the lead $\eta'$, which can be characterized by the ac conductance response function
\bq
G_{\eta\eta'}(t,t')=-i\theta(t-t') \langle [I_\eta(t),e N_{\eta'}(t')]\rangle\;.
\eq
Differentiating with respect to the time variable $t'$ gives
\bq
\partial_{t'}G_{\eta\eta'}(t,t')=i\delta(t-t')C_{\eta\eta'} -\chi_{I_\eta I_{\eta'}}(t,t')\;,
\eq
where the current-current response function
\bq
\chi_{I_\eta I_{\eta'}}(t,t')=-i\theta(t-t')\langle [I_\eta(t), I_{\eta'}(t')] \rangle\;,
\eq
and the constant $C_{\eta\eta'}= e\langle [I_\eta(t), N_{\eta'}(t)]\rangle$.
In the frequency space Eq. (38) can be written as
\bq
G_{\eta\eta'}(\omega)={\frac {1} {i\omega}} \left [ C_{\eta\eta'}-\int^\infty_{-\infty} {\frac {d\omega_1} {2\pi}}
{\frac {[S^>_{\eta\eta'}(\omega_1)-S^<_{\eta\eta'}(\omega_1)]}{\omega-\omega_1+i 0^+} }\right ]\;.
\eq
Here the real constant $C_{\eta\eta'}$ can be calculated as
\bn
C_{\eta\eta'}&=& -i{\frac {e^2}{\hbar}}\delta_{\eta\eta'} \int \frac {d \omega_1} {2\pi} \bigg \{ \bar\Sigma^<_{\eta}(\omega_1)[G^r_c(\omega_1)+G^a_c(\omega_1)]
\nonumber\\
&& +G^<_c(\omega_1)[ \bar\Sigma^r_{\eta}(\omega_1)+ \bar\Sigma^a_{\eta}(\omega_1)]\bigg \}\;.
\en
Therefore, the real part of the ac conductance is given by
\bq
Re G_{\eta\eta'}(\omega)=\frac {1} {2\omega}[S^>_{\eta\eta'}(\omega)-S^<_{\eta\eta'}(\omega)]\label{FTR}\;.
\eq
The above equation corresponds exactly to the out-of-equilibrium fluctuation-dissipation theorem given in Ref. \onlinecite{Safi}.
Hence, we can obtain the real part of ac conductance directly from the nonsymmetrized noise spectra.
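As a simple consistency check (an equilibrium exercise combining Eq. (42) with the detailed-balance relation quoted above), one obtains
\bq
\tilde S_{\eta\eta'}(\omega)=2\omega\,{\rm Re}\, G_{\eta\eta'}(\omega)\coth\left(\frac{\omega}{2k_B T}\right)\;,
\eq
whose zero-frequency limit reproduces the Johnson-Nyquist formula $\tilde S_{\eta\eta'}(0)=4k_BT\,{\rm Re}\,G_{\eta\eta'}(0)$ quoted in Sec. V.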
\section {results and discussion}
Now we carry out numerical calculations of the current and the finite-frequency noise spectra through the SMJ by using the analytical expressions presented in the previous sections. For simplicity, we consider a system with symmetric tunneling couplings to the leads, $\Gamma_L=\Gamma_R=0.1\omega_0$, $\Gamma=(\Gamma_L+\Gamma_R)/2$, and assume that the bias voltage is applied symmetrically, $\mu_{L/R}=E_F\pm eV/2$. Therefore, we need only consider the positive bias case $eV \geq 0$. In the following calculations, we
take the phonon frequency $\omega_0=1$ as the unit of energy and the Fermi levels of the leads at equilibrium, $\mu_L=\mu_R=E_F=0$, as the reference of energy.
\subsection{equilibrated and undamped phonon case}
For a better understanding of the phonon effects on electron transport in this molecular junction, we first present results obtained by assuming that the phonons remain in equilibrium and by using the bare phonon GFs. This describes systems with extremely strong energy dissipation of the vibrational mode to a thermal bath, e.g., a substrate or a back gate. The oscillator then relaxes to its equilibrium state very quickly and can be described by the equilibrium Bose
distribution $n_B=(e^{\omega_0/T}-1)^{-1}$ at temperature $T$. The phonon shift-operator GF $K(t,t')$ is then replaced by its equilibrium correlation function,\cite{Dong2013}
\bq
K(t,t')= \left (
\begin{array}{cc}
e^{-\phi(-|\tau|)} & e^{-\phi(\tau)} \\
e^{-\phi(-\tau)} & e^{-\phi(|\tau|)} \\
\end{array}
\right ),
\eq
where $\phi(\tau)$ is defined as ($\tau=t-t'$)
\bq
\phi(\tau) = g^2 \left [ n_B(1-e^{-i\omega_0 \tau}) + (n_B+1) (1-e^{i\omega_0 \tau}) \right ].
\eq
It is noted that in this approximation the electron can emit or absorb phonons during the tunneling processes, but the back-action of the electron tunneling on the phonon distribution is neglected, except for the shift of the equilibrium position of the quantum oscillator.
\begin{figure}[htb]
\includegraphics[height=2.8in,width=\columnwidth]{figure1}
\caption{(Colour online) The current $I$ vs the applied bias voltage $V$ at zero temperature. The dot levels are (a) $\tilde\epsilon_d=0.0$, (b) $\tilde\epsilon_d=1.5\omega_0$, respectively. The current curves for different values of EPI strength: $g=0.0$ (red line), $0.5$
(green line), $1.0$ (blue line), 1.5 (cyan line), $2.0$ (magenta line), are plotted. The other parameters used for the calculation are taken as: $\Gamma_L=\Gamma_R=0.1\omega_0$, and the chemical potential $\mu_L=-\mu_R=eV/2$.}
\label{fig1}
\end{figure}
We plot the current-voltage characteristics for different values of the electron-phonon coupling strength in Fig. 1. A prominent feature shown in Figs. 1(a) and (b) is the Franck-Condon blockade, manifested by the drastic suppression of the current with increasing electron-phonon coupling constant. For an SMJ with strong electron-phonon coupling, the electron transport is thus effectively blocked at low bias voltage, as predicted by Koch {\it et al.}\cite{Koch} based on the rate equation approach and demonstrated unambiguously in a recent experiment on suspended carbon nanotube quantum dots. \cite{Leturcq} Another interesting feature is the plateau structure of the $I$-$V$ curves in the nonlinear transport regime, which is attributed to the inelastic tunneling current: when the bias voltage exceeds the phonon energy, the inelastic tunneling channel with excitation of the vibrational mode opens, leading to upward steps of the total current. Comparing Figs. 1(a) and (b), one also observes that the current is strongly affected by the position of the energy level participating in the electron tunneling processes. For the energy level tuned away from the Fermi level, as shown in Fig. 1(b), the magnitude of the current is significantly reduced in the low bias voltage regime.
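The trend in Fig. 1 can be traced back to the zero-temperature Franck-Condon weights $w_l=e^{-g^2}g^{2l}/l!$ obtained from the expansion of $K^>$ in Sec. III. The following minimal numerical sketch (our illustration only, not part of the actual self-consistent calculation) tabulates these weights:
\begin{verbatim}
# Minimal sketch (illustration only): zero-temperature Franck-Condon
# weights w_l = exp(-g^2) g^(2l) / l!  The elastic (l = 0) channel is
# exponentially suppressed at strong coupling g, which is the origin
# of the Franck-Condon blockade seen in Fig. 1.
import math

def fc_weight(g, l):
    # weight of a tunneling event accompanied by the emission of l phonons
    return math.exp(-g**2) * g**(2 * l) / math.factorial(l)

for g in (0.5, 1.0, 1.5, 2.0):
    w = [fc_weight(g, l) for l in range(5)]
    print("g = %.1f : " % g + "  ".join("w_%d = %.3f" % (l, x)
                                        for l, x in enumerate(w)))
# For g = 2.0 the elastic weight is w_0 = exp(-4) ~ 0.018, so the
# low-bias (purely elastic) current is strongly suppressed.
\end{verbatim}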
Next, we study the current fluctuation properties of this SMJ. In the equilibrium case without bias voltage, the averaged tunneling current between the left and right leads is zero. However, current fluctuations still exist in the system and can be detected by measuring the noise spectra, which contain information about the intrinsic dynamics of this system. In Fig. 2 the nonsymmetrized noise spectra of the left lead are illustrated. Since the noise power in the zero-frequency limit corresponds to the well-known Johnson-Nyquist noise, $S=4k_BTG$, where $G$ is the linear conductance and $T$ is the temperature, the noise power at zero frequency should vanish in the zero-temperature case, as indicated in the figure. It is also noted that, at zero temperature and in equilibrium, the nonsymmetrized noise spectra have nonzero values only in the positive-frequency parts, which correspond to the
absorption noise spectra. At finite frequency, pronounced
phonon effects on the noise power density are observed for systems with strong electron-phonon coupling, as indicated by the rapid increase of the noise power density at frequencies equal to integer multiples of the phonon frequency $\omega_0$. In the high-frequency limit, the noise power density approaches the constant value $e^2\Gamma/\hbar$ even for systems with different coupling constants $g$. In the equilibrium case and for a system with symmetric tunneling couplings, the noise spectrum of the right lead is equal to that of the left lead; therefore we consider only the noise spectra of the left lead here.
\begin{figure}[htb]
\includegraphics[height=2.8in,width=\columnwidth]{figure2}
\caption{(Colour online) The nonsymmetrized noise spectra for the left lead in the equilibrium case at zero temperature. The dot levels are (a) $\tilde\epsilon_d=0.0$, (b) $\tilde\epsilon_d=1.5\omega_0$, respectively. The noise spectra at different values of the EPI strength $g=0.0$(red line), $0.5$(green line), $1.0$(blue line), 1.5(cyan line), $2.0$(magenta line) are plotted. The other parameters used for calculation are taken as: $\Gamma_L=\Gamma_R=0.1\omega_0$, and the chemical potential $\mu_L=\mu_R=0.0$.}
\label{fig2}
\end{figure}
\begin{figure}[htb]
\includegraphics[height=2.8in,width=\columnwidth]{figure3}
\caption{(Colour online) The nonsymmetrized noise spectra for the left and the right leads in the nonequilibrium case at different bias voltages. The energy level $\tilde{\epsilon}_d=1.5\omega_0$ and the electron-phonon coupling constant $g=1.0$. The remaining parameters are the same as those in Fig.~\ref{fig1}.}
\label{fig3}
\end{figure}
For a system at finite bias voltage, the noise powers in the left and right leads differ. In Figs. 3(a) and (b) the nonsymmetrized noise spectra for the left and right leads at different bias voltages are plotted, respectively. Here we assume the renormalized dot level $\tilde\epsilon_d=1.5\omega_0$. In the low bias voltage region, the absorption noise power densities at positive frequency shown in Fig. 3(a) exhibit consecutive steps at the frequencies which enable new inelastic electron tunneling channels. The jump of the noise power density at consecutive steps decreases in magnitude, which can be related to the Franck-Condon factors. We note that with increasing bias voltage the noise power in the left lead
shifts to the low-frequency part, which can be interpreted as a result of the increase of the chemical potential $\mu_L$ in the left lead. When the chemical potential $\mu_L$ is above the dot level $\tilde\epsilon_d$, nonzero noise power appears in the negative frequency regime, which can be attributed to photon emission processes during the electron tunneling from the left lead to the molecular dot. More interesting features of the noise spectra are observed on the right lead side in Fig. 3(b) (the low chemical potential side). With increasing bias voltage, the absorption noise power density at first merely shifts to the high-frequency part. The reason is that in the low bias voltage regime the bias potential is not large enough to induce significant electron tunneling through the SMJ, because the dot level $\tilde \epsilon_d$ is well above the Fermi levels. As soon as there is bias-induced electron tunneling ($\Delta\mu\ge 3\omega_0$), significant noise power is observed in the negative-frequency and low-frequency parts, as shown in Fig. 3(b). We believe that the negative-frequency part of the noise power results from processes of energy dissipation of the tunneling electron to the drain electrode. It is interesting to notice that evident signatures of phonon effects can be found in this negative-frequency part of the noise spectra. A dip structure of the noise power at zero frequency is also revealed. Therefore, one may expect that measuring the noise power on the drain side can give rich information about the intrinsic dynamics of this system.
\begin{widetext}
\begin{SCfigure}
\includegraphics[height=3in,width=5in]{figure4}
\caption{(Colour online) The real part of the ac conductance of the left lead in the equilibrium case. (a)
$\tilde{\epsilon}_d=0$ and (b) $\tilde{\epsilon}_d=1.5\omega_0$. The ac conductance in the right lead is the same as
that of the left lead.}
\label{fig4}
\end{SCfigure}
\end{widetext}
\begin{figure}[htb]
\includegraphics[height=2.8in,width=\columnwidth]{figure5}
\caption{(Colour online) The real part of ac conductance in the out of equilibrium case at different bias voltages. (a) and (b) correspond to the left and right leads, respectively. The renormalized dot level $\tilde \epsilon_d=1.5\omega_0$ and the EPI strength $g=1.0$. }
\label{fig5}
\end{figure}
The ac conductance is calculated from the nonsymmetrized noise spectra by using the nonequilibrium fluctuation-dissipation relation, Eq. (42).
In Fig. 4 we show the ac conductance in the linear response regime for the system in the equilibrium case at different values of the EPI strength $g$. When the dot level $\tilde\epsilon_d$ is located exactly at the Fermi level of the leads, a peak of the ac conductance is found in the low-frequency region, as shown in Fig. 4(a), and a unitary value of the conductance is reached in the zero-frequency limit. The frequency dependence of the ac conductance shows pronounced phonon sidebands with sudden jumps of the conductance, which can be attributed to the phonon-induced logarithmic singularities in the real part of the electronic self-energy and the step structure in the imaginary part. \cite{Engelsberg,Dong2013} The ac conductance decreases as $G_{LL}(\omega)\sim 1/\omega$ in the high-frequency region. For the system with the dot level located above the Fermi level, as shown in Fig. 4(b), the maximum of the ac conductance occurs at a finite frequency, and the conductance in the low-frequency regime is greatly suppressed. With increasing coupling constant $g$, the magnitude of the ac conductance is reduced overall, whereas the phonon effects give rise to pronounced oscillating behavior in the line shape of the ac conductance. Therefore, measuring the ac conductance of such SMJ systems provides an effective way to study phonon effects in experiments.
For the system at finite bias voltage, the linear responses to an ac potential in the left and right leads are plotted in Figs. 5(a) and (b), respectively, and they show distinct features. With increasing bias voltage, the low-frequency part of the ac conductance in the left lead increases significantly, and as the Fermi level in the left lead approaches the dot level $\tilde\epsilon_d$, a Drude peak of the conductance at zero frequency is found, similar to a metallic state. In contrast, the spectral weight of the ac conductance in the right lead is shifted to higher frequency as the bias voltage increases. The low-frequency ac conductance is small at low bias voltage and becomes significant when the phonons take part in the electron tunneling processes at large bias. The phonon effect is manifested in the oscillating behavior of the ac conductances in both the left and right leads.
\subsection{unequilibrated phonon with self-energy }
For a system whose vibrational degrees of freedom are well isolated from the environment, the energy dissipation of the phonons occurs mainly through their coupling to the leads and the tunneling electrons; therefore, the lifetime of the phonons can be long enough for them to be driven to an unequilibrated state. In this case, the equilibrium bare phonon approximation may become invalid as soon as the SMJ is biased by a finite voltage and driven out of equilibrium. The phonons in the molecular junction can be significantly heated, and they may greatly influence the current-voltage characteristics and the noise spectra of this system.
\begin{figure}[htb]
\includegraphics[height=3.5in,width=\columnwidth]{figure6}
\caption{(Colour online) (a) and (b) show the current $I$ and the phonon occupation number $n_{ph}$ as functions of the bias voltage, respectively, for the EPI strengths $g=0.5$ and $1.0$. The black solid and green dashed lines in panel (a) are the current curves in the bare phonon case. (c) and (d) give the real and imaginary parts of the phonon GF. The dressed phonon GFs at bias voltages $\Delta\mu=0.0$ (blue dashed line), $2.0$ (green dash-dotted line), and $4.0$ (wine short-dashed line), in units of $\omega_0$, are plotted and compared with the bare phonon GF (red solid line). Here the dot level $\tilde \epsilon_d=1.5\omega_0$ and the coupling constant $g=1.0$. }
\label{fig6}
\end{figure}
By solving the self-consistent equations, we obtain the current $I$ and the phonon occupation number $n_{ph}$ vs. the bias voltage, as shown in Figs. 6(a) and (b), respectively. In Fig. 6(a) the currents obtained by taking into account the phonon damping effect are also compared with those of the bare phonon approximation. One sees that in the weak electron-phonon coupling case ($g=0.5$), the self-consistent result for the current shows only small deviations from the bare phonon result. Upon increasing the coupling constant to $g=1.0$, a significant suppression of the tunneling current by the unequilibrated phonons is observed in the high bias voltage region, and the current steps induced by inelastic tunneling with phonon excitations are also smeared. The occupation number $n_{ph}$ can be obtained from the lesser GF of the phonon: $n_{ph}=[i\int \frac {d\omega} {2\pi} D^<_{P_a}(\omega)-1]/2$. In the bare phonon approximation and at zero temperature, $n_{ph}$ always remains zero for all bias voltages. After taking into account the phonon self-energy, the vibrational mode is driven out of equilibrium and the occupation number $n_{ph}$ increases with the bias voltage, as shown in Fig. 6(b). For strong electron-phonon coupling, the phonon number $n_{ph}$ increases more drastically than in the weak coupling case. Therefore, the vibrational mode of the SMJ is significantly heated by the bias applied between the source and drain.
In order to describe the nonequilibrium effect on the vibrational mode quantitatively, we plot the real and imaginary parts of the phonon GF at different bias voltages
in Figs. 6(c) and (d), respectively. It is noted that the real part of the phonon GF is an even function of the frequency and the imaginary part an odd one; therefore only the positive-frequency part is shown here. We take the coupling constant $g=1.0$. There are two distinct features of the self-consistent results for the phonon GF compared with the bare phonon case: (i) the phonon frequency is softened towards the low-frequency region compared with the bare phonon frequency $\omega_0$, indicating a strong renormalization effect on the phonon GF; (ii) with increasing chemical potential difference $\Delta\mu$ between the leads, the peaks of the imaginary part of the GF are broadened as a result of the increased damping of the vibrational mode in the out-of-equilibrium case.
\begin{figure}[htb]
\includegraphics[height=2.8in,width=\columnwidth]{figure7}
\caption{(Colour online) The zero frequency shot noise $S_0$ and the corresponding Fano factor $F$ vs. the bias voltage $V$. Here the energy level $\tilde{\epsilon}_d=1.5\omega_0$. }
\label{fig7}
\end{figure}
An important quantity characterizing the fluctuations of the averaged current in this SMJ is the zero-frequency shot noise $S_0$, which can be calculated from the definition $S_0=[{\tilde S}_{LL}(0)+{\tilde S}_{RR}(0)-{\tilde S}_{LR}(0)-{\tilde S}_{RL}(0)]/4$, where ${\tilde S}_{\eta\eta'}(0)$ is the symmetrized noise power density at zero frequency. In Fig. 7(a), we plot the zero-frequency shot noise $S_0$ as a function of the bias voltage $V$ at different values of the coupling constant $g$. It is observed that when the chemical potential of the left lead moves above the dot level ($eV\approx 2\tilde{\epsilon}_d=3.0\omega_0$), the shot noise increases rapidly, as does the tunneling current (shown in Fig. 6(a)).
When the chemical potential difference between the leads reaches the threshold value for exciting the vibrational mode in the molecular junction, phonon effects on the shot noise are observed in Fig. 7(a). It is interesting to find that the shot noise contributed by the inelastic tunneling processes is negative, resulting in a sudden suppression of the shot noise at the threshold values of the voltage. This kind of negative contribution to the shot noise has also been obtained in our previous calculation based on the bare phonon approximation. \cite{Dong2013} In a recent experiment on Au nanowires \cite{Kumar} with weak electron-phonon coupling, similar negative contributions to the inelastic shot noise were observed; they were ascribed to coherent two-electron tunneling processes mediated by phonon emission and the Pauli exclusion principle. \cite{Kumar,Schmidt} It should be noted, however, that the systems considered in this paper are molecular junctions with $\Gamma\ll \omega_0$, instead of the $\Gamma\gg \omega_0$ case of the experiment on Au nanowires. In Fig. 7(a) we find that the negative corrections to the noise are most evident at intermediate values of the electron-phonon interaction and are smeared out in the strong coupling regime. It is also noted that the positions of the sudden suppressions of the shot noise are slightly shifted for systems with different EPI strengths $g$, because the phonon frequencies renormalized by the EPI are different. Another useful quantity to characterize the strength of the noise is the so-called Fano factor $F$, defined as the ratio of the shot noise to the Poissonian value, $F=S_0/2eI$. In Fig. 7(b) the Fano factor vs. the bias voltage is plotted. The large Fano factor in the low bias voltage region is unphysical and due to the inaccuracy of the numerical calculation, since the magnitude of the current is very small in this region. In general, therefore, the Fano factor is less than one. One interesting feature of this electron-phonon coupled system is that the Fano factor $F$ exhibits rich oscillating structures as a function of the bias voltage. Several downward jumps of the Fano factor, in conjunction with upward steps of the current, are observed due to the opening of new inelastic tunneling channels. In the large bias voltage limit, the Fano factor approaches 1/2, in accordance with the Fano factor of a resonant tunneling model for a symmetric tunneling junction.
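The limiting value $F=1/2$ can be compared with the standard result for a noninteracting resonant level at large bias (quoted here for orientation),
\bq
F=\frac{\Gamma_L^2+\Gamma_R^2}{(\Gamma_L+\Gamma_R)^2}\;,
\eq
which indeed equals $1/2$ for the symmetric couplings $\Gamma_L=\Gamma_R$ used in our calculation.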
\begin{figure}[htb]
\includegraphics[height=2.8in,width=\columnwidth]{figure8}
\caption{(Colour online) The nonsymmetrized noise spectra for the left and the right leads in the nonequilibrium case at different bias voltages. Here, the self-energy of phonon GF is taken into account by a self-consistent calculation. The energy level $\tilde{\epsilon}_d=1.5\omega_0$ and the EPI strength $g=1.0$. }
\label{fig8}
\end{figure}
The current noise spectra of the left and right leads at different bias voltages, obtained from the self-consistent calculations, are given in Figs. 8(a) and (b), respectively. The overall features of these nonsymmetrized noise spectra are similar to the results shown in Fig. 3, where the equilibrated bare phonon
approximation was assumed. One sees that in the low bias voltage case, only absorption noise spectra in the positive-frequency region are observed. As soon as the Fermi level in the left lead is aligned with the dot level, the tunneling current increases significantly, and the emission noise spectra in the negative-frequency region become evident. Some signatures in the noise power density due to inelastic electron tunneling with phonon emission can still be observed in this calculation with the phonon self-energy taken into account. However, the inelastic tunneling signatures in the noise spectra are not as distinct as in the undamped phonon case shown in Fig. 3.
\section {conclusions}
The finite-frequency current noise and the ac conductance of an SMJ have been investigated in this paper based on a self-consistent
perturbation theory. We have shown that the self-energy and the damping of the vibrational degree of freedom are determined by the current fluctuations in the system; therefore, these physical quantities have been obtained concurrently and in a self-consistent way. The functional derivative with respect to a time-dependent counting field defined on the Keldysh closed-time contour provides a convenient tool for calculating various current-current correlation functions, in the same spirit as the counting-field approach to the calculation of the zero-frequency shot noise and other high-order cumulants of the current in charge transport through junctions. We have investigated two different situations: First, assuming the bare phonon GF and the equilibrium phonon approximation, it is shown that both the absorption noise spectra and the ac conductance in the linear response regime exhibit pronounced features of inelastic electron tunneling; in the out-of-equilibrium case, the emission noise spectra on the drain side show evident characteristics of phonon emission. Second, the case of unequilibrated phonons, with the phonon self-energies taken into account, is considered. There is significant softening of the phonon vibration frequency due to the interaction effect, and the phonon damping increases with the bias voltage. The phonon occupation number becomes large in the presence of a large bias voltage, indicating the heating of the vibrational mode in this molecular junction. The absorption and emission noise spectra also exhibit some features of the inelastic electron tunneling effects in this case. Recent experiments on molecular junctions made of single-walled carbon nanotubes have shown strong electron-vibron coupling and a high quality factor of the vibrational mode. \cite{Steele,Lassagne} We expect that the results obtained in this paper on the inelastic features of the finite-frequency noise spectra and the ac conductance can be probed by future experiments on such SMJs with on-chip detection techniques.
\begin{acknowledgments}
This work was supported by Projects of the National Basic Research Program of China (973 Program) under Grant No. 2011CB925603,
the National Natural Science Foundation of China (Grant Nos.91121021 and 11074166), and Shanghai Natural Science Foundation (Grant No. 12ZR1413300).
\end{acknowledgments}
\section*{Abstract}
{\bf
We enlarge the chiral model, the so-called extended Linear Sigma Model (eLSM), which is based on a global
$U(3)_r \times U(3)_l$ chiral symmetry, by including the low-lying hybrid nonet with exotic quantum numbers $J^{PC}=1^{-+}$ and the nonet of their chiral partners with $J^{PC}=1^{+-}$. We use the assignment $\pi_1^{hyb}= \pi_1(1600)$ as input to determine the unknown parameters. Then, we compute the lightest vector and pseudovector hybrid masses, which could guide ongoing and upcoming experiments in the search for hybrids.
}
\section{Introduction}
\label{sec:intro}
The investigation of the properties of exotic quarkonia, the so-called hybrids, is extremely interesting and an important step toward the understanding of nontraditional hadronic states, i.e., those structures beyond the normal mesons and baryons, which are allowed in the framework of quantum chromodynamics (QCD) \cite{Gross, Politzer, Wilson} and the quark model \cite{Gell-Mann, Zweig}. Hybrids are colour singlets and consist of a quark-antiquark pair and a gluonic degree of freedom. In Lattice QCD, a rich spectrum of hybrid states is predicted below 5 GeV \cite{Dudek, Dudek2, Dudek3}, but there are still no predominantly hybrid states assigned to any of the
mesons listed in the PDG \cite{pdg}. Quite interestingly, recent results by
COMPASS concerning the confirmation of the state $\pi_{1}(1600)$ with exotic
quantum numbers $1^{-+}$ have led to a revival of interest in this topic \cite{COMPASS}.
In this work, we investigate vector hybrids by enlarging the extended Linear
Sigma Model (eLSM) \cite{dick}. In particular, we make predictions
for a nonet of exotic hybrids with quantum numbers $J^{PC}=1^{-+}$. Moreover,
we also make predictions for the nonet of their chiral partners, with quantum
numbers $J^{PC}=1^{+-}.$
The eLSM has been shown to describe various hadronic masses and decays
below 1.8 GeV, as the fit in Ref. \cite{dick} confirms; hence it represents a
solid basis to investigate states that go beyond the simple $\bar{q}q$
picture. In the past, various non-conventional mesons were studied in the
eLSM. Namely, the scalar glueball is automatically present in the eLSM as a
dilaton and is coupled to light mesons: it represents an important element of
the model due to the requirement of dilatation invariance (as well as its
anomalous breaking)\cite{staninew}. The eLSM has been used to study the pseudoscalar
glueball \cite{walaaG1, walaaG11, walaaG4, walaaG5, walaaG6}, the first excited pseudoscalar glueball \cite{walaaG2, walaaG3}, and hybrids \cite{walaah}. Moreover, the connection and compatibility with chiral perturbation theory
\cite{Divotgey}, as well as the extension to charmed mesons
\cite{walaac, walaac1, walaac1a, walaac1b, walaac1b1, walaac2, walaac2a, walaac2b} and the inclusion of baryons in the so-called mirror assignment
\cite{gallas, olbrich} were performed.
In the present study, we extend the eLSM to hybrids by constructing the chiral multiplets for the hybrid nonets with $J^{PC}=1^{-+}$ and $J^{PC}=1^{+-}$ and determining the interaction terms which satisfy chiral symmetry. Spontaneous symmetry breaking is then responsible for the mass differences between the $1^{+-}$ crypto-exotic hybrids and the lower-lying $1^{-+}$ states. We work out the masses of the vector and pseudovector hybrid mesons.
\section{Hybrid mesons in the chiral model}
In this section, we enlarge the eLSM Lagrangian by including hybrid mesons in the case of $N_f=3$
\begin{equation}
\mathcal{L}_{eLSM}^{\text{with hybrids}}=\mathcal{L}_{eLSM}+\mathcal{L}%
_{eLSM}^{\text{ hybrid}}%
\end{equation}
where $\mathcal{L}_{eLSM}$ is the standard eLSM Lagrangian, which is constructed from chiral and
dilatation symmetries as well as their explicit and spontaneous breaking
(for more details see Ref. \cite{dick}).\\
We introduce the hybrids in the eLSM as:%
\begin{align}
\mathcal{L}_{eLSM}^{\text{ hybrid}}&=\mathcal{L}%
_{eLSM}^{\text{ hybrid-quadratic}}+\mathcal{L}_{eLSM}^{\text{ hybrid-linear}}\,\nonumber\\
&=\mathcal{L}%
_{eLSM}^{\text{ hybrid-kin}}+\mathcal{L}_{eLSM}^{\text{ hybrid-mass}}+\mathcal{L}_{eLSM}^{\text{ hybrid-linear}}\,
\end{align}
where the $\mathcal{L}%
_{eLSM}^{\text{ hybrid-kin}}$ and $\mathcal{L}_{eLSM}^{\text{ hybrid-linear}}$ terms are described in detail in Ref. \cite{walaah}. The masses of the hybrids can be extracted from the following mass term:
\begin{align}
\mathcal{L}_{eLSM}^{\text{ hybrid-mass}}=&\,m_{1}^{hyb,2}\frac{G^{2}%
}{G_{0}^{2}}\mathrm{Tr}\left( L_{\mu}^{hyb,2}+R_{\mu}^{hyb,2}\right)
+\mathrm{Tr}\left( \Delta^{hyb}\left( L_{\mu}^{hyb,2}+R_{\mu}^{hyb,2}%
\right) \right) \nonumber\\
& +\, \frac{h_{1}^{hyb}}{2}\mathrm{Tr}(\Phi^{\dagger}\Phi)\mathrm{Tr}\left(
L_{\mu}^{hyb,2}+R_{\mu}^{hyb,2}\right) +h_{2}^{hyb}\mathrm{Tr}[\left\vert
L_{\mu}^{hyb}\Phi\right\vert ^{2}+\left\vert \Phi R_{\mu}^{hyb}\right\vert
^{2}]\nonumber \\
&+\, 2h_{3}^{hyb}\mathrm{Tr}(L_{\mu}^{hyb}\Phi R^{hyb,\mu}\Phi^{\dagger
})\text{ ,}\label{Lag}%
\end{align}
which satisfies both chiral and dilatation invariance. Here $G$ is the dilaton field and $G_0$ its vacuum expectation value. The multiplet of scalar and pseudoscalar mesons, $\Phi$, is defined as
\begin{equation}
\Phi=S+i P=\frac{1}{\sqrt{2}}\left(
\begin{array}
[c]{ccc}%
\frac{\sigma_{N}+a_{0}^{0}}{\sqrt{2}} & a_{0}^{+} & K_{S}^{+}\\
a_{0}^{-} & \frac{\sigma_{N}-a_{0}^{0}}{\sqrt{2}} & K_{S}^{0}\\
K_{S}^{-} & \bar{K}_{S}^{0} & \sigma_{S}%
\end{array}
\right) +i \frac{1}{\sqrt{2}}\left(
\begin{array}
[c]{ccc}%
\frac{\eta_{N}+\pi^{0}}{\sqrt{2}} & \pi^{+} & K^{+}\\
\pi^{-} & \frac{\eta_{N}-\pi^{0}}{\sqrt{2}} & K^{0}\\
K^{-} & \bar{K}^{0} & \eta_{S}%
\end{array}
\right) \text{ ,}%
\end{equation}
and transforms under chiral transformations $U_{L}(3)\times
U_{R}(3)$ as $\Phi\rightarrow U_{L}\Phi U_{R}^{\dagger}$, where $U_{L}$ and
$U_{R}$ are $U(3)$ matrices; under parity, $\Phi\rightarrow\Phi^{\dagger}$, and
under charge conjugation, $\Phi\rightarrow\Phi^{t}.$ \\
(i) The scalar fields are $\{a_{0}%
(1450),K_{0}^{\ast}(1430),\sigma_{N},\sigma_{S}\}$ with quantum number $J^{PC}=0^{++}$ \cite{pdg}, and lie above $1$ GeV \cite{dick}, where the non-strange bare field $\sigma_{N}%
\equiv\left\vert \bar{u}u+\bar{d}d\right\rangle /\sqrt{2}$ corresponds
predominantly to the resonance \thinspace$f_{0}(1370)$ and the bare field
$\sigma_{S}\equiv\left\vert \bar{s}s\right\rangle $ predominantly to
$f_{0}(1500).$ Finally, in the eLSM the state $f_{0}(1710)$ is predominantly a
scalar glueball, see details in\ Ref. \cite{staninew}.
(ii) The pseudoscalar fields are $\{\pi$,
$K,\eta,\eta^{\prime}\}$ with quantum numbers $J^{PC}=0^{-+}$ \cite{pdg}, where $\eta$ and $\eta^{\prime}$ arise
via the mixing $\eta=\eta_{N}\cos\theta_{p}+\eta_{S}\sin\theta_{p},$
$\eta^{\prime}=-\eta_{N}\sin\theta_{p}+\eta_{S}\cos\theta_{p}$ with
$\theta_{p}\simeq-44.6^{\circ}$ \cite{dick}.\\
We now turn to the right-handed and left-handed combinations, $R_{\mu}^{hyb}$ and $L_{\mu}^{hyb}$, of the exotic hybrid states, which combine the vector hybrid fields $\Pi_{ij}^{hyb,\mu}$ with the pseudovector hybrid fields $B_{ij}^{hyb,\,\mu}$.\\
The vector hybrid nonet $\Pi_{ij}^{hyb,\mu}$ consists of vector currents with one additional gluon, carries the quantum numbers $J^{PC}=1^{-+}$, and is given by
\begin{equation}
\Pi_{ij}^{hyb,\mu}=\frac{1}{\sqrt{2}}\bar{q}_{j}G^{\mu\nu}\gamma_{\nu}%
q_{i}=\Pi^{hyb,\mu}=\frac{1}{\sqrt{2}}\left(
\begin{array}
[c]{ccc}%
\frac{\eta^{hyb}_{1,N}+\pi_{1}^{hyb,0}}{\sqrt{2}} & \pi_{1}^{hyb+} & K_{1}^{hyb+}\\
\pi_{1}^{hyb-} & \frac{\eta_{1,N}^{hyb}-\pi_{1}^{hyb,0}}{\sqrt{2}} & K_{1}^{hyb,0}\\
K_{1}^{hyb,-} & \bar{K}_{1}^{hyb,0} & \eta_{1,S}^{hyb}%
\end{array}
\right) ^{\mu}\;\text{, }
\end{equation}
where the gluonic field tensor is $G^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}-g_{QCD}[A^{\mu
},A^{\nu}]$, and $\Pi^{hyb,\mu}$ contains $\{\pi_1(1600), \, K_1(?),\, \eta_1(?),\,\eta_1(?)\}$, of which only the isovector member presently corresponds to a physical resonance. The exotic hybrid field $\pi_{1}$ is assigned
to $\pi_{1}(1600)$ (the details of this assignment are given in Ref. \cite{JPAC}).
There are as yet no candidates for the other members of the nonet, but we estimate their masses in Sec. 3.\\
The pseudovector hybrid nonet $B_{ij}^{hyb,\mu}$, obtained by including the gluon field and carrying the quantum numbers $J^{PC}=1^{+-}$, is written as
\begin{equation}
B_{ij}^{hyb,\mu}=\frac{1}{\sqrt{2}}\bar{q}_{j}G^{\mu\nu}\gamma^{5}\gamma_{\nu
}q_{i}=B^{hyb,\mu}=\frac{1}{\sqrt{2}}\left(
\begin{array}
[c]{ccc}%
\frac{h_{1N,B}^{hyb}+b_{1}^{hyb,0}}{\sqrt{2}} & b_{1}^{hyb,+} & K_{1,B}%
^{hyb+}\\
b_{1}^{hyb,-} & \frac{h_{1N,B}^{hyb}-b_{1}^{hyb,0}}{\sqrt{2}} & K_{1,B}%
^{hyb0}\\
K_{1,B}^{hyb-} & \bar{K}_{1,B}^{hyb0} & h_{1S,B}^{hyb}%
\end{array}
\right) ^{\mu} \text{ .}%
\end{equation}
The nonet $B_{ij}^{hyb,\mu}$ does not yet have any experimental candidate, so all fields $\{b_1(?),\,K_{1,B}(?),\, h_1(?),\, h_1(?)\}$ are still unknown. In the lattice calculation of Ref. \cite{Dudek2}, an upper limit of about $2.4$ GeV is reported,
but that simulation still used a rather large pion mass. We estimate the
mass of the $b_{1}^{hyb}$ state, the chiral partner of $\pi_{1},$ to be about (or possibly somewhat larger than) $2$ GeV. For definiteness, we assign it to a hypothetical $b_{1}(2000?)$
state. The masses of the other members of the pseudovector crypto-exotic nonet follow as a consequence of this assumption.
One can obtain the right-handed and left-handed hybrid currents as
$$R^{hyb,\mu}=\Pi^{hyb,\mu}-B^{hyb,\mu}\ \text{ and }\ L^{hyb,\mu}=\Pi^{hyb,\mu}+B^{hyb,\mu}\,,$$
which transform as $R_{\mu}^{hyb}\rightarrow U_R R_{\mu}^{hyb} U_R^\dagger$ and $L_{\mu}^{hyb}\rightarrow U_L L_{\mu}^{hyb} U_L^\dagger$ under chiral transformations, as $R_{\mu}^{hyb}\rightarrow L^{hyb,\mu}$ and $L_{\mu}^{hyb}\rightarrow R^{hyb,\mu}$ under parity, and as $R_{\mu}^{hyb}\rightarrow L^{hyb,\mu,t}$ and $L_{\mu}^{hyb}\rightarrow R^{hyb,\mu,t}$ under charge conjugation.
See Ref. \cite{walaah} for more details and discussions.\\
\section{Masses of hybrids}
Masses of the hybrids can be calculated from expression (\ref{Lag}) by taking into account that the multiplet of scalar and pseudoscalar fields, $\Phi$, has a nonzero condensate (vacuum expectation value); spontaneous symmetry breaking is encoded in that condensate. Especially relevant is
the term proportional to $h_{3}^{hyb}$, which generates a mass difference between the $1^{-+}$
and $1^{+-}$ hybrids by shifting the masses of the latter upwards (see Ref. \cite{walaah}). Note that the second term of Eq. (\ref{Lag}) breaks flavor symmetry explicitly (a direct contribution
to the masses due to nonzero bare quark masses):
\begin{equation}
\Delta^{hyb}=diag\{\delta_{N}^{hyb},\delta_{N}^{hyb},\delta_{S}^{hyb}%
\}\text{.}%
\end{equation}
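To make the origin of the chiral-partner splitting explicit, one can evaluate the $h_{3}^{hyb}$ term of Eq. (\ref{Lag}) on the vacuum configuration $\Phi\rightarrow\Phi_{0}=\mathrm{diag}\{\phi_{N}/2,\,\phi_{N}/2,\,\phi_{S}/\sqrt{2}\}$, which follows from the matrix $\Phi$ above with $\sigma_{N}\rightarrow\phi_{N}$ and $\sigma_{S}\rightarrow\phi_{S}$. This is only a schematic check (overall normalization factors are suppressed): writing $L_{\mu}^{hyb}=\Pi_{\mu}^{hyb}+B_{\mu}^{hyb}$ and $R_{\mu}^{hyb}=\Pi_{\mu}^{hyb}-B_{\mu}^{hyb}$, the mixed $\Pi B$ terms cancel by the cyclicity of the trace because $\Phi_{0}$ is real and diagonal, so that
\begin{equation}
2h_{3}^{hyb}\mathrm{Tr}\left( L_{\mu}^{hyb}\Phi_{0}R^{hyb,\mu}\Phi_{0}\right) =2h_{3}^{hyb}\left[ \mathrm{Tr}\left( \Pi_{\mu}^{hyb}\Phi_{0}\Pi^{hyb,\mu}\Phi_{0}\right) -\mathrm{Tr}\left( B_{\mu}^{hyb}\Phi_{0}B^{hyb,\mu}\Phi_{0}\right) \right] \text{ .}
\end{equation}
The $1^{-+}$ and $1^{+-}$ nonets thus receive mass contributions of equal magnitude and opposite sign, proportional to $h_{3}^{hyb}$ and quadratic in the condensates $\phi_{N}$ and $\phi_{S}$; this is precisely the structure of the splitting relations given below.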
After a straightforward calculation, the (squared) masses of the $1^{-+}$
exotic hybrid mesons and of the crypto-exotic pseudovector hybrid states are obtained as in Ref. \cite{walaah}. From these, one finds the (exact) relations
\begin{align}
m_{b_{1}^{hyb}}^{2}-m_{\pi_{1}}^{2} & =-2h_{3}^{hyb}\phi_{N}^{2}%
\label{hcp1}\\
m_{K_{1,B}^{hyb}}^{2}-m_{K_{1}}^{2} & =-\sqrt{2}\phi_{N}\phi_{S}h_{3}%
^{hyb}\label{hcp2}\\
m_{h_{1S}^{hyb}}^{2}-m_{\eta_{1,S}}^{2} & =-h_{3}^{hyb}\phi_{S}^{2}
\label{hcp3}%
\end{align}
As seen in Eqs. (\ref{hcp1})-(\ref{hcp3}), the parameter $h_{3}^{hyb}$ is the only parameter responsible for the mass splitting of the hybrid chiral partners. After fixing all the parameters that appear in the Lagrangian (\ref{Lag}) and in the squared-mass equations (see details in Ref. \cite{walaah}), we obtain the following results (shown in Table 1) for the masses of the vector and pseudovector hybrid mesons:
\begin{table}[h] \centering
\begin{tabular}
[c]{|c|c|}\hline
Resonance & Mass [MeV] \\\hline
$\pi_1^{hyb}$ & $1600$ [input: assigned to $\pi_1(1600)$] \\\hline
$\eta_{1,N}^{hyb}$ & $1660$ \\\hline
$\eta_{1,S}^{hyb}$ & $1751$ \\\hline
$K_1^{hyb}$ & $1707$ \\\hline
$b_1^{hyb}$ & $2000$ [input set as an estimate] \\\hline
$h_{1N,B}^{hyb}$ & $2000$ \\\hline
$K^{hyb}_{1,B}$ & $2063$ \\\hline
$h_{1S,B}^{hyb}$ & $2126$ \\\hline
\end{tabular}%
\caption
{Masses of the exotic $J^{PC}=1^{-+}$ and $J^{PC}=1^{+-}$ hybrid mesons.}%
\end{table}%
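As a rough numerical illustration of how the entries of Table 1 follow from Eqs. (\ref{hcp1})-(\ref{hcp3}), the short script below fixes $h_{3}^{hyb}$ from the $b_{1}^{hyb}$-$\pi_{1}$ splitting and propagates it to the kaonic and strange-isoscalar chiral partners. The condensate values $\phi_{N}\approx0.16$ GeV and $\phi_{S}\approx0.13$ GeV are rough, assumed inputs of the order of typical eLSM fits (they are not quoted from Ref. \cite{walaah}); the full determination of all parameters, including the $\Delta^{hyb}$, $h_{1}^{hyb}$, and $h_{2}^{hyb}$ contributions, is performed in Ref. \cite{walaah}, so the numbers produced here are only indicative and do not exactly reproduce the table.
\begin{verbatim}
import math

# Masses in GeV: pi_1(1600) is the experimental input, b_1^hyb the
# 2 GeV estimate; K_1^hyb and eta_{1,S}^hyb are taken from Table 1.
m_pi1, m_b1 = 1.600, 2.000
m_K1, m_eta1S = 1.707, 1.751

# Illustrative (assumed) chiral condensates in GeV.
phi_N, phi_S = 0.16, 0.13

# Eq. (hcp1): m_b1^2 - m_pi1^2 = -2 h3 phi_N^2
h3 = -(m_b1**2 - m_pi1**2) / (2.0 * phi_N**2)

# Eqs. (hcp2) and (hcp3) then give the 1^{+-} partner masses.
m_K1B  = math.sqrt(m_K1**2 - math.sqrt(2.0) * phi_N * phi_S * h3)
m_h1SB = math.sqrt(m_eta1S**2 - phi_S**2 * h3)

print(f"h3^hyb = {h3:+.1f}")
print(f"m(K_1B^hyb)   = {1000.0 * m_K1B:.0f} MeV")
print(f"m(h_1S,B^hyb) = {1000.0 * m_h1SB:.0f} MeV")
\end{verbatim}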
\section{Conclusion}
We have enlarged a chiral model, the so-called eLSM, in the case of $N_f=3$ by including the lightest hybrid nonet with $J^{PC}=1^{-+}$ and the nonet of its chiral partners with $J^{PC}=1^{+-}$ in a chiral multiplet. The eLSM implements the global chiral $U(N_f)_r \times U(N_f)_l$ symmetry as well as the discrete T, P, and C symmetries of QCD. The global chiral symmetry is broken in several ways: explicitly through non-vanishing quark masses, spontaneously due to the chiral condensate, and at the quantum level due to the chiral anomaly. To our knowledge, this is the first time that a model containing vector and pseudovector hybrid mesons has been constructed. The resonance $\pi^{hyb}_1$ is assigned to $\pi_1(1600)$ (with mass $1660^{+15}_{-11}$ MeV) and the mass of $b_1^{hyb}$ is set to $2$ GeV. The masses of the other hybrid states are computed and reported in Table $1$. Note that our model predicts the mass of the state $\eta_{1,N}^{hyb}$ to be nearly the same as that of $\pi_1^{hyb}\equiv \pi_1(1600)$ because of the small nonstrange-strange mixing, in agreement with the homochiral nature of the chiral multiplet.
Moreover, the calculation and the results of the decay widths of the lightest vector and pseudovector hybrid mesons are presented in
Ref. \cite{walaah}.
\section*{Acknowledgements}
The author thanks F. Giacosa, C. Fischer, and D. Parganlija for cooperation leading to Ref. \cite{walaah}. Moreover, the author acknowledges support from the NYUAD Center for Interacting Urban Networks (CITIES) through Tamkeen under the NYUAD Research Institute Award CG001.
\section{Introduction}
Illicit platforms have become increasingly common on the dark web \cite{chen_dark_2012}. Examples include Dark Net Markets (DNMs) and carding shops enabling hackers to exchange stolen data, hacking tools, and other illegal products. These malicious products can be detrimental to cyber defense and cause significant loss and damage to cyber infrastructure. To address this issue, proactive Cyber Threat Intelligence (CTI) aims at monitoring the dark web to inform cybersecurity decision making \cite{sapienza2018discover, ebrahimi2018detecting,wen2021key,liu2020identifying, ebrahimi2020detecting, ebrahimi_cross-lingual_nodate}. For instance, cybersecurity firms such as FireEye identify threats to customer assets from the dark web \cite{heires2016terror, Li10.1145/2676869}. Also, with the prevalence of financial data breaches in carding shops and DNMs, monitoring the dark web helps financial institutions alert their customers to potential risks \cite{website}. While developing proactive CTI necessitates automated web crawling for dark web data collection, the dark web extensively employs anti-crawling measures to prevent automated data collection. Text-based CAPTCHA is often known as the most common and difficult anti-crawling measure to counteract in the dark web \cite{weng2019towards, du_identifying_2018}. CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. Text-based CAPTCHA identifies and blocks web crawlers by presenting a heavily obfuscated image of characters and testing the user’s ability to identify the characters shown in the CAPTCHA image. The obfuscation generally involves distorting characters in terms of their font, color, and rotation.
Text-based CAPTCHA can significantly hamper the automation of large-scale dark web collection. When navigating through dark web platforms, crawlers are frequently disrupted by text-based CAPTCHA challenges, which then require human involvement and are thus detrimental to automated large-scale dark web collection. While recent machine learning (ML) methods have been developed and shown promising results in automated CAPTCHA breaking \cite{ferreira2019breaking, ye2018yet, hussain2017segmentation, george2017generative, gao2013robustness}, dark web platforms increase the difficulty of algorithmic CAPTCHA breaking. Text-based CAPTCHA images in the dark web differ from those on regular platforms in two ways. First, CAPTCHA backgrounds are made particularly noisy by adding distracting backgrounds consisting of colorful curves and dots. Such noisy backgrounds are challenging for ML-based methods because they need to learn and distinguish a large number of background patterns, in addition to recognizing the characters. Second, while the character length of text-based CAPTCHA images is a key configuration parameter for training ML-based CAPTCHA breaking methods, dark web platforms rarely use CAPTCHA images with a fixed character length. Pre-trained ML methods often have difficulty in breaking CAPTCHA images with lengths different from what they were trained on. In addition to these challenges, there is a lack of human-labeled, dark web-specific text-based CAPTCHA datasets for training ML-based CAPTCHA breaking methods. As such, existing CAPTCHA breaking methods often fail to effectively facilitate the automated collection of dark web content for CTI.
Motivated by these challenges, our study contributes to the cybersecurity literature by proposing a novel CAPTCHA breaking framework that leverages the state-of-the-art deep learning techniques for breaking dark web CAPTCHA. The proposed framework, named DW-GAN, comprises three sequential components. First, the automated background removal component seeks to remove the dark web noisy background to improve CAPTCHA breaking performance. To this end, we propose a Generative Adversarial Network (GAN) model, CAPTCHA GAN, that learns to generate CAPTCHA images with relatively clean background from the original patterns with noisy backgrounds. Instead of using millions of labeled CAPTCHA images for training, our CAPTCHA GAN can generate training data to address the lack of labeled training data. Second, the character segmentation component addresses the challenge of variable character length by extracting image segments, each of which contains one single character. Additionally, we extend the core image segmentation technique with a region enlargement procedure for achieving better segmentation. Lastly, the character recognition component detects the character in each CAPTCHA image segment. We utilize the state-of-the-art Convolutional Neural Network (CNN) to recognize segmented characters.
Our proposed framework was rigorously evaluated on a research testbed comprising various CAPTCHA images from the dark web. Our evaluation experiments show that the proposed framework consistently outperformed the state-of-the-art baseline CAPTCHA-breaking methods. In particular, our ablation analysis shows that our method was able to effectively remove the colorful curves and dots in the background. Moreover, our proposed CAPTCHA segmentation was able to further improve the success rate of breaking CAPTCHA with variable character length over the state-of-the-art baseline techniques. We demonstrate the applicability of our proposed framework through a case study on dark web data collection, where we incorporated our framework into a dark web crawler. Equipped with DW-GAN, the crawler successfully collected a medium-sized Dark Net Marketplace within about 5 hours without any human involvement, where DW-GAN was able to break CAPTCHA images with no more than three attempts.
The remainder of the paper is organized as follows. Section 2 presents a review of related literature to motivate our research and provide a technical background of CAPTCH-breaking. Section 3 details our proposed design of the DW-GAN framework. Section 4 systematically examines the effectiveness of our proposed framework and its major novelties in comparison with the state-of-the-art benchmark techniques. Section 5 demonstrates the applicability of our proposed framework through a case study on dark web collection. Lastly, Section 6 summarizes our contribution and discusses our future directions.
\section{Literature Review}
Based on our research objectives, we review two major areas of research. First, we examine dark web data collection for proactive CTI as the major application domain of our study and investigate CAPTCHA as a major challenge hampering dark web data collection. Second, we review the state-of-the-art text-based CAPTCHA breaking methods in prior research. In particular, we explore image preprocessing and background denoising techniques as necessary steps for breaking the dark web CAPTCHA with complex backgrounds. Subsequently, we examine character segmentation as a vital step to breaking CAPTCHA with variable character length. Lastly, we investigate character recognition techniques for distinguishing segmented CAPTCHA characters.
\subsection{Dark Web Data Collection for CTI}
Cyber attacks are projected to cost the global economy \$6 trillion by 2021 \cite{morgan_2017_2017}. The mitigation of cyber attacks increasingly relies on gathering hacker intelligence from the dark web \cite{chen_dark_2012} to proactively gain reconnaissance on emerging cyber threats and key threat actors \cite{ebrahimi_semi-supervised_2020}. The dark web features a conglomerate of covert illegal platforms, including shops selling stolen credit/debit card and dark net marketplaces facilitating transactions between cybercriminals. Accordingly, dark web data collection has been considered as a key step in developing proactive CTI \cite{Chiang10.1145/2361256.2361257, robertson_darkweb_2017}.
Automated data collection from the dark web is crucial to proactively responding to cyber threats and data breach incidents. However, the automation of dark web data collection is complicated by anti-crawling measures widely adopted in the dark web \cite{samtani2017exploring}. These measures include user authentication, session timeout, deployment of cookies, and CAPTCHA. While most of these measures can be effectively circumvented through implementing automated counter measures in a crawler program, CAPTCHA is the most hampering anti-crawling measure in the dark web that cannot be easily circumvented due to high cognitive capabilities that are often not possessed by automation tools \cite{du_identifying_2018, zhang2019survey}. There are four major types of CAPTCHA: text-based, image-based, video-based, and audio-based. Text-Based CAPTCHA requires subjects to recognize the characters shown in a deliberately obfuscated alphanumeric pattern. Image-based CAPTCHA asks subjects to perform certain actions (e.g., drag and drop) on a specific portion of a given image. Video-based CAPTCHA challenges subjects to choose an option that best describes the content of a given video. Finally, audio-based CAPTCHA plays an audio and requires users to enter the characters mentioned in the audio. Text-based CAPTCHA is the most prevalent type among various types of CAPTCHA in the dark web, and is the focus of our research \cite{wu2019machine}. We note that despite the ubiquity of traditional text-based CAPTCHA in online space, dark web CAPTCHA patterns are uniquely positioned due to the use of more complicated background noise. Accordingly, while this study is mainly focused on dark-web CAPTCHA as a more challenging problem, the proposed method in this study is expected to be applicable to other types of CAPTCHA without loss of generality.
\subsection{Text-based CAPTCHA Breaking Methods }
Automated breaking of text-based CAPTCHA is non-trivial due to two main challenges: complex security measures and variable character length. The former involves intentionally obfuscating patterns added to the CAPTCHA image for complicating the recognition task \cite{chen2017survey}. The latter draws upon the fact that most automated methods trained to break CAPTCHA with a specific length are poorly generalizable to CAPTCHAs with different character length \cite{ye2018yet}. There are two categories of security measures: foreground security measures and background security measures. Foreground security measures are mainly employed to prevent characters from having a uniform appearance. These measures include font change (i.e., varying the typeface of the character to non-standard unseen fonts), character rotation (i.e., varying the angular position of each character), and color change (i.e., varying the foreground color of each character). Background security measures provide additional obfuscation to the background of the CAPTCHA image by introducing dot noise (i.e., dot-shaped noise applied to the background surface while interfering with the characters), curve noise (i.e., irregular curvatures that cross characters), and color change (i.e., varying the color of noise so that it is not simply removable by elimination of a specific color). Figure \ref{captcha_example} illustrates examples of background and foreground security measures.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\linewidth]{img/two_captcha_sample.png}
\centering
\caption{Examples of CAPTCHA with Background and Foreground Security Measures: (a) dots+curves+font size noise (b) curves + foreground noise}
\label{captcha_example}
\end{figure}
Extant CAPTCHA-breaking frameworks often employ three steps to address these security measures \cite{wu2019machine}:
\begin{itemize}
\item Image Preprocessing: Performs a series of computer vision processing techniques to remove background and foreground noise
\item Character Segmentation: Separates characters in the CAPTCHA image for character-level CAPTCHA breaking (necessary to handle patterns with variable character length)
\item Character Recognition: Identifies foreground characters via deep learning-based classification architectures such as CNN.
\end{itemize}
We summarize prior CAPTCHA-breaking studies with focus on the methods employed at each of these steps in Table \ref{lit_overview}.
\input{tables/lit_overview}
\subsubsection{Image Processing and Background Denoising}
Preprocessing CAPTCHA images involves applying computer vision techniques to enhance the foreground and denoise the background. Past studies utilize five major image preprocessing techniques for this purpose: normalization, grayscale conversion, Gaussian smoothing, dilation, and erosion. Normalization scales pixel values into a certain range to enhance inconspicuous features \cite{shanmugamani2018deep}. Grayscale conversion changes colored images into grayscale to reduce the negative impact of a high variance in colors \cite{gao2013robustness}. Gaussian smoothing applies a Gaussian function to remove details and reduce the impact of curves and dots on the detection of characters’ edges \cite{wu2019machine}. Dilation enlarges shapes proportionally to help repair the missing parts of characters \cite{ferreira2019breaking}. Finally, erosion down-scales shapes proportionally in order to shrink dot noise \cite{ferreira2019breaking}. Table \ref{img_processing} summarizes the main strengths of each image preprocessing technique in dealing with background noise. While these techniques help enhance foreground characters, background denoising in complex patterns still remains a challenge \cite{gao2013robustness}. As shown in Table \ref{img_processing}, the removal of curve noise is a difficult task that is hard to address with common image preprocessing methods. This is mainly due to the resemblance of the curves to the foreground characters. Accordingly, attempts to remove curves may result in unintended removal of foreground characters. Background denoising is even more difficult for dark web CAPTCHA with complex and noisy backgrounds.
\input{tables/img_processing}
While deep learning models have demonstrated promising results in image processing and can potentially improve background removal, most deep learning models are required to be trained on a large number of unseen background patterns. However, there lacks enough training data of background patterns for training such models. This scarcity is even more severe for specific domains such as the dark web. Recently, Ye et al. \cite{ye2018yet} have shown that Generative Adversarial Network (GAN) can address this issue by automatically generating background patterns with eliminated curve noise. GAN for background removal comprises two competing neural networks known as generator and discriminator. The generator tries to generate patterns with clean background from input CAPTCHAs. The discriminator tries to identify if the generator has fully removed the background. Nevertheless, the approach in \cite{ye2018yet} operates at the image level, and thus is not applicable to dark web CAPTCHA with variable length.
\subsubsection{Character Segmentation to Address Variable Character Length}
Besides the noisy background, another unique challenge of dark web CAPTCHA is the variable character length. Many automated CAPTCHA breaking solutions operate at the image level, where the CAPTCHA image is analyzed in its entirety and the CAPTCHA breaker predicts all characters in the CAPTCHA at once. As such, these solutions need to be trained on fixed-length CAPTCHA. The variable character length makes such image-level CAPTCHA breakers ineffective for two major reasons. First, image-level models that are trained on a pre-specified length are not applicable to CAPTCHAs with a different character length. Second, at the image level, the number of class labels grows exponentially with the number of characters (e.g., $10^3$ for 3-digit and $10^4$ for 4-digit numerical CAPTCHA). As such, a very large number of CAPTCHA images is required for model training because each class label needs a sufficient number of training CAPTCHA images \cite{ye2018yet}.
In contrast, character-level CAPTCHA breakers require a much smaller number of class labels (e.g., 10 different classes (i.e., {0,…, 9}) for numerical patterns). In particular, character-level CAPTCHA breakers perform character segmentation to separate individual characters from one another prior to recognition \cite{wu2019machine}. Four segmentation methods are commonly leveraged in CAPTCHA breaking research: color filling segmentation (CFS), interval-based, pixel distribution-based, and contour detection-based. We provide a review of these methods in Table \ref{seg_techniques}. As observed in the table, the contour detection method is most suitable for dark web CAPTCHA since it can operate despite changes in the font, color, and rotation of the characters. This is mainly due to the independence of the contours from these security measures.
\input{tables/seg_techniques}
\subsubsection{Character Recognition}
After identifying the character boundaries via segmentation, the last step involves correctly recognizing the characters within each boundary \cite{gao2013robustness}. As shown in Table \ref{lit_overview}, among the past studies, Convolutional Neural Networks (CNNs) have been widely used for character recognition task \cite{weng2019towards,chen2017survey}. CNNs have shown promising results in counteracting foreground security measures such as rotation, color change, and font change of the characters. A CNN is built upon three main components: convolution layer, sampling layer, and fully connected layer. Each CNN component contributes to a certain aspect of CAPTCHA Character recognition. Convolution layer extracts geometrical salient features from local regions of the input image. For CAPTCHA breaking, this layer extracts rotation-invariant features from characters. The sampling layer combines features from local regions to generate more abstract features. For CAPTCHA breaking, this layer helps identify characters despite differences in their font and sizes. The fully connected layer weights the extracted features and assigns a probability to the output. For CAPTCHA breaking, this layer predicts the pattern’s class label based on the extracted features.
Accordingly, we expect that CNN can effectively contribute to dark web CAPTCHA character recognition after proper character segmentation.
\subsection{Research Gaps and Questions}
Existing CAPTCHA breaking methods are not designed to address the specific characteristics of dark web CAPTCHA. As such, security analysts often struggle to effectively automate the collection of dark web content for CTI. Specifically, two major research gaps are identified from the review of prior studies. First, there is a lack of studies offering methods for breaking CAPTCHA with complex noisy backgrounds in the dark web. Second, existing methods do not address CAPTCHA with variable character lengths and noisy backgrounds within a unified framework.
Based on these gaps, we pose the following research question:
\textit{How can we design an automated CAPTCHA breaking framework to address the noisy background and variable character length in the dark web?}
\section{Research Design}
To address our research question, we propose Dark Web-GAN (DW-GAN), a novel GAN-based framework that utilizes background denoising, character segmentation, and character recognition to automatically break dark web CAPTCHA. DW-GAN was implemented using PyTorch and OpenCV open-source libraries. DW-GAN aims to counteract background security measures and address the challenge of variable character length for dark web CAPTCHA. Consistent with the steps described in the literature review, DW-GAN comprises three major components shown in Figure \ref{fig_framework}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\linewidth]{img/fig1.png}
\centering
\caption{DW-GAN Framework for Breaking Dark Web-specific CAPTCHA Images (left) and Corresponding Illustration (right) }
\label{fig_framework}
\end{figure}
As illustrated in the figure, first, the GAN-based background denoising component removes the noisy background from a given dark web CAPTCHA image. Then, the denoised CAPTCHA is processed using Gaussian smoothing for removing the residual noise. As opposed to the GAN approach used by Ye et al. {\cite{ye2018yet}} that operates at the image level, and thus is not applicable to dark web CAPTCHA with variable length, our proposed DW-GAN is designed to recognize CAPTCHA at the character level, which does not depend on the CAPTCHA length. Once the background is denoised, characters are segmented using an enhanced contour detection algorithm so that the remaining steps will not depend on the character length of the CAPTCHA image. To this end, we enhance the core border tracing algorithm with a subsequent region enlargement procedure, which is described later. Lastly, segments of the original CAPTCHA image are fed to a CNN for deep learning-based character recognition. We detail each component of DW-GAN in the following subsections.
\subsection{Background Denoising}
\subsubsection{Background Denoising: CAPTCHA GAN}
Attaining a training dataset encompassing all possible background patterns might not be practical for dark web CAPTCHA, as it requires expensive human-labeled data and excessive human involvement. As such, we propose to leverage GAN to automatically generate CAPTCHA background patterns for training background denoising. Specifically, the background denoising component in DW-GAN consists of two sub-components: (1) the CAPTCHA GAN sub-component, a customized GAN architecture to automatically learn how to remove background noise, and (2) the residual background noise removal sub-component to further enhance the CAPTCHA foreground. CAPTCHA GAN aims to reduce various background curve noises in CAPTCHA images automatically. This is achieved by incentivizing the CAPTCHA GAN generator to generate background noise-free counterparts of original CAPTCHA images. This generative feature of CAPTCHA GAN allows for training the model with only a small labeled dataset (i.e., $\sim$500 dark web CAPTCHA images) and achieving a high performance in background denoising. The learning process for background removal in CAPTCHA GAN includes four major steps:
\begin{itemize}
\item Step 1: The generator seeks to create background noise-free CAPTCHA image $x$ from the original dark web CAPTCHA image $y$.
\item Step 2: The generated pattern and the corresponding original CAPTCHA image are fed to the discriminator to assess whether the background noise has been completely removed.
\item Step 3: The generator improves by minimizing the loss function $\mathcal{L_G}$, which combines the discriminator's adversarial feedback with a pixel-to-pixel cross-entropy comparison between the generator's output and the noise-free CAPTCHA image $x$. As in the conventional GAN {\cite{goodfellow2016deep}}, the minimization is conducted via gradient descent as part of the back propagation in the generator's neural network, parameterized by weights $\theta_g$. The adversarial part of the generator's loss is $\mathcal{L_G} = \frac{1}{N}\sum_{n=1}^N\log(1-D(G(y_n)))$, updated along the gradient $\nabla_{\theta_g}\mathcal{L_G}$.
\item Step 4: The discriminator improves by maximizing the loss function $\mathcal{L_D}$, which compares the true label (i.e., incomplete (0) vs. complete background removal (1)) with the discriminator's output. The maximization is conducted via gradient ascent as part of the back propagation in the discriminator's neural network, parameterized by weights $\theta_d$, with $\mathcal{L_D} = \frac{1}{N}\sum_{n=1}^N[\log(D(x_n)) + \log(1-D(G(y_n)))]$ and update direction $\nabla_{\theta_d}\mathcal{L_D}$; this makes the discrimination more accurate even on the denoised CAPTCHA images produced by the generator.
\end{itemize}
Steps 1-4 are repeated until the generator and discriminator reach the equilibrium condition, where neither is able to improve the performance further \cite{goodfellow2016deep}. Figure \ref{fig_captchagan} depicts the abstract view of these steps.
\begin{figure}[htbp]
\centering
\setlength{\abovecaptionskip}{0cm}
\includegraphics[width=0.45\textwidth]{img/GAN_architecture.png}
\caption{Abstract View of CAPTCHA GAN for Background Denoising}
\setlength{\belowcaptionskip}{4cm}
\label{fig_captchagan}
\end{figure}
CAPTCHA GAN's process in Figure {\ref{fig_captchagan}} can be formalized as the minimax game with value function $V$ in Equation \ref{main_loss}, in which $G(\cdot)$ and $D(\cdot)$ denote the generator and discriminator networks, respectively.
\begin{equation}
\underset{G}{\operatorname{min}}\
\underset{D}{\operatorname{max}}\ V
(D, G) = \frac{1}{N}\sum_n^N[(\log(D(x_n))) + \log(1-D(G(y_n)))]
\label {main_loss}
\end{equation}
$N$ denotes the number of images in the dataset, $y_n$ represents the original (noisy) input CAPTCHA image, and $x_n$ its noise-free counterpart. The generator term corresponds to $\mathcal{L_G}$ and the discriminator terms to $\mathcal{L_D}$. CAPTCHA GAN was implemented in Python using the PyTorch deep learning library. All training and testing processes for the experiments in Section \ref{eval} were executed on a single Nvidia RTX 2070 GPU with 2,560 CUDA cores and 8 GB internal memory. To enhance reproducibility, the model specifications of CAPTCHA GAN, including the number of layers, neurons, and activation functions, are given in Appendix \ref{appendix}. Also, the implementation of CAPTCHA GAN is available at https://github.com/johnnyzn/DW-GAN as part of DW-GAN's source code.
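For concreteness, the core of the adversarial training in Steps 1-4 can be sketched in PyTorch as follows. This is a minimal illustration rather than the released implementation: the two networks are simplified stand-ins for the architectures of Appendix \ref{appendix}, and the generator objective is written in the common non-saturating form together with a pixel-level cross-entropy term; all images are assumed to be float tensors in $[0,1]$.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

# Simplified stand-in networks (illustrative only; see Appendix A).
# G maps a noisy CAPTCHA y to a denoised image in [0, 1];
# D outputs one logit scoring whether the background is fully removed.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1),
                  nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 4, stride=2, padding=1),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv = nn.BCEWithLogitsLoss()

def train_step(y_noisy, x_clean):
    """One CAPTCHA GAN update on a batch of (noisy, clean) pairs."""
    # Step 4: the discriminator ascends L_D (clean -> 1, generated -> 0).
    opt_d.zero_grad()
    d_real = D(x_clean)
    d_fake = D(G(y_noisy).detach())
    loss_d = adv(d_real, torch.ones_like(d_real)) + \
             adv(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()
    # Step 3: the generator descends L_G, fooling D while staying
    # close to the noise-free target pixel by pixel.
    opt_g.zero_grad()
    x_gen = G(y_noisy)
    d_fake = D(x_gen)
    loss_g = adv(d_fake, torch.ones_like(d_fake)) + \
             F.binary_cross_entropy(x_gen, x_clean)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
\end{verbatim}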
\subsubsection{Background Denoising: Residual Noise Removal}
While GAN-based background denoising seeks to remove all curve noise in the background, some residual noise may still remain. This residual noise could potentially interfere with the subsequent segmentation and character recognition. Accordingly, we utilize a combination of three image processing techniques to further remove the remaining background noise. Grayscale conversion is first leveraged to reduce the color variance as a preliminary step that alleviates color noise. Then, Gaussian smoothing is employed to reduce the visibility of the residual background noise by blurring the unnecessary details in the image. Lastly, normalization is applied to distinguish the foreground from the background color. An example of the results of residual noise removal is depicted in Figure \ref{fig_noiseremoval}.
\begin{figure}[htbp]
\includegraphics[width=0.5\linewidth]{img/fig3.png}
\caption{An Example of the Result of Residual Noise Removal}
\label{fig_noiseremoval}
\end{figure}
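A minimal OpenCV sketch of this three-step post-processing is given below; the kernel size and the use of Otsu thresholding as the concrete realization of the normalization step are illustrative choices rather than the tuned settings of our implementation.
\begin{verbatim}
import cv2

def remove_residual_noise(img_bgr):
    """Grayscale conversion -> Gaussian smoothing -> normalization."""
    # Reduce color variance (alleviates color noise).
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    # Blur away fine residual dots and curve fragments.
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Separate foreground from background intensity (Otsu threshold
    # as one concrete way to realize the normalization step).
    _, binary = cv2.threshold(blur, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
\end{verbatim}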
\subsection{Character Segmentation}
After background denoising, character segmentation is conducted to identify the boundary of characters. As reviewed in Table \ref{seg_techniques}, contour detection is suitable for segmenting characters in dark web CAPTCHA images due to its robustness against the dark web-specific noise types. Among various contour detection methods, the \textit{border tracing} algorithm has been successfully used for segmenting CAPTCHA images since the algorithm can effectively identify the boundary pixels of the character region and separate the characters from the background \cite{yadav2018feature}. This is mainly because characters in text-based CAPTCHA often have distinguishable borders. The border tracing algorithm has two main steps \cite{pratomo2019algorithm}. In the first step, the image is converted to binary pixels (black and white) and is scanned from the upper left to the bottom right pixel. In the second step, for each pixel, the algorithm searches a square neighborhood (e.g., 3x3 pixels) to find the direction of the edges and define minimal regions to bound the character. The results of such a segmentation process are illustrated in Figure \ref{segmentation}.
\begin{figure}[htbp]
\includegraphics[width=0.3\linewidth]{img/fig4.png}
\caption{An Example of the Result of Character Segmentation}
\label{segmentation}
\end{figure}
While effective, the regions detected by border tracing are sometimes not large enough to encompass the entire character (e.g., digit ‘6’ in Figure \ref{ourseg}(b)), since the core border tracing method has a tendency to yield minimal segments. Such incorrect segmentation would result in the false recognition of the corresponding character, and thereby of the entire CAPTCHA image. To address this issue, we extend border tracing with a subsequent region enlargement step, a two-step process that enlarges the initial regions found by the core border tracing algorithm:
\begin{enumerate}
\item The initial character regions are first detected by border tracing algorithm (Figure \ref{ourseg}(b)).
\item Subsequently, the detected regions are overlapped with the maximal regions resulted from dividing the CAPTCHA image into fixed intervals. This could result in detecting a larger region that bounds the character (Figure \ref{ourseg}(c)).
\end{enumerate}
\begin{figure}[htbp]
\includegraphics[width=1\linewidth]{img/fig5.png}
\caption{Our Character Segmentation Process of Enhancing Border Tracing with Interval-based Segmentation}
\label{ourseg}
\end{figure}
As shown in the figure, characters M and q have a considerable overlap that leads to being incorrectly identified as one character by border tracing. However, the subsequent interval-based segmentation identifies them as two imperfect characters (i.e., an incomplete "M" and a "q" with an extra stroke on the left). Given that the subsequent character recognition component is trained on various forms of incomplete characters, there is a reasonable chance that the incomplete "M" and "q" are correctly recognized. Of course, in severe cases of overlapping characters, where it could be also difficult for a human observer to distinguish characters, the character recognition component is more likely to make mistakes.
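The two-step procedure can be sketched with OpenCV's border-following contour detector. The snippet below is a simplified stand-in for our region-enlargement logic: in particular, deriving the interval width from an assumed character count is an illustrative shortcut, whereas the actual implementation determines the intervals from the image itself.
\begin{verbatim}
import cv2

def segment_characters(binary, n_chars):
    """Border tracing followed by interval-based region enlargement."""
    # Step 1: initial character regions from border tracing
    # (OpenCV implements the Suzuki-Abe border-following algorithm).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = sorted(cv2.boundingRect(c) for c in contours)  # (x, y, w, h)
    # Step 2: enlarge each traced region to the fixed-width intervals
    # it overlaps, so a segment always spans a full character slot.
    width = binary.shape[1] // n_chars
    segments = []
    for x, y, w, h in boxes:
        lo = (x // width) * width
        hi = min(((x + w) // width + 1) * width, binary.shape[1])
        segments.append(binary[:, lo:hi])
    return segments
\end{verbatim}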
\subsection{Character Recognition}
Consistent with prior literature \cite{weng2019towards, chen2017survey}, we leverage CNN to detect characters in extracted CAPTCHA segments. Our character recognition CNN stacks convolutional layers and sampling layers to detect characters via the architecture described in Appendix \ref{appendix}. The convolutional layer extracts features from local regions of the segmented CAPTCHA image and the sampling layer combines extracted features across multiple local regions to identify fine-grained features (e.g., lines and edges). Another convolutional-sampling structure of such kind is then stacked over to extract features from larger regions through combining lines and edges into more abstract features informative of characters. Such a stacked structure further addresses CAPTCHA security measures such as rotation and font size change by jointly considering features from multiple local regions. The extracted features are then used to detect characters in a fully connected layer. Given the successful CNN training practices in the literature \cite{Pierazzi10.1145/3382158,Unger10.1145/3386243}, we used Adam optimizer \cite{kingma_adam:_2015} to minimize the cross entropy loss to train the model and utilize Rectified Linear Units (ReLU) activation function \cite{nair2010rectified} and the dropout mechanism to improve the efficiency of model training.
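A compact PyTorch version of such a stacked convolution-sampling recognizer is sketched below. The layer sizes, the 32x32 input resolution, and the 36-class alphanumeric label set are illustrative choices of this sketch; the exact architecture is listed in Appendix \ref{appendix}. Training then minimizes the cross-entropy between the logits and the character labels with the Adam optimizer, as described above.
\begin{verbatim}
import torch.nn as nn

class CharCNN(nn.Module):
    """Recognizes a single segmented CAPTCHA character.

    Assumes segments resized to 32x32 grayscale; 36 classes
    (0-9, a-z) is an illustrative label set.
    """
    def __init__(self, n_classes=36):
        super().__init__()
        self.features = nn.Sequential(
            # Convolution: local, rotation-tolerant stroke features.
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # sampling: tolerance to font/size change
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2))   # combines features over larger regions
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(0.5),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_classes))  # per-character class logits

    def forward(self, x):  # x: (batch, 1, 32, 32) character segment
        return self.classifier(self.features(x))
\end{verbatim}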
\section{Evaluation}
\label{eval}
We systematically evaluated the performance of our proposed framework in breaking CAPTCHA images with a testbed encompassing dark web CAPTCHA datasets. In particular, our CAPTCHA testbed comprised text-based CAPTCHA images from three different dark web datasets and a widely used open-source CAPTCHA synthesizer. The dark web datasets included three sets of CAPTCHA images: two sets from carding shops (Rescator-1 and Rescator-2) and one set from a newly-emerged dark net market (Yellow Brick), as shown in Table \ref{testbed}. All three datasets were suggested by CTI experts and were selected based on their popularity and scale in the dark web. For each dataset, a Tor-routed spider was developed to collect 500 CAPTCHA images. The three datasets were labeled and inspected by two CTI experts. The second data source of our testbed was an open-source CAPTCHA synthesizer. We leveraged the synthesizer to generate CAPTCHA images with controllable security measures, which enabled us to create a fair comparison of different methods on CAPTCHA images with variable character length. These datasets along with their CAPTCHA examples are summarized in Table \ref{testbed}.
\input{tables/testbed}
We conducted three experiments to evaluate the effectiveness of our proposed research design:
\begin{itemize}
\item Experiment 1 examined the overall CAPTCHA breaking performance of our proposed framework in comparison with the state-of-the-art baseline methods. This experiment targets the dark web-specific condition of lacking labeled CAPTCHA images for training. To this end, we sampled 500 images from each of the dark web CAPTCHA datasets to serve as the evaluation testbed. The CAPTCHA breaking performance was measured by success rate \cite{bursztein2011text,ye2018yet}, a commonly used metric representing the percentage of CAPTCHA images that are successfully recognized \cite{ye2018yet}. A successful recognition entails correctly recognizing \textit{all} characters in the CAPTCHA image. Three state-of-the-art methods from past research were used as benchmarks: image-level CNN \cite{le2017using}, image-level CNN with preprocessing (grayscale conversion, normalization, and Gaussian smoothing) \cite{nouri2020deep}, and character-level CNN with interval-based segmentation \cite{tang2018research}.
\item Experiment 2 investigated the effect of each component in our proposed DW-GAN framework through an ablation analysis. To this end, the framework's major components, including background denoising, border tracing segmentation, and interval-based segmentation, were removed from the DW-GAN framework to evaluate their impact on the automated CAPTCHA breaking performance. The ablation analysis was conducted on all dark web datasets used in the benchmark evaluation in Experiment 1. Also, consistent with Experiment 1, the success rate was adopted as the evaluation criterion.
\item Experiment 3 evaluated the efficacy of DW-GAN components to determine whether background denoising and character segmentation improved CAPTCHA breaking performance. Accordingly, this experiment encompassed two sub-experiments as detailed below. We employed the CAPTCHA synthesizer to generate CAPTCHA images with a wide range of background noise and variable character lengths.
\begin{itemize}
\item Experiment 3.1 examined the contribution of the background denoising component to the overall performance of DW-GAN. We evaluated the effect of our background denoising component before and after its application to each type of background noise. To measure the effect, we adopted the structural similarity (SSIM) metric \cite{powell2014fgcaptcha}, a widely used measure of the similarity of two images. SSIM can be used to determine the extent to which the background security measures are counteracted by our background denoising component. A higher SSIM between the denoised image and its clean version (without background noise) indicates a more effective removal of background security measures.
\item Experiment 3.2 investigated the effect of the character segmentation component on the overall performance. This experiment compared the performance of DW-GAN with a benchmark image-level method and two character-level methods to evaluate how character segmentation improved CAPTCHA breaking performance when character length varied. In accordance with the common dark web CAPTCHA length, the character lengths ranged from 4 to 7. The evaluation was based on a 90\%-10\% train-test split. That is, our training set contained 50,000 CAPTCHA images (10,000 images for each character length) and the testing set included 5,000 CAPTCHA images (1,000 images for each character length).
\end{itemize}
\end{itemize}
\subsection{Results of Experiment 1: Benchmark Evaluation}
We compared the performance of DW-GAN to the state-of-the-art CAPTCHA breaking methods across the dark web datasets. The success rates for each method are shown in Table \ref{ex1_result}. The asterisks denote the statistical significance obtained from paired t-tests between the results of DW-GAN and the above-mentioned benchmark methods (P-values significant at 0.05: *, 0.01: **, 0.001: ***).
\input{tables/ex1_result}
Three notable observations can be made from the results of Experiment 1. First, comparing the automated CAPTCHA breaking method in \cite{le2017using} against \cite{nouri2020deep}, it is observed that preprocessing improved the success rate across all datasets. Second, the character-level method proposed by \cite{tang2018research} generally achieved a higher success rate compared to image-level CAPTCHA breaking methods. Third, and most importantly, our proposed framework with GAN-based background denoising yielded the highest success rate across all datasets. Overall, DW-GAN outperformed the state-of-the-art benchmark methods with statistically significant margins in dark web CAPTCHA. DW-GAN's performance is attributed to combining background denoising and character recognition in a unified framework. The contribution of each component is further investigated in Experiment 2 and Experiment 3.
\subsection{Results of Experiment 2: Ablation Analysis}
To gauge the contribution of each component in the proposed DW-GAN framework, we further evaluated the CAPTCHA breaking performance (i.e., success rate) after eliminating each major component of DW-GAN. To retain the basic functionality of CAPTCHA breaking, we kept at least one segmentation method as well as the CNN for character recognition. Accordingly, we eliminated the background denoising component, border tracing segmentation, and interval-based segmentation in a consecutive manner, resulting in three alternative models to compare with the proposed DW-GAN. The results of the ablation analysis for these three models across the dark web datasets of Experiment 1 are given in Table {\ref{ex_ablation}}.
\input{tables/ex_ablation}
As shown in Table {\ref{ex_ablation}}, all three essential components of DW-GAN contributed significantly to the performance of automated CAPTCHA breaking on average across all three datasets. Specifically, eliminating the background denoising component resulted in approximately 28\% performance loss (67.83\% vs. 95.6\%) on average across the three datasets. Similarly, removing border tracing segmentation and interval-based segmentation led to 27\% and 40\% performance drops, respectively (68.21\% and 57.07\% vs. 95.96\%). These results indicate that background denoising, border tracing segmentation, and interval-based segmentation each improved the performance significantly. We note that for Rescator-2, due to its less complicated background, border tracing alone could be sufficient for effective automated CAPTCHA breaking, and thus there is less need for interval-based segmentation. That is, our DW-GAN framework outperformed by a larger margin on more complex backgrounds with various noise types.
\subsection{Results of Experiment 3.1: Evaluating Background Denoising}
To evaluate the effect of our background denoising component, we first compared the SSIM of original CAPTCHA images with different types of background noise to clear CAPTCHA images (same pattern without any background noise). Then we applied our proposed GAN-based background denoising on the original image and computed the SSIM with respect to the clear CAPTCHA image. The results of these two comparisons are shown in paired bars for each background noise type in Figure \ref{barchart}. As seen in the figure, our GAN-based background denoising method shows consistently higher similarity to the clear image for various types of background noise, including dot, curves, dot+curves, dense dots, dense curves, and dense dot with dense curves.
\begin{figure}[htbp]
\includegraphics[width=0.7\linewidth]{img/ex21.png}
\caption{SSIM Change Before and After Background Denoising for Different Security Measure Combinations}
\label{barchart}
\end{figure}
The high SSIM index achieved by DW-GAN suggests that GAN-based background denoising is capable of removing a wide variety of background noise that are widely used in the dark web.
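The SSIM comparison itself is straightforward with scikit-image; the snippet below mirrors, but is not necessarily identical to, our evaluation code, and uses synthetic arrays purely as stand-ins for a (clean, denoised) CAPTCHA pair.
\begin{verbatim}
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Stand-in grayscale images of the same CAPTCHA (illustrative only).
clean = np.zeros((60, 160), dtype=np.uint8)
denoised = clean.copy()
denoised[10:20, 30:40] = 255  # pretend some residual noise survived

score = ssim(clean, denoised, data_range=255)
print(f"SSIM vs. clean reference: {score:.3f}")
\end{verbatim}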
\subsection{Results of Experiment 3.2: Evaluating Character Segmentation}
To gauge the effect of our proposed segmentation component in DW-GAN, we applied our segmentation method to various lengths of CAPTCHA images and compared the success rate with other image-level methods as well as common segmentation methods in character-level CAPTCHA breaking. The corresponding success rates are shown in Table \ref{ex_22}.
\input{tables/ex22}
We highlight two major observations from Table \ref{ex_22}. First, while the CAPTCHA breaking performance of image-level methods drastically decreased as the character length increased, the variation in character length had a significantly smaller impact on the CAPTCHA breaking performance of our proposed segmentation method in DW-GAN. This shows that DW-GAN's segmentation method performed better than other segmentation methods. Second, consistent with the result of Experiment 1, character-level methods demonstrated a higher success rate than image-level methods over all character lengths. To further demonstrate the sensitivity of success rate to increasing the character length for image-level and character-level methods, we plot the performance while increasing the character length in Figure \ref{trend}. Observing that DW-GAN maintains a high performance as character length increases signifies the effectiveness of the enhanced character segmentation proposed in DW-GAN.
\begin{figure}[h]
\includegraphics[width=0.6\linewidth]{img/ex22.png}
\caption{Sensitivity of Success Rate to Increasing the Character Length}
\label{trend}
\end{figure}
\section{Dark Web Case Study: Yellow Brick DNM}
We further demonstrate the application of our proposed framework on a real-world DNM. The purpose of this case study is two-fold. First, we sought to show that our proposed framework could detect and break CAPTCHA challenges to facilitate scalable, automated dark web data collection. Second, we sought to show that our proposed framework was able to effectively remove human involvement from dark web data collection. Figure \ref{yellowbrick} illustrates the process of employing DW-GAN for automated dark web CAPTCHA breaking in a DNM.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\linewidth]{img/fig9.PNG}
\centering
\caption{ An Automated Process for Dark Web CAPTCHA Breaking in a Dark Net Market with DW-GAN}
\label{yellowbrick}
\end{figure}
DW-GAN can be embedded in a dark web crawler. When a CAPTCHA challenge is encountered on a targeted dark web platform, DW-GAN is activated to break the CAPTCHA within a few attempts. The entire process can be broken down into several stages. First, the crawler navigates to the homepage of the targeted DNM. Parsing the login page, the crawler captures the CAPTCHA image and sends it to our trained DW-GAN component. DW-GAN denoises, segments, and recognizes the CAPTCHA image, and the recognized result is returned to the crawler. Finally, the crawler iteratively enters the predicted result until a correct prediction occurs (i.e., the platform permits access). As such, the DW-GAN-enhanced crawler can collect illicit product information without human interaction.
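The retry logic around DW-GAN can be sketched as follows. Every helper name here (fetch_login_page, extract_captcha, submit_login) is a hypothetical placeholder for our Tor-routed crawler's internals, and the attempt cap reflects the empirical bound reported below.
\begin{verbatim}
MAX_ATTEMPTS = 3  # in our case study, DW-GAN never needed more tries

def pass_captcha(session, credentials, dw_gan):
    """Retry DW-GAN on a platform's CAPTCHA until access is granted."""
    for _ in range(MAX_ATTEMPTS):
        page = fetch_login_page(session)       # hypothetical helper
        captcha_img = extract_captcha(page)    # hypothetical helper
        # DW-GAN pipeline: denoise -> segment -> recognize.
        text = dw_gan.solve(captcha_img)
        if submit_login(session, credentials, text):  # hypothetical
            return True
    return False
\end{verbatim}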
Based on the automated process in Figure \ref{yellowbrick}, a Tor-routed web crawler was developed and equipped with DW-GAN to automatically collect illegal advertised product pages from the Yellow Brick DNM. The crawler first detected the login page and automatically entered credentials. Then, the crawler collected all Onion links that contained the information of illegal items by sending HTTP requests to the platform server. Yellow Brick requires re-login and bypassing CAPTCHA for every 15 HTTP requests on average. Without our automated solution, the web crawler needs human intervention every three minutes to manually bypass the CAPTCHA challenges issued by the platform.
Using a crawler enhanced by our DW-GAN, we were able to collect 1,831 illegal products from Yellow Brick. Among these products, there were 286 cybersecurity-related items, including 102 stolen credit cards, 131 stolen accounts, 9 forged document scans, and 44 hacking tools, as well as 1,223 drug-related products (opioid, cocaine, etc.). Overall, collecting the Yellow Brick market with DW-GAN took about 5 hours without human involvement. In particular, each HTTP request took 8.8 seconds to load a new webpage; crawling 1,831 pages therefore took 268.5 minutes. Solving each recurring CAPTCHA challenge (issued per 15 HTTP requests) took our DW-GAN crawler 18.6 seconds. Overall, the proposed framework could automatically break CAPTCHA with no more than 3 attempts, and breaking all CAPTCHA images took about 76 minutes in total for all 1,831 product pages, a process that was fully automated.
\section{Conclusion and Future Directions}
Text-based CAPTCHA breaking has been a major challenge for collecting large-scale dark web data for developing proactive CTI. Two major challenges have been the particularly noisy background and the variable character length of CAPTCHA images. Leveraging GAN and CNN, we propose a novel and automated framework for breaking text-based CAPTCHA in the dark web. The proposed framework utilizes GAN to counteract background security measures for dark web-specific CAPTCHA and leverages an enhanced border tracing algorithm to extract segments encompassing single characters from CAPTCHA images. Our DW-GAN was evaluated on several dark web CAPTCHA datasets. DW-GAN significantly outperformed the state-of-the-art benchmark methods on all datasets in the dark web research context, where there is a lack of labeled CAPTCHA data. Moreover, the proposed framework maintained relatively consistent CAPTCHA breaking performance when character length varies. While collecting dark web data traditionally requires significant human effort, a web crawler equipped with DW-GAN is able to collect large-scale dark web data without human involvement.
Future research is needed to counteract more sophisticated CAPTCHA patterns in the dark web. For instance, we have observed some newly emerged small-sized dark web platforms that complement the text-based CAPTCHA with a ``question-answering'' challenge. In addition to solving the text-based CAPTCHA, these platforms require the user to answer a question, such as ``adding two numbers'' or ``the first month of the year.'' Enhancing DW-GAN for such emerging CAPTCHA challenges remains a promising future research direction.
\begin{acks}
This material is based upon work supported by the National Science Foundation (NSF) under the following grants: SaTC-1936370, CICI-1917117, and SFS-1921485.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Conclusion} \label{sec:conclusion}
We have presented a novel \textit{reactive} planning objective allowing the ego-agent to jointly reason about its own plans and how other actors will react to them. We formulated the problem with a deep energy-based model that enables us to explicitly model trajectory goodness as well as the interaction cost between actors. Our experiments showed that our reactive model outperforms the non-reactive model in various highly interactive simulation scenarios without trading off collision rate. Moreover, we outperform or are competitive with the state-of-the-art on prediction metrics.
\section{Experiments} \label{sec:experiments}
We demonstrate the effectiveness of our reactive planning objective in two closed-loop driving simulation settings: \textit{real-world traffic scenarios} with our in-house simulator (Simba), and \textit{synthetically generated dense traffic} with the open-source CARLA simulator \cite{dosovitskiy_carla}. We set up a large number of complex and highly interactive scenarios, ranging from lane changes to (unprotected) turns, in order to tease apart the differences between reactive and non-reactive models.
To better showcase the importance of reactivity, we created a non-reactive variant of our model by defining the planning costs as $ f_{\text{nonreactive}} = {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})]$, which uses a prediction model unconditioned on the SDV trajectory. This non-reactive assumption reduces the joint cost, up to terms independent of ${\mathbf{y}}_0$, to $C_{\text{traj}}({\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}}) + {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[\sum_{i=1}^N{C_{\text{inter}}({\mathbf{y}}_0, {\mathbf{y}}_i)]}$; the relevant terms are just the SDV-specific trajectory cost and the SDV/actor interaction costs.
Our results show a key insight:
our pure reactive model
alone achieves a higher success rate compared to the non-reactive model without
trading off collision rate, implying it is able to effectively consider the reactive behavior of other
actors and formulate a goal-reaching plan without being unreasonably aggressive. Moreover, we justify the choice of a deep structured model by demonstrating that when our model is used for actor trajectory prediction, it is competitive with the state-of-the-art in both CARLA and Nuscenes \cite{caesar_nuscenes}.
\subsection{Experimental Setup}
\subsubsection{Training Datasets}
Since our closed-loop evaluations are in Simba and CARLA, our models are trained on datasets from the corresponding domains. Our Simba model is trained on a large-scale, real-world dataset collected through our self-driving vehicles in numerous North American cities, which we call \textbf{UrbanCity}. The dataset consists of over 6,500 snippets of approximately 25 seconds from over 1,000 different trips, with 10Hz LiDAR and HD map data, which are included in the input context ${\mathcal{X}}$ in addition to the past trajectories of each actor.
Meanwhile, the CARLA simulated dataset is a publicly available dataset \cite{rhinehart_precog}, containing 60k training sequences. The input to the model for CARLA consists of rasterized LiDAR features and 2s of past trajectories to predict 4s future trajectories.
\subsubsection{Simulation Setup}
Simba runs at 10Hz and leverages a realistic LiDAR simulator
\cite{manivasagam_lidarsim} to generate LiDAR sweeps around the
ego-agent at each timestep. HD map polygons are available per scenario. We first set up 12 different interactive ``template'' scenarios: we select these templates by analyzing logs in the validation set of UrbanCity and selecting a start time with a high degree of potential interaction with other actors. We set a goal state for the ego-agent, for instance a turn or a lane merge, and initialize actor positions according to their positions in the log at the start time. We then generate 25 distinct scenarios per template by perturbing the initial position and velocity of each actor, for a total of 50 validation/250 test scenarios. During simulation, each actor behaves according to a heuristic car-following model that performs basic hazard detection.
In CARLA we leverage the synthetic LiDAR sensor as input.
Rather than initializing scenarios from ground-truth data, we manually create 6 ``synthetic'' template scenarios containing dense traffic, and spawn actors at specified positions with random perturbations. We extend the \texttt{BasicAgent} class provided in CARLA 0.9.5 as the actor model per agent, which performs basic route following to a goal and hazard detection. We generate 50 validation/100 test scenarios by perturbing the initial position, vehicle type, and hazard detection range of each actor.
Scenarios in all settings are run for a set timer. The scenario completes if
1) the ego-agent has reached the goal, 2) the timer has expired, or 3) the ego-agent has collided.
\subsubsection{Closed-Loop Metrics}
The output metrics include: 1) success rate (whether the ego-agent successfully completed the lane change or turn), 2) time to completion (TTC), 3) collision rate, and 4) number of actor brake events; brake events are available in CARLA but not in Simba.
\subsection{Reactive/Non-Reactive Simulation Results}
The pure reactive model outperforms the non-reactive model on success rate, time to completion, and goal distance, with no difference in collision rate (Tab. \ref{tab:overall_sim}), on both Simba and CARLA.
This implies that by considering the reactivity of other actors in its planning objective, the reactive model can more efficiently navigate to the goal in a highly-interactive environment, without performing overly aggressive behaviors that would result in a higher collision rate.
Moreover, we also note that both the reactive and non-reactive models within our joint structured framework outperform a strong joint prediction and planning model, PRECOG \cite{rhinehart_precog}; we present our PRECOG implementation and visualizations in the supplementary material.
\begin{table}[]
\centering
\scalebox{0.8}{
\begin{tabular}{@{}lllllll@{}}
\toprule
Nuscenes & KDE & DESIRE \cite{lee_desire} & SocialGAN \cite{gupta_socialgan} & R2P2 \cite{rhinehart_r2p2} & ESP \cite{rhinehart_precog} & Ours \\ \midrule
5 agents & 52.071 & 6.575 & 3.871 & 3.311 & 2.892 & \textbf{ 2.610} \\ \bottomrule
\end{tabular}}
\caption{Nuscenes prediction performance (5 nearest, minMSD, K=12).}
\label{tab:exp_openloop_nusc}
\end{table}
\subsection{Qualitative Results}
To complement the quantitative metrics, we provide scenario visualizations. In Fig. \ref{fig:supp_qual_carla1}, we present a lane merge scenario in Simba to better highlight the difference between the reactive and non-reactive models in a highly complex, interactive scenario. We provide simulation snapshots at $t=0,1,2,3$ seconds.
Note that the reactive model is able to take decisive action and complete the lane merge; the neighboring actor slows down with adequate buffer to let the ego-agent pass. Meanwhile, the non-reactive agent does not complete a lane merge but drifts slowly to the left side of the lane over time. We provide several more comparative visualizations of various scenarios in both Simba and CARLA in our supplementary document and video.
\subsection{Prediction Metrics}
To validate the general performance of our joint structured framework, we compute actor predictions with our model by using Loopy Belief Propagation to compute unconditioned actor marginals $p({\mathbf{y}}_{i})$, and compare against the state-of-the-art on standard prediction benchmarks in Tab. \ref{tab:exp_openloop_carla} and Tab. \ref{tab:exp_openloop_nusc}: the CARLA PRECOG dataset and the Nuscenes dataset \cite{caesar_nuscenes} (note that a separate model was trained for Nuscenes). We report minMSD \cite{rhinehart_precog}, the minimum mean squared distance between a sample of predicted/planned trajectories and the ground truth. As shown, our method is competitive with or outperforms prior methods in minMSD. Similar to the findings of DSDNet
\cite{zeng_dsdnet}, this implies that an energy-based model relying on discrete trajectory samples per actor is able to make accurate trajectory predictions for each actor.
\subsection{Training Loss Functions}
\begin{table}[]
\centering
\scalebox{0.95}{
\begin{tabular}{ccccc}
\toprule
Loss & Pred FDE (3s) & Actor CR & Plan FDE (3s) & Ego CR \\ \midrule
Cross-entropy & 1.47 & 0.55\% & 2.01 & 0.24\% \\
Chen et al. \cite{chen_deepstruct} & 1.30 & 0.67\% & 2.10 & 0.22\% \\
Ours & \textbf{1.24} & \textbf{0.44\%} & \textbf{1.73} & \textbf{0.18\%} \\
\bottomrule
\end{tabular}}
\caption{ Ablation Study comparing training losses on UrbanCity. For all metrics, lower is better.}
\label{tab:train_ablate}
\end{table}
We also perform an ablation study on the UrbanCity validation set, comparing our proposed training loss against a vanilla cross-entropy loss (no ignore set) as well as the approach of Chen et al. \cite{chen_deepstruct}. Tab. \ref{tab:train_ablate} shows that our approach achieves the lowest Final Displacement Error (FDE) for both the SDV and the other actors, as well as the lowest collision rates, both among actors and between the ego-agent and ground-truth actors.
\section{Introduction}
Self-driving vehicles (SDVs) face many challenging situations when dealing with complex dynamic environments.
Consider a scenario where an SDV is trying to merge left into a lane that is currently blocked by traffic. The SDV cannot reasonably merge by simply waiting; it could be waiting for quite a while and inconvenience the cars behind it.
On the other hand, it cannot aggressively merge into the lane disregarding the lane congestion, as this will likely lead to a collision.
A human driver in this situation would think that if they gently nudge, other vehicles will have enough time to react without major inconvenience or safety risk, resulting in a successful and safe merge.
While this is just one example, similar situations arise frequently, for instance during rush hour, in downtown areas, or when merging onto a highway ramp.
The key idea here is that the human driver cannot be entirely \textit{passive} with respect to the dynamic multi-actor environment; they must exercise some degree of control by reasoning about how other actors will \textit{react} to their actions.
Of course, the driver cannot use this control \textit{selfishly}; they must act in a responsible manner to maximize their own utility while minimizing the risk/inconvenience to others.
This complex reasoning is, however, seldom used in self-driving approaches.
Instead, the autonomy stack of an SDV is composed of a set of modules executed one after another. The SDV first detects other actors in the scene (\textit{perception}) and predicts their future trajectories (\textit{prediction}). Given the output of perception and prediction, it plans a trajectory towards its intended goal, which is then executed by the control module. This implies that behavior forecasts of other actors are not affected by the SDV's own plan; the SDV is a passive actor assuming a stochastic world that it cannot change.
As a consequence it might struggle when planning in high-traffic scenarios.
In this paper we refer to prediction unconditioned on planning as \textit{non-reactive}.
Recently, there has been a line of work that
identifies similar issues and tries to incorporate how the ego-agent affects other
actors into the planning process; for instance, via game-theoretic
planning \cite{sadigh_effectshuman, sadigh_infogathering,
fisac_hierarchicalgame} and reinforcement learning \cite{saxena_densemfrl,
bouton_carl}.
Yet these works rely on assumptions about a hand-picked prediction model or manually-tuned planning reward, which may not fully model real-world actor dynamics or human-like behaviors.
Thus there is a need for a more general approach to the problem.
Towards this goal, we propose a novel \textbf{\textit{joint}} prediction and planning framework that can perform \textbf{reactive} planning.
Our approach is based on cost minimization for planning where we predict the actor reactions to the potential ego-agent plans for costing the ego-car trajectories.
We formulate the problem as a \textbf{deep structured model} that defines a set of \textbf{learnable costs} across the future trajectories of all actors; these costs in turn induce a joint probability distribution over these actor future trajectories.
A key advantage is that our model can be used jointly for prediction (with derived probabilities) and planning (with the costs).
Another key advantage is that our structured formulation allows us to explicitly model interactions between actors and ensure a higher degree of safety in our planning.
We evaluate our reactive model as well as a non-reactive variant in a variety of highly interactive, complex closed-loop simulation scenarios, consisting of lane merges and turns in the presence of other actors.
Our simulation settings involve both real-world traffic as well as synthetic dense traffic settings.
Importantly, we demonstrate that using a reactive objective can more effectively and efficiently complete these complex maneuvers without trading off safety.
Moreover, we validate the choice of our learned joint structured model by demonstrating that it is competitive or outperforms prior works in open-loop prediction tasks.
\section{Joint Reactive Prediction and Planning}
\begin{figure}
\includegraphics[width=\linewidth]{figures/struct_overview4.pdf}
\caption{Overview of the actor-specific and interaction energy terms in our joint structured model.}
\label{fig:struct_overview}
\end{figure}
Suppose the SDV is driving in a scenario where there are $N$ actors. Let $\mathcal{Y} = ({\mathbf{y}}_0, {\mathbf{y}}_1, \cdots, {\mathbf{y}}_N)$ be the set of random
variables representing future trajectories of both the SDV, ${\mathbf{y}}_0$, and all other traffic participants, ${\mathcal{Y}}_r = ({\mathbf{y}}_1, \cdots, {\mathbf{y}}_N)$.
We define a \textit{reactive} planner as one that considers actor predictions ${\mathcal{Y}}_r$ conditioned on ${\mathbf{y}}_0$ in the planning objective, and a \textit{non-reactive} planner as one that assumes a prediction model which is independent of ${\mathbf{y}}_0$.
In this section, we first outline the framework of our joint structured model
which simultaneously models the costs and probability distribution of the future (Sec.
\ref{sec:struct_model}).
We then introduce our reactive objective which
enables us to safely plan under such a distribution considering the
reactive behavior of other agents (Sec. \ref{sec:plan_objective}) and discuss how to evaluate it (Sec. \ref{sec:inference}), including with a goal-based extension (Sec. \ref{sec:goal_energy}). We finally describe our training procedure (Sec. \ref{sec:training}). We highlight additional model properties, such as interpolation between our non-reactive/reactive objectives, in our supplementary.
\subsection{Structured Model for Joint Perception and Prediction} \label{sec:struct_model}
We define a probabilistic deep structured model to represent the distribution
over the future trajectories of the actors conditioned on the environment context ${\mathcal{X}}$
as follows
\begin{align}
p({\mathcal{Y}} | {\mathcal{X}}; {\mathbf{w}}) = \frac{1}{Z} \exp(-C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})) \label{eq:joint_struct}
\end{align}
where $Z$ is the partition function, $C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})$ defines the joint energy of all future trajectories ${\mathcal{Y}}$ and ${\mathbf{w}}$ represents all the parameters of the model.
In this setting, the context ${\mathcal{X}}$ includes each actor's past trajectories, LiDAR
sweeps and HD maps, represented by a birds-eye view (BEV) voxelized tensor
representation \cite{casas_intentnet, luo_faf}.
Actor trajectories ${\mathcal{Y}}$ are naturally represented in continuous space. However, performing inference on continuous structured models is extremely challenging. We thus follow \cite{zeng_dsdnet,phan2020covernet} and discretize each actor's action space into $K$ possible trajectories (each itself continuous) using a realistic trajectory sampler inspired by \cite{zeng_dsdnet}, which takes past positions as input and samples a set of straight lines, circular curves, and Euler spirals as future trajectories.
Thus, each ${\mathbf{y}}_i$ is a discrete random variable that can take on one of $K$ options, where each option is a full continuous trajectory; such a discretized distribution allows us to efficiently compute predictions (see Sec. \ref{sec:inference}). Additional details on the input representation and trajectory sampling are provided in the supplementary material.
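As a rough illustration, a simplified version of such a sampler can be sketched as constant-curvature rollouts (straight lines and circular arcs; the full sampler also includes Euler spirals). All parameter ranges below are illustrative assumptions, not the values used in our implementation.
\begin{verbatim}
import numpy as np

def sample_trajectories(pos, heading, speed, K=50, T=8, dt=0.5, rng=None):
    """Sample K candidate futures as constant-curvature rollouts.
    Returns an array of shape (K, T, 2).  Illustrative only: the full
    sampler also includes Euler spirals."""
    rng = rng or np.random.default_rng()
    trajs = np.zeros((K, T, 2))
    for k in range(K):
        kappa = rng.uniform(-0.1, 0.1)   # curvature [1/m]; 0 gives a line
        accel = rng.uniform(-4.0, 2.0)   # longitudinal acceleration [m/s^2]
        x, y, th, v = pos[0], pos[1], heading, speed
        for t in range(T):
            v = max(0.0, v + accel * dt)    # no reversing
            th = th + kappa * v * dt        # heading follows curvature
            x, y = x + v * np.cos(th) * dt, y + v * np.sin(th) * dt
            trajs[k, t] = (x, y)
    return trajs
\end{verbatim}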
We decompose the joint energy $C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})$ into an \textbf{actor-specific} energy, which encodes the cost of a given trajectory for each actor, and an \textbf{interaction} energy, which captures the plausibility of trajectories across pairs of actors:
\begin{align}
C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}}) = & \sum_{i=0}^N C_{\text{traj}}({\mathbf{y}}_i, {\mathcal{X}}; {\mathbf{w}}) + \sum_{i,j} C_{\text{inter}}({\mathbf{y}}_i, {\mathbf{y}}_j)
\end{align}
We exploit a learnable neural network to compute the {\bf actor-specific energy},
$C_{\text{traj}}({\mathbf{y}}_i, {\mathcal{X}}; {\mathbf{w}})$, parameterized with weights ${\mathbf{w}}$. A convolutional network takes as input the context feature ${\mathcal{X}}$ as a rasterized BEV 2D tensor grid centered around the ego-agent, and produces an intermediate spatial feature map $\mathbf{F} \in \mathbb{R}^{h \times w \times c}$, where $h,w$ represent the dimensions of the feature map (downsampled from the input grid), and $c$ represents the number of channels.
These features are then combined with the candidate trajectories ${\mathbf{y}}_i$ and processed through an MLP, outputting an $(N+1) \times K$ matrix of trajectory scores, one per actor trajectory sample.
Our \textbf{interaction energy} is a combination of collision and safety-distance violation costs.
We define the collision energy to be $\gamma$ if a pair of future trajectories collide and 0 otherwise.
Following \cite{sadat_plt}, we define the safety-distance violation as a squared penalty within some safety distance of each actor's bounding box, scaled by the speed of the SDV. In our setting, we set the safety distance to 4 meters from other vehicles.
Fig. \ref{fig:struct_overview} gives a graphic representation of the two energy terms.
Full model details are in the supplementary, including the specific dataset-dependent input representation and model architecture.
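For concreteness, a schematic implementation of the pairwise interaction energy is shown below; it substitutes circular footprints for the bounding boxes used in our model, and the penalty constants are illustrative assumptions.
\begin{verbatim}
import numpy as np

GAMMA = 100.0   # collision penalty (illustrative value)
D_SAFE = 4.0    # safety distance from other vehicles [m]

def interaction_energy(traj_i, traj_j, speed_i, radius=1.5):
    """Schematic pairwise energy between two (T, 2) trajectories: a
    constant collision cost plus a squared safety-distance violation
    scaled by the SDV speed.  Circles stand in for bounding boxes."""
    d = np.linalg.norm(traj_i - traj_j, axis=-1)   # distance per timestep
    if np.any(d < 2 * radius):                     # footprints overlap
        return GAMMA
    violation = np.maximum(0.0, D_SAFE - (d - 2 * radius))
    return float(speed_i * np.sum(violation ** 2)) # squared penalty
\end{verbatim}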
\subsection{Reactive Inference Objective} \label{sec:plan_objective}
The structured model defines both a set of costs and probabilities over possible futures.
We develop a planning policy on top of this framework which decides what the
ego-agent should do in the next few seconds (i.e., planning horizon).
Our reactive
planning objective is based on an optimization formulation which finds the trajectory
that minimizes a set of planning costs -- these costs consider both the candidate SDV trajectory as well as other actor predictions conditioned on the SDV trajectory.
In contrast to existing literature, we re-emphasize that both prediction and planning components of our objective are derived from the same set of learnable costs in our structured model, removing the need to develop extraneous components outside this framework; we demonstrate that such a formulation inherently considers both the reactivity and safety of other actors.
We define our planning objective as
\begin{align}
{\mathbf{y}}_0^* = \text{argmin}_{{\mathbf{y}}_0} f({\mathcal{Y}}, {\mathcal{X}};{\mathbf{w}}) \label{eq:cond_0}
\end{align}
where ${\mathbf{y}}_0$ is the ego-agent future trajectory and $f$ is the planning cost function defined over our structured model.
In our reactive setting, we define the planning costs to be an expectation of the joint energies, over the distribution of actor predictions conditioned on the current candidate SDV trajectory:
\begin{align}
f({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}}) = {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}})}[C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})] \label{eq:cond_1}
\end{align}
Note that ${\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}})$ describes the future distribution of other actors, conditioned on the current candidate trajectory ${\mathbf{y}}_0$ and is derived from the underlying joint distribution in Eq. (\ref{eq:joint_struct}). Meanwhile, the $C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})$ term represents the joint energies of a given future configuration of joint actor trajectories. We can expand the planning objective by decomposing the joint energies into the actor-specific and interaction terms as follows:
\begin{small}
\begin{align}
C_{\text{traj}}({\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}}) &+ {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}})}[\sum_{i=1}^{N}C_{\text{inter}}({\mathbf{y}}_0, {\mathbf{y}}_i) + \\
& \sum_{i=1}^{N}C_{\text{traj}}({\mathbf{y}}_i, {\mathcal{X}}; {\mathbf{w}}) + \sum_{i=1,j=1}^{N,N}C_{\text{inter}}({\mathbf{y}}_i, {\mathbf{y}}_j)] \nonumber
\end{align}
\end{small}
The set of costs includes the SDV-specific cost, outside the expectation, as well as the SDV/actor interaction costs, the actor-specific costs, and the actor/actor interaction costs within the expectation. Note that the SDV-specific cost $C_{\text{traj}}({\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}})$ uses a different set of parameters from those of the other actors ${\mathbf{y}}_i$ to better exploit the ego-centric sensor data and model SDV-specific behavior. Moreover, the actor-specific and interaction costs within the expectation lead to an inherent balancing of additional control with additional responsibility: by explicitly modeling the reactive prediction distribution of other actors, we must also take their utilities into account.
In the following, we further exclude the last energy term $ \sum_{i,j}C_{\text{inter}}({\mathbf{y}}_i, {\mathbf{y}}_j)$ for computational reasons; see the supplementary material for details.
\subsection{Inference for Conditional Planning Objective} \label{sec:inference}
Due to our discrete setting and the nature of actor-specific and interaction costs, for any given ${\mathbf{y}}_0$, we can directly evaluate the expectation from Eq. (\ref{eq:cond_1}) without the need for Monte-Carlo sampling.
We thus have
\begin{align}
f &= C_{\text{traj}}^{{\mathbf{y}}_0} + \sum_{{\mathcal{Y}}_r}{p_{{\mathcal{Y}}_r | {\mathbf{y}}_0}}[ \sum_{i=1}^{N}C_{\text{inter}}^{{\mathbf{y}}_0, {\mathbf{y}}_i} + \sum_{i=1}^{N}C_{\text{traj}}^{{\mathbf{y}}_i}]
\end{align}
where $p_{{\mathbf{y}}_i|{\mathbf{y}}_0}$ is shorthand for $p({\mathbf{y}}_i | {\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}})$, and $C_{\text{traj}}^{{\mathbf{y}}_i}$ for $C_{\text{traj}}({\mathbf{y}}_i, {\mathcal{X}}; {\mathbf{w}})$ (and similarly for the pairwise terms).
Since the joint probabilities factorize over the actor-specific and pairwise interaction energies, they simplify into the marginal and pairwise marginal probabilities between all actors.
\begin{align}
f &= C_{\text{traj}}^{{\mathbf{y}}_0} + \sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i|{\mathbf{y}}_0}C_{\text{inter}}^{{\mathbf{y}}_0, {\mathbf{y}}_i} + \sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i|{\mathbf{y}}_0}C_{\text{traj}}^{{\mathbf{y}}_i}
\end{align}
where
$p_{{\mathbf{y}}_i|{\mathbf{y}}_0}$ represents the marginal probability of the actor trajectory conditioned on the candidate ego-agent trajectory.
These conditional marginal probabilities, which form a tensor of size $N \times K \times K$, can all be efficiently approximated by exploiting Loopy Belief Propagation (LBP) \cite{yedidia_gbp}.
This in turn allows efficient batch evaluation of the planning objective: for every sample of every actor ($N \times K$ samples), evaluate the conditional marginal probability times the corresponding energy term.
Note that LBP can also be interpreted as a special form of recurrent network, and thus is amenable to end-to-end training.
Then, since the ego-agent itself has $K$ trajectories to choose from, solving the minimization problem in (\ref{eq:cond_0}) involves simply picking the trajectory with the minimum planning cost.
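Concretely, given the conditional marginals approximated by LBP, the planning objective can be evaluated for all $K$ SDV candidates in one batched operation. The following is a minimal NumPy sketch, assuming all cost tables are precomputed (the actor/actor interaction term is excluded, as discussed in Sec. \ref{sec:plan_objective}).
\begin{verbatim}
import numpy as np

def reactive_plan(c_sdv, c_actor, c_inter, p_cond):
    """Evaluate the reactive objective for all K SDV candidates in batch.
    c_sdv:   (K,)       SDV-specific trajectory costs
    c_actor: (N, K)     actor-specific trajectory costs
    c_inter: (K, N, K)  SDV/actor interaction costs
    p_cond:  (K, N, K)  conditional marginals p(y_i = b | y_0 = a),
                        approximated with Loopy Belief Propagation
    Returns the index of the minimum-cost SDV trajectory."""
    # Expectation over actor reactions, conditioned on each SDV candidate.
    expected = np.einsum('anb,anb->a', p_cond, c_inter + c_actor[None])
    f = c_sdv + expected       # planning cost per SDV candidate
    return int(np.argmin(f))   # pick the minimum-cost trajectory
\end{verbatim}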
\begin{table}[]
\centering
\scalebox{0.83}{
\begin{tabular}{c|ccccc}
\toprule
Model & Succ (\%) $\uparrow$ & TTC (s) $\downarrow$ & Goal (m) $\downarrow$ & CR (\%) $\downarrow$ & Brake $\downarrow$ \\ \midrule
PRECOG (C) & 12.0 & 16.3 & 13.5 & 18.0 & 39.2 \\
Non-Reactive (C) & 46.0 & 15.8 & 4.2 & 5.0 & \textbf{34.4} \\
\textbf{Reactive } (C)& \textbf{70.0} & \textbf{13.9} & \textbf{2.4} & 5.0 & 37.8 \\ \midrule
PRECOG (S) & 21.0 & 9.4 & 16.8 & 20.5 & - \\
Non-Reactive (S) & 70.0 & 7.5 & 5.3 & 3.5 & - \\
\textbf{Reactive} (S) & \textbf{82.0} & \textbf{6.8} & \textbf{4.3} & 3.5 & - \\
\bottomrule
\end{tabular}}
\caption{Results obtained from simulations in Simba/CARLA. C = CARLA, S = Simba. }
\label{tab:overall_sim}
\end{table}
\subsection{Goal Energy} \label{sec:goal_energy}
Similar to \cite{rhinehart_precog, rhinehart_deepimit}, we observe that our formulation, which encodes both actor behavior and desirable SDV behavior in the energies of our structured model, can be extended to goal-directed planning, flexibly achieving arbitrary goals at inference time.
In addition to the learned ego-agent cost $C_{\text{traj}}^{{\mathbf{y}}_0}$, we can specify a goal state $\mathcal{G}$ in each scenario and encourage the
ego-agent to reach the goal state via a goal energy $C_{\text{goal}}^{{\mathbf{y}}_0}$. The goal state can take on different forms depending on the scenario:
in the case of a turn, $\mathcal{G}$ is a target position.
In the case of a lane change, $\mathcal{G}$ is a polyline representing the centerline of the target lane in continuous coordinates. In particular, we define the goal energy term $C_{\text{goal}}^{{\mathbf{y}}_0}$ as follows: if $\mathcal{G}$ is a single point, the energy is the $\ell_2$ distance from the final trajectory waypoint to $\mathcal{G}$; if $\mathcal{G}$ represents a lane, the energy is the average projected distance of the trajectory to the lane polyline.
We add this goal energy to the conditional planning objective during inference.
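A sketch of the two goal-energy variants is given below, assuming trajectories and goals are expressed as NumPy arrays in ego-centric coordinates.
\begin{verbatim}
import numpy as np

def goal_energy(traj, goal):
    """Goal energy for a (T, 2) SDV trajectory.  `goal` is either a 2D
    target point or an (M, 2) lane-centerline polyline."""
    goal = np.asarray(goal, dtype=float)
    if goal.ndim == 1:   # single target position: L2 of final waypoint
        return float(np.linalg.norm(traj[-1] - goal))
    # Lane goal: average distance from each waypoint to its projection
    # onto the nearest polyline segment.
    a, b = goal[:-1], goal[1:]                    # segment endpoints
    ab = b - a
    t = ((traj[:, None] - a[None]) * ab[None]).sum(-1)
    t = np.clip(t / (ab * ab).sum(-1), 0.0, 1.0)  # projection parameter
    proj = a[None] + t[..., None] * ab[None]      # (T, M-1, 2)
    d = np.linalg.norm(traj[:, None] - proj, axis=-1).min(-1)
    return float(d.mean())
\end{verbatim}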
\newcommand{\figcap}[3]{
\begin{overpic}[width=0.24\textwidth, trim={#2},clip]{#1}
\put(0,2){\sffamily \scriptsize \colorbox{gray}{\color{white} #3}}
\end{overpic}
}
\begin{figure*}[h]
\centering
\def0.24\textwidth{0.24\textwidth}
\setlength{\tabcolsep}{0.2pt}
\begin{tabular}{cccc}
\figcap{figures/qual/simba_lm1_r0.png}{300 275 500 225}{Reactive, t=0s} &
\figcap{figures/qual/simba_lm1_r1.png}{300 275 500 225}{Reactive, t=1s} &
\figcap{figures/qual/simba_lm1_r2.png}{300 275 500 225}{Reactive, t=2s} &
\figcap{figures/qual/simba_lm1_r3.png}{300 275 500 225}{Reactive, t=3s} \\
\figcap{figures/qual/simba_lm1_nr0.png}{300 275 500 225}{Non-Reactive, t=0s} &
\figcap{figures/qual/simba_lm1_nr1.png}{300 275 500 225}{Non-Reactive, t=1s} &
\figcap{figures/qual/simba_lm1_nr2.png}{300 275 500 225}{Non-Reactive, t=2s} &
\figcap{figures/qual/simba_lm1_nr3.png}{300 275 500 225}{Non-Reactive, t=3s} \\
\end{tabular}
\caption{Visualization of a Simba lane merge for reactive (top) and non-reactive (bottom) models at four time steps: 0s, 1s, 2s, 3s (left to right). The SDV is in green, other actors are in blue/purple, and the goal lane is in cyan. The reactive model decisively completes the lane merge, while the non-reactive model does not.
}
\label{fig:supp_qual_carla1}
\end{figure*}
\subsection{Learning} \label{sec:training}
We train our joint structured model given observed ground-truth trajectories for the ego-car and all other agents in the scene.
We want to learn the model energies such that they induce both optimal plans for the ego-agent and accurate probabilities for the actor behaviors.
Since the model energies induce a probability distribution used in our prediction model, this implies that minimizing the cross-entropy between our predictive distribution and the ground-truth trajectories will also learn a good set of costs for planning.
To this end, we minimize the following cross-entropy loss function:
\begin{small}
\begin{align}
\mathcal{L} &= \sum_{i} \mathcal{L}_{i} + \sum_{i,j}\mathcal{L}_{i,j} \\
\mathcal{L}_{i} &= -\frac{1}{K} \sum_{{\mathbf{y}}_i \notin \Delta({\mathbf{y}}_i^*)} { p_{\text{g.t.}}({\mathbf{y}}_i) \log p({\mathbf{y}}_i | {\mathcal{X}}; {\mathbf{w}}) } \\
\mathcal{L}_{i,j} &= -\frac{1}{K^2} \sum_{{\mathbf{y}}_{i} \notin \Delta({\mathbf{y}}_i^*), {\mathbf{y}}_{j} \notin \Delta({\mathbf{y}}_j^*)} {p_{\text{g.t.}}({\mathbf{y}}_i, {\mathbf{y}}_j) \log p({\mathbf{y}}_i, {\mathbf{y}}_j | {\mathcal{X}}; {\mathbf{w}}) }
\end{small}
where $p({\mathbf{y}}_i | {\mathcal{X}}; {\mathbf{w}})$ and $p({\mathbf{y}}_i, {\mathbf{y}}_j | {\mathcal{X}}; {\mathbf{w}})$ denote the marginal and pairwise marginal probabilities for every actor including the ego-agent, and $p_{\text{g.t.}}$ is the indicator function that is zero everywhere except where ${\mathbf{y}}_i, {\mathbf{y}}_j$ equal the ground truth ${\mathbf{y}}_i^*, {\mathbf{y}}_j^*$. Recall that these marginals are computed through Loopy Belief Propagation, a differentiable iterative message-passing procedure. Note that our method has a subtle but important distinction from the raw cross-entropy loss: $\Delta({\mathbf{y}}_i^*)$ is defined as the set of $k$ non-ground-truth trajectories for actor $i$ closest to ${\mathbf{y}}_i^*$ in $\ell_2$ distance, and we only compute the cross-entropy loss for trajectories outside this set.
We adopt this formulation since any trajectory within $\Delta$ can reasonably be considered a ground-truth substitute, and hence we do not wish to penalize the probabilities of these trajectories.
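One plausible implementation of this masked loss is sketched below; here a per-actor softmax over sample scores stands in for the LBP marginals, and excluding $\Delta$ from the normalization is our reading of how the ignore set avoids penalizing near-ground-truth samples.
\begin{verbatim}
import torch

def masked_xent(scores, gt_idx, traj_dists, k_ignore=3):
    """Unary cross-entropy with an ignore set Delta (a sketch).
    scores:     (N, K) negative energies -C_traj per actor sample
    gt_idx:     (N,)   index of the ground-truth-matched sample
    traj_dists: (N, K) L2 distance of each sample to the ground truth"""
    d = traj_dists.clone()
    d.scatter_(1, gt_idx[:, None], float('inf'))  # GT is never in Delta
    ignore = d.topk(k_ignore, dim=1, largest=False).indices
    masked = scores.clone()
    masked.scatter_(1, ignore, float('-inf'))     # drop Delta from softmax
    log_p = masked.log_softmax(dim=1)             # marginals over kept samples
    return -log_p.gather(1, gt_idx[:, None]).mean()  # NLL of ground truth
\end{verbatim}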
\section{Related Work}
\paragraph{Prediction}
The prediction task, also referred to as motion forecasting, aims to predict the future states of each agent given the past.
Early
methods have used physics-based models to unroll the past actor
states \cite{welch_kalmanfilter, lefevre_pred_survey, cosgun_fullauto}.
This field has exploded in recent years thanks to the advances in deep learning.
One area of work in this space is to perform prediction (often jointly with detection) through rich unstructured sensor and map data as context \cite{luo_faf, casas_intentnet, zeng_nmp, liang_pnpnet, liang_lanegcn,li_e2epnptransformer}, starting with LiDAR context \cite{luo_faf} to map rasterizations \cite{casas_intentnet, djuric_shortmotion, cui_pred_dcn}, to lane graphs \cite{liang_lanegcn, gao_vectornet}.
Modeling the future motion with a multi-modal
distribution is of key importance given the inherent future uncertainty \cite{casas_intentnet,
rhinehart_r2p2, rhinehart_precog, tang_mfp, hong_ror, zeng_dsdnet,
chai_multipath} and the sequential nature of trajectories \cite{rhinehart_precog, tang_mfp}.
Recent works also model
interactions between actors \cite{casas_spagnn, rhinehart_precog, tang_mfp,
lee_desire, casas_ilvm,li_e2epnptransformer}, mostly through graph neural networks. In our work, we tackle the multi-modal and interactive
prediction with a joint structured model, through which we can efficiently
estimate probabilities.
\paragraph{Motion planning} Given observations of the environment and
predictions of the future, the purpose of motion planning is to find a safe and comfortable
trajectory towards a specified goal.
Sample-based planning is a popular paradigm due to its low latency, where first a large set of trajectory candidates are sampled and evaluated based on a pre-defined cost function, and then the minimal cost trajectory is chosen to be executed.
Traditionally, such a cost function is hand-crafted to reflect our
prior knowledge \cite{fan_baidump, ziegler_bertha, montemerlo_junior,
buehler_darpa, bandyopadhyay_intentmp}. More recently, learning-based cost functions
also show promising results. Those costs can be learned through either Imitation
Learning \cite{sadat_plt} or Inverse Reinforcement Learning
\cite{ziebart_maxent_irl}.
In most of these systems, predictions are made independently of planning. While there has been recent work on accounting for actor reactivity in the planning process \cite{sun_courteous, sadigh_effectshuman, sadigh_infogathering, fisac_hierarchicalgame}, such works still rely on hand designed rewards or prediction models which may have difficulty accounting for all real-world scenarios in complex driving situations.
\paragraph{Neural end-to-end motion planning}
The traditional compartmentalization of prediction and planning results in the following
issues: First, hooking up both modules may result in a large system that can be
prohibitively slow for online settings. Second, classical planning usually
assumes predictions to be very accurate or errors
to be normally distributed, which is not realistic in practice. Third, the sequential
order of prediction and planning makes it difficult to model the interactions
between the ego-agent and other agents while making decisions.
To address these issues, prior works have started exploring
end-to-end planning approaches integrating perception, prediction and planning into
a holistic model. Such methods can enjoy fast
inference speed, while capture prediction uncertainties and model
prediction-planning interactions either implicitly or explicitly. One popular
way is to map sensor inputs directly to control commands via neural nets.
\cite{pomerleau_alvinn, bojarski_e2edrive, codevilla_e2e_condimit, muller_drivepolicy, bansal_chauffeurnet}.
However, such methods lack interpretability, and their safety is hard to verify.
Recent works have proposed neural motion planners that produce interpretable intermediate representations, in the form of non-parametric cost maps \cite{zeng_nmp}, occupancy maps \cite{sadat_p3}, or affordances \cite{sauer_affordance}.
The most closely related work to ours, DSDNet \cite{zeng_dsdnet}, outputs a structured model representation similar to our setting; however, DSDNet still follows the traditional pipeline of separating prediction from planning, and thus cannot perform reactive planning.
The two closest related works on modeling multi-agent predictions during end-to-end planning are PRECOG \cite{rhinehart_precog}, and PiP \cite{song_pip}.
PiP is a prediction model that generates joint actor predictions conditioned on the known future ego-agent trajectory, assuming planning is solved.
However, in the real world, finding the future ego-trajectory (planning) is itself a challenging open problem, since the ego-trajectory depends on the other actors, creating a complicated feedback loop between planning and prediction.
The PRECOG planning objective accounts for joint reactivity under a flow-based \cite{rezende_flow} framework, yet it requires sequential decoding for planning and prediction, making it hard to satisfy low-latency online requirements. Moreover, its planning objective does not ensure collision avoidance and can suffer from mode collapse in the SDV trajectory space.
\paragraph{Structured Models}
Researchers have applied neural nets to learn the parameters of undirected graphical models, also known as Markov Random Fields (MRFs). One of the key challenges in training MRFs in the discrete setting is the computation of the partition function, where the number of states increases exponentially with the number of nodes in the worst case. Message-passing algorithms such as Loopy Belief Propagation (LBP) have been found to approximate the partition function well in practice \cite{yedidia_gbp, mceliece_turbo}.
Other learning based methods have included dual minimization \cite{chen_deepstruct}, directly optimizing the Bethe Free Energy \cite{wiseman_abfem}, finding variational approximations \cite{kuleshov_nvi} or mean-field approximations \cite{schwing_fc_dsn}.
Some approaches take an energy-minimization approach, unrolling the inference objective through differentiable optimization steps \cite{belanger_spen, wang_crf_polynomial} that can also be used to learn the model parameters \cite{belanger_spene2e, wang_proximaldsm}.
\section{PRECOG Details}
Here, we provide more details regarding our PRECOG implementation
\cite{rhinehart_precog}. PRECOG is a planning objective based on a conditional
forecasting model called Estimating Social-forecast Probability (ESP). We first
implement ESP and verify that it reproduces prediction metrics provided by the
authors in CARLA (also see Table III in the main paper). We then attach the
PRECOG planning objective on top.
\subsection{ESP Architecture}
The ESP architecture largely follows the details specified in the original
paper, with slight modifications similar to the insights discovered in
\cite{casas_ilvm}. First, we use the same whisker featurization scheme as
specified in the paper, but due to memory limitations in UrbanCity we sample
from a set of three radii $[1,2,4]$ as opposed to the original 6. Our past
trajectory encoder is a GRU with hidden state 128 that sequentially runs across
the past time dimension and takes in the vehicle-relative coordinates,
width/height, and heading as inputs. Moreover, given that our scenes can have a
variable number of actors as opposed to constant number in the original paper, we use $k$-nearest neighbors with $k=4$ to select the nearest neighbor features at every future timestep. Finally, we found that in the autoregressive model setting, training using direct teacher forcing \cite{lamb_teacherforcing} by conditioning the next state on the ground-truth current state caused a large mismatch between training and inference. Instead, we add white noise of 0.2m to the conditioning ground-truth states during training to better reflect error during inference.
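The noise injection amounts to a one-line change during training, sketched below.
\begin{verbatim}
import torch

NOISE_STD = 0.2  # meters of white noise on the conditioning states

def noisy_teacher_forcing(gt_states):
    """Perturb ground-truth conditioning states during training so the
    autoregressive model sees inputs closer to its own (imperfect)
    rollouts at inference time.  gt_states: (..., T, 2) positions."""
    return gt_states + NOISE_STD * torch.randn_like(gt_states)
\end{verbatim}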
\subsection{Implementation of PRECOG objective}
The PRECOG planning objective is given by:
\begin{align}
\mathbf{z}^{r*} = \text{argmax}_{\mathbf{z}^r} \mathbb{E}_{\mathbf{Z}^h}[\log q(f(\mathbf{Z}) | \phi) + \log p(\mathcal{G} | f(\mathbf{Z}), \phi)]
\end{align}
where the second term represents the goal likelihood, and the first term represents the ``multi-agent'' prior, which is a joint density term that can be readily evaluated by the model. In order to plan with the PRECOG objective, one must optimize the ego-agent latent $\mathbf{z}^r$ over an expectation of latents sampled from the other actors $\mathbf{Z}^h$.
The joint density term can be evaluated by summing the log-likelihoods of the decoded Gaussians at each timestep: $\log q(f(\mathbf{Z})) = \log q(\mathbf{S}) = \sum_{t=1}^{T} \log q(\mathbf{S}_t | \mathbf{S}_{1:t-1}) = \sum_{t=1}^{T} \log \mathcal{N}(\mathbf{S}_t; \mathbf{\mu}_t, \Sigma_t)$. Meanwhile, the authors use a goal state penalizing $\ell_2$ distance as an example of a goal likelihood (assuming a Gaussian distribution), which admits a straightforward translation into our definition of a goal energy, for both goal states and goal lanes. Hence, we use the same goal energy definition given in Sec. 4-A of the main paper to compute the goal likelihood. We weight the prior likelihood and goal likelihood with two hyperparameters $\lambda_1, \lambda_2$, which are determined from the validation sets.
In practice, we implement the PRECOG objective as follows: we sample 100
ego-agent latents $\mathbf{z}^r$, effectively using random shooting
rather than gradient descent; in discussion with the authors, they confirmed that results should be similar. Then
for each ego-agent latent, we sample 15 joint actor latent samples
$\mathbf{Z}^h$ as a Monte-Carlo approximation to the expectation. We then
evaluate the goal/prior likelihood costs for each candidate ego-agent latent and
select the ego-agent latent with the smallest cost. Evaluation of the planning
costs for all candidate samples can be efficiently done in a batch manner using
one forward GPU pass. Note that in selecting the optimal ego-agent latent $\mathbf{z}^r$, there is an intricacy: since PRECOG is a joint autoregressive model, the ego-agent latent does not correspond to a fixed ego-agent trajectory, as the final trajectory depends on the other actors' latents. We sidestep this challenge in simulation by replanning sufficiently often (every 0.3s) and by observing that the other actor latents do not generally perturb the ego-agent trajectory much.
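In pseudocode, the random-shooting procedure looks roughly as follows; \texttt{model.decode} and \texttt{model.log\_prior} are hypothetical interfaces standing in for the ESP decoder and joint density evaluation, and in practice all candidates are evaluated in one batched forward pass.
\begin{verbatim}
import torch

def precog_plan(model, context, goal_cost, n_ego=100, n_joint=15,
                lam1=1.0, lam2=1.0):
    """Random-shooting sketch of the PRECOG planning objective.
    model.decode / model.log_prior are hypothetical interfaces."""
    best_cost, best_plan = float('inf'), None
    z_ego = torch.randn(n_ego, model.z_dim)        # candidate ego latents
    for k in range(n_ego):
        # Monte-Carlo expectation over the other actors' latents.
        z_others = torch.randn(n_joint, model.n_actors, model.z_dim)
        trajs = model.decode(z_ego[k], z_others, context)  # (n_joint, N+1, T, 2)
        cost = (-lam1 * model.log_prior(trajs)             # multi-agent prior
                + lam2 * goal_cost(trajs[:, 0])).mean()    # goal term on ego
        if cost < best_cost:
            # Average over joint samples as the representative ego plan
            # (an illustrative choice; see the replanning discussion above).
            best_cost, best_plan = float(cost), trajs[:, 0].mean(dim=0)
    return best_plan
\end{verbatim}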
\subsection{Discussion of Results}
As indicated in Table I of the main paper, the PRECOG model underperforms both our non-reactive and reactive objectives built on the energy-based model. We qualitatively analyzed some of the simulation scenarios and offer some hypotheses for these results. The first is that since PRECOG does not explicitly define a collision prior, the model may not try to avoid collision in all cases, particularly on test scenarios that are out-of-distribution from the training data (especially in CARLA, where the traffic density in simulation is higher than in training). The second is that sampling in latent space does not guarantee a diverse range of trajectories for the ego-agent. In fact, we notice that in some turning scenarios where we set the goal state to be the result of a turn, the ego-agent still drives in a straight line, even when the prior likelihood weight goes to 0 and the number of ego-agent samples is high (we tried up to 1000). We hypothesize that this is partially due to test distribution shift. Nevertheless, we find learned autoregressive models promising to keep in mind for the future.
We showcase a qualitative example comparing PRECOG with our model in the next section, in Fig. \ref{fig:supp_qual_simba2}.
\section{Additional Qualitative Results}
We present a few additional qualitative results to better visually highlight the difference between our reactive and non-reactive model in various interactive scenarios. For each demonstration, we provide snapshots of the simulation at different timesteps. As with the results in the main paper, the ego-agent and planned trajectory are in \textcolor{green}{green}, the actors are represented by an assortment of other colors, and the goal position or lane is in \textcolor{cyan}{cyan}. Results are best viewed by zooming in with a digital reader.
In Fig. \ref{fig:supp_qual_simba1} we demonstrate the ego-agent performing an unprotected left turn in front of an oncoming vehicle in Simba. We first emphasize that since this scenario was initialized from a real driving sequence, the actual ``expert'' trajectory also performed the unprotected left turn against the same oncoming vehicle, implying that such an action is not an unsafe maneuver depending on the oncoming vehicle's position/velocity. The visualizations of our models show that the reactive model is successfully able to perform the left turn, while the non-reactive model surprisingly gets stuck in place, even as the oncoming vehicle slows down. We speculate that this may be due to the model choosing to stay still rather than violate the safety distance of the other actor.
Fig. \ref{fig:supp_qual_simba2} showcases a comparison between our reactive/non-reactive models and PRECOG in performing a left turn at a busy intersection. We note that both the reactive and non-reactive models are able to reach the goal state, though admittedly they violate lane boundaries in doing so (lane following is not explicitly encoded as an energy in our model). Interestingly, the PRECOG model plans a trajectory to the goal at $t=1$ but is not able to complete it at later timestamps, implying either that the latent samples for the ego-agent do not capture such a behavior or that the prior likelihood cost is too high to go any further. It is possible that the model can be tuned further, in terms of the data, the training scheme, and the PRECOG evaluation procedure, so we mostly present these as initial results for future investigation.
In Fig. \ref{fig:supp_qual_carla1}, \ref{fig:supp_qual_carla2} we demonstrate a lane merge scenario and a roundabout turn scenario in CARLA. We note that these are complex scenarios involving multiple actors in the lane that the ego-agent is supposed to merge into. In Fig. \ref{fig:supp_qual_carla1}, the visualizations show that the reactive agent is able to spot a gap at $t=2$, and merge in at $t=3$. Meanwhile, the non-reactive agent keeps going straight until $t=3$, and even then it wavers between merging and going straight. Fig. \ref{fig:supp_qual_carla2} demonstrates the ego-agent merging into a roundabout turn with multiple actors. While both models reach similar states initially, towards the end the reactive model reasons that it can still merge inwards, while the non-reactive model is stuck waiting for all the actors to pass.
\begin{figure*}[t!]
\centering
\def0.24\textwidth{0.24\textwidth}
\setlength{\tabcolsep}{0.2pt}
\begin{tabular}{cccc}
\figcap{supp/figures/qual/simba_turn11_r0.png}{400 250 400 250}{Reactive, t=0s} &
\figcap{supp/figures/qual/simba_turn11_r1.png}{400 250 400 250}{Reactive, t=1s} &
\figcap{supp/figures/qual/simba_turn11_r2.png}{400 250 400 250}{Reactive, t=2s} &
\figcap{supp/figures/qual/simba_turn11_r3.png}{400 250 400 250}{Reactive, t=3s} \\
\figcap{supp/figures/qual/simba_turn11_nr0.png}{400 250 400 250}{Non-Reactive, t=0s} &
\figcap{supp/figures/qual/simba_turn11_nr1.png}{400 250 400 250}{Non-Reactive, t=1s} &
\figcap{supp/figures/qual/simba_turn11_nr2.png}{400 250 400 250}{Non-Reactive, t=2s} &
\figcap{supp/figures/qual/simba_turn11_nr3.png}{400 250 400 250}{Non-Reactive, t=3s} \\
\end{tabular}
\caption{Visualization of a Simba turn for reactive (top) and non-reactive (bottom) models at four time steps: 0s, 1s, 2s, 3s (left to right).
}
\label{fig:supp_qual_simba1}
\end{figure*}
\begin{figure*}
\centering
\def0.24\textwidth{0.24\textwidth}
\setlength{\tabcolsep}{0.2pt}
\begin{tabular}{cccc}
\figcap{supp/figures/qual/simba_turn9_r0.png}{400 250 400 250}{Reactive, t=0s} &
\figcap{supp/figures/qual/simba_turn9_r1.png}{400 250 400 250}{Reactive, t=1s} &
\figcap{supp/figures/qual/simba_turn9_r2.png}{400 250 400 250}{Reactive, t=2s} &
\figcap{supp/figures/qual/simba_turn9_r3.png}{400 250 400 250}{Reactive, t=3s} \\
\figcap{supp/figures/qual/simba_turn9_nr0.png}{400 250 400 250}{Non-Reactive, t=0s} &
\figcap{supp/figures/qual/simba_turn9_nr1.png}{400 250 400 250}{Non-Reactive, t=1s} &
\figcap{supp/figures/qual/simba_turn9_nr2.png}{400 250 400 250}{Non-Reactive, t=2s} &
\figcap{supp/figures/qual/simba_turn9_nr3.png}{400 250 400 250}{Non-Reactive, t=3s} \\
\figcap{supp/figures/qual/simba_turn9_precog0.png}{400 250 400 250}{PRECOG, t=0s} &
\figcap{supp/figures/qual/simba_turn9_precog1.png}{400 250 400 250}{PRECOG, t=1s} &
\figcap{supp/figures/qual/simba_turn9_precog2.png}{400 250 400 250}{PRECOG, t=2s} &
\figcap{supp/figures/qual/simba_turn9_precog3.png}{400 250 400 250}{PRECOG, t=3s} \\
\end{tabular}
\caption{Visualization of a Simba turn scenario for reactive (top), non-reactive (middle), and PRECOG (bottom) models at four time steps: 0s, 1s, 2s, 3s (left to right).
}
\label{fig:supp_qual_simba2}
\end{figure*}
\begin{figure*}
\centering
\def0.24\textwidth{0.24\textwidth}
\setlength{\tabcolsep}{0.2pt}
\begin{tabular}{cccc}
\figcap{supp/figures/qual/carla_r2_1.png}{600 400 800 400}{Reactive, t=1s} &
\figcap{supp/figures/qual/carla_r2_2.png}{600 400 800 400}{Reactive, t=2s} &
\figcap{supp/figures/qual/carla_r2_3.png}{600 400 800 400}{Reactive, t=3s} &
\figcap{supp/figures/qual/carla_r2_4.png}{600 400 800 400}{Reactive, t=4s} \\
\figcap{supp/figures/qual/carla_nr2_1.png}{600 400 800 400}{Non-Reactive, t=1s} &
\figcap{supp/figures/qual/carla_nr2_2.png}{600 400 800 400}{Non-Reactive, t=2s} &
\figcap{supp/figures/qual/carla_nr2_3.png}{600 400 800 400}{Non-Reactive, t=3s} &
\figcap{supp/figures/qual/carla_nr2_4.png}{600 400 800 400}{Non-Reactive, t=4s} \\
\end{tabular}
\caption{Visualization of a CARLA lane merge for reactive (top) and non-reactive (bottom) models at four time steps: 1s, 2s, 3s, 4s (left to right).
}
\label{fig:supp_qual_carla1}
\end{figure*}
\clearpage
\begin{figure*}
\centering
\def0.24\textwidth{0.24\textwidth}
\setlength{\tabcolsep}{0.2pt}
\begin{tabular}{cccc}
\figcap{supp/figures/qual/carla_r4_0.png}{650 340 750 460}{Reactive, t=0s} &
\figcap{supp/figures/qual/carla_r4_1.png}{650 340 750 460}{Reactive, t=1s} &
\figcap{supp/figures/qual/carla_r4_2.png}{650 340 750 460}{Reactive, t=2s} &
\figcap{supp/figures/qual/carla_r4_3.png}{650 340 750 460}{Reactive, t=3s} \\
\figcap{supp/figures/qual/carla_nr4_0.png}{650 340 750 460}{Non-Reactive, t=0s} &
\figcap{supp/figures/qual/carla_nr4_1.png}{650 340 750 460}{Non-Reactive, t=1s} &
\figcap{supp/figures/qual/carla_nr4_2.png}{650 340 750 460}{Non-Reactive, t=2s} &
\figcap{supp/figures/qual/carla_nr4_3.png}{650 340 750 460}{Non-Reactive, t=3s} \\
\end{tabular}
\caption{Visualization of a CARLA roundabout turn for reactive (top) and non-reactive (bottom) models at four time steps: 0s, 1s, 2s, 3s (left to right).
}
\label{fig:supp_qual_carla2}
\end{figure*}
\section{Interpolation Results between Non-Reactive / Reactive} \label{sec:exp_interpolate}
\begin{table}[]
\centering
\scalebox{0.62}{
\begin{tabular}{c|c|ccccc}
\toprule
Simulator & $|S^{{\mathbf{y}}_0}|$ & Success (\%) & TTC (s) & Goal Distance (m) & Collision Rate (\%) & Actor Brake \\ \midrule
CARLA & 1 (full reactive) & 72.0 & 12.8 & 2.0 & 5.0 & 41.8 \\
& $0.2K$ & 52.0 & 14.6 & 3.3 & 1.0 & 37.7 \\
& $0.4K$ & 42.0 & 15.2 & 3.7 & 4.0 & 37.3 \\
& $0.6K$ & 46.0 & 14.9 & 4.7 & 5.0 & 32.8 \\
& $0.8K$ & 52.0 & 14.5 & 4.3 & 5.0 & 36.7 \\
& $K$ (non-reactive) & 45.0 & 16.0 & 4.4 & 5.0 & 37.1 \\
\midrule
Simba & 1 (full reactive) & 82.0 & 6.8 & 4.3 & 3.5\% & - \\
& $0.2K$ & 73.5 & 7.5 & 5.4 & 3.5\% & - \\
& $0.4K$ & 76.5 & 7.4 & 5.4 & 2.5\% & - \\
& $0.6K$ & 70.5 & 7.5 & 5.2 & 3.5\% & - \\
& $0.8K$ & 68.0 & 7.6 & 5.2 & 3.5\% & - \\
& $K$ (non-reactive) & 70.0 & 7.5 & 5.2 & 3.5\% & - \\
\bottomrule
\end{tabular}}
\caption{Interpolating between reactive and non-reactive behavior by varying the size of the conditioning set $S^{{\mathbf{y}}_0}$, in CARLA and Simba.}
\label{tab:interpolate}
\end{table}
Sec. \ref{sec:interpolate} in this document showed that we can flexibly interpolate between the reactive and non-reactive objectives by increasing the size of the conditioning set $S^{{\mathbf{y}}_0}$. We demonstrate this interpolation, averaged across CARLA/Simba scenarios, in Tab. \ref{tab:interpolate}. The extremes $|S^{{\mathbf{y}}_0}|=1$ and $|S^{{\mathbf{y}}_0}|=K$ demonstrate a tradeoff between fully reactive and fully non-reactive behavior. The metrics are pulled more strongly toward the non-reactive side as the set size increases: success rates trend downward and times to completion trend upward, consistent with the difference in performance between the fully reactive and non-reactive objectives in Tab. I of the main paper. We additionally observe that collision rates are roughly similar across the different conditioning set sizes. While the results are not conclusive, they hint at a setting where an interpolated planning objective can achieve high success rates while planning more safely than either extreme.
\section{Effects of Varying Weights on Planning Costs}
\begin{figure}
\vspace{-5mm}
\includegraphics[width=0.48\linewidth]{figures/plots/carla_avactor_colrate.pdf}
\includegraphics[width=0.48\linewidth]{figures/plots/carla_avactor_ttc.pdf} \\
\includegraphics[width=0.48\linewidth]{figures/plots/simba_uactor_colrate.pdf}
\includegraphics[width=0.48\linewidth]{figures/plots/simba_uactor_ttc.pdf} \\
\caption{Analyzing impact of varying $\lambda_b$ (ego-agent/actor weight) in CARLA (top) and Simba (bottom).}
\label{fig:vary_hyp}
\end{figure}
In practice, when implementing our reactive objective (Sec. III-B of the main paper), we place weights on the SDV/actor interaction energy ($\lambda_b$) and the actor-specific energy ($\lambda_c$) to more flexibly control for safety during planning. These weight values, for both our reactive planner and the non-reactive baseline, are determined from the scenario validation set. To provide further insight into the planning costs, we analyze the impact of varying $\lambda_b$ and $\lambda_c$ on our closed-loop evaluation metrics. We observe that as $\lambda_b$ decreases, collision rates for both the non-reactive and reactive models go up while time to completion trends down (Fig. \ref{fig:vary_hyp}, top). This is reasonable given that $\lambda_b$ directly controls the weight on the pairwise energy, which includes collision. Additionally, when varying $\lambda_c$ in Simba, we observe that while the variation in TTC is negligible for the reactive model, the collision rate does trend upward as $\lambda_c$ decreases, implying that the actor unary energy carries some weight in maintaining safer behavior. Of course, we also emphasize that $\lambda_c$ makes no difference in the non-reactive results, since the actor unary term cancels in the non-reactive objective (see Sec. III-B in the main paper).
\section{Model Details}
In this section, we provide more precise model details regarding our joint structured model. Specifically, we first detail the dataset-dependent input representations used by our model (Sec. \ref{sec:inp_details}). We then present the architecture details of our network, which predicts actor-specific and interaction energies between actors (Sec. \ref{sec:arch_details}, Sec. \ref{sec:inter_energy}). This also includes details regarding our discrete trajectory sampler (Sec. \ref{sec:traj_sampler}).
\subsection{Input Representation} \label{sec:inp_details}
As mentioned in the main paper, we assume that the trajectory history of the other actors, including their bounding box width/height and heading, are known to the ego-agent at the given timestep. Hence, we directly feed the trajectories of all actors to our model, transformed to the current ego-agent coordinates.
In addition, we add dataset-dependent context to the model. In UrbanCity and Nuscenes, we use both LiDAR sweeps and HD maps as context. Meanwhile, we directly use the input representation provided by the CARLA dataset \cite{rhinehart_precog}, which contains a rasterized LiDAR representation but no map data.
\subsubsection{UrbanCity}
We use a past window of 1 second at 10Hz as input context, and use an input
region of $[-72, 72] \times [-72, 72] \times[-2, 4]$ meters centered around the
ego-agent. From this input region, we collect the past 10 LiDAR sweeps (1s),
voxelize them with a $0.2 \times 0.2 \times 0.2$ meter resolution, and combine
the time/z-dimensions, creating a $720 \times 720 \times 300$ input tensor. We
additionally rasterize given HD map info with the same resolution. The HD maps
include different lane polylines, and polygons representing roads,
intersections, and crossings. We rasterize these information into different
channels respectively.
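A sketch of this voxelization, assuming binary occupancy per voxel, is shown below.
\begin{verbatim}
import numpy as np

def voxelize_sweeps(sweeps, x_rng=(-72, 72), y_rng=(-72, 72),
                    z_rng=(-2, 4), res=0.2):
    """Voxelize 10 past LiDAR sweeps (each an (M, 3) point array in
    ego coordinates) into a BEV tensor; time and z are folded into the
    channel dimension: 10 sweeps x 30 z-bins = 300 channels."""
    nx = int((x_rng[1] - x_rng[0]) / res)   # 720
    ny = int((y_rng[1] - y_rng[0]) / res)   # 720
    nz = int((z_rng[1] - z_rng[0]) / res)   # 30
    grid = np.zeros((nx, ny, nz * len(sweeps)), dtype=np.float32)
    for t, pts in enumerate(sweeps):
        idx = np.floor((pts - [x_rng[0], y_rng[0], z_rng[0]]) / res).astype(int)
        keep = ((idx >= 0) & (idx < [nx, ny, nz])).all(axis=1)
        i, j, k = idx[keep].T
        grid[i, j, t * nz + k] = 1.0        # binary occupancy (an assumption)
    return grid
\end{verbatim}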
\subsubsection{Nuscenes}
The details are similar to UrbanCity. The main difference is that the input region is sized $[-49.6, 49.6] \times [-49.6, 49.6] \times [-3, 5]$ meters, and the voxel resolution is $0.2 \times 0.2 \times 0.25$ meters, creating a $496 \times 496 \times 320$ input LiDAR tensor.
\subsubsection{CARLA}
We use the input representation directly provided by the CARLA PRECOG dataset \cite{rhinehart_precog}, which consists of $200 \times 200 \times 4$ features derived from rasterized LiDAR. Each channel contains a histogram of points within each cell at a given z-threshold (or over all heights). Additional details can be found at
\urlstyle{tt} \url{https://github.com/nrhine1/precog_carla_dataset/blob/master/README.md#overhead_features-format}.
\subsection{Network Architecture Details} \label{sec:arch_details}
Here, we present additional details of the actor-specific and interaction energies of our model. As mentioned in the main paper, the actor-specific energies are parameterized with neural nets, consisting of a few core components. The first component is a \textbf{backbone network} that takes in the input representations to compute intermediate spatial feature maps. Then, given the past trajectory for each actor, we sample $K$ trajectories for each actor using a \textbf{realistic trajectory sampler}, given in Sec. \ref{sec:traj_sampler}. These future actor trajectory samples, as well as the past actor trajectories and backbone feature map, are in turn passed to our \textbf{unary module} to predict the actor-specific energies for each actor trajectory. Finally, our \textbf{interaction energy} is determined by computing collision and safety distance violations between actor trajectories.
\subsubsection{Backbone Network}
Given the input representation, we pass it through a backbone network to compute intermediate feature maps that are spatially downsampled from the input resolution. This backbone network is inspired by the detection networks in \cite{yang_pixor, zeng_nmp, zeng_dsdnet}. This network consists of 5 sub-blocks, each block containing $[2,2,3,6,5]$ Conv2d layers with $[32,64,128,256,256]$ output channels respectively, with a 2x downsampling max-pooling layer in front of the 2nd -- 4th blocks. The input is fed through the first 4 blocks, generating intermediate feature maps of different spatial resolutions. These intermediate feature maps are then pooled/upsampled to the same resolution (4x downsample from input), and then fed to the fifth block to generate the final feature map (at 4x downsample from the input). Each convolution is followed by a BatchNorm2d and ReLU layer.
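To make the block structure concrete, below is a minimal PyTorch sketch of such a backbone. The layer counts, channel widths, pooling placement and BatchNorm/ReLU pattern follow the description above, while the module name, kernel sizes and the exact pooling/upsampling choices used to fuse the intermediate maps are illustrative assumptions on our part.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch, n_layers):
    # n_layers Conv2d layers, each followed by BatchNorm2d and ReLU.
    layers = []
    for i in range(n_layers):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class Backbone(nn.Module):
    """Sketch of the 5-block backbone producing a 4x-downsampled feature map."""
    def __init__(self, in_ch):
        super().__init__()
        counts, chans = [2, 2, 3, 6, 5], [32, 64, 128, 256, 256]
        self.blocks = nn.ModuleList()
        prev = in_ch
        for n, c in zip(counts[:4], chans[:4]):
            self.blocks.append(conv_block(prev, c, n))
            prev = c
        # Fifth block fuses the resized intermediate maps into the final map.
        self.fuse = conv_block(sum(chans[:4]), chans[4], counts[4])

    def forward(self, x):
        feats = []
        for i, blk in enumerate(self.blocks):
            if 1 <= i <= 3:  # 2x max-pool in front of the 2nd-4th blocks
                x = F.max_pool2d(x, 2)
            x = blk(x)
            feats.append(x)
        # Pool/upsample everything to 4x downsample from the input.
        target = feats[2].shape[-2:]
        feats = [F.interpolate(f, size=target, mode='bilinear',
                               align_corners=False) for f in feats]
        return self.fuse(torch.cat(feats, dim=1))
\end{verbatim}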
\subsubsection{Unary Module}
We broadcast the past trajectory per actor with their $K$ future trajectory samples to create an $N \times K$ matrix of concatenated trajectories. The purpose of the unary network is then to predict the actor-specific energy for each actor trajectory sample.
We first define a Region of Interest (ROI) centered on each actor's current
position and oriented according to each actor's heading. In UrbanCity/Nuscenes,
we define the ROI to be $12.8 \times 12.8$ meters. In CARLA we define it to be much bigger at $100 \times 100$ meters due to the absence of map data in the training set.
We then use this rotated ROI to extract a corresponding feature per actor from the backbone feature map. We obtain a 1D representation of this ROI feature per actor by feeding it through a small convolutional network consisting of 6 Conv2d layers with $[512, 512, 1024, 1024, 512, 512]$ output filters, with 2x downsampling in the 1st, 3rd, and 5th layers, and an adaptive max-pool collapsing the spatial dimension at the end.
We additionally extract \textit{positional embeddings} for each actor trajectory
sample, at each timestep, by indexing the backbone feature map at the
corresponding sample position at the given timestep, with bilinear interpolation. This extracts an $N \times K \times T \times 512$ dimensional tensor, where $T$ represents the total horizon of the trajectory (both past and future timesteps), and $512$ is the channel dimension of the backbone feature map. We collapse this tensor into a 1D representation as well: $N \times K \times (T * 512)$.
Finally, we directly encode the trajectory information per timestep into a trajectory feature, consisting of $[\Delta_x, \Delta_y, \cos(\Delta_{\theta}), \sin(\Delta_{\theta}), \Delta_d]$, where $\Delta_x, \Delta_y$ represent the displacement in the x,y directions from the previous timestep, $\Delta_d$ represents the displacement magnitude, and $\Delta_{\theta}$ represents the change in heading. The trajectory feature, positional embeddings, and 1-D ROI feature are all concatenated along the channel dimension to form a final unary feature. This is fed to a final MLP consisting of 5 fc layers with $1024, 1024, 512, 256, 1$ output channels respectively; the output is an $N \times K$ matrix of unary energies.
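The assembly of the final unary feature can be summarised in a few lines of PyTorch; the tensor shapes follow the text, while the function names and the plain Linear/ReLU construction of the MLP are our own illustrative choices:
\begin{verbatim}
import torch
import torch.nn as nn

def make_unary_mlp(in_dim):
    # Final MLP: 5 fc layers with the output widths quoted above.
    dims, layers, prev = [1024, 1024, 512, 256, 1], [], in_dim
    for i, d in enumerate(dims):
        layers.append(nn.Linear(prev, d))
        if i < len(dims) - 1:
            layers.append(nn.ReLU(inplace=True))
        prev = d
    return nn.Sequential(*layers)

def unary_energies(roi_feat, pos_embed, traj_feat, mlp):
    """
    roi_feat:  (N, K, 512)      1-D ROI feature, broadcast over the K samples
    pos_embed: (N, K, T, 512)   bilinearly-indexed backbone features
    traj_feat: (N, K, T, 5)     [dx, dy, cos(dtheta), sin(dtheta), dd]
    Returns an (N, K) matrix of actor-specific energies.
    """
    N, K, T, C = pos_embed.shape
    feat = torch.cat([roi_feat,
                      pos_embed.reshape(N, K, T * C),
                      traj_feat.reshape(N, K, T * traj_feat.shape[-1])],
                     dim=-1)
    return mlp(feat).squeeze(-1)
\end{verbatim}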
\subsection{Interaction Energies} \label{sec:inter_energy}
Our interaction energy is a pairwise energy between two actor trajectory samples that contains two non-learnable components: a collision cost and a safety distance cost. The collision detection is efficiently implemented using a GPU kernel computing the IOU among every pairwise sample from two actors, given their timestep, positions, bounding box width/height, and heading. The output is an $N \times N \times K \times K$ matrix, where a given entry is 1 if the trajectory sample of actor $i$ and the trajectory sample of actor $j$ collide at any point in the future, and 0 if not. The collision energy is only computed over future samples, not past samples.
Similarly, the safety distance violation is computed over all future actor
samples. For a given pairwise sample between actor $i$ and actor $j$ at a given
timestep, the distance from the center point of actor $i$ is computed to the
polygon of actor $j$ (minimal point-to-polygon distance). If the distance is within a given safety threshold, then the energy is the squared distance of violation within the threshold. Note that unlike collision energy, this matrix is not symmetric. This choice was made to use a GPU kernel that efficiently computes point distances to polygons in parallel.
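As a concrete (non-vectorized) sketch of the safety-distance term, the per-timestep energy between one sample of actor $i$ and one sample of actor $j$ can be written as follows; the use of the shapely library in place of the GPU kernel is purely for illustration:
\begin{verbatim}
from shapely.geometry import Point, Polygon

def safety_energy(center_i, corners_j, d_safe):
    """
    Squared violation of the safety margin between the center point of
    actor i and the bounding-box polygon of actor j at one timestep.
    center_i:  (x, y) position of actor i
    corners_j: list of four (x, y) box corners of actor j
    d_safe:    safety-distance threshold
    """
    d = Polygon(corners_j).distance(Point(center_i))  # point-to-polygon distance
    return (d_safe - d) ** 2 if d < d_safe else 0.0
\end{verbatim}
Because the energy compares the center of actor $i$ against the polygon of actor $j$, swapping the roles of $i$ and $j$ generally gives a different value, which is the asymmetry noted above.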
\subsection{Trajectory Sampler Details} \label{sec:traj_sampler}
We follow the discrete trajectory sampler used in \cite{zeng_nmp, zeng_dsdnet}. The sampler first estimates the initial speed/heading of each actor given the provided past trajectory. From these values, the sampler samples from three trajectory modes: a straight line, circular trajectory, or spiral trajectory with $[0.3, 0.2, 0.5]$ probability. Within each mode, the control parameters such as radius, acceleration are uniformly sampled within a range to generate a sampled trajectory. We use 50 trajectories per actor (including the ego-agent) for CARLA, and 100 trajectories per actor on UrbanCity and Nuscenes. Additional details can be found in \cite{zeng_nmp, zeng_dsdnet}.
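A minimal Python sketch of this sampling scheme is given below. The three modes and their probabilities follow the text; the specific parameter ranges and the unicycle-style rollout are illustrative assumptions, with the authoritative details in \cite{zeng_nmp, zeng_dsdnet}.
\begin{verbatim}
import numpy as np

def sample_trajectory(x0, y0, v0, theta0, T, dt=0.5, rng=np.random):
    """Sample one future trajectory from the straight/circular/spiral modes.
    Mode probabilities follow the text; parameter ranges are illustrative."""
    mode = rng.choice(['straight', 'circular', 'spiral'], p=[0.3, 0.2, 0.5])
    a = rng.uniform(-2.0, 2.0)                   # acceleration (assumed range)
    # Curvature ~ 1/radius; zero for straight lines, varying for spirals.
    kappa = 0.0 if mode == 'straight' else rng.uniform(-0.1, 0.1)
    dkappa = rng.uniform(-0.02, 0.02) if mode == 'spiral' else 0.0
    x, y, th, v, pts = x0, y0, theta0, v0, []
    for _ in range(T):
        v = max(v + a * dt, 0.0)
        th += kappa * v * dt
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        kappa += dkappa * dt
        pts.append((x, y))
    return np.array(pts)                         # (T, 2) future waypoints
\end{verbatim}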
\section{Model Properties}
In this section, we discuss some additional properties of our model. First, we discuss the reasons and tradeoffs for excluding the actor-actor interaction term in the reactive objective (Sec. \ref{sec:exclude_actor_actor}). Next, recall that in addition to our reactive objective, we also implemented a non-reactive planning objective as a baseline: $f_{\text{nonreactive}} = {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})]$. We demonstrate that we can \textit{flexibly interpolate} between the non-reactive and reactive objectives within our deep structured model by varying the size of the conditioning set in the prediction model of the reactive objective (Sec. \ref{sec:interpolate}). Experimental results showcasing this behavior are given in Sec. \ref{sec:exp_interpolate}. Moreover, we demonstrate that the non-reactive objective under our joint structured model is related to maximizing the marginal likelihood of the ego-agent, marginalizing out other actors (Sec. \ref{sec:nonreact_marg}).
\subsection{Excluding the Actor-Actor Interaction Term} \label{sec:exclude_actor_actor}
We mention in Sec. III-B of the main paper that we exclude the actor-actor interaction term as a cost from our reactive planning objective. The primary reason is computational. To illustrate this, we distribute the full expectation in the objective over the costs:
{\small
\begin{align}
f_{\text{reactive}} &= C_{\text{traj}}^{{\mathbf{y}}_0}+ {\mathbb{E}}_{p_{{\mathcal{Y}}_r | {\mathbf{y}}_0}}[\\
&\sum_{i=1}^{N}C_{\text{inter}}^{{\mathbf{y}}_0, {\mathbf{y}}_i} + \sum_{i=1}^{N}C_{\text{traj}}^{{\mathbf{y}}_i} + \sum_{i=1,j=1}^{N,N}C_{\text{inter}}^{{\mathbf{y}}_i, {\mathbf{y}}_j}] \\
&= C_{\text{traj}}^{{\mathbf{y}}_0} + \sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i|{\mathbf{y}}_0}C_{\text{inter}}^{{\mathbf{y}}_0, {\mathbf{y}}_i} + \sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i|{\mathbf{y}}_0}C_{\text{traj}}^{{\mathbf{y}}_i} \\
&+ \sum_{i,j,{\mathbf{y}}_i,{\mathbf{y}}_j}p_{{\mathbf{y}}_i, {\mathbf{y}}_j|{\mathbf{y}}_0}C_{\text{inter}}^{{\mathbf{y}}_i, {\mathbf{y}}_j}
\end{align}
}
First, we note that the last summation term implies computing a full $N \times N \times K \times K$ matrix (containing both the interaction cost between any pair of samples from two actors, as well as the probabilities) for every value of ${\mathbf{y}}_0$. For our values of $N, K$, one of these matrices will generally fit in memory on an Nvidia 1080Ti GPU, but additionally batching by the number of ego-agent samples (which is $K$) will not. Moreover, we note that the Loopy Belief Propagation (LBP) algorithm used for obtaining actor marginals provides marginal probabilities $p_{{\mathbf{y}}_i}$ and pairwise probabilities $p_{{\mathbf{y}}_i, {\mathbf{y}}_j}$ \cite{nowozin_tutorial} for all actor samples, which directly gives us the conditional actor marginal probabilities $p_{{\mathbf{y}}_i|{\mathbf{y}}_0}$ with one LBP pass. However, $p_{{\mathbf{y}}_i, {\mathbf{y}}_j|{\mathbf{y}}_0}$ is not readily provided by the algorithm, requiring us to run LBP for every value of ${\mathbf{y}}_0$ to obtain these conditional pairwise marginals.
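For concreteness, the conditional actor marginals used above follow from the LBP outputs by Bayes' rule, as in this small sketch (the variable names are ours):
\begin{verbatim}
import numpy as np

def conditional_actor_marginals(pairwise_0i, marginal_0):
    """
    pairwise_0i: (K, K) pairwise marginals p(y_0, y_i) from one LBP pass,
                 rows indexing SDV samples y_0, columns actor-i samples
    marginal_0:  (K,) marginal p(y_0)
    Returns p(y_i | y_0) as a (K, K) row-stochastic matrix.
    """
    return pairwise_0i / np.clip(marginal_0[:, None], 1e-12, None)
\end{verbatim}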
We acknowledge that the actor-actor interaction term can capture situations that the actor-specific term does not, specifically where the ego-agent's actions led to dangerous interactions between two other actors (e.g. the SDV causes a neighboring car to swerve and narrowly collide with another car). We can potentially approximate the term by only considering neighboring actors to the SDV -- we leave this for future work.
\subsection{Interpolation} \label{sec:interpolate}
We observe an additional level of flexibility in this model: being able to
interpolate between our reactive/non-reactive objectives, which have thus far
been presented as distinct. A potential advantage of interpolation is the ability to trade off between conservative, non-reactive driving and efficient navigation depending on the user's preference.
Recall that our reactive objective is defined as $f = {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}})}[C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})]$, which can be simplified into
\begin{align}
f = C_{\text{traj}}^{{\mathbf{y}}_0} + \sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i|{\mathbf{y}}_0}C_{\text{inter}}^{{\mathbf{y}}_0, {\mathbf{y}}_i} + \sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i|{\mathbf{y}}_0}C_{\text{traj}}^{{\mathbf{y}}_i}
\end{align}
(see Sec. III-B, III-C). Similarly, our non-reactive baseline objective is defined as $f_{\text{nonreactive}} = {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})]$, and can be simplified into
\begin{align}
f_{\text{nonreactive}} = C_{\text{traj}}^{{\mathbf{y}}_0} + \sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i}C_{\text{inter}}^{{\mathbf{y}}_0, {\mathbf{y}}_i}
\end{align}
The key to interpolation between these two objectives lies within our conditional prediction model for a given actor ${\mathbf{y}}_i$ within the reactive objective: $p({\mathbf{y}}_i | {\mathbf{y}}_0, {\mathcal{X}}; {\mathbf{w}})$, currently conditioned on a single ego-agent plan. We can modify the conditioning to be on a set $S^{{\mathbf{y}}_0}$ with $k$ elements, $1 \leq k \leq K$, which are the top-$k$ candidate trajectories closest to ${\mathbf{y}}_0$ in L2 distance. Then, we define $p({\mathbf{y}}_i | S^{{\mathbf{y}}_0}, {\mathcal{X}}; {\mathbf{w}}) = \frac{1}{Z} \sum_{\bar{{\mathbf{y}}}_0 \in S^{{\mathbf{y}}_0}} {p({\mathbf{y}}_i, \bar{{\mathbf{y}}}_0, {\mathcal{X}}; {\mathbf{w}})}$, where $Z$ is a normalizing constant. Intuitively, conditioning actor predictions on this set implies that actors do not know the exact plan that the SDV has, but may have a rough idea about the general intent. When $|S^{{\mathbf{y}}_0}| = 1$, we obtain our reactive model. When $|S^{{\mathbf{y}}_0}| = K$, it is straightforward to see that $Z=1$, and hence we obtain the actor marginals $p({\mathbf{y}}_i | {\mathcal{X}}; {\mathbf{w}})$ used in the non-reactive model. Moreover, when $|S^{{\mathbf{y}}_0}| = K$ the actor-specific cost term $\sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i | S^{{\mathbf{y}}_0}}C_{\text{traj}}^{{\mathbf{y}}_i} = \sum_{i,{\mathbf{y}}_i}p_{{\mathbf{y}}_i}C_{\text{traj}}^{{\mathbf{y}}_i}$ no longer depends on the candidate SDV trajectory ${\mathbf{y}}_0$; hence we can remove it from the planning objective, which results in the non-reactive objective.
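The set-conditioned prediction above admits a direct implementation from the same LBP pairwise marginals; the sketch below (with illustrative names) makes the two limiting cases easy to verify:
\begin{verbatim}
import numpy as np

def set_conditioned_marginal(pairwise_0i, y0_idx, k, sdv_dists):
    """
    pairwise_0i: (K, K) joint p(y_0, y_i); rows index SDV samples
    y0_idx:      index of the candidate SDV trajectory y_0
    k:           conditioning-set size, 1 <= k <= K
    sdv_dists:   (K, K) pairwise L2 distances between SDV samples
    Returns p(y_i | S^{y_0}) as a length-K vector.
    """
    S = np.argsort(sdv_dists[y0_idx])[:k]  # top-k candidates closest to y_0
    p = pairwise_0i[S].sum(axis=0)         # sum the joint over the set
    return p / p.sum()                     # normalize by Z
\end{verbatim}
With $k=1$ this reduces to the conditional $p_{{\mathbf{y}}_i|{\mathbf{y}}_0}$ of the reactive objective, and with $k=K$ the sum recovers the unconditional marginal $p_{{\mathbf{y}}_i}$, as claimed.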
Please see Sec. \ref{sec:exp_interpolate} for experimental results in our simulation scenarios at different conditioning set sizes.
\subsection{Non-Reactive Objective and Marginal Likelihood} \label{sec:nonreact_marg}
Additionally, it is fairly straightforward to show that the non-reactive objective is closely related to maximizing the marginal likelihood of the ego-agent. Let the marginal likelihood of the ego-agent be denoted as $p({\mathbf{y}}_0 | {\mathcal{X}}; {\mathbf{w}})$.
{\small
\begin{align}
\ln p({\mathbf{y}}_0 | {\mathcal{X}}; {\mathbf{w}}) &= \ln {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[p({\mathbf{y}}_0 | {\mathcal{Y}}_r, {\mathcal{X}}; {\mathbf{w}})] \\
&\geq {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[\ln p({\mathbf{y}}_0 | {\mathcal{Y}}_r, {\mathcal{X}}; {\mathbf{w}})] \label{eq:jensens}
\end{align}
}
The lower bound in Eq.~(\ref{eq:jensens}) is obtained through Jensen's inequality. Now, note that if we maximize this term over ${\mathbf{y}}_0$, we obtain our non-reactive planning objective (which is a minimization over the joint costs), since $\ln p({\mathbf{y}}_0 | {\mathcal{Y}}_r, {\mathcal{X}}; {\mathbf{w}}) = \ln p({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}}) - \ln p({\mathcal{Y}}_r, {\mathcal{X}}; {\mathbf{w}})$ and the second term does not depend on ${\mathbf{y}}_0$:
{\small
\begin{align}
& \text{argmax}_{{\mathbf{y}}_0} {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[\ln p({\mathbf{y}}_0 | {\mathcal{Y}}_r, {\mathcal{X}}; {\mathbf{w}})] \\
&= \text{argmax}_{{\mathbf{y}}_0} {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[\ln p({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})] \\
&= \text{argmin}_{{\mathbf{y}}_0} {\mathbb{E}}_{{\mathcal{Y}}_r \sim p({\mathcal{Y}}_r | {\mathcal{X}}; {\mathbf{w}})}[C({\mathcal{Y}}, {\mathcal{X}}; {\mathbf{w}})]
\label{eq:nonreact_marg2}
\end{align}
}
This implies that maximizing (a lower bound on) the marginal likelihood of the SDV trajectory under our model corresponds to our non-reactive planner.
\section{Introduction}
Cluster-assembled materials, in which atomic clusters are arranged methodically
in a periodic array to form a solid, are an emerging class of materials for
technological applications. Also known as nanostructured materials, they allow
the integration of multiple length scales into a hierarchical material
\cite{Khanna}. With precisely controlled building-blocks, such synergistic
cluster-assemblies can be used to create multitudes of periodic structures
having unusual symmetries that are ``custom-made'' for specific requirements.
Although various successful attempts have been made to explore the possibility
of the existence of such materials \cite{Khanna}, smaller clusters are
difficult to handle experimentally. Thus, modeling these materials
theoretically generates an impetus for experimental efforts.
The modern era of materials research revolves around two-dimensional (2D)
materials as a consequence of the discovery of graphene \cite{novo, allen}.
Recent times have witnessed the emergence of a variety of 2D materials that
have opened new avenues not only in fundamental research but also in device design.
Amidst the deluge of 2D materials, pseudo-planar and van der Waals (vdW) bonded
layered materials have maintained their own identity \cite{Molle, Jariwala_1}.
In keeping with current research trends, we explore cluster-assembled vdW
bonded bilayers of CdSe consisting of buckled pseudo-planar sheets. Having
predicted the existence of such a bilayer theoretically, we further examine
whether it can form ordered spin structures upon the introduction of transition
metal (TM) atoms. We speculate that such materials can facilitate a number of
potential functionalities that can be readily engineered. Earlier, such attempts were made
by Liu \textit{et al.} in their studies of cluster-assembled sheets of
endohedrally doped Si$_{12}$ clusters with vanadium atoms \cite{Liu_1}. They
observed that two different types of cluster-assembled sheets prefer
ferromagnetic ordering with a free-electron-mediated mechanism.
This article presents our results on bilayers assembled using Cd$_6$Se$_6$
clusters that we further functionalize with the help of TM atoms, Co and Cr. We
also simulate scanning tunneling spectroscopy results by calculating the
tunneling properties of these pristine and functionalized bilayers with the
help of Bardeen, Tersoff and Hamann (BTH) formalism \cite{Bardeen, TerHam}
combined with first-principles density functional theory (DFT) approach.
\section{Computational details}
Our calculations for Cd$_6$Se$_6$ cluster-assembled bilayers are based on DFT
formalism as implemented in Vienna Ab-initio Simulations Package (VASP)
\cite{VASP}. The structural and electronic properties are obtained with the
help of exchange-correlation energy functional as given by Perdew, Burke and
Ernzerhof (PBE) with projector-augmented-wave (PAW) method used to describe the
core electrons \cite{pbe,paw}. The self-consistent convergence criterion of
energy is set to 10$^{-5}$~eV. The occupation numbers are treated according to
the Gaussian scheme with a broadening of 0.001~eV. The bilayers are constructed
(in the xy-plane) in such a manner that the two periodic images are separated
by a distance of 15~{\AA} (in the z-direction). In order to optimize the
structures, relaxation procedures are carried out according to conjugate
gradient algorithm. The relaxation is achieved with the help of
Hellmann-Feynman forces and the stress tensors by taking appropriate
derivatives of total energy of the unit cell in every iteration with the
convergence threshold on forces set as 0.01~eV/{\AA}. The Brillouin zone (BZ)
is represented by the Monkhorst-Pack \textbf{k}-point mesh of
$8\times8\times1$. For weak interactions involved in the computation, due to
the presence of bilayered structures, we have used PBE+D3 method with Grimme
vdW corrections as implemented in VASP \cite{VDW}. To understand the structural
stability, phonon bandstructure and density of states (DOS) are calculated
using density functional perturbation theory with the help of the linear
response method \cite{phonopy}.
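For reference, the settings above translate into VASP input files along the following lines. This is a minimal sketch that sets only the tags quoted in the text; all other tags (including the plane-wave cutoff, POSCAR and POTCAR, which are not specified here) are left to their defaults or omitted.
\begin{verbatim}
# Minimal sketch of the VASP inputs implied by the settings quoted above.
incar = {
    "EDIFF":  1e-5,    # self-consistent convergence criterion (eV)
    "ISMEAR": 0,       # Gaussian smearing of the occupation numbers ...
    "SIGMA":  0.001,   # ... with a broadening of 0.001 eV
    "IBRION": 2,       # conjugate-gradient relaxation of the ions
    "EDIFFG": -0.01,   # force convergence threshold of 0.01 eV/Angstrom
    "IVDW":   11,      # Grimme DFT-D3 van der Waals correction
}
with open("INCAR", "w") as f:
    f.writelines(f"{tag} = {val}\n" for tag, val in incar.items())

with open("KPOINTS", "w") as f:   # 8x8x1 Monkhorst-Pack mesh
    f.write("automatic mesh\n0\nMonkhorst-Pack\n8 8 1\n0 0 0\n")
\end{verbatim}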
In order to study the tunneling properties of these pristine and functionalized
bilayers, we use BTH formalism as implemented by He \textit {et al.}
\cite{pandey} in their scanning tunneling microsope (STM)-like setup. As per
this formalism, the electron tunneling current can be calculated in the
low-bias limit using first-order perturbation theory as follows:
\begin{eqnarray}
\hskip -2cm
I &=& \frac{4\pi e}{\hbar} \int_{-\infty}^{+\infty}\rho_s \left( \epsilon + \frac{eV}{2}\right)
\rho_t \left( \epsilon - \frac{eV}{2}\right) \nonumber \\
&& \times \, e^{-2d \sqrt {2(m/\hbar^2)(\phi_{av} - \epsilon)}} \nonumber \\
&& \times \left\{ f \left( \epsilon - \frac{eV}{2} \right)
\left[ 1 - f \left( \epsilon + \frac{eV}{2} \right) \right] \right. \nonumber \\
&& \:\:\: \left. - \, f \left( \epsilon + \frac{eV}{2} \right) \left[ 1 - f
\left( \epsilon - \frac{eV}{2} \right) \right] \right\} d\epsilon \nonumber
\end{eqnarray}
\noindent where $\rho_s$ and $\rho_t$ are the projected densities of states
(PDOS) of the sample and the tip respectively, $d$ is the tip to sample
distance, $\epsilon$ is the injection energy of the tunneling electron, $e$ is
the electronic charge, $m$ is the effective mass of the electron, $\hbar$ is
the reduced Planck constant, $\phi_{av}$ is the average work-function of the
sample and the tip, and $f$ is the Fermi distribution function. In this
formalism, owing to the low-bias criterion, $m$ and $\phi_{av}$ are assumed to be
constant. Since the tip and the sample are assumed to be in electrochemical
equilibrium, their Fermi energies are aligned and are taken as the reference
energy in the above equation. Bias-induced changes to the sample DOS, which
occur only at high applied bias, are not included. In our STM-like setup, the
sample is the bilayer (pristine/functionalized), while the tip is modeled by a
7-atom gold cluster. This fully relaxed tip geometry is chosen to mimic the
sharp STM tip. The DOS of the STM-tip is artificially broadened (broadening
factor 0.2~eV) to consider the broadening due to the semi-infinite nature of
the tip. While choosing the value of the broadening, we took into account the
reported value for the lifetime broadening of electrons in a cluster in a
scanning tunneling spectroscopy study \cite{BUK}.
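Given tabulated densities of states for the tip and the sample, the current integral above can be evaluated numerically along the following lines; the grid handling, unit conventions and constant prefactor here are illustrative assumptions, not the actual implementation of \cite{pandey}.
\begin{verbatim}
import numpy as np

def fermi(e, kT=0.025):          # Fermi function near room temperature (eV)
    return 1.0 / (1.0 + np.exp(e / kT))

def tunneling_current(eps, rho_s, rho_t, V, d, phi_av, kappa=1.025):
    """
    Trapezoidal evaluation of the BTH current integral (arbitrary units).
    eps:    energy grid (eV) with the common Fermi level at zero
    rho_s:  sample DOS evaluated at eps + eV/2 on the grid
    rho_t:  tip DOS evaluated at eps - eV/2 on the grid
    d:      tip-sample distance (Angstrom)
    kappa:  2*sqrt(2m)/hbar ~ 1.025 / (Angstrom sqrt(eV)), free electron
    """
    barrier = np.exp(-kappa * d * np.sqrt(np.clip(phi_av - eps, 0.0, None)))
    occ = (fermi(eps - V / 2) * (1 - fermi(eps + V / 2))
           - fermi(eps + V / 2) * (1 - fermi(eps - V / 2)))
    return np.trapz(rho_s * rho_t * barrier * occ, eps)
\end{verbatim}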
\section{Results and discussion}
\subsection{Geometry and electronic structure of CdSe bilayers}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.3\textwidth]{fig_1.eps}
\end{center}
\caption{Geometry of Cd$_6$Se$_6$ cluster. (a)~Top view and (b)~Side view. Cd
and Se atoms are indicated in magenta and green respectively. The colour scheme is
maintained throughout this article.
\label{fig:1}}
\end{figure}
The motivation behind this study is to examine whether an assembly of small CdSe
clusters can form stable 2D sheets that can further be used in different
applications. As a first step towards this, we chose the Cd$_6$Se$_6$ cluster, which
is the smallest wurtzite cage with $C_1$ symmetry as observed by Jose
\textit{et al.} \cite{jose}. The cluster consists of two planar Cd$_3$Se$_3$
clusters stacked on top of each other in chair conformation, where each
Cd$_3$Se$_3$ cluster forms a hexagon of side 2.60~{\AA} and the Cd-Se stacking
distance is 2.82~{\AA} (see Figure \ref{fig:1}).
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.7\textwidth]{fig_2.eps}
\end{center}
\caption{Initial and optimized geometries of Cd$_6$Se$_6$ cluster-assembly, for
Type-1 (a) and (d), Type-2 (b) and (e) and Type-3 (c) and (f) structures, respectively.
\label{fig:2}}
\end{figure*}
There may be a large number of possible 2D configurations to be found by
arranging Cd$_6$Se$_6$ clusters as building blocks. Here we design three
configurations and let them undergo unconstrained relaxation. These
configurations are shown in the upper panel of Figure \ref{fig:2}~(a), (b) and
(c), and are named as Type-1, Type-2 and Type-3 structures. The corresponding
geometries of these configurations upon relaxation are shown in Figure
\ref{fig:2}~(d), (e) and (f). In each of these cases, the clusters form
bilayered sheets that show buckling in both the layers. Out of these three
structures, Type-2 has the smallest number of atoms in the unit cell (6 Cd and
6 Se atoms) with a hexagonal lattice (7.83~{\AA}) i.e. the unit cell consists
of a single Cd$_6$Se$_6$ unit. Unit cell of Type-1 bilayer consists of four
units of Cd$_6$Se$_6$ cluster (lattice parameters, $a = 14.09~{\AA}$ and $b =
14.39~{\AA}$) and that of Type-3 bilayer consists of two units of Cd$_6$Se$_6$
cluster (lattice parameters, $a = 7.94~{\AA}$ and $b = 13.80~{\AA}$). Average
width of the bilayer in Type-1 (2.78~{\AA}) mostly remains uniform with the
most uneven distribution of in-plane Cd-Se bond lengths ranging from 2.65~{\AA}
to 2.78~{\AA}, amongst the three bilayers. For Type-2 structure, the bilayer
width varies between 2.82~{\AA} and 2.91~{\AA}. The average Cd-Se bond length
in this case is 2.64~{\AA} and the hexagons formed by Cd-Se atoms are not
regular. Average width of the bilayer in Type-3 geometry is 2.82~{\AA} with
in-plane Cd-Se bond lengths varying from 2.62~{\AA} to 2.78~{\AA}. Comparing
all the relaxed geometries we observe that Type-2 structure has the highest
symmetry amongst the three types and it shows similar structure to that of
$AA'$ type stacking in $h$-BN bilayers \cite{heine}. $AA'$ is shown to be the
most stable stacking order in $h$-BN bulk and bilayer forms, both
experimentally as well as theoretically. Type-1 structure shows a bilayer
composed of hexagons and quadrilaterals, whereas optimized geometry of Type-3
structure contains a stretched array formed of octagons and hexagons.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig_3.eps}
\end{center}
\caption{Phonon bandstructure and DOS of Cd$_6$Se$_6$ cluster-assembly, for Type-2 structure.
\label{fig:3}}
\end{figure}
The relaxed geometries were further compared energetically to understand their
relative stabilities. We calculated the binding energy ($B.E.$) per atom as
follows,
\begin{equation}
B.E./atom = \frac {E_{Total} - \{n \times (E_{Cd} + E_{Se})\}} {n}
\end{equation}
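As an explicit illustration of this formula (with placeholder numbers rather than values from our calculations, and taking $n$ to count the Cd-Se pairs in the unit cell):
\begin{verbatim}
# Placeholder illustration of the binding-energy formula; the energies
# below are hypothetical, and n counts the Cd-Se pairs in the unit cell.
E_total = -40.0            # total energy of the relaxed unit cell (eV)
E_Cd, E_Se = -0.3, -0.9    # isolated-atom reference energies (eV)
n = 6                      # Cd-Se pairs in the Type-2 unit cell
print((E_total - n * (E_Cd + E_Se)) / n)   # B.E. per the formula above
\end{verbatim}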
\begin{table}
\begin{center}
\begin{tabular}{c c}
\hline
~\\
Structure & $B.E./atom$ \\
& (eV) \\
\hline \hline
~\\
Type-1 & -2.67 \\
~\\
Type-2 & -2.79 \\
~\\
Type-3 & -2.69 \\
~\\
\hline
\end{tabular}
\end{center}
\caption{Comparison between the binding energies of Type-1, Type-2 and Type-3 structures.
\label{table:1}}
\end{table}
The values of $B.E./atom$ are shown in Table \ref{table:1} for all the three
structures and they differ only marginally. It can be seen that the Type-2
structure, which has the highest symmetry, also has the lowest $B.E.$ value.
This indeed reiterates that layered materials are predominantly formed in
hexagonal symmetries, including different stacking orders of the hexagonal
layers \cite{heine}. To confirm the stability of the Type-2 assembly (owing to its
lowest $B.E.$ value), we calculated its phonon bandstructure, which is shown in
Figure \ref{fig:3}. With the exception of small imaginary frequencies around
the $\Gamma$ point of the BZ, the phonon bandstructure does not show any
unstable phonon modes, confirming its stability. The small imaginary frequencies
(\textless 10~THz) can be attributed to the limitations of the numerics. \\
Further, it is imperative to study whether the predicted 2D
material withstands small temperature fluctuations. Hence, we performed
$ab~initio$ molecular dynamics (MD) simulations at room temperature, i.e. 300K,
for $\sim$10000 fs. We observed that, apart from some small thermal
fluctuations, the Type-2 structure remains intact. Thus, our observed lowest
energy cluster-assembled 2D bilayer of CdSe is thermally stable up to 300K. \\
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.85\textwidth]{fig_4.eps}
\end{center}
\caption{Electronic structure and PDOS of Cd$_6$Se$_6$ cluster-assembly are shown for
Type-1 structure in (a) and (d), for Type-2 structure in (b) and
(e) and for Type-3 structure in (c) and (f), respectively.
\label{fig:4}}
\end{figure*}
Figure \ref{fig:4} shows the electronic bandstructures and PDOS of Cd$_6$Se$_6$
cluster-assembled bilayers. The Type-1 structure has a direct band gap of 0.90~eV
at the $\Gamma$ point of the BZ, whereas the Type-2 structure has an indirect band gap
of 1.28~eV from $M \rightarrow \Gamma$. The Type-3 structure also shows an indirect
band gap of 1.23~eV where the conduction band minimum (CBM) lies between $X$
and $Y$ points and the valence band maximum (VBM) lies at the $\Gamma$ point of
the BZ. In all the three structures, the VBM consists of Se $p$ and Cd $d$
states with Se $p$ being predominant and the CBM is made up of Cd $s$ states.
CdSe, in its bulk form has been reported to have experimental band gap of
1.84~eV at 0~K \cite{kittel}, whereas our calculations for bulk wurtzite form
of CdSe show a band gap of 0.53~eV. This discrepancy is due to the well-known
tendency of DFT to underestimate band gaps, and we expect that, just as in the
case of bulk CdSe, the actual band gaps of our bilayers would be higher than our
reported values. Hence, the cluster-assembled bilayers studied in the current
work may show band gaps larger than that of bulk CdSe.
\subsection{Transition metal doped CdSe bilayers}
We now present the effects of TM adatoms in our Type-2 CdSe bilayer that we
obtained as the most stable cluster-assembled sheet. TM doping in a
semiconductor is a viable way to improve its properties as a candidate material
for spintronic devices. Group II-VI semiconductors doped with different TM atoms
have been investigated as diluted magnetic semiconductors by various
researchers. Co doping is known to induce antiferromagnetic behaviour in bulk
CdSe \cite{Niu} whereas Cr doping in II-VI semiconductors shows ferromagnetic
behaviour \cite{Niwa, Shri}. For our study, we investigated their effects on
structural, magnetic and electronic properties of the bilayer. There are two
fundamentally different locations for the introduction of adatoms: one
in between the layers and the other on top. Our calculations indicate that upon
insertion of a TM adatom in between the layers, the structure gets highly
distorted, resulting in a non-Cd$_6$Se$_6$ configuration. Therefore, we report only
the latter case hereafter. We present two cases of Co/Cr adsorbed on top of the
Type-2 Cd$_6$Se$_6$ cluster-assembled bilayer. In the first one, a single TM atom
is placed above the center of every hexagon, giving 50\% adsorption per Cd-Se pair
in the bilayer, while in the second case a single TM atom is placed above the
center of every alternate hexagon, giving $\sim$17\% adsorption per Cd-Se pair. With
these initial geometries we performed unconstrained relaxation to obtain the
minimum energy solution. Additionally, magnetic ground state is determined by
unconstrained minimization of all possible magnetic configurations. Below we
report only the ground state magnetic configuration of 50\% and 17\%
concentration of Co and Cr adatoms over Type-2 CdSe bilayer.
\subsubsection{Structural, electronic and magnetic properties : Co-adatom}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig_5.eps}
\end{center}
\caption{Side and top views of the optimized geometries of Type-2 Cd$_6$Se$_6$
cluster-assembled bilayer with Co adsorption at (a)~50\% and (b)~17\%
adsorption per Cd-Se pair in the bilayer. Co atoms are indicated in purple and
the colour is maintained throughout this article. Total DOS and PDOS of the
same bilayers with Co adsorption are shown in (c)~at 50\% and in (d)~at 17\%
per Cd-Se pair in the bilayer. Se $p$ and Co $d$ states are indicated in blue
and black respectively. Spin densities of the bilayers with Co adsorption are
shown in (e) and (f) at 50\% and 17\% adsorption per Cd-Se pair in the bilayer
respectively. The isosurfaces shown in the figure are taken at one fourth of
the maximum isovalue. The up-spin and the down-spin densities are shown in blue
and yellow respectively.
\label{fig:5}}
\end{figure*}
We notice that for the higher concentration of Co adatoms (50\%), a tendency of
clustering of Co atoms is seen over the bilayer (see Figure \ref{fig:5} (a)).
Crossing over the underlying bonds, Co atoms form triangular geometries over
every alternate hexagon along the armchair direction. The distorted triangle
has a mean bond length of 2.23~{\AA}, and is located at a distance of
$\sim$2.11~{\AA} above the bilayer. The underlying bilayer also distorts to
accommodate the adatoms. The resulting system is ferromagnetic with a magnetic
moment of 10.19 $\mu_B$/unit cell and has a $B.E.$ per atom of -3.73~eV. This
behaviour is also seen from the total DOS and PDOS in Figure \ref{fig:5}~(c).
The spin-up DOS for this configuration shows a gap near the Fermi energy, whereas the
spin-down DOS shows a conducting nature, resulting in half-metallic behaviour.
The spin-down channel mainly conducts through Co $d$ states, which are empty in the
spin-up channel. Se $p$ states also acquire a small magnetization due to the
presence of Co atoms. This behaviour is different from the bulk, where Co impurities
are antiferromagnetically coupled. The antiferromagnetic state of the same
configuration lies 0.58~eV above the ferromagnetic state.
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig_6.eps}
\end{center}
\caption{Side and top views of the optimized geometries of Type-2 Cd$_6$Se$_6$
cluster-assembled bilayer with Cr adsorption at (a)~50\% and (b)~17\%
adsorption per Cd-Se pair in the bilayer. Total DOS and PDOS of the same
bilayers with Cr adsorption at (c)~50\% and (d)~17\% per Cd-Se pair in the
bilayer. Cd $s$, Se $p$ and Cr $d$ states are indicated in red, blue and black
respectively. In Figure (d), we have additionally plotted the PDOS for two
different sites of Se and Cr atoms in light blue and grey respectively, that
clearly demonstrates the antiferromagnetic coupling. The sites chosen are
indicated in Figure (b). Spin densities of the bilayers with Cr adsorption at
(e)~50\% and (f)~17\% adsorption per Cd-Se pair in the bilayer. The isosurfaces
shown in the figure are taken at one fourth of the maximum isovalue. The
up-spin and the down-spin densities are shown in blue and yellow respectively.
\label{fig:7}}
\end{figure*}
Upon reducing the concentration of Co to 17\%, as seen from Figure
\ref{fig:5}~(b), similar to the previous case, the adatoms move away from their
initial positions and the underlying bilayer distorts. Similar to its 50\%
counterpart, the TM atoms are ferromagnetically coupled and the system has a
$B.E.$ of -2.87~eV/atom. The magnetic moment now reduces to 1.00 $\mu_B$/unit
cell. Co atoms have an average distance of 7.42~{\AA} between each other. The
ferromagnetic nature is also seen from the total and site projected DOS plots
shown in Figure \ref{fig:5}~(d). We can see the presence of gap states near
Fermi energy in both spin-up and spin-down channels that originate mainly due
to the presence of Co $d$ states. Thus, one can see the possibility of tuning the
band gap and magnetism using the concentration of Co. Moreover, one can also
speculate that a certain concentration ($\sim$50\%) of Co adatoms may give rise to
half-metallic phases - a feature that has been rarely investigated in 2D
materials.
\subsubsection{Structural, electronic and magnetic properties : Cr-adatom}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.6\textwidth]{fig_7.eps}
\end{center}
\caption{(a)~Schematic illustration of the STM-like set-up having Type-2
Cd$_6$Se$_6$ bilayer as the sample with Au probe tip. Inset shows the geometry
of the Au probe tip from various angles. (b)~DOS plot of the Au probe
tip.
\label{fig:10a}}
\end{figure*}
Cr adsorption at 50\% concentration, upon full relaxation, also shows a
clustering tendency, but unlike Co, where atoms form islands, here the atoms form
ribbons. As seen in Figure \ref{fig:7}~(a), Cr adatoms form ribbons of width
5.07~{\AA} along the armchair direction, resulting in an array of triangular
structures. The mean bond length in the triangles is 3.60~{\AA} and two stripes
are separated by 9.45~{\AA}. Unlike Co adsorption, only a small degree of distortion
is observed in the underlying CdSe bilayer. The triangular array obviously opens up
the possibility of several magnetic structures. We performed a series of
calculations of ferromagnetic and antiferromagnetic orderings and found that the
ferromagnetic and antiferromagnetic states are barely separated in energy by
only 1~meV/atom with the value of $B.E./atom$ as -3.41~eV. The magnetic moment
in the case of the ferromagnetic phase is 1.00$\mu_B$/unit cell. In Figure \ref{fig:7}
(c), we present the total DOS and PDOS of the ferromagnetic phase, which mainly
originates from Cr $d$ states with Cd $s$ and Se $p$ states showing their
presence by acquiring small magnetization. \\
On the other hand, the lower concentration of Cr (17\%) prefers to be in the
antiferromagnetic phase. As seen from Figure \ref{fig:7}~(b), the geometry of
the bilayer does not change much and despite being $\sim$7.84~{\AA} apart, Cr
atoms are coupled antiferromagnetically (See Figure \ref{fig:7} (d)). In the
figures of total DOS and PDOS (See Figure \ref{fig:7} (d)), the antiferromagnetic
coupling is indicated by showing $d$ states of two Cr atoms with black and
grey; and $p$ states of two Se atoms with blue and light blue. As discussed
before, for both the concentrations of Cr, the effective magnetic moment is
fairly small. This is also evident from the spin density plots of these
structures shown in Figure \ref{fig:7} (e) and (f). It is worth noticing that for
the higher concentration, the magnetic order is destroyed. This behaviour is
opposite to that of the bulk Cr-doped CdSe system. Thus, we expect that
with a further increase in concentration, the system will become non-magnetic. The
concentration of Cr thus offers a viable way to achieve desired magnetic phases. \\
The most noticeable difference between the two adatoms is that changing the
concentration of Co adatoms can tune the band gap and can help achieve a desirable
magnetic moment, whereas changing the Cr concentration changes the magnetic
nature of the bilayer. The Co adatom binds strongly, which also results in
clustering of the adatoms and structural deformation of the bilayer. Cr adatoms, on
the other hand, form highly ordered patterns, maintaining the structure of the
underlying bilayer.
\subsubsection{Tunneling properties}
We now investigate the pristine and TM doped bilayers with the help of BTH
formalism. The BTH formalism is well-suited in the limit of small bias voltage
($V$), $eV \ll \phi_m$, where $\phi_m$ is the work function of the tip.
Considering the typical metal work function, $\phi_m = 4$~eV, a bias voltage
range below 2~V is typically suitable for the BTH formalism \cite{FunPico}. In our
current work, the STM-like setup consists of the pristine/functionalized Type-2
Cd$_6$Se$_6$ bilayer as the sample and, as described in the computational
details, a fully relaxed 7-atom gold cluster as the tip, with its DOS
artificially broadened by 0.2~eV (see Figure \ref{fig:10a}~(a)). It should be
noted that the Au$_7$ cluster that we incorporate as a tip has a paramagnetic
ground state, which is also evident in the DOS of the tip in Figure
\ref{fig:10a}~(b).
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=0.7\textwidth]{fig_8.eps}
\end{center}
\caption{Tunneling properties of Cd$_6$Se$_6$ cluster-assembled bilayer showing
(a)~I-V and (b)~dI/dV characteristics and (c)~constant height mode image.
Depending upon the position of the tip over the bilayer, the graphs are plotted
in red where the tip is held over a Cd atom and in black where the tip is held
over an Se atom. Bright regions in Figure (c) indicate Cd and Se atoms whereas
the dark regions indicate the voids.
\label{fig:11_a}}
\end{figure*}
In Figure \ref{fig:11_a} we present the calculated tunneling characteristics of
Type-2 Cd$_6$Se$_6$ bilayer for a low-bias range of -1.0~V to 1.0~V. The I-V
characteristics are computed at Cd and Se sites on the bilayer. By positioning
the tip exactly above these sites, we obtain the spatially resolved tunneling
spectra. These spectra are directly dependent on the sum of the relevant site
projected DOS of atoms in that plane and the adjoining planes till the
exponential distance factor in the tunneling current equation becomes
negligible. I-V characteristics for the bilayer are shown in Figure
\ref{fig:11_a}~(a). The curves are obtained upon sweeping the tip bias and
calculating the current at a particular lateral tip position, keeping the tip
to sample distance fixed ($\sim$4.5~{\AA}). The I-V characteristics show
a rectifying nature, since no states are available in the gap region (1.28~eV)
to contribute to the tunneling current. Figure \ref{fig:11_a}~(b) shows the
differential conductance (dI/dV) characteristics, which correlate well with the
LDOS of the structure calculated using DFT. Changing the tip-sample separation
affects the magnitude of tunneling current, but the nature of graphs remains
unaffected. We also simulate the constant height mode STM images for the
bilayer as shown in Figure \ref{fig:11_a}~(c). These images provide the
topography of this quasi-2D structure. The height above the bilayer at which
the constant height mode image is taken, is chosen such that the optimum
resolution of the bilayer is obtained. The geometric features of the bilayer
are clearly seen in this image indicating the hexagonal patterns in the form of
bright spots (i.e. Cd and Se atoms). The brightness of the atoms shows their
relative proximity to the tip.
\begin{figure*}[ht]
\begin{center}
\includegraphics[height=0.8\textheight]{fig_9.eps}
\end{center}
\caption{Geometries of Cd$_6$Se$_6$ cluster-assembled bilayer with (a)~50\% and
(b)~17\% Co adsorption showing the numbering of the positions at which the STM
tip is placed to calculate I-V ((b),(f)) and dI/dV ((c),(g)) characteristics
along with constant height mode images ((d),(h)) of the corresponding
structures. Bright regions in Figures (d) and (h) indicate atoms.
\label{fig:11}}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[height=0.8\textheight]{fig_10.eps}
\end{center}
\caption{Geometries of Cd$_6$Se$_6$ cluster-assembled bilayer with (a)~50\% and
(b)~17\% Cr adsorption showing the numbering of the positions at which the STM
tip is placed to calculate I-V ((b),(f)) and dI/dV ((c),(g)) characteristics
along with constant height mode images ((d),(h)) of the corresponding
structures. Bright regions in Figures (d) and (h) indicate atoms.
\label{fig:12}}
\end{figure*}
We now focus our attention on the tunneling properties of the TM
doped bilayer for different compositions. The results for these are compiled in
Figures \ref{fig:11} and \ref{fig:12} for Co doped and Cr doped bilayers
respectively. In these figures, the tip positions are numbered and colour
coded. The graph for each position is then drawn with the same colour in all the
subsequent figures, depicting I-V characteristics corresponding to various
positions numbered as I to VII in Figures (b) and (f), dI/dV curves in Figures
(c) and (g) and constant height mode images in (d) and (h).
Let us first discuss the results for Co adsorbed bilayer presented in Figure
\ref{fig:11}. The I-V curves for all the positions for higher (Figure
\ref{fig:11}~(b)) as well as lower concentration (Figure \ref{fig:11}~(f))
maintain the linearity near zero bias, which is consistent with their
overlapping valence and conduction states seen in the DOS. It is known that the
tunneling current is directly related to the convolution of relevant regions of
the local density of states (LDOS) of tip and sample. Since there is finite
LDOS at Fermi energy we see increase in tunneling current with increase in
applied bias voltage. This behaviour is consistently present at every position
of the 50\% Co-doped bilayer, but as the bias voltage reaches 0.5 V, we observe
that the tunneling current starts decreasing with an increase in the bias
voltage. The tunneling current decays over a certain bias voltage range and
begins to rise again as the bias increases further. This feature is known as
negative differential conductance (NDC) and is characteristic of a tunnel
diode. The NDC is seen in the dI/dV characteristics, which correlate well with the
LDOS calculated using DFT (see Figure \ref{fig:11}~(c)). In case of 17\%
Co-doped bilayer, the NDC arises at tip bias of $\sim$0.20 V and it is much
more pronounced than that in case of 50\% Co-adsorption. dI/dV characteristics
of 17\% Co-doped bilayer are seen in Figure \ref{fig:11}~(g). For both the
concentrations, the NDC features are also seen in the negative bias region. NDC
indicates that no states are available for tunneling in that energy range,
resulting in little overlap between the LDOS of the Au-tip and the Co-doped bilayer;
therefore, the tunneling current decreases even as the bias voltage
increases. The onset of NDC results from a sharp drop in the tunneling
probability when bias voltage reaches an energy level at which tunneling is
forbidden. For higher composition of Co-dopant (50\%), the higher density of TM
states allows the current to increase more-or-less in linear fashion. Reduction
in the Co-dopant concentration reduces the number of energy levels allowed for
transition, hence reducing the tunneling current and giving rise to more
pronounced NDC features. Constant height mode STM images are also depicted in
Figure \ref{fig:11}~(d) and (h). These images reveal the geometric features of
the Co-doped bilayers. In Figure \ref{fig:11}~(d), one can notice the
triangular patterns formed by Co atoms on top of the bilayer. But only the
atoms in the alternate rows of triangles are resolved well due to the fact that
the alternate triangles are placed slightly ($\sim$0.2~{\AA}) closer to the bilayer
than others. In Figure \ref{fig:11}~(h), Cd and Se atoms in the same plane as
that of Co are also observed as bright spots.
Figure \ref{fig:12} shows the tunneling characteristics of the bilayer with Cr
adsorption. Similar to the Co-adsorption, one can observe the NDC features in
Cr-doped bilayer for both the concentrations. The NDC exists in the region of
bias voltage where there is minimum in the tip DOS. This is also confirmed in
dI/dV plots for both the concentrations (see Figures \ref{fig:12}~(c) and (g)).
Constant height mode STM images for Cr-doped bilayers are shown in Figure
\ref{fig:12}~(d) and (h). In Figure \ref{fig:12}~(d), the ribbon-like patterns
made by Cr atoms are clearly seen. The atoms that look diffused in these
ribbons are the ones that are not in the same plane. Similarly, in Figure
\ref{fig:12}~(h), one can see the Cr-atoms in the form of hexagonal patterns.
\section{Conclusions}
To summarize, we predicted and analyzed novel quasi-2D structures of CdSe built
intuitively using clusters as building blocks. We also functionalized them to
demonstrate the ability to control their electronic and magnetic properties.
The most stable configuration of the CdSe bilayer shows an indirect band gap of
1.28~eV and the I-V characteristics of a Schottky diode. The bilayer
shows dynamical stability through essentially all-real phonon modes and sustains
temperatures up to 300K, as evident from our $ab~initio$ MD calculations.
We used TM atoms, Co and Cr, to functionalize the most stable configuration of
the bilayer. We found that even a small concentration of Co adatoms reorganizes
the bilayer to make it ferromagnetic. Upon increasing the Co concentration, the
system shows tell-tale signs of half-metallicity. It remains to be seen whether
pure half-metallic behaviour can be obtained for a certain concentration. On the
other hand, Cr doping shows a transition from ferromagnetic to antiferromagnetic
ordering upon decreasing the adatom concentration. The exciting effect of
doping is seen in the I-V characteristics, which we compute using the BTH
formalism. While the pristine bilayer shows classic Schottky diode-like features,
we found that the functionalized system shows the characteristics of a tunnel
diode via NDC.
We expect that our studies bring out the versatile nature of the
cluster-assembled CdSe bilayer that can find novel applications in the field of
electronics and spintronics.
AK acknowledges the financial support from the Nanomission Council, Department
of Science \& Technology, Government of India (Grant code: SR/NM/NS-15/2011(G))
through a major research project and DST-PURSE and DST-FIST grants to
Savitribai Phule Pune University and Department of Physics, respectively. DS
acknowledges financial support from Universities Grant Commission - Basic
Scientific Research (UGC-BSR) through a fellowship. We also thank C-DAC, Pune
for use of their computing facilities. The geometries of all the structures
shown in this article are generated using VESTA \cite{vesta}.
\section{Introduction}
Arguably the most promising short-term application of quantum information technology is in the field of cryptography, with quantum key distribution (QKD) the canonical example \cite{bb84-orig,Ekert:1991p460}. In the years since its inception, researchers have worked to improve the rigour and generality of security proofs, design protocols that maximise performance and bridge the gap between theoretical proposal and experimental implementation \cite{Lo:2014ex,Scarani:2009p378}. On the security side, one looks to derive a security proof that is {\it composably} secure against arbitrary eavesdropping attacks whilst including all finite-size statistical effects \cite{Renner:2005p464} (see also \cite{Tomamichel:2012p7120}). Practically, one searches for schemes that maximise both the raw clock-rate (the number of transmissions per second) and the number of secure bits per transmission to achieve the largest overall secret key rate at a given transmission distance.
Most photonic QKD implementations fall into one of two regimes. Traditional discrete variable (DV) schemes encode the secret key in a two-dimensional Hilbert space such as the polarisation degrees of freedom of a single photon. Extending from the original works \cite{bb84-orig,Ekert:1991p460}, these protocols now enjoy universal security proofs \cite{Tomamichel:2012p7120} that function with reasonably small finite-size data blocks, and converge to the ideal Devetak-Winter rates for collective attacks \cite{Devetak:2005p5086} in the asymptotic limit. Continuous variable (CV) schemes utilise an infinite-dimensional Hilbert space, commonly the quadratures of the optical field \cite{Reid:2000p5545,Grosshans:2002p377}. Whilst the finite range and precision of real-life detectors ensures the key is never perfectly continuous, CVQKD nevertheless has the capability to achieve greater than one bit per transmission. Furthermore, composable, general, finite-size CVQKD security proofs have also appeared, although the present results either require extremely large block sizes \cite{Leverrier:2015he}, or are very sensitive to losses \cite{Furrer:2012p8365,Furrer:2014uy} and fail to converge to the Devetak-Winter rates.
This behaviour is in large part due to the different way loss manifests itself in DV and CV systems. If a single photon is sent through an extremely lossy channel, it will only be detected with very low probability. However, in the instances where a detection does take place, the quantum state is largely preserved and the security is unaffected. Therefore, one can in principle achieve high rates over lossy channels by improving the repetition rate of the photon source or multiplexing. But for coherent or squeezed states commonly used in CVQKD, the loss degrades the signal for all transmissions, rendering the information advantage so small that even modest experimental imperfections will eventually prohibit key extraction.
An alternative approach is to encode the key in the continuous degrees of freedom of single photons, inheriting both the loss tolerance of DVQKD and the larger encoding space of CV protocols \cite{Zhang:2008jh}. These time-frequency schemes are primarily pursued via the temporal and spectral correlations of single photons emitted during spontaneous parametric down conversion (SPDC), and the security stems from the conjugate nature of frequency and arrival time measurements. One can use fast time-resolving detectors to directly measure photon arrival times and a grating spectrometer to measure frequency. It is also possible to adopt just the former detection scheme and convert to frequency measurements via dispersive optics \cite{Mower:2013tu}, or solely the latter and convert to time via phase modulation \cite{Nunn:2013kf}. Significant progress has been made on the theoretical \cite{Zhang:2014gt} and experimental fronts \cite{Lee:2014vm,Zhong:iu}; however, a general composable security proof is lacking. Exploiting techniques from traditional CVQKD \cite{Navascues:2006p805,GarciaPatron:2006p381,Leverrier:2010p150}, security proofs have been derived against Gaussian collective attacks and extended to incorporate finite-size effects \cite{Lee:2015db} and decoy-states \cite{Bunandar:2015wx}, culminating in a result including both \cite{Bao:jh}.
In this work we present a finite-size, composable security proof for TFQKD by combining the entropic uncertainty proofs for CVQKD \cite{Furrer:2012p8365} with efficient, finite-size decoy-state analysis \cite{Ma:2005ua,Lim:2014uw} for DVQKD. The resultant proofs allow high key rates to be distributed over urban and inter-city distances with reasonable block sizes.
\section{Security Proof I}
\subsection{Generic protocol}
A fairly generic TFQKD decoy-state protocol can be summarised as follows.
\begin{enumerate}
\item Quantum transmission and measurement: Quantum states are distributed from Alice to Bob through a potentially eavesdropper-controlled quantum channel. In particular, using a pulsed SPDC source she prepares time-frequency entangled photons. Each round of transmission is defined by a time frame of length $T_f$ which is centred about the peak of each pump pulse. Alice randomly varies her pump power between three values $\mu_1, \mu_2, \mu_3$, according to probabilities $\{ p_{\mu_1},p_{\mu_2},p_{\mu_3}=1-p_{\mu_1}-p_{\mu_2}\}$. Immediately after the channel, we make the worst-case assumption that Eve completely purifies the shared state, $\rho_{AB}$, such that the overall tripartite state, $\ket{ABE}$, is pure. Alice and Bob then randomly switch between measuring the frequency or arrival time of the photons. They choose either the time or frequency measurement for key generation and use the other to check for an eavesdropper's presence. To analyse both possibilities, we will write the two incompatible observables as positive operator valued measurements (POVMs) $(\mathbb{X_A},\mathbb{P_A})$ for Alice and $(\mathbb{X_B},\mathbb{P_B})$ for Bob. Here we will always denote $\mathbb{X}$ as the key-generating observable and $\mathbb{P}$ as the check.
\item Parameter Estimation: Alice and Bob first announce their measurement choices in each round over a public, but authenticated, classical channel and discard all instances where they differ, as well as any instances where two or more detections occur in the same frame. This results in raw, correlated variables $(X_A,X_B)$ which take values $x_A = [x_A^1,x_A^2,...,x_A^{n_X}]$, $x_B = [x_B^1,x_B^2,...,x_B^{n_X}]$, which are strings of length $n_X$, distributed according to a probability distribution $p_{x_A,x_B} = \mathrm{Pr}(X_A = x_A, X_B = x_B)$, and similarly for $P_A$ and $P_B$, which give strings of length $n_P$. Throughout, we will use uppercase to denote random variables and lowercase to denote a corresponding string that is an instantiation of that variable.
Alice then announces which intensity was used in each transmission and the results are further partitioned into substrings, e.g. $x_A$ is partitioned into $x_{A,\mu_k}$ of length $n_{X,\mu_k}$ for $k\in \{1,2,3\}$, and similarly for the other strings. Using the number of detections for each pump power and decoy-state analysis, Alice and Bob lower bound the number of signals that originated from a single-photon transmission. They then announce all outcomes for the $\mathbb{P}$ observables and evaluate the quality of their correlations.
If the quality is sufficiently high (in a way we will make precise later) they proceed, otherwise they abort. Call the passing probability $p_{\mathrm{pass}}$. Conditional on passing, they are left with raw keys which are partially correlated between Alice and Bob as well as the eavesdropper. The overall conditional state between Alice, Bob and Eve is a classical-quantum state of the form,
\eqn{\rho_{X_AX_BE} = \sum_{x_A,x_B} p_{x_A,x_B}\ket{x_A,x_B}\bra{x_A,x_B}\otimes\rho_E^{x_A,x_B} \label{meas}}
\item Reconciliation: Either Alice or Bob is designated the reference partner, which means that their string is designated as the `correct' string. The reference partner then sends information to the other party to correct any errors between the two strings. If the reference partner is Alice, and the reconciliation information flows in the same direction as the quantum transmission, this is called direct reconciliation (DR). The converse is called reverse reconciliation (RR). Here we will consider the DR case. If the reconciliation is successful, Alice and Bob will now have perfectly correlated strings $x_B = x_A$ which are still partially known to Eve. In fact, Eve will usually have learned some more information about the strings during the reconciliation process.
The amount of `leaked' information is denoted $l_{\mathrm{EC}}$. There is also an additional loss from a reconciliation check procedure, where Alice announces a further string of size $\log(1/\epsilon_c)$ to ensure the strings are identical except with probability $\epsilon_c$.
\item Privacy Amplification: Alice and Bob now apply a function, $f$, drawn randomly from a family, $\mathcal{F}$, of two-universal hashing functions to their measurement strings giving $\{f(x_A), f(x_B)\} = \{s_A, s_B\}$. The final state is now
\eqn{\rho_{S_AS_BE} = \sum_{s_A,s_B} p_{s_A,s_B}\ket{s_A,s_B}\bra{s_A,s_B}\otimes\rho_E^{s_A,s_B} \label{S}}
This ideally results in strings of length $l$ which are perfectly correlated, uniformly random, and completely independent of Eve. These are the final secret keys. The goal of a security analysis is to find a lower bound on the number of extractable bits, $l$, for any given protocol.
\subsection{Composable security}
We now formally state the definitions of composable security and a formalism to quantitatively relax from the ideal case \cite{Renner:2005p464,Tomamichel:2012p7120}.
\begin{Definition}
A protocol that outputs a state of the form (\ref{S}) is
\begin{itemize}
\item {\it $\epsilon_c$-correct} if $\mathrm{Pr}[S_A \neq S_B] \leq \epsilon_c$ and {\it correct} if the condition holds for $\epsilon_c = 0$.
\item $\epsilon_s$-secret if
\eqn{\hspace{0.2cm} p_{\mathrm{pass}} \frac{1}{2} ||\rho_{S_AE} - \tau_{S_A}\otimes\sigma_E|| \leq \epsilon_s \label{sec}}
where $\rho_{S_AE} = \mathrm{tr}_B(\rho_{S_AS_BE})$, $||\cdot||$ is the trace norm and $\tau_{S_A}$ is the uniform (i.e. maximally mixed) state over $S_A$. It is {\it secret} if the condition holds for $\epsilon_s = 0$.
\end{itemize}
The protocol is {\it ideal} if it is both correct and secret and {\it $\epsilon_{\mathrm{sec}}$-secure} if it is $\epsilon_{\mathrm{sec}}$-indistinguishable from an ideal protocol. This means that there is no device or procedure that can distinguish between the actual protocol and an ideal protocol with probability higher than $\epsilon_{\mathrm{sec}}$. If the protocol is $\epsilon_s$-secret and $\epsilon_c$-correct then it is $\epsilon_{\mathrm{sec}}$-secure for any $\epsilon_{\mathrm{sec}}> \epsilon_c + \epsilon_s$.
\end{Definition}
The choice of error reconciliation fixes $\epsilon_c$ so the goal is now to find a method to bound $\epsilon_s$. First, we briefly introduce the entropic quantities appropriate for finite-size analysis. For a random variable $X$ coupled to a quantum system $E$ associated with a Hilbert space $\mathcal{H}_E$ with the joint system described by a classical-quantum state $\rho_{XE} = \sum_x p_x \ket{x}\bra{x} \otimes \rho_E^x$, the conditional min-entropy of $X$ can be defined as the negative logarithm of the optimal probability of successfully guessing $X$ given $E$ \cite{Konig:2009uj}, that is,
\eqn{H_{\mathrm{min}}(X|E)_{\rho_{XE}} = -\log\bk{\sup_{\{E_x\}} \sum_x p_x \mathrm{tr}\bk{E_x\rho_E^x}}}
where the supremum is taken over all POVMs and the logarithm here and throughout is taken to be base 2. A related quantity is the conditional max-entropy
\eqn{H_{\mathrm{max}}(X|E)_{\rho_{XE}} = 2\log\bk{\sup_{\sigma_E}\sum_{x} F(p_x\rho_E^x,\sigma_E)}}
where $F(\rho,\sigma) = \mathrm{tr}\bk{|\sqrt{\rho}\sqrt{\sigma}|}$ is the quantum fidelity and the supremum is over all physical states in $\mathcal{H}_E$, that is $S(\mathcal{H}_E) = \{ \sigma_E \in \mathcal{H}_E|\sigma_E\geq0,\mathrm{tr}(\sigma_E) = 1\}$. One can also define smoothed versions of these quantities that consider $\epsilon$-regions in the state space. Concretely we have,
\eqn{H_{\mathrm{min}}^\epsilon(X|E)_{\rho_{XE}} &=& \sup_{\tilde{\rho}_{XE}} H_{\mathrm{min}}(X|E)_{\tilde{\rho}_{XE}} \nonumber\\
H_{\mathrm{max}}^\epsilon(X|E)_{\rho_{XE}} &=& \inf_{\tilde{\rho}_{XE}} H_{\mathrm{max}}(X|E)_{\tilde{\rho}_{XE}}}
where the supremum and infimum are taken over all states $\tilde{\rho}_{XE}$ that are $\epsilon$-close in the purified distance, defined as $\mathcal{P}(\rho,\sigma) = \sqrt{1-F^2(\rho,\sigma)}$. We again emphasise that throughout this work we will be considering the classical-quantum states conditioned on the parameter estimation test having been passed. For the rest of this work we will suppress the state subscript in the entropies.
If the guessing probability is low then the variable $X$ must have a high degree of randomness with respect to an observer holding $E$. Intuitively then, we might expect the conditional smooth min-entropy to be related to the number of secret bits extractable from variable $X$ with failure probability $\epsilon$ as described in Definition 1. This intuition is usefully formalised in the Leftover Hash Lemma (with quantum side information) \cite{Tomamichel:2011ci,Berta:2011p8367}.
\begin{Lemma}
\label{leftover}
Let $\rho_{X_AX_BE}$ be a state of the form (\ref{meas}) where $X_A$ is defined over a discrete-valued and finite alphabet, $E$ is a finite or infinite dimensional system and $R$ is a register containing the classical information learnt by Eve during information reconciliation. If Alice applies a hashing function, drawn at random from a family of two-universal hash functions \footnote{Let $X,S$ be sets of finite cardinality $|S|\leq|X|$. A family of hash functions $\mathcal{F}$ is a set of functions $f\colon X\rightarrow S$ such that for all $x \neq x' \in X$, $\mathrm{Pr}_f[f(x) = f(x')]\leq \frac{1}{|S|}$, where $f$ is drawn uniformly from $\mathcal{F}$.}, that maps $X_A$ to $S_A$ and generates a string of length $\it{l}$, then
\eqn{\frac{1}{2}||\rho_{S_AE} - \tau_{S_A}\otimes\sigma_E|| \leq \sqrt{2^{l - H_{\mathrm{min}}^\epsilon(X_A|ER)-2}}+2\epsilon \label{hash}}
where $H_{\mathrm{min}}^\epsilon(X_A|ER)$ is the conditional smooth min-entropy of the raw measurement data given Eve's quantum system and the information reconciliation leakage.
\end{Lemma}
Comparing (\ref{sec}) and (\ref{hash}) we see that with an appropriate choice of $l$ we can ensure the security condition is met. In particular we see that the smooth min-entropy is a lower bound on the extractable key length. Suppose that we are only able to bound the smooth min-entropy with a certain probability $1- \epsilon_{\mathrm{fail}}$ (in this work this will be due to the use of Hoeffding's bound in the decoy-state analysis). To get a more exact expression notice that if we choose
\eqn{l &=& H_{\mathrm{min}}^\epsilon(X_A|ER) + 2 -2 \log \frac{p_{\mathrm{pass}}}{\epsilon_1}} for some $\epsilon_1>0$, then the r.h.s.\ of (\ref{hash}) is $\epsilon_1/p_{\mathrm{pass}}+2\epsilon$. Then, provided \eqn{\epsilon \leq \frac{\epsilon_s' - \epsilon_1}{2p_{\mathrm{pass}}} \label{ese1}} the convexity and boundedness of the trace distance imply that we will satisfy (\ref{sec}) for any secrecy parameter $\epsilon_s \geq \epsilon_s'+ \epsilon_{\mathrm{fail}}$.
Recalling that by assumption Eve learns at most $l_{EC} + \log 1/\epsilon_c$ bits during information reconciliation we have that,
\eqn{H_{\mathrm{min}}^\epsilon(X_A|ER) &\geq& H_{\mathrm{min}}^\epsilon(X_A|E) - l_{EC} - \log \frac{1}{\epsilon_c}}
Finally since $\log(p_{\mathrm{pass}}) <0$ we have the following result \cite{Tomamichel:2012p7120,Furrer:2012p8365}
\begin{Theorem}
Let $\rho_{X_AE}$ describe the state between Alice and Eve conditioned on the parameter estimation test succeeding such that the Leftover Hash lemma is applicable. For an error correction scheme as defined above we may extract an $\epsilon_c$-correct and $\epsilon_s$-secret key of length
\eqn{l\geq H_{\mathrm{min}}^\epsilon(X_A|E) - l_{EC} - \log\frac{1}{\epsilon_c\epsilon_1^2} +2 \label{kth1}}
\end{Theorem}
So the problem has essentially condensed to bounding the conditional smooth min-entropy, $H_{\mathrm{min}}^\epsilon(X_A|E)$. The central idea is to quantify the smooth min-entropy in one observable by observing the statistics of another, incompatible, observable. This is nothing more than a manifestation of Heisenberg's uncertainty principle, which has long underpinned quantum cryptographic protocols. Specifically, this notion is quantitatively expressed via an uncertainty relation for the smooth min- and max-entropies \cite{Tomamichel:2011p461} and its extension to the infinite dimensional setting in \cite{Berta:2011p8367,Furrer:2014ig}. These relations can be formulated as follows \cite{Tomamichel:2011p396,Furrer:2012p8365}. Let $\rho_{ABC}$ be an $n_X$-mode state shared between Alice, Bob and Charlie and let Alice's measurements be described by POVMs $\mathbb{X}_A$ and $\mathbb{P}_A$ with elements $\{E_{i}\}$ and $\{F_{j}\}$ respectively. Let $X_A$ be the random variable describing the measurement outcome and $\rho_{X_AC}$ be the joint state of the measurement register and system $C$ given that Alice measured $\mathbb{X}_A$ on each of the $n_X$ modes. Further, let $\mathscr{P}_A$ describe the measurement outcome and $\rho_{\mathscr{P}_AB}$ be the joint state of the measurement register and system $B$ given the counterfactual scenario where Alice instead measured $\mathbb{P}_A$ upon each mode. The sum of the corresponding smooth entropies satisfies the relation
\eqn{H_{\mathrm{min}}^\epsilon(X_A|C) + H_{\mathrm{max}}^\epsilon(\mathscr{P}_A|B) \geq -n_X \log c \label{eur}}
where $c = \max_{i,j} ||\sqrt{E_{i}}\sqrt{F_{j}}||_\infty$ quantifies the compatibility of the measurements with $||\cdot ||_\infty$ the operator norm or the largest singular value.
We now turn to our specific measurement setup where we identify the conjugate measurements $\mathbb{X_A}$ and $\mathbb{P_A}$ with time and frequency.
\subsection{Time-frequency measurement uncertainty relation}
Following \cite{Delgado:1997cm,Zhang:2014gt} we describe the arrival time and conjugate frequency detuning measurements by the following operators,
\eqn{\hat{t}_J &=& \int dt \hspace{0.2cm} t_J \hspace{0.2cm} \hat{E}^\dag(t)_J\hat{E}(t)_J \nonumber\\
\hat{\omega}_J &=& \int \frac{d\omega}{2\pi} \hspace{0.2cm} \omega_J \hspace{0.2cm} \hat{A}^\dag(\omega)_J\hat{A}(\omega)_J }
for $J \in \{A,B\}$. If we restrict the field operators to the Hilbert space spanned by the single photon time or frequency domain states, $\{ \ket{t_J}:-\infty<t<\infty\}$ and $\{ \ket{\omega_J}:-\infty<\omega<\infty\}$, then we have $\hat{E}_J(t) = \ket{0_J}\bra{t_J}$ and $\hat{A}_J(\omega) = \ket{0_J}\bra{\omega_J}$ so that we can write,
\eqn{\hat{t}_J &=& \int dt \hspace{0.2cm} t \ket{t_J}\bra{t_J} \nonumber\\
\hat{\omega}_J &=& \int \frac{d\omega}{2\pi} \hspace{0.2cm} \omega \ket{\omega_J}\bra{\omega_J} \label{tfop}}
These operators can be shown to be maximally complementary, self-adjoint operators describing arrival time and frequency measurements that satisfy $\com{\hat{t}_J}{\hat{\omega}_K} = i \delta_{JK}$, and hence can be considered equivalent to the canonical position and momentum operators \cite{Delgado:1997cm}.
Fortunately, the smooth-min entropy uncertainty relations have recently been extended to allow for observables and eavesdroppers living in infinite dimensional Hilbert spaces \cite{Furrer:2011p8368,Furrer:2012p8365,Furrer:2014ig}. However, only in the instances where Alice's source emitted exactly one photon will the POVMs be restricted as per (\ref{tfop}) and result in a useful uncertainty relation. To this end, let $\mathbb{X}_{A,1}$ be a POVM, defined as the restriction of the POVM $\mathbb{X}_A$ to the single photon subspace such that it is described as per (\ref{tfop}). We can now consider the decomposition of the measurement record into variables describing the single, vacuum and multi-photon components such that we have $H_{\mathrm{min}}^\epsilon(X_A|E) = H_{\mathrm{min}}^{\epsilon}(X_{A,1}X_{A,0}X_{A,m}|E)$. In order to apply the uncertainty relation directly we consider the case where Eve is assumed to know the multi-photon and vacuum measurements and is left solely with estimating the single photon components, that is we set $C = X_{A,0}X_{A,m}E$ in (\ref{eur}). The following section explains how to relate $H_{\mathrm{min}}^\epsilon(X_{A,1}|X_{A,0}X_{A,m}E)$ to $H_{\mathrm{min}}^\epsilon(X_A|E)$ and also how to estimate the number of single photon events in a given set of detections. Even though Alice never knows in a given run how many photons are emitted, the number of single-photon events in a collection of runs can be bounded via decoy-state analysis which involves using states with known {\it average} photon numbers. For now we turn to computing the overlap for measurements described by (\ref{tfop}).
In practice, Alice and Bob actually measure coarse-grained, finite versions of these observables. This is a practical necessity in ordinary CVQKD (all homodyne measurements have finite precision and dynamic range) and in this case, measuring precisely an arrival time operator as defined in (\ref{tfop}) would require a detector that has been turned on in the infinite past. Furthermore, a finite alphabet is necessary in order to apply the leftover hash lemma. In standard CVQKD the quadrature observables can usually be treated symmetrically. In this work we must consider the conjugate observables individually, partly because in practice they have different achievable measurement resolutions and partly because they are physically different quantities. For instance, for arrival time measurements the maximum value is equal to the time frame duration for each measurement round, which in turn puts immediate limits on the maximum overall clock rate of the protocol.
Alice's measurements are divided into evenly spaced bins of width $\delta_{X},\delta_P$ up to a maximum value $\pm \Delta_{X},\pm\Delta_P$ such that the alphabet sizes $M_{X}=2\Delta_{X}/\delta_{X}+1$ and $M_{P}=2 \Delta_{P}/\delta_{P} +1$ are assumed to be integers for simplicity. We can write binned observables corresponding to intervals on the real line $I_{1} = (-\infty, -\Delta_X +\delta_X], I_2 = (-\Delta_X+\delta_X, -\Delta_X+2\delta_X],\dots,I_{M_X} = (\Delta_X-\delta_X,\infty]$. The measurement outcome range is then denoted $\mathcal{X} = \{1,2,...,M_X\} \subset \mathbb{Z}$. Thus the POVM elements of $\mathbb{X}_{A,1}$ are the projectors in (\ref{tfop}) integrated over the bin intervals,
\eqn{E_i = \int_{I_i} \ket{k_A}\bra{k_A}\, dk_A \label{quad}, \hspace{0.2cm} k_A\in \{t_A, \omega_A\}}
and similarly for $\mathbb{P}_{A,1} = \{F_j\}$. Notice that this is something of a problem, as the two infinite end intervals of these binned measurements actually have a large overlap. In fact $||\sqrt{E_1}\sqrt{F_1}||_\infty \approx 1$, which would mean that for these particular measurements the RHS of (\ref{eur}) is approximately zero and the relation becomes useless.
To avoid this problem, instead consider a second, hypothetical set of discrete measurements $(\tilde{\mathbb{X}}_{A,1}, \tilde{\mathbb{P}}_{A,1})$ which are defined as per (\ref{quad}) but over a new interval set, namely the infinite collection of intervals $\{\tilde{I}_i\}_{i\in \mathbb{Z}}$ of width $\delta_X$ (respectively $\delta_P$), enumerated such that $\tilde{I}_j = I_j \hspace{0.2cm} \forall \hspace{1mm} j\in \mathcal{X}$. For these measurements the maximum overlap is given by \cite{Furrer:2012p8365},
\eqn{c(\delta_X,\delta_P) = \frac{\delta_X\delta_P}{2\pi}S_0^{(1)}\bk{1,\frac{\delta_X\delta_P}{4}}}
where $S_0^{(1)}(\cdot,u)$ is the radial prolate spheroidal wavefunction of the first kind. Thus, for sufficiently small bin sizes, we can always recover a nontrivial value of $c$ and thus a useful uncertainty relation. The idea is that, for a state that mostly lives in the phase space spanned by the region $[-\Delta_X, \Delta_X]$, the classical-quantum states after Alice applies $\tilde{\mathbb{X}}_{A,1}$ and $\mathbb{X}_{A,1}$ will be very close. We will use our knowledge of Alice's state preparation to quantify this `closeness'. In particular, we will assume that Alice's source produces a tensor product state, so that for the $n_X$ states on which $\mathbb{X}_A$ is measured there is some $\sigma_{AB}$ such that $\rho_{AB} = (\sigma_{AB})^{\otimes {n_X}}$. Moreover, our knowledge of Alice's state allows us to lower bound the probability of measuring a value within the range $[-\Delta_X, \Delta_X]$ on any given run such that,
\eqn{\int_{-\Delta_X}^{\Delta_X} \mathrm{tr}\bk{\sigma_{AB} \ket{k_A}\bra{k_A}} dk_A \geq p_{\Delta_X}}
This in turn means that the probability of measuring an absolute value larger than $\Delta_X$ at any point in the whole protocol, given that the parameter test was passed, is $g(p_{\Delta_X},n_X)/p_{\mathrm{pass}}$ where,
\eqn{g(p_{\Delta_X},n_X) \leq 1- p_{\Delta_X}^{n_X}}
and a similar relation holds for the $\mathbb{P}_A$ measurements.
We then finally have a relation between the entropies of the two discretized measurements conditional on a system $C$, namely \cite{Furrer:2012p8365}
\eqn{H_{\mathrm{min}}^{\epsilon}(X_{A,1}|C)&>&H_{\mathrm{min}}^{\epsilon-\epsilon'}(\tilde{X}_{A,1}|C) \nonumber\\
-H_{\mathrm{max}}^{\epsilon}(\mathscr{P}_{A,1}|C)&>&-H_{\mathrm{max}}^{\epsilon-\epsilon''}(\tilde{\mathscr{P}}_{A,1}|C)}
where \eqn{\epsilon' = \sqrt{\frac{2g(p_{\Delta_X},n_X)}{p_{\mathrm{pass}}}},\hspace{0.2cm}\epsilon'' = \sqrt{\frac{2g(p_{\Delta_P},n_X)}{p_{\mathrm{pass}}}} \label{eprime}} (recall that the scripted variable $\mathscr{P}_A$ is denoting the hypothetical situation where $\mathbb{P}_A$ was measured on the $n_X$ key generating modes instead). Putting all this together with the uncertainty relation (\ref{eur}) finally allows us to write,
\eqn{H_{\mathrm{min}}^{\epsilon}(X_{A,1}|X_{A,0}X_{A,m}E) &\geq& n_{X,1}\log_2\frac{1}{c(\delta_X,\delta_P)} \nonumber\\
&-&H_{\mathrm{max}}^{\epsilon-\epsilon'-\epsilon''}(\mathscr{P}_{A,1}|B) \label{minmax}}
where $n_{X,1}$ is the number of instances where Alice and Bob measured in the same basis and only a single photon was created.
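As a rough numerical illustration of the dominant term in (\ref{minmax}), the following minimal Python sketch (an illustrative aid, not part of the security analysis) evaluates $-\log_2 c(\delta_X,\delta_P)$, the number of uncertainty bits contributed per single-photon round. It assumes the small-bin regime $\delta_X\delta_P \ll 1$, where the prolate spheroidal factor is close to unity; for larger bin products the full special-function expression must be evaluated.
\begin{verbatim}
# Illustrative sketch: per-photon uncertainty bound -log2(c).
# Assumption: small-bin regime delta_X*delta_P << 1, where the radial
# prolate spheroidal factor S_0^(1)(1, delta_X*delta_P/4) is ~1.
import numpy as np

def overlap_c_small_bin(delta_X, delta_P):
    # Leading-order approximation to c(delta_X, delta_P).
    return delta_X * delta_P / (2.0 * np.pi)

# Toy bin sizes for dimensionless conjugate variables:
c = overlap_c_small_bin(0.1, 0.1)
print(-np.log2(c))  # ~9.3 bits of min-entropy per single-photon round
\end{verbatim}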
In reality, however, the measurement record will also include contributions from vacuum and multi-photon terms, so we will need a way to determine a lower bound on the min-entropy of the whole string, $H_{\mathrm{min}}^{\epsilon}(X_{A}|E)$, in terms of $H_{\mathrm{min}}^{\epsilon}(X_{A,1}|X_{A,0}X_{A,m}E)$ so that we can apply (\ref{minmax}). We will also require a lower bound on $n_{X,1}$ and an upper bound upon $H_{\mathrm{max}}^{\epsilon-\epsilon'-\epsilon''}(\mathscr{P}_{A,1}|B)$ based upon the correlations in the $n_P$ measurements of $\mathbb{P}$ observables. Fortunately, all of these can be achieved via decoy-state analysis.
\subsection{Decoy state analysis}
We employ the decoy-state analysis of \cite{Lim:2014uw} which we will recapitulate in our notation for completeness. Recalling the decomposition of the measurements into vacuum, single and multi-photon components we have $H_{\mathrm{min}}^\epsilon(X_A|E) = H_{\mathrm{min}}^{\epsilon}(X_{A,1}X_{A,0}X_{A,m}|E)$. Applying a generalisation of the chain rule for smooth-entropies \cite{Vitanov:hz} gives,
\eqn{ H_{\mathrm{min}}^\epsilon(X_{A}|E) &>& H_{\mathrm{min}}^{\alpha_1}(X_{A,1}|X_{A,0}X_{A,m}E)\nonumber \\
&+& H_{\mathrm{min}}^{\alpha_3 + 2\alpha_4+\alpha_5}(X_{A,0}X_{A,m}|E)\nonumber\\ &-& 2\log_2\frac{1}{\alpha_2} - 1 \nonumber}
for $\epsilon = 2\alpha_1 + \alpha_2 + \alpha_3 + \alpha_4+\alpha_5$ where $\alpha_i>0$ for all $i$.
Applying the same chain rule to the second term on the r.h.s.\ gives,
\eqn{H_{\mathrm{min}}^{\alpha_3 + 2\alpha_4+\alpha_5}(X_{A,0}X_{A,m}|E) &>& H_{\mathrm{min}}^{\alpha_4}(X_{A,m}|E)\nonumber\\ &+& H_{\mathrm{min}}^{\alpha_5}(X_{A,0}|E)\nonumber\\
&-& 2\log_2\frac{1}{\alpha_3}- 1 \nonumber\\
&\geq& n_{X,0}\log_2 M_X\nonumber\\ &-& 2\log_2\frac{1}{\alpha_3}- 1 \nonumber}
where $n_{X,0}$ is the number of $X$ basis measurements that resulted when the source produced a vacuum state. In the second inequality we have used that $H_{\mathrm{min}}^{\alpha_4}(X_{A,m}|E) \geq 0$, which is equivalent to assuming all multi-photon events are insecure, and also that $H_{\mathrm{min}}^{\alpha_5}(X_{A,0}|E)\geq H_{\mathrm{min}}(X_{A,0}|E) = H_{\mathrm{min}}(X_{A,0}) = n_{X,0} \log_2M_X$ where the inequality is true by definition and the final equality comes from assuming that vacuum contributions are uncorrelated with the chosen bit values and uniformly distributed across the measurement range. Note that since $\alpha_4$ and $\alpha_5$ now no longer feature directly, we can set them arbitrarily small and neglect them from further calculations. Putting this together gives,
\eqn{H_{\mathrm{min}}^\epsilon(X_A|E)&\geq&H_{\mathrm{min}}^{\alpha_1}(X_{A,1}|X_{A,0}X_{A,m}E) + n_{X,0}\log_2M_X \nonumber \\
&-& \log_2\frac{1}{\alpha_3^2\alpha_2^2} - 2}
which we can now bound according to (\ref{minmax}) to get
\eqn{H_{\mathrm{min}}^\epsilon(X_{A}|E)&\geq& n_{X,1}\log_2\frac{1}{c(\delta_X,\delta_P)}\nonumber\\ &-&H_{\mathrm{max}}^{\alpha_1-\epsilon'-\epsilon''}(\mathscr{P}_{A,1}|B)
+ n_{X,0}\log_2M_X\nonumber\\ &-& \log_2\frac{1}{\alpha_3^2\alpha_2^2} - 2 \label{hminbnd}}
Now, we also need to derive lower bounds upon the number of vacuum and single photon contributions. Recall that in the protocol, Alice probabilistically selects a pump power, $\mu_k$, with probability $p_{\mu_k}$, which in turn probabilistically results in an $n$-photon state with conditional probability \eqn{p_{n|\mu_k} = \frac{e^{-\mu_k}\mu_k^n}{n!} \label{pnk}} assuming a Poissonian source. Although we cannot directly know how many detections are due to a particular photon number emission, we do know how many detections are due to a particular pump power. The main idea of a decoy state analysis is to use the latter information to place bounds on the former. Following \cite{Ma:2005ua,Lim:2014uw} we first note that, from the eavesdropper's perspective, it could just as well be a counterfactual scenario where Alice instead creates $n$-photon states and merely probabilistically partitions them so that each subset has a mean photon number $\mu_k$. Indeed, Bayes' rule allows us to write down the appropriate probability of a pump power given an $n$-photon emission as,
\eqn{p_{\mu_k|n} = \frac{p_{\mu_k}p_{n|\mu_k}}{\tau_n} \label{pkn}}
where
\eqn{\tau_n = \sum_k p_{\mu_k}\frac{e^{-\mu_k}\mu_k^n}{n!}} is the total probability of an $n$-photon emission. Note that technically all of these probabilities should also be conditioned on the parameter test on the $\mathbb{P}_A$ basis measurements passing. However, when considering the $\mathbb{X}_A$ basis Alice can be sure that this conditioning will make no difference. To see this, consider the counterfactual case where she prepares $n$-photon states. By simply not assigning $\mu$ values in the $\mathbb{X}_A$ basis until after the parameter test on the $\mathbb{P}_A$ is completed she can ensure that probabilities like (\ref{pkn}) are unchanged by conditioning.
In the asymptotic limit of large statistics, (\ref{pkn}) allows us to relate the number of coincidences given a certain pump power, $n_{X,\mu_k}$, to the number given an $n$-photon emission, $n_{X,n}$, via
\eqn{n^*_{X,\mu_k} &=& \sum_{n=0}^\infty p_{\mu_k|n}n_{X,n}\nonumber\\
&=& \sum_{n=0}^\infty \frac{p_{\mu_k}e^{-\mu_k}\mu_k^n}{\tau_n n!} n_{X,n}}
where $n^*_{X,\mu_k}$ is the asymptotic value of $n_{X,\mu_k}$ and we have substituted in from (\ref{pkn}) and (\ref{pnk}). We can then use Hoeffding's inequality for independent events, which says that the difference between observed statistics and their asymptotic values is bounded by
\eqn{|n^*_{X,\mu_k}- n_{X,\mu_k}| \leq \lambda(n_X,\epsilon_2)}
and hence $n^{-}_{X,\mu_k}\leq n^{*}_{X,\mu_k} \leq n^{+}_{X,\mu_k}$ where,
\eqn{n^{\pm}_{X,\mu_k} := n_{X,\mu_k} \pm \lambda(n_X,\epsilon_2)}
with probability at least $1-2\epsilon_2$ where $\lambda(n_X,\epsilon_2) = \sqrt{\frac{n_X}{2} \ln \frac{1}{\epsilon_2}}$.
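The following minimal sketch (ours; the function names are illustrative) implements this finite-statistics correction.
\begin{verbatim}
import math

def hoeffding_lambda(n_X, eps2):
    # lambda(n_X, eps2) = sqrt(n_X/2 * ln(1/eps2))
    return math.sqrt(0.5 * n_X * math.log(1.0 / eps2))

def count_bounds(n_X_mu, n_X, eps2):
    # Bounds n^-_{X,mu_k} <= n^*_{X,mu_k} <= n^+_{X,mu_k}; each pair
    # holds with probability at least 1 - 2*eps2.
    lam = hoeffding_lambda(n_X, eps2)
    return n_X_mu - lam, n_X_mu + lam

print(count_bounds(1e8, 2e8, 1e-10))  # e.g. 1e8 counts out of n_X = 2e8
\end{verbatim}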
Now consider the following expression:
\eqn{&&\frac{\mu_2e^{\mu_3}n^*_{X,\mu_3}}{p_{\mu_3}} - \frac{\mu_3e^{\mu_2}n^*_{X,\mu_2}}{p_{\mu_2}}\nonumber\\
&=& \sum_{n=0}^\infty \bk{\frac{\mu_2e^{\mu_3} p_{\mu_3}e^{-\mu_3}\mu_3^n n_{X,n}}{n!\tau_n p_{\mu_3}} - \frac{\mu_3e^{\mu_2} p_{\mu_2}e^{-\mu_2}\mu_2^n n_{X,n}}{n!\tau_n p_{\mu_2}}} \nonumber\\
&=& \mu_2\mu_3 \sum_{n=0}^\infty \frac{(\mu_3^{n-1} - \mu_2^{n-1})n_{X,n}}{\tau_n n!}\nonumber}
Notice that in the above expression the summand vanishes when $n=1$. This means we can split up the sum as,
\eqn{&&\frac{\mu_2e^{\mu_3}n^*_{X,\mu_3}}{p_{\mu_3}} - \frac{\mu_3e^{\mu_2}n^*_{X,\mu_2}}{p_{\mu_2}}\nonumber\\
&=& \frac{(\mu_2 - \mu_3)n_{X,0}}{\tau_0}- \mu_2\mu_3 \sum_{n=2}^\infty \frac{(\mu_2^{n-1} - \mu_3^{n-1})n_{X,n}}{\tau_n n!}\nonumber\\
&\leq& \frac{(\mu_2 - \mu_3)n_{X,0}}{\tau_0}}
where the inequality holds provided $\mu_2>\mu_3$. Rearranging gives a lower bound on the vacuum coincidences,
\eqn{n_{X,0} &\geq& n_{X,0}^-:=\frac{\tau_0}{\mu_2 - \mu_3}\bk{\frac{\mu_2e^{\mu_3}n^-_{X,\mu_3}}{p_{\mu_3}} - \frac{\mu_3e^{\mu_2}n^+_{X,\mu_2}}{p_{\mu_2}}}}
which holds with probability at least $1 - 4\epsilon_2$.
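A schematic implementation of this bound (ours; it assumes the Hoeffding-corrected counts have already been computed as above) might read:
\begin{verbatim}
import math

def tau(n, mus, ps):
    # Total probability of an n-photon emission (Poissonian source).
    return sum(p * math.exp(-mu) * mu**n / math.factorial(n)
               for mu, p in zip(mus, ps))

def vacuum_lower_bound(n_minus_mu3, n_plus_mu2, mus, ps):
    # n^-_{X,0}; requires mu2 > mu3. Inputs are the Hoeffding-corrected
    # counts n^-_{X,mu3} and n^+_{X,mu2}.
    (mu1, mu2, mu3), (p1, p2, p3) = mus, ps
    return (tau(0, mus, ps) / (mu2 - mu3)
            * (mu2 * math.exp(mu3) * n_minus_mu3 / p3
               - mu3 * math.exp(mu2) * n_plus_mu2 / p2))
\end{verbatim}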
The single photon bound is somewhat more involved. First, by similar reasoning as above, we have:
\eqn{&&\frac{e^{\mu_2}n^*_{X,\mu_2}}{p_{\mu_2}} - \frac{e^{\mu_3}n^*_{X,\mu_3}}{p_{\mu_3}}\nonumber\\
&=& \sum_{n=0}^\infty \frac{(\mu_2^{n} - \mu_3^{n})n_{X,n}}{\tau_n n!} \nonumber\\
&=& \frac{(\mu_2 - \mu_3)n_{X,1}}{\tau_1}+ \sum_{n=2}^\infty \frac{(\mu_2^{n} - \mu_3^{n})n_{X,n}}{\tau_n n!} \label{s11}}
since now the $n=0$ term vanishes. Now, using the identity $a^n -b^n = (a-b)\sum_{i=0}^{n-1}a^{n-1-i}b^i$ we have
\eqn{\mu_2^{n} - \mu_3^{n} = (\mu_2 - \mu_3) \sum_{i=0}^{n-1}\mu_2^{n-1-i}\mu_3^i}
which combined with the inequality $\sum_{i=0}^{n-1}\mu_2^{n-1-i}\mu_3^i \leq (\mu_2 + \mu_3)^{n-1}$ $\forall n\geq 2$
gives
\eqn{\mu_2^{n} - \mu_3^{n} &\leq& (\mu_2 - \mu_3)(\mu_2 + \mu_3)^{n-1}\nonumber\\
&=& \frac{\mu_2 - \mu_3}{\mu_2 + \mu_3} (\mu_2 + \mu_3)^{n} \nonumber\\
&=& \frac{\mu_2^2 - \mu_3^2}{(\mu_2 + \mu_3)^2} (\mu_2 + \mu_3)^{n} \nonumber \\
&\leq & \frac{\mu_2^2 - \mu_3^2}{\mu_1^2} \mu_1^n}
where the second-last equality is used to obtain a tighter bound when the condition $\mu_1 > \mu_2 + \mu_3$ is applied to yield the final inequality. Substituting back in (\ref{s11}) yields:
\eqn{&&\frac{e^{\mu_2}n^*_{X,\mu_2}}{p_{\mu_2}} - \frac{e^{\mu_3}n^*_{X,\mu_3}}{p_{\mu_3}}\nonumber\\
&\leq& \frac{(\mu_2 - \mu_3)n_{X,1}}{\tau_1}+ \frac{\mu_2^2 - \mu_3^2}{\mu_1^2}\sum_{n=2}^\infty \frac{\mu_1^nn_{X,n}}{\tau_n n!}\label{n1}}
Rewriting the sum as
\eqn{\sum_{n=2}^\infty \frac{\mu_1^nn_{X,n}}{\tau_n n!} &=& \sum_{n=0}^\infty \frac{\mu_1^nn_{X,n}}{\tau_n n!} - \frac{n_{X,0}}{\tau_0} - \frac{\mu_1n_{X,1}}{\tau_1} \nonumber\\
&=& \frac{e^{\mu_1}}{p_{\mu_1}}n^*_{X,\mu_1} - \frac{n_{X,0}}{\tau_0} - \frac{\mu_1n_{X,1}}{\tau_1}}
and substituting back into (\ref{n1}), we can solve for $n_{X,1}$, and using the Hoeffding bounds arrive at the following lower bound for the single photon detections:
\eqn{n_{X,1} &\geq& n_{X,1}^- :=\frac{\mu_1 \tau_1}{\mu_1(\mu_2 - \mu_3) - (\mu_2^2 - \mu_3^2)}\left [\frac{e^{\mu_2}}{p_{\mu_2}}n^-_{X,\mu_2} \right . \nonumber \\
&-& \left. \frac{e^{\mu_3}}{p_{\mu_3}}n^+_{X,\mu_3} + \frac{\mu_2^2 - \mu_3^2}{\mu_1^2} \bk{\frac{n^-_{X,0}}{\tau_0} - \frac{e^{\mu_1}}{p_{\mu_1}}n^+_{X,\mu_1}} \right ]\label{nx1}}
which holds with probability at least $1-6\epsilon_2$.
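Continuing in the same illustrative style, the single-photon bound (\ref{nx1}) translates directly into code; the inputs are again the Hoeffding-corrected counts and the vacuum bound $n^-_{X,0}$ from the previous step, and the function name is an assumption of ours.
\begin{verbatim}
import math

def tau(n, mus, ps):
    return sum(p * math.exp(-mu) * mu**n / math.factorial(n)
               for mu, p in zip(mus, ps))

def single_photon_lower_bound(n_minus_mu2, n_plus_mu3, n_plus_mu1,
                              n0_minus, mus, ps):
    # n^-_{X,1} of Eq. (nx1); requires mu1 > mu2 + mu3 and mu2 > mu3.
    (mu1, mu2, mu3), (p1, p2, p3) = mus, ps
    pref = mu1 * tau(1, mus, ps) / (mu1*(mu2 - mu3) - (mu2**2 - mu3**2))
    return pref * (math.exp(mu2) / p2 * n_minus_mu2
                   - math.exp(mu3) / p3 * n_plus_mu3
                   + (mu2**2 - mu3**2) / mu1**2
                   * (n0_minus / tau(0, mus, ps)
                      - math.exp(mu1) / p1 * n_plus_mu1))
\end{verbatim}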
Now the only unbounded term in the key rate formula is the max-entropy term $H_{\mathrm{max}}^{\alpha_1-\epsilon'-\epsilon''}(\mathscr{P}_{A,1}|B)$. Firstly, by the data processing inequality we have $H_{\mathrm{max}}^{\alpha_1-\epsilon'-\epsilon''}(\mathscr{P}_{A,1}|B) \leq H_{\mathrm{max}}^{\alpha_1-\epsilon'-\epsilon''}(\mathscr{P}_{A,1}|\mathscr{P}_{B,1})$. We again use the results of \cite{Furrer:2012p8365}, where a statistical bound on the smooth max-entropy over a classical probability distribution is found based on the observed correlations. Alice and Bob quantify the correlations by computing the average distance (essentially the Hamming distance but for non-binary strings) which for two strings $p_A$ and $p_B$ taking values in $\mathbb{R}$ is defined as:
\eqn{d(p_A,p_B) := \frac{1}{n_P} \sum_{i=1}^{n_P} |p_{A}^i - p_{B}^i|
=: \frac{m_P}{n_P} \label{dpe}}
In order to bound $H_{\mathrm{max}}^{\alpha_1-\epsilon'-\epsilon''}(\mathscr{P}_{A,1}|\mathscr{P}_{B,1})$ we proceed in three steps. Firstly, we use decoy-state arguments to upper bound $d(p_{A,1},p_{B,1})$, the average distance on just the single photon terms. Then, following \cite{Furrer:2012p8365}, we use this upper bound and a result by Serfling \cite{Serfling:1974dx} to upper bound the average distance that could be observed on the counterfactual variables $d({\scriptstyle\mathscr{P}}_{A,1},{\scriptstyle\mathscr{P}}_{B,1})$. Finally, we use this quantity to upper bound the smooth max-entropy.
The quantity $m_{P}$ in (\ref{dpe}) simply counts the total number of bins by which Alice and Bob's measurements differ. Considering the substring corresponding to pump power $\mu_1$, in the asymptotic limit we expect $m_{P,\mu_1}^*$ of the $m_{P}$ errors to be assigned to $\mu_1$
where
\eqn{m_{P,\mu_1}^* = \sum_{n=0}^\infty p_{\mu_1|n}m_{P,n}}
and $m_{P,n}$ is the number of errors in the $\mathbb{P}$ basis resulting from $n$-photon states.
Just as when we were bounding the number of single-photon terms, we can use Hoeffding's result to bound the difference between this unknown asymptotic quantity and the observed value,
\eqn{m_{P,\mu_1}^*\leq m_{P,\mu_1}^+ = m_{P,\mu_1}+ \lambda'(\epsilon_2,n_P,M_P)}
except with probability $2\epsilon_2$, where now $\lambda'(\epsilon_2,n_P,M_P) = \sqrt{\frac{m_PM_P^2}{2} \ln \frac{1}{\epsilon_2}}$ to account for the non-binary nature of entries in the error strings.
Hence we expect in the asymptotic limit to have
\eqn{m^*_{P,\mu_k} &=& \sum_{n=0}^{\infty} p_{\mu_k | n} m_{P,n} \geq p_{\mu_k | 1} m_{P,1}\nonumber\\
&=& p_{\mu_k | 1} n_{P,1} d(p_{A,1} , p_{B,1})}
Rearranging gives,
\eqn{d(p_{A,1} , p_{B,1}) &\leq& \frac{m^*_{P,\mu_k}}{p_{\mu_k|1} n_{P,1} }\nonumber\\
&\leq& \frac{m^{+}_{P,\mu_k}}{p_{\mu_k|1} n^{-}_{P,1} }\nonumber\\
&:=& d^{+}_{P,1}}
with probability at least $1-4\epsilon_2$ where $n_{P,1}^-$ is calculated in the same manner as (\ref{nx1}). Now, say that Alice and Bob abort the protocol whenever $d_{P,1}^+>d_0$.
We next consider bounding the counterfactual average distance $d({\scriptstyle\mathscr{P}}_{A,1},{\scriptstyle\mathscr{P}}_{B,1})$. For brevity we define $d_{\mathscr{P},1} = d({\scriptstyle\mathscr{P}}_{A,1},{\scriptstyle\mathscr{P}}_{B,1})$ and $d_{P,1}=d(p_{A,1},p_{B,1})$ and denote the total average distance that would be observed on the combination of the strings as $d_{P,\mathrm{tot}}$. Given that the observed correlations pass the parameter estimation test, we are interested in the probability that the average distance of the hypothetical measurements would be greater than $d_{P,1}$ by a fixed amount:
\eqn{\mathrm{Pr}[d_{\mathscr{P},1}> d^+_{P,1} + C|``\mathrm{pass}"]&\leq& \mathrm{Pr}[d_{\mathscr{P},1}> d_{P,1} + C|``\mathrm{pass}"]\nonumber \\
&\leq& \frac{\mathrm{Pr}[d_{\mathscr{P},1}> d_{P,1} + C]}{p_\mathrm{pass}}\label{pass}}
where we have used Bayes' theorem in the last line.
Bounding $\mathrm{Pr}[d_{\mathscr{P},1}> d_{P,1} + C]$ is a standard problem of random sampling without replacement. Defining the total number of detections coming from single photons as $N_1 = n_{X,1} + n_{P,1}$ we have,
\eqn{N_1 d_{P,\mathrm{tot}} = n_{X,1}d_{\mathscr{P},1} + n_{P,1}d_{P,1} \label{drel}}
A result by Serfling shows that for any $a$ \cite{Serfling:1974dx},
\eqn{\mathrm{Pr}[d_{\mathscr{P},1}> a + C|d_{P,\mathrm{tot}} = a] \leq \exp\bk{\frac{-2 n_{X,1}N_{1} C^2}{(n_{P,1}+1) M_P^2}}\label{serf}}
where we recall that $M_{P}$ is the size of the alphabet of $\mathbb{P}_A$ outcomes. Now using (\ref{drel}) and (\ref{serf}) we can write,
\eqn{&&\mathrm{Pr}[d_{\mathscr{P},1}> d_{P,1} + C] = \mathrm{Pr}[d_{\mathscr{P},1}> d_{P,\mathrm{tot}} + \frac{n_{P,1}}{N_1}C]\nonumber\\
&=& \sum_a \mathrm{Pr}[d_{P,\mathrm{tot}}=a]\mathrm{Pr}[d_{\mathscr{P},1}> a + \frac{n_{P,1}}{N_1}C|d_{P,\mathrm{tot}} = a]\nonumber \\
&\leq& \exp\bk{\frac{-2 n_{X,1} (n_{P,1})^2C^2}{(n_{P,1}+1)N_{1} M_P^2}}}
Substituting back into (\ref{pass}) and recalling that the protocol aborts whenever $d_{P,1}^+>d_0$ we have,
\eqn{\mathrm{Pr}[d_{\mathscr{P},1}> d_0 + C|``\mathrm{pass}"] \leq\frac{\exp\bk{\frac{-2 n_{X,1}^- (n_{P,1}^-)^2C^2}{(n_{P,1}^++1)N_{1}^+ M_P^2}} }{p_{\mathrm{pass}}} \label{expbound}}
where we have substituted in the lower bounds in the numerator and upper bounds in the denominator. In order to evaluate (\ref{expbound}) we still require the upper bound $N_1^+$, noting that this will automatically yield $n_{P,1}^+ = N_1^+ - n_{X,1}^-$. To this end, define $n_{\mu_k}$ as the total number of detections in both bases at a given pump power, $n_{\mu_k}^*$ as its asymptotic value and $N_n$ as the number of detections from $n$-photon states. Then we may write,
\eqn{\frac{e^{\mu_2} n^*_{\mu_2}}{p_{\mu_2}} - \frac{e^{\mu_3} n^*_{\mu_3}}{p_{\mu_3}} &=& \sum_{n=0}^\infty \frac{(\mu_2^n - \mu_3^n)N_n}{\tau_n n!} \nonumber\\
&=& \frac{(\mu_2 - \mu_3)N_1}{\tau_1} + \sum_{n=2}^\infty \frac{(\mu_2^n - \mu_3^n)N_n}{\tau_n n!} \nonumber \\
&\geq& \frac{(\mu_2 - \mu_3)N_1}{\tau_1}}
provided $\mu_2>\mu_3$ which implies
\eqn{N_1 \leq N_1^+:= \frac{\tau_1}{\mu_2 - \mu_3}\bk{\frac{e^{\mu_2} n^+_{\mu_2}}{p_{\mu_2}} - \frac{e^{\mu_3} n^-_{\mu_3}}{p_{\mu_3}}}}
except with probability $4\epsilon_2$.
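A corresponding sketch for this bound (same illustrative caveats as above) is:
\begin{verbatim}
import math

def N1_upper_bound(n_plus_mu2, n_minus_mu3, mus, ps):
    # N_1^+: total single-photon detections (both bases); mu2 > mu3.
    (mu1, mu2, mu3), (p1, p2, p3) = mus, ps
    tau1 = sum(p * math.exp(-mu) * mu for mu, p in zip(mus, ps))
    return (tau1 / (mu2 - mu3)
            * (math.exp(mu2) * n_plus_mu2 / p2
               - math.exp(mu3) * n_minus_mu3 / p3))
\end{verbatim}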
Finally, we use the following result \cite{Furrer:2012p8365} (Proposition 1),
\begin{Lemma}
Let $\mathcal{P}$ be a finite alphabet, $Q(p,p')$ a probability distribution on $\mathcal{P}^n\times \mathcal{P}^n$ for some $n\in \mathbb{N}$, $\kappa>0$ and $\nu>0$. If $\mathrm{Pr}_{Q}[d(p,p')\geq \kappa] \leq \nu^2$ then,
\eqn{H_{\mathrm{max}}^\nu(P|P')< n\log \gamma(\kappa)}
where
\eqn{\gamma(x) = (x + \sqrt{1+x^2})\bk{\frac{x}{\sqrt{1+x^2}-1}}^x}
\end{Lemma}
This result might seem surprising given that an entropy is by definition label-independent, whereas the average distance explicitly depends upon the choice of labels. The resolution is that the lemma is derived by taking a worst-case scenario in which the number of observed bin errors is assumed to be due to individual entries, each of which differs by only one bin, thus maximising the max-entropy. This means that the bound will hold true regardless of the labelling convention used on the data, but a poor choice of labelling (for instance one that numbered adjacent bins by greatly differing numbers) would result in a very pessimistic bound. We can apply this result by setting $\nu^2 = \exp\bk{\frac{-2 n_{X,1}^- (n_{P,1}^-)^2C^2}{(n_{P,1}^++1)N_{1}^+ M_P^2}}/ p_{\mathrm{pass}}$. This allows us to bound
\eqn{H_{\mathrm{max}}^{\nu}(\mathscr{P}_{A,1}|\mathscr{P}_{B,1}) \leq n_{X,1}\log_2 \gamma(d_0 +C) \label{maxbound}}
where
\eqn{C &=& M_P \sqrt{\frac{N_1^+(n_{P,1}^++1)}{n_{X,1}^-(n_{P,1}^-)^2} }\nonumber \\
&\times& \sqrt{\ln \frac{1}{\sqrt{p_{\mathrm{pass}}}\nu}}\label{C1}}
for any $p_{\mathrm{pass}}$, which is always possible provided $\nu>0$. Thus, provided $\alpha_1 - \epsilon' - \epsilon'' >0$, there is some $C$ such that we can set $\nu = \alpha_1 - \epsilon' - \epsilon''$ and use this result to bound the smooth max-entropy in (\ref{hminbnd}).
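For concreteness, the functions below (ours, with illustrative names, assuming the decoy-state bounds have already been computed) evaluate $\gamma$, the statistical correction $C$ of (\ref{C1}) and the resulting max-entropy bound.
\begin{verbatim}
import math

def gamma(x):
    # gamma(x) of the Lemma; valid for x > 0.
    s = math.sqrt(1.0 + x * x)
    return (x + s) * (x / (s - 1.0))**x

def C_fluct(M_P, N1_plus, nP1_plus, nP1_minus, nX1_minus, sqrtp_nu):
    # Statistical correction C of Eq. (C1); sqrtp_nu = sqrt(p_pass)*nu.
    return (M_P * math.sqrt(N1_plus * (nP1_plus + 1.0)
                            / (nX1_minus * nP1_minus**2))
            * math.sqrt(math.log(1.0 / sqrtp_nu)))

def hmax_upper_bound(nX1, d0, C):
    # n_{X,1} * log2(gamma(d0 + C)), cf. Eq. (maxbound).
    return nX1 * math.log2(gamma(d0 + C))
\end{verbatim}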
The final step is to account for all the error terms due to finite-size effects to find the actual secrecy parameter and to eliminate the explicit dependence upon $p_{\mathrm{pass}}$. From the decoy state analysis we can rewrite $\alpha_1 = (\epsilon - \alpha_2 - \alpha_3)/2$ (recall that we can neglect the $\alpha_4$ and $\alpha_5$ terms). From our security definitions, provided (\ref{ese1}) is satisfied, we will extract an $\epsilon_s = \epsilon_s' + \epsilon_{\mathrm{fail}}$ secret key. In particular we may satisfy (\ref{ese1}) by choosing $\epsilon = \frac{\epsilon_s' - \epsilon_1}{2 \sqrt{p_{\mathrm{pass}}}}$ in which case we have,
\eqn{\sqrt{p_{\mathrm{pass}}}\nu &=& \frac{1}{2} \bk{\frac{\epsilon'_s - \epsilon_1}{2} - \sqrt{p_{\mathrm{pass}}}(\alpha_2 + \alpha_3) } \nonumber \\
&-& \sqrt{(2g(p_{\Delta_X},n_X))} - \sqrt{(2g(p_{\Delta_P},n_X))} \nonumber \\
&\geq& \frac{1}{2} \bk{\frac{\epsilon'_s - \epsilon_1}{2} - (\alpha_2 + \alpha_3) } \nonumber \\
&-& \sqrt{(2g(p_{\Delta_X},n_X))} - \sqrt{(2g(p_{\Delta_P},n_X))} \nonumber\\
&=& \frac{1}{2} \bk{\frac{\epsilon_s - \epsilon_{\mathrm{fail}} - \epsilon_1}{2} - (\alpha_2 + \alpha_3) } \nonumber \\
&-& \sqrt{(2g(p_{\Delta_X},n_X))} - \sqrt{(2g(p_{\Delta_P},n_X))} \label{nu}}
where the second line used $p_{\mathrm{pass}} \leq1$. This lower bound on $\nu$ can be used to upper bound the logarithmic term in (\ref{C1}).
We must include the failure probabilities from the Hoeffding bounds, which we applied to the number of counts for three pump powers in two measurement bases, each contributing an error term $2\epsilon_2$. This gives an overall error budget $\epsilon_{\mathrm{fail}} = 12\epsilon_2$. If, for simplicity, we choose $\epsilon_{2} = \alpha_2 = \alpha_3 := \epsilon_1$ and set $\epsilon_1 = \epsilon_{s}/21$, then straightforward substitution into (\ref{nu}), which is used to bound (\ref{C1}) and hence (\ref{maxbound}) and (\ref{hminbnd}), gives us a final expression for the $\epsilon_c$-correct, $\epsilon_{s}$-secret key length:
\eqn{l&\geq& -n^{-}_{X,1}\log_2c(\delta_X,\delta_P) -n^{-}_{X,1}\log_2\gamma(d_0 + C')\nonumber\\
&-& 4\log_2\frac{21}{\epsilon_s} + n_{X,0}\log_2M_X - l_{EC} - \log_2\frac{1}{\epsilon_c\epsilon_s} \label{kfinal}}
where
\eqn{C' &=& M_P \sqrt{\frac{N_1^+(n_{P,1}^++1)}{n_{X,1}^-(n_{P,1}^-)^2} }\nonumber \\
&\times& \sqrt{\ln \frac{1}{\epsilon_s/21 - \sqrt{(2g(p_{\Delta_X},n_X))} - \sqrt{(2g(p_{\Delta_P},n_X))} }} \nonumber}
As noted earlier, in order for there to be a positive key rate, the denominator inside the logarithmic term in $C'$ must be positive. This means that $\Delta_X,\Delta_P$ are not free parameters, but must be chosen to ensure that this condition is satisfied.
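Putting the pieces together, a minimal sketch of the final key-length evaluation (\ref{kfinal}) reads as follows; all inputs are assumed to have been obtained from the bounds derived above.
\begin{verbatim}
import math

def key_length(nX1_minus, nX0_minus, c, gam, M_X, l_EC, eps_s, eps_c):
    # Eq. (kfinal); gam = gamma(d0 + C') from the previous sketch.
    l = (-nX1_minus * math.log2(c)
         - nX1_minus * math.log2(gam)
         - 4.0 * math.log2(21.0 / eps_s)
         + nX0_minus * math.log2(M_X)
         - l_EC
         - math.log2(1.0 / (eps_c * eps_s)))
    return max(0, math.floor(l))
\end{verbatim}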
\section{Numerical Evaluation}
We now turn to the numerical evaluation of the key rate formula, taking parameters mostly from \cite{Lee:2014vm,Zhong:iu} for the simulations. We will consider transmission through optical fibre at telecom wavelengths, which is well modelled as a lossy channel where the transmission is related to the distance, $L$, via $T = 10^{-0.02L}$. When the number of channel uses $N$ (instances where Alice attempts to generate a pair of photons and transmit one to Bob) is large and each party chooses to measure the $\mathbb{X}(\mathbb{P})$ observable with probability $p_X (1-p_X)$ the number of observed counts after sifting for a given pump power will be well approximated by $n_{X,\mu_k} = p_X^2p_{\mu_k}\kappa_{\mu_k}N (n_{P,\mu_k} = (1-p_X)^2p_{\mu_k}\kappa_{\mu_k}N)$ where $\kappa_{\mu_k}$ is the coincidence probability of at least one photon being detected by both Alice and Bob. It is given by \cite{Bunandar:2015wx},
\eqn{\kappa_{\mu_k} &=& \sum_{n = 0}^\infty p_{n|\mu_k} (1 - (1-p_d)(1-\eta_A)^n)\nonumber\\
&\times& (1 - (1-p_d)(1-\eta_B T)^n) }
where $p_d$ are the dark count probabilities for Alice and Bob's detectors and $\eta_A$ and $\eta_B$ are their respective efficiencies.
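As an illustration, $\kappa_{\mu_k}$ can be evaluated by truncating the Poisson photon-number sum at a modest value; the following sketch (ours) uses the loss model and parameter values quoted in this section.
\begin{verbatim}
import math

def coincidence_prob(mu, eta_A, eta_B, T, p_d, n_max=50):
    # kappa_mu: both parties register at least one count; the Poisson
    # photon-number sum is truncated at n_max.
    kappa = 0.0
    for n in range(n_max + 1):
        p_n = math.exp(-mu) * mu**n / math.factorial(n)
        click_A = 1.0 - (1.0 - p_d) * (1.0 - eta_A)**n
        click_B = 1.0 - (1.0 - p_d) * (1.0 - eta_B * T)**n
        kappa += p_n * click_A * click_B
    return kappa

T = 10 ** (-0.02 * 50)  # 50 km of fibre
print(coincidence_prob(0.2, 0.93, 0.93, T, 6e-7))
\end{verbatim}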
We consider an SPDC source that generates a photon pair with temporal wave function
\eqn{\ket{\Psi_{AB}} = \int dt_A dt_B \hspace{0.2cm} e^{i\omega_P(t_A+t_B)/2}\psi(t_A,t_B)\ket{t_A,t_B}}
where
\eqn{\psi(t_A,t_B) = \frac{\exp\bk{\frac{-(t_A-t_B)^2}{4\sigma_{\mathrm{cor}}^2} -\frac{(t_A+t_B)^2}{16\sigma_{\mathrm{coh}}^2}}}{\sqrt{2 \pi\sigma_{\mathrm{cor}}\sigma_{\mathrm{coh}}}}\nonumber}
and $\sigma_{\mathrm{coh}}$ and $\sigma_{\mathrm{cor}}$ are the pump coherence and photon correlation times respectively. The variance and covariance of Alice and Bob's measurement strings will be
\eqn{V_{t_A} &=& V_{t_B} = \nonumber \sigma_{\mathrm{coh}}^2 + \frac{\sigma_{\mathrm{cor}}^2}{4}\\
\left < t_A t_B \right > &=& \sigma_{\mathrm{coh}}^2 - \frac{\sigma_{\mathrm{cor}}^2}{4}}
One can also write this as a spectral wave function,
\eqn{\psi(\omega_A,\omega_B) = \frac{\exp\bk{\frac{-\sigma_{\mathrm{cor}}^2(\omega_A-\omega_B)^2}{4} -\sigma_{\mathrm{coh}}^2(\omega_A+\omega_B)^2}}{\sqrt{2 \pi\sigma_{\mathrm{cor}}\sigma_{\mathrm{coh}}}}\nonumber}
with spectral variances and correlations,
\eqn{V_{\omega_A} &=& V_{\omega_B} = \nonumber \frac{1}{16} \left(\frac{1}{\sigma_{\mathrm{coh}}^2}+\frac{4}{\sigma_{\mathrm{cor}}^2}\right)\\
\left < \omega_A \omega_B \right > &=& \frac{-\sigma _{\text{cor}}^2+4 \sigma _{\text{coh}}^2}{16 \sigma _{\text{coh}}^2 \sigma _{\text{cor}}^2}}
The final quantities necessary to compute the key rate are the information leaked during reconciliation, $l_{EC}$, and the observed correlations $d(p_{A},p_B)$. The average distance in a typical run for a given sample size can be found by generating appropriately correlated Gaussian distributed strings, binning them and evaluating (\ref{dpe}) directly. For the parameters chosen here, one finds $d(p_A,p_B)\approx 0.1$. For the sample sizes necessary for positive key, the amount of information leaked during reconciliation is well approximated by \cite{Leverrier:2010p150,Furrer:2012p8365},
\eqn{l_{EC} = n_X(H(X_A) - \beta I(X_A:X_B))}
where $H(X) = -\sum_{x\in \mathcal{X}} p(x)\log_2 p(x)$ is the Shannon entropy, $I(X_A:X_B) = H(X_A) - H(X_A|X_B)$ is the mutual information and $0\leq \beta\leq 1$ is the reconciliation efficiency. Recent advances have demonstrated efficiencies as large as 0.94 \cite{Gehring:2015ie}. The probabilities for any given outcome can be found by evaluating the discretised observables in (\ref{quad}) over the appropriate wavefunction.
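The following Monte Carlo sketch (illustrative only; shown for the time observable, with the range $\Delta_t$ chosen to match the frame duration quoted below) estimates both $d(p_A,p_B)$ and the reconciliation leakage per symbol from binned correlated Gaussian samples, exactly as described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
s_coh, s_cor = 0.5e-9, 20e-12
V = s_coh**2 + s_cor**2 / 4.0          # V_{t_A} = V_{t_B}
cov = s_coh**2 - s_cor**2 / 4.0        # <t_A t_B>
tA, tB = rng.multivariate_normal(
    [0.0, 0.0], [[V, cov], [cov, V]], size=10**6).T

delta_t, Delta_t = 60e-12, 5.865e-9    # bin width; range (T_f = 2*Delta_t)
edges = np.arange(-Delta_t, Delta_t + delta_t, delta_t)
xA, xB = np.digitize(tA, edges), np.digitize(tB, edges)

d = np.mean(np.abs(xA - xB))           # average distance, Eq. (dpe)

def shannon(labels):
    p = np.bincount(labels) / len(labels)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

H_A = shannon(xA)
I_AB = H_A + shannon(xB) - shannon(xA * (len(edges) + 2) + xB)
print(d, H_A - 0.94 * I_AB)            # d(p_A,p_B) and l_EC / n_X
\end{verbatim}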
In Fig.~\ref{rawkey} we plot the secret key rate, $l/N$, for various values of the channel uses $N$, as a function of the transmission distance. For the parameters chosen here the protocols where time or frequency are used as the key generating measurement perform comparably. The time-encoded protocol achieves positive key over 40 km for $N = 10^9$ and out to almost 140 km for $N = 10^{11}$. It should be noted that there are many parameters that affect the protocol's performance, particularly the source design and decoy state strategy, and a systematic optimisation could further improve performance. In particular, the choice of whether to encode in frequency or time is strongly dependent upon the properties of the source and detectors. For the parameters used here, encoding in the time basis results in higher key rates, but keeping all other parameters fixed and decreasing the coherence time to $\sigma_{\mathrm{coh}} = 0.3$ ns results in virtually identical rates for both protocols.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\columnwidth]{bitsperch.pdf}
\caption{Secret key rate as a function of transmission distance for protocols where the key is generated from frequency (dashed) or time (solid) variables. Sample sizes are $N = \{10^9, 10^{10}, 10^{11}\}$ in red, green and blue respectively. Simulation parameters are: $\{\mu_1,\mu_2,\mu_3\} = \{0.2,0.1,0.01\}$, $\{p_{\mu_1},p_{\mu_2},p_{\mu_3}\} = \{0.7,0.2,0.1\}$, $\sigma_\mathrm{coh}$ = 0.5ns, $\sigma_{\mathrm{cor}}$ = 20 ps, $\delta{t}$ = 60 ps, $\delta_\omega = 5$ GHz, $\epsilon = 10^{-10}$, $p_d = 6\times10^{-7}$, $\eta_A = \eta_B = 0.93$, $\beta = 0.94$ and $p_X = 0.5$.}
\label{rawkey}
\end{center}
\end{figure}
A second quantity of interest is the photon information efficiency (PIE), the number of secret bits extracted per coincident detection. Recall that one of the attractions of these TFQKD schemes was the promise of a PIE of greater than one bit per photon. In Fig.~\ref{pie} we plot the PIE for the same scenarios as Fig.~\ref{rawkey} and observe a value greater than 1 over distances of ~40 km for $N= 10^{10}$ and ~90 km for $N=10^{11}$, showing that the protocol is indeed making use of the higher dimensions available.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=\columnwidth]{bitsperph.pdf}
\caption{Number of secret bits per detected photon as a function of transmission distance for protocols where the key is generated from frequency (crosses) or time (dashed) variables. Parameters the same as Fig.~\ref{rawkey}.}
\label{pie}
\end{center}
\end{figure}
Arguably the most important quantity, however, is the achievable number of secret bits per second. For most protocols this is simply determined by the rate per channel use and the practically achievable clock rate of the relevant source. However, in TFQKD, where the key is actually encoded in a temporal variable itself, the question is more involved. In particular, recall we earlier noted that for a positive key we had to ensure the positivity of the statistical fluctuation term $C$. This implies the condition
\eqn{\epsilon_s/21 > \sqrt{(2g(p_{\Delta_X},n_X))} +\sqrt{(2g(p_{\Delta_P},n_X))}\label{delcon}} which in turn means that both $\Delta_X$ and $\Delta_P$ must be sufficiently large. For the arrival time measurement, the maximum observable value dictates the time frame for a given round, $T_f = 2\Delta_t$ and hence a hard upper limit on the possible clock rate of the protocol of $\frac{1}{T_f}$.
Using our knowledge of Alice's source we can calculate these probabilities for this protocol via Gaussian integration. For a Gaussian distributed variable of variance $V_X$ we have,
\eqn{p_{\Delta_X} = \mathrm{erf}\bk{\frac{\Delta_X}{\sqrt{2 V_X}}}}
Now for any $\epsilon>0$ if we require $\epsilon >\sqrt{(2g(p_{\Delta_X},n_X))} $ then substituting and rearranging gives,
\eqn{\Delta_X = \sqrt{2 V_X}\mathrm{erf}^{-1}\left[ \bk{1-\frac{\epsilon^2}{2}}^{1/n_X} \right ]}
Applying this to (\ref{delcon}), if we choose to make the two terms on the RHS equal, for the parameters considered here this leads to a requirement on the frequency detection bandwidth of $>290$ GHz, or ~5 nm at telecom wavelengths. Similarly, we have a requirement on the duration of each round of $T_f>11.73$ ns, or a maximum clock rate of 85 MHz. In Fig.~\ref{bps} we plot the number of bits per second assuming the system is run at its maximum clock rate and observe that the system can achieve rates of over a Mb/s up to a distance of 10--20 km depending upon the sample size. Furthermore, for $N=10^{11}$ a key rate of ~100 kb/s is possible up to around 90 km.
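A numerically stable way to evaluate this requirement is sketched below (ours, under the mild additional bound $1-p^{n}\leq n(1-p)$, which agrees with the exact expression to leading order); the sifted-round count and the splitting of $\epsilon_s/21$ are assumptions chosen to mimic the parameters above.
\begin{verbatim}
import math
from scipy.special import erfcinv

def Delta_required(V, n_X, eps):
    # Smallest Delta with eps >= sqrt(2*g(p_Delta, n_X)), using the
    # bound 1 - p^n <= n*(1-p) and 1 - p_Delta = erfc(Delta/sqrt(2V)).
    tail = eps**2 / (2.0 * n_X)   # allowed per-round escape probability
    return math.sqrt(2.0 * V) * erfcinv(tail)

s_coh, s_cor = 0.5e-9, 20e-12
V_t = s_coh**2 + s_cor**2 / 4.0
n_X, eps_s = 2.5e10, 1e-10        # ~p_X^2*N sifted rounds for N = 1e11
T_f = 2.0 * Delta_required(V_t, n_X, eps_s / 42.0)  # split eps_s/21 equally
print(T_f, 1.0 / T_f)             # ~12 ns frame; clock rate of order 80 MHz
\end{verbatim}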
\begin{figure}[h]
\begin{center}
\includegraphics[width=\columnwidth]{bitspers}
\caption{Number of secret bits per second as a function of transmission distance for protocols where the key is generated from frequency (dashed) or time (solid) variables. Parameters the same as Fig.~\ref{rawkey}.}
\label{bps}
\end{center}
\end{figure}
\section{Conclusions}
We have presented a composable security proof for high-dimensional TFQKD, valid against arbitrary attacks and including all finite-size effects. Numerical simulations show that composably secure TFQKD protocols can indeed extract greater than 1 secret bit per detected photon, resulting in key rates of over a Mb/s at metropolitan distances and a maximum range of well over 100 km for sufficiently large sample sizes.
Several avenues for further work remain. Firstly, whilst the proof here has been for the case where Alice and Bob can directly make either spectral or temporal measurements, most concrete proposals for TFQKD involve time-to-frequency \cite{Nunn:2013kf} or frequency-to-time \cite{Mower:2013tu} conversion. Provided Alice's devices are well characterised it should be straightforward to determine the appropriate uncertainty relation between these effectively conjugate measurements. Secondly, there is a potential weakness to intercept-resend attacks which is particular to TFQKD protocols, due to the combination of an in-principle unbounded measurement spectrum and coincidence post-selection, as first pointed out in \cite{Nunn:2013kf}. Essentially, the problem is that if Eve makes an extremely precise non-destructive measurement of one observable, say arrival time, this will project onto a state that has limited support within the finite range of Bob's frequency detectors. If Alice and Bob both chose to measure time then Eve will learn this bit and if they both choose to measure frequency, with high probability Bob's detectors will not register the photon and the round will be discarded, opening a loophole in the security. A counter-measure based upon a pre-measurement filtering was proposed in \cite{Nunn:2013kf} which would need to be rigorously incorporated into this proof \cite{prep}. Finally, a remaining unanswered question in all security proofs based upon an uncertainty relation is incorporating an imperfect knowledge of the measurements made by the trusted party. In practice, Alice is not perfectly certain of the POVMs that describe her measurements. A possible solution might incorporate some amount of real time detector tomography into the security analysis.
We note that the proof presented here could also be used to rigorously certify the randomness of measurement strings, extending the work of \cite{Vallone:2014ts} to explicitly include a failure probability. This is a particularly attractive possibility since a major strength of these proposals is the high number of bit/photon and hence large overall rates at short distances.
{\it Note added:} During the writing up of this work the authors became aware of related results by Niu et al. \cite{Niu:2016us}.
\begin{acknowledgements}
NW would like to thank H.M. Chrzanowski for many helpful discussions. The authors acknowledge funding support from the EPSRC National Quantum Technology Hub in Networked Quantum Information Technologies. J.N. was supported by a Royal Society fellowship.
\end{acknowledgements}
\bibliographystyle{apsrev}
\section{Introduction}
The profinite completion $\widehat{G}$ of a group $G$ is the inverse limit of its finite quotients. If $G$ is residually finite, then $G$ embeds into $\widehat{G}$ and it is natural to wonder what properties of $G$ can be detected from $\widehat{G}$. Especially a question of Grothendieck, posed in 1970 \cite{Grothendieck70}, sparked interest in this direction: Is an embedding $\iota \colon H \to G$ of finitely presented, residually finite groups an isomorphism, if it induces an isomorphism $\widehat{\iota}\colon \widehat{H} \to \widehat{G}$ of profinite completions?
The answer is negative. Finitely generated counterexamples were constructed already in 1986 by Platonov and Tavgen \cite{PlatonovTavgen}. The finitely presented case was settled almost 20 years later by Bridson and Grunewald~\cite{BridsonGrunewald}.
In this article we explore the interplay between amenability and the profinite completion of finitely generated groups. Our interest was prompted by the following variation of Grothendieck's problem:
\smallskip
$(*)$ \textit{Given two finitely generated residually finite groups $A$, $G$, where $A$ is amenable. Suppose $\iota\colon A \to G$ induces an isomorphism $\widehat{\iota}\colon \widehat{A} \to \widehat{G}$. Is $G$ amenable?}
\smallskip
The answer is negative and we obtain the following result (a consequence of Theorem \ref{thm:main-theorem-precise} below).
\begin{theorem}\label{thm:main-theorem}
There is an uncountable family of pairwise non-isomorphic, residually finite $18$-generator groups $(G_j)_{j\in J}$ and a residually finite $6$-generator group $A$ with embeddings $\iota_j \colon A \to G_j$ such that the following properties
hold:
\begin{enumerate}
\item $\widehat{\iota_j} \colon \widehat{A} \rightarrow \widehat{G_j}$ is an isomorphism,
\item $A$ is amenable,
\item each $G_j$ contains a non-abelian free subgroup.
\end{enumerate}
In particular, amenability is not a profinite invariant of finitely generated residually finite groups.
\end{theorem}
We note that the fibre product construction used in \cite{PlatonovTavgen} and \cite{BridsonGrunewald} is unable to provide such examples, since a fibre product $P \subseteq H \times H$ projects onto $H$ and thus, it is non-amenable exactly if $H$ is non-amenable (e.g., a non-elementary word hyperbolic group as in \cite{BridsonGrunewald}). Uncountable families of finitely generated \emph{amenable} groups with isomorphic profinite completions were constructed in \cite{Nekrashevych14,Pyber04}.
The groups in Theorem \ref{thm:main-theorem} are just-infinite branch groups with the congruence subgroup property. The construction is inspired by a method of Segal \cite{Segal01} and influenced by ideas of Nekrashevych \cite{Nekrashevych14}. The method is rather flexible and allows us to merge a \emph{perfect} residually finite group (e.g.~$\SL_n(\mathbb{Z})$) with a related amenable group in such a way that the amenable group and the merged group have isomorphic profinite completions. The first step bears similarity with the Sidki-Wilson construction of branch groups with non-abelian free subgroups \cite{SidkiWilson03}.
Let us note that without the requirement of finite generation in $(*)$ there are obvious counterexamples. For instance, $A = \bigoplus_n \mathrm{Alt}(n)$ is amenable as a direct limit of finite groups and $G = \prod_n \mathrm{Alt}(n) = \widehat{A} = \widehat{G}$ contains a non-abelian free group. It would be interesting to have finitely presented counterexamples to $(*)$.
Since our family $(G_j)_{j \in J}$ is uncountable, it is clear that most of the groups cannot be finitely presented. We were unable to verify that none of these groups admits a finite presentation.
\smallskip
Thinking of amenability as a concept of analytical nature (e.g., existence of a left invariant mean on $\ell^\infty(G)$), it doesn't seem surprising that the answer to $(*)$ is negative. From a different perspective, though, amenability is not far from being detectable on finite quotients.
H.~Kesten \cite{Kesten59} characterized amenability in terms of the spectral radius of symmetric random walks. For a residually finite group $G$ a random walk can surely be studied by looking at large finite quotients of $G$. Trying to exploit this relation one realizes that a \emph{uniform} behavior of all random walks on all finite quotients can be used to deduce amenability of $G$.
In the 70's G.~Keller defined \cite{Keller72} a notion of \emph{uniform amenability} by imposing that
the size of an $\epsilon$-F{\o}lner set for a generating set $S$ can be uniformly bounded in terms of $\epsilon$ and $|S|$ (see Definition \ref{def:uniform-foelner}). A couple of years later the concept was independently defined by Bo\.zejko \cite{Bozejko80}. Wysocza\'nski \cite{Wyso88} showed that
uniform amenability can be characterized in terms of a uniform Kesten condition. This leads to the following result.
\begin{theorem}\label{thm:uniform-amenability-profinite}
Let $G_1, G_2$ be residually finite groups with $\widehat{G}_1 \cong \widehat{G}_2$. Then $G_1$ is uniformly amenable if and only if $G_2$ is uniformly amenable.
\end{theorem}
As of today the collection of finitely generated groups which are amenable but not uniformly so is rather small and this explains why the construction of counterexamples to $(*)$ actually requires some effort. In turn, the group $A$ of Theorem~\ref{thm:main-theorem} is a new example of an amenable group which is not uniformly amenable.
There are several ways to prove Theorem \ref{thm:uniform-amenability-profinite} and already Keller's results \cite{Keller72} point in this direction. Here we don't work out the random walk argument sketched above. Instead, we first establish new characterizations of uniform amenability in terms of a uniform isoperimetric inequality and a uniform Reiter condition and use them to give a short proof of the theorem. In addition, we give a new short proof that a uniformly amenable group satisfies a law (a result of Keller \cite{Keller72}). This implies that the profinite completion of a uniformly amenable group is positively finitely generated in the sense of A.~Mann~\cite{Mann96}. Our results on uniform amenability are discussed in Section~\ref{sec:uniform-amenability}.
\medskip
We now give an overview of the remaining sections. In Section~\ref{sec:trees-and-construction} we present the basic construction which will be applied throughout. This construction takes a perfect, self-similar subgroup $G \leq \Aut(T_X)$ of the automorphism group of a regular rooted tree $T_X$ and produces -- under a condition introduced by Segal \cite{Segal01} -- a
branch group $\Gamma_G^\Omega \leq \Aut(T_X)$. The construction depends on a certain subset $\Omega$ of the boundary of the tree.
In Section~\ref{sec:csp} we show that these branch groups are just infinite and have the congruence subgroup property. We deduce that the profinite completion is always an iterated wreath product which only depends on the action of $G$ on the first level of the tree. As a consequence we obtain a zoo of groups with isomorphic completions and inclusions between these groups induce profinite isomorphisms. In Section \ref{sec:uncountable} we use a rigidity result of Lavrenyuk and Nekrashevych \cite{LavrenyukNekrashevych02} to show that the construction (without additional assumptions) gives rise to an uncountable family of pairwise non-isomorphic groups. Finally we discuss concrete examples in Sections \ref{sec:matrix-groups} and \ref{sec:amenable}. To obtain the amenable group $A$ in Theorem \ref{thm:main-theorem} we apply the construction to the special affine group $\mathbb{F}_p^n \rtimes \SL_n(\mathbb{F}_p)$ acting on the $p^n$-regular rooted tree by rooted automorphisms. It follows from a result of Bartholdi, Kaimanovich and Nekrashevych \cite{BartholdiKaimanovichNekrashevych10} that the result is an amenable group (for suitable parameters $\Omega$). On the other hand, we apply the construction to the special affine group $\mathbb{Z}^n \rtimes \SL_n(\mathbb{Z})$ which acts self-similarly on the $p^n$-regular rooted tree (obtained from the pro-$p$ completion of $\mathbb{Z}^n$). Merging these groups with $A$, we obtain the family $(G_j)$ of non-amenable groups in Theorem~\ref{thm:main-theorem}.
\section{Uniformly amenable classes of groups}\label{sec:uniform-amenability}
In this section we study \emph{uniform amenability} of groups and classes of groups. The concept was introduced by Keller in \cite{Keller72} and independently by Bo\.zejko \cite{Bozejko80}. Here we establish new equivalent characterizations and use them to prove that uniform amenability is a profinite property. Let us begin with the usual characterization using a \emph{uniform F\o{}lner condition}.
\begin{definition}\label{def:uniform-foelner}
A class of groups $\mathfrak{K}$ is \emph{uniformly amenable}
if there is a function $m\colon \mathbb{R}_{>0} \times \mathbb{N} \to \mathbb{N}$
such that for every $\epsilon > 0$, all $G \in \mathfrak{K}$, and every finite subset $S \subseteq G$,
there is a finite set $F \subseteq G$ satisfying
\begin{enumerate}
\item $|F| \leq m(\epsilon, |S|)$ and
\item $|SF| \leq (1+\epsilon)|F|$.
\end{enumerate}
In this case we say that $\mathfrak{K}$ is $m$-uniformly amenable.
We say that a group $G$ is uniformly amenable, if the class consisting of $G$ is uniformly amenable.
We will always assume that $m$ is non-decreasing in the second argument; this can be achieved by replacing $m$ by $m'(\epsilon,N) = \max_{k \leq N} m(\epsilon, k)$.
\end{definition}
\begin{example}
(1) Let $d \in \mathbb{N}$ be given.
The class $\mathfrak{Fin}_d$ of all finite groups of order at most $d$ is uniformly amenable for the function
$m(\epsilon, N) = d$; indeed, the finite group $G$ itself is always a suitable F\o{}lner set.
(2) The class of abelian groups is uniformly amenable; a quantitative sketch is given after this example.
(3) Extensions of uniformly amenable groups are uniformly amenable \cite[Thm.~3]{Bozejko80}. In particular, every virtually solvable group is uniformly amenable.
(4) Direct unions of $m$-uniformly amenable groups are $m$-uniformly amenable. Combined with (3), this implies that ascending HNN-extensions of uniformly amenable groups are uniformly amenable.
(5)
Let $\mathfrak{K}$ be a class of groups such that the free group $F_2$ of rank $2$ is residually $\mathfrak{K}$. Then $\mathfrak{K}$ is not uniformly amenable (this will follow from Corollary \ref{cor:unif-amen-grps-satisfy-laws} below).
In particular, every class of groups which contains all finite symmetric groups is \emph{not} uniformly amenable.
\end{example}
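To indicate why the class of abelian groups in item (2) is uniformly amenable, here is a minimal quantitative sketch for the key special case of $\mathbb{Z}^N$ with the standard basis $B = \{e_1,\dots,e_N\}$; the general case reduces to it, since every finite subset $S$ of an abelian group is the image of the standard basis under a homomorphism $\mathbb{Z}^{|S|} \to \langle S \rangle$, and uniform measures on F\o{}lner sets can be pushed forward along quotients as in the proof of Proposition~\ref{prop:quotients} below. For the box $F_k = \{0,1,\dots,k\}^{N}$ we have
\[
|B F_k| \leq |F_k| + N(k+1)^{N-1} = \Big(1 + \frac{N}{k+1}\Big)|F_k| \leq (1+\epsilon)|F_k|
\]
whenever $k+1 \geq N/\epsilon$, so for this case $m(\epsilon,N) = (k+1)^{N}$ with $k+1 = \lceil N/\epsilon \rceil$ works.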
\begin{example}
Let $G$ be a finite group. The direct power $G^{I}$ is uniformly amenable for every set $I$.
Given a positive integer $n$, the set $X \mathrel{\mathop{:}}= G^n$ of all $n$-tuples of elements of $G$ is finite. We enumerate the elements,
say $X = \{x^{(1)},\dots, x^{(k)}\}$ where $k = |G|^n$. Here $x^{(i)} = (x_1^{(i)}, \dots, x_n^{(i)})$.
Consider the group $G^{k}$ with the ``universal'' $n$-element subset $U = \{u_1, \dots, u_n\}$ where $u_j = (x_j^{(1)}, x_j^{(2)},\dots,x_j^{(k)})$.
Let $S \subseteq G^{I}$ be a subset with $n$ elements, say $S = \{s_1,\dots,s_n\}$.
For every $i\in I$ we obtain an $n$-tuple $S(i) \mathrel{\mathop{:}}= (s_1(i),s_2(i),\dots,s_n(i)) \in X$,
and we obtain a map $t\colon I \to \{1,\dots,k\}$ such that $S(i) = x^{(t(i))}$.
The homomorphism $\alpha\colon G^k \to G^{I}$ defined by $\alpha(g_1,\dots,g_k)(i) = g_{t(i)}$
maps the universal set $U$ to $S$. Therefore the subgroup generated by $S$ is isomorphic to a subfactor of the finite group $G^k$; in particular, $\langle S \rangle$ is finite of order at most $|G|^{k} = |G|^{|G|^n}$ and is itself a suitable F\o{}lner set. We deduce that $G^{I}$ is uniformly amenable.
\end{example}
\begin{definition}
A class of groups $\mathfrak{K}$ satisfies a \emph{uniform isoperimetric inequality}, if
there is a function $\widetilde{m}\colon \mathbb{R}_{>0} \times \mathbb{N} \to \mathbb{N}$ such that for all $\epsilon > 0$, every $G \in \mathfrak{K}$ and every finite symmetric subset $S \subseteq G$
there is a finite subset $E \subseteq G$ with $|E| \leq \widetilde{m}(\epsilon, |S|)$ and
\[ \frac{|\partial_{S} E|}{|E|} \leq \epsilon \]
where $\partial_{S} E = SE \setminus E$ denotes the $S$-boundary of $E$.
\end{definition}
\begin{lemma}\label{lem:isoperimetric}
A class of groups $\mathfrak{K}$ is uniformly amenable if and only if it satisfies a uniform isoperimetric inequality.
\end{lemma}
\begin{proof}
Assume that $\mathfrak{K}$ is $m$-uniformly amenable. We define $\widetilde{m}(\epsilon,N) \mathrel{\mathop{:}}= m(\epsilon, N+1)$.
Let $\epsilon > 0 $ be given, let $G \in \mathfrak{K}$ and let $S \subseteq G$ be a finite symmetric subset. We define $S^* = S \cup \{1_G\}$. Uniform amenability provides a F\o{}lner set $E \subseteq G$ with $|E| \leq m(\epsilon,|S|+1) = \widetilde{m}(\epsilon,|S|)$ and
\[
|S^*E| \leq (1+\epsilon)|E|.
\]
Since $S^*E = E \cup \partial_{S}E$ the assertion follows.
Assume conversely that $\mathfrak{K}$ satisfies a uniform isoperimetric inequality with respect to $\widetilde{m}$. We define
$m(\epsilon, N) = \max_{k \leq 2N} \widetilde{m}(\epsilon,k)$. Let $\epsilon > 0$, $G \in \mathfrak{K}$, and a finite set $S \subseteq G$ be given. Define $T = S \cup S^{-1}$.
By assumption, there is a finite subset $E \subseteq G$ with $|E| \leq \widetilde{m}(\epsilon, |T|) \leq \max_{k \leq 2|S|} \widetilde{m}(\epsilon,k) = m(\epsilon, |S|)$ which satisfies
\[\frac{|\partial_T E|}{|E|} \leq \epsilon. \]
We obtain
\[ |SE| \leq |TE| \leq |E| + |\partial_T E| \leq (1+\epsilon)|E|.\qedhere\]
\end{proof}
\begin{definition}
A class of groups $\mathfrak{K}$ satisfies the \emph{uniform Reiter condition}, if there is a function $r \colon \mathbb{R}_{>0} \times \mathbb{N} \to \mathbb{N}$ such that for all $\epsilon > 0$, every $G \in \mathfrak{K}$, and every finite subset $S \subseteq G$, there is a finitely supported probability measure $\mu$ on $G$ such that $|\supp(\mu)| \leq r(\epsilon,|S|)$ and
\begin{equation}\label{eq:reiter-inequ}
\Vert \lambda^*_g(\mu) - \mu \Vert_{\ell^1} < \epsilon
\end{equation}
for all $g \in S$. Here $\lambda^*_g(\mu)$ denotes the pullback of $\mu$ with respect to the left multiplication with $g$, i.e.,
$ \lambda^*_g(\mu) (A) = \mu(gA)$.
\end{definition}
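For orientation, here is a minimal computation illustrating the condition for $G = \mathbb{Z}$: let $\mu_k$ be the uniform probability measure on $\{0,1,\dots,k-1\}$. For $g \in \mathbb{Z}$ with $|g| \leq k$ we obtain
\[
\Vert \lambda^*_g(\mu_k) - \mu_k \Vert_{\ell^1}
= \frac{\big| \big(\{0,\dots,k-1\} - g\big) \,\Delta\, \{0,\dots,k-1\} \big|}{k}
= \frac{2|g|}{k},
\]
which is smaller than $\epsilon$ once $k > 2|g|/\epsilon$. Note that this $k$ depends on the largest element of $S$ and not only on $\epsilon$ and $|S|$; uniformity is obtained instead by pushing forward the uniform measures on boxes from $\mathbb{Z}^{|S|}$, as in the sketch following the first example above.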
\begin{proposition}
A class $\mathfrak{K}$ of groups is uniformly amenable if and only if it satisfies the uniform Reiter condition.
\end{proposition}
\begin{proof}
Assume that $\mathfrak{K}$ is uniformly amenable. By Lemma \ref{lem:isoperimetric} the class $\mathfrak{K}$ satisfies a uniform isoperimetric inequality w.r.t.~a function $\widetilde{m}$. Let $\epsilon >0$, $G \in \mathfrak{K}$ and $S \subseteq G$ be given. Put $S^* = S \cup S^{-1}$.
There is a finite subset $E \subseteq G$ with $|E| \leq \max_{k \leq 2|S|}\widetilde{m}(\epsilon/3, k)$ for which $\frac{|\partial_{S^*} E|}{|E|} \leq \epsilon/3$.
Let $\mu$ be the uniform probability measure supported on $E$. For every $g \in S$ we have $g^{-1}E \setminus E \subseteq \partial_{S^*} E$ and $|g^{-1}E \setminus E| = |E \setminus g^{-1}E|$, so that $|g^{-1}E \Delta E| \leq 2|\partial_{S^*} E| \leq \frac{2\epsilon}{3}|E|$ and thus
\[
\Vert \lambda^*_g(\mu) - \mu \Vert_{\ell^1} = \frac{|g^{-1}E \Delta E|}{|E|} \leq \frac{2\epsilon}{3} < \epsilon.
\]
Hence the uniform Reiter condition holds with $r(\epsilon,N) \mathrel{\mathop{:}}= \max_{k \leq 2N} \widetilde{m}(\epsilon/3,k)$.
Conversely, assume that $\mathfrak{K}$ satisfies the uniform Reiter condition. Let $\epsilon' > 0$, $G \in \mathfrak{K}$ and a finite symmetric subset $S \subseteq G$ be given. Set $\epsilon = \epsilon' / |S|$. Using the uniform Reiter condition, we find a finitely supported probability measure $\mu$ on $G$ with $|\supp(\mu)|\leq r(\epsilon,|S|)$ which satisfies \eqref{eq:reiter-inequ}. For all $t \in [0,1]$ we define the level set $E_\mu(t) = \{ g \in G \mid \mu(\{g\}) \geq t\}$.
We claim that some level set satisfies a suitable isoperimetric inequality.
Summing the equality $|\lambda^*_g(\mu)(\{x\})-\mu(\{x\})| = \int_0^1 |1_{E_\mu(t)}(gx)-1_{E_\mu(t)}(x)| \;dt$ over all $x \in G$ we obtain
\[
\Vert \lambda^*_g(\mu)-\mu\Vert_{\ell^1} = \int_0^1 \sum_{x \in G} |1_{E_\mu(t)}(gx)-1_{E_\mu(t)}(x)| \;dt = \int_0^1 |g^{-1}E_\mu(t)\Delta E_\mu(t)| \,dt.
\]
Taking the sum over all $g \in S$
we see that
\[
\epsilon' = |S| \epsilon > \sum_{g \in S} \Vert \lambda^*_g(\mu)-\mu\Vert_{\ell^1} = \int_{0}^1 \sum_{g \in S} |g^{-1}E_\mu(t) \Delta E_\mu(t)| \, dt \geq \int_0^1 |\partial_{S} E_\mu(t)| \,dt.
\]
Suppose for a contradiction that $|\partial_S E_{\mu}(t)| > \epsilon' |E_\mu(t)|$ for every $t \in (0,1]$ with $E_\mu(t) \neq \emptyset$. Since $E_\mu(t)$ is non-empty for all $t$ in the interval $(0,\max_{x \in G} \mu(\{x\})]$ of positive length and since $\int_0^1 |E_\mu(t)| \,dt = \sum_{x \in G} \mu(\{x\}) = 1$, the last integral can be estimated by
\[
\int_0^1 |\partial_{S} E_\mu(t)| \,dt > \epsilon' \int_{0}^1 |E_\mu(t)| \,dt = \epsilon',
\]
which yields a contradiction. Consequently, there is some $t$ with $E_\mu(t) \neq \emptyset$, $|\partial_S E_\mu(t)| \leq \epsilon' |E_\mu(t)|$ and $|E_\mu(t)| \leq |\supp(\mu)| \leq r(\epsilon,|S|)$; thus $\mathfrak{K}$ satisfies a uniform isoperimetric inequality and Lemma~\ref{lem:isoperimetric} yields uniform amenability.
\end{proof}
\begin{remark}
It was proven in \cite{Wyso88} that uniform amenability of groups can be characterized by a uniform version of Kesten's condition on random walks. The argument given there -- based on a theorem of Kaimanovich (see \cite{Kaimanovich-80} or \cite[Thm.~5.2]{Kaimanovich-Vershik-83}) -- directly generalizes to classes and shows that
a uniformly amenable class of groups $\mathfrak{K}$ satisfies a \emph{uniform Kesten condition}:\\
There is a function $\kappa\colon \mathbb{R}_{>0} \times \mathbb{N} \to \mathbb{N}$ such that
for every $\epsilon > 0$, every $G \in \mathfrak{K}$, every finitely supported symmetric probability measure $\mu$ on $G$
and all $n \geq \kappa(\epsilon, |\supp(\mu)|)$
\[
\mathbb{P}(X_{2n} = 1_G)^{\frac{1}{2n}} > 1 - \epsilon
\]
where $X_n$ denotes the $\mu$-random walk on $G$ starting at the identity $1_G$.
\end{remark}
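For orientation we record what the uniform Kesten condition excludes. By Kesten's classical computation, the simple random walk on the free group $F_k$ (with $\mu$ uniform on the $2k$ standard generators and their inverses) satisfies
\[
\lim_{n \to \infty} \mathbb{P}(X_{2n} = 1)^{\frac{1}{2n}} = \frac{\sqrt{2k-1}}{k},
\]
which equals $\frac{\sqrt{3}}{2} < 1$ for $k = 2$. Hence, for $\epsilon < 1 - \frac{\sqrt{3}}{2}$, a class of groups containing pairs of elements without relations of bounded length cannot satisfy a uniform Kesten condition; this is the mechanism behind item (5) of the first example.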
\begin{proposition}\label{prop:quotients}
Let $\mathfrak{K}$ be a uniformly amenable class of groups. The class of all quotients of groups in $\mathfrak{K}$ is uniformly amenable.
\end{proposition}
\begin{proof}
Assume that $\mathfrak{K}$ satisfies the uniform Reiter condition for a function $r$. Let $G \in \mathfrak{K}$,
and let $N \subseteq G$ be a normal subgroup. The canonical projection $G \to G/N$ will be denoted by $\pi$. Given $\epsilon > 0$ and a finite subset $S \subseteq G/N$, we lift $S$ to a finite subset $S'$ in $G$, i.e., $\pi(S') = S$ and $|S'| = |S|$.
There is a finitely supported probability measure $\mu'$ on $G$ with $|\supp(\mu')| \leq r(\epsilon,|S|)$ that satisfies
\[
\Vert \lambda^*_g(\mu') - \mu' \Vert_{\ell^1} < \epsilon
\]
for all $g \in S'$. Let $\mu = \pi_*(\mu')$ be the pushforward measure on $G/N$. Clearly the support of $\mu$ has at most as many elements as the support of $\mu'$.
Moreover,
\begin{align*}
\Vert \lambda^*_{gN}(\mu) - \mu\Vert_{\ell^1} & = \sum_{x \in G/N} |\mu(\{gx\})-\mu(\{x\})|
= \sum_{x \in G/N} \Big|\sum_{w \in x} \big(\mu'(\{gw\})-\mu'(\{w\})\big)\Big| \\
&\leq \sum_{h\in G} |\mu'(\{gh\})-\mu'(\{h\})|
= \Vert \lambda^*_g(\mu') - \mu' \Vert_{\ell^1} < \epsilon
\end{align*}
for all $g \in S'$. Therefore the class of factor groups satisfies the uniform Reiter condition for the same function $r$.
\end{proof}
\begin{theorem}\label{thm:residual-argument}
Let $G$ be a group and let $\mathcal{F}$ be a filter base\footnote{$\mathcal{F}$ is a non-empty set of normal subgroups, such that for all $N,M \in \mathcal{F}$ the intersection $N\cap M$ contains an element of $\mathcal{F}$. } of normal subgroups with
$\bigcap \mathcal{F} = \{1_G\}$.
The group $G$ is uniformly amenable if and only if the class $\{G/N \mid N \in \mathcal{F} \}$ is uniformly amenable.
\end{theorem}
\begin{proof}
Suppose $G$ is uniformly amenable. It follows immediately from Proposition \ref{prop:quotients} that $\{G/N \mid N \in \mathcal{F}\}$ is uniformly amenable.
For the converse statement assume that $\{G/N \mid N \in \mathcal{F}\}$ satisfies the uniform Reiter condition for a function $r$ (w.l.o.g.~non-decreasing in the second argument).
Let $\epsilon > 0$ and a finite subset $S \subseteq G$ be given. We define $S_N = \pi_N(S) \subseteq G/N$, where $\pi_N\colon G \to G/N$ denotes the canonical projection for all $N \in \mathcal{F}$.
Let $C_N$ denote the set of probability measures $\mu$ on $G$ with $ |\supp(\mu)| \leq r(\epsilon, |S|)$ and whose
pushforward $\mu_N \mathrel{\mathop{:}}= (\pi_N)_*(\mu)$ to $G/N$ satisfies
\[
\Vert \lambda_{\pi_N(g)}^*(\mu_N) - \mu_N \Vert_{\ell^1} < \epsilon
\]
for all $g \in S$.
By assumption the sets $C_N$ are non-empty and $C_N \subseteq C_M$, if $N \subseteq M$ (this follows from the proof of Proposition~\ref{prop:quotients}). Since the supports are bounded, the sets $C_N$ are compact in the topology of pointwise convergence. As $\mathcal{F}$ is a filter base, the family $(C_N)_{N \in \mathcal{F}}$ has the finite intersection property, and by compactness there is a measure $\nu \in \bigcap_{N \in \mathcal{F}} C_N$.
Now, let $N\in \mathcal{F}$ be sufficiently small such that distinct elements in $T = S^{-1}\supp(\nu) \cup \supp(\nu)$ represent distinct cosets in $G/N$.
For all $g \in S$ we obtain
\begin{align*}
\Vert \lambda_g^*(\nu) - \nu \Vert_{\ell^1} &= \sum_{x \in T}|\nu(gx)-\nu(x)| =
\sum_{xN \in TN/N}|\nu_N(gxN)-\nu_N(xN)|\\
&= \Vert \lambda^*_{\pi_N(g)}(\nu_N) - \nu_N \Vert_{\ell^1} < \epsilon. \qedhere
\end{align*}
\end{proof}
The following result shows that uniform amenability is a profinite property
and immediately implies Theorem \ref{thm:uniform-amenability-profinite}.
\begin{corollary}
Let $H$ be a profinite group. If some dense subgroup $G \subseteq H$ is uniformly amenable, then $H$ is uniformly amenable.
\end{corollary}
\begin{proof}
Let $\mathcal{F}$ be the filter base of open normal subgroups of $H$; since $H$ is profinite, we have $\bigcap \mathcal{F} = \{1_H\}$.
Since $G$ is uniformly amenable, the class $\{G/(G \cap N) \mid N \in \mathcal{F}\}$ is uniformly amenable by Proposition~\ref{prop:quotients}.
Since $G$ is dense in $H$, we have $G/(G\cap N) \cong H/N$ for all open normal subgroups $N \leq H$, so the class $\{H/N \mid N \in \mathcal{F}\}$ is uniformly amenable as well.
The uniform amenability of $H$ now follows from Theorem~\ref{thm:residual-argument}.
\end{proof}
The next result is due to Keller \cite[Cor.~5.9]{Keller72}. As it will be used in the corollary below, we include a new short proof based on the uniform Kesten condition.
\begin{corollary}\label{cor:unif-amen-grps-satisfy-laws}
Every class $\mathfrak{K}$ of uniformly amenable groups satisfies a common group law.
\end{corollary}
\begin{proof}
It follows from the uniform Kesten condition that there is a number $N$ such that every pair of elements in any group $G \in \mathfrak{K}$ satisfies some non-trivial relation of length at most $N$.
There are only finitely many reduced words of length at most $N$ in two letters, say $w_1,\dots,w_m$, and for every pair of elements of $G$ at least one of them evaluates to the identity. The nested commutator $[w_1,[w_2,[\cdots[w_{m-1},w_m]\cdots]]]$ (possibly involving a new letter to make sure that it is non-trivial in the free group) evaluates to the identity as soon as one of the $w_i$ does; it is therefore a law in $G$.
\end{proof}
\begin{corollary}
Let $G$ be a finitely generated, uniformly amenable group. Then the profinite completion $\widehat{G}$ is positively finitely generated.
\end{corollary}
\begin{proof}
Since $G$ satisfies a non-trivial law $u$, every subquotient satisfies the same law.
However, there is a finite group for which $u$ is not a law (e.g. some large symmetric group). In particular, this finite group is not a subquotient of $G$ and similarly is not a subquotient of $\widehat{G}$. It follows from
\cite[Thm.\ 1.1]{Borovik-Pyber-Shalev} that $\widehat{G}$ is positively finitely generated.
\end{proof}
\begin{remark}
Keller asked whether a group which satisfies a law is amenable and whether an amenable group which satisfies a law is uniformly amenable. The answer to the first question is negative, since it is known by work of Adyan \cite{Adyan82} that free Burnside groups of large exponent are non-amenable. In fact, they are even uniformly non-amenable \cite{Osin07}.
On the other hand, Zelmanov's solution of the restricted Burnside problem implies that residually finite groups (not necessarily finitely generated) of bounded exponent are uniformly amenable. It is tempting to propose the following variation of Keller's question:
\end{remark}
\begin{question}
Is every family of finite groups which satisfies a common law uniformly amenable?
\end{question}
Let us close this section by noting that in general the class of uniformly amenable groups seems to be poorly understood and it would be fruitful to have more examples. Are there uniformly amenable groups which are not elementary amenable? Are there uniformly amenable groups with intermediate growth?
\section{Groups acting on rooted trees and the $\Omega$-construction}\label{sec:trees-and-construction}
The purpose of this section is to introduce the basic construction of groups we will frequently use. We begin by fixing some basic
terminology from the theory of groups acting on rooted trees.
\subsection{Groups acting on rooted trees}
By a \emph{rooted tree} we will always mean a tree $T$ with a distinguished vertex, called \emph{the root of $T$}, which we will denote by $\emptyset$.
An automorphism of $T$ will always be assumed to fix the root of $T$.
The group of all such automorphisms will be denoted by $\Aut(T)$.
Accordingly, an action of a group $G$ on a rooted tree $T$ is an action by graph isomorphisms that fix the root of $T$.
Let $V(T)$ denote the vertex set of $T$.
The distance of a vertex $v \in V(T)$ to the root $\emptyset$ is called the \emph{level} of $v$ and will be denoted by $\lv(v)$.
Two vertices $v,w \in V(T)$ are called \emph{adjacent} if they are connected by an edge.
In this paper we will mostly be interested in group actions on trees that arise as Cayley graphs of free monoids.
More precisely, let $X$ be a non-empty finite set which one can think of as an alphabet.
Let $X^{\ast}$ denote the free monoid generated by $X$, i.e.\ the set of (finite) words over $X$ with composition given by concatenation of words.
Let $T_{X}$ denote the Cayley graph of $X^{\ast}$ with respect to $X$.
Clearly $T_X$ is a tree and we consider $T_X$ as a rooted tree where the root is the empty word $\emptyset$.
Note that the set $X^{\ell}$ of words of length $\ell \in \mathbb{N}$ is precisely the set of vertices of level $\ell$ in $T_X$.
As every $\alpha \in \Aut(T_X)$ fixes the root of $T_X$, it follows that $\alpha$ preserves the level sets $X^{\ell}$.
Thus for every subgroup $G \leq \Aut(T_X)$ we have a natural homomorphism from $G$ to the symmetric group $\Sym(X^{\ell})$.
If $\ell = 1$, we write $\sigma_g \in \Sym(X)$ to denote the image of $g$ under this homomorphism.
On the other hand, every permutation $\sigma \in \Sym(X)$ gives rise to an automorphism of $T_X$ by defining $\sigma(xw) = \sigma(x)w$ for all $x \in X$ and $w \in X^{\ast}$.
To simplify notation, this automorphism will be denoted by $\sigma$ as well.
Automorphisms obtained in this way will be called \emph{rooted} (here we follow the terminology of \cite{BGS-branch}).
Another important type of automorphism is obtained by letting the direct sum $\Aut(T_X)^{X} \mathrel{\mathop{:}}= \bigoplus \limits_{x \in X} \Aut(T_X)$ act on $T_X$ via $((g_x)_{x \in X},yw) \mapsto yg_y(w)$ for all $y \in X$ and $w \in X^{\ast}$.
Together with the rooted ones, these automorphisms can be used to decompose arbitrary automorphisms of $T_X$ as follows.
\begin{definition}\label{def:splitting-an-automorphism}
Let $X$ be a finite set and let $\alpha \in \Aut(T_X)$.
For each $x \in X$ we define the \emph{state} of $\alpha$ at $x$ as the unique automorphism $\alpha_x \in \Aut(T_X)$ that satisfies
\[
\alpha(xw) = \sigma_{\alpha}(x)\alpha_x(w)
\]
for every $w \in X^{\ast}$.
This gives us a decomposition $\alpha = \sigma_{\alpha} \circ (\alpha_x)_{x \in X}$ which is called the \emph{wreath decomposition} of $\alpha$.
\end{definition}
If the alphabet $X$ is clear from the context, we will often just write $(\alpha_x)$ instead of $(\alpha_x)_{x \in X}$.
Note that the wreath decomposition endows us with an isomorphism
\[
\Aut(T_X)
\rightarrow
\Sym(X) \ltimes \Aut(T_X)^{X},\ \alpha \mapsto \sigma_{\alpha} \cdot (\alpha_x).
\]
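As a concrete illustration (standard, and included only as an example), consider the binary alphabet $X = \{0,1\}$ and the \emph{adding machine} $a \in \Aut(T_X)$ defined by $a(0w) = 1w$ and $a(1w) = 0a(w)$ for all $w \in X^{\ast}$. Its wreath decomposition is
\[
a = \sigma \circ (a_0,a_1) \quad \text{with } \sigma = (0\ 1),\ a_0 = \id \text{ and } a_1 = a.
\]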
\begin{definition}\label{def:stabilizer-level-n}
Let $G \leq \Aut(T_X)$ be a subgroup and let $v$ be a vertex of $T_X$.
The subtree of $T_X$ whose vertex set is given by $vX^{\ast}$ will be denoted by $(T_X)_v$.
We write $\St_G(v)$ for the stabilizer of $v$ in $G$.
The \emph{rigid stabilizer} of $v$ in $G$, denoted by $\RiSt_G(v)$, is the subgroup of $\St_G(v)$ that consists of elements $g \in G$ that fix every vertex outside of $(T_X)_v$.
For $\ell \in \mathbb{N}_0$ we further define the \emph{level $\ell$ stabilizer subgroup}
\[
\St_G(\ell) \mathrel{\mathop{:}}= \bigcap \limits_{v \in X^{\ell}} \St_G(v)
\]
and the \emph{rigid level $\ell$ stabilizer subgroup}
\[
\RiSt_G(\ell) \mathrel{\mathop{:}}= \langle \bigcup \limits_{v \in X^{\ell}} \RiSt_G(v) \rangle
\]
in $G$.
\end{definition}
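For orientation, consider the extreme case $G = \Aut(T_X)$: here $\RiSt_{\Aut(T_X)}(v)$ consists exactly of the automorphisms supported on the subtree $(T_X)_v$, and
\[
\St_{\Aut(T_X)}(\ell) = \RiSt_{\Aut(T_X)}(\ell) \cong \Aut(T_X)^{X^{\ell}}
\]
for every $\ell \in \mathbb{N}_0$. For proper subgroups $G \leq \Aut(T_X)$ the rigid stabilizers may be much smaller than the corresponding stabilizers; the branch condition below singles out the groups for which they remain large.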
Let $G$ be a group that acts on a rooted tree $T$.
We call $G$ a \emph{branch group}, if the index of $\RiSt_{G}(\ell)$ in $G$ is finite for every $\ell \in \mathbb{N}$.
For a subgroup $G \leq \Aut(T_X)$, we say that $G$ is \emph{self-similar}, if for each $g \in G$ the elements $g_x$ in the wreath decomposition $g = \sigma_g \circ (g_x)_{x \in X}$ are contained in $G$.
\begin{notation}\label{def:iota-v}
Given a subgroup $G \leq \Aut(T_{X})$ and a word $v \in X^{\ast}$ of length $\ell$, we consider the embedding $\iota_v \colon G \rightarrow \Aut(T_{X})$ given by
\[
\iota_v(g)(uw) =
\begin{cases}
ug(w),& \text{if } u = v\\
uw,& \text{if } u \neq v
\end{cases}
\]
for every $g \in G$, $w \in X^{\ast}$ and $u \in X^{\ell}$.
\end{notation}
\subsection{The $\Omega$-construction}\label{subsec:construction}
Let us fix a non-empty finite set $X$ and an element $o \in X$.
Let $X^{+} \mathrel{\mathop{:}}= X \setminus \{o\}$ and let $\mathcal{S}$ denote the space of infinite sequences $(\omega_n)_{n \in \mathbb{N}}$ over $X^{+}$.
We consider the \emph{left shift operator} $L \colon \mathcal{S} \rightarrow \mathcal{S}$ given by $(\omega_1,\omega_2,\omega_3,\ldots) \mapsto (\omega_2,\omega_3,\ldots)$.
\begin{definition}\label{def:omega-elements}
Given a sequence $\omega = (\omega_n) \in \mathcal{S}$, we define the homomorphism
\[
\widetilde{\ \cdot\ }^{\omega} \colon \Aut(T_X) \rightarrow \Aut(T_X),\ \alpha \mapsto \widetilde{\alpha}^{\omega} = (\alpha_x)_{x \in X},
\]
where
\[
\alpha_x =
\begin{cases}
\widetilde{\alpha}^{L(\omega)},& \text{if } x = o\\
\alpha,& \text{if } x = \omega_1\\
\id, & \text{otherwise.}
\end{cases}
\]
If $G$ is a subgroup of $\Aut(T_X)$, we write $\widetilde{G}^{\omega}$ to denote the image of $G$ under $\widetilde{\ \cdot\ }^{\omega}$.
The group generated by $G$ and $\widetilde{G}^{\omega}$ will be denoted by $\Gamma_{G}^{\omega}$.
More generally, for every non-empty subset $\Omega \subseteq \mathcal{S}$, we define $\Gamma_{G}^{\Omega}$ as the subgroup of $\Aut(T_{X})$ that is generated by all groups $\Gamma_{G}^{\omega}$ with $\omega \in \Omega$.
\end{definition}
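Unfolding the recursion shows that $\widetilde{\alpha}^{\omega}$ is supported along the ray $o, o^2, o^3, \dots$: it acts as a copy of $\alpha$ on each of the pairwise disjoint subtrees $(T_X)_{o^{\ell-1}\omega_{\ell}}$ for $\ell \in \mathbb{N}$ and fixes every vertex outside these subtrees. This explicit description will be used, for instance, in the proof of Theorem~\ref{thm:continuum-of-volume-simple}.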
Let $G$ be a subgroup of $\Aut(T_X)$.
Adapting a notion introduced by Segal~\cite{Segal01}, we say that $G$ has \emph{property H} if the following hold:
\begin{itemize}
\item $G$ acts transitively on the first level of $T_X$, i.e.\ on $X$.
\item for all $x\neq y$ in $X$ there exists $g \in \St_G(x)$ with $g(y)\neq y$.
\end{itemize}
\begin{lemma}\label{lem:structure-of-rist}
Let $G \leq \Aut(T_X)$ be a perfect, self-similar subgroup that satisfies property H.
For every $\omega = (\omega_n)_{n \in \mathbb{N}} \in \mathcal{S}$ we have $\iota_{\omega_1}(G) \subseteq \RiSt_{\Gamma_G^{\omega}}(\omega_1)$.
\end{lemma}
\begin{proof}
Since $G$ satisfies property~H, we can find some $h \in G$ with $h(\omega_1) = \omega_1$ and $h(o) \neq o$.
Let $h = \sigma_h \cdot (h_x)$ be the wreath decomposition of $h$.
Consider an arbitrary $g \in G$ and its image $\widetilde{g}^{\omega}$ in $\widetilde{G}^{\omega}$.
Recall that $\widetilde{g}^{\omega} = (g_x)$,
where $g_{o} = \widetilde{g}^{L(\omega)}$, $g_{\omega_1} = g$ and $g_x = \id$ otherwise.
By conjugating $\widetilde{g}^{\omega}$ with $h$ we obtain
\begin{align*}
h \widetilde{g}^{\omega} h^{-1}
&= \sigma_h \cdot (h_x) \circ (g_x) \circ (h_x^{-1}) \cdot \sigma_h^{-1}\\
&= \sigma_h \cdot (h_{x} g_{x} h_{x}^{-1}) \cdot \sigma_h^{-1}\\
&= (h_{\sigma_h(x)} g_{\sigma_h(x)} h_{\sigma_h(x)}^{-1}).
\end{align*}
From the self-similarity of $G$ we see that
\[
h_{\sigma_h(\omega_1)} g_{\sigma_h(\omega_1)} h_{\sigma_h(\omega_1)}^{-1}
= h_{\omega_1} g_{\omega_1} h_{\omega_1}^{-1}
= h_{\omega_1} g h_{\omega_1}^{-1}
\]
is an element of $G$.
Further we have $h_{\sigma_h(x)} g_{\sigma_h(x)} h_{\sigma_h(x)}^{-1} = \id$ for $\sigma_h(x) \in X \setminus \{o,\omega_1\}$.
In particular it follows that $h_{\sigma_h(o)} g_{\sigma_h(o)} h_{\sigma_h(o)}^{-1} = \id$.
Let $k \in G$ be a further element.
Note that the commutator $[\widetilde{k}^{\omega},h \widetilde{g}^{\omega} h^{-1}]$
takes the form
\[
[\widetilde{k}^{\omega},h \widetilde{g}^{\omega} h^{-1}]
= ([k_x,(h_{\sigma_h(x)} g_{\sigma_h(x)} h_{\sigma_h(x)}^{-1})])
\]
and that $[k_x,(h_{\sigma_h(x)} g_{\sigma_h(x)} h_{\sigma_h(x)}^{-1})] = \id$ for $x \neq \omega_1$.
On the other hand we have $[k_{\omega_1},(h_{\sigma_h({\omega_1})} g_{\sigma_h({\omega_1})} h_{\sigma_h({\omega_1})}^{-1})] = [k,h_{\omega_1} g h_{\omega_1}^{-1}]$.
Thus we see that every element of the form $\iota_{\omega_1}([k,h_{\omega_1} g h_{\omega_1}^{-1}])$ lies in $\RiSt_{\Gamma_{G}^{\omega}}(\omega_1)$.
Since $G$ is perfect and $g,k \in G$ were chosen arbitrarily, it follows that $\iota_{\omega_1}(G)$ is contained in $\RiSt_{\Gamma_{G}^{\omega}}(\omega_1)$.
\end{proof}
For every non-empty subset $\Omega \subseteq \mathcal{S}$ we consider its image
\[
L(\Omega) = \Set{L(\omega)}{\omega \in \Omega} \subseteq \mathcal{S}
\]
under the shift operator.
\begin{lemma}\label{lem:structure-of-rist-2}
Let $G \leq \Aut(T_X)$ be a perfect, self-similar subgroup that satisfies property~H.
For every non-empty $\Omega \subseteq \mathcal{S}$ and every $x \in X$ we have $\RiSt_{\Gamma_{G}^{\Omega}}(x)
= \iota_x(\Gamma_{G}^{L(\Omega)})$.
\end{lemma}
\begin{proof}
Let $\omega = (\omega_n) \in \Omega$.
By Lemma~\ref{lem:structure-of-rist} we have
$\iota_{\omega_1}(G)\subseteq \RiSt_{\Gamma_{G}^{\omega}}(\omega_1)$.
Since $G$ is self-similar and acts transitively on the first level (property~H), this implies
\begin{equation}\label{eq:structure-of-rist-2}
\iota_{o}(G)
\subseteq \RiSt_{\Gamma_{G}^{\omega}}(o)
\subseteq \RiSt_{\Gamma_{G}^{\Omega}}(o).
\end{equation}
We observe that $\widetilde{g}^{\omega}\iota_{\omega_1}(g)^{-1} = \iota_o(\widetilde{g}^{L(\omega)})$ and hence
Lemma~\ref{lem:structure-of-rist} implies further that
\begin{equation}\label{eq:structure-of-rist-3}
\iota_o(\widetilde{G}^{L(\omega)})
\subseteq \RiSt_{\Gamma_{G}^{\Omega}}(o).
\end{equation}
As $\omega \in \Omega$ was arbitrary,~\eqref{eq:structure-of-rist-2} together with~\eqref{eq:structure-of-rist-3} show that
\[
\iota_o(\Gamma_{G}^{L(\Omega)})
\subseteq \RiSt_{\Gamma_{G}^{\Omega}}(o)
\subseteq \St_{\Gamma_{G}^{\Omega}}(o).
\]
A further application of the transitivity on the first level and the self-similarity of $G$ now gives us $\iota_x(\Gamma_{G}^{L(\Omega)}) \subseteq \RiSt_{\Gamma_{G}^{\Omega}}(x)$ for every $x \in X$.
On the other hand, each $\Gamma_{G}^{\omega}$ is generated by elements of the form $g = \sigma_g \cdot (g_x)$ with either $g_x \in \widetilde{G}^{L(\omega)}$ or $g_x \in G$.
From this we see that the reverse inclusion $\RiSt_{\Gamma_{G}^{\Omega}}(x) \subseteq \iota_x(\Gamma_{G}^{L(\Omega)})$ is also satisfied.
\end{proof}
\begin{corollary}\label{cor:structure-of-rist-higher-level}
Let $G \leq \Aut(T_X)$ be a perfect, self-similar subgroup that satisfies property~H.
For every non-empty $\Omega \subseteq \mathcal{S}$ and every word $v \in X^{\ast}$ of length $\ell$, the rigid stabilizer of $v$ in $\Gamma_{G}^{\Omega}$ is given by $\RiSt_{\Gamma_{G}^{\Omega}}(v) = \iota_v(\Gamma_{G}^{L^{\ell}(\Omega)})$.
Moreover, we have $\RiSt_{\Gamma_{G}^{\Omega}}(\ell) = \St_{\Gamma_{G}^{\Omega}}(\ell)$ for every $\ell \in \mathbb{N}_0$. In particular, $\Gamma_G^{\Omega}$ is a branch group and the action is level-transitive.
\end{corollary}
\begin{proof}
The proof is by induction on the length of $v$.
If $v$ is the empty word, then there is nothing to show.
Suppose now that the corollary holds for some $\ell \in \mathbb{N}_0$.
Let $w \in X^{\ell+1}$ be a word of the form $w = vx$ with $v \in X^{\ell}$ and $x \in X$.
From Lemma~\ref{lem:structure-of-rist-2} we know that $\RiSt_{\Gamma_{G}^{L^{\ell}(\Omega)}}(x)
= \iota_x(\Gamma_{G}^{L^{\ell+1}(\Omega)})$ for every $x \in X$.
We obtain
\begin{align*}
\RiSt_{\Gamma_{G}^{\Omega}}(w)
&= \RiSt_{\RiSt_{\Gamma_{G}^{\Omega}}(v)}(vx)
= \RiSt_{\iota_v(\Gamma_{G}^{L^{\ell}(\Omega)})}(vx)\\
&= \iota_v(\RiSt_{\Gamma_{G}^{L^{\ell}(\Omega)}}(x))
= \iota_v(\iota_x(\Gamma_{G}^{L^{\ell+1}(\Omega)}))
= \iota_{w}(\Gamma_{G}^{L^{\ell+1}(\Omega)}).
\end{align*}
Since $\Gamma_{G}^{\Omega}$ is generated by elements of the form $g = \sigma_g \cdot (g_x)$ with either $g_x \in G$ or $g_x \in \widetilde{G}^{L(\omega)}$ for some $\omega \in \Omega$, it follows that $\St_{\Gamma_{G}^{\Omega}}(\ell)$ is contained in the group generated by all subgroups of the form $\iota_{v}(\Gamma_{G}^{L^{\ell}(\Omega)})$ with $v$ of level $\ell$.
Together with the first part this implies $\St_{\Gamma_{G}^{\Omega}}(\ell)
= \RiSt_{\Gamma_{G}^{\Omega}}(\ell)$ and since $\St_{\Gamma_{G}^{\Omega}}(\ell)$ is of finite index, we conclude that $\Gamma_{G}^{\Omega}$ is a branch group.
By property~H the group $G$ acts transitively on the first level. Since $\RiSt_{\Gamma_G^{\Omega}}(v)$ contains $\iota_v(G)$, it follows by induction that $\Gamma_G^{\Omega}$ acts transitively on every level.
\end{proof}
To finish this section we show that the groups $\Gamma_{G}^{\Omega}$ act like iterated wreath products on each level. Recall that for groups
$G,H$ with actions on sets $X$ and $Y$,
the \emph{permutational wreath product} $G \wr_X H$ is defined as the semidirect product $G \ltimes H^X$ where $G$ acts on $H^X$ by permuting the coordinates.
We define the \emph{natural action} of $G \wr_X H$ on the product set $X \times Y$ by $(g \cdot (h_x),(x,y)) \mapsto (g(x),h_x(y))$.
Given a finite set $X$ and a subgroup $Q \leq \Sym(X)$, we consider the iterated permutational wreath product of $Q$ given by
\[
\wr_X^{n} Q
= Q \wr_X (Q \wr_X (\cdots (Q \wr_X Q) \cdots)).
\]
Note that the natural action of an element $\alpha \in \wr_X^{n} Q$ on $X^n$ extends to a tree automorphism on $T_X$ by setting $\alpha(vw)=\alpha(v)w$ for all $v \in X^n$ and $w \in X^{\ast}$.
In the following, we will identify $\wr_X^{n} Q$ with its image in $\Aut(T_X)$ under this action.
\begin{proposition}\label{prop:equal-image}
Let $G \leq \Aut(T_X)$ be a perfect, self-similar subgroup that satisfies property~H.
Let $Q \leq \Sym(X)$ denote the image of $G$ under the canonical action on $X$.
Then for every non-empty subset $\Omega \subseteq \mathcal{S}$ and every $\ell \in \mathbb{N}$, the image of $\Gamma_{G}^{\Omega}$ in $\Aut(T_{X}) / \St_{\Aut(T_{X})}(\ell)$ is given by the permutational wreath product $\wr_{X}^{\ell} Q$.
\end{proposition}
\begin{proof}
By construction, every $g \in \Gamma_{G}^{\Omega}$ has a wreath decomposition
\begin{equation}\label{eq:equal-image-1}
g = \sigma_g \circ (g_x),
\end{equation}
where $\sigma_g$ is a rooted automorphism that corresponds to an element in $Q$ and $g_x \in \Gamma_{G}^{L(\Omega)}$ for every $x \in X$.
Note that this implies that for every $g \in \Gamma_{G}^{\Omega}$ and every word $v = x_1 \dots x_{\ell}$ over $X$, its image under $g$ is given by
\begin{equation}\label{eq:equal-image-2}
g(v) = \sigma_1(x_1) \ldots \sigma_{\ell}(x_{\ell})
\end{equation}
for some appropriate permutations $\sigma_i \in Q$, i.e., the image of $\Gamma_{G}^{\Omega}$ lies in $\wr_{X}^{\ell} Q$.
On the other hand, Lemma~\ref{lem:structure-of-rist-2} tells us that $\RiSt_{\Gamma_{G}^{\Omega}}(x)
= \iota_x(\Gamma_{G}^{L(\Omega)})$ for every $x \in X$.
Together with~\eqref{eq:equal-image-1}, applied to the groups $\Gamma_{G}^{L^{k}(\Omega)}$ for all $k \in \mathbb{N}_0$, this shows that for every $\sigma \in Q$ its corresponding rooted automorphism, also denoted by $\sigma$, lies in $\Gamma_{G}^{L^{k}(\Omega)}$ for every $k \in \mathbb{N}_0$.
From Corollary~\ref{cor:structure-of-rist-higher-level} it therefore follows that $\iota_v(\sigma) \in \RiSt_{\Gamma_{G}^{\Omega}}(v)$ for every $v \in X^{\ast}$ and every $\sigma \in Q$.
In view of~\eqref{eq:equal-image-2}, this implies that the image of $\Gamma_{G}^{\Omega}$ in $\Aut(T_{X}) / \St_{\Aut(T_{X})}(\ell)$
is given by $\wr_{X}^{\ell} Q$.
\end{proof}
\section{The congruence subgroup property}\label{sec:csp}
Let $T$ be a rooted tree. We make $\Aut(T)$ into a topological group by declaring the subgroups $\St_{\Aut(T)}(n)$ to be a base of open neighbourhoods of the identity. Equipped with this topology the automorphism group $\Aut(T)$ is a compact, totally disconnected Hausdorff topological group, i.e., a profinite group.
Recall that the \emph{profinite completion} of a residually finite group $G$ is defined as the inverse limit $\widehat{G} \mathrel{\mathop{:}}= \varprojlim \limits_{N \unlhd_f G} G / N$ of the system of all normal subgroups of finite index in $G$.
If $G$ is a subgroup of $\Aut(T)$, we can further consider its \emph{tree completion} $\overline{G}$: the closure of $G$ in $\Aut(T)$ with respect to the topology introduced above. In particular $\overline{G}$ is a profinite group and $\overline{G} \cong \varprojlim \limits_{n} G / \St_G(n)$.
In this case the universal property of the profinite completion gives rise to a canonical homomorphism
\[
\res^G_T \colon \widehat{G} \rightarrow \overline{G}.
\]
The homomorphism $\res^G_T$ allows us to extend the action of $G$ on $T$ to an action of $\widehat{G}$ on $T$.
Since $G$ is dense in both $\widehat{G}$ and $\overline{G}$, the map $\res^G_T$ is always surjective.
The goal of this section is to formulate sufficient conditions under which $\res^G_T$ is injective.
\begin{definition}\label{def:CSP}
Let $T$ be a rooted tree.
A subgroup $G \leq \Aut(T)$ satisfies the \emph{congruence subgroup property} $(\CSP)$ if $\res^G_T \colon \widehat{G} \rightarrow \overline{G}$ is an isomorphism.
\end{definition}
\begin{remark}\label{rem:rewriting-CSP}
From the definitions it directly follows that a subgroup $G \leq \Aut(T)$ satisfies the congruence subgroup property if and only if for every normal subgroup $N \unlhd G$ of finite index there is a number $n \in \mathbb{N}$ such that $\St_G(n)$ is contained in $N$.
\end{remark}
The following very useful observation was extracted by Segal~\cite[Lemma $4$]{Segal01} from the proof of~\cite[Theorem $4$]{Grigorchuk00}.
\begin{lemma}\label{lem:commutator-rist}
Let $T$ be a rooted tree and let $G \leq \Aut(T)$ be a subgroup that acts level transitively on $T$.
Then for every non-trivial normal subgroup $N \unlhd G$ there is some $n \in \mathbb{N}$ with $\RiSt_{G}(n)' \leq N$, where $\RiSt_{G}(n)'$ denotes the commutator subgroup of $\RiSt_{G}(n)$.
\end{lemma}
Recall that an infinite group $G$ is called \emph{just infinite} if every proper quotient of $G$ is finite.
\begin{corollary}\label{cor:criterion-CSP}
Let $T$ be a rooted tree and let $G \leq \Aut(T)$ be a subgroup that acts level transitively on $T$.
Suppose that every rigid stabilizer $\RiSt_G(v)$ is perfect and that the groups $\St_G(n)$ and $\RiSt_G(n)$ coincide for every $n \in \mathbb{N}$.
Then $G$ is just infinite and satisfies the $\CSP$.
\end{corollary}
\begin{proof}
Let $N$ be a non-trivial normal subgroup of $G$.
From Lemma~\ref{lem:commutator-rist} we know that there is some $n$ with $\RiSt_{G}(n)' \leq N$.
Since the rigid stabilizers are perfect, it follows that
\[
\RiSt_{G}(v) = \RiSt_{G}(v)' \leq \RiSt_{G}(n)'
\]
for every vertex $v$ of level $n$ in $T$.
On the other hand $\St_{G}(n) = \RiSt_{G}(n)$ is generated by the level $n$ rigid vertex stabilizers $\RiSt_{G}(v)$.
Thus we obtain $\St_{G}(n) = \RiSt_{G}(n)' \leq N$. Since $\St_G(n)$ has finite index in $G$, every proper quotient of $G$ is finite, i.e., $G$ is just infinite; moreover, $G$ satisfies the $\CSP$ by Remark~\ref{rem:rewriting-CSP}.
\end{proof}
This result can be applied to the groups $\Gamma_{G}^{\Omega}$ defined in the previous section.
\begin{theorem}\label{thm:CSP-for-Omega}
Let $G \leq \Aut(T_X)$ be a perfect, self-similar subgroup that satisfies property~H.
Then for every non-empty subset $\Omega \subseteq \mathcal{S}$ the group $\Gamma_{G}^{\Omega}$ is just infinite and satisfies the congruence subgroup property.
\end{theorem}
\begin{proof}
From Corollary~\ref{cor:structure-of-rist-higher-level} we know that the groups $\RiSt_{\Gamma_{G}^{\Omega}}(\ell)$ and $\St_{\Gamma_{G}^{\Omega}}(\ell)$ coincide for every $\ell \in \mathbb{N}$.
As each rigid stabilizer $\RiSt_{\Gamma_{G}^{\Omega}}(v)$ is generated by isomorphic copies of the perfect group $G$, it follows that $\RiSt_{\Gamma_{G}^{\Omega}}(v)$ is perfect itself.
Now the claim follows from Corollary~\ref{cor:criterion-CSP}.
\end{proof}
As a consequence of Theorem~\ref{thm:CSP-for-Omega}, we see that the action of $\widehat{\Gamma_{G}^{\Omega}}$ on $T_{X}$ is a faithful extension of the action of $\Gamma_{G}^{\Omega}$ on $T_{X}$ and that $\widehat{\Gamma_{G}^{\Omega}}$ is isomorphic to $\overline{\Gamma_{G}^{\Omega}} \leq \Aut(T_X)$.
In the following, it will be important for us to observe that under the assumptions of Theorem \ref{thm:CSP-for-Omega} the tree completion $\overline{\Gamma_{G}^{\Omega}}$ does not depend on $\Omega$.
In fact, the tree completion is always an \emph{iterated wreath product}. Let $X$ be a finite set and let $Q \leq \Sym(X)$.
Consider the inverse limit $\wr_{X}^{\infty} Q \mathrel{\mathop{:}}= \varprojlim \limits_n \wr_X^{n} Q$ of the iterated wreath products, where the projection $\wr_X^{n+1} Q \rightarrow \wr_X^{n} Q$ is given by restricting the natural action of $\wr_X^{n} Q$ on $X^{n+1}$ to the first $n$ coordinates.
Then the iterated wreath product $\wr_{X}^{\infty} Q$ acts on $T_X$ and we identify $\wr_{X}^{\infty} Q$ with its image in $\Aut(T_X)$ under this action.
We note that this is a closed subgroup of $\Aut(T_X)$.
Since a closed subgroup of $\Aut(T_X)$ is uniquely determined by its actions on all finite levels of the tree, the following result is a direct consequence of
Theorem \ref{thm:CSP-for-Omega} and Proposition~\ref{prop:equal-image}.
\begin{corollary}\label{cor:profinite-completion-is-iterated-wreath}
Let $G \leq \Aut(T_X)$ be a perfect, self-similar subgroup that satisfies property~H.
Let $Q \leq \Sym(X)$ denote the image of $G$ under the canonical action on $X$.
For every non-empty subset $\Omega \subseteq \mathcal{S}$, the canonical map $\res^{\Gamma_{G}^{\Omega}}_{T_{X}}$ defines an isomorphism from $\widehat{\Gamma_{G}^{\Omega}}$ onto $\wr_X^{\infty} Q \leq \Aut(T_X)$.
\end{corollary}
\begin{corollary}\label{cor:induced-isomorphism-general}
Let $G,H \leq \Aut(T_X)$ be perfect, self-similar subgroups that satisfy property~H.
If the images of $G$ and $H$ in $\Sym(X)$ coincide, then the profinite completions
$\widehat{\Gamma_{G}^{\Omega}}$ and $\widehat{\Gamma_{H}^{\Omega'}}$ are isomorphic for all non-empty subsets $\Omega,\Omega' \subseteq \mathcal{S}$.
If moreover $G$ is a subgroup of $H$ and $\Omega \subseteq \Omega'$, then $\Gamma_{G}^{\Omega}$ is a subgroup of $\Gamma_{H}^{\Omega'}$ and the inclusion map $i$ induces an isomorphism $\widehat{i} \colon \widehat{\Gamma_{G}^{\Omega}} \rightarrow \widehat{\Gamma_{H}^{\Omega'}}$.
\end{corollary}
\begin{proof}
The first assertion follows immediately from Corollary \ref{cor:profinite-completion-is-iterated-wreath}. Assume that $G \leq H$ and $\Omega \subseteq \Omega'$. By definition $\Gamma_{G}^{\Omega} \subseteq \Gamma_{H}^{\Omega'}$.
We observe that the following diagram commutes:
\[
\begin{tikzcd}
\Gamma_{G}^{\Omega}\arrow[d]\arrow[r, "i"] & \Gamma_{H}^{\Omega'} \arrow[r]\arrow[d] & \Aut(T_X) \arrow[d,equal]\\
\widehat{\Gamma_{G}^{\Omega}}\arrow[r, "\widehat{i}"]& \widehat{\Gamma_{H} ^{\Omega'}}\arrow[r,"\res^{\Gamma_{H} ^{\Omega'}}_{T_X}"] & \Aut(T_X)
\end{tikzcd}
\]
and we deduce that $\res^{\Gamma_{G}^{\Omega}}_{T_X} = \res^{\Gamma_{H} ^{\Omega'}}_{T_X} \circ \widehat{i}$.
Now it follows from Corollary~\ref{cor:profinite-completion-is-iterated-wreath} that $\widehat{i}$ is an isomorphism.
\end{proof}
\section{Uncountably many groups up to isomorphism}\label{sec:uncountable}
The aim of this section is to prove that -- under mild assumptions on $G$ -- the family of groups $\Gamma^{\Omega}_G$, where $\Omega$ runs through the non-empty subsets of $\mathcal{S}$, contains uncountably many isomorphism types of groups.
Let $G$ be a group that acts via two homomorphisms $\varphi_1,\varphi_2 \colon G \rightarrow \Aut(T)$ on a rooted tree $T$.
We say that the actions are \emph{conjugate}, if there is an automorphism $\gamma \in \Aut(T)$ such that $\varphi_2(g) = \gamma \varphi_1(g) \gamma^{-1}$ for every $g \in G$.
\begin{definition}\label{def:rigid-presentation-alternative}
Let $T$ be a rooted tree.
We say that a subgroup $G \leq \Aut(T)$ is \emph{rigid}, if every automorphism $\alpha$ of $G$ is induced by a conjugation of $T$.
More precisely, this means that there is some $\gamma \in \Aut(T)$ with $\alpha(g) = \gamma g \gamma^{-1}$ for every $g \in G$.
\end{definition}
The following result is a special case of~\cite[Proposition 8.1]{LavrenyukNekrashevych02}.
\begin{proposition}\label{prop:rigidity-criterion}
Let $T$ be a rooted tree and let $G \leq \Aut(T)$ be a branch group.
Suppose that for every vertex $v$ the rigid stabilizer $\RiSt_G(v)$ acts level-transitively on the subtree $T_v$.
Then $G$ is rigid in $\Aut(T)$.
\end{proposition}
Recall that we write $\widehat{G}$ to denote the profinite completion of a residually finite group $G$.
\begin{lemma}\label{lem:rigidity}
Let $T$ be a rooted tree and let $G_1,G_2 \leq \Aut(T)$ be two branch groups whose rigid stabilizers $\RiSt_{G_i}(v)$ act level-transitively on $T_v$ for every vertex $v$.
Suppose that $G_1$ and $G_2$ satisfy the congruence subgroup property and that $\overline{G_1} = \overline{G_2} \subseteq \Aut(T)$.
Then every isomorphism between $G_1$ and $G_2$ is induced by a conjugation in $\Aut(T)$.
\end{lemma}
\begin{proof}
Define $\overline{G} \mathrel{\mathop{:}}= \overline{G_1} = \overline{G_2}$.
Suppose that $f \colon G_1 \rightarrow G_2$ is an isomorphism and let $\widehat{f} \colon \widehat{G_1} \rightarrow \widehat{G_2}$ be the corresponding isomorphism on the profinite completions.
By the congruence subgroup property, the maps $\res^{G_i}_T \colon \widehat{G_i} \rightarrow \overline{G_i}$ are isomorphisms between the profinite completions and the tree completions. The homomorphism
$f_0 \mathrel{\mathop{:}}= \res^{G_2}_{T} \circ \widehat{f} \circ (\res^{G_1}_{T})^{-1}$ is thus an automorphism of $\overline{G}$, i.e.,
the following diagram commutes
\[
\begin{tikzcd}
\widehat{G_1}\arrow[r, "\widehat{f}"]\arrow[d,"\res^{G_1}_T",swap] & \widehat{G_2} \arrow[d, "\res^{G_2}_T"]\\
\overline{G}\arrow[r, "f_0"]& \overline{G}
\end{tikzcd}.
\]
Since the rigid stabilizers of $\overline{G}$ contain those of $G_1$ (and $G_2$), we can apply Proposition~\ref{prop:rigidity-criterion} to deduce that there is some $\gamma \in \Aut(T)$ with $f_0(g) = \gamma g \gamma^{-1}$ for all $g \in \overline{G}$.
For every $g \in G_1 \subseteq \overline{G}$, we therefore obtain $f(g) = f_0(g) = \gamma g \gamma^{-1}$.
\end{proof}
\begin{definition}\label{def:volume-of-tree-auto}
Let $X$ be a finite alphabet and let $T_X$ be the corresponding $\abs{X}$-regular rooted tree with vertex set $X^{\ast}$.
Given a tree automorphism $g \in \Aut(T_X)$ and a number $\ell \in \mathbb{N}$, we consider the subset $\Fix_{\ell}(g) \subseteq X^{\ell}$ of vertices of level $\ell$ that are fixed by $g$.
The \emph{support volume} of $g$ is defined as $\vol(g) \mathrel{\mathop{:}}= \lim \limits_{\ell \rightarrow \infty} \frac{\abs{X^{\ell} \setminus \Fix_{\ell}(g)}}{\abs{X^{\ell}}}$.
\end{definition}
Given a tree automorphism $g \in \Aut(T_X)$ and a vertex $v$ with $g(v) \neq v$, it follows that no descendant of $v$ is fixed by $g$.
Thus $\frac{\abs{X^{\ell} \setminus \Fix_{\ell}(g)}}{\abs{X^{\ell}}}$ is a non-decreasing sequence of numbers that are bounded above by $1$.
In particular this tells us that the limit $\vol(g) = \lim \limits_{\ell \rightarrow \infty} \frac{\abs{X^{\ell} \setminus \Fix_{\ell}(g)}}{\abs{X^{\ell}}}$ indeed exists. In fact, the support volume measures the set of elements in the boundary of $T_X$ which are moved by $g$.
The support volume is invariant under conjugation. Let $\alpha \in \Aut(T_X)$ be an automorphism. Then $\Fix_{\ell}(\alpha g \alpha^{-1}) = \alpha(\Fix_\ell(g))$
and hence $\vol(g) = \vol(\alpha g \alpha^{-1})$.
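Two elementary computations, recorded as examples for later orientation: if a non-trivial $\sigma \in \Sym(X)$ is viewed as a rooted automorphism, then precisely the vertices $xw$ with $\sigma(x) \neq x$ are moved, so that
\[
\vol(\sigma) = \frac{|\Set{x \in X}{\sigma(x) \neq x}|}{|X|} > 0.
\]
Moreover, if $g \in \Aut(T_X)$ acts as a copy of some $t \in \Aut(T_X)$ on a subtree $(T_X)_v$ with $\lv(v) = \ell$ and trivially outside of it, then $\vol(g) = \vol(t)/|X|^{\ell}$; summing these contributions over pairwise disjoint subtrees gives the formulas used in the proofs below.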
We return to the construction introduced in Section~\ref{subsec:construction}. Let $X$ be a non-empty finite set with an element $o \in X$ and define $X^+ = X \setminus \{o\}$. Recall that $\mathcal{S}$ denotes the space of infinite sequences over $X^{+}$.
\begin{theorem}\label{thm:continuum-of-volume-simple}
Let $X$ be a finite set with $|X|\geq 3$.
Let $G \leq \Aut(T_X)$ be a non-trivial subgroup.
For every $\omega \in \mathcal{S}$ the set of real numbers
\[
\Set{\vol(g)}{g \in \Gamma^{\{\omega,\omega'\}}_G, \omega' \in \mathcal{S}} \subseteq [0,1]
\]
is uncountable.
\end{theorem}
\begin{proof}
Let $t \in G$ be an element which acts non-trivially on $T_X$; in particular, $\vol(t) > 0$.
Since $|X| \geq 3$, we can pick an element $z_n \in X \setminus \{o,\omega_n\}$ for every $n \in \mathbb{N}$.
Let $S \subseteq \mathbb{N}$ be a set of natural numbers.
We define $\omega' = \omega'(S)$ by
\[
\omega_n' = \begin{cases} \omega_n & \text{ if } n \not\in S\\
z_n & \text{ if } n \in S. \end{cases}
\]
Consider the element
$g = (\widetilde{t}^{\omega})^{-1}\widetilde{t}^{\omega'} \in \Gamma_G^{\{\omega,\omega'\}}$. Then $g$ acts like $t^{-1}$ on $(T_X)_{o^{n-1}\omega_n}$
and like $t$ on $(T_X)_{o^{n-1}z_n}$ for every $n \in S$ and acts trivially on all vertices not contained in one of these subtrees. We obtain
\[
\vol(g) = \sum_{n \in S} \frac{2\vol(t)}{|X|^{n}} = 2\vol(t) \sum_{n \in S} |X|^{-n}
\]
and we observe that this number uniquely determines the set $S$.
Indeed, since $|X|\geq 3$, the first non-zero term dominates the sum of all later terms. This completes the proof of the theorem, using that there are uncountably many subsets $S \subseteq \mathbb{N}$.
\end{proof}
\begin{corollary}\label{cor:uncountably-many-iso-types-simple}
Let $G \leq \Aut(T_X)$ be a countable, perfect, self-similar subgroup that satisfies property~H.
For every $\omega \in \mathcal{S}$
the family of groups $(\Gamma_G^{\{\omega,\omega'\}})_{\omega' \in \mathcal{S}}$ contains uncountably many distinct isomorphism types.
\end{corollary}
\begin{proof}
Recall that
by Corollary \ref{cor:structure-of-rist-higher-level} the groups $\Gamma_G^{\Omega}$ are branch groups and the rigid stabilizers act level-transitively. By Theorem \ref{thm:CSP-for-Omega} these groups have the congruence subgroup property and by Corollary~\ref{cor:profinite-completion-is-iterated-wreath} the closure of $\Gamma_G^{\Omega}$ in $\Aut(T_X)$ does not depend on $\Omega$. We conclude using Lemma \ref{lem:rigidity} that every isomorphism between two of the groups $\Gamma_G^{\Omega}$ is induced by a conjugation in $\Aut(T_X)$. In particular, this means that isomorphisms between these groups preserve the support volume of elements.
We note that $G$ is perfect and acts transitively on $X$, hence we must have $|X| \geq 5$. Theorem \ref{thm:continuum-of-volume-simple} therefore shows that the set of support volumes of elements in the groups $\Gamma_G^{\{\omega,\omega'\}}$ is uncountable. However, $G$ is countable and so the groups $\Gamma_G^{\{\omega,\omega'\}}$ are countably generated and thus countable.
In conclusion, each isomorphism type contributes at most countably many numbers to the uncountable set of support volumes and consequently uncountably many isomorphism types have to occur.
\end{proof}
In the next section we will discuss a concrete example of a group $G$ where a similar argument can be used to show that the number of isomorphism types in the family $(\Gamma_G^\omega)_{\omega \in \mathcal{S}}$ is uncountable.
\begin{remark}
We briefly return to Grothendieck's question.
If $G \leq \Aut(T)$ is a finitely generated group which satisfies the assumptions of Corollary \ref{cor:uncountably-many-iso-types-simple}, then the groups $(\Gamma_{G}^\Omega)_{\Omega}$, where $\Omega$ runs over the finite subsets of $\mathcal{S}$, form an uncountable directed system of finitely generated residually finite groups in which every inclusion induces an isomorphism between profinite completions (see Corollary~\ref{cor:induced-isomorphism-general}).
\end{remark}
\section{Matrix groups acting on trees}\label{sec:matrix-groups}
Given a prime number $p$ and a natural number $n$ we consider the
set $\mathcal{A}_{p,n} \mathrel{\mathop{:}}= \{0,\ldots,p-1\}^n$ which takes the role of the alphabet (called $X$ in the previous sections).
Let $\mathcal{A}_{p,n}^{\ast}$ denote the free monoid generated by $\mathcal{A}_{p,n}$, i.e.\ the set of (finite) words over $\mathcal{A}_{p,n}$.
Let $T_{p,n}$ denote the Cayley graph of $\mathcal{A}_{p,n}^{\ast}$ with respect to $\mathcal{A}_{p,n}$.
Clearly $T_{p,n}$ is a tree whose boundary $\partial T_{p,n}$ can be identified with the set $\mathcal{A}_{p,n}^{\infty}$ of infinite sequences over $\mathcal{A}_{p,n}$.
The element $0 = (0,0,\dots,0) \in \mathcal{A}_{p,n}$ is the distinguished element and
we write $\mathcal{S}_{p,n}$ to denote the space of infinite sequences over $\mathcal{A}_{p,n} \setminus \{0\}$.
\begin{definition}\label{def:affine-group}
Given a commutative, unital ring $R$ and a natural number $n \in \mathbb{N}$ we write $\SAff_n(R)$ to denote the group of affine transformations of $R^n$ whose linear part lies in $\SL_n(R)$. We note that $\SAff_n(R) \cong R^n \rtimes \SL_n(R)$.
\end{definition}
It is a well-known fact that $\SL_n(\mathbb{Z})$ and $\SL_n(\mathbb{F}_p)$ are perfect for $n \geq 3$ (see for example~\cite[1.2.15]{Hahn-OMeara} and \cite[p.~46]{Wilson-FSG}).
In the following we need an affine version of this observation.
\begin{lemma}\label{lem:Affn-is-perfect}
The groups $\SAff_n(\mathbb{Z})$ and $\SAff_n(\mathbb{F}_p)$ are perfect for $n \geq 3$.
\end{lemma}
\begin{proof}
For every $v \in \mathbb{Z}^n$ let $T_v$ denote the translation by $v$.
Since $\SL_n(\mathbb{Z})$ is perfect (for $n \geq 3$), it suffices to show that every translation $T_{e_i}$ by a standard unit vector $e_i \in \mathbb{Z}^n$ can be written as a commutator in $\SAff_n(\mathbb{Z})$.
To see this, we consider the elementary matrices $E_{i,j} \in \SL_n(\mathbb{Z})$ for $1 \leq i < j \leq n$ which are defined by $E_{i,j} \cdot e_j = e_i+e_j$ and $E_{i,j} \cdot e_k = e_k$ for $k \neq j$.
Since $\gamma T_{v} \gamma^{-1} = T_{\gamma \cdot v}$ for all $\gamma \in \SL_n(\mathbb{Z})$ and $v \in \mathbb{Z}^n$, we obtain
\[
[T_{-e_j},E_{i,j}]
= T_{-e_j} E_{i,j} T_{e_j} E_{i,j}^{-1}
= T_{-e_j} T_{E_{i,j} \cdot e_j}
= T_{-e_j} T_{e_i+e_j}
= T_{e_i}
\]
and the result for $\SAff_n(\mathbb{Z})$ follows. The same argument applies to $\SAff_n(\mathbb{F}_p)$.
\end{proof}
The set $\mathcal{A}_{p,n}^{\infty}$ can be identified with $\mathbb{Z}_p^n$ via $(x_i)_{i \in \mathbb{N}} \mapsto \sum \limits_{i=1}^{\infty} p^{i-1} x_i$,
and similarly the $\ell$-th level of the tree can be identified with $(\mathbb{Z}/p^\ell\mathbb{Z})^n$.
In view of this, the natural action of $\SAff_n(\mathbb{Z}_p)$ on $\mathbb{Z}_p^n$ induces an action on $T_{p,n}$. In fact, the action on the $\ell$-th level factors through $\SAff_n(\mathbb{Z} / p^{\ell}\mathbb{Z})$.
\begin{lemma}\label{lem:affine-stabilizers}
The subgroup $\SAff_n(\mathbb{Z}) \leq \Aut(T_{p,n})$ is self-similar and satisfies property~H.
\end{lemma}
\begin{proof}
Let $A \in \SL_n(\mathbb{Z})$, let $b \in \mathbb{Z}^n$, and let $g \in \SAff_n(\mathbb{Z})$ be the element defined by $g(v) = Av + b$.
Let $u \in \mathbb{Z}_p^n$ be an element of the form $u = x + pw$ with $x \in \mathcal{A}_{p,n}$ and $w \in \mathbb{Z}_p^n$.
Let further $x' \in \mathcal{A}_{p,n}$ and $b' \in \mathbb{Z}_p^n$ be such that $Ax+b = x' + pb'$.
Then we have
\[
g(u)
= A(x + pw) + b
= Ax+b + pAw
= x' + p(Aw+b'),
\]
which tells us that $g_x$ is given by $g_x(w) = Aw+b'$.
As $Ax+b = x' + pb'$ implies $b' \in \mathbb{Z}^n$ and $g_x \in \SAff_n(\mathbb{Z})$, we deduce that $\SAff_n(\mathbb{Z})$ is self-similar.
The action of $\SAff_n(\mathbb{Z})$ on the first level $\mathcal{A}_{p,n}$ factors through $\SAff_n(\mathbb{F}_p)$. In fact, this is the natural action of $\SAff_n(\mathbb{F}_p)$ on $\mathbb{F}_p^n$. Since this action is $2$-transitive, it clearly satisfies property~H.
\end{proof}
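To illustrate the wreath recursion in a minimal case, take $n = 1$, so that $\SAff_1(\mathbb{Z}) \cong \mathbb{Z}$ consists of translations, and let $g = T_1$ be the translation by $1$. Writing $u = x + pw$ with $x \in \{0,\dots,p-1\}$, the computation above gives
\[
g_x = \id \text{ for } x \leq p-2
\qquad\text{and}\qquad
g_{p-1} = T_1,
\]
with $\sigma_g$ the cyclic permutation $x \mapsto x+1 \bmod p$; in other words, $T_1$ acts on $T_{p,1}$ as the $p$-ary adding machine.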
\begin{definition}\label{def:collatz-automorphism}
Let $\omega = (\omega_n)_{n \in \mathbb{N}} \in \mathcal{S}_{p,n}$.
For $g \in \SAff_n(\mathbb{Z})$ we define the map $\widetilde{g}^{\omega} \colon \mathbb{Z}_p^n \rightarrow \mathbb{Z}_p^n$ by
\[
\widetilde{g}^{\omega}(u) =
\begin{cases}
p^{\ell-1} \omega_{\ell} + p^{\ell}g(v),& \text{if } u = p^{\ell-1} \omega_{\ell} + p^{\ell}v \text{ for some } \ell \in \mathbb{N} \text{ and } v \in \mathbb{Z}_p^n\\
u,& \text{if } u \not\equiv p^{\ell-1} \omega_{\ell} \mod p^{\ell} \text{ for every } \ell \in \mathbb{N}.
\end{cases}
\]
Further we define $\widetilde{\SAff_n(\mathbb{Z})}^{\omega} \mathrel{\mathop{:}}= \Set{\widetilde{g}^{\omega}}{g \in \SAff_n(\mathbb{Z})}$.
\end{definition}
From this definition one can easily see that $\widetilde{\SAff_n(\mathbb{Z})}^{\omega}$ is a group and that
\[
\widetilde{\ \cdot \ }^{\omega} \colon \SAff_n(\mathbb{Z}) \rightarrow \widetilde{\SAff_n(\mathbb{Z})}^{\omega},\ g \mapsto \widetilde{g}^{\omega}
\]
is a group isomorphism.
The elements $\widetilde{g}^{\omega}$ can also be defined recursively with the \emph{left shift operator}
\[
L \colon \mathcal{A}_{p,n}^{\infty} \rightarrow \mathcal{A}_{p,n}^{\infty},\ (x_1,x_2,x_3,\ldots) \mapsto (x_2,x_3,x_4\ldots).
\]
Indeed, given a sequence $\omega = (\omega_n)_{n \in \mathbb{N}} \in \mathcal{S}_{p,n}$ and an element $g \in \SAff_n(\mathbb{Z})$, we can write
\[
\widetilde{g}^{\omega} = (g_x)_{x \in \mathcal{A}_{p,n}},
\]
where
\[
g_x =
\begin{cases}
\widetilde{g}^{L(\omega)},& \text{if } x = 0\\
g,& \text{if } x = \omega_1\\
\id, & \text{otherwise.}
\end{cases}
\]
This is exactly the formula used in Definition \ref{def:omega-elements}. For every $\omega \in \mathcal{S}_{p,n}$ we define the subgroup $\Gamma_{p,n}^{\omega} \leq \Aut(T_{p,n})$ to be the group generated by $\SAff_n(\mathbb{Z})$ and $\widetilde{\SAff_n(\mathbb{Z})}^{\omega}$. Recall that for a set $\Omega \subseteq \mathcal{S}_{p,n}$ we define the subgroup $\Gamma_{p,n}^{\Omega} \leq \Aut(T_{p,n})$ to be generated by the groups $\Gamma_{p,n}^{\omega}$ with $\omega \in \Omega$.
Lemma \ref{lem:Affn-is-perfect} and Lemma \ref{lem:affine-stabilizers} allow us to use the results developed in the foregoing sections.
In particular, we obtain the following result.
\begin{corollary}\label{cor:summary}
Let $n \geq 3$ and let $\Omega, \Omega_1, \Omega_2 \subseteq \mathcal{S}_{p,n}$ be non-empty subsets.
\begin{enumerate}
\item The group $\Gamma_{p,n}^{\Omega}$ is a level-transitive, just infinite branch group which contains a non-abelian free subgroup and satisfies the congruence subgroup property.
The profinite completion is isomorphic to the closure of $\Gamma_{p,n}^{\Omega}$ in $\Aut(T_{p,n})$ and does not depend on $\Omega$.
\item If $\Gamma_{p,n}^{\Omega_1}$ and $\Gamma_{p,n}^{\Omega_2}$ are isomorphic, then they are already conjugate in $\Aut(T_{p,n})$.
\item For $\Omega_1 \subseteq \Omega_2$ the inclusion $\Gamma_{p,n}^{\Omega_1} \to \Gamma_{p,n}^{\Omega_2}$ induces an isomorphism between the profinite completions.
\end{enumerate}
\end{corollary}
\begin{proof}
It is well-known that $\SL_n(\mathbb{Z})$ contains non-abelian free subgroups.
By Corollary \ref{cor:structure-of-rist-higher-level} the groups $\Gamma_{p,n}^{\Omega}$ are branch groups and the rigid stabilizers act level-transitively. By Theorem \ref{thm:CSP-for-Omega} these groups have the congruence subgroup property and by Corollary~\ref{cor:profinite-completion-is-iterated-wreath} the closure of $\Gamma_{p,n}^{\Omega}$ in $\Aut(T_{p,n})$ is isomorphic to the profinite completion and does not depend on $\Omega$. Lemma \ref{lem:rigidity} shows that every isomorphism between two of the groups is induced by a conjugation in $\Aut(T_{p,n})$.
The third assertion follows from Corollary~\ref{cor:induced-isomorphism-general}.
\end{proof}
It follows from Corollary~\ref{cor:uncountably-many-iso-types-simple} that the number of isomorphism types among the groups $\Gamma_{p,n}^{\{\omega, \omega'\}}$ is uncountable. A variation of the argument shows that we can also find uncountably many groups up to isomorphism in the family $(\Gamma_{p,n}^{\omega})_{\omega \in \mathcal{S}_{p,n}}$. In particular, all but countably many of these groups do not admit a finite presentation.
\begin{proposition}\label{prop:continuum-of-isom-classes}
For every $n \geq 3$ and every prime $p$ there are uncountably many isomorphism classes of groups of the form $\Gamma_{p,n}^{\omega}$.
\end{proposition}
\begin{proof}
If two of the groups $\Gamma_{p,n}^{\omega}$ are isomorphic, then they are conjugated in $\Aut(T_{p,n})$ (see Corollary~\ref{cor:summary}) and since
conjugation in $\Aut(T_{p,n})$ preserves support volumes, we deduce that for isomorphic groups the sets
\[
\vol(\Gamma_{p,n}^\omega) = \{\vol(g) \mid g \in \Gamma_{p,n}^\omega \}
\]
coincide. We note that the groups $\Gamma_{p,n}^\omega$ are finitely generated and thus $\vol(\Gamma_{p,n}^\omega)$ is a countable set.
In particular, it is sufficient to prove -- following Theorem \ref{thm:continuum-of-volume-simple} -- that the set
$\bigcup_{\omega \in \mathcal{S}_{p,n}} \vol(\Gamma_{p,n}^\omega)$
is uncountable.
Let $e_1,e_2,\dots, e_n$ denote the standard basis of $\mathbb{Z}^n$.
Consider the elementary matrix $A = E_{1,2} \in \SL_n(\mathbb{Z})$ with $Ae_1 = e_1$ and $Ae_2 = e_1 + e_2$.
Let $T = T_{e_1} \in \SAff_n(\mathbb{Z})$ be the translation with the first standard basis vector.
For every subset $S \subseteq \mathbb{N}$ we define
$\omega = \omega(S) \in \mathcal{S}_{p,n}$ such that
\[
\omega_i = \begin{cases} e_1 & \text{ if } i \not\in S\\
e_2 & \text{ if } i \in S\end{cases}.
\]
Let $\omega' = A \omega$.
Using the formula given in Definition \ref{def:collatz-automorphism} it is readily checked that $A\widetilde{T}^{\omega}A^{-1} = \widetilde{T}^{\omega'}$.
We consider the commutator $g = [A,\widetilde{T}^{\omega}] \in \Gamma_{p,n}^{\omega}$ and we observe that
\[
g = [A,\widetilde{T}^{\omega}] = A \widetilde{T}^{\omega} A^{-1} (\widetilde{T}^{\omega})^{-1} = \widetilde{T}^{\omega'} (\widetilde{T}^{\omega})^{-1}.
\]
In particular, $g$ acts non-trivially exactly on the boundary points $x \in \mathbb{Z}_p^n$ congruent to
$p^{i}e_2$ or $p^{i}(e_1+e_2)$ with $i \in S$.
We obtain
\[
\vol(g) = \sum_{i \in S} \frac{2}{p^{ni}} = 2 \sum_{i \in S} p^{-ni}
\]
and we observe that this number uniquely determines the set $S$: since $2 < p^{n}$, the digits of the base-$p^{n}$ expansion of $\vol(g)$ equal $2$ precisely at the positions $i \in S$ and vanish elsewhere. Since there are uncountably many subsets $S \subseteq \mathbb{N}$, this completes the proof.
\end{proof}
\section{Amenable groups acting on trees}\label{sec:amenable}
Let $X$ be a finite set, let $o \in X$ and let $X^{+} \mathrel{\mathop{:}}= X \setminus \{o\}$.
Let $\mathcal{S}$ denote the set of infinite sequences over $X^{+}$.
Our goal in this section is to introduce amenable groups that have the same profinite completions as $\Gamma_{p,n}^{\Omega}$ for $n \geq 3$.
To this end we introduce automatic automorphisms of $T_X$.
Recall that for every vertex $v \in T_X$, we write $(T_X)_v$ to denote the subtree of $T_X$ whose vertex set is given by $vX^{\ast}$.
For $\alpha \in \Aut(T_X)$ we have $\alpha((T_X)_v) = (T_X)_{\alpha(v)}$.
Thus we can define the \emph{state of $\alpha$ at $v$} as the unique automorphism $\alpha_v$ of $T_X$ that satisfies $\alpha(vw) = \alpha(v)\alpha_v(w)$ for every $w \in X^{\ast}$.
The set of all states of $\alpha$ will be denoted by $S(\alpha) \mathrel{\mathop{:}}= \Set{\alpha_v \in \Aut(T_X)}{v \in X^{\ast}}$.
\begin{definition}\label{def:automatic-automorphism}
An automorphism $\alpha$ of $T_X$ is called \emph{automatic}, if $S(\alpha)$ is finite.
\end{definition}
\begin{example}\label{exam:automatic-automorphism}
Let $\alpha$ be a rooted automorphism of $T_X$ and let $\omega = (\omega_{\ell})_{\ell \in \mathbb{N}} \in \mathcal{S}$.
Consider the automorphism $\widetilde{\alpha}^{\omega}$ of $T_X$.
For $v \in X^{\ast}$ we have
\[
\widetilde{\alpha}^{\omega}_v =
\begin{cases}
\widetilde{\alpha}^{L^{\ell}(\omega)},& \text{if } v = o^{\ell} \text{ for some } \ell \in \mathbb{N}_0\\
\alpha,& \text{if } v = o^{\ell}\omega_{\ell} \text{ for some } \ell \in \mathbb{N}_0\\
\id, & \text{otherwise.}
\end{cases}
\]
Thus the set of states of $\widetilde{\alpha}^{\omega}$ is finite if and only if $\Set{L^{\ell}(\omega) \in \mathcal{S}}{\ell \in \mathbb{N}}$ is a finite subset of $\mathcal{S}$.
From this we see that $\widetilde{\alpha}^{\omega}$ is automatic if and only if there is some $N \in \mathbb{N}$ such that $L^{N}(\omega)$ is periodic.
\end{example}
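For instance, assuming as above that $L$ acts on $\mathcal{S}$ by shifting off the first letter, take two distinct letters $x,y \in X^{+}$ and $\omega = (x,y,x,y,\dots)$. Then $L^{\ell}(\omega)$ equals $\omega$ for even $\ell$ and $(y,x,y,x,\dots)$ for odd $\ell$, so $\widetilde{\alpha}^{\omega}$ has only finitely many states and is automatic. Conversely, if $\omega$ is not eventually periodic, then the shifts $L^{\ell}(\omega)$ are pairwise distinct and $\widetilde{\alpha}^{\omega}$ has infinitely many states.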
\begin{definition}\label{def:bounded-automorphism}
An automorphism $\alpha \in \Aut(T_X)$ is called \emph{bounded}, if there is some $C \geq 0$ such that
\[
\abs{\Set{v \in X^{\ell}}{\alpha_v \neq \id}} \leq C
\]
for all $\ell \in \mathbb{N}_0$.
\end{definition}
\begin{example}\label{exam:bounded-automorphism}
Let $\alpha$ be a rooted automorphism of $T_X$.
Then $\widetilde{\alpha}^{\omega}$ is clearly bounded for every choice of $\omega \in \mathcal{S}$.
\end{example}
It can be easily seen that the set of all bounded automatic automorphisms of $T_X$ forms a group.
In~\cite[Theorem 1.2]{BartholdiKaimanovichNekrashevych10}, Bartholdi, Kaimanovich and Nekrashevych proved that this group is amenable.
As subgroups of amenable groups are amenable, it follows that every subgroup of $\Aut(T_X)$ that is generated by bounded automatic automorphisms is amenable.
In view of Example~\ref{exam:automatic-automorphism} and Example~\ref{exam:bounded-automorphism} we therefore obtain the following.
\begin{proposition}\label{prop:amenability-criterion}
Let $G \leq \Aut(T_X)$ be a group of rooted automorphisms and let $\Omega \subseteq \mathcal{S}$ be a non-empty subset.
Suppose that every $\omega \in \Omega$ is eventually periodic, i.e.\ there is an $N_{\omega} \in \mathbb{N}$ such that $L^{N_{\omega}}(\omega)$ is periodic.
Then $\Gamma_{G}^{\Omega}$ is amenable.
\end{proposition}
Now we are able to prove Theorem \ref{thm:main-theorem}.
Let $p$ be a prime and let $n \geq 3$ be a natural number.
Consider the natural action of $\SAff_n(\mathbb{F}_p)$ on $\mathcal{A}_{p,n} \mathrel{\mathop{:}}= \{0,\ldots,p-1\}^{n}$.
Let $\mathcal{A}_{p,n}^{+}$ denote the complement of $0 \mathrel{\mathop{:}}= (0,0,\dots,0)$ in $\mathcal{A}_{p,n}$ and let $\mathcal{S}_{p,n}$
denote the set of sequences in $\mathcal{A}_{p,n}^+$.
For every non-empty set $\Omega \subseteq \mathcal{S}_{p,n}$, we write $A_{p,n}^{\Omega} \mathrel{\mathop{:}}= \Gamma_{\SAff_n(\mathbb{F}_p)}^{\Omega}$, where $\SAff_n(\mathbb{F}_p)$ is identified with the corresponding group of rooted automorphisms of $T_{p,n} \mathrel{\mathop{:}}= T_{\mathcal{A}_{p,n}}$.
Let further $G_{p,n}$ denote the subgroup of $\Aut(T_{p,n})$ that is generated by the canonical actions of $\SAff_n(\mathbb{F}_p)$ and $\SAff_n(\mathbb{Z})$ on $T_{p,n}$.
Let $M_{p,n}^{\Omega} \mathrel{\mathop{:}}= \Gamma_{G_{p,n}}^{\Omega}$ be the corresponding $\Omega$-group. Equivalently, $M_{p,n}^{\Omega}$ is the subgroup of $\Aut(T_{p,n})$ that is generated by $A_{p,n}^{\Omega}$ and $\Gamma_{p,n}^{\Omega}$.
\begin{theorem}\label{thm:main-theorem-precise}
Let $n \geq 3$ be a natural number, let $p$ be a prime, let $\omega \in \mathcal{S}_{p,n}$ be eventually periodic,
and let $\Omega \subseteq \mathcal{S}_{p,n}$ be a finite subset.
Then the following hold:
\begin{enumerate}
\item $A_{p,n}^{\omega}$ is a finitely generated amenable group.
\item $M_{p,n}^{\Omega}$ is finitely generated and contains a non-abelian free group.
\item If $\omega \in \Omega$, then the inclusion $\iota \colon A_{p,n}^{\omega} \rightarrow M_{p,n}^{\Omega}$ induces an isomorphism $\widehat{\iota} \colon \widehat{A_{p,n}^{\omega}} \rightarrow \widehat{M_{p,n}^{\Omega}}$ of profinite completions.
\item The family $(M_{p,n}^{\{\omega,\omega'\}})_{\omega' \in \mathcal{S}_{p,n}}$ contains uncountably many pairwise non-isomorphic groups.
\end{enumerate}
\end{theorem}
\begin{proof}
The first assertion follows from Proposition \ref{prop:amenability-criterion}.
Since $\Omega$ is finite, it follows that $M_{p,n}^\Omega$ is finitely generated. Since $M^\Omega_{p,n}$ contains
$\Gamma_{p,n}^\Omega$, it contains a non-abelian free subgroup by Corollary~\ref{cor:summary}.
To prove the third assertion,
we verify the assumptions of Corollary~\ref{cor:induced-isomorphism-general}.
First we observe that the groups $\SAff_n(\mathbb{F}_p)$ and $\langle \SAff_n(\mathbb{F}_p) \cup \SAff_n(\mathbb{Z}) \rangle$ are perfect (see Lemma \ref{lem:Affn-is-perfect}). In addition these groups are self-similar and satisfy property~H; to see this one can use the argument given in Lemma \ref{lem:affine-stabilizers}.
Finally, it follows from Corollary~\ref{cor:uncountably-many-iso-types-simple} that the family $(M_{p,n}^{\{\omega,\omega'\}})_{\omega' \in \mathcal{S}_{p,n}}$ of subgroups contains uncountably many pairwise non-isomorphic groups.
\end{proof}
In order to deduce Theorem \ref{thm:main-theorem} from Theorem \ref{thm:main-theorem-precise}, it remains to determine the number of generators. It is known that $\SL_n(\mathbb{Z})$ and $\SL_n(\mathbb{F}_p)$ are $2$-generated (see \cite{HuaReiner49}),
and so $\SAff_n(\mathbb{Z})$ and $\SAff_n(\mathbb{F}_p)$ can be generated by $3$ elements.
Since $A_{p,n}^{\omega}$ is generated by two copies of $\SAff_n(\mathbb{F}_p)$, it is $6$-generated. Similarly the group $G_{p,n}$ is $6$-generated and so $M_{p,n}^{\{\omega,\omega'\}}$ -- which is generated by three copies of $G_{p,n}$ -- can be generated using $18$ elements.
\bibliographystyle{amsplain}
|
1,116,691,497,890 | arxiv | \section{Introduction}
Lossless data compression is important in application domains and usage environments where bandwidth or storage limitations may negatively impact application or system performance. Generally classifiable into statistical or dictionary methods, lossless data compression algorithms can range widely in compression speed and efficiency (compression factor). Certain algorithms, especially the more efficient ones, can be quite computationally expensive, and as the data processing needs of modern science continue to grow more rapidly than storage capacity or bandwidth, compression becomes increasingly necessary; questions remain, however, as to how to accelerate it and how to do so without consuming the resources devoted to computation.
The use of graphics processing units (GPUs) for general purpose computation, i.e. problems outside the graphical domain, is a relatively recent development. At first this was achieved through third party toolkits, e.g. Stanford's BrookGPU, but more recently GPU manufacturers themselves have begun to offer general purpose tools which give the programmer lower-level access to the chip than earlier GPGPU programming interfaces built upon OpenGL and DirectX. One of these, and currently the most prominent, is the Compute Unified Device Architecture (CUDA) from the NVIDIA corporation. The potential benefits of GPUs in general purpose computation are great, but realizing that potential demands care, even more so than for parallel programming on the x86. To achieve anywhere near the theoretical maximums in performance on the GPU, the computation patterns underlying a solution's algorithm must be very near to the traditional usage of the GPU. A prospective algorithm's implementation on the GPU should be, in order of importance to performance, highly data parallelizable, logically simple, and have relatively many computations per memory access. In essence, to use the GPU to maximum effect, the abstractable computation patterns underlying a solution should be closely aligned with the GPU's original task, graphics rendering. Our problem domain, I/O, while it does not perfectly fit these criteria, has already benefited from GPUs to enhance storage redundancy~\cite{mlcurry}; we attempt now their utilization in lossless data compression.
One major difficulty here in achieving good speedup with slim negative side effects is that lossless data compression algorithms can generally not be, in their unaltered form, thought of as highly parallelizable. Indeed, if one wishes to express these algorithms in parallel, one often needs to consider tradeoffs between compression efficiency and performance. Nevertheless, we hope to effectively demonstrate that it is possible to come to a reasonable middle ground with respect to coding acceleration and efficiency loss.
\section{Huffman Compression}\label{huffman}
Statistical methods of data compression perform analysis on the nature of the data to make intelligent decisions about how it can be represented more efficiently in compressed form. The Huffman encoding algorithm falls within this genus and operates by counting the appearance of every distinct symbol in the uncompressed data, then representing the more frequent with shorter codes than the less. Every symbol in the data is replaced with its code, and if the data is non-random, i.e. a few symbols appear with greater frequency than others, compression can be achieved. The Huffman compression algorithm is old by the standards of our science~\cite{huffman}, but is still used, and has the attractive quality of being a primitive of several more modern and common algorithms, e.g. Deflate~\cite{gzip} and potentially the algorithm described by Burrows and Wheeler~\cite{bwt}.
\subsection{Parallel Huffman Compression}\label{p_huffman}
The literature on parallel Huffman coding is varied, ranging from the actual construction of Huffman codes in parallel~\cite{berman}, \cite{atallah} to \cite{klein}, which addresses details of decomposition for parallel Huffman decoding and demonstrates some moderate decoding speedups while maintaining optimally encoded data, by making use of the observation that Huffman codes can frequently synchronize. Because of limitations in our architecture, we must try to create the simplest encoding routine possible. In doing this we make a minor modification to the output of the Huffman algorithm.
An alteration is necessary because of the nature of Huffman codes, i.e. they are of variable length; an encoded data string is composed of these codes packed together in such a way that bit codes can cross byte boundaries. Simple decomposition of the encoded data stream into blocks of static size would result in the practical certainty that decoding would take an erroneous path, which is discussed in some detail in~\cite{klein}. One counter to this is to pack the blocks to byte boundaries, introducing some size overhead. One more change is necessary. Because the codes are of variable lengths, even if we encode a constant number of symbols in each block, the resulting length of the encoded block will vary, sometimes dramatically. For this reason, we must encode an indication of where the block starts and ends. Our approach is again simple; at the start of the encoded block we give the length of the block, which is known by making an additional pass over the unencoded block and summing the lengths of the code representations of the symbols. Our implementation stores this length as an unencoded four byte integer for simplicity, and because of this and the requirements of our architecture, we pack the blocks to four byte boundaries.
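To make the block format concrete, the following sketch outlines the two-pass encoding of a single block (illustrative host-side C++ rather than the exact implementation used here; the names and the code-table layout are simplifications, and codes are assumed to fit in 32 bits):
\begin{verbatim}
#include <cstdint>
#include <vector>

struct HuffCode { uint32_t bits; uint8_t len; }; // code per symbol

// Encodes n symbols; emits [length word][packed codes, MSB first][padding].
void encodeBlock(const uint8_t* sym, int n, const HuffCode table[256],
                 std::vector<uint32_t>& out) {
    // Pass 1: sum the code lengths to learn the encoded size in bits.
    uint64_t bitLen = 0;
    for (int i = 0; i < n; ++i) bitLen += table[sym[i]].len;
    uint32_t words = (uint32_t)((bitLen + 31) / 32); // pack to 4-byte boundary
    out.push_back(words * 4);                        // block length delimiter
    // Pass 2: pack the variable-length codes bit by bit.
    size_t base = out.size();
    out.resize(base + words, 0u);
    uint64_t pos = 0;                                // bit cursor in the block
    for (int i = 0; i < n; ++i) {
        HuffCode c = table[sym[i]];
        for (int b = c.len - 1; b >= 0; --b, ++pos)
            if ((c.bits >> b) & 1u)
                out[base + pos / 32] |= 1u << (31 - pos % 32);
    }
}
\end{verbatim}
The length delimiter stores the packed size in bytes, so a decoder can skip from block to block without decoding.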
\begin{figure}
\centering
\includegraphics[scale=.5]{ascii_strings.eps}
\caption{The original string and its ASCII representation. As ASCII encodes each character with a single byte, the nine character string is 72 bits.}
\label{fig:string}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{huff_codes.eps}
\caption{The binary tree created by the Huffman algorithm and the encoded representation of the original string. The encoded representation of a character is
found by traversing the tree, assigning a binary 0 or 1 for a left or right traversal respectively.}
\label{fig:codes}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{packed_bytes.eps}
\caption{Decomposing the string into three symbol blocks and then packing the encoded bits to the next byte boundary.}
\label{fig:packed_bytes}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=.5]{add_delimiters.eps}
\caption{The addition of an (underlined) length delimiter at the start of the block. Single bytes are used for the overhead in the diagrams for simplicity. In our implementation, we pack the block to four bytes and use a four byte integer to represent the block length.}
\label{fig:delimiters}
\end{figure}
The overhead of our modifications therefore ranges from 32 to 63 bits per block, the variation arising because no packing bits need be appended when the size of the encoded block is already evenly divisible by four bytes. This overhead naturally becomes less significant as the length of the block is increased, which is indicated in the figure measuring block size against overhead. The time required for summing the block lengths is measurable but undramatic and most noticeable when comparing the runtimes of a sequential block encoder to a sequential traditional (non-block) encoder.
To parallelize decoding, it is sufficient to build a table of offsets into the encoded data from these block length delimiters. The computation threads on the GPU can then index into the encoded data and decode in parallel, storing decoded data in a sequential array indexable by thread and block numbers.
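A correspondingly simplified CUDA decoding kernel could look as follows (again illustrative, with hypothetical names: the offset tables are assumed to be precomputed on the host from the block length delimiters, and the Huffman tree is flattened into child and leaf-symbol arrays):
\begin{verbatim}
// One thread decodes one independent block of the encoded stream.
__global__ void decodeBlocks(const uint32_t* enc, const uint32_t* blockOff,
                             const uint32_t* outOff, uint8_t* out,
                             const int16_t* left, const int16_t* right,
                             const uint8_t* leafSym, int symsPerBlock,
                             int numBlocks) {
    int b = blockIdx.x * blockDim.x + threadIdx.x;
    if (b >= numBlocks) return;
    const uint32_t* bits = enc + blockOff[b]; // start of this encoded block
    uint8_t* dst = out + outOff[b];           // where decoded symbols belong
    uint64_t pos = 0;                         // bit cursor inside the block
    for (int s = 0; s < symsPerBlock; ++s) {
        int node = 0;                         // root of the Huffman tree
        while (left[node] >= 0) {             // internal node: keep walking
            uint32_t bit = (bits[pos / 32] >> (31 - pos % 32)) & 1u;
            node = bit ? right[node] : left[node];
            ++pos;
        }
        dst[s] = leafSym[node];               // leaf reached: emit symbol
    }
}
\end{verbatim}
Because every block encodes a fixed number of symbols, the decoded-output offsets are simply multiples of the block symbol count.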
\begin{figure}
\centering
\includegraphics[scale=1]{efficiency-lost.eps}
\caption{The size overhead of using the parallel Huffman algorithm graphed
against the block size. The number of overhead bytes per block remains constant, so as the block size increases the overhead becomes less significant. At large block sizes, the overhead per block can be less than one percent.}
\label{fig:eff_lost}
\end{figure}
\section{Performance Comparisons}\label{performance_comp}
\subsection{Encoding}\label{perm_comp_enc}
Acceleration over our sequential implementation was achieved for both encoding and decoding.
This comparison is most meaningful in terms of throughputs, the amount of data which can be encoded or decoded per second. Following is the comparison of our sequential encoder to our parallel GPU encoder and a parallel CPU encoder programmed with OpenMP. The GPU used in these experiments is the NVIDIA GeForce GTX 285 with 240 cores at 1.5 GHz, and the CPU used is the Intel Core i7 Extreme Edition 965 with four cores at 3.2 GHz. Despite the GPU having 60 times the number of cores as our CPU, the differences in throughput between the GPU encoder and the OpenMP encoder are not dramatic. This paradox can be largely resolved by recalling that the architecture of the GPU was developed for the SIMD, single instruction multiple data, programming model while our CPU was developed with MIMD, multiple instruction multiple data, in mind.
The processors in the GPU are organized into 30 groups of 8 cores. Each group of cores is known as a multiprocessor and contains a single control unit and a small amount of high speed memory shared between the cores in the multiprocessor. The control unit broadcasts an instruction to all the cores, and optimal performance can only be achieved when every core can execute it. If, for example, the instruction is a branching statement, then there is a likelihood that some cores will not follow the jump, and in this case, some cores must remain inactive until they either themselves satisfy the branching instruction or control passes beyond the branching sections of the code. Therefore, in the worst case, when only one core can satisfy the jump and the other seven are left idle, our GPU behaves more like a crippled 30 core shared memory MIMD machine with a slow clock speed and no automatic memory caching. Our encoder consists of complicated branching statements for the bit manipulation which makes worst case behavior relatively likely. This also illustrates that in heterogeneous programming environments, one must be very aware of the strengths and weaknesses of the various architectures so that programming effort can be directed where benefits are most likely to be found.
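As a toy illustration of this penalty (not code from our encoder), consider a kernel in which the threads of a warp disagree on a predicate:
\begin{verbatim}
__global__ void divergentToy(const int* x, int* y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // If the threads of one warp split on this test, the two paths are
    // serialized and the inactive threads idle -- the SIMD penalty
    // discussed above.
    if (x[i] & 1) y[i] = 3 * x[i] + 1;
    else          y[i] = x[i] / 2;
}
\end{verbatim}
Our encoder contains many such data-dependent branches in its bit-packing logic, which helps explain its modest advantage over the four-core CPU.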
\begin{figure}
\centering
\includegraphics[scale=1]{encoding-throughput.eps}
\caption{We saw superior performance with the GPU based encoder compared to our multi-core CPU encoder and our single threaded CPU implementation.}
\label{fig:enc_perf}
\end{figure}
\subsection{Decoding}\label{perm_comp_dec}
Our decoding routine consists of reading bits and traversing a binary tree repeatedly for each code string. This contains branching instructions, but markedly fewer than the encoding routine, and the factor of acceleration on the GPU is greater than that of the encoding routine. Also interestingly, the measured increases in throughput from using OpenMP on the CPU, compared to the sequential implementation, scale better than linearly with the number of CPU cores. By launching increasing numbers of threads, we can hide latency by issuing more memory requests. In this way, we saw continued performance improvements through increasing thread counts up to 8. Intel's Hyper-Threading technology assists significantly in this.
\begin{figure}
\centering
\includegraphics[scale=1]{decoding-throughput.eps}
\caption{Again, our GPU based decoder gave better performance than both CPU decoders.}
\label{fig:dec_perf}
\end{figure}
\section{Conclusions}\label{conclusions}
The data presented here suggests that the strengths of the GPU architecture are robust enough to give performance benefits to applications which, while data parallel, still have a not insignificant level of logical complexity. Optimal use of the GPU's SIMD cores requires the complete elimination of divergence within warps, which, in practice, requires the complete absence of if statements from the GPU sub-routine; however, sub-optimal performance, through the emulation of MIMD, can still be acceptable. Despite the large number of divergent threads in a warp, our encoder kernel is capable of throughputs, sans memory transfer times to and from the GPU, in excess of 4 GB/sec. Total encoding throughputs using the GPU are weighed down by the need to transfer data to and from the card; however, in an online system, or when encoding very large amounts of data, this could be somewhat ameliorated by using asynchronous data transfers with the GPU to fully exploit bus resources while encoding.
Realistically, current performance levels for our GPU encoder and decoder do not warrant the use of the program as a standalone encoding system. The Huffman algorithm itself is not the best choice for such purposes and even the strengths of the GPU do not make up for the algorithm's deficiencies. However, our encoding system could be used as an auxiliary process to a GPU application. Much greater coding performance than that shown in the above figures could be seen were the data to be encoded already on the GPU.
\bibliographystyle{abbrv}
|
1,116,691,497,891 | arxiv |
\section{Location-Based Geometric Beamforming and Mobility Management} \label{sec:beamforming_and_mobility}
Network densification and accurate \gls{UE} positioning in 5G will open new opportunities also for \gls{RRM} and \gls{MIMO}. Especially \gls{MU-MIMO} is seen as a promising solution for 5G as it
enables \gls{MIMO} gains also with simple single antenna \glspl{UE}.
As discussed in Section~\ref{sec:state}, \glspl{UE} in \glspl{UDN} are close to an \gls{AN} with a high \gls{LoS} probability. This makes it possible to design and adopt geometric beams at transmitters without the need to estimate the full-band \gls{CSIT}{\color{black}~\cite{SDM14, kela_borderless_2015, kela_location_based_beamforming_2016}}. This is enabled by using the estimated elevation and azimuth angles relative to the \gls{AN}'s coordinate system. The synthesized \gls{MU-MISO} matrix can then be formed comprising only \gls{LoS}-paths for all served \glspl{UE}. One significant benefit of such a beamforming scheme is that full-band \gls{UL} reference signals, traditionally employed for obtaining \gls{CSIT}, can be replaced with narrowband \gls{UL} pilots.
This will allow for substantial energy savings, especially on \gls{UE} side, which is a very important aspect in future wideband 5G networks. In addition to transmit beamforming, the location-based approach can be used also for calculating the receive filters at \glspl{UE} when high-accuracy \gls{DoA} estimates of the desired signals are available.
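As a simple illustration (the notation here is our own, not that of the cited works), let $\mathbf{a}(\varphi_k, \theta_k) \in \mathbb{C}^{N}$ denote the steering vector of an $N$-element \gls{AN} array towards the estimated azimuth and elevation angles $(\varphi_k, \theta_k)$ of the \gls{LoS}-path of the $k$-th \gls{UE}. Stacking the $K$ served users gives the synthesized narrowband channel matrix $\widehat{\mathbf{H}} = [\mathbf{a}(\varphi_1,\theta_1), \dots, \mathbf{a}(\varphi_K,\theta_K)]^{H}$, from which, e.g., a \gls{ZF}-type precoder
\[
\mathbf{W} = \widehat{\mathbf{H}}^{H}\big(\widehat{\mathbf{H}}\widehat{\mathbf{H}}^{H}\big)^{-1}
\]
can be formed (with subsequent column normalization) using only the angle estimates, i.e., without full-band \gls{CSIT}.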
In addition to \gls{MU-MIMO} beamforming, accurate positioning is also a key enabler for paradigm shift from classical cellular networks towards device-centric borderless networks with centralized control entities. When {\color{black} the} network is constantly keeping track of \gls{UE} locations, it can assign a set of serving \glspl{AN} for each \gls{UE}. Then data for each \gls{UE} is partially {\color{black} or} fully available at some small set of nearby \glspl{AN} as also outlined in \cite{huawei_5g_air_interface}. This enables ultra-short latencies and borderless \gls{QoE} with seamless mobility decreasing handover latencies~\cite{kela_borderless_2015}. Furthermore, such device-centric mobility approach can reduce the energy and resource consuming cell measurement and reporting burden of legacy cellular systems.
\subsection{Evaluation Setup}
We consider a similar setup as in Sections~\ref{sec:DoA_results} and \ref{sec:positioning_results},
with $43$ \glspl{AN}, a user density of $1000$~users/km$^2$,
and all users are dropped with a uniform distribution on the simulated street area. To follow the 3GPP requirements and scenarios for next generation studies~\cite{3GPP_TS_38_913}, a single unpaired \SI{200}{MHz} \gls{TDD} carrier is assumed and a \SI{30}{km/h} velocity is used for the \glspl{UE}. Additionally, the ratio between \gls{DL} and \gls{UL} is configured to 4.7:1 in the employed 5G \gls{TDD} frame structure~\cite{kela_location_based_beamforming_2016}. Every \gls{DL} transmission is assumed to start with a precoded \gls{DL} pilot and \gls{MRC} is used for {\color{black} calculating} the receive filter according to the measured \gls{DL} pilot. In case of location-based receive beamforming, estimated elevation and azimuth angles relative to the \gls{UE}'s coordinate system are used for calculating the receive filter towards the serving \gls{AN}. For both location-based transmit and receive beamforming, a $2^{\circ}$ measurement error in both elevation and azimuth angles is assumed, in addition to the \gls{UL} pilot measurement aging. \gls{UL} pilots used for \gls{CSIT} estimation and positioning are scheduled according to the round-robin scheme. Hence, in the simulated scenario the average \gls{CSIT} latency is $\sim$\SI{3.3}{ms}. \glspl{UE} are assigned to be served by the closest \gls{AN}, i.e., a centralized mobility management scheme based on estimated \gls{UE} locations is assumed.
\subsection{Performance Results and Comparison of Location-based and CSI-based Beamforming}
In~\cite{kela_location_based_beamforming_2016},
it was observed that both \gls{MF} and \gls{ZF} precoders work rather well in \glspl{UDN}, where \gls{LoS}-paths dominate over reflections and diffractions. Hence, for this study a \gls{BD} algorithm~\cite{spencer_block_diagonalization}, which can be understood as an extension of \gls{ZF}, is chosen instead of conventional \gls{ZF}. An especially attractive feature of \gls{BD} is that the beams can be optimized for multiantenna receivers, enabling better performance of the receive filters.
In Fig.~\ref{fig_dl_tput}, \glspl{CDF} of user experienced \gls{DL} throughputs are shown with both \gls{CSI}-based and location-based transmit and receive beamforming schemes. Due to the high \gls{LoS} probability and dominance of \gls{LoS}-paths, both \gls{CSI}-based and location-based beamforming schemes obtain rather similar performance over the whole distribution. Additionally, focusing the receive filter only on the \gls{LoS}-path with location-based receive beamforming outperforms \gls{DL} pilot based receive beamforming. Furthermore, an approximately $100$\% increase in 5-percentile throughput can be obtained when compared to the \gls{CSI}-based approach. Since \glspl{AN} are using the same physical resources for transmitting beamformed \gls{DL} pilots, \gls{DL} pilot contamination degrades the performance of \gls{CSI}-based receive beamforming. In case of transmit beamforming with \gls{ZF}-based precoders like \gls{BD}, better performance at the 5-percentile can be obtained with channel-based transmit beamforming due to the fact that there are still a few \glspl{UE} in \gls{NLoS} condition towards the serving \gls{AN}. Thus, the best overall performance is obtained by using channel information for transmit beamforming and location information for the receive filter. Note that the pilot overhead is here the same for all beamforming schemes. However, if the pilot overhead caused by the full-band reference signals needed in \gls{CSI}-based beamforming were reduced in the {\color{black} corresponding} location-based schemes, the performance would improve in terms of the mean throughput and area capacity as shown in~\cite{kela_location_based_beamforming_2016}. This is because location-based beamforming schemes do not require full-band reference signals, while narrowband, even single-subcarrier, pilots suffice.
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{dl_throughput_v2}
\caption{\gls{DL} user throughput \glspl{CDF}, over random routes through the Madrid map, with channel/\gls{CSI} and location-based \gls{MU-MIMO} transmit \gls{BF} and \gls{RBF}. Relying only on location information is better on average than using only {\color{black}\gls{CSI}} measurements. The best overall performance is obtained by using channel-based \gls{BF} and location-based \gls{RBF}.}
\label{fig_dl_tput}
\vspace{-6pt}
\end{figure}
In this example, in order to increase fairness of \gls{BD} precoding and to reach the throughput requirement of \SI{50}{Mbps} for all users all the time, {\color{black} stated in}~\cite{ngmn_5g}, the scheduling method introduced in \cite{kela_borderless_2015} is used. This fair \gls{TD} scheduling approach is applied in a way that in every other subframe only a subset of users is chosen as scheduling candidates, in particular the users with the lowest past average throughput. In other subframes the number of simultaneously served users is maximized to increase the total system throughput.
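As an outline of this candidate selection (illustrative C++ with invented names; the fairness threshold is a simplification of the scheme in~\cite{kela_borderless_2015}):
\begin{verbatim}
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

// Every other subframe, only the users with the lowest past average
// throughput are admitted as scheduling candidates.
std::vector<int> candidates(const std::vector<double>& avgTput,
                            int subframe, double fairFraction) {
    std::vector<int> users(avgTput.size());
    std::iota(users.begin(), users.end(), 0);
    if (subframe % 2 == 0) return users;   // greedy subframe: all users
    std::sort(users.begin(), users.end(),
              [&](int a, int b) { return avgTput[a] < avgTput[b]; });
    users.resize(std::max<std::size_t>(1,
        (std::size_t)(fairFraction * users.size())));
    return users;
}
\end{verbatim}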
{\color{black} The results in Fig.~\ref{fig_dl_tput} indicate that such scheduling provides less variation in throughputs across the \glspl{UE}.}
Moreover, the fair \gls{TD} scheduling with channel-based transmit beamforming and location-based receive beamforming decreases the simulated area throughput from~\SI{1}{Tbps/km\textsuperscript{2}} to~\SI{0.65}{Tbps/km\textsuperscript{2}}. {\color{black} Hence, when more fair \gls{TD} scheduling is applied, the total system throughput suffers to a certain extent from favoring users with poor channel conditions over users with high signal-to-interference-plus-noise ratio.}
\section{Conclusions and Future Work} \label{sec:conclusion}
In this article, the prospects and enabling technologies for high-efficiency device positioning and location-aware communications in dense 5G networks were discussed and described. It was demonstrated that very high accuracy 2D/3D positioning and tracking can be accomplished, by adopting \gls{DoA} and \gls{ToA} estimation in the \glspl{AN} together with appropriate fusion filtering in the form of \gls{EKF}, for example. In general, outdoor positioning accuracies below one meter were shown to be technically feasible. It was also shown that location information can be used efficiently in the radio network, e.g., for geometric location-based beamforming, where the needed pilot or reference signal overhead is substantially smaller compared to the basic \gls{CSI}-based beamforming approaches. Thus, extracting and tracking the locations of the user devices in the 5G radio network can offer substantial benefits and opportunities for location-based services, in general, as well as to enhanced and more efficient communications and radio network management.
\section{Introduction} \label{sec:introduction}
\IEEEPARstart{F}{uture} 5G networks are expected to provide huge improvements in the capacity, number of connected devices, energy efficiency, and latencies when compared to the existing communications systems~\cite{Osseiran14, huawei_5g_air_interface}. These features will be enabled by the combination of higher bandwidths, advanced antenna technologies, and flexible radio access solutions, among others. Especially in urban environments, 5G networks are also expected to consist of densely distributed \glspl{AN}~\cite{huawei_5g_air_interface}
located, e.g., in lamp posts above the streets as illustrated in Fig.~\ref{fig_lamp_post}. Consequently, a single \gls{UE} in such dense networks {\color{black} is within coverage range to} multiple closely located \glspl{AN} at a time. Such short \gls{UE}-\gls{AN} distances
provide obvious benefits for communications, e.g., due to lower propagation losses and shorter propagation times, but interestingly can also enable highly accurate \gls{UE} positioning. Altogether, 5G networks allow for many opportunities regarding acquisition and exploitation of \gls{UE} location information in unforeseen manners~{\color{black}\cite{Werner15, koivisto_joint_2016}}. This is the leading theme of this article.
\urldef\gforum\url{http://kani.or.kr/5g/whitepaper/201
One of the improvements in 5G networks concerns the positioning accuracy.
It is stated, e.g., in~\cite{ngmn_5g, 5g-ppp_vision_2015, 5g_forum_5g_2015}, {\color{black} and~\cite{3GPP_TS_38_913}\footnote{See also 3GPP technical report 22.862, v.14.1.0.}}, that 5G should provide a positioning accuracy on the order of one meter or even below. That is significantly better than the accuracy of a couple of tens of meters provided in \gls{LTE} systems by \gls{OTDoA}-based techniques. The required positioning accuracy in 5G networks will also outperform commercial \glspl{GNSS}, where the accuracy is around \SI{5}{m}, as well as \gls{WLAN} fingerprinting, which results in a \mbox{\SIrange[range-phrase = --]{3}{4}{m}} accuracy.
Another improvement that 5G networks may provide concerns the energy efficiency of positioning. This stems from the common assumption that 5G networks will exploit frequently transmitted \gls{UL} pilot signals for channel estimation purposes at the {\color{black} \glspl{AN}}. These signals can be used also for positioning in a network-centric manner where the \gls{UE} location is estimated either {\color{black} independently} in the \glspl{AN} or in a centralized fusion center{\color{black}, assuming known AN locations,} and thus no calculations are needed in the mobile \glspl{UE}. Note that this is a considerable difference to the device-centric positioning, e.g., \gls{GNSS}, where the mobile \glspl{UE} are under heavy computational burden. Therefore, network-centric positioning techniques provide significant power consumption improvements and enable ubiquitous high-accuracy positioning that can run in the background continuously.
{\color{black} Such a functionality decreases also the signaling overhead when the location information is to be used on the network side, but on the other hand, requires additional care for privacy as the positioning is not carried out at the \glspl{UE} themselves.}
As a third improvement in 5G-based positioning, {\color{black} regardless whether it is network- or device-centric, location information can be obtained} in complete independence of \gls{UE}-satellite connections everywhere under the network coverage area, including also challenging indoor environments.
\begin{figure}[!t]
\centering
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=8.85cm]{UDNlamppost}
\caption{}
\label{fig_lamp_post}
\end{subfigure}
\\[12pt]
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=8.85cm]{UL_ref_MU-MIMO_fusion2}
\caption{}
\label{fig_overall}
\end{subfigure}
\caption{Illustration of a 5G network where (a) AN, deployed in a lamp post, provides a LoS connection to a nearby UE and (b) ANs estimate DoAs/ToAs of the UEs based on UL pilot signals. The obtained estimates are then communicated to a fusion center providing the final location estimate which, in turn, enables geometric DL beamforming.}
\label{fig_ANs}
\vspace{-6pt}
\end{figure}
\urldef\whereone\url{http://www.ict-where.eu/}
\urldef\wheretwo\url{http://www.ict-where2.eu/}
The aim of this article is to discuss the technical enablers of envisioned device positioning in 5G networks, and to promote the prospects of the obtained location-awareness. In this regard, focus is given to {\color{black} location-based} communication and network management techniques such as location-based beamforming as well as mobility and \gls{RRM}{\color{black}~\cite{SDM14}\footnote{\color{black}{See also the WHERE and WHERE2 projects at \mbox{\whereone} and \mbox{\wheretwo}}}}. We recognize that \gls{UE} location information can be exploited by the \gls{UE} itself as well as shared with third parties, thus allowing for innovative location-based applications to emerge.
Particularly, we will focus on the connected car application, being identified, e.g., in~\cite{ngmn_5g} as one key application and target for future 5G mobile communication networks, with a minimum of $2000$~connected vehicles per km$^2$ and at least \SI{50}{Mbps} in \gls{DL} {\color{black} throughput}.
Now, facilitating such greatly enhanced connected vehicle applications, having a 5G network with built-in capability to localize and track vehicles is a very tempting prospect. Furthermore, location information is a central element towards self-driving cars, \glspl{ITS}, drones as well as other kinds of autonomous vehicles and robots which are envisioned to be part of not only the future factories, but the overall future society within the next $5-10$~years.
\section{Introduction}
\input{introduction}
\input{network_and_positioning}
\input{technologies}
\input{beamforming_and_mobility}
\input{conclusion}
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran/bibtex/IEEEtran}
\section{5G Networks and Positioning Prospects} \label{sec:network_and_positioning}
{\color{black} \subsection{Technical Properties of 5G Radio Networks}}
Generally, it is expected that network densification will play an important role in achieving demanding requirements of 5G networks. The inter-site distance of \glspl{AN} in such \glspl{UDN} is envisioned to range from a few meters up to a few tens of meters, e.g., assuming several \glspl{AN} per room indoors and an \gls{AN} on each lamp post outdoors~\cite{Osseiran14}. Moreover, these 5G \glspl{AN} are expected to be equipped with smart antenna solutions, such as antenna arrays supporting \gls{MIMO} techniques~\cite{5g-ppp_vision_2015}. Such antenna technologies are suitable for effective communications as well as accurate \gls{DoA} estimation, which in turn allows for high-accuracy positioning. Furthermore, it is argued that devices tend to be in \gls{LoS} condition with one or multiple \glspl{AN} due to network densification, which is a favorable condition not only for communications but also for positioning purposes.
{\color{black} It} is commonly agreed that 5G technologies will require wide bandwidths in order to meet the envisioned capacity requirements. Therefore, 5G networks will most likely operate at higher frequency bands, including \glspl{mmWave}, where the availability of unallocated spectrum is considerably higher. Such high frequency bands together with \glspl{UDN} can provide very high overall system capacity and enable an efficient frequency reuse~\cite{5g-ppp_vision_2015}. However, with the envisioned high frequencies, the propagation conditions become more demanding due to, e.g., larger propagation losses. Hence, the effective range between transmitting and receiving elements is relatively short which also emphasizes the importance of expected \glspl{UDN}. Furthermore, the utilization of effective antenna solutions become more practical as a result of shorter wavelengths, and consequently due to smaller physical size of antenna elements. {\color{black} In addition to the potential frequency bands above \SI{6}{GHz}, also frequencies below \SI{6}{GHz} are expected to be used in 5G networks~\cite{ngmn_5g}.} Apart from a communication perspective, the envisioned wide bandwidths enable also very accurate \gls{ToA} estimates which in turn provide an opportunity for positioning with remarkably high accuracy \cite{Werner15}.
In contrast to the earlier cell-centric architectures, it is currently under discussion whether 5G networks will be developed in a more device-centric manner. Moreover, it is envisioned that 5G networks could also provide improved \gls{QoE} at cell borders with only a minimal system-performance degradation compared to earlier systems~\cite{kela_borderless_2015}. This development enables tailoring of a particular communication session and the functions of the associated \glspl{AN} to a connected device or service instead of obtaining services from the \gls{AN} commanding the specific cell.
In such a device-centric architecture, a given device can periodically send \gls{UL} signals to connected \glspl{AN}, in which \gls{UL} reference signals are used for channel estimation, but they can also be employed for network-centric positioning as illustrated in Fig.~\ref{fig_overall}. Furthermore, future 5G networks are expected to operate with relatively short radio frames
resulting in availability of frequent location information about a transmitting device.
{\color{black}\subsection{Leveraging Location-Awareness in 5G Networks}}
\label{sec:prospects}
Continuous positioning provides awareness not only of the current but also of the past \gls{UE} locations and thus the network is able to carry out \gls{UE} tracking. When the \gls{UE} location and movement information is processed by predictive algorithms, the network can even predict the \gls{UE} locations to some extent. Availability of such location information in turn enables a wide selection of entirely new features and services in 5G networks. First, location-awareness can be used for communications purposes by enhancing the utilization of spatial dimension, e.g., by geometric beamforming~\cite{kela_location_based_beamforming_2016} and sophisticated spatial {\color{black} interference mitigation}. These features allow for multiplexing a high density of \glspl{UE} and provide significant throughput improvements for high-mobility \glspl{UE}, as illustrated in Section~\ref{sec:beamforming_and_mobility}. Second, a combination of location information and measured radio parameters over a long time period {\color{black} enables} the construction of \glspl{REM}, depicted in Fig.~\ref{fig_REM}, which, in turn, can open many opportunities in terms of proactive \gls{RRM}{\color{black}~\cite{SDM14}}. Particularly, knowledge of large-scale fading and location-based radio conditions can be utilized for {\color{black}\it\gls{RRM} purposes} without the need of knowing the instantaneous channel information between the \gls{AN} and \gls{UE}. Therefore, the network is able to carry out proactive allocation of active \glspl{UE} to nearby \glspl{AN} such that, e.g., power consumption, load balancing and latencies are optimized as depicted in Fig.~\ref{fig_proactive}. Location-awareness can improve network functionalities also
by enabling proactive location-based backhaul routing such that the \gls{UE}-specific data can be communicated with a high robustness and low end-to-end latency.
\begin{figure}[!t]
\centering
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=8.85cm]{REM}
\caption{}
\label{fig_REM}
\end{subfigure}
\\[12pt]
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=8.85cm]{street_view_proactive_RRM}
\caption{}
\label{fig_proactive}
\end{subfigure}
\\[12pt]
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=6.55cm]{ITS}
\caption{}
\label{fig_ITS}
\end{subfigure}
\caption{Illustrations of {\color{black} selected} positioning prospects in 5G a) \gls{REM} generation, b) proactive \gls{RRM} for a car whose location is being tracked, and c) \gls{ITS}-based traffic control and collision avoidance with self-driving cars.}
\label{fig_prospects}
\vspace{-6pt}
\end{figure}
{\color{black} The obtained location-awareness can be exploited also in the \glspl{UE} as well as by third parties for providing other than purely communications type of services. Taking traffic and cars as an example, up-to-date location information and predicted \gls{UE} trajectories can provide remarkable improvements, e.g., in terms of traffic flow, safety and energy efficiency. When comprehensively gathered car location information is shared with \glspl{ITS}, functionalities such as traffic monitoring and control can be enhanced. Accurate location information is needed also in the cars themselves, e.g., for navigation purposes, especially when considering autonomous and self-driving cars. Location-awareness is required also for collision avoidance. Within communications range cars can report their location directly to other cars, but when the link between the cars is blocked, location notifications are transmitted in collaboration with \glspl{ITS} as illustrated in Fig.~\ref{fig_ITS}. Naturally, the demands and functionalities regarding self-driving cars cannot be met everywhere and at all times by existing communications systems and satellite-based positioning. Consequently, advanced communications capabilities and network-based positioning in 5G is likely to play an important role in the development of self-driving car systems}.\\[2pt]
\section{Enabling Technologies for High-Efficiency Network-centric Positioning} \label{sec:technologies}
\subsection{State-of-the-Art}
\label{sec:state}
Dense networks are characterized by radio channels that are dominated by the \gls{LoS}-path.
For example, the typical Rice-factor{\color{black}, being a power ratio between the LoS component and all other propagation paths,} in urban micro-cell environments is around \SI{10}{\decibel}, even at sub-\SI{6}{\giga\hertz} frequencies~\cite{metis_channels}. Additionally, network densification increases the \gls{LoS} probability between \glspl{UE} and \glspl{AN}. As an example, 3GPP employs a channel model based on extensive measurements in which the \gls{LoS} probability is higher than \SI{0.7}{} for a maximum \gls{UE}-\gls{AN} distance of \SI{35}{m}.
Determining the \glspl{AN} that are in \gls{LoS} condition to a given \gls{UE} is important since it allows estimating and tracking the directional parameters of the \gls{LoS}-path, in addition to the time-of-flight and clock-offsets, thus greatly improving the \gls{UE} positioning accuracy. Particularly, the \gls{LoS} condition of a radio link may be assessed by estimating the corresponding Rice-factor. A multichannel observation is obtained for each \gls{UL} reference signal given that a multicarrier waveform and multiantenna \glspl{AN} are employed. Sequential estimation of the Rice-factor can be accomplished, e.g., with a particle filter due to the non-Gaussian nature of the amplitude distribution of the \gls{UL} multicarrier channel. Finally, \gls{LoS} detection can be accomplished using a likelihood-ratio test, or a model selection technique. In case all \glspl{AN} are in \gls{NLoS} to the \gls{UE}, {\color{black} coarse} network-centric positioning can still be achieved using radio frequency fingerprinting, received signal strength indicator and cell-identifier, among others \cite{SDM14}.
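For reference (in standard notation, not that of the cited works), writing a narrowband channel tap as $h = h_{\mathrm{LoS}} + h_{\mathrm{NLoS}}$, the Rice-factor is $K = |h_{\mathrm{LoS}}|^{2}/\operatorname{E}[|h_{\mathrm{NLoS}}|^{2}]$ and the amplitude $r = |h|$ follows the Rician density
\[
f(r) = \frac{r}{\sigma^{2}} \exp\left(-\frac{r^{2}+\nu^{2}}{2\sigma^{2}}\right) I_{0}\!\left(\frac{r\nu}{\sigma^{2}}\right), \qquad K = \frac{\nu^{2}}{2\sigma^{2}},
\]
where $I_{0}$ denotes the modified Bessel function of the first kind and order zero. \gls{LoS} detection then reduces to deciding whether the sequentially estimated $K$ exceeds a threshold, e.g., via a likelihood-ratio test between a Rician ($K>0$) and a Rayleigh ($K=0$) hypothesis.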
Multicarrier waveforms offer a versatile approach for estimating ranges between a given \gls{UE} and multiple \glspl{AN}~\cite{SDM14}. Relying solely on \gls{UL} reference signals makes it possible to synchronize the \glspl{AN} as well as the \gls{UE}, in addition to estimating the \glspl{ToA} of the \gls{LoS}-paths~\cite{Werner15, koivisto_joint_2016}.
{\color{black}
The actual sequential estimation of the \glspl{ToA} and clock-offsets can be accomplished with different Bayesian filters either in a
cascaded or fully centralized manner depending on the network architecture, baseband processing capabilities of the \glspl{AN}, and backhaul capacity.
Note that the UL reference signals can also provide additional information for \gls{UE} positioning when utilized, e.g., for tracking Doppler-shifts.
}
\glspl{AN} with multiantenna transceivers allow for estimating the \gls{DoA} of the \gls{LoS}-path from \gls{UL} reference signals, and such an information can be used for \gls{UE} positioning. Planar or conformal arrays, such as circular or cylindrical antenna arrays, make it possible for estimating elevation and azimuth arrival angles, and enable 3D positioning. Bayesian filtering techniques can also be employed for tracking the \glspl{DoA} of the \gls{LoS}-paths from mobile \glspl{UE} as well as fusing the \glspl{ToA} and \glspl{DoA} in order to allow for joint \gls{UE} positioning and network synchronization~\cite{Werner15, koivisto_joint_2016}. \glspl{AN} with analog beamforming structures and sectorized antennas can also be exploited for \gls{UE} positioning and tracking~\cite{WWHCV15}.
{\color{black} Due to the non-linear nature of the involved state-space models, estimation and tracking can be carried out with different non-linear Bayesian filtering techniques. In this article, the tracking processes are carried out using the \gls{EKF} due to its highly accurate estimation performance and low computational complexity compared to, e.g., particle filters and the \gls{UKF}. In general, within the \gls{EKF}, the state of a system is first propagated through a linearized state evolution model and this prediction is, thereafter, updated using the obtained measurements and a linearized measurement model, through which the state is associated with the measurements~\cite{koivisto_joint_2016}.}
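In its standard form (with generic notation), given a state evolution model $\mathbf{x}_{k} = \mathbf{f}(\mathbf{x}_{k-1}) + \mathbf{w}_{k}$ and a measurement model $\mathbf{y}_{k} = \mathbf{h}(\mathbf{x}_{k}) + \mathbf{v}_{k}$, where $\mathbf{w}_{k}$ and $\mathbf{v}_{k}$ are zero-mean noise terms with covariances $\mathbf{Q}_{k}$ and $\mathbf{R}_{k}$, the \gls{EKF} first predicts
\[
\mathbf{x}_{k}^{-} = \mathbf{f}(\mathbf{x}_{k-1}^{+}), \qquad \mathbf{P}_{k}^{-} = \mathbf{F}_{k}\mathbf{P}_{k-1}^{+}\mathbf{F}_{k}^{T} + \mathbf{Q}_{k},
\]
and then updates the prediction with the measurement as
\[
\mathbf{K}_{k} = \mathbf{P}_{k}^{-}\mathbf{H}_{k}^{T}\big(\mathbf{H}_{k}\mathbf{P}_{k}^{-}\mathbf{H}_{k}^{T} + \mathbf{R}_{k}\big)^{-1}, \quad
\mathbf{x}_{k}^{+} = \mathbf{x}_{k}^{-} + \mathbf{K}_{k}\big(\mathbf{y}_{k} - \mathbf{h}(\mathbf{x}_{k}^{-})\big), \quad
\mathbf{P}_{k}^{+} = \big(\mathbf{I} - \mathbf{K}_{k}\mathbf{H}_{k}\big)\mathbf{P}_{k}^{-},
\]
where $\mathbf{F}_{k}$ and $\mathbf{H}_{k}$ are the Jacobians of $\mathbf{f}$ and $\mathbf{h}$ evaluated at the latest available estimates.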
Finally, the techniques overviewed in this section for \gls{UE} positioning can also be employed for estimating the locations of the \glspl{AN}. For example, a few well-surveyed \glspl{AN} can be used for finding the locations of neighboring \glspl{AN}, which in turn may be used as new anchors. Such a procedure is useful since surveying all \glspl{AN} would increase the deployment cost of \glspl{UDN} significantly. {\color{black} Alternatively, joint \gls{UE} tracking and \glspl{AN} positioning can be achieved using techniques stemming from \gls{SLAM} \cite{bruno_wislam_2011}. These techniques are versatile but the cost is an increase in computational complexity due to the large number of parameters to be estimated.}
\subsection{Tracking of Directional Parameters using EKFs}
\label{sec:DoA_results}
We start by {\color{black} demonstrating} the performance of \glspl{EKF} in tracking the directional parameters of the \gls{LoS}-path. We consider the case where both the \glspl{AN} and \glspl{UE} are equipped with multiantenna transceivers. Two schemes are considered, namely a network-centric approach and a decentralized scheme. {\color{black} In} the network-centric approach, the arrival and departure angles of the \gls{LoS}-path between a \gls{UE} and an \gls{AN} are tracked jointly at the \gls{AN}\footnote{The departure angles can only be retrieved from the arrival angles if the orientation of the \gls{UE}'s array is known. The network-centric approach requires that the calibration data of the \gls{UE}'s array is acquired by the \gls{AN}, e.g., over a \gls{UL} control channel.}. The \gls{UE} transmits periodically \gls{UL} reference signals from all of its antenna elements. Each \gls{UE} antenna element is assigned a single subcarrier, which is different from {\color{black} those} used by the other antennas.
The departure angles are transmitted to the \gls{UE} on a \gls{DL} control channel.
The decentralized scheme consists in tracking the double-directional parameters of the \gls{LoS}-path independently at the \gls{AN} and \gls{UE}. Such a scheme is based on narrowband \gls{UL} transmissions from a single {\color{black} antenna element of a \gls{UE}}. This allows the \gls{AN} to track the arrival angles of the \gls{LoS}-path. These arrival angles are used for designing a beamforming weight-vector that is exploited by the \gls{AN} to transmit a beamformed \gls{DL} reference signal towards the \gls{UE}. This makes it possible for the \gls{UE} to track the arrival angles, and thus design the receive beamforming weight-vector. The transmit and receive beamforming weight-vectors designed in this fashion are compared to \gls{CSIT}-based precoding schemes in Section \ref{sec:beamforming_and_mobility}.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.52]{madrid_map_no_red}
\caption{METIS Madrid grid layout, from~\cite{metis_channels}, where ANs (blue triangles) are distributed densely along the streets.}
\label{fig_madrid}
\vspace{-6pt}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth,trim={0.8cm 0.5cm 1.2cm 0.5cm},clip]{angle_ekf_updated}
\caption{Accuracy of tracking the arrival and departure angles with \glspl{EKF} in terms of the median error. In network-centric \gls{EKF}, the \gls{UE} transmits \gls{UL} reference signals from all antenna elements and the \gls{AN} tracks both arrival and departure angles of the \gls{LoS}-path. In decentralized \gls{EKF}, the \gls{UE} transmits \gls{UL} reference signals from a single antenna element which is used by the \gls{AN} to track the arrival angles of the \gls{LoS}-path with an \gls{EKF}. Such directional parameters are employed in order to design a \gls{DL} beamformed reference signal that {\color{black} then} allows the \gls{UE} to track and design a similar receive beamforming vector.}
\label{fig_angle_results}
\vspace{-6pt}
\end{figure}
The performance of both network-centric and decentralized approaches have been analyzed with a \gls{TDD} based 5G simulator. In particular, we have considered a \gls{UDN} composed of $74$ \glspl{AN} with a deployment identical to that illustrated in Fig.~\ref{fig_madrid}. The \glspl{AN} are equipped with circular arrays composed of $20$ dual-polarized 3GPP patch elements \SI{5}{m} above the ground.
The \glspl{UE} are equipped with circular arrays composed of $4$ dual-polarized elements (cross-dipoles). The transmit power budget of the \glspl{UE} is \SI{10}{dBm} while that of the \glspl{AN} is \SI{23}{dBm}. The \glspl{UE} take different routes {\color{black} through} the Madrid grid (see Fig.~\ref{fig_madrid}) with velocities of \SIrange{30}{50}{km/h}. The carrier-frequency is \SI{3.5}{GHz} and the METIS map-based ray tracing channel model \cite{metis_channels} has been employed. An \gls{OFDM} waveform is used in both \gls{UL} and \gls{DL}. The subcarrier spacing is \SI{240}{\kilo\hertz} and the \gls{TTI} equals \SI{200}{\micro\second}. The \gls{UL} and \gls{DL} reference signals are Zadoff-Chu sequences, similar to those used in \gls{LTE}. The pilots employed by the network-centric and decentralized \glspl{EKF} for tracking the double-directional parameters are transmitted on a single subcarrier in a frequency-hopping manner spanning \SI{10}{\mega\hertz}. Such \gls{UL} and \gls{DL} pilots are transmitted on every $500^\text{th}$ \gls{TTI}. Hence, the \gls{UL} and \gls{DL} beaconing rate is \SI{10}{beacons/s}. The latency between \gls{UL} and \gls{DL} pilots in the decentralized scheme is $2$ \glspl{TTI}.
Fig.~\ref{fig_angle_results} illustrates the performance of both network-centric and decentralized \glspl{EKF} in tracking the double-directional parameters of the \gls{LoS}-path in terms of the median error. In the network-centric \gls{EKF}, the \gls{UL} beacons received at the \glspl{AN} are impaired by uncoordinated interference due to \glspl{UE} transmitting simultaneously roughly \SI{250}{\meter} away from the receiving \glspl{AN}. {\color{black} The performance difference in azimuth-angle estimation at the \gls{AN} and \gls{UE} is due to the larger array aperture of the former. However, the elevation angle estimates at the \glspl{UE} outperform those obtained at the \glspl{AN}. This is explained by the highly directive beampatterns employed at the \glspl{AN} which decrease the effective aperture of the \glspl{AN}' arrays in the elevation domain. In particular, the beampatterns of the 3GPP patch elements composing the arrays at the \glspl{AN} are characterized by a large attenuation at the poles, thus decreasing the estimation accuracy that can be obtained for the elevation angles.}
In the decentralized \gls{EKF}, the \glspl{AN} transmit \gls{DL} reference signals to $8$~\glspl{UE} simultaneously (\SI{20}{\percent} of the spatial degrees-of-freedom). Such \gls{DL} reference signals are impaired by similar pilots transmitted by neighboring \glspl{AN} (unless {\color{black} muted}). Results {\color{black} in Fig.~\ref{fig_angle_results}} show that muting neighboring \glspl{AN} leads to improved performance on the azimuth and elevation angles at the \glspl{UE} due to reduced \gls{DL} interference. Such an interference coordination does not influence the performance of the estimated azimuth and elevation angles at the \glspl{AN} since these parameters are {\color{black} estimated} from \gls{UL} reference signals. The network-centric EKF outperforms the decentralized EKF since all parameters are estimated and tracked jointly. The cost is an increase in the computational complexity and {\color{black} required} control channel capacity.
\subsection{Positioning Accuracy using Cascaded EKFs}
\label{sec:positioning_results}
Next we assume that the \glspl{DoA} and \glspl{ToA} are acquired using the network-centric \gls{EKF}-based approach deployed at the \glspl{AN} as described in Section~\ref{sec:DoA_results} {\color{black} with more details available in~\cite{koivisto_joint_2016}}. These spatial and temporal estimates from all the \gls{LoS}-\glspl{AN} can thereafter be fused into 3D \gls{UE} location estimates using an additional positioning and synchronization \gls{EKF}, thus assembling a cascaded \gls{EKF} solution within the network {\color{black} as a whole~\cite{Werner15, koivisto_joint_2016}}. In addition to 3D location estimates, the latter \gls{EKF} can simultaneously be used for tracking the valuable clock offset estimates of unsynchronized \glspl{UE} and \gls{LoS}-\glspl{AN}. In order to demonstrate the performance of the cascaded \gls{EKF}, two alternative scenarios for synchronization are considered. In the first scenario, the \glspl{UE} have unsynchronized clocks with drifts, whereas the \glspl{AN} are assumed to be synchronized with each other. In the second scenario, the \glspl{AN} also have mutual clock-offsets, which do not fundamentally vary over time, whereas the clocks within the \glspl{UE} are again drifting as mentioned above. Such scenarios are later denoted as Pos\&Clock \gls{EKF} and Pos\&Sync \gls{EKF}, respectively~\cite{koivisto_joint_2016}.
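To illustrate the augmented state in these filters (again with generic notation; the exact models are specified in~\cite{koivisto_joint_2016}), a \gls{UE} state may collect the planar position, velocity, clock offset $\rho$ and clock skew $\alpha$ as $\mathbf{x} = [x, \, y, \, v_{x}, \, v_{y}, \, \rho, \, \alpha]^{T}$, with the clock parameters evolving as
\[
\rho_{k+1} = \rho_{k} + \alpha_{k}\Delta t + w_{k}^{\rho}, \qquad \alpha_{k+1} = \alpha_{k} + w_{k}^{\alpha},
\]
so that the \gls{ToA} measurements, although shifted by the unknown offsets, remain informative about both the position and the clock parameters.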
Considering the radio interface numerology described in Section~\ref{sec:DoA_results} and exploiting a \gls{CV} motion model for the \glspl{UE} attached to vehicles with a maximum speed of \SI{50}{km/h}, the performance of the Pos\&Clock and Pos\&Sync \glspl{EKF} is compared with the classical \gls{DoA}-only \gls{EKF} using both \SI{4.8}{MHz} and \SI{9.6}{MHz} \gls{RS} bandwidths. Since only automotive applications are considered here, the more practical 2D positioning approach is used in the evaluations. {\color{black} The 2D positioning results in terms of \glspl{CDF} are depicted in Fig.~\ref{fig_pos_results} after averaging over multiple random trajectories on the Madrid grid.} Based on the results, the cascaded \glspl{EKF} can provide highly accurate location estimates for the \glspl{UE} even in the case of unsynchronized \glspl{AN}. As expected, the Pos\&Clock and Pos\&Sync \glspl{EKF} outperform the \gls{DoA}-only \gls{EKF} due to the additional \gls{ToA} estimates. Owing to its better time resolution, the \SI{9.6}{MHz} \gls{RS} bandwidth yields more accurate {\color{black}\gls{ToA} estimates} and, consequently, more accurate positioning. {\color{black} While the Pos\&Clock \gls{EKF} is more accurate than the Pos\&Sync \gls{EKF} owing to the synchronized \glspl{AN}, both methods achieve the envisioned sub-meter positioning accuracy of future 5G networks~\cite{5g-ppp_vision_2015, 5g_forum_5g_2015} with a probability of at least 93\% when using the \SI{9.6}{MHz} bandwidth. In addition to high-accuracy positioning, both the Pos\&Clock and Pos\&Sync \glspl{EKF} are also able to track the clock offsets of the \glspl{UE} and \glspl{AN} with very high accuracy.
Finally, since both azimuth and elevation \glspl{DoA} are estimated in addition to \glspl{ToA}, the positioning \gls{EKF} also facilitates 3D and single-\gls{AN} positioning\footnote{\color{black} Video of 3D and single AN-based positioning is available at \url{http://www.tut.fi/5G/COMMAG16}.}.}
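To make the cascaded structure concrete, the sketch below outlines the predict/update cycle of a 2D constant-velocity positioning \gls{EKF} in Python. It is a minimal illustration under assumed noise covariances; the actual Pos\&Clock and Pos\&Sync filters additionally track clock offsets and fuse \gls{DoA}/\gls{ToA} measurements through a nonlinear measurement model~\cite{koivisto_joint_2016}.
\begin{verbatim}
import numpy as np

# Minimal 2D constant-velocity EKF sketch (illustrative assumptions only).
dt = 0.1                                # beacon interval: 1/10 s
F = np.eye(4); F[0, 2] = F[1, 3] = dt   # CV state transition
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])        # observe position only
Q = 0.5 * np.eye(4)                     # process noise (assumed)
R = 1.0 * np.eye(2)                     # measurement noise (assumed)

def ekf_step(x, P, z):
    # Predict with the CV model
    x = F @ x; P = F @ P @ F.T + Q
    # Update with a fused position fix z (e.g., from DoA/ToA triangulation)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = ekf_step(np.zeros(4), 10. * np.eye(4), np.array([1.2, 0.8]))
\end{verbatim}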
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth,trim={0.8cm 0.5cm 1.2cm 0.5cm},clip]{positioning_cdf_final}
\caption{{\color{black}\Glspl{CDF} for 2D positioning errors with \SI{4.8}{MHz} and \SI{9.6}{MHz} \gls{RS} bandwidths over random routes through the Madrid map. Pos\&Clock EKF refers to synchronized \glspl{AN} whereas Pos\&Sync EKF refers to unsynchronized network elements.}}
\label{fig_pos_results}
\vspace{-6pt}
\end{figure}
\section{Introduction}
\label{sec:intro}
Graph-based visual representations are becoming increasingly popular due to their ability to encode visual, semantic, and even temporal relationships in a compact representation that supports several downstream tasks such as object tracking~\cite{bal2022bayesian}, scene understanding~\cite{johnson2015image}, and complex visual commonsense reasoning over events~\cite{aakur2019going,aakur2022knowledge,liang2022visual}. Graphs help express complex semantic structures from visual inputs while mitigating the impact of noise, clutter, and (appearance/scene) variability, which is essential in scene understanding. Scene graphs, defined as directed graphs that model the visual-semantic relationships among entities (objects) in a given scene, have proven to be very useful in downstream tasks such as visual question-answering~\cite{hildebrandt2020scene,teney2017graph}, captioning~\cite{johnson2015image}, and even embodied tasks such as navigation~\cite{ravichandran2022hierarchical}, to name a few.
There has been a growing body of work~\cite{xu2017scene,zellers2018neural,Tang_2019_CVPR,Chen_2019_CVPR,yang2021probabilistic,guo2021general,shit2022relationformer} focused on the problem of scene graph generation (SGG), which aims to generate a scene graph from a given input observation. However, such approaches have tackled the problem by beginning with a fully connected graph, where all entities interact with each other, before pruning it down to a more compact graph by predicting edge relationships, or the lack of one, between each pair of localized entities. This approach, while effective, has several limitations. First, by modeling the interactions between entities with a dense topology, the underlying semantic structure is ignored during relational reasoning, which can lead to poor predicate (relationship) classification. Second, by constructing pairwise relationships between all entities in a scene, there is tremendous overhead on the predicate classification modules since the number of pairwise comparisons can grow non-linearly with the number of detected concepts. Combined, these two issues aggravate the existing long-tail distribution problem in scene graph generation. Recent progress in \textit{unbiasing}~\cite{Tang_2019_CVPR,tang2020unbiased,suhail2021energy,Li_2022_CVPR} has attempted to address this issue by tackling the long-tail distribution problem. However, such methods depend on the quality of the underlying graph generation approaches, which suffer from the above limitations.
In this work, we aim to overcome these limitations using a two-stage, generative approach called IS-GGT, a transformer-based iterative scene graph generation framework. An overview of the approach is illustrated in Figure~\ref{fig:intuition}. Contrary to current approaches to SGG, we leverage advances in generative graph models~\cite{liao2019efficient,belli2019image} to first sample the underlying interaction graph between the detected entities before reasoning over this sampled semantic structure for scene graph generation. By decoupling the ideas of graph generation and relationship modeling, we can constrain the relationship classification process to consider only those edges (pairs of entities) that have a higher probability of interaction (both semantic and visual) and hence reduce the computational overhead during inference. Additionally, the first step of generative graph sampling (Section~\ref{sec:ggt}) allows us to navigate clutter by rejecting detected entities that do not add to the semantic structure of the scene, by iteratively constructing the underlying entity interaction graph conditioned on the input image. A relation prediction model (Section~\ref{sec:rel_pred}) reasons over this constrained edge list to classify the relationships among interacting entities. Hence, the relational reasoning mechanism only considers the (predicted) global semantic structure of the scene and makes more coherent relationship predictions that help tackle the long-tail distribution problem without additional unbiasing steps and computational overhead.
\textbf{Contributions.} The contributions of this paper are three-fold: (i) we are among the first to tackle the problem of scene graph generation using a \textit{graph generative} approach without constructing expensive, pairwise comparisons between all detected entities, (ii) we propose the idea of iterative interaction graph generation and global, contextualized relational reasoning using a two-stage transformer-based architecture for effective reasoning over cluttered, complex semantic structures, and (iii) through extensive evaluation on Visual Genome~\cite{krishna2017visual} we show that the proposed approach achieves state-of-the-art performance (without unbiasing) across all three scene graph generation tasks while considering only $20\%$ of all possible pairwise edges using an effective graph sampling approach.
\section{Related Work}
\label{sec:relatedWork}
\textbf{Scene graph generation}, introduced by Johnson \textit{et al.}~\cite{johnson2015image}, aims to construct graph-based representations that capture the rich semantic structure of scenes by modeling objects, their interactions, and the relationships between them. Most approaches to scene graph generation have followed a typical pipeline: object detection followed by \textit{pairwise} interaction modeling to generate plausible \textit{(Subject, Predicate, Object)} tuples, which represent the labeled edge list of the scene graph. Entity localization (i.e., concept grounding) has primarily been tackled through localization and labeling of images using advances in object detection~\cite{ren2015faster,carion2020end}. The relationship or predicate classification for obtaining the edge-list tuples has focused mainly on capturing the global and local contexts using mechanisms such as recurrent neural networks and graph neural networks, resulting in seminal approaches to scene graph generation such as IMP~\cite{xu2017scene}, MOTIFS~\cite{zellers2018neural}, and R-CAGCN~\cite{yang2018graph}. Single-stage methods such as FC-SSG~\cite{Liu_2021_CVPR} and Relationformer~\cite{shit2022relationformer}, as well as relational modeling approaches such as RelTR~\cite{cong2022reltr}, have integrated context through transformer-based~\cite{vaswani2017attention} architectures. However, these approaches fail to explicitly tackle the long-tail distributions prevalent in visual scene graphs, as concurrently highlighted by Tang \textit{et al.}~\cite{Tang_2019_CVPR} and Chen \textit{et al.}~\cite{Chen_2019_CVPR}. \textbf{Unbiased scene graph generation} models explicitly tackle this problem by building upon SGG models such as VCTree and MOTIFS to provide better predicate classification. Several approaches have been successfully applied to tackle unbiased generation, such as using external knowledge (VCTree~\cite{Tang_2019_CVPR} and KERN~\cite{Chen_2019_CVPR}), counterfactual reasoning (TDE~\cite{tang2020unbiased}), energy-based loss functions (EBML~\cite{suhail2021energy}), modeling predicate probability distributions (PPDL~\cite{Li_2022_CVPR} and PCPL~\cite{10.1145/3394171.3413722}), graphical contrastive losses~\cite{zhang2019graphical}, cognitive trees (CogTree~\cite{yu2020cogtree}), bi-level sampling~\cite{Li_2021_CVPR}, and regularized unrolling (RU-Net~\cite{Lin_2022_CVPR}), to name a few. However, these approaches still perform expensive pairwise comparisons to obtain the final scene graph as a collection of tuples rather than directly modeling the underlying semantic structure.
Instead of considering graph generation as tuple detection, we build upon an exciting avenue of research in \textit{graph generative models}~\cite{liao2019efficient,belli2019image,ingraham2019generative,he2022td} to directly sample graph structures conditioned on images. By modeling the graph generation process as sequential decoding of adjacency lists, we can effectively model the interaction between detected entities using a simple, directed graph. A transformer-based relation classification model then converts the simple graph into a labeled, weighted, directed graph to generate scene graphs in an iterative, two-stage approach to move beyond triplet-based detection.
\section{Proposed Approach}
\label{sec:ProposedApproach}
\begin{figure*}[th]
\centering
\includegraphics[width=0.99\textwidth]{Imgs/Arch.pdf}
\caption{The \textbf{overall architecture} of the proposed IS-GGT is illustrated.
We first ground the concepts in the image data (Section~\ref{sec:grounding}) and use a generative transformer decoder network to sample an entity interaction graph (Section~\ref{sec:ggt}) before relation or predicate classification (Section~\ref{sec:rel_pred}) using a transformer-based contextualization mechanism for efficient scene graph generation.}
\label{fig:arch}
\end{figure*}
\textbf{Overview.} We take a two-stage, generative approach to the problem of scene graph generation. The overall approach, called IS-GGT, is shown in Figure~\ref{fig:arch}. There are three major components to the approach: (i) concept grounding, (ii) structural reasoning, and (iii) relational reasoning. Based on the idea of generative graph models, we use scene-level localization and entity concept hypotheses (Section~\ref{sec:grounding}) to first sample the underlying semantic structure of the scene using a generative transformer decoder network (Section~\ref{sec:ggt}). Once the semantic structure is sampled, the semantic relations (predicates), i.e., the edges, are labeled to characterize the scene graph (Section~\ref{sec:rel_pred}).
\textbf{Problem Statement.} Scene graph generation (SGG) aims to generate a graph structure $\mathcal{G}=\{\mathcal{V}, \mathcal{E}\}$ from a given input image $I$, where $\mathcal{V}=\{v_1, v_2, \ldots v_n\}$ is the set of nodes representing localized entities (objects) in the image and $\mathcal{E}=\{e_1, e_2, \ldots e_k\}$ is the set of edges describing the relationships connecting pairs of nodes $v_i$ and $v_j$. Each node $v_i \in \mathcal{V}$ has two attributes, a label $l_i \in \mathcal{C_N}$ and a bounding box $bb_i$, where $\mathcal{C_N}$ is the space of all possible concepts in an environment. Each edge $e_i \in \mathcal{E}$ is characterized by a label $r_i \in \mathcal{R}_K$ and an optional assertion score $p(r_i)$, where $\mathcal{R}_K$ is the set of all possible relationships that can be present between the entities in $\mathcal{C_N}$.
Typical approaches to this problem have focused on extracting plausible triplets from an exhaustive search space consisting of all possible edges in a fully connected graph, where each node is connected to every other node. A relational prediction model is then trained to distinguish between the plausible relationships between the nodes, including the \textit{null} relationship (indicated by a \textit{background} class). On the other hand, we aim to first sample the underlying semantic structure based on the node (entity) hypotheses, to help model the global context before relationship classification. This allows us to reduce the computational overhead for relationship prediction while restricting the relational reasoning to interactions that are considered plausible.
We present the proposed framework below.
\subsection{Concept Grounding: Entity Hypotheses}\label{sec:grounding}
The scene graph generation process begins with entity hypothesis generation, which involves the localization and recognition of concepts in a given image $I$. Following prior work~\cite{zellers2018neural,Tang_2019_CVPR,xu2017scene}, we use a standard ResNet-based~\cite{he2016deep} Faster R-CNN~\cite{ren2015faster} model as the localization module. The object detector returns a set of $n$ detected entities $v_1, v_2, \ldots v_n$, characterized by their locations using bounding boxes ($bb_1, bb_2, \ldots bb_n \in \mathcal{B}$) and corresponding labels ($l_1, l_2 \ldots l_n \mid l_i \in \mathcal{C_N}$). These entities ($\mathcal{V}$) serve as our node hypothesis space, over which the scene graph generation is conditioned. Each entity is described by a feature representation ($f^i_N$) from the underlying ResNet encoder, obtained through ROIAlign~\cite{he2017mask} using the predicted bounding boxes (ROIs), and the labels are generated by the classification layer of the object detector. Compared to prior work~\cite{zellers2018neural,Tang_2019_CVPR}, we do not use separate visual encoders for capturing the relationships among concepts at this stage.
We allow the entities to be detected and represented independently, which enables us to decouple the ideas of graph prediction and predicate classification.
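As an illustration of this feature-extraction step, the snippet below pools a per-entity descriptor from a backbone feature map with ROIAlign; the tensor sizes and the pooling resolution are assumptions for the example, not the exact configuration of our detector.
\begin{verbatim}
import torch
from torchvision.ops import roi_align

feat = torch.randn(1, 2048, 32, 32)    # backbone feature map (assumed size)
boxes = torch.tensor([[0., 10., 12., 200., 220.],   # [batch_idx, x1,y1,x2,y2]
                      [0., 50., 40., 300., 310.]])
pooled = roi_align(feat, boxes, output_size=(7, 7), spatial_scale=32 / 512)
f_N = pooled.mean(dim=(2, 3))          # one 2048-d feature per entity
print(f_N.shape)                       # torch.Size([2, 2048])
\end{verbatim}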
\begin{algorithm}[t]
\caption{Scene semantic graph structure sampling using a generative transformer decoder.}
\label{alg:GGT}
\begin{algorithmic}[1]
\Require $\mathcal{V} = v_1, v_2, \ldots v_n \mid v_i = \{l_i, f^i_N, bb_i\}$
\Ensure $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\} = \hat{\mathcal{A}_N} = \{\hat{\mathcal{A}^i_N}\}$
\State $ {\mathcal{G}} \gets \emptyset$ \Comment{\textit{Initialize empty graph}}
\State $ \hat{\mathcal{A}_N} \gets \emptyset$ \Comment{\textit{Initialize empty adjacency matrix}}
\State $\mathcal{E} \gets \emptyset$ \Comment{\textit{Initialize empty edge list}}
\For{each node $v_i$ in $\mathcal{V}$}
\State $c_t \gets [f^{1:i}_N; \hat{\mathcal{A}^{1:i}_N}, l_{1:i}]$\Comment{\textit{Context vector for decoding.}}
\State $c_t \gets c_t + PositionalEncoding(c)$
\State $h^0_t \gets MLP(c_t)$ \Comment{\textit{Linear projection}}
\State $\hat{h}^K_l \gets TransformerDecoder(h^0_t)$
\State $h^i_t \gets MLP(\hat{h}^K_l)$ \Comment{\textit{Learned feature space}}
\State $\hat{\mathcal{A}^i_N} \gets Sample(\sigma(MLP(h^i_t)))$
\Comment{\textit{$v_i$'s adjacency list}}
\State $\hat{l}_i \gets Softmax(MLP(h^i_t))$ \Comment{$v_i$'s auxiliary label}
\State $\hat{\mathcal{A}_N} \gets \hat{\mathcal{A}_N} \bigcup \{\hat{\mathcal{A}^i_N}\}$ \Comment{\textit{Populate adjacency matrix}}
\State $\mathcal{E} \gets \mathcal{E} \bigcup EdgeList(\hat{\mathcal{A}^i_N})$ \Comment{\textit{Collect edge list}}
\EndFor
\State $\mathcal{G} \gets \{\mathcal{V}, \mathcal{E}\}$ \Comment{\textit{Construct final interaction graph}}
\end{algorithmic}
\end{algorithm}
\subsection{Iterative Interaction Graph Generation}\label{sec:ggt}
At the core of our approach is the idea of graph sampling, where we first model the interactions between the detected entities in a graph structure. This sampled graph is a \textit{simple}, \textit{directed} graph, where the edges are present only between nodes (i.e., the detected entities) that share a semantically meaningful relationship. Each edge $e_i$ is unlabeled and merely signifies the plausible existence of a semantic relationship between the connecting nodes $v_i$ and $v_j$. Inspired by the success of prior work~\cite{belli2019image}, we model this graph generation process as the autoregressive decoding of the adjacency list $\mathcal{A}^i_N$ for each node $v_i$, using a transformer network~\cite{vaswani2017attention}.
A simplified pseudocode of the whole process is shown in Algorithm~\ref{alg:GGT}.
Given an empty graph $\mathcal{G}=\emptyset$, the underlying structural graph is generated through a sequence of edge and node additions.
Each step of the decoding process emits an output adjacency list conditioned upon the visual features $f^i_N$ of each detected node $v_i$, its hypothesized label $l_i$ and the previously decoded adjacency matrices up to the current step $t$ given by $\hat{\mathcal{A}_t}$.
This iterative graph generation process results in an adjacency matrix $\hat{\mathcal{A}} = \{\mathcal{A}^1_N, \mathcal{A}^2_N \ldots \mathcal{A}^n_N, \forall v_i \in \mathcal{V}\}$.
The final adjacency matrix is an $N\times N$ matrix that can be thresholded at some $\gamma$ to obtain a binary adjacency matrix. Entries with $\hat{\mathcal{A}}_N(i,j)=1$ indicate that an edge is present between nodes $v_i$ and $v_j$, which can then be added to the edge list $\mathcal{E}$. The edge list is then sorted by its \textit{energy}, given by $E(e_{ij})=\sigma(p_i + p_j)$, where $p_i$ and $p_j$ are the detector confidence scores for the existence of the concepts $v_i$ and $v_j$ in the image, respectively. The collection of nodes $\mathcal{V}$ and the edge list $\mathcal{E}$ provides the underlying semantic structure.
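A minimal sketch of this post-processing step is given below: the thresholded adjacency matrix is converted into an edge list sorted by the energy $E(e_{ij})$. The function and variable names are illustrative, and $\sigma$ denotes the logistic function.
\begin{verbatim}
import numpy as np

def edge_list(adj_bin, p, k=250):
    # adj_bin: thresholded binary adjacency; p: detector confidences.
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))
    i, j = np.nonzero(adj_bin)
    energy = sig(p[i] + p[j])
    order = np.argsort(-energy)[:k]    # keep the top-K edges
    return list(zip(i[order], j[order]))

adj = np.random.rand(10, 10) > 0.8
edges = edge_list(adj, p=np.random.rand(10))
\end{verbatim}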
Formally, we define this process as maximizing the probability of observing a scene graph $\mathcal{G}$ conditioned on the input image $I$, and is given by
\begin{equation}
P(G \mid I) = P(\hat{\mathcal{A}}_N | I) = \prod_{i=1}^{N}{p(\hat{\mathcal{A}}^i_N \mid \hat{\mathcal{A}^{1:i}}_N, f^{1:i}_N, l_{1:i})}
\label{eqn:GGT_Objective}
\end{equation}
where we decompose the probability of observing the graph $\mathcal{G}$ as the joint probability over the separate adjacency lists for each node $v_i$ given its visual features $f^i_N$ and label $l_i$, along with the nodes that have previously been sampled. Note that the ordering of the nodes can vary greatly; thus, the search space for learning the sequence of adjacency lists can grow exponentially with the number of nodes. To this end, we impose a fixed ordering of the nodes to be added to the graph, based on the confidence scores from the object detector, to obtain a tractable solution. We use a transformer-based decoder model trained in an auto-regressive manner to learn this probability measure.
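For concreteness, the sketch below shows one way to realize this decoder in PyTorch. It is a simplified, single-pass causal formulation: a triangular attention mask lets node $i$ attend only to nodes $1{:}i$, mirroring the conditioning in Eq.~(\ref{eqn:GGT_Objective}), while the positional encoding and the label conditioning of Algorithm~\ref{alg:GGT} are omitted for brevity; all dimensions are illustrative assumptions, not our exact configuration.
\begin{verbatim}
import torch, torch.nn as nn

class GraphDecoder(nn.Module):
    def __init__(self, d_feat=2048, d=256, n_nodes=32, n_cls=150):
        super().__init__()
        self.proj = nn.Linear(d_feat, d)        # MLP(c_t), line 7
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8,
                                           batch_first=True)
        self.dec = nn.TransformerEncoder(layer, num_layers=6)
        self.adj_head = nn.Linear(d, n_nodes)   # adjacency list, line 10
        self.lbl_head = nn.Linear(d, n_cls)     # auxiliary label, line 11

    def forward(self, f):                       # f: (B, N, d_feat)
        h = self.proj(f)
        n = f.size(1)
        mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
        h = self.dec(h, mask=mask)              # node i sees nodes 1..i
        return torch.sigmoid(self.adj_head(h)), self.lbl_head(h)

adj, lbl = GraphDecoder()(torch.randn(2, 32, 2048))
edges = adj > 0.5                               # gamma-thresholding
\end{verbatim}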
The decoder is trained using two loss functions - an adjacency loss $\mathcal{L}_{\mathcal{A}}$ and a semantic loss $\mathcal{L}_{\mathcal{S}}$. The former is a binary cross-entropy loss between the predicted and actual binary adjacency matrices, while the latter is a cross-entropy loss for node label prediction. Specifically, we define $\mathcal{L}_{\mathcal{A}} = -\frac{1}{N^2}\sum_{i=1}^N\sum_{j=1}^N\left(a_{ij}\log(\hat{a}_{ij}) + (1 - a_{ij})\log(1 - \hat{a}_{ij})\right)$ and $\mathcal{L}_{\mathcal{S}} = -\sum_j^{\mathcal{C}}{l_j \log(p(\hat{l}_j))}$, where $l_j$ is the entity's label as predicted by the concept grounding module from Section~\ref{sec:grounding} and $\hat{l}_j$ is the softmax probability from the node prediction of the transformer decoder as defined in line 11 of Algorithm~\ref{alg:GGT}. Note that we use the semantic loss $\mathcal{L}_{\mathcal{S}}$ as a mechanism to inject the semantics of the grounded concepts into the decoding process and do not use these predictions (termed \textit{node sampling}) as node labels for the final graph. We observe that node sampling (see Section~\ref{sec:ablation}) slightly reduces the performance. We attribute this to the fact that the object detector has access to the global, image-level context and hence has better classification performance.
The total loss is given by
\begin{equation}
\mathcal{L}_{G} = \lambda \mathcal{L}_{\mathcal{A}} + (1-\lambda) \mathcal{L}_{\mathcal{S}}
\label{eqn:GGT_loss}
\end{equation}
where $\lambda$ is a trade-off between semantic and adjacency losses. In our experiments, we set $\lambda=0.75$ to place more emphasis on the adjacency loss. During training, we use teacher forcing in the transformer decoder and convert the adjacency matrix to binary for tractable optimization.
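A sketch of this objective in PyTorch is given below; it is a minimal illustration, and the tensor shapes are assumptions for the example.
\begin{verbatim}
import torch, torch.nn.functional as F

def graph_loss(adj_pred, adj_true, lbl_logits, lbl_true, lam=0.75):
    l_adj = F.binary_cross_entropy(adj_pred, adj_true)    # L_A
    l_sem = F.cross_entropy(lbl_logits.flatten(0, 1),
                            lbl_true.flatten())           # L_S
    return lam * l_adj + (1 - lam) * l_sem

loss = graph_loss(torch.rand(2, 8, 8), (torch.rand(2, 8, 8) > .5).float(),
                  torch.randn(2, 8, 150), torch.randint(0, 150, (2, 8)))
\end{verbatim}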
\begin{table*}[ht]
\centering
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{} & \multirow{2}{*}{{\textbf{Approach}}} & \multicolumn{2}{c|}{\textbf{PredCls}} &
\multicolumn{2}{c|}{\textbf{SGCls}} & \multicolumn{2}{c|}{\textbf{SGDet}} & \multicolumn{1}{p{1.2cm}|}{\textbf{Average}} & \multicolumn{1}{p{1.2cm}|}{\textbf{Average}}\\
\cline{3-8}
& & \textbf{mR@50} & \textbf{mR@100} & \textbf{mR@50} & \textbf{mR@100} & \textbf{mR@50} & \textbf{mR@100} & \textbf{mR@100} & \textbf{mR@50}\\
\hline
\multirow{10}{*}{\rotatebox[origin = c]{90}{\textbf{Without Unbiasing}} }
& FC-SSG~\cite{Liu_2021_CVPR} & 6.3 & 7.1 & 3.7 & 4.1 & 3.6 & 4.2 & 4.5 & 5.1 \\
& IMP\cite{xu2017scene} & 9.8 & 10.5 & 5.8 & 6.0 & 3.8 & 4.8 & 7.1 & 6.5\\
& MOTIFS \cite{zellers2018neural} & 14.0 & 15.3 & 7.7 & 8.2 & 5.7 & 6.6 & 10.0 & 9.1 \\
& VCTree\cite{Tang_2019_CVPR} & 17.9 & 19.4 & 10.1 & 10.8 & 6.9 & 8.0 & 12.7 & 11.6 \\
& KERN\cite{Chen_2019_CVPR} & - & 19.2 & - & 10 & - & 7.3 & 12.2 & - \\
& R-CAGCN\cite{yang2021probabilistic} & - & 19.9 & - & 11.1 & - & 8.8 & 13.3 & - \\
& Transformer\cite{guo2021general} & - & 17.5 & - & 10.2 & - & 8.8 & 12.2 & - \\
& Relationformer\cite{shit2022relationformer} & - & - & - & - & \underline{9.3} & 10.7 & - & - \\
& RelTR\cite{cong2022reltr} & 21.2 & - & 11.4 & - & 8.5 & - & - & 13.7 \\
\cmidrule{2-10}
& \textbf{IS-GGT (Ours)} & \textbf{26.4} & \textbf{31.9} & \textbf{15.8} & \textbf{18.9} & \textbf{9.1} & \textbf{11.3} & \textbf{20.7} & \textbf{17.1} \\
\midrule
\midrule
\multirow{15}{*}{\rotatebox[origin = c]{90}{\textbf{With Unbiasing}} }
& RU-Net\cite{Lin_2022_CVPR} & - & 24.2 & - & 14.6 & - & 10.8 & 16.5 & -\\
\cmidrule{2-10}
& IMP+EBML\cite{suhail2021energy} & 11.8 & 12.8 & 6.8 & 7.2 & 4.2 & 5.4 & 8.46 & 7.6 \\
& VCTree+EBML\cite{suhail2021energy} & 18.2 & 19.7 & 12.5 & 13.5 & 7.7 & 9.1 & 14.1 & 12.8\\
& MOTIFS+EBML\cite{suhail2021energy} & 18.0 & 19.5 & 10.2 & 11 & 7.7 & 9.1 & 13.2 & 12.0\\
\cmidrule{2-10}
& MOTIFS+TDE\cite{tang2020unbiased} & {25.5} & 29.1 & 13.1 & 14.9 & 8.2 & 9.8 & 17.9 & 15.6\\
& VCTree+TDE\cite{tang2020unbiased}& {25.4} & 28.7 & 12.2 & 14 & \underline{9.3} & {11.1} & 17.9 & 15.6\\
\cmidrule{2-10}
& MOTIFS+CogTree\cite{yu2020cogtree} & {26.4} & 29 & 14.9 & 16.1 & \underline{10.4} & \underline{11.8} & 19.0 & \underline{17.2}\\
& VCTree+CogTree\cite{yu2020cogtree} & \underline{27.6} & 29.7 & \underline{18.8} & \underline{19.9} & \underline{10.4} & \underline{12.1} & {20.6} & \underline{18.9}\\
\cmidrule{2-10}
& IMP+PPDL\cite{Li_2022_CVPR} & {24.8} & 25.3 & 14.2 & 15.9 & \underline{9.8} & 10.4 & 17.2 & 16.2\\
& MOTIFS+PPDL\cite{Li_2022_CVPR} & \underline{32.2} & \underline{33.3} & 17.5 & 18.2 & \underline{11.4} & \underline{13.5} & \underline{21.7} & \underline{20.4}\\
& VCTree+PPDL\cite{Li_2022_CVPR} & \underline{33.3} & \underline{33.8} & \underline{21.8} & \underline{22.4} & \underline{11.3} & \underline{14.4} & \underline{23.5} & \underline{22.1}\\
\cmidrule{2-10}
& BGNN\cite{Li_2021_CVPR} & \underline{30.4} & \underline{32.9} & 14.3 & 16.5 & \underline{10.7} & \underline{12.6} & {20.7} & \underline{18.5}\\
\cmidrule{2-10}
& PCPL\cite{10.1145/3394171.3413722} & \underline{35.2} & \underline{37.8} & \underline{18.6} & \underline{19.6} & \underline{9.5} & \underline{11.7} & \underline{23.0} & \underline{21.1}\\
\hline
\end{tabular}
}
\caption{\textbf{Comparison with the state-of-the-art} scene graph generation approaches, with and without unbiasing. We consistently outperform all models that do not use unbiasing and some early unbiasing models across all three tasks while offering competitive performance to current state-of-the-art unbiasing models. Approaches outperforming the proposed IS-GGT are underlined.}
\label{tab:sota}
\end{table*}
\subsection{Edge Labeling: Relation Prediction}\label{sec:rel_pred}
The final step in the proposed approach is predicate (or entity relation) prediction, which involves the labeling of the edges $\mathcal{E}$ in the interaction graph $\mathcal{G}$ generated from Section~\ref{sec:ggt}. To further refine the interaction graph, we assign an ``edge prior'' to each sampled edge $e_{ij} \in \mathcal{E}$ between two nodes $n_i$ and $n_j$. This prior is a function of the confidence scores ($c_i$ and $c_j$, respectively) obtained from the concept grounding module (Section~\ref{sec:grounding}) and is given by $E(e_{ij}) = \sigma(c_i \times c_j)$. Finally, we sort the edges based on their edge prior and take the top $K$ edges as the final edge list to represent the scene graph $\mathcal{G}_s$. In our experiments, we set $K{=}250$ to provide a tradeoff between inference time and expressiveness, although we find that lower values of $K$ do not reduce the performance (see Section~\ref{sec:gg_eval}). Given the final edge list $\mathcal{E}$, we then predict the relationship by maximizing the probability $P(r_{k} \mid f^i_N, f^j_N, S^i_N, S^j_N, bb_i, bb_j, \mathcal{F}^G)$, where $\mathcal{F}^G$ is the global image context captured by a contextualization mechanism, and $r_{k}$ is the relationship of the $k^{th}$ edge between nodes $n_i$ and $n_j$ described by their visual features $f^i_N$ and $f^j_N$, and semantic features $S^i_N$ and $S^j_N$, respectively.
We obtain the contextualized global features $\mathcal{F}^G$ using DETR~\cite{carion2020end}.
The semantic features are obtained through an embedding layer initialized by pre-trained word embeddings of the concept labels $\mathcal{C}$ such as GloVe~\cite{pennington2014glove} or ConceptNet Numberbatch~\cite{speer2017conceptnet}. We use an encoder-decoder transformer~\cite{vaswani2017attention} to model this probability.
Specifically, we use a linear projection to map the entity features (visual features $f^i_N$ and localization features $bb_i$) of each node in the edge $e_k = e_{ij} \in \mathcal{E}$ into a shared visual embedding space by $\hat{h}^k_v = RELU(W_c [f^i_N; bb_i; f^j_N; bb_j])$. A visual-semantic entity embedding is obtained by a linear projection and is given by $\hat{h}^k_{sv} = RELU(W_{sv} [\hat{h}^k_v; S^i_N; S^j_N])$. An encoder-decoder transformer then takes these visual-semantic features to predict the relationship through a series of attention-based operations given by
\begin{align}
h^k_{sv} &= {Att}^E_{enc}(Q = K = V = \hat{h}^k_{sv})\\
\hat{h}^k &= {Att}^D_{dec}(Q = \hat{h}^k_{sv}, K = V = \mathcal{F}^G)
\label{eqn:transformer}
\end{align}
where ${Att}^E_{enc}(\dots)$ is a transformer encoder consisting of $E$ multi-headed attention layers ($MHA(Q,K,V) {=} W_{a}[h_1; h_2; \ldots h_K]$), as proposed in Vaswani \textit{et al.}~\cite{vaswani2017attention}, where $h_i{=}Attn(Q{=}W_QX, K{=}W_KX, V{=}W_VX)$. The multi-headed attention mechanism applies a scaled dot-product attention operation given by $Attn(Q,K,V){=}Softmax(\frac{QK^T}{\sqrt{D_K}})V$. The resulting vector $h^k_{sv}$ is then passed through a $D$-layer transformer decoder that obtains a contextualized representation $\hat{h}^k$ for each edge $e_k$ with respect to the global context $\mathcal{F}^G$. The relationship (or predicate) for each edge is obtained by applying a linear layer on $\hat{h}^k$ followed by softmax to obtain the probability of a relationship $p(\hat{r}_k)$. We train this network using a weighted cross-entropy loss given by
\begin{equation}
\mathcal{L}_{R} = -w_r\sum_{k=1}^{|\mathcal{R}_K|}{r_k \log(\hat{r}_k)}
\label{eqn:cross_entropy}
\end{equation}
where $r_k$ is the target relationship class, $\hat{r}_k$ is the probability of the predicted relationship class, and $w_r$ is the weight assigned to the correct relationship class. In our experiments, we set the weights as the inverse of the normalized frequency of occurrence of each relationship $r_k\in \mathcal{R}_K$. The weighted cross-entropy allows us to address the long-tail distribution of the predicate relationships in the scene graph classification task in a simple yet efficient manner.
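The weighting scheme amounts to a one-line modification of the standard cross-entropy, as sketched below; the class counts are placeholders, and in practice they are computed from the training split.
\begin{verbatim}
import torch, torch.nn.functional as F

counts = torch.tensor([50000., 3000., 120., 45.])  # per-predicate counts (toy)
w = 1.0 / counts
w = w / w.sum()                                    # normalized inverse freq.
loss = F.cross_entropy(torch.randn(8, 4),          # logits: 8 edges, 4 classes
                       torch.randint(0, 4, (8,)),  # target predicates
                       weight=w)
\end{verbatim}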
\textbf{Implementation Details.}
In our experiments, we use a Faster R-CNN model with a ResNet-101~\cite{he2016deep} backbone, trained on Visual Genome, and freeze the detector layers. The features extracted from the object detector are $2048$-dimensional and are filtered to obtain bounding boxes specific to the target vocabulary. The iterative graph decoder from Section~\ref{sec:ggt} has a hidden size of $256$ and 6 layers with sinusoidal positional encoding and is trained for 50 epochs with a learning rate of $0.001$. The predicate classifier (Section~\ref{sec:rel_pred}) uses a hidden size of $256$ for both its encoder and decoder, and GloVe embeddings~\cite{pennington2014glove} with 300-d vectors are used to derive the semantic features $S^i_N$. The predicate classifier is trained for $20$ epochs with a learning rate of $1\times 10^{-4}$. Training took around 3 hours for both networks on a GPU server with a 64-core AMD Threadripper processor and 2 NVIDIA Titan RTX GPUs.
\begin{table}[t]
\centering
\resizebox{0.475\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\toprule
\multirow{2}{*}{\textbf{Approach}} & \textbf{PredCls} & \textbf{SGCls} & \textbf{SGDet} & \textbf{Mean}\\
& \textbf{zR@\{20/50\}} & \textbf{zR@\{20/50\}} & \textbf{zR@\{20/50\}} & \textbf{\textbf{zR@\{20/50\}}}\\
\midrule
VCTree\cite{Tang_2019_CVPR} & 1.4 / 4.0 & 0.4 / 1.2 & 0.2 / 0.5 & 0.7 / 1.9 \\
MOTIFS\cite{zellers2018neural} & 1.3 / 3.6 & 0.4 / 0.8 & 0.0 / 0.4 & 0.6 / 1.7 \\
FC-SGG~\cite{Liu_2021_CVPR} & -/\underline{7.9} & -/1.7 & -/\underline{0.9} & -/\underline{3.5}\\
VCTree + EBML\cite{suhail2021energy} & \underline{2.3} / {5.4} & \underline{0.9} / \underline{1.9} & \underline{0.2} / {0.5} & \underline{1.1} / {2.6} \\
MOTIFS + EBML\cite{suhail2021energy} & 2.1 / 4.9 & 0.5 / 1.3 & 0.1 / 0.2 & 0.9 / 2.1 \\
IS-GGT (Ours) & \textbf{5.0} / \textbf{8.3} & \textbf{1.4} / \textbf{2.6} & \textbf{1.0} / \textbf{1.3} & \textbf{2.5} / \textbf{4.1}\\
\bottomrule
\end{tabular}
}
\caption{\textbf{Zero-shot evaluation} on Visual Genome. We report the recall@20 and recall@50 for fair comparison.}
\label{tab:zeroshot}
\end{table}
\section{Experimental Evaluation}
\label{sec:eval}
\textbf{Data.} We evaluate our approach on Visual Genome~\cite{krishna2017visual}. Following prior works~\cite{zellers2018neural,xu2017scene,Tang_2019_CVPR,Chen_2019_CVPR}, we use the standard scene graph evaluation subset containing 108k images with 150 object (entity) classes sharing 50 types of relationships (predicates). We use $70\%$ of the data for training, of which a subset of 5,000 images is used for validation, and the remaining $30\%$ for evaluation.
We evaluate our approach on three standard scene graph generation tasks - predicate classification (\textbf{PredCls}), scene graph classification (\textbf{SGCls}), and scene graph generation (\textbf{SGDet}). The goal of PredCls is to generate the scene graph, given ground truth entities and localization, while in SGCls, the goal is to generate the scene graph, given only entity localization. In SGDet, only the input image is provided, and the goal is to generate the scene graph along with the entity localization.
\textbf{Metrics and Baselines.} Following prior work~\cite{Chen_2019_CVPR,Tang_2019_CVPR,cong2022reltr,yang2021probabilistic}, we report the mean recall (\textbf{mR@K}) metric, since recall has been shown to be biased towards predicate classes with larger amounts of training data~\cite{Chen_2019_CVPR,Tang_2019_CVPR}. We report results across $K \in \{50, 100\}$. We also present the average mR@K across all tasks to summarize the performance of the scene graph generation models across the three tasks of varying difficulty. We also report the zero-shot recall (\textbf{zR@K}, $K\in\{20,50\}$) to evaluate the generalization capabilities of the SGG models. Finally, we compare against two broad categories of scene graph generation models - those with unbiasing and those without unbiasing. Unbiasing refers to the use of additional training mechanisms, such as leveraging prior knowledge, to tackle the long-tail distribution in predicate classification. All numbers are reported under the \textit{with graph constraint} setting.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{Imgs/Plot1.pdf}
\caption{\textbf{Impact of graph sampling.}
We greatly reduce the number of pairwise comparisons made for scene graph generation.
Using only 200 edges ($\sim 20\%$ of all edges), we outperform most state-of-the-art approaches on the mean mR@100 across all tasks.}
\label{fig:GGT_plot}
\end{figure}
\subsection{Comparison with State-Of-The-Art}\label{sec:SOTA}
We evaluate our approach on the test split of Visual Genome with the mean recall under graph constraints metric (mR@50 and mR@100) and compare with several state-of-the-art scene graph generation approaches, both with and without unbiasing. The results are summarized in Table~\ref{tab:sota}. Without bells and whistles, we significantly outperform approaches that do not use unbiasing across all three tasks. Interestingly, we outperform the closely related, transformer-based RelTR~\cite{cong2022reltr} model by $3.4$ points in the average mR@50 metric.
In comparison with models with unbiasing, we see that we perform competitively to current state-of-the-art models such as PPDL~\cite{Li_2022_CVPR}, CogTree~\cite{yu2020cogtree}, and BGNN~\cite{Li_2021_CVPR}, while outperforming some of the earlier approaches to unbiasing such as EBML~\cite{suhail2021energy} and TDE~\cite{tang2020unbiased} across all tasks.
Of particular interest is the comparison with RU-Net~\cite{Lin_2022_CVPR}, a scene graph generation model that jointly models unbiasing and generation in a unified framework, as opposed to other approaches, which primarily focus on improving the predicate classification performance of underlying SGG models. We consistently outperform RU-Net across all three tasks, with an average mR@100 improvement of $3.6$ absolute points. It is also remarkable to note the performance difference (in mR@100) between the state-of-the-art unbiasing model (PPDL) and our IS-GGT on PredCls is less than $3\%$, considering that they are optimized specifically for this task, indicating that the graph sampling approach consistently places the edges in the ground truth scene graph in the top 100 edges.
\begin{table}[t]
\centering
\resizebox{0.475\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\toprule
\textbf{Max Edges} & \textbf{PredCls} & \textbf{SGCls} & \textbf{SGDet} & \textbf{Graph Acc.}\\
\textbf{Considered} & \textbf{mR@100} & \textbf{mR@100} & \textbf{mR@100} & \textbf{unconst. (\textit{const.})}\\
\midrule
10 & 4.6 & 3.3 & 3.5 & 11.6 (\textit{9.1})\\
100 & 24.3 & 14.0 & 10.8 & 35.1 (\textit{25.3})\\
250 & 30.1 & 17.5 & 11.8 & 44.2 (\textit{30.7})\\
500 & 30.8 & 17.6 & 11.9 & 49.5 (\textit{33.3})\\
750 & 31.0 & 17.6 & 11.9 & 51.4 (\textit{34.4})\\
All & 31.4 & 17.6 & 12.0 & 52.7 (\textit{34.8})\\
\bottomrule
\end{tabular}
}
\caption{The \textbf{quality of the sampled edges} is quantified using its impact on the three scene graph generation tasks. mR@100 and average graph accuracy are reported.}
\label{tab:ggt_impact}
\end{table}
\textbf{Zero-Shot Evaluation.} We also evaluated the generalization capabilities of our approach by considering the zero-shot evaluation setting. Here, the recall (with graph constraint) was computed only on edges (i.e., subject-predicate-object pairs) that were not part of the training set; the results are summarized in Table~\ref{tab:zeroshot}. It can be seen that we outperform approaches both with and without unbiasing. Specifically, we obtain an average zero-shot recall of 2.5 (at $K{=}20$) and 4.1 (at $K{=}50$), which is more than $2\times$ the performance of comparable models without unbiasing such as VCTree and MOTIFS, while also outperforming the comparable FC-SGG~\cite{Liu_2021_CVPR} across all three tasks. It is interesting to note that we also outperform EBML~\cite{suhail2021energy}, which proposes to mitigate the long-tail distribution using an energy-based loss function. Interestingly, our approach, IS-GGT, obtains $21.4$ zR@100 \textit{without graph constraint}, which outperforms FC-SGG~\cite{Liu_2021_CVPR} ($19.6$), VCTree+TDE~\cite{tang2020unbiased} ($17.6$), and MOTIFS+TDE~\cite{tang2020unbiased} ($18.2$), which are state-of-the-art unbiasing models in the zero-shot regime.
\subsection{Importance of Graph Sampling.}\label{sec:gg_eval}
At the core of our approach is the notion of graph sampling, as outlined in Section~\ref{sec:ggt}. Hence, we examine its impact on the performance of the proposed IS-GGT in more detail. First, we assess the effect of considering the top K edges based on the edge prior (Section~\ref{sec:rel_pred}), which directly impacts the number of edges considered in the final graph for predicate classification.
We vary the maximum number of edges considered per predicted scene graph from 10 to 1000 and consider all pairwise comparisons for each detected entity. We assess its impact on the average mean recall (mR@100) across all three tasks (PredCls, SGCls, and SGDet) and summarize the results in Figure~\ref{fig:GGT_plot}. As can be seen, we outperform all SGG models that do not use unbiasing while considering only the top $100$ edges, which represents $\sim 10\%$ of all possible pairwise combinations, while at $K{=}200$ edges we outperform most models \textit{with} unbiasing. Only PCPL~\cite{10.1145/3394171.3413722} and PPDL~\cite{Li_2022_CVPR} outperform IS-GGT, although they consider all ($>1000$) pairwise combinations.
\begin{table}[t]
\centering
\resizebox{0.475\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|}
\toprule
\textbf{G.C.} & \textbf{V.F.} & \textbf{S.F.} & \textbf{G.S.} & \textbf{PredCls} & \textbf{SGCls} & \textbf{SGDet}\\
\toprule
\ding{51} & \ding{51} & \ding{55} & \ding{51} & 28.3 & 16.5 & 10.3 \\
\ding{51} & \ding{51} & {C.N.B.} & \ding{51} & 28.5 & 16.8 & 11.6 \\
\ding{51} & \ding{51} & {GloVe} & \ding{51} & \textbf{30.1} & \textbf{17.4} & \textbf{11.9} \\
\midrule
\ding{51} & \ding{55} & {GloVe} & \ding{51} & 29.2 & 15.2 & 10.0 \\
\ding{55} & \ding{51} & {C.N.B} & \ding{51} & 28.5 & 16.9 & 11.0 \\
\ding{55} & \ding{51} & {GloVe} & \ding{51} & 29.3 & 16.9 & 10.5 \\
\midrule
\ding{55} & \ding{51} & {GloVe} & \ding{55} & 27.9 & 16.1 & 11.0 \\
\ding{51} & \ding{51} & {GloVe} & \ding{55} & 28.5 & 16.8 & 11.2 \\
\ding{51} & \ding{51} & {GloVe} & W/o E.P. & N/A & 17.2 & 9.3 \\
\ding{51} & \ding{51} & {GloVe} & With N.S. & 28.5 & 17.2 & 8.9\\
\bottomrule
\end{tabular}
}
\caption{\textbf{Ablation studies} are presented to quantify each component's impact on mR@100. G.C.: global context, V.F.: visual features, S.F: semantic features, G.S.: graph sampling, C.N.B: ConceptNet Numberbatch, E.P. edge prior, and N.S: node sampling.
}
\label{tab:ablation}
\end{table}
\begin{figure*}
\centering
\begin{tabular}{cc}
\toprule
\multicolumn{2}{c}{\textbf{Scene Graph Detection}}\\
\toprule
\includegraphics[width=0.5\textwidth]{Imgs/SGDet_Qual1.pdf} &
\includegraphics[width=0.43\textwidth]{Imgs/SGDet_Qual2.pdf} \\
(a) & (b) \\
\midrule
\multicolumn{1}{c}{\textbf{Predicate Classification with Zero-Shot Edges}} &
\multicolumn{1}{c}{\textbf{Predicate Classification}}\\
\midrule
\includegraphics[width=0.52\textwidth]{Imgs/PredCls_Qual1.pdf} &
\includegraphics[width=0.43\textwidth]{Imgs/PredCls_Qual2.pdf} \\
(c) & (d) \\
\bottomrule
\end{tabular}
\caption{We present \textbf{qualitative visualizations} of the scene graphs generated by IS-GGT under (a, b) the scene graph detection setting, (c) predicate classification on images with zero-shot predicates (indicated by blue edges), and (d) predicate classification with complex structures.}
\label{fig:qual}
\end{figure*}
In addition to the impact on the average mR@100, we also assess the \textit{quality} of the underlying graph sampled with the generative graph transformer decoder. We propose two new metrics, unconstrained and constrained graph accuracy, which measure the quality of the sampled edges. In the former, we measure the accuracy of the underlying structure when both the nodes and edges are unlabeled and binary. In the latter, we only consider the edges to be unlabeled. Note that, in both metrics, for a node to be ``correct'', its bounding box must have at least $50\%$ overlap with a corresponding ground-truth node. We summarize the results in Table~\ref{tab:ggt_impact}. It can be seen that the graph accuracy increases with the number of considered edges, plateauing at around $500$ edges. Interestingly, the constrained accuracy, IS-GGT's theoretical upper bound, is $30.7\%$ with only $250$ sampled edges.
This is remarkable considering that, on average, the number of possible edges per image can exceed $1000$, and more than $30\%$ of the ground-truth edges are contained in the top $250$ edges.
These results indicate that the graph sampling does an effective job in capturing the underlying entity interaction structure.
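Our reading of the unconstrained variant can be summarized by the short sketch below; the exact matching procedure may differ in implementation details, and the helper names are illustrative.
\begin{verbatim}
import numpy as np

def iou(a, b):                      # boxes as [x1, y1, x2, y2]
    x1, y1 = np.maximum(a[:2], b[:2]); x2, y2 = np.minimum(a[2:], b[2:])
    inter = max(0., x2 - x1) * max(0., y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def graph_accuracy(pred_boxes, pred_edges, gt_boxes, gt_edges):
    # match each ground-truth node to its best-overlapping predicted node
    match = {g: max(range(len(pred_boxes)),
                    key=lambda p: iou(pred_boxes[p], gt_boxes[g]))
             for g in range(len(gt_boxes))}
    ok = lambda g: iou(pred_boxes[match[g]], gt_boxes[g]) >= 0.5
    hits = sum(((match[i], match[j]) in pred_edges) and ok(i) and ok(j)
               for i, j in gt_edges)
    return hits / max(len(gt_edges), 1)

pb = [np.array([0., 0., 10., 10.]), np.array([20., 20., 30., 30.])]
print(graph_accuracy(pb, {(0, 1)}, pb, [(0, 1)]))   # -> 1.0
\end{verbatim}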
\subsection{Ablation Studies}\label{sec:ablation}
To assess the impact of each component in the proposed IS-GGT framework, we systematically evaluate the framework's performance by exploring alternatives, including the exclusion of each element. Specifically, we assess the impact of three broad categories - (i) use of semantics, (ii) choice of visual features, and (iii) use of graph sampling.
We see that the lack of semantic features has a more significant impact, resulting in a reduction of an average of $1.47\%$ in absolute mR@100 across tasks. In contrast, the choice of semantic features (ConceptNet Numberbatch~\cite{speer2017conceptnet} vs. GloVe~\cite{pennington2014glove}) has limited impact.
We attribute the success of GloVe to its pre-training objective, which ensures that the dot product between GloVe embeddings is proportional to their co-occurrence frequency. This property helps model the potential semantic relationships between nodes using the attention mechanism in relationship prediction model (Section~\ref{sec:rel_pred}).
Interestingly, we see that adding global context as part of the predicate prediction features (Section~\ref{sec:rel_pred}) significantly improves the performance ($\sim 1.1\%$ average mR@100), whereas removing visual features altogether results in a reduction of $\sim 1.7\%$ average mR@100.
Removing the GGT and removing the edge prior also hurt the performance. However, the recall metric does not accurately capture the reduction in the false alarms produced due to the lack of edge sampling with a generative model. Finally, we see that using node sampling ($\hat{l}_i$ from Section~\ref{sec:ggt}) affects SGCls and SGDet significantly.
We attribute it to the fact that concept grounding is an essential step in modeling the visual-semantic relationships among entities.
\textbf{Qualitative Evaluation.} We present some qualitative illustrations of some of the scene graphs generated by the proposed approach in Figure~\ref{fig:qual}. In the top row, we present the generated scene graphs under the ``detection'' setting, where the goal is to both detect entities and characterize the relationships between them. It can be seen that, although there are a large number of detected entities ($\sim 28$ per image), the graph sampling approach allows us to reject clutter to arrive at a compact representation that captures the underlying semantic structure. Figure~\ref{fig:qual}(c) shows the generalization capabilities of the proposed approach for predicate classification when previously unseen (``zero-shot'') triplets are observed. Finally, we show in Figure~\ref{fig:qual}(d) that the graph sampling also works under cluttered scenarios when localized, ground-truth entities are provided, and there is a need to reject nodes that do not add to the scene's semantic structure. We can sample sparse graph structures that express complex semantics without losing expressiveness.
\section{Conclusion}
\label{sec:conclusion}
In this work, we presented IS-GGT, one of the first works to address the problem of generative graph sampling for scene graph generation. Using a two-stage approach, we first sample the underlying semantic structure of the scene before predicate (relationship) characterization. This decoupled prediction allows us to reason over the constrained (optimal) global semantic structure while reducing the number of pairwise comparisons for predicate classification. Extensive experiments on Visual Genome indicate that the proposed approach outperforms scene graph generation models without unbiasing and offers competitive performance to those with unbiasing, while considering only $\sim 20\%$ of the total possible edges. We aim to extend this approach to general graph generation problems such as semantic graphs~\cite{aakur2019going} and temporal graph prediction~\cite{bal2022bayesian,ji2020action}, where capturing the underlying entity interactions can help constrain the search space for complex reasoning.
\textbf{Acknowledgements.} This research was supported in part by the US National Science Foundation grants IIS 2143150, and IIS 1955230.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
This article is motivated by an effort to provide
a~new framework for constructing so\hyp{}called Floer theories,
which are used to construct invariants of low\hyp{}dimensional manifolds,
knots, as well as Lagrangians in symplectic manifolds.
Originally, Floer implanted ideas from finite\hyp{}dimensional Morse theory
(cf. \cites{Sch1993,Wit1982})
to the study of functions
on infinite\hyp{}dimensional spaces
(often called \textit{functionals} on \textit{configuration spaces}),
leading to the resolution of the Arnold conjecture in symplectic topology
\cites{Flo1987,Flo1988Lag}.
Inspired by Donaldson's proof of the diagonalization theorem
for \(4\)\hyp{}manifolds \cite{Don1983},
Floer used these ideas
to construct homological invariants of \(3\)\hyp{}manifolds,
aiming to establish gluing formulas for Donaldson's invariants.
An~analogous construction for knots in \(3\)\hyp{}manifolds
has been carried out by Kronheimer and Mrowka
and plays the key role in the proof that Khovanov homology
detects the unknot \cite{KM2011a}.
An analogue for the Seiberg\hyp{}Witten equations (in place of the
anti\hyp{}self\hyp{}duality (ASD) equations of Donaldson's theory)
is the central ingredient
of Manolescu's disproof of the triangulation conjecture
in dimensions \(\geq 5\) \cite{Man2013}.
While the idea of using Morse theory in infinite\hyp{}dimensional settings dates back
to Atiyah and Bott \cite{AB1983},
Floer's novelty was dealing with
functionals
having critical points of infinite index.
The key observation was that the unstable manifolds of critical points
were of finite index with respect to a~certain subbundle of the tangent space,
allowing one to define finite relative indices of critical points
and eventually leading to a~computation of
``middle\hyp{}dimensional'' homology groups of the space.
As noted by Atiyah \cite{A1988} (cf. \cite{CJS1995}),
Floer theory was, from the very beginning,
understood as describing the behavior of so\hyp{}called
\emph{semi\hyp{}infinite\hyp{}dimensional cycles}.
As in Morse homology (cf. \cite{Sch1993}),
one needs to overcome analytical difficulties
to even define Floer theories and prove they are well\hyp{}defined
and independent of choices -- one needs to establish regularity and compactness
of moduli spaces of trajectories between critical points.
Moreover, Morse theory is generally not well\hyp{}suited
for equivariant constructions
since one in general cannot guarantee regularity of the moduli spaces
without breaking the symmetry coming from a~group action.
To address these problems
and to allow general constructions of equivariant
(cf. \cites{DS2019pre,KMbook,FLin2018,SMMil2019})
and generalized Floer homologies
(cf. \cites{Man2014,JLin2015,AB2021pre}),
Lipyanskiy \cite{Lip2008} introduced a~framework for using such
semi\hyp{}infinite\hyp{}dimensional cycles
as a~tool for defining Floer theories
and the author has further developed these methods in \cite{Suw2020}.
This article contains results required to rigorously
define the relative invariants of \(4\)\hyp{}manifolds with boundary
and maps induced by cobordisms
in this construction, using the Seiberg\hyp{}Witten equations.
Moreover, the key result is a~gluing theorem which
is fundamental for establishing functoriality of the
cobordism maps.
\paragraph{Results.}
Consider a~\(4\)\hyp{}manifold \(X\) with boundary
a~nonempty collection of rational homology spheres
(i.e., \(b_1(\partial X) = 0\))
and a~\spinc{} structure \(\hat{\mathfrak{s}}\) on \(X\).
We study the moduli space of solutions to the Seiberg\hyp{}Witten equations
for pairs \((A,\Phi)\) of a~\spinc{} connection \(A\) on \(X\)
and a~section \(\Phi\) of the spinor bundle \(S^+\) over \(X\)
associated to the \spinc{} structure \(\hat{\mathfrak{s}}\):
\begin{equation*}
\left\{
\begin{aligned}
\frac 1 2 F^+_{A^t} - \rho^{-1}((\Phi\Phi^\ast)_0)
&= 0, \\
\Dirac_{A}^+ \Phi &= 0.
\end{aligned}
\right.
\end{equation*}
(\autoref{def:Seiberg-Witten-equations}).
The split Coulomb slice with respect to a~reference connection \(A_0\)
is given by the equations
\begin{equation*}
\left\{
\begin{aligned}
d^\ast (A-A_0)
&= 0, \\
d^\ast (\iota_{\partial X}^\ast (A-A_0))
&= 0,
\end{aligned}
\right.
\end{equation*}
(where \(\iota_{\partial X}: \partial X \hookrightarrow X\) denotes the inclusion)
together with a~condition restricting \(A-A_0\)
to a~subset of codimension \(b_0(\partial X) - 1\)
(\autoref{def:split-Coulomb-slice}).
This additional condition
depends on a~choice of a~gauge splitting \(s\)
(\autoref{def:gauge-splitting}).
The moduli space \(\SWModuli{s}{X}\)
(\autoref{def:moduli-spaces-on-4-manifolds-with-boundary})
is the quotient of the space of \(L^2_1\)\hyp{}solutions
to the Seiberg\hyp{}Witten equations
in this split Coulomb slice
by the (discrete) action of the split gauge group
(\autoref{def:gauge-group-split});
this gauge group preserves the split Coulomb slice
as well as the set of solutions to the Seiberg\hyp{}Witten equations.
There is also a~residual action of \(S^1\) on \(\SWModuli{s}{X}\)
given by multiplication of the spinor component, \(\Phi \mapsto z \Phi\),
by complex numbers in the unit circle \(z \in S^1 \subset \mathbb{C}\).
Firstly, we choose a~gauge twisting \(\tau\)
(\autoref{def:gauge-twisting})
and define the (\(S^1\)\hyp{}equivariant) twisted restriction map
\(R_\tau : \SWModuli{s}{X} \to \CoulThree{\partial X}\),
taking values in the configurations on \(\partial X\)
in the Coulomb slice
(\autoref{def:Coulomb-slice}).
The gauge splittings and gauge twistings we introduce
generalize the double Coulomb slice used in
\cites{Lip2008,Kha2015} and twistings utilized in \cite{KLS2018}.
We prove
regularity, denseness and
``semi\hyp{}infinite\hyp{}dimensionality''
of the Seiberg\hyp{}Witten moduli spaces:
\begin{restatable*}%
{semiinfdimthm}{moduli}
\label{thm:semi-infinite-dimensionality-of-moduli-spaces}
The moduli spaces \(\SWModuli{s}{X}\) are Hilbert manifolds.
The differential of the twisted restriction map
\(R_\tau : \SWModuli{s}{X} \to \CoulThree{\partial X}\)
decomposes into
\(\proj^- D R_\tau:
T \SWModuli{s}{X} \to H^-(\partial X,\mathfrak{s})\)
which is Fredholm
and
\(\proj^+ D R_\tau:
T \SWModuli{s}{X} \to H^+(\partial X,\mathfrak{s})\)
which is compact.
Moreover,
if \(b_0(\partial X)>1\),
then for any connected component
\(Y_0 \subset \partial X\)
the restriction \(R_{\tau,Y_0}\) to \(Y_0\) has dense
differential.
\end{restatable*}
The maps \(\Pi^\pm\) come from a~decomposition
\(\CoulThree{\partial X} = H^+(\partial X, \mathfrak{s})
\oplus H^-(\partial X, \mathfrak{s})\)
according to the eigenvalues
of the operator \((\star d, \Dirac_{B_0})\),
also called a~\emph{polarization}
(\autoref{def:polarization-on-Coulomb-slice}).
The proof
utilizes the Atiyah\hyp{}Patodi\hyp{}Singer boundary value problem
for an~extended linearized Seiberg\hyp{}Witten operator
(\autoref{def:extended-DSW}).
We then prove that its properties transfer
to the restriction
to the Coulomb slice.
Our low regularity setting requires us to prove
a~regularity theorem
(\namedref{thm:low-regularity})
for an~operator of the form \(D = D_0 + K : L^2_1 \to L^2\),
where \(D_0\) is a~Dirac operator and \(K\) is any compact operator,
making it a~result of independent interest.
We also prove a~strong version of the unique
continuation principle for \(D\) (\namedref{thm:unique-cont-dirac-Lr}).
Secondly, we prove a~gluing theorem for a~composite cobordism.
Assume \(X\) splits as \(X = X_1 \cup_Y X_2\)
along a~rational homology sphere \(Y\).
If the auxiliary data of gauge splittings
and gauge twistings
are compatible in
a~suitable sense
(see
\autoref{prop:twistings-are-integral-splittings}
and
\autoref{prop:integral-splittings-on-a-composite-cobordism}),
then the moduli space \(\SWModuli{s}{X}\)
can be recovered from the fiber product of
\(\SWModuli{s_1}{X_1}\) and \(\SWModuli{s_2}{X_2}\)
over the configuration space of \(Y\),
in a~way compatible with the twisted restriction maps:
\begin{restatable*}%
{gluingthm}{gluing}
\label{thm:composing-cobordisms}
Assume \(s_{\mathbb{Z}}\) and \( (s_{1,\mathbb{Z}}, s_{2,\mathbb{Z}})\)
are compatible.
Then there is an~\(S^1\)\hyp{}equivariant diffeomorphism
\(F : \SWModuli{s}{X}
\to \SWModuli[1]{s_1}{X_1} \times_{Y} \SWModuli[2]{s_2}{X_2}\)
such that
\(R_{\tau}\)
is \(S^1\)\hyp{}equivariantly homotopic to
\( \left(R_{\tau_1}
\times_{Y}
R_{\tau_2} \right) \circ F\).
\end{restatable*}
The proof uses the following fact of independent interest.
We show that the restriction map of solutions in \(\SWModuli{s}{X}\)
to a~submanifold in the interior of \(X\) is smooth as a~map
into a~configuration space of \textit{higher} regularity
(\autoref{thm:restriction-is-smooth-for-solutions}).
While it is easy to prove that its image lies
in the space of smooth configurations
(\autoref{lem:interior-smoothness-of-solutions}),
the proof of \textit{smoothness} of this map is non\hyp{}trivial.
\paragraph{Applications.}
The \namedref{thm:semi-infinite-dimensionality-of-moduli-spaces}
together with compactness of moduli spaces proved in \cite{KMbook}
(with minor modifications to account for the double Coulomb slice
instead of the Coulomb\hyp{}Neumann slice used in \cite{KMbook})
show that the maps \(R_\tau : \SWModuli{s}{X} \to \CoulThree{Y}\)
are \textit{semi\hyp{}infinite\hyp{}dimensional cycles} in \(\CoulThree{\partial X}\)
as defined in \cites{Lip2008,Suw2020},
establishing the existence of relative Seiberg\hyp{}Witten invariants of \(X\).
If the boundary \(\partial X\) is connected, these do not depend on the choice
of an~integral splitting (\autoref{lem:uniqueness-of-integral-splittings}).
This also implies that cobordisms \(W\) with \(\partial W = - Y_1 \cup Y_2\)
induce \textit{correspondences}
(defined in \cites{Lip2008,Suw2020})
between the configuration spaces
over \(Y_1\) and \(Y_2\).
Our methods apply to perturbed equations
as well; we omit the perturbations for the sake of simplicity.
Varying the metrics and perturbations gives cobordisms
between the relevant moduli spaces on \(X\),
and changing the reference connections on \(X\)
gives isomorphic moduli spaces with homotopic restriction maps.
This means that the relative invariant of \(X\),
up to a~suitable cobordism relation,
does not depend on the choices of perturbations and metric.
Crucially, the \namedref{thm:composing-cobordisms} says that
the correspondence induced by a~composite cobordism
is (homotopic to) the composition of the respective correspondences,
proving the theory comes with a~TQFT\hyp{}like structure.
We hope to prove that this theory
recovers a~non\hyp{}equivariant version of monopole Floer homology
\(\widetilde{HM}_\ast\)
and to describe a~construction expected to recover
all of the flavors of monopole Floer homology (for rational homology spheres) in an~upcoming work.
Applications of Seiberg\hyp{}Witten\hyp{}Floer theory
suggest including the case \(b_1(\partial X) > 0\)
or allowing \(X\) to have cylindrical or conical ends
(like in \cite{KM1997})
with certain asymptotic conditions on the configurations.
The methods presented here should suffice to prove
regularity and semi\hyp{}infinite\hyp{}dimensionality of the corresponding
moduli spaces.
However, to establish the semi\hyp{}infinite\hyp{}dimensional theory
in full one needs to
carefully select a~slice for the gauge action
and deal with compactness issues
which we hope to address in further work.
Finally, the methods presented here should be applicable
(with appropriate adjustments) for defining
semi\hyp{}infinite\hyp{}dimensional Floer theories
in other contexts, e.g., in the Yang\hyp{}Mills\hyp{}Floer theory.
\paragraph{Organization.}
In \autoref{sec:analytical-preparation} we prove
the regularity theorem for a~Dirac operator with compact perturbation,
the \namedref{thm:low-regularity},
and a~strong unique continuation principle for a~Dirac operator with a~low\hyp{}regularity
potential, the \namedref{thm:unique-cont-dirac-Lr}.
In \autoref{sec:Seiberg-Witten-moduli-spaces-in-split-Coulomb-slice}
we introduce the basic notions of Seiberg\hyp{}Witten theory on \(3\)\hyp{} and \(4\)\hyp{}manifolds.
For a~chosen gauge splitting
the split Coulomb slice is defined,
and for a~gauge twisting a~twisted restriction map to the boundary
is introduced.
The choices of splittings and twistings are shown to be equivalent to a~choice
of an~integral splitting, one which does not require any twisting.
\Cref{sec:properties-of-moduli-spaces} provides proofs of some of the key
properties of the moduli spaces: regularity,
semi\hyp{}infinite\hyp{}dimensionality and denseness of the restriction map
(\namedref{thm:semi-infinite-dimensionality-of-moduli-spaces}).
Finally, \autoref{sec:gluing-along-a-boundary-component}
contains the proof of the \namedref{thm:composing-cobordisms},
showing that moduli on a~composite cobordism
are a~fiber product of the moduli on its components.
\paragraph{Notations.}
In this article we use the notation \(L^p_k\) for Sobolev spaces of regularity \(k\),
in accordance with the literature in gauge theory
and Floer theory in low dimensional topology.
These are often denoted by \(W^{k,p}\) or \(H^{k,p}\) or,
when \(p=2\), simply by \(H^k\).
All manifolds are assumed to be smooth,
submanifolds to be smoothly embedded,
and manifolds with boundary to have smooth boundary.
\paragraph{Acknowledgments.}
I would like to express my gratitude to my graduate advisor Tom Mrowka
for his encouragement, support and sharing many insights
on geometry, topology and analysis.
I would like to thank Maciej Borodzik and Michał Miśkiewicz
for helpful comments on a~draft of this paper.
Most results of this paper have been part of the author's Ph.D. thesis
\cite{Suw2020}, although the proofs have been revised
and some results have been generalized.
In particular, \cite{Suw2020} does not consider \(4\)\hyp{}manifolds
with more than two boundary components,
or the general split Coulomb slice on them.
\section{Analytical preparation}
\label{sec:analytical-preparation}
We prove two results which are fundamental for the surjectivity proofs in
\autoref{sec:properties-of-moduli-spaces}.
The first one, the \namedref{thm:low-regularity}, is a~regularity theorem for
operators of the form \(D = D_0 + K : L^2_1 \to L^2\),
where \(D_0\) is a~first\hyp{}order elliptic operator
and \(K\) is a~compact operator.
In our applications \(K\) is a~multiplication
by an~\(L^2_1\)\hyp{}configuration on
a~\(4\)\hyp{}manifold and thus it does not factor as a~map \(L^2_1 \to L^2_1\).
If it did, elliptic regularity would immediately imply
the regularity result.
The novelty here is the general form of the perturbation \(K\)
which is only assumed to be compact as a~map \(L^2_1 \to L^2\).
The second result, the \namedref{thm:unique-cont-dirac-Lr},
is a~strong unique continuation principle in a~similar low regularity setting.
This can be understood as a~strengthening of the unique continuation results
of \cite{KMbook}*{Section 7}.
\subsection{A~regularity theorem for Dirac operators}
Let \(X\) be a~smooth Riemannian manifold with
asymptotically cylindrical ends (without boundary).
Let \(D_0\) be an~elliptic operator
\(D_0: C^\infty(X;E) \to C^\infty(X;F)\)
of order \(1\)
which is asymptotically cylindrical on the ends of \(X\).
Let \(K: L^2_1(X;E) \to L^2(X;F)\) be a~compact operator
which has a~compact formal adjoint
\(K^\ast: L^2_1(X;F) \to L^2(X;E) \).
Consider the operator
\[D = D_0 + K : L^2_1(X;E) \to L^2(X;F). \]
The aim of this subsection is to prove the following regularity theorem:
\begin{regularitythm}
\label{thm:low-regularity}
Assume that \(v \in L^2(X;E) \) satisfies \(D v = h \)
for some \(h \in L^2(X;F) \).
Then \( v \in L^2_{1}(X;E) \).
Moreover, if \(v \in L^2_{loc}(X;E)\) and \(h \in L^2_{loc}(X;F)\) instead,
then \(v \in L^2_{1,loc}(X;E)\).
\end{regularitythm}
In the course of the proof
we will make use of the following simple lemma
(cf. Lipyanskiy \cite{Lip2008}*{Lemma 44}):
\begin{lemma}
\label{lem:op-weak-comp-convergence}
Suppose the sequence of Hilbert space operators \( \{A_i : V \to W\} \)
is uniformly bounded and weakly convergent to \(A\) in the sense that
for any \(v \in V\) we have \( A_i(v) \to A(v) \).
If \( K : U \to V \) is compact, then
\( A_i \circ K \to A \circ K \) in the operator norm.
\end{lemma}
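The proof is a~standard compactness argument, which we sketch for the reader's convenience.
\begin{proof}[Sketch of proof]
If not, there are \(\varepsilon > 0\), a~subsequence \((A_{i_j})\)
and unit vectors \(u_j \in U\)
with \( \lVert (A_{i_j} - A) K u_j \rVert \geq \varepsilon \).
By compactness of \(K\), after passing to a~further subsequence
we may assume \(K u_j \to w\) in \(V\).
Then
\begin{equation*}
\lVert (A_{i_j} - A) K u_j \rVert
\leq \mleft( \sup\nolimits_i \lVert A_i \rVert + \lVert A \rVert \mright)
\lVert K u_j - w \rVert
+ \lVert (A_{i_j} - A) w \rVert
\longrightarrow 0,
\end{equation*}
a~contradiction.
\end{proof}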
We also need the following version of the G{\aa}rding inequality,
proven in \cite{Shu1992}*{Appendix 1, Lemma 1.4}:
\begin{proposition}[cylindrical G{\aa}rding inequality]
\label{prop:cylindrical-Garding}
For every \(s,t \in \mathbb{R}\) there exists \(C>0\)
such that for any \(u \in C_0^\infty(X;E)\)
\[ \lVert u\rVert_{L^2_{s+1}} \leq C( \lVert D_0 u \rVert_{L^2_s} + \lVert u\rVert_{L^2_t}).\]
\end{proposition}
Since \(C_0^\infty(X;E)\) is dense in \(L^2_1(X;E)\)
and, for \(s = t = 0\), both sides of the inequality
are continuous in the \(L^2_1\)\hyp{}norm,
this case extends to all \(u \in L^2_1(X;E)\);
we will use it in this form.
With these in hand, we are ready to prove
the \namedref{thm:low-regularity}.
\begin{proof}
[Proof of \nameCref{thm:low-regularity}]
By considering
\[
\begin{pmatrix}
0 & D_0^\ast \\
D_0 & 0
\end{pmatrix}
+
\begin{pmatrix}
0 & K^\ast \\
K & 0
\end{pmatrix}
:L^2_{1}(X;E \oplus F) \to L^2(X; E \oplus F)
\]
where \(D_0^\ast\) is the formal adjoint of \(D_0\)
we may assume, without loss of generality,
that \(E = F\) and \(D_0\) is formally self\hyp{}adjoint.
Since \(X\) has bounded geometry
and \(D_0\) is uniformly elliptic,
Proposition 4.1 in \cite{Shu1992}*{Section 1}
implies that the minimal and maximal operators
\(D_0^{min}\) and \(D_0^{max}\)
of \(D_0: L^2(X;E) \to L^2(X;E)\) coincide,
and their domains are equal to \(L^2_1(X;E)\).
Thus \(D_0\) is essentially self-adjoint.
Consider the equation
\begin{equation}
\label{eqn:dirac-large-c}
(D_0+ic) \psi + K \psi = h + i c v
\end{equation}
for large \(c \in \mathbb{R}\).
Certainly, \(\psi = v \in L^2(X;E)\) solves the equation.
We will prove that for large \(c\)
equation \eqref{eqn:dirac-large-c}
has a~unique solution \(\psi\) in \(L^2_1(X;E)\),
which is also unique in \(L^2(X;E)\),
so that \(v = \psi \in L^2_1(X;E)\), as wished.
Firstly, we want to prove \( D_0+ic : L^2_{1}(X;E) \to L^2(X;E) \)
is invertible.
Since the spectrum of \(D_0\) is real,
\( (D_0+ic)^{-1}:L^2(X;E) \to L^2(X;E) \) exists
and is bounded by \( \frac{1}{|c|} \)
(cf. \cite{Lan1993}*{Chapter XIX, Theorem 2.4}).
Therefore by
\autoref{prop:cylindrical-Garding} (applied with \(s=t=0\) and constant \(C\)) we have the first inequality:
\begin{align*}
\lVert (D_0+ic)^{-1} u \rVert_{L^2_1}
&\leq C \left( \lVert D_0 (D_0 + ic)^{-1} u\rVert_{L^2} + \lVert (D_0+ic)^{-1} u \rVert_{L^2} \right)
\\ & \leq
C \left( \lVert u - ic (D_0+ic)^{-1} u \rVert_{L^2} + (1/|c|) \lVert u\rVert_{L^2} \right)
\\ & \leq C (2 + 1/|c|) \lVert u\rVert_{L^2}
\end{align*}
and therefore \( (D_0+ic)^{-1} \)
is a~bounded operator \(L^2(X;E) \to L^2_1(X;E)\).
Secondly, we want to prove weak convergence of
\( (D_0+ic)^{-1} \) to \(0\) as \(|c| \to \infty\),
i.e., that for each \(v \in L^2(X;E)\)
we have \( (D_0+ic)^{-1}(v) \to 0\) in \(L^2_1(X;E)\)
as \(|c| \to \infty\).
The spectral theorem for unbounded self-adjoint operators
\cite{Lan1993}*{Chapter XIX, Theorem 2.7}
provides us with an~orthogonal decomposition
\( L^2(X;E) = \hat{\bigoplus}_n H_n\)
such that the restriction
\(D_n = D_0|_{H_n}:H_n \to H_n\)
is a~bounded operator (in \(L^2\)\hyp{}norm on \(H_n\))
and \(D_0 = \hat{\bigoplus}_n D_n\).
In particular, \(H_n \subset \mathrm{Dom}(D_0) = L^2_1(X;E)\).
For each \(v_n \in H_n\) we thus have,
by \autoref{prop:cylindrical-Garding},
\[ \lVert v_n\rVert_{L^2_1} \leq C( \lVert D_0 v_n\rVert_{L^2} + \lVert v_n\rVert_{L^2})
\leq C (C_n + 1) \lVert v_n\rVert_{L^2} = C_n' \lVert v_n\rVert_{L^2}, \]
where \(C_n = \lVert D_n\rVert_{L^2}\).
Therefore
\begin{equation*}
\lVert (D_0+ic)^{-1} v_n \rVert_{L^2_1}
\leq C_n' \lVert (D_0 + ic)^{-1} v_n \rVert_{L^2}
\leq \frac{C_n'}{|c|} \lVert v_n\rVert_{L^2}
\xrightarrow{|c| \to \infty} 0
\end{equation*}
Since the operators \( (D_0+ic)^{-1} : L^2(X;E) \to L^2_1(X;E) \)
are uniformly bounded for \(|c| \geq 1\)
and finite sums of elements of the subspaces \(H_n\)
are dense in \(L^2(X;E)\),
this proves that for any \(v \in L^2(X;E)\),
\( (D_0+ic)^{-1} v \xrightarrow{|c|\to \infty} 0 \) in \(L^2_1(X;E)\),
i.e., \( (D_0 + ic)^{-1}\) converges weakly to \(0\),
as wished.
We proceed to proving existence and uniqueness of solutions to
\eqref{eqn:dirac-large-c} in \(L^2_1(X;E)\) for large \(|c|\).
Using \autoref{lem:op-weak-comp-convergence}
we conclude that \( T = (D_0+ic)^{-1} K : L^2_1(X;E) \to L^2_1(X;E) \)
converges to \(0\) in the operator norm as \(|c| \to \infty\).
Thus, for large \(|c|\),
the operator \( \mathrm{Id}
+ (D_0+ic)^{-1} K : L^2_1(X;E) \to L^2_1(X;E)\)
is invertible.
Composing with \((D_0 + ic)\) we conclude that
\( D_0+ic + K = D+ic : L^2_1(X;E) \to L^2(X;E) \) is invertible
for large \(|c|\).
Existence and uniqueness of \( \psi \in L^2_1(X;E) \)
solving \eqref{eqn:dirac-large-c} follows.
To conclude the first part of the proof
we need to establish uniqueness of solutions
to \eqref{eqn:dirac-large-c} in \(L^2(X;E)\).
We have that \( (D_0 + ic + K) (\psi - v) = 0\).
This implies that \(\psi - v\) is perpendicular in \(L^2(X;E)\)
to the image of \( D_0^\ast - ic + K^\ast = D_0 - ic + K
:L^2_1(X;E) \to L^2(X;E)\).
However, we already established that for large \(|c|\)
the latter operator is invertible
and it follows that \(\psi - v = 0\);
thus \( v = \psi \in L^2_1(X;E) \).
Finally, it remains to consider the more general case \(h \in L^2_{loc}\).
For any compact
\(A \subset X\) we can
take a~compactly supported bump function \(\rho:X \to [0,1]\)
with \(\rho|_A=1\)
and define \(v' = \rho v \in L^2(X; E)\).
Then \(D v' = h' \) for some \(h' \in L^2(X;E)\)
and therefore \(v' \in L^2_1(X;E)\) as proven above,
so \(v|_A \in L^2_1(A;E|_A)\).
Repeating the argument for every compact \(A \subset X\)
shows that \(v \in L^2_{1,loc}(X;E) \).
\end{proof}
\subsection{A~strong UCP for Dirac operators}
Let \( M\) be a~connected Riemannian manifold
and \(S\) a~real (resp. complex) Dirac bundle over it
(e.g., the real (resp. complex)
spinor bundle associated to a~\spinc{}-structure on \(M\))
with connection \(A\).
Denote by \(\Dirac_A:\Gamma(S) \to \Gamma(S)\) the corresponding Dirac operator.
Let \(V \in L^n(M;\mathbb{R})\) be a~potential.
Here we prove the following
unique continuation theorem
for spinors and potentials of low regularity.
\begin{strongucpthm}
\label{thm:unique-cont-dirac-Lr}
The differential inequality
\begin{equation*}
\mleft|\Dirac_A \Phi\mright| \leq V |\Phi|
\end{equation*}
has the unique continuation property in \(L^{\frac{2n}{n+2}}_{1,loc}(M;S)\):
any solution in this space which vanishes on a~nonempty open set vanishes identically.
\end{strongucpthm}
The version of this theorem for \(d+d^\ast\)
instead of \(\Dirac_A\) has been proven in \cite{Wol1992}*{Theorem 2}:
\begin{theorem}
[Wolff, \cite{Wol1992}*{Theorem 2}]
\label{thm:wolff-ucp}
Suppose \(M\) is an~\(n\)-dimensional manifold, \(n \geq 3\),
\(p = \frac{2n}{n+2}\),
and \(\omega \in L^p_{1,loc}(\Omega^\ast(M))\)
such that \( |d \omega| + |d^\ast \omega| \leq V |\omega|\)
with \(V \in L^n_{loc}(M)\).
Then if \(\omega\) vanishes on an~open set,
it vanishes identically.
\end{theorem}
It thus suffices to reduce our problem to Wolff's result,
following the idea of \cite{Mand1994}.
Since the proof in \cite{Mand1994} does not explain
the reduction rigorously, we describe the procedure below.
\begin{proof}
The problem is local, thus it suffices to consider the case
of \(M\) being \(\mathbb{R}^n\) with some metric \(g\).
If \(A_0\) is the flat connection on \(S\),
\(\Dirac_A - \Dirac_{A_0}\) is a~smooth operator of order \(0\),
so (again using locality) we can assume that \(A\) is flat.
Moreover, by contractibility of \(\mathbb{R}^n\),
we can decompose \(S\) into irreducible components,
each of which must be isomorphic to the real (resp. complex)
spinor bundle \(\widetilde S\) (resp. \(\widetilde S_{\mathbb{C}}\))
associated to the unique spin structure.
Thus we have reduced the problem to the case where
\(S = \widetilde S \otimes \mathbb{R}^k\)
(resp. \(S = \widetilde S \otimes \mathbb{C}^k\)) for some \(k\).
The real (resp. complex) spinor bundle \(\widetilde S\) (resp. \(\widetilde S_{\mathbb{C}}\))
embeds into \(\mathrm{C\ell}_n(\mathbb{R}^n)\)
(resp. \(\mathbb{C}\ell_n(\mathbb{R}^n)\)).
Furthermore, \cite{LM1989}*{Theorem 5.12} implies
that the Dirac operator on the Clifford bundle
\(\mathrm{C\ell}_n(\mathbb{R}^n)\)
(resp. \(\mathbb{C}\ell_n(\mathbb{R}^n) = \mathrm{C\ell}(\mathbb{R}^n)
\otimes \mathbb{C}\))
is equivalent to \( d + d^\ast\) on
\(\Lambda^\ast(\mathbb{R}^n)\)
(resp. \(\Lambda^\ast(\mathbb{R}^n) \otimes \mathbb{C}\))
via the canonical isomorphism
\(\mathrm{C\ell}(\mathbb{R}^n) \simeq \Lambda^\ast(\mathbb{R}^n)\).
Thus, \(\Dirac_{A_0}\) is equivalent to
\( (d+d^\ast) \otimes 1_{\mathbb{R}^k} :
\Lambda^\ast(\mathbb{R}^n) \otimes \mathbb{R}^k \to
\Lambda^\ast(\mathbb{R}^n) \otimes \mathbb{R}^k\)
(resp. \( (d+d^\ast) \otimes 1_{\mathbb{C}^k} :
\Lambda^\ast(\mathbb{R}^n) \otimes \mathbb{C}^k \to
\Lambda^\ast(\mathbb{R}^n) \otimes \mathbb{C}^k\)).
The proof of \autoref{thm:wolff-ucp} goes through
for differential forms with coefficients in \(\mathbb{R}^k\)
(resp. \(\mathbb{C}^k\)),
establishing the unique continuation property for
\((d+d^\ast) \otimes 1_{\mathbb{R}^k}\)
(resp. \( (d+d^\ast) \otimes 1_{\mathbb{C}^k}\)).
This finishes the proof.
\end{proof}
In the article we will use the following special case:
\begin{corollary}[UCP for Dirac operators in 4d]
\label{cor:unique-cont-dirac-4d}
In the setting of
the \namedref{thm:unique-cont-dirac-Lr},
assume \(n=4\) and \(V \in L^2_1(M;\mathrm{Aut}(S))\).
Then any solution \(\Phi \in L^2_{1,loc}(M;S)\) to
\begin{equation*}
\Dirac_A \Phi + V \Phi = 0
\end{equation*}
which is zero on some open set
is identically zero.
\end{corollary}
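This is a~routine reduction to the \namedref{thm:unique-cont-dirac-Lr}; we spell it out for completeness.
\begin{proof}[Sketch of proof]
Pointwise, \( \mleft| \Dirac_A \Phi \mright| = |V \Phi| \leq |V|\, |\Phi| \),
where \(|V|\) denotes the pointwise operator norm of \(V\).
Unique continuation is a~local statement,
so we may work on precompact open subsets,
where the \(4\)\hyp{}dimensional Sobolev embedding
\(L^2_1 \hookrightarrow L^4\) gives \(|V| \in L^4 = L^n\),
and H\"older's inequality gives
\(L^2_{1} \subset L^{4/3}_{1} = L^{\frac{2n}{n+2}}_{1}\).
Thus the \namedref{thm:unique-cont-dirac-Lr} applies to \(\Phi\).
\end{proof}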
\section{Seiberg-Witten moduli spaces in split Coulomb slice}
\label{sec:Seiberg-Witten-moduli-spaces-in-split-Coulomb-slice}
In this section we introduce the moduli spaces of the Seiberg\hyp{}Witten equations
on a~Riemannian \(4\)\hyp{}manifold with boundary a~collection of rational homology
spheres,
together with the restriction maps to the boundary.
For \(3\)\hyp{}manifolds, we introduce the \emph{Coulomb slice}
and its \emph{polarization}, a~decomposition of the tangent space
into a~sum of two infinite\hyp{}dimensional subspaces.
For \(4\)\hyp{}manifolds, we introduce the \emph{double Coulomb slice}
and what we call the \emph{split Coulomb slice}
together with the \emph{split gauge group}.
Since the restriction maps are generally not invariant with respect
to the split gauge group, we need to introduce appropriate
\emph{twisted restriction maps} as well.
The split gauge fixing is a~key novel element that generalizes the gauge slice
introduced by Khandhawit \cite{Kha2015}
and Lipyanskiy \cite{Lip2008}.
It simplifies the proof of the \namedref{thm:composing-cobordisms}
letting us to reduce it to the case of untwisted restriction maps.
\subsection{Coulomb slice on \texorpdfstring{\(3\)}{3}-manifolds}
We begin by introducing
\emph{polarizations} on the Seiberg\hyp{}Witten
configuration space in Coulomb gauge
on an~oriented rational homology sphere \(Y\)
and on collections of such.
Let \(g\) be a~Riemannian metric
and \(\mathfrak{s}\) be a~\spinc{} structure on \(Y\).
Denote by \(S_Y\) the associated spinor bundle
and choose a~smooth \spinc{} connection \(B_0\).
The \emph{Seiberg\hyp{}Witten configuration space} on \(Y\)
is the space
\begin{equation*}
\ConfThree{Y} = \ConfThreeFull{B_0}{Y}
\end{equation*}
consisting of pairs \((B,\Psi)\) of a~\emph{\spinc{} connection}
and a~\emph{spinor} on \(Y\).
In Seiberg\hyp{}Witten theory one investigates
the Chern\hyp{}Simons\hyp{}Dirac functional \(\mathcal{L}\)
\begin{equation*}
\mathcal{L}(B, \Psi)
= - \frac{1}{8} \int_Y (B^t - B^t_0)
\wedge (F_{B^t} + F_{B^t_0} )
+ \frac{1}{2} \int_Y \langle \Dirac_{B} \Psi, \Psi \rangle
\end{equation*}
on the configuration space.
The \emph{gauge group} \(\Gauge(Y) = L^2_{3/2}(Y;S^1)\) acts on \(\ConfThree{Y}\)
via \(u (B, \Psi) = (B - u^{-1}du, u \Psi)\)
where \(u \in \Gauge(Y)\),
leaving \(\mathcal{L}\) invariant.
If one used spaces of higher regularity,
one could work with the quotient of the configuration space by the action
of the gauge group.
However, in the low regularity setting the action of \(\Gauge(Y)\)
on the spinors is not continuous.
Because of that
(and in applications
concerning the Seiberg\hyp{}Witten stable homotopy type, cf. \cite{KLS2018}),
it is preferable to take the Coulomb slice as the model for the quotient
by the identity component of the gauge group.
Indeed, for \(Y\) a~rational homology sphere
the Hodge decomposition gives
the \(L^2\)\hyp{}orthogonal decomposition
\begin{equation*}
\Omega^1(Y) = \Omega^1_C(Y) \oplus \Omega^1_{cC}(Y)
\end{equation*}
where
\begin{math}
\Omega^1_{cC}(Y) = \{ b \in \Omega^1(Y) | d^\ast b = 0\}
= d^\ast (\Omega^2(Y))
\end{math}
is the space of (smooth) coclosed forms and
\begin{math}
\Omega^1_C(Y) = \{ b \in \Omega^1(Y) | db = 0 \}
= d(\Omega^0(Y))
\end{math}
is the space of (smooth) closed forms.
Denote by \(\proj_d\) the projection
\begin{math}
\proj_d: \SForms{1/2}{Y}
\to \SForms[C]{1/2}{Y}
= d\left( L^2_{3/2} (i \Omega^0(Y))\right)
\end{math}
along \(\SForms[cC]{1/2}{Y}\).
\begin{lemma}
[gauge fixing in \(3\)d]
\label{lem:gauge-fixing-in-3-d}
On \(Y\), there is a~continuous choice of
\emph{based} and \emph{contractible} gauge transformations
putting forms in the Coulomb slice, i.e., a~homomorphism
\begin{align*}
\SForms{1/2}{Y} &\to \GaugeIdBased(Y) \\
a & \mapsto u_a
\end{align*}
such that
\begin{math}
a - u_a^{-1} du_a = \mleft( 1-\proj_d\mright)a \in \SForms[cC]{1/2}{Y}
\end{math},
where
\begin{math}
\GaugeIdBased(Y)
= \left\{ e^f \middle| f \in L^2_{3/2}(i\Omega^0(Y)), \int_Y f = 0 \right\}
\subset \Gauge(Y)
\end{math}.
For each \(a\), there is exactly one such \(u_a\).
\end{lemma}
\begin{proof}
Denote \(\Omega^0_0(Y) = \left\{ f \in \Omega^0(Y) \middle| \int_Y f = 0 \right\} \).
The exterior derivative
\(d : L^2_{3/2}(\Omega^0_0(Y)) \to \SForms[C]{1/2}{Y} \)
has inverse \(G_d\).
Recalling that \(\proj_d\) denotes the projection onto
\(d(\Omega^0(Y))\) along the coclosed forms,
we take
\(u_a = e^{G_d \proj_d a}\),
which has the desired properties since
\(u_a^{-1} d u_a = d \mleft( G_d \proj_d a \mright) = \proj_d a\).
Uniqueness follows from the fact that
\(df = 0\) and \(\int_Y f = 0\) imply \(f=0\).
\end{proof}
Moreover, for \(Y\) a~rational homology sphere
we have \(\Gauge(Y) = \GaugeId(Y) =
\left\{ e^f \middle| f \in L^2_{3/2}(Y;i\mathbb{R}) \right\} \).
Indeed, taking \(\tilde{u} = u_{-u^{-1}du}\, u\)
gives us \(\tilde{u}\) with \(d^\ast(\tilde{u}^{-1} d \tilde{u})=0\);
since \(\tilde{u}^{-1} d \tilde{u}\) is also closed and \(b_1(Y) = 0\),
this implies \(d \tilde{u}=0\), thus \(\tilde{u}\)
is constant and \(u \in \GaugeId(Y)\).
It follows that there are bijections
\begin{align*}
\CoulThree{Y} &\leftrightarrow
\quotient{\mleft( \ConfThree{Y} \mright)}{\GaugeIdBased(Y)},
\\
\quotient{\mleft( \CoulThree{Y} \mright) }{S^1} &\leftrightarrow
\quotient{\mleft( \ConfThree{Y} \mright)}{\Gauge(Y)},
\end{align*}
justifying the restriction to the Coulomb slice:
\begin{definition}
[Coulomb slice]
\label{def:Coulomb-slice}
The \emph{Coulomb slice} on \(Y\)
with respect to the reference connection \(B_0\)
is the space of configurations
\begin{equation*}
\CoulThree{Y}
= \CoulThreeFull{B_0}{Y}
\subset
\ConfThree{Y}.
\end{equation*}
\end{definition}
The following subspaces are crucial to the analysis of
the Atiyah\hyp{}Patodi\hyp{}Singer boundary value problem
for the Seiberg\hyp{}Witten equations on \(4\)\hyp{}manifolds
with boundary \(Y\).
\begin{definition}
[polarization on the Coulomb slice]
\label{def:polarization-on-Coulomb-slice}
We define
\( H^+(Y,\mathfrak{s})\) (resp. \(H^-(Y,\mathfrak{s})\)) \(\subset \TangCoulThree{Y}\)
to be the closure of the span of the eigenvectors with positive
(resp. nonpositive)
eigenvalues of
\begin{equation*}
(\star d) \oplus \Dirac_{B_0} :
\TangCoulThree{Y} \to \TangCoulThree{Y}.
\end{equation*}
We denote by \(\Pi^\pm: \TangCoulThree{Y} \to H^\pm(Y,\mathfrak{s})\)
the projection onto
\(H^\pm(Y,\mathfrak{s})\) along \(H^\mp(Y,\mathfrak{s})\).
\end{definition}
One of our goals is to prove that the moduli spaces
of solutions to the Seiberg\hyp{}Witten equations on \(X\)
are, in a~precise sense, comparable to
the negative subspace \(H^-(Y,\mathfrak{s})\)
via the restriction map
(cf. \namedref{thm:semi-infinite-dimensionality-of-moduli-spaces}).
Finally, if \(Y = \bigsqcup_i Y_i\) is a~disjoint union of
oriented rational homology spheres \(Y_i\)
then we define
\begin{align*}
\ConfThree{Y}
= \prod_i \ConfThree[i]{Y_i},
&\quad
\CoulThree{Y}
= \prod_i \CoulThree[i]{Y_i},
\\
H^\pm(Y,\mathfrak{s})
&= \prod_i H^\pm(Y_i,\mathfrak{s}_i).
\end{align*}
Moreover, note that there are natural identifications
between configuration spaces for \(Y\)
and oppositely oriented \(-Y\).
For a~\spinc{} structure \(\mathfrak{s}\)
with its spinor bundle \(S_Y\)
there is the conjugate \spinc{} structure \(\overline{\mathfrak{s}}\)
determined by the conjugate bundle \(\overline{S}_Y\),
and the anti\hyp{}linear isomorphism \(S_Y \simeq \overline{S}_Y\)
induces natural affine isomorphisms
\begin{align*}
\ConfThree{Y}
&\simeq \Configuration{-Y}{\overline{\mathfrak{s}}}
\quad
\CoulThree{Y}
\simeq \Configuration[cC]{-Y}{\overline{\mathfrak{s}}}
\quad
H^\pm(Y,\mathfrak{s})
\simeq H^\mp(-Y,\overline{\mathfrak{s}}).
\end{align*}
\subsection{Split Coulomb slice on \texorpdfstring{\(4\)}{4}-manifolds}
\label{sec:split-Coulomb-slice-on-4-manifolds}
We turn our attention to the Seiberg\hyp{}Witten equations
and gauge fixings for configurations on a~connected oriented
\(4\)\hyp{}manifold \(X\) with nonempty boundary \(\partial X \neq
\varnothing\) satisfying \(b_1(\partial X) = 0\),
oriented using the outward normal.
As explained by Khandhawit \cite{Kha2015},
the most convenient slice for these is a~kind of a~\textit{double} Coulomb slice
(which was already used by Lipyanskiy \cite{Lip2008}),
which imposes coclosedness of the connection \(1\)\hyp{}form
on both \(X\) and \(\partial X\), as well as an~auxiliary condition
near \(\partial X\).
We drop this auxiliary condition from the definition of
the \emph{double Coulomb slice}
and instead introduce the \emph{split Coulomb slice}
which generalizes the constructions of Khandhawit and Lipyanskiy.
This allows one to choose a~gauge fixing which does not require twisting
or ones that are more geometric in nature,
depending on one's needs.
Indeed, twisting is necessary in Khandhawit's and Lipyanskiy's gauge fixing,
in which
the restriction map may not commute with the residual gauge group action.
We begin by introducing the Seiberg\hyp{}Witten equations
on \(4\)\hyp{}manifolds.
\begin{definition}
[Seiberg-Witten equations]
\label{def:Seiberg-Witten-equations}
The \emph{Seiberg\hyp{}Witten map} is defined by
\begin{equation*}
\SW : \ConfFour{X} \to \SWDomain{X},
\end{equation*}
\begin{equation*}
\SW(A,\Phi) = \mleft(
\frac 1 2 F^+_{A^t} - \rho^{-1}((\Phi\Phi^\ast)_0),
\Dirac_{A}^+ \Phi
\mright),
\end{equation*}
where \(A^t\) denotes the connection induced by \(A\) on \(\det(S^+_X)\)
and \(F^+_{A^t}\) denotes the self\hyp{}dual part of its curvature,
according to the splitting \( \Lambda^2(X) = \Lambda^+(X) \oplus
\Lambda^-(X)\) by the eigenspaces of the Hodge star \(\star\).
The \emph{Seiberg\hyp{}Witten equations}
are the equations given by \(\SW(A,\Phi)=0\), that is,
\begin{equation*}
\left\{
\begin{aligned}
\frac 1 2 F^+_{A^t} - \rho^{-1}((\Phi\Phi^\ast)_0) \hspace{-0.5em}
&= 0, \\
\Dirac_{A}^+ \Phi &= 0.
\end{aligned}
\right.
\end{equation*}
\end{definition}
Note that continuity and smoothness of the map \(\SW\) follow from
\autoref{thm:multiplication} and the
fact that continuous multilinear maps on Banach spaces
are smooth.
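For instance, for a~continuous bilinear map \(B : V_1 \times V_2 \to W\)
of Banach spaces one computes directly that
\begin{equation*}
D_{(x,y)} B (h,k) = B(h,y) + B(x,k),
\end{equation*}
so \(DB\) is a~continuous linear function of \((x,y)\)
and all derivatives of order \(\geq 3\) vanish;
the case of multilinear maps of higher arity is analogous.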
These equations are equivariant with respect to the action of the gauge group
\( \Gauge(X) = L^2_2(X;S^1) \).
In particular, the solution set is invariant under this action:
\begin{lemma}
[gauge group action on a~\(4\)\hyp{}manifold]
\label{lem:gauge-action-4d}
The gauge group \(\Gauge(X)\) acts on \(\ConfFour{X}\)
via \(u (A, \Phi) = (A - u^{-1}du, u \Phi)\)
where \(u \in \Gauge(X)\).
Moreover, \(\SW(A, \Phi) = 0\) if and only if \(\SW(u(A,\Phi)) = 0\).
\end{lemma}
Note that this action is not continuous
since the multiplication \(L^2_2(X) \times L^2_1(X) \to L^2_1(X)\)
is not continuous.
It is well\hyp{}defined since
\(\Gauge(X) \subset L^\infty(X) \cap L^2_2(X)\)
and the multiplication \( (L^2_2(X) \cap L^\infty(X))
\times L^2_1(X) \to L^2_1(X)\)
is continuous.
In order to prove that the moduli spaces of solutions are manifolds
we need to investigate the differential of \(\SW\):
\begin{align*}
D_{(A,\Phi)}\SW
&: \TangFour{X} \to \SWDomain{X}, \\
D_{(A,\Phi)}\SW&(a,\phi)
= (d^+ a, \Dirac^+_{A_0} \phi)
+ ( -\rho^{-1}(\phi \Phi^\ast + \Phi \phi^\ast)_0,
\rho(a) \Phi + \rho(\diff{A}) \phi).
\end{align*}
Similarly, at \( (e,A,\Phi) \) the differential of the gauge group action
is:
\begin{align*}
T_e \Gauge(X) = L^2_2(X; i\mathbb{R}) \to& \TangFour{X},\\
f \mapsto& (-df, f \Phi),
\end{align*}
as one computes from
\( \frac{d}{dt}\Big|_{t=0}\, e^{tf}(A,\Phi)
= \frac{d}{dt}\Big|_{t=0} \mleft( A - t\, df,\, e^{tf}\Phi \mright)
= (-df, f\Phi) \).
As in dimension \(3\),
we can fix gauge using the Coulomb condition,
i.e., require that the \(1\)\hyp{}form is coclosed.
Adding the same condition on the boundary \(\partial X\)
ensures that the restriction to the boundary
lies in the previously defined Coulomb slice
(cf. \autoref{def:Coulomb-slice}):
\begin{definition}
[double Coulomb slice]
\label{def:double-Coulomb-slice}
We define the \emph{double Coulomb slice}:
\begin{equation*}
\Omega^1_{CC}(X) = \{ a \in \Omega^1(X) | d^\ast a = 0,
d^\ast(\iota^\ast_{\partial X} a) = 0 \}.
\end{equation*}
The gauge group preserving it is called the
\emph{harmonic gauge group}:
\begin{equation*}
\GaugeHarm(X) = \{ u : X \to S^1 | u^{-1}du \in \SForms[CC]{1}{X}\}.
\end{equation*}
\end{definition}
While the action of the full gauge group \(\Gauge(X)\) is not continuous,
the action of \(\GaugeHarm(X)\) is,
which will be proven in \autoref{lem:harmonic-gauge-group-acts-continuously}.
To define the split Coulomb slice we need to first understand
the harmonic gauge group and its relation to harmonic functions
and forms on \(X\).
Notice that we have a~well\hyp{}defined homomorphism
\begin{align}
\label{eqn:gauge-diff}
\delta : \Gauge(X) &\longrightarrow \SForms{1}{X} \\
\nonumber
u &\longmapsto \delta (u) = u^{-1}du
\end{align}
which, restricted to harmonic gauge transformations, induces
\begin{equation*}
\delta : \GaugeHarm(X) \to i\mathcal{H}^1_D(X)
\end{equation*}
where
\begin{equation*}
\mathcal{H}^1_D(X) = \left\{ a \in \Omega^1(X) \middle|
da=0, d^\ast a=0, \iota^\ast_{\partial X} a = 0 \right\}
\end{equation*}
is the space of \emph{harmonic \(1\)\hyp{}forms with Dirichlet boundary conditions}.
Note that \(\delta\) (both on \(\Gauge(X)\) and on \(\GaugeHarm(X)\))
is an~inclusion modulo \(S^1\), i.e., \(\ker \delta = S^1\),
the group of constant gauge transformations.
On the other hand, the exponential map
\begin{align*}
\exp : L^2_2(i \Omega^0(X))
&\longrightarrow \Gauge(X)
\\
f & \longmapsto e^f
\end{align*}
restricted to the space of \emph{doubly harmonic functions}
\begin{equation*}
\mathcal{H}(X)
=
\left\{ f \in \Omega^0(X) \middle| \Delta f = 0,
\Delta \mleft( f|_{\partial X} \mright) = 0 \right\}
\end{equation*}
yields a~homomorphism
\begin{equation*}
\exp : i \mathcal{H}(X) \to \GaugeHarm(X)
\end{equation*}
since the conditions \(\Delta f = 0\) and \(\Delta(f|_{\partial X})=0\)
for an~imaginary\hyp{}valued function \(f\)
are equivalent to \(df \in \SForms[CC]{1}{X} \).
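Concretely, assuming the sign convention \(\Delta = d^\ast d\) on functions,
\begin{equation*}
d^\ast (df) = \Delta f
\qquad \text{and} \qquad
d^\ast_{\partial X} \mleft( \iota^\ast_{\partial X} (df) \mright)
= d^\ast_{\partial X} \mleft( d \mleft( f|_{\partial X} \mright) \mright)
= \Delta_{\partial X} \mleft( f|_{\partial X} \mright),
\end{equation*}
so \(df\) lies in \(\SForms[CC]{1}{X}\) exactly when both Laplacians vanish.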
Importantly, the composition
\(\delta \circ \exp : L^2_2(i \Omega^0(X)) \to \SForms{1}{X}\)
is exactly the differential \(f \mapsto df\),
since \(\delta(e^f) = e^{-f} d(e^f) = df\).
Denote the image of this exponential map by
\(\GaugeHarmId(X) = \exp( i \mathcal{H}(X))\).
As the next proposition explains,
\(\GaugeHarmId(X)\) is the identity component of \(\GaugeHarm(X)\).
Thus, our goal will be to find a~gauge fixing
that dispenses with the action of this identity component,
saving only the action by \(S^1\), the constant elements.
\begin{proposition}
[sequence of harmonic gauge groups]
\label{prop:sequence-of-harmonic-gauge-groups}
The following sequence is exact:
\begin{equation}
\label{eqn:harmonic-gauge-sequence}
\begin{tikzcd}
0 \arrow{r}
& \GaugeHarmId(X) \arrow{r}
& \GaugeHarm(X) \arrow{r}
& \pi_0\GaugeHarm(X) \arrow{r}
& 0.
\end{tikzcd}
\end{equation}
The identity component
\(\GaugeHarmId(X)\) is isomorphic to
\(S^1 \times \mathbb{R}^{b_0(\partial X)-1}\)
and the group of components
\(\pi_0\GaugeHarm(X)\)
is naturally isomorphic to \(H^1(X; \mathbb{Z})\).
\end{proposition}
\begin{remark}
Recall that \(H^1(X; \mathbb{Z}) \simeq \mathrm{Hom}(\pi_0(X), \mathbb{Z})\)
has no torsion.
\end{remark}
\begin{proof}
Crucial to the understanding of the gauge group is the homomorphism
\eqref{eqn:gauge-diff}.
Hodge theory provides an~identification
\begin{math}
\mathcal{H}^1_D(X) \simeq H^1(X,\partial X; \mathbb{R}).
\end{math}
Thus, we will consider \(\delta\) as a~map
\begin{math}
\delta : \GaugeHarm(X) \to H^1(X, \partial X; i \mathbb{R})
\end{math}
with kernel \(S^1\).
This will be used to establish a~map of horizontal short exact sequences
\begin{equation}
\label{eqn:gauge-groups-and-homology}
\begin{tikzcd}
&[-20pt] S^1 \arrow[hookrightarrow]{d}{} \arrow[equals]{r}{}
&[-32pt] S^1 \arrow[hookrightarrow]{d}{}
&[-20pt] 0 \arrow{d}{}
&[-42pt] \\
0 \arrow{r}{}
& \GaugeHarmId(X) \arrow[twoheadrightarrow]{d}{} \arrow[hookrightarrow]{r}{}
& \GaugeHarm(X) \arrow{d}{\frac{\delta}{2 \pi i}} \arrow{r}{}
& \pi_0 \GaugeHarm(X)
\arrow[hookrightarrow]{d}{\frac{[\delta]}{2 \pi i}} \arrow{r}{}
& 0
\\
0 \arrow{r}{}
& \quotient{H^0(\partial X; \mathbb{R})}{H^0(X; \mathbb{R})}
\arrow{r}{} \arrow{d}{}
& H^1(X, \partial X; \mathbb{R}) \arrow{r}{}
\arrow[twoheadrightarrow]{d}{}
& H^1(X; \mathbb{R}) \arrow{r}{} \arrow[twoheadrightarrow]{d}{}
& 0
\\
& 0
& \quotient{H^1(X; \mathbb{R})}{H^1(X; \mathbb{Z})}
\arrow[equals]{r}{}
& \quotient{H^1(X; \mathbb{R})}{H^1(X; \mathbb{Z})}
&
\end{tikzcd}
\end{equation}
where the vertical sequences are also exact,
as will be shown in the course of the proof.
Firstly, we prove that \( \pi_0 \GaugeHarm(X) \simeq H^1(X; \mathbb{Z})\).
Notice that for any closed loop
\(\gamma \subset X\) the period of \(\delta(u)\),
i.e., the integral \(\int_\gamma \delta(u)\),
is an~integer multiple of \(2 \pi i\);
and it is zero whenever \(\gamma\) is contractible.
(In fact it is the obstruction to lifting
\(u|_\gamma : \gamma \to S^1\) to a~map
\(u|_\gamma : \gamma \to \mathbb{R}\).)
This way any \(u \in \GaugeHarm(X)\)
determines an~element \( [u] \in H^1(X; 2 \pi i \mathbb{Z})\)
and we get a~homomorphism
\(\GaugeHarm(X) \to H^1(X; 2 \pi i \mathbb{Z})\).
Since the periods (having values in \(2 \pi i \mathbb{Z}\))
do not change under homotopy,
this descends to a~map
\(\pi_0 \GaugeHarm(X) \to H^1(X; 2 \pi i \mathbb{Z})\)
and from its construction it follows
that it coincides with the composition
\begin{equation*}
\GaugeHarm(X) \xrightarrow{\delta}
H^1(X, \partial X; i \mathbb{R})
\to H^1(X; i \mathbb{R})
\end{equation*}
which has image in \(H^1(X; 2 \pi i \mathbb{Z})\).
It remains to notice that any element
\(x \in H^1(X; 2 \pi i \mathbb{Z})\)
can be lifted to an~element
\(\widetilde{x} \in H^1(X, \partial X; i \mathbb{R}) \simeq i
\mathcal{H}^1_D(X)\)
and then integrated along curves
to obtain an~element \(u_{\tilde{x}} \in \GaugeHarm(X)\)
mapping to \(x\) (see \eqref{eqn:integrating-1-forms}).
After dividing \(1\)\hyp{}forms by \(2\pi i\)
we obtain a~natural isomorphism
\(\pi_0 \GaugeHarm(X) \simeq H^1(X; \mathbb{Z})\),
as wished.
Moreover, this shows that the kernel \(K\) of the map
\( \GaugeHarm(X) \to H^1(X; 2 \pi i \mathbb{Z})\)
maps via \(\delta\) to the kernel of
\(H^1(X, \partial X; i \mathbb{R}) \to H^1(X; i \mathbb{R})\).
We thus get the map
\begin{equation*}
\delta|_K : K \to \im(H^0(\partial X; i \mathbb{R}))
\end{equation*}
to the image of \(H^0( \partial X; i \mathbb{R})\) in
\(H^1(X, \partial X; i \mathbb{R})\).
The map \(\delta|_K\) itself has kernel \(S^1\).
With this in mind, we turn our focus to \(\GaugeHarmId(X) =
\exp(\mathcal{H}(X))\).
Since any harmonic function \(f\) on \(X\) is determined
by its restriction \(f|_{\partial X}\) to \(\partial X\),
and harmonic functions on \(\partial X\) are locally constant,
we have an~isomorphism
\begin{equation*}
H^0(\partial X; \mathbb{R})
\simeq
\mathcal{H}(X)
\end{equation*}
and we will denote any element \(g\) in \(H^0(\partial X; \mathbb{R})\)
by \(f|_{\partial X}\) for the unique \(f \in \mathcal{H}(X)\)
such that \(f|_{\partial X} = g\).
We obtain a~canonically defined surjection
\begin{align*}
\exp : H^0(\partial X; i\mathbb{R}) &\longrightarrow \GaugeHarmId(X) \\
f|_{\partial X} &\longmapsto e^{f}
\end{align*}
with kernel generated by restrictions of \(H^0(X;2\pi i \mathbb{Z})\).
After dividing by \(2 \pi i\) we get an~isomorphism
\(\GaugeHarmId(X) \simeq
H^0(\partial X; \mathbb{R})/H^0(X;\mathbb{Z})
\simeq
S^1 \times
(H^0(\partial X; \mathbb{R}) / H^0(X;\mathbb{R}))\), as wished.
Finally, recall that the composition
\begin{equation*}
H^0(\partial X; i \mathbb{R})
\simeq \mathcal{H}(X)
\xrightarrow{\exp}
\GaugeHarmId(X) \xrightarrow{\delta}
H^1(X, \partial X; i\mathbb{R})
\end{equation*}
is given by \( f|_{\partial X} \mapsto df \)
and therefore, by Hodge theory,
represents the boundary map in the exact sequence
\begin{equation*}
0 \to H^0(X;\mathbb{R}) \to H^0(\partial X; \mathbb{R})
\to H^1(X,\partial X; \mathbb{R})
\to H^1(X; \mathbb{R})
\to H^1(\partial X; \mathbb{R}) = 0,
\end{equation*}
where the last group vanishes since \(b_1(\partial X) = 0\).
Thus \(\delta(\GaugeHarmId(X)) = \im(H^0(\partial X; i\mathbb{R}))
= \delta(K)\).
Since \(\ker \delta|_{\GaugeHarmId(X)} = S^1
= \ker \delta|_{K}\)
and \(\GaugeHarmId(X) \subseteq K\),
we obtain that \(\GaugeHarmId(X) = K\),
i.e., the sequence \eqref{eqn:harmonic-gauge-sequence} is exact,
as wished.
\end{proof}
\begin{lemma}
[\(\GaugeHarm\) acts continuously]
\label{lem:harmonic-gauge-group-acts-continuously}
The action of \(\GaugeHarm(X)\) on \(\ConfFour{X}\) is smooth.
\end{lemma}
\begin{proof}
By \autoref{prop:sequence-of-harmonic-gauge-groups}
it suffices to prove that the action of
\(\GaugeHarmId(X)\) is smooth.
Further, by \autoref{thm:multiplication}
it suffices to prove that \(\GaugeHarmId(X) \subset L^2_3(X; S^1)\)
and that this inclusion is continuous.
Since \(i \mathcal{H}(X)\) is a~finite\hyp{}dimensional space
of smooth functions, all norms on it are equivalent
and the map \(\exp : i \mathcal{H}(X) \to L^2_3(X; S^1)\)
is smooth;
hence \(\GaugeHarmId(X) = \exp(i\mathcal{H}(X))\)
lies in \(L^2_3(X; S^1)\) with continuous inclusion,
finishing the proof.
\end{proof}
Our goal is to reduce the gauge group action
to the action of \(S^1\) and the action of a~chosen
lift of \(\pi_0\GaugeHarm(X)\) to \(\GaugeHarm(X)\).
Precisely, we will consider splittings
\(s : \pi_0\GaugeHarm(X) \to \GaugeHarm(X)\)
of \eqref{eqn:harmonic-gauge-sequence}.
Let us compare different splittings.
If \(s, s'\) are two such splittings,
then for any \([u] \in \pi_0 \GaugeHarm(X)\) we have
\(s([u])\, s'([u])^{-1} \in \GaugeHarmId(X)\).
Therefore any two gauge splittings differ by a~homomorphism
\(\pi_0 \GaugeHarm(X) \to \GaugeHarmId(X)\).
In order to choose the gauge fixing we need to understand
how a~gauge splitting induces another splitting on the level of homology.
\begin{proposition}
[homological splitting]
\label{prop:homological-splitting}
Any splitting \(s : \pi_0\GaugeHarm(X) \to \GaugeHarm(X)\)
of \eqref{eqn:harmonic-gauge-sequence}
induces a~splitting
\(s^H : H^1(X;\mathbb{R}) \to H^1(X,\partial X;\mathbb{R})\)
of the exact sequence
\begin{equation}
0 \to
H^0(\partial X; \mathbb{R}) / H^0(X;\mathbb{R})
\to
H^1(X,\partial X;\mathbb{R})
\to
H^1(X;\mathbb{R})
\to
0.
\label{eqn:relative-homology-exact-sequence}
\end{equation}
\end{proposition}
\begin{proof}
Composing
\(H^1(X; \mathbb{Z}) \xrightarrow{\cdot 2 \pi i}
\pi_0 \GaugeHarm(X) \xrightarrow{s} \GaugeHarm(X)
\xrightarrow{\frac{\delta}{2 \pi i}} H^1(X, \partial X; \mathbb{R})\)
we get an~additive map which extends uniquely by linearity to a~section
\(s^H: H^1(X; \mathbb{R}) \to H^1(X, \partial X; \mathbb{R})\)
of the aforementioned exact sequence.
\end{proof}
\begin{definition}
[gauge splitting]
\label{def:gauge-splitting}
By a~\emph{gauge splitting} we mean a~splitting
\({s : \pi_0\GaugeHarm(X) \to \GaugeHarm(X)}\)
of the exact sequence \eqref{eqn:harmonic-gauge-sequence}.
We denote by
\({s^H : H^1(X;\mathbb{R}) \to H^1(X,\partial X;\mathbb{R})}\)
the associated \emph{homological splitting}.
\end{definition}
We clarify the relationship between gauge splittings and homological splittings.
\begin{proposition}
[gauge splittings from homological splittings]
\label{prop:gauge-splittings-from-homological-splittings}
\hfill \\
Let \(\sigma : H^1(X; \mathbb{R}) \to H^1(X, \partial X; \mathbb{R})\)
be a~homological splitting, i.e., a~splitting of
\eqref{eqn:relative-homology-exact-sequence}.
Then up to the action of \(S^1\) there exists a~unique gauge splitting \(s\)
such that \(\sigma = s^H\).
\end{proposition}
\begin{proof}
For existence, choose \(x_0 \in X\)
and consider the map
\begin{align}
\label{eqn:integrating-1-forms}
I_{x_0} : \left\{ \eta \in \mathcal{H}^1_D(X)\right.%
&\left|\, [\eta] \in H^1(X; 2 \pi i \mathbb{Z}) \right\}
\longrightarrow \GaugeHarm(X),
\\
\nonumber
\eta &\longmapsto \left( I_{x_0}(\eta)(x) =
\exp\left(\int_{x_0}^x \eta \right) \right).
\end{align}
Notice that \(\delta(I_{x_0}(\eta)) = \eta\).
Therefore taking \(s([u]) = I_{x_0}(\sigma([\delta(u)]))\)
we get a~gauge splitting \(s: \pi_0 \GaugeHarm(X) \to \GaugeHarm(X)\)
with \(\sigma = s^H\).
The uniqueness up to action of \(S^1\) follows from
the exactness of the middle vertical sequence in
\eqref{eqn:gauge-groups-and-homology}.
\end{proof}
In order to find the appropriate gauge fixing
we need the following analogue of \autoref{prop:sequence-of-harmonic-gauge-groups}
for \(1\)\hyp{}forms.
\begin{lemma}
[decomposing \(1\)\hyp{}forms]
\label{lem:decomposing-1-forms}
\hfill \\
\(\Omega^1(X) = \Omega^1_{CC}(X) + d(\Omega^0(X))\)
and \(\Omega^1_{CC}(X) \cap d(\Omega^0(X)) = d(\mathcal{H}(X))\).
\end{lemma}
\begin{proof}
This follows from the proof of \cite{Kha2015}*{Proposition 2.2}.
(Note that our definition of \(\Omega^1_{CC}(X)\)
differs from Khandhawit's,
which we denote by \(\Omega^1_{s^\perp}(X)\) (cf.
\autoref{def:orthogonal-splitting}).)
\end{proof}
In particular, we can decompose \(1\)\hyp{}forms as
\begin{equation*}
\Omega^1(X) = \Omega^1_{CC}(X) \oplus d(\Omega^0_\partial(X))
\end{equation*}
where
\begin{equation*}
\Omega^0_\partial(X) =
\left\{ f \in \Omega^0(X) \middle| \int_{Y_i} f = 0 \text{ for each
component }Y_i \subset \partial X \right\}.
\end{equation*}
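Indeed, \autoref{lem:decomposing-1-forms}
combines with the direct sum decomposition
\(\Omega^0(X) = \mathcal{H}(X) \oplus \Omega^0_\partial(X)\):
a~doubly harmonic function whose (locally constant) boundary values
have zero mean on every component of \(\partial X\)
vanishes identically,
while subtracting from \(f \in \Omega^0(X)\)
the unique \(h \in \mathcal{H}(X)\)
with \(h|_{Y_i} = \frac{1}{\mathrm{vol}(Y_i)} \int_{Y_i} f\)
leaves a~function in \(\Omega^0_\partial(X)\).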
Denoting by \(\proj_{CC}\) the projection onto
\(\Omega^1_{CC}(X)\)
along
\(d(\Omega^0_\partial(X))\)
we obtain the following analog of \autoref{lem:gauge-fixing-in-3-d}.
\begin{lemma}
[Coulomb gauge fixing in \(4\)d]
\label{lem:Coulomb-slice-fixing-in-4d}
\hfill\\
There is a~unique homomorphism
\begin{align*}
\SForms{1}{X}
& \to \Gauge^{e,\partial}(X)
= \exp\mleft( L^2_2\mleft(i \Omega^0_\partial(X)\mright)\mright)
\\
a & \mapsto u^{CC}_a
\end{align*}
such that
\begin{equation*}
a - \left(u^{CC}_a\right)^{-1} d u^{CC}_a = \proj_{CC} a
\in \SForms[CC]{1}{X}.
\end{equation*}
\end{lemma}
\begin{proof}
The projection \( (1-\proj_{CC})\) on \(\Omega^1(X)\)
has image in \(d( \Omega^0_\partial(X))\),
and \(d\) is injective on \(\Omega^0_\partial(X)\).
Therefore there is a~unique homomorphism
\(L^2_1(i \Omega^1(X)) \to L^2_2(i \Omega^0_\partial(X))\)
sending \(a\) to the unique \(f_a^{CC} \in L^2_2(i \Omega^0_\partial(X))\)
such that \(a - df_a^{CC} \in \SForms[CC]{1}{X}\).
Then we take \(u_a^{CC} = \exp(f_a^{CC})\).
\end{proof}
We can further decompose
\begin{equation}
\label{eqn:decomposing-CCs}
\Omega^1_{CC}(X) =
\mleft(\Omega^1_{CC}(X) \cap (\mathcal{H}^1_D(X))^\perp\mright)
\oplus \mathcal{H}^1_D(X).
\end{equation}
A~homological splitting \(s^H\) provides a~decomposition
\begin{equation}
\label{eqn:decomposing-harmonics}
\mathcal{H}^1_D(X) = s^H(H^1(X;\mathbb{R})) \oplus d(\mathcal{H}(X))
\end{equation}
which is an~analogue of
\eqref{eqn:harmonic-gauge-sequence}
for \(\mathcal{H}^1_D(X)\).
With these in hand, we are ready to define the split Coulomb slice.
\begin{definition}
[split Coulomb slice]
\label{def:split-Coulomb-slice}
Let \(s\) be a~gauge splitting and \(s^H\) its associated
homological splitting.
The \emph{split Coulomb slice} is
\begin{equation*}
\Omega^1_{s}(X) = \{ a \in \Omega^1_{CC}(X) |
a \in (\Omega^1_{CC}(X) \cap (\mathcal{H}^1_D(X))^\perp)
\oplus s^H(H^1(X;i\mathbb{R})) \}.
\end{equation*}
\end{definition}
In particular, we have that
\begin{equation}
\label{eqn:CC-exact-decomposition}
\Omega^1(X) = \Omega^1_{s}(X) \oplus d(\Omega^0(X))
\end{equation}
and parallel to \autoref{lem:Coulomb-slice-fixing-in-4d}
and \autoref{lem:gauge-fixing-in-3-d}
we can use the projection \(\proj_s\) onto the first factor
along the second one to obtain:
\begin{lemma}
[split gauge fixing in 4d]
\label{lem:split-gauge-fixing-in-4d}
There is a~unique homomorphism
\begin{align*}
\SForms{1}{X}
& \to \GaugeIdBased(X) \\
a & \mapsto u^s_a
\end{align*}
such that
\begin{equation*}
a - \left(u^s_a\right)^{-1} d u^s_a = \proj_s a
\in \SForms[s]{1}{X}.
\end{equation*}
\end{lemma}
\begin{proof}
The projection \( (1-\proj_s)\) on \(\Omega^1(X)\)
has image in \(d( \Omega^0(X))\)
and \(d\) is injective on \(\Omega^0_0(X)\).
Therefore there is a~unique homomorphism
\(L^2_1(i \Omega^1(X)) \to L^2_2(i \Omega^0_0(X))\)
sending \(a\) to the unique \(f_a^{s} \in L^2_2(i \Omega^0_0(X))\)
such that \(a - df_a^s \in \SForms[s]{1}{X}\).
Then we take \(u_a^s = \exp(f_a^s)\).
\end{proof}
\begin{remark}
[continuous gauge fixing within double Coulomb slice]
\label{rmk:continuous-gauge-fixing-within-double-Coulomb-slice}
If we only consider \(a \in \Omega^1_{CC}(X)\), then
the above map has image in \(\GaugeHarm(X)\), which is finite\hyp{}dimensional.
Since the latter gauge group acts continuously on the configuration space
by \autoref{lem:harmonic-gauge-group-acts-continuously},
we conclude that putting \( (A,\Phi) \in \CoulFour{CC}{X}\)
into split Coulomb slice \(\CoulFour{s}{X}\) can be done continuously
with respect to \((A,\Phi)\).
\end{remark}
The gauge group acting on this split Coulomb slice is
the product of \(S^1\) and the split gauge group:
\begin{definition}
[split harmonic gauge group]
\label{def:gauge-group-split}
Let \(s\) be a~gauge splitting.
The \emph{split gauge group} is defined to be
\begin{equation*}
\GaugeSplit[s](X) = s(\pi_0\GaugeHarm(X)).
\end{equation*}
\end{definition}
\begin{lemma}
[split gauge group preserves the split Coulomb slice]
\label{lem:split-gauge-compatible}
For \(u \in \GaugeSplit[s](X)\)
we have \(u^{-1}du \in \SForms[s]{1}{X}\).
Conversely, if \(u^{-1} du \in \SForms[s]{1}{X}\),
then for some \(z \in S^1\) we have
\(zu \in \GaugeSplit[s](X)\).
\end{lemma}
\begin{proof}
One direction follows directly from the definition of \(s^H\):
if \(u = s([u])\), then \(s^H([\delta(u)]) = \delta(u) = u^{-1}du
\in \mathcal{H}^1_D(X) \simeq H^1(X,\partial X;i \mathbb{R})\),
so \(u^{-1}du \in \im s^H \subset \SForms[s]{1}{X}\).
The other direction follows by chasing arrows in the diagram
\eqref{eqn:gauge-groups-and-homology}.
\end{proof}
The circle \(\circ\) in the superscript indicates that
the only constant gauge transformation contained
in \(\GaugeSplit[s](X)\) is the identity.
This way we do not forget the \(S^1\)\hyp{}action
when taking the quotient by the split gauge group.
We want to compare different split slices
together with the split gauge group actions.
Choose two splittings \(s, s'\).
These determine a~map
\( s' \cdot s^{-1} : \pi_0 \GaugeHarm(X) \to \GaugeHarmId(X)\).
Viewing \(\pi_0 \GaugeHarm(X)\) as a~sublattice
\(\pi_0 \GaugeHarm(X) \simeq H^1(X; 2 \pi i \mathbb{Z}) \subset H^1(X; i \mathbb{R}),\)
let \(\nu : H^1(X; i \mathbb{R}) \to \GaugeHarmId(X)\)
be any homomorphism extending \(s' \cdot s^{-1}\).
Define
\begin{align*}
F_\nu : \CoulFour{s}{X} &\longrightarrow \CoulFour{s'}{X} \\
(A, \Phi) & \longmapsto \nu\left( \proj_{\im s^H}(A-A_0) \right) \cdot (A, \Phi).
\end{align*}
where \(\proj_{\im s^H} : i\Omega^1_{CC}(X) \to s^H(H^1(X; i\mathbb{R}))\)
is the projection
along \( i (\Omega^1_{CC}(X) \cap (\mathcal{H}^1_D(X))^\perp) \oplus i d(\mathcal{H}(X))\)
(cf. \eqref{eqn:decomposing-CCs}, \eqref{eqn:decomposing-harmonics}).
\begin{proposition}
[equivalence of slices]
\label{prop:equivalence-of-slices}
The map \(F_\nu\) is well\hyp{}defined,
a~diffeomorphism, and
equivariant with respect to the action of
\(\pi_0 \GaugeHarm(X) \simeq \GaugeSplit[s'](X) \simeq \GaugeSplit[s](X)\).
\end{proposition}
\begin{proof}
Firstly, we need to show that the image of \(F_\nu\)
actually lies in \(\CoulFour{s'}{X}\).
Equivalently, we want to show that
\begin{align*}
A_\nu : \Omega^1_{CC}(X) & \longrightarrow \Omega^1_{CC}(X) \\
a &\longmapsto \nu\mleft(\proj_{\im s^H} a \mright) \cdot a
= a + \delta\mleft(\nu\mleft(\proj_{\im s^H} a \mright) \mright)
\end{align*}
maps \(\Omega^1_s(X)\) to \(\Omega^1_{s'}(X)\).
We have \(\delta(\GaugeHarmId(X)) = d(\mathcal{H}(X))\)
and, moreover, \(\delta \circ \nu\) is a~continuous homomorphism,
thus a~linear map \(H^1(X; i\mathbb{R}) \to d(\mathcal{H}(X))\).
It follows that \(A_\nu\) is a~linear map.
Now \(A_\nu\) restricted to
\(\Omega^1_{CC}\mleft(X\mright) \cap (\mathcal{H}^1_D(X))^\perp\)
is the identity by definition,
so it suffices to show \(A_\nu( \im s^H ) \subset \im ((s')^H)\).
Furthermore, \(H^1(X; i \mathbb{R})\) is spanned by
\([\delta(\pi_0\GaugeHarm(X))] \) and therefore
\(s^H(H^1(X; i\mathbb{R}))\) is spanned by
\(\delta(\GaugeSplit[s](X))\),
so it suffices to show \(A_\nu(\delta(\GaugeSplit[s](X))) \subset
\im((s')^H)\).
But we defined \(\nu\) so that for any \(u \in \GaugeSplit[s](X)\)
\(\nu(\delta(u)) \cdot u \in \GaugeSplit[s'](X) \),
which implies \(A_\nu( \delta(u)) = \delta( \nu(\delta(u)) \cdot u)
\in \im( (s')^H)\), as wished.
The map \(F_\nu\) is smooth because the map \(\nu\) is smooth
and the action of the finite\hyp{}dimensional \(\GaugeHarm(X)\)
on \(\ConfFour{X}\) is smooth.
It is invertible because \(F_{1/\nu}\)
is its inverse.
Indeed, since \(\im \nu \subset \GaugeHarmId(X)\),
we have that \(\delta \circ \nu\) takes values in \(d(\mathcal{H}(X))\),
so \(\proj_{\im s^H} \circ \left( \delta \circ \nu \right) \equiv 0\).
Therefore
\begin{equation*}
\proj_{\im s^H} \left(
\nu(\proj_{\im s^H}(A-A_0)) \cdot A - A_0\right)
= \proj_{\im s^H}(A-A_0),
\end{equation*}
so
\begin{equation*}
\left(\nu\left(\proj_{\im s^H} \left(
\nu(\proj_{\im s^H}(A-A_0)) \cdot A - A_0\right)\right)\right)^{-1}
= (\nu(\proj_{\im s^H}(A-A_0)))^{-1}
\end{equation*}
and \(F_{1/\nu} \circ F_\nu = \mathrm{id}\) follows.
\end{proof}
Finally, we discuss the gauge slice used by Lipyanskiy \cite{Lip2008} and
Khandhawit \cites{Kha2015,KLS2018}.
They require that \(a \in \Omega^1_{CC}(X)\)
and that for each component
\(Y_i \subset \partial X\) we have
\(\int_{Y_i} \iota^\ast(\star a) = 0\).
Using Stokes' theorem one can show
that for \(a \in \Omega^1_{CC}(X)\) this integral condition
is equivalent to the condition that
\(\int_X df \wedge \star a = 0\) for any \(f \in \mathcal{H}(X)\).
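Explicitly, since \(d^\ast a = 0\) implies \(d \star a = 0\)
(as \(d^\ast = \pm \star d \star\) on \(1\)\hyp{}forms,
whatever the sign convention),
for \(f \in \mathcal{H}(X)\) Stokes' theorem gives
\begin{equation*}
\int_X df \wedge \star a
= \int_X d \mleft( f \star a \mright)
= \int_{\partial X} \iota^\ast_{\partial X} \mleft( f \star a \mright)
= \sum_i f|_{Y_i} \int_{Y_i} \iota^\ast (\star a),
\end{equation*}
where we used that \(f|_{\partial X}\) is locally constant;
since the values \(f|_{Y_i}\) can be arbitrary,
the vanishing of the left\hyp{}hand side for all \(f \in \mathcal{H}(X)\)
is equivalent to \(\int_{Y_i} \iota^\ast(\star a) = 0\) for every \(i\).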
This fits into our setup perfectly, since there is exactly one
homological splitting \(s^H_\perp\) such that
\(\im s^H_\perp = \mathcal{H}^1_D(X) \cap (d(\mathcal{H}(X)))^\perp\).
\begin{definition}
[orthogonal splitting]
\label{def:orthogonal-splitting}
We call \(s^H_\perp\) the \emph{orthogonal homological splitting}.
We say that a~splitting \(s\) is
an~\emph{orthogonal splitting}
if \(s^H = s^H_\perp\).
\end{definition}
\subsection{Restriction to the boundary and twisting}
\label{sec:restriction-to-the-boundary-and-twisting}
Unless \(\partial X\) is connected,
we are not guaranteed that the \emph{restriction} to the boundary
is invariant under the action of the split gauge group
\(\GaugeSplit[s](X)\).
If it happens to be invariant for some \(s\),
we call such \(s\) an~\emph{integral splitting}.
For a~general \(s\), we introduce and prove the existence of \emph{twistings}
of the restriction map, making it invariant under
the action of \(\GaugeSplit[s](X)\) even for non\hyp{}integral \(s\).
As mentioned before, integral splittings are utilized in the proof of
the \namedref{thm:composing-cobordisms},
while non\hyp{}integral splittings may be more convenient in other contexts
(e.g., in constructions of \cites{Lip2008,Kha2015,KLS2018}).
We start by defining the restriction maps
for an~embedding \(\iota_Y : Y \hookrightarrow X\)
of an~oriented \(3\)\hyp{}manifold \(Y\).
Denote by \(\mathfrak{s}\) the restriction to \(Y\)
of the \spinc{} structure \(\hat{\mathfrak{s}}\) on \(X\).
We get canonical identifications \(S^\pm_X|_Y \simeq S_Y\).
Assuming \(Y\) is a~geodesic codimension\hyp{}\(1\) submanifold of \(X\),
the \spinc{} connection \(A_0\) induces a~\spinc{} connection \(B_0\) on \(Y\)
by simple restriction:
\(\nabla^{B_0} = \iota_Y^\ast \nabla^{A_0}\).
Let \(a \in \SForms{1}{X}\),
\(A \in \SConnections{1}{A_0}{X}\),
\(\Phi \in \SSpinors[+]{1}{X}\)
and \(u \in \Gauge(X)\).
We define the restrictions:
\begin{align*}
R(a) &= \iota_Y^\ast(a) \in \SForms{1/2}{Y}, \\
R(A) &= B_0 + \iota_Y^\ast(A-A_0) \in \SConnections{1/2}{B_0}{Y}, \\
R(\Phi) &= \Phi|_Y \in \SSpinors{1/2}{Y}, \\
R(u) &= u|_Y \in \Gauge(Y).
\end{align*}
Integral splittings are the ones for which restriction maps
are invariant under the split gauge group.
\begin{definition}
[integral splitting]
\label{def:integral-splitting}
We call a~gauge splitting \(s\) \emph{integral}
if for each \(u \in \GaugeSplit[s](X)\)
we have \(u|_{\partial X} \equiv 1\).
Equivalently, \(s\) is integral if the composition
\(\pi_0\GaugeHarm(X) \xrightarrow{s}
\GaugeHarm(X) \xrightarrow{R}
\GaugeHarm(\partial X) \simeq (S^1)^{\pi_0(\partial X)}\)
is trivial.
\end{definition}
The integrality of \(s\) is closely connected to the integrality of \(s^H\).
\begin{proposition}
[homological classification of integral splittings]
\label{prop:homological-classification-of-integral-splittings}
If \(s\) is integral, then
\(s^H(H^1(X;\mathbb{Z})) \subset H^1(X, \partial X; \mathbb{Z})\),
i.e.,
\(s^H\) is integral as well.
Given any integral homological splitting \(\sigma\)
there exists a~unique integral splitting
\(s\) such that \(\sigma = s^H\).
\end{proposition}
\begin{proof}
Assume \(s\) is integral.
Choose \(y_0 \in \partial X\)
and consider the map \(I_{y_0}\) defined in
\eqref{eqn:integrating-1-forms}.
We know that \(s\) and \(I_{y_0} \circ s^H \circ [\delta]\)
differ by the action of \(S^1\);
since both are equal to \(1\) at \(y_0\),
it follows that \(s = I_{y_0} \circ s^H \circ [\delta]\).
This implies that for any \(y \in \partial X\)
and any embedded curve \(\gamma : [0,1] \to X\)
with \(\gamma(0) = y_0\) and \(\gamma(1) = y\)
we have that \(\exp\left(\int_{y_0}^{y} s^H([\delta(u)])\right) = 1\)
and thus \(\int_{y_0}^{y} s^H([\delta(u)]) \in 2 \pi i \mathbb{Z}\).
This proves that \(s^H\) is integral.
Similarly, if \(\sigma\) is integral,
then \(s = I_{y_0} \circ \sigma \circ [\delta]\)
satisfies \(\sigma = s^H\)
and \(s([u])(y) = \exp\left(\int_{y_0}^{y} \sigma([\delta(u)])\right) = 1\)
for any \(y \in \partial X\) and \([u] \in \pi_0 \GaugeHarm(X)\).
\end{proof}
To find a~consistent way of \emph{twisting} the boundary values of \(1\)\hyp{}forms
we consider ways to ``undo'' the action of \(\GaugeSplit[s](X)\)
on the boundary ``in a~linear fashion''.
\begin{definition}
[gauge twisting]
\label{def:gauge-twisting}
We call a~homomorphism
\begin{math}
\tau:H^1(X; i \mathbb{R}) \to \GaugeHarm(\partial X)
\simeq (S^1)^{\pi_0(\partial X)}
\end{math}
a~\emph{gauge twisting} for \(s\) if the composition
\begin{equation*}
\pi_0\GaugeHarm(X)
\simeq
H^1(X; 2 \pi i \mathbb{Z})
\hookrightarrow
H^1(X; i \mathbb{R})
\xrightarrow{\tau}
\GaugeHarm(\partial X)
\end{equation*}
agrees with the action of the split gauge group on the boundary,
\( R \circ s: \pi_0\GaugeHarm(X) \to \GaugeHarm(\partial X)\).
\end{definition}
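For example, if \(s\) is integral, then \(R \circ s\) is trivial
and the constant homomorphism \(\tau \equiv 1\) is a~gauge twisting for \(s\);
in this case the twisted restriction map introduced below
reduces to the untwisted restriction map.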
Continuous homomorphisms from a~vector space
to \(S^1\) correspond to linear functionals on the vector space.
Thus, every such twisting \(\tau\) comes from a~linear map
\(d\tau : H^1(X;i\mathbb{R}) \to H^0(\partial X; i \mathbb{R}) \)
and \(\tau = \exp \circ (d \tau)\).
We utilize it to prove the existence of gauge twistings for \(s\),
and one could use it to classify all possible gauge twistings
for \(s\).
Actually, every such homomorphism \(\tau\) is a~gauge twisting for some \(s\),
but we do not use this fact in this article.
\begin{lemma}
[existence of gauge twistings]
\label{lem:existence-of-gauge-twistings}
For a~given gauge splitting \(s\) there exists a~gauge twisting \(\tau\).
\end{lemma}
\begin{proof}
Since \(\pi_0 \GaugeHarm(X)\) is free,
we can lift the map
\( R \circ s: H^1(X; 2 \pi i\mathbb{Z})
\simeq \pi_0 \GaugeHarm(X) \to \GaugeHarm(\partial X)
\simeq (S^1)^{\pi_0(\partial X)}\)
to a~homomorphism
\(\widetilde{\tau} : H^1(X; 2 \pi i \mathbb{Z})
\to H^0(\partial X; i\mathbb{R})\):
\begin{equation*}
\begin{tikzcd}
& H^0(\partial X; i\mathbb{R}) \arrow{d}{\exp} \\
\pi_0 \GaugeHarm(X) \arrow{r}{R \circ s}
\arrow[dashed]{ru}{\widetilde{\tau}}
& (S^1)^{\pi_0(\partial X)},
\end{tikzcd}
\end{equation*}
and this extends to a~map
\(\widetilde{\tau} : H^1(X; i \mathbb{R}) \to H^0(\partial X; i\mathbb{R})\)
by linearity.
Taking \(\tau = \exp \circ \widetilde{\tau}
: H^1(X; i \mathbb{R}) \to (S^1)^{\pi_0(\partial X)}\)
gives a~gauge twisting for \(s\).
\end{proof}
With \(\tau\) in hand, there is a~way of defining
a~twisting on the whole Coulomb slice,
enabling us to finally define the twisted restriction maps.
\begin{definition}
[twisted restriction map]
\label{def:twisted-restriction-map}
We define the \emph{Coulomb slice twisting}
\begin{math}
\tau_{CC}: \SForms[CC]{1}{X} \to \GaugeHarm(\partial X)
\end{math}
associated to \(\tau\)
to be the composition
\begin{align*}
\SForms[CC]{1}{X}
&\xrightarrow{\proj_{L^2,\perp}}
i\mathcal{H}^1_D(X)
\simeq H^1(X, \partial X; i\mathbb{R})
\\&\xrightarrow{\iota_X^\ast}
H^1(X; i\mathbb{R})
\xrightarrow{\tau}
(S^1)^{\pi_0(\partial X)}
\simeq \GaugeHarm(\partial X).
\end{align*}
We define the \emph{twisted restriction map}
\begin{equation*}
R_\tau : \CoulFour{CC}{X}
\to \CoulThree{\partial X}
\end{equation*}
by the formula
\begin{math}
R_\tau (A,\Phi) = (R(A), \tau_{CC}(A-A_0) R(\Phi)).
\end{math}
\end{definition}
\begin{remark}
What is of importance for defining the twisted restriction maps
is the map
\(\tau_{CC} : i \Omega^1_{s}(X) \to (S^1)^{\pi_0(\partial X)}\).
The extension of \(\tau_{CC}\) to the whole of
\(i \Omega^1_{CC}(X)\) is artificial:
it does not undo the action of
\(\GaugeHarmId(X)\) on the boundary
as one might expect.
With more work,
including a~choice of a~based gauge group
\( \GaugeHarm_o(X) \subset \GaugeHarm(X) \)
(such that \( \GaugeHarm(X) / \GaugeHarm_o(X) \simeq S^1\))
and a~more general twisting,
one could work with the full \(i \Omega^1_{CC}(X)\)
and then quotient by the action of \(\GaugeHarm_o(X)\).
However, this would introduce unnecessary complications.
\end{remark}
These twisted restriction maps
are indeed invariant under \(\GaugeSplit[s](X)\).
\begin{lemma}
[twisted restriction map is invariant under split gauge group]
\label{lem:twisted-restriction-invariance}
Let \(\tau\) be a~gauge twisting for \(s\).
For any \( (A,\Phi) \in \CoulFour{CC}{X} \)
(resp. \( (a,\phi) \in \TangCoulFour{X} \))
and \(u \in \GaugeSplit[s](X)\)
we have
\begin{equation*}
R_\tau(u(A,\Phi)) = R_\tau(A,\Phi)
\end{equation*}
(resp. \(R_\tau(u(a,\phi)) = R_\tau(a,\phi)\)).
\end{lemma}
\begin{proof}
Since \(u \in \GaugeHarm(X)\),
we have \(\iota_{\partial X}^\ast(u^{-1}du) = 0\),
so \(R(A - u^{-1} du) = R(A)\).
It remains to prove
\begin{equation*}
\tau_{CC}(A - A_0 - u^{-1}du) R(u \Phi) = \tau_{CC}(A-A_0) R(\Phi)
\end{equation*}
but that is equivalent to
\begin{equation*}
(\tau_{CC}(-u^{-1}du) R(u))
\tau_{CC}(A - A_0) R(\Phi) = \tau_{CC}(A-A_0) R(\Phi)
\end{equation*}
so it suffices to prove
\(\tau_{CC}(u^{-1}du) = R(u)\).
Since \(s\) splits
\eqref{eqn:harmonic-gauge-sequence},
we have
\(u = s([u])\), where \( [u] \in \pi_0\GaugeHarm(X)\)
is the homotopy class of \(u\).
So we have to prove
\(\tau_{CC}(u^{-1}du) = R\circ s([u])\).
This follows directly from \autoref{def:gauge-twisting}
of the twistings
and \autoref{def:twisted-restriction-map}
of the twisted restriction map,
since \(u^{-1}du \in \mathcal{H}^1_D(X)\)
and the isomorphism
\(\pi_0\GaugeHarm(X) \simeq H^1(X; 2 \pi i \mathbb{Z})\)
is given by
\( [u] \mapsto [u^{-1}du]\).
\end{proof}
We conclude these sections by showing that
choosing \(\tau\) is essentially equivalent to
choosing an~integral splitting.
In general, one can therefore restrict attention to
integral splittings without any twisting at all.
\begin{proposition}
[twistings are integral splittings]
\label{prop:twistings-are-integral-splittings}
Let \(\tau\) be a~twisting for \(s\).
Then there is an~integral splitting \(s_{\mathbb{Z}}\)
and an~equivariant diffeomorphism
\begin{equation*}
F_{s,\tau} : \CoulFour{s}{X} \to \CoulFour{s_{\mathbb{Z}}}{X}
\end{equation*}
such that
\(R \circ F_{s,\tau} = R_\tau\).
\end{proposition}
\begin{proof}
Every function \(f \in \mathcal{H}(X)\) is determined by its
restriction to the boundary
\(f|_{\partial X}\),
which is locally constant.
We thus have the exact sequence
\begin{equation*}
0 \to H^0(\partial X; 2 \pi i \mathbb{Z})
\to H^0(\partial X; i \mathbb{R}) \simeq i \mathcal{H}(X)
\xrightarrow{f \mapsto \exp(f|_{\partial X})} \GaugeHarm(\partial X)
\to 0
\end{equation*}
as well as
\begin{equation*}
0 \to H^0(X; 2 \pi i \mathbb{Z})
\to i \mathcal{H}(X)
\xrightarrow{\exp} \GaugeHarmId(X)
\to 0
\end{equation*}
and, quotienting the first sequence by the second, it follows that
\begin{equation}
\label{eqn:gauge-restriction-ses}
0 \to H^0(\partial X; 2 \pi i \mathbb{Z}) / H^0(X; 2 \pi i \mathbb{Z})
\to \GaugeHarmId(X)
\xrightarrow{\cdot |_{\partial X}} \GaugeHarm(\partial X)
\to 0
\end{equation}
is exact.
Since the group on the left is discrete,
it follows that there exists a~unique lift \(\tilde{\tau}\)
of \(\tau\) to \(\GaugeHarmId(X)\):
\begin{equation*}
\begin{tikzcd}
& \GaugeHarmId(X) \arrow{d}{\cdot|_{\partial X}} \\
H^1(X; i \mathbb{R}) \arrow[dashed]{ru}{\tilde{\tau}}
\arrow{r}{\tau}
& \GaugeHarm(\partial X)
\end{tikzcd}
\end{equation*}
We define
\begin{equation*}
s_{\mathbb{Z}}([u]) = (\tilde{\tau}([u]))^{-1} \cdot s([u])
\end{equation*}
for any \( [u] \in \pi_0 \Gauge(X) \simeq H^1(X; 2 \pi i \mathbb{Z})\),
and
\begin{equation*}
F_{s,\tau} = F_{(\tilde{\tau})^{-1}}
\end{equation*}
using the construction of \(F_\nu\)
of \autoref{prop:equivalence-of-slices}.
This gives an~equivariant diffeomorphism
from \(\CoulFour{s}{X}\) to \(\CoulFour{s_{\mathbb{Z}}}{X}\).
The equality \(R \circ F_{s,\tau} = R_\tau\)
follows from the construction.
\end{proof}
Even though the spaces \(\CoulFour{s_{\mathbb{Z}}}{X}\)
and \(\CoulFour{s_{\mathbb{Z}}'}{X}\) are equivariantly diffeomorphic
by \autoref{prop:equivalence-of-slices},
the corresponding restriction maps differ by a~twist.
Thus, \textit{a priori} we cannot get rid of the choice of a~splitting.
However, this is not relevant to most
of the applications because for connected boundary
there is no choice to make.
\begin{lemma}
[uniqueness of integral splittings]
\label{lem:uniqueness-of-integral-splittings}
If \(\partial X \neq \varnothing\) is connected,
there exists exactly one integral splitting \(s_\mathbb{Z}\).
\end{lemma}
\begin{proof}
In this case, the restriction map
\(\GaugeHarmId(X) \to \GaugeHarm(\partial X)\)
is an~isomorphism
(cf. \eqref{eqn:gauge-restriction-ses}).
Therefore for each element of \(\pi_0 \Gauge(X)\)
there exists exactly one representative \(u \in \GaugeHarm(X)\)
such that \(u|_{\partial X} = 1\).
\end{proof}
\subsection{Seiberg\hyp{}Witten moduli in split Coulomb slice}
\label{sec:Seiberg-Witten-moduli-in-split-gauge}
We conclude this section by defining the \emph{Seiberg\hyp{}Witten moduli spaces},
the main object of study of this article.
We also prove they only depend on the choice of \(s_{\mathbb{Z}}\)
associated to \(s\) and \(\tau\).
Thanks to \autoref{lem:split-gauge-compatible},
we can define the following.
\begin{definition}
[moduli spaces on \(4\)\hyp{}manifolds with boundary]
\label{def:moduli-spaces-on-4-manifolds-with-boundary}
We define the moduli spaces in split slice:
\begin{equation*}
\SWModuliFree{s}{X}
= \left\{ (A,\Phi) \in \CoulFour{s}{X}
\middle| \SW(A,\Phi) = 0 \right\},
\end{equation*}
\begin{equation*}
\SWModuli{s}{X}
= \quotient{\SWModuliFree{s}{X}}{\GaugeSplit[s](X)}.
\end{equation*}
\end{definition}
We also define a~version of the moduli space using the full double Coulomb slice,
\begin{equation*}
\SWModuliFree{CC}{X}
= \left\{ (A,\Phi) \in \CoulFour{CC}{X}
\middle| \SW(A,\Phi) = 0 \right\},
\end{equation*}
which will be utilized in some of the proofs.
From \autoref{lem:twisted-restriction-invariance} it follows that
\begin{corollary}
There is a~well\hyp{}defined restriction map
\begin{align*}
R_\tau: \SWModuli{s}{X} & \to \CoulThree{\partial X}.
\end{align*}
\end{corollary}
A~direct consequence of \autoref{prop:twistings-are-integral-splittings}
is
\begin{corollary}
[dependence on twistings]
\label{cor:dependence-on-twistings}
Given \(s\) and \(\tau\), there is an~integral splitting \(s_{\mathbb{Z}}\)
and a~diffeomorphism
\begin{equation*}
F_{s,\tau} : \SWModuli{s}{X} \to \SWModuli{s_{\mathbb{Z}}}{X}
\end{equation*}
such that
\(R \circ F_{s,\tau} = R_\tau\).
\end{corollary}
\section{Properties of moduli spaces}
\label{sec:properties-of-moduli-spaces}
In this section we prove that
(\namedref{thm:semi-infinite-dimensionality-of-moduli-spaces}):
\begin{itemize}
\item the moduli spaces
of solutions to the Seiberg\hyp{}Witten equations on \(X\) are Hilbert manifolds,
\item the restriction map to the boundary is ``semi\hyp{}infinite'',
i.e., Fredholm in the negative direction and compact in the positive
direction,
\item if \(\partial X\) is disconnected,
restriction to a~single boundary component
has dense differential.
\end{itemize}
This is done by analyzing the properties of the linearized Seiberg\hyp{}Witten
operator \(D\SW\).
We start by investigating an~extended version of this
operator, \(\widetilde{D\SW}\).
The reason is that the standard
Atiyah\hyp{}Patodi\hyp{}Singer boundary value problem
as well as the elliptic theory developed in
\autoref{sec:analytical-preparation} can be directly applied to the study
of \(\widetilde{D\SW}\).
Our understanding of the gauge action
(\autoref{sec:split-Coulomb-slice-on-4-manifolds})
will allow us to transfer these properties to \(D\SW\).
\subsection{Extended linearized SW operator}
\label{sec:properties-of-the-extended-linearized-SW-operator}
Here we apply the Atiyah\hyp{}Patodi\hyp{}Singer boundary value problem
to an~extended version of the linearized Seiberg\hyp{}Witten operator,
\(\widetilde{D\SW}\).
The properties we prove are the direct analogues of the properties
of \(D\SW\) which are proved in the next section.
\begin{definition}
[extended linearized SW operator]
\label{def:extended-DSW}
We define the extended linearized Seiberg\hyp{}Witten operator
\begin{align*}
\widetilde{D\SW}_{(A,\Phi)}
:& \TangFour{X}\\
\to &
L^2(X; i \mathbb{R}) \oplus \SWDomain{X}
\end{align*}
by adding a~component related to the linearization of gauge action:
\begin{equation*}
\widetilde{D\SW}_{(A,\Phi)}(a,\phi)
= (d^\ast a, D\SW_{(A,\Phi)}(a,\phi)).
\end{equation*}
\end{definition}
In order to study the Atiyah\hyp{}Patodi\hyp{}Singer boundary value problem
we need to introduce the appropriate operator on the boundary
and consider its Calder\'on projector.
Denote \(Y = \partial X\)
and define
\begin{align*}
\tilde L : i \Omega^1(Y) \oplus \Gamma(S_Y) \oplus i \Omega^0(Y)
\to& i \Omega^1(Y) \oplus \Gamma(S_Y) \oplus i \Omega^0(Y),
\\
\tilde L(b,\psi,c) =& (\star d b - d c, \Dirac_{B_0} \psi, - d^\ast b).
\end{align*}
This is a~first-order self-adjoint elliptic operator.
Denote by \(\widetilde{H}^+(Y,\mathfrak{s})\)
(resp. \(\widetilde{H}^-(Y,\mathfrak{s})\))
the closure of the span of positive (resp. nonpositive)
eigenspaces of \(\widetilde{L}\)
in \( L^2_{1/2}(i \Omega^1(Y) \oplus \Gamma(S_Y) \oplus i \Omega^0(Y))\),
and by \(\widetilde{\proj}^\pm\) the projection onto
\(\widetilde{H}^\pm(Y,\mathfrak{s})\) along \(\widetilde{H}^\mp(Y,\mathfrak{s})\).
The proof of the following proposition follows a~standard argument;
we briefly recall it to set up the stage for the proofs in the rest of this
section.
\begin{proposition}
[semi-infinite-dimensionality of \(\widetilde{D\SW}\)]
\label{prop:extended-DSW-semi-inf-dim}
\hfill\\The operator
\begin{align}
\label{eqn:extended-DSW-APS}
\widetilde{D\SW}_{(A,\Phi)} \oplus \widetilde{\proj}^- R
:&\, \TangFour{X} \\
\nonumber
\longrightarrow &\, L^2(X;i\mathbb{R}) \oplus \SWDomain{X}
\oplus \widetilde{H}^-(Y,\mathfrak{s})
\end{align}
is Fredholm of index
\begin{equation}
\label{eqn:extended-DSW-APS-index}
2 \ind_{\mathbb{C}} \Dirac_{A_0}^+ + b_1(X) - b^+(X) - b_1(Y) - 1.
\end{equation}
Moreover, the positive part of the restriction map
from the kernel of \(\widetilde{D\SW}\),
\( \widetilde{\proj}^+ R : \ker\mleft(\widetilde{D\SW}_{(A,\Phi)}\mright)
\to \widetilde{H}^+(Y,\mathfrak{s}) \),
is compact.
\end{proposition}
\begin{proof}
We can write
\begin{math}
\widetilde{D\SW}_{(A,\Phi)} = \widetilde{D} + \widetilde{K}
\end{math}
where
\begin{equation*}
\widetilde{D}(a,\phi) = (d^+ a, \Dirac_{A_0} \phi, d^\ast a)
\end{equation*}
and
\begin{equation*}
\widetilde{K}(a,\phi) = (0, -\rho^{-1}(\phi \Phi^\ast + \Phi \phi^\ast)_0,
\rho(a) \Phi + \rho(\diff{A}) \phi).
\end{equation*}
As explained in \cite{Kha2015}*{Proposition 3.1},
applying the Atiyah-Patodi-Singer boundary value problem
\cite{KMbook}*{Theorem 17.1.3}
to \(\widetilde{D}\) proves that
\(\widetilde{D}\oplus \widetilde{\proj}^- R\)
is Fredholm with index equal to \eqref{eqn:extended-DSW-APS-index}.
Furthermore,
\cite{KMbook}*{Theorem 17.1.3} implies that
for any bounded sequence \((u_i) \subset \TangFour{X}\)
such that \( (\widetilde{D}(u_i)) \) is Cauchy,
the sequence \( (\widetilde{\proj}^+R u_i) \) is precompact.
The operator \(\widetilde{K}\) is compact by \autoref{thm:multiplication}.
Since \(\widetilde{D\SW}_{(A,\Phi)} = \widetilde{D} + \widetilde{K}\),
the operator \(\widetilde{D\SW}_{(A,\Phi)} \oplus \widetilde{\proj}^- R\)
is Fredholm with the same index as \(\widetilde{D} \oplus \widetilde{\proj}^- R\).
Moreover, since \(\widetilde{K}\) is compact,
for any sequence \( (u_i) \subset \ker
\mleft(\widetilde{D\SW}_{(A,\Phi)}\mright)\)
we can choose a~subsequence such that
the sequence of \( \widetilde{D}(u_i) = - \widetilde{K}(u_i)\)
is convergent, thus Cauchy.
By what we proved in the previous paragraph,
the sequence \( (\widetilde{\proj}^+R u_i) \) is precompact.
This shows that
\( \widetilde{\proj}^+ R : \ker\mleft(\widetilde{D\SW}_{(A,\Phi)}\mright)
\to \widetilde{H}^+(Y,\mathfrak{s}) \)
is compact.
\end{proof}
The proof of surjectivity
utilizes both of the analytical results of
\autoref{sec:analytical-preparation}
(cf. \cite{Lip2008}*{Theorem \(2\)}).
\begin{proposition}
[surjectivity of \(\widetilde{D\SW}\)]
\label{prop:extended-DSW-surjective}
The operator
\begin{math} \widetilde{D\SW}_{(A,\Phi)} \end{math}
is surjective.
\end{proposition}
\vphantom{.}\\
\vspace{-2em}
\begin{proof}
Assume
\begin{math} \widetilde{D\SW}_{(A,\Phi)} \end{math}
is not surjective.
\autoref{prop:extended-DSW-semi-inf-dim} implies
its image is closed,
so there is \(0 \neq \tilde v \in \mathcal{V}(X,\hat{\mathfrak{s}})
\oplus L^2(i \Omega^0(X))\)
orthogonal to \(\im \widetilde{D \SW}\).
Recall that \(\tilde K\) is a~certain multiplication
by \( p = (A - A_0, \Phi) \in L^2_1( i \Omega^1(X) \oplus \Gamma(S^+_X)) \).
Let \(X^\ast = X \cup ( [0,\infty) \times Y )\)
with cylindrical metric on the end,
and extend the spinor bundle \(S_X\)
to \(S_{X^\ast}\) which is cylindrical on ends.
Extend \(p\) to
\(p^\ast \in L^2_1( i \Omega^1(X^\ast) \oplus \Gamma(S^+_{X^\ast}))\)
in an~arbitrary way
and \(\tilde v\) to
\(\tilde v^\ast \in \mathcal{V}(X^\ast,\hat{\mathfrak{s}})
\oplus L^2(i \Omega^0(X^\ast))\)
by zero on \( [0,\infty) \times Y\).
We have
\begin{equation*}
\langle \tilde v^\ast, (\tilde D + \tilde K) (w)
\rangle_{L^2(X^\ast)}
=
\langle \tilde v, \widetilde{D\SW}_{(A,\Phi)}(w|_X)
\rangle_{L^2(X)}
= 0
\end{equation*}
for any \(w \in \mathcal{TC}(X^\ast, \hat{\mathfrak{s}})\).
Therefore \(\tilde v^\ast\) is a~weak solution to
\( \tilde D^\ast \tilde v^\ast + \tilde K^\ast \tilde v^\ast = 0\)
where \(\tilde D^\ast, \tilde K^\ast\) are formal adjoints
of \(\tilde D, \tilde K\), respectively.
The map \(\tilde K^\ast:L^2_1\to L^2\) is compact by
\autoref{thm:multiplication}.
Thus from the \namedref{thm:low-regularity} it follows that
\(\tilde v^\ast \in L^2_1(i \Omega^+(X^\ast) \oplus \Gamma(S^-_{X^\ast})
\oplus i \Omega^0(X^\ast)) \)
and it is a~solution to
\( (\tilde D^\ast + \tilde K^\ast) \tilde v^\ast = 0 \).
Furthermore, \autoref{cor:unique-cont-dirac-4d}
implies that
\(\tilde v^\ast = 0\) and therefore \(\tilde v = 0\).
Thus, by contradiction, we have proved that \(\widetilde{D\SW}_{(A,\Phi)}\)
is surjective.
\end{proof}
Finally, we focus on the density of the restriction map
from the kernel of \(\widetilde{D\SW}\) to one boundary component.
The proof of this proposition utilizes some of the ideas
we have just seen (cf. \cite{Lip2008}*{Lemma 5}).
\begin{proposition}
[density of moduli on one boundary component]
\label{prop:extended-DSW-moduli-boundary-dense}
Assume the boundary \(Y\)
has at least two connected components
and let \(Y_0 \subset Y\) be any one of these components.
Then the restriction
\begin{equation*}
R : \ker\widetilde{D\SW}_{(A,\Phi)} \to \TangThree{Y_0}
\end{equation*}
is dense.
\end{proposition}
\begin{proof}
Assume, by contradiction,
that it is not dense and choose a~nonzero element
\(v \in \TangThree{Y_0}\)
which is \(L^2_{1/2}\)\hyp{}perpendicular to its image.
Since \(\widetilde{D\SW}_{(A,\Phi)}\)
is surjective by
\autoref{prop:extended-DSW-surjective},
the map
\begin{align*}
\widetilde{D\SW}_{(A,\Phi)} \oplus \Pi_v R :&
\TangFour{X} \\
\to& L^2(X;i\mathbb{R}) \oplus \SWDomain{X}
\oplus \mathbb{C} v
\end{align*}
has finite\hyp{}dimensional cokernel,
where \(\Pi_v\) is the \(L^2_{1/2}\)\hyp{}projection
onto \(v \in \TangThree{Y_0}\).
From the definition of \(v\) it follows that
\( \widetilde{D\SW}_{(A,\Phi)} \oplus \Pi_v R \)
is not surjective, since otherwise there would be
an~element \(w \in \TangFour{X}\)
which solves \(\widetilde{D\SW}_{(A,\Phi)}(w) = 0\)
such that \(\Pi_v(R(w)) \neq 0\).
Therefore, we can pick an~element \( (a,v) \)
which is orthogonal to the image of
\( \widetilde{D\SW}_{(A,\Phi)} \oplus \Pi_v R \),
where \(a \in L^2(X;i\mathbb{R}) \oplus \SWDomain{X}\).
As in the proof of \autoref{prop:extended-DSW-surjective},
attach a~cylindrical end along \(Y\)
to get \(X^\ast = X \cup [0,\infty) \times Y \).
Extend \(a\) to \(a^\ast\) by \(0\) on \( [0,\infty) \times Y \).
Since \(\tilde K\) is a~certain multiplication by
\( p = (A - A_0,\Phi) \in L^2_1( i \Omega^1(X) \oplus \Gamma(S^+_X) )\),
extend \(p\) to
\(p^\ast \in L^2_1(i \Omega^1(X^\ast) \oplus
\Gamma(S^+_{X^\ast}))\) in an~arbitrary way to get
\(\tilde K\) defined on \(X^\ast\).
By \autoref{thm:multiplication},
\(\widetilde{K}\) is a~compact operator \(L^2_1 \to L^2\).
Since \((a,v) \perp \im \mleft(\widetilde{D\SW}_{(A,\Phi)}
\oplus \proj_v R \mright)\),
we get that \(a^\ast\) is a~weak solution to
\((\tilde D^\ast + \tilde K^\ast) a^\ast = 0\)
on the interior of \(X^\ast_1 = X^\ast \setminus ([0,\infty) \times Y_0)\).
Take any compact set \(C \subset X^\ast_1\)
and a~smooth bump function \(\eta : X^\ast \to [0,1]\)
such that \(\eta|_C = 1\)
and \(\supp \eta \subset X^\ast_1\).
It follows that \( (\tilde D^\ast + \tilde K^\ast)(\eta a^\ast) = \rho^\ast(d \eta) a^\ast \in
L^2(X^\ast) \).
Therefore by the \namedref{thm:low-regularity}
we get that \(\eta a^\ast \in L^2_1(X^\ast)\),
and in particular \(a^\ast \in L^2_1(\mathring{C})\).
Varying \(C\) we obtain
\(a^\ast \in L^2_{1,loc}(X^\ast_1)\).
Since \(a^\ast \equiv 0\)
on \([0,\infty) \times (Y \setminus Y_0) \subset X_1^\ast\),
\autoref{cor:unique-cont-dirac-4d}
implies that \(a^\ast = 0\) on \(X^\ast_1\),
so \(a=0\) on \(X\).
This is a~contradiction since we can extend
\(v \in \TangThree{Y_0}\)
to \(\tilde v \in \TangFour{X}\)
such that \(R(\tilde v) = v\).
Then
\begin{align*}
0 =&
\langle (\widetilde{D\SW}_{(A,\Phi)}(\tilde v), \Pi_v R \tilde v),
(a, v) \rangle_{L^2_{1/2}}
\\ =&
\langle (\widetilde{D\SW}_{(A,\Phi)}(\tilde v), v),
(0, v) \rangle_{L^2_{1/2}}
\\ =&
0 + \langle v,v\rangle_{L^2_{1/2}}
\end{align*}
implying \( \lVert v\rVert_{L^2_{1/2}} = 0\),
which contradicts
the assumption that \(v \neq 0\).
\end{proof}
\subsection{Regularity and semi-infinite-dimensionality of moduli spaces}
\label{sec:regularity-and-semi-infinite-dimensionality-of-moduli-spaces}
We turn our focus to the operator \(D\SW\).
The main difficulty in transferring the results of
\autoref{sec:properties-of-the-extended-linearized-SW-operator}
from \(\widetilde{D\SW}\) to \(D\SW\)
is the presence of the Coulomb condition
on both \(X\) and \(\partial X\).
The split gauge condition introduces an~additional twist to the story.
A~key fact is that the differential of the gauge group action at \(e\)
(cf. \autoref{lem:gauge-action-4d})
preserves the kernel of \(D\SW\)
at a~solution \( (A,\Phi) \).
\begin{lemma}
\label{lem:gauge-algebra-action}
Assume \(\Dirac_A \Phi = 0\).
Then for any \(f \in L^2_2(i \Omega^0(X))\) we have
\( D\SW_{(A,\Phi)}(df,-f \Phi) = 0\).
\end{lemma}
\begin{proof}
We compute:
\begin{align*}
(\hat D + \hat K)&(df,-f \Phi)
\\
=& ( \rho^{-1}( f \Phi \Phi^\ast + \Phi (f \Phi)^\ast)_0,
\rho(df) \Phi - \Dirac_{A_0} (f \Phi) - \rho(\diff{A}) f \Phi)
\\ =&
( \rho^{-1}( f \Phi \Phi^\ast - \Phi f \Phi^\ast)_0,
\rho(df) \Phi - \rho(df) \Phi - f(\Dirac_{A_0}\Phi + \rho(\diff{A})\Phi) )
\\ =&
(0, - f \Dirac_A \Phi)
\\ =& (0,0),
\end{align*}
which finishes the proof.
\end{proof}
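At least heuristically, this computation is the infinitesimal form
of the gauge equivariance of \(\SW\):
for the path \(u_t = e^{tf}\) one has
\(\tfrac{d}{dt}\big|_{t=0}\, u_t(A,\Phi) = (-df, f\Phi)\),
while \(\SW(u_t(A,\Phi))\) differs from \(\SW(A,\Phi)\)
only by \(u_t\) acting on the spinor component,
so differentiating at \(t=0\) recovers
\(D\SW_{(A,\Phi)}(df,-f\Phi) = (0, -f \Dirac_A \Phi)\).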
Following the idea of \cite{Kha2015}*{Proposition 3.1},
we deduce the semi\hyp{}infinite\hyp{}dimensionality
and compute the index of \(D\SW\) from
\autoref{prop:extended-DSW-semi-inf-dim}.
These methods will be utilized to prove further results in this section,
too.
\begin{proposition}
[semi-infinite-dimensionality of \(D\SW\)]
\label{prop:DSW-semi-inf-dim}
The operator
\begin{align}
\label{eqn:DSW-APS}
D_{(A,\Phi)}\SW \oplus \proj^- R
:& \TangCoulFour[s]{X} \\
\nonumber
& \to \SWDomain{X} \oplus H^-(Y,\mathfrak{s})
\end{align}
is Fredholm of index
\begin{equation}
\label{eqn:DSW-APS-index}
2 \ind_{\mathbb{C}} \Dirac_{A_0}^+ + b_1(X) - b^+(X) - b_1(Y).
\end{equation}
Moreover, the restriction
\( \proj^+ R : \ker\mleft(D_{(A,\Phi)}\SW\mright)
\to H^+(Y,\mathfrak{s}) \)
is compact, where
\( \ker\mleft(D_{(A,\Phi)}\SW\mright) \subset \TangCoulFour[s]{X} \).
\end{proposition}
\begin{proof}
Firstly, we compare the respective polarizations.
Recall that the decomposition of \(\TangThree{Y}\)
into \(\widetilde{H}^+(Y,\mathfrak{s}) \oplus \widetilde{H}^-(Y,\mathfrak{s})\)
is given by the eigenspaces of \(\widetilde{L}\).
Decomposing
\begin{equation*}
i \Omega^1(Y) \oplus \Gamma(S_Y) \oplus i \Omega^0(Y)
= \mleft(i \Omega^1_{cC}(Y) \oplus \Gamma(S_Y)\mright)
\oplus \mleft(i \Omega^1_C(Y) \oplus i \Omega^0(Y)\mright)
\end{equation*}
we see that \(\widetilde{L}\) decomposes as
\( \begin{pmatrix} \star d & 0 \\ 0 & \Dirac_{B_0} \end{pmatrix}
\oplus \begin{pmatrix} 0 & -d \\ -d^\ast & 0 \end{pmatrix} \).
Denote by \(\proj^\pm_1\) the spectral projections of
\(\begin{pmatrix}
0 & -d \\
-d^\ast & 0
\end{pmatrix}\)
in \(L^2_{1/2}(i \Omega^1_C(Y) \oplus i \Omega^0(Y))\).
It follows that \( \widetilde{\proj}^\pm = \proj^\pm \oplus \proj^\pm_1\).
This is enough to prove the statement about compactness.
Indeed, the map
\(\proj^+R : \ker\mleft( D_{(A,\Phi)}\SW \mright)
\to H^+(Y,\mathfrak{s})\)
is just the composition
\begin{equation*}
\ker\mleft( D_{(A,\Phi)}\SW \mright)
\hookrightarrow
\ker\mleft( \widetilde{D\SW}_{(A,\Phi)} \mright)
\xrightarrow{\widetilde{\proj}^+R}
\widetilde{H}^+(Y,\mathfrak{s})
\xrightarrow{\proj^+}
H^+(Y,\mathfrak{s})
\end{equation*}
where the map in the middle is compact by
\autoref{prop:extended-DSW-semi-inf-dim}.
For Fredholmness, denote by
\begin{equation*}
\Pi_C : L^2_{1/2}(\Omega^1_C(Y) \oplus \Omega^0(Y))
\to L^2_{1/2}(\Omega^1_C(Y) \oplus \Omega^0(Y))
\end{equation*}
the projection onto \(L^2_{1/2}(\Omega^1_C(Y)) \oplus \mathbb{R}^{\pi_0(Y)}\)
(where \(\mathbb{R}^{\pi_0(Y)} \subset \Omega^0(Y)\)
is the space of locally constant functions)
along \(\{0\} \oplus L^2_{1/2}(\Omega^0_0(Y))\)
(where \( \Omega^0_0(Y) = \{ f \in \Omega^0(Y) | \forall_i \int_{Y_i} f = 0 \}\)).
This projection can be used to define the split Coulomb slice
for the orthogonal splitting \(s_\perp\)
(\autoref{def:orthogonal-splitting}).
Precisely, we have
\begin{equation*}
\TangCoulFour[s_\perp]{X} = \ker\mleft(d^\ast \oplus (\Pi_C R)\mright).
\end{equation*}
Khandhawit \cite{Kha2015}*{Proposition 3.1}
proves that \(\im \Pi_1^-\) and \(\ker \Pi_C\) are complementary,
and then \cite{KMbook}*{Proposition 17.2.6} implies that
\begin{align}
\label{eqn:extended-DSW-with-modified-boundary-condition}
&\widetilde{D\SW}_{(A,\Phi)} \oplus (\Pi^- R) \oplus (\Pi_C R):
\TangFour{X}
\\&\nonumber
\to \SWDomain{X}
\oplus \SForms[C]{1/2}{Y} \oplus \mathbb{R}^{\pi_0(Y)}
\oplus H^-(Y,\mathfrak{s})
\end{align}
is Fredholm since \eqref{eqn:extended-DSW-APS} is Fredholm.
Since \(\Pi_C|_{\im \Pi_1^-}\) is an~isomorphism
onto \(\im \Pi_C\), the proof of \cite{KMbook}*{Proposition
17.2.6} implies that the index of
\eqref{eqn:extended-DSW-with-modified-boundary-condition}
is the same as the index of
\eqref{eqn:extended-DSW-APS}.
The following lemma is a~simple exercise in Fredholm theory.
\begin{lemma}
\label{lem:Fredholm-pullback}
Let \((F,G):H \to A \oplus B\) be Fredholm.
Then \(\tilde F = F|_{\ker G}: \tilde H = \ker G \to A\) is Fredholm
and has index equal to \(\ind( (F,G) ) + \dim \coker G \).
\end{lemma}
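Here is a~sketch of the argument, under the additional assumption
(satisfied in our application) that \(\coker G\) is finite\hyp{}dimensional
and all spaces involved are Hilbert spaces.
Write \(H = \ker G \oplus K\) with \(G|_K : K \to \im G\)
an~isomorphism, and \(B = \im G \oplus C\)
with \(\dim C = \dim \coker G\).
In these decompositions \((F,G)\) becomes upper block\hyp{}triangular
with diagonal blocks \(\tilde F\) and \(G|_K\),
whence \(\tilde F\) is Fredholm and
\begin{equation*}
\ind\mleft( (F,G) \mright)
= \ind \tilde F + \ind\mleft( G|_K : K \to \im G \oplus C \mright)
= \ind \tilde F - \dim \coker G.
\end{equation*}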
\autoref{lem:Fredholm-pullback} implies that the operator
\( D_{(A,\Phi)}\SW \oplus (\Pi^- R) \)
is Fredholm as a~map
\(\ker( d^\ast \oplus (\Pi_C R)) =
\TangCoulFour[s_\perp]{X}
\to
\SWDomain{X} \oplus H^-(Y,\mathfrak{s})\)
and has index
\begin{align*}
\ind&\mleft(\widetilde{D\SW}_{(A,\Phi)} \oplus (\Pi^- R) \oplus (\Pi_C R)\mright)
+ \dim \coker( d^\ast \oplus (\Pi_C R))
\\=& [ 2 \ind_{\mathbb{C}} \Dirac_{A_0}^+
- b_0(X) + b_1(X) - b^+(X) - b_1(Y) ] + b_0(X)
\\=& 2 \ind_{\mathbb{C}} \Dirac_{A_0}^+
+ b_1(X) - b^+(X) - b_1(Y).
\end{align*}
Thus, we have proven the Proposition for a~particular splitting,
\(s=s_\perp\).
For any splitting \(s\),
the inclusion \(\Omega^1_{s}(X) \hookrightarrow \Omega^1_{CC}(X)\)
is of codimension
\(\dim(H^1(X, Y; \mathbb{R}))
- \dim(H^1(X; \mathbb{R})) = b_0(Y) - 1\).
Therefore, the split double Coulomb slice
\(\TangCoulFour[s]{X}\)
is a~finite\hyp{}codimensional subspace of
the ``full'' double Coulomb slice
\(\TangFour{X}\).
Hence the Fredholmness of \eqref{eqn:DSW-APS}
for \(s = s_\perp\)
implies the Fredholmness of
\begin{align}
\label{eqn:DSW-APS-double}
D_{(A,\Phi)}\SW \oplus \proj^- R
:& \TangCoulFour{X} \\
\nonumber
& \to \SWDomain{X} \oplus H^-(Y,\mathfrak{s})
\end{align}
and this, in turn, implies the Fredholmness of
\eqref{eqn:DSW-APS} for any splitting \(s\).
Moreover, the index of \eqref{eqn:DSW-APS-double}
is equal to \(b_0(Y) - 1\)
plus \eqref{eqn:DSW-APS-index}
for any splitting \(s\), finishing the proof.
\end{proof}
We turn to deducing the surjectivity of \(D\SW\)
from the surjectivity of \(\widetilde{D\SW}\).
\begin{proposition}
[surjectivity of \(D\SW\)]
\label{prop:DSW-surjective}
For any gauge splitting \(s\), the differential
\begin{equation*}
D_{(A,\Phi)}\SW
: \TangCoulFour[s]{X} \to \SWDomain{X}
\end{equation*}
is surjective.
\end{proposition}
\begin{proof}
We will prove a~stronger statement,
that the extended differential \(\widetilde{D\SW}_{(A,\Phi)}\),
together with the exact part of the restriction to the boundary
and the projection onto \(V_s\),
\begin{align}
\label{eqn:DSW-and-boundary-exact-surjective}
(\widetilde{D\SW}_{(A,\Phi)},
\proj_d R, \proj_{V_s})
&=
(D\SW_{(A,\Phi)}, d^\ast, \proj_d R, \proj_{V_s}) : \\
\nonumber
\TangFour{X}&
\to \\
\nonumber
\SWDomain{X}
&\oplus L^2(i \Omega^0(X))
\oplus \SForms[C]{1/2}{Y}
\oplus V_s
\end{align}
is surjective,
where \(\proj_d : \SForms{1/2}{Y} \to \SForms[C]{1/2}{Y}\)
is the projection along \( \SForms[cC]{1/2}{Y} \)
and \(\proj_{V_s} : \Omega^1(X) \to V_s = (\im s)^\perp\)
is the orthogonal projection.
The Proposition will follow since
\(\TangCoulFour[s]{X} = \ker(d^\ast, \proj_d R, \proj_{V_s})\).
\autoref{prop:extended-DSW-surjective} implies that
\(\widetilde{D\SW}_{(A,\Phi)}
: \TangFour{X}
\to
\SWDomain{X} \)
is surjective.
To prove surjectivity of \eqref{eqn:DSW-and-boundary-exact-surjective}
it thus remains to prove that
\(\Pi_d R: \ker\mleft( \widetilde{D\SW}_{(A,\Phi)}\mright) \to
\SForms[C]{1/2}{Y}\)
is surjective and that
\(\Pi_{V_s}: \ker \mleft(\widetilde{D\SW}_{(A,\Phi)}, \Pi_d R\mright) \to V_s\)
is surjective.
We prove that
\(\Pi_d R|_{\ker \widetilde{D\SW}_{(A,\Phi)}}\)
is surjective.
Take any \(g \in L^2_{3/2}(i \Omega^0(Y))\)
representing a~given element
\(dg \in L^2_{1/2}( i \Omega^1_C(Y))\).
Take the unique \(f \in L^2_2(i \Omega^0(X))\)
such that \(\Delta f = 0\) and \(f|_Y = g\).
Then \(\Pi_d R(df, -f \Phi) = df|_Y = dg\).
The required surjectivity follows since
\( (df,-f \Phi) \in \ker \widetilde{D\SW}_{(A,\Phi)}\),
which follows from \(d^\ast d f = \Delta f = 0\) and
\autoref{lem:gauge-algebra-action}.
Similarly we prove
\(\Pi_{V_s}|_{\ker \mleft(\widetilde{D\SW}_{(A,\Phi)}, \Pi_d R\mright)}\)
is surjective.
By \eqref{eqn:decomposing-harmonics} and definition of \(V_s\),
the orthogonal projection \(d(\mathcal{H}(X)) \to V_s\)
is an~isomorphism and therefore for any
\(v \in V_s\) there is
\(f \in \mathcal{H}(X)\) such that \(\proj_{V_s}(df)=v\).
Moreover
\(\widetilde{D\SW}(df,-f\Phi) = 0\) and
\(\proj_d R(df,-f\Phi) = 0\), as wished.
\end{proof}
Finally, we prove the density of the restriction map to a~connected component
of \(Y\).
\begin{proposition}
[density of moduli on one boundary component]
\label{prop:DSW-moduli-boundary-dense}
The restriction
\begin{equation*}
R : \ker\mleft(D_{(A,\Phi)}\SW\mright) \to \TangCoulThree{Y_0}
\end{equation*}
to a~connected component \(Y_0 \subset Y\)
is dense,
where we consider \( \ker\mleft(D_{(A,\Phi)}\SW\mright)
\subseteq \TangCoulFour[s]{X} \).
\end{proposition}
\begin{proof}
Take any
\( (b,\psi) \in \TangCoulThree{Y_0}
\subset \TangThree{Y_0}\).
It follows from \autoref{prop:extended-DSW-moduli-boundary-dense}
that we can take a~sequence
\( (\tilde a_k, \phi_k) \in \ker \mleft( \widetilde{D\SW}_{(A,\Phi)} \mright) \)
such that \( R_{Y_0}(\tilde a_k, \phi_k) \to (b,\psi)\)
in \(\TangThree{Y_0}\).
Using the decomposition \eqref{eqn:CC-exact-decomposition}
we can write \(\tilde a_k = a_k + d f_k\)
for \(f_k \in i \Omega^0(X)\) and \(a_k \in \Omega^1_{s}(X)\).
Since \(Y_0\) is connected,
we can change \(f_k\) by a~constant to obtain
\(\int_{Y_0} f_k = 0\).
Decomposing \(R_{Y_0}(\tilde a_k) = R_{Y_0}(a_k) + R_{Y_0}(d f_k)\)
we get that \(R_{Y_0}(a_k) \to b\) and \(R_{Y_0}(d f_k) \to 0\).
This together with \(\int_{Y_0} f_k = 0\)
implies that \(f_k|_{Y_0} \to 0\)
and therefore
\begin{align*}
(b,\psi)
=&
\lim_{k \to \infty} R_{Y_0}(\tilde a_k, \phi_k)
=
\lim_{k \to \infty} R_{Y_0}(a_k,\phi_k + f_k \Phi)
\end{align*}
which finishes the proof because
\(a_k \in \SForms[s]{1}{X}\)
and
\(D\SW(a_k,\phi_k + f_k \Phi) = D\SW(a_k + df_k, \phi_k) = 0\)
by \autoref{lem:gauge-algebra-action}.
\end{proof}
The results of \autoref{prop:DSW-semi-inf-dim},
\autoref{prop:DSW-surjective} and
\autoref{prop:DSW-moduli-boundary-dense}
can be summarized as follows.
\moduli
\section{Gluing along a~boundary component}
\label{sec:gluing-along-a-boundary-component}
This section is devoted to the proof of the main result of this article,
the \namedref{thm:composing-cobordisms}, which relates the moduli spaces
of solutions on \(X_1\), \(X_2\) and \(X = X_1 \cup_{Y} X_2\),
where \(Y\) is a~rational homology sphere,
oriented as a~component of the boundary of \(X_1\).
Under the identification
\(\CoulThree{Y} \simeq \Configuration[cC]{-Y}{\overline{\mathfrak{s}}}\)
there are \(S^1\)\hyp{}equivariant twisted restriction maps
\(R_{\tau_i,Y} : \SWModuli[i]{s_i}{X_i} \to \CoulThree{Y}\),
where \(\mathfrak{s} = \hat{\mathfrak{s}}|_Y\).
One can expect the fiber product
\begin{align*}
\SWModuli[1]{s_1}{X_1}
&\times_{Y} \SWModuli[2]{s_2}{X_2} =
\\
= \{
&(A_1, \Phi_1, A_2, \Phi_2) \in
\SWModuli[1]{s_1}{X_1} \times \SWModuli[2]{s_2}{X_2}
\\
&| R_{\tau_1,Y}(A_1, \Phi_1) =
R_{\tau_2,Y}(A_2,\Phi_2)\}
\end{align*}
to be diffeomorphic to \(\SWModuli{s}{X}\),
and this turns out to be true.
One would also like to have this
map intertwine the twisted restriction maps
to \(\partial X\),
but this is a~bit too much:
the splittings and twistings
\( (s,\tau)\), \((s_1,\tau_1)\), \((s_2,\tau_2)\)
need to enjoy certain compatibility, and even then
the restriction maps
may not match on the nose
but need to be homotoped to each other.
This reflects the fact that we did not quotient by the action of \(S^1\)
on the configuration spaces.
The proof utilizes the following fact which is of independent interest.
Let \(X' \subset X\) be a~submanifold,
the closure of which is contained in the interior of \(X\).
Then the restriction map from \(\SWModuliFree{CC}{X}\)
to \(L^2_k\)\hyp{}configurations on \(X'\) is well\hyp{}defined and \textit{smooth}
for any \(k \geq 0\).
Well\hyp{}definedness follows from a~standard argument,
but proving the smoothness of this map turns out to be
a~surprisingly delicate task which we tackle in
\autoref{sec:smoothness-of-restrictions}.
The same strategy should work to prove smoothness of restriction
maps to interior submanifolds
for other types of moduli spaces appearing in gauge theory,
e.g., for the space of anti\hyp{}self\hyp{}dual connections on
\(G \hookrightarrow P \to X\).
The key is the ellipticity of the equations
together with the gauge fixing.
\subsection{Smoothness of restrictions}
\label{sec:smoothness-of-restrictions}
Assume \(X' \subset \mathring{X}\) is a~submanifold
with closure contained in \(\mathring X\).
The goal is to show (cf. \autoref{thm:restriction-is-smooth-for-solutions})
that the restriction map
\(R: \SWModuliFree{CC}{X} \to
\Configuration[k]{X'}{\hat{\mathfrak{s}}|_{X'}}\)
is smooth for any \(k\),
where
\(\Configuration[k]{X'}{\hat{\mathfrak{s}}|_{X'}}
= \ConfFourFull[k]{A_0|_{X'}}{X'}
\).
We restrict ourselves to the case \(k=2\),
but the same strategy may be used iteratively,
bootstrapping the result to any \(k\), if needed.
Due to \autoref{thm:trace} we may assume, without loss of generality,
that \(X'\) is of codimension \(0\).
Since we require the closure of \(X'\) to be contained in the interior
\(\mathring X\), we may as well assume that \(X'\) is a~closed submanifold.
The following fundamental fact shows that any element in the image
of the restriction map is itself a~smooth configuration.
\begin{lemma}
[interior smoothness of solutions]
\label{lem:interior-smoothness-of-solutions}
\cite{KMbook}*{Lemma 5.1.5}
Every \(\gamma \in \SWModuliFree{CC}{X}\)
is smooth on \(\mathring{X}\).
\end{lemma}
We are ready to prove the main theorem of this subsection.
Note that the surjectivity assumption is satisfied whenever
\(\partial X \neq \varnothing\)
due to \autoref{prop:DSW-surjective}.
\begin{theorem}
[restriction is smooth on solution sets]
\label{thm:restriction-is-smooth-for-solutions}
Assume that for any \((A,\Phi) \in \SWModuliFree{CC}{X}\)
the operator \(D\SW_{(A,\Phi)}\) is surjective.
Then the restriction map
\(R: \SWModuliFree{CC}{X} \to
\Configuration[k]{X'}{\hat{\mathfrak{s}}|_{X'}}\)
is smooth.
\end{theorem}
\begin{proof}
\newcommand{\SPC}{L^2_{2,X''}\bigl(i \Omega^1_{CC}(X) \oplus%
\Gamma\mleft(S_X\mright)\bigr)}
\newcommand{\SPCnorm}[1]{\bigl\|#1\bigr\|_{2,X''}}
\newcommand{\SPDom}{L^2_{1,X''}\bigl(i \Omega^+(X) \oplus%
\Gamma\mleft(S_X^+\mright)\bigr)}
Choose a~compact codimension\hyp{}\(0\) submanifold
\(X'' \subset \mathring{X}\) such that \(X' \subset \mathring{X''}\).
Let us introduce an intermediate space
\(\SPC\)
defined as the completion of
\(i \Omega^1_{CC}(X) \oplus \Gamma(S_{X}^+)\)
with respect to the norm
\( \SPCnorm{v} = \sqrt{\lVert \hat D v\rVert^2_{L^2_1(X'')} +
\lVert v\rVert^2_{L^2_1(X)}}\),
so that \(\SPC\) is a~Hilbert space.
Define
\begin{equation*}
\SWModuliFree{CC,X''}{X}
= \SWModuliFree{CC}{X} \cap \mleft((A_0,0)+\SPC\mright),
\end{equation*}
the set of solutions to the Seiberg\hyp{}Witten equations
in the corresponding configuration space.
Since \(\tilde D\) is elliptic and \( \tilde D v = (\hat D v, 0) \),
by \autoref{thm:garding} the restriction map
\[\SPC \to L^2_2(i \Omega^1(X') \oplus \Gamma(S_{X'}^+))\]
is continuous linear (thus smooth).
It thus suffices to prove that
the moduli space \(\SWModuliFree{CC,X''}{X}\)
is a~smooth submanifold of the configuration space
\((A_0,0)+\SPC\)
and that the identity map
\(\mathrm{Id}_{2}: \SWModuliFree{CC}{X}
\to \SWModuliFree{CC,X''}{X}\)
is well\hyp{}defined, continuous and smooth.
Firstly, it is well\hyp{}defined by
\autoref{lem:interior-smoothness-of-solutions}.
Secondly, we prove that
\(\SWModuliFree{CC,X''}{X}\)
is a~smooth submanifold of
\((A_0,0)+\SPC\).
Define
\(\SPDom\)
to be the Hilbert space obtained as the completion of
\(i \Omega^+(X) \oplus \Gamma(S_X^+)\)
with respect to the norm
\( \lVert v\rVert_{L^2(X) \cap L^2_1(X'')} = \sqrt{\lVert v\rVert^2_{L^2(X)}
+ \lVert v\rVert^2_{L^2_1(X'')}}\).
By the \namedref{thm:IFT}, it suffices to prove the following Lemma.
\begin{lemma}
\label{lem:DSW-surjective-higher-regularity}
The differential
\begin{equation*}
D\SW_{(A,\Phi)} : \SPC
\to \SPDom
\end{equation*}
is surjective at each \( (A,\Phi) \in
\SWModuliFree{CC,X''}{X}\).
\end{lemma}
\begin{subproof}
We assumed that
the operator
\(
D\SW_{(A,\Phi)}
\)
is surjective.
Choose any \(w \in \SPDom\)
and pick a~solution \(v \in \TangCoulFour{X}\)
to \(D\SW_{(A,\Phi)}(v) = w\).
Then \(\hat D v|_{X''} = - \hat K v |_{X''} + w|_{X''}\)
is in \(L^2_1(X'')\)
since \(w|_{X''} \in L^2_1(X'')\), \(v|_{X''} \in L^2_1(X'')\)
and \(\hat K|_{X''}\) is a~certain multiplication by
\( (A - A_0,\Phi)|_{X''} \), which is smooth
by \autoref{lem:interior-smoothness-of-solutions}.
\end{subproof}
Finally, it remains to prove that the identity map
\[\SWModuliFree{CC}{X}
\to \SWModuliFree{CC,X''}{X}\]
is smooth.
We start by identifying the tangent spaces
at \(p = (A,\Phi)\); we have
\begin{align*}
T_p \SWModuliFree{CC}{X}
&= \ker D\SW_{p}, \\
\qquad T_p \SWModuliFree{CC,X''}{X}
&= \mleft(\ker D\SW_p\mright) \cap {\SPC}.
\end{align*}
\begin{lemma}
[equivalence of the tangent space norms]
\label{lem:tangent-spaces-isometry}
The identity map
\(T_p \SWModuliFree{CC}{X}
\to T_p \SWModuliFree{CC,X''}{X}\)
is well\hyp{}defined and an~isomorphism (the two norms are equivalent).
\end{lemma}
\begin{subproof}
Take any \(v \in
T_p \SWModuliFree{CC}{X}\);
in particular, \(D\SW_{(A,\Phi)}(v)=0\).
Well\hyp{}definedness follows since
\( \hat D v|_{X''} = - \hat K v|_{X''}\)
and as before, \(\hat K|_{X''}\) is a~multiplication by
a~smooth configuration on \(X''\).
This also implies
\begin{align*}
\lVert v\rVert^2_{L^2_1(X)}
\leq \SPCnorm{v}^2
=& \lVert v\rVert^2_{L^2_1(X)} + \lVert \hat D v\rVert^2_{L^2_1(X'')}
\\=& \lVert v\rVert^2_{L^2_1(X)} + \lVert \hat K v\rVert^2_{L^2_1(X'')}
\\\leq& \lVert v\rVert^2_{L^2_1(X)} + C_p \lVert v\rVert^2_{L^2_1(X'')}
\\\leq& (1+C_p) \lVert v\rVert^2_{L^2_1(X)}
\end{align*}
which proves that the two norms are equivalent.
\end{subproof}
While the \(L^2(X)\) norm is not complete on \(\TangCoulFour{X}\),
the \(L^2(X)\)\hyp{}orthogonal complement \(H\) to
\(T_p \SWModuliFree{CC}{X}\)
is a~closed subspace of \(\TangCoulFour{X}\)
such that \(H + T_p \SWModuliFree{CC}{X} = \TangCoulFour{X}\)
and \(H \cap T_p \SWModuliFree{CC}{X} = \{0\}\),
thus by the open mapping theorem
\[\TangCoulFour{X} = T_p\SWModuliFree{CC}{X} \oplus H.\]
By the \namedref{thm:IFT}
there is a~neighborhood \(U\) of \( p \) such that the affine projection
\begin{equation*}
U \cap \SWModuliFree{CC}{X} \to
U \cap \mleft( (A_0,0) + T_p \SWModuliFree{CC}{X}\mright)
\end{equation*}
along \(H\) is a~diffeomorphism.
Similarly, the \namedref{thm:IFT}
implies that there is a~neighborhood \(V\) of \(p\)
such that the affine projection
\begin{equation*}
V \cap \SWModuliFree{CC,X''}{X}
\to V \cap \mleft( (A_0,0) + T_p \SWModuliFree{CC,X''}{X}\mright)
\end{equation*}
along \(H' = H \cap \SPC\) is a~diffeomorphism
since \(H'\) is the \(L^2(X)\)\hyp{}orthogonal complement
to \(T_p \SWModuliFree{CC,X''}{X}\).
Thus the identity map
\(\SWModuliFree{CC}{X}
\to \SWModuliFree{CC,X''}{X}\)
near \(p\) factors as
\begin{align*}
U \cap \SWModuliFree{CC}{X}
& \to
U \cap \mleft( (A_0,0) + T_p \SWModuliFree{CC}{X}\mright)
\\ & \xrightarrow{\mathrm{id}}
V \cap \mleft( (A_0,0) + T_p \SWModuliFree{CC,X''}{X}\mright)
\\ & \to
V \cap \SWModuliFree{CC,X''}{X}
\end{align*}
where the middle identity map is smooth by
\autoref{lem:tangent-spaces-isometry}
and the two other maps are smooth since they are parametrizations
coming from the \namedref{thm:IFT}, as we just showed.
\end{proof}
\subsection{Proof of the gluing theorem}
\label{sec:proof-of-the-gluing-theorem}
We are ready to prove the gluing theorem.
Let \(X = X_1 \cup_{Y} X_2\)
with \(Y\) connected and \(b_1(\partial X_i) = 0\).
Let \(\hat{\mathfrak{s}}\) be a~\spinc{} structure
and \(A_0\) be a~reference \spinc{} connection on \(X\).
Let the restrictions of \(\hat{\mathfrak{s}}\) be the \spinc{} structures
used to define the configuration spaces,
and the restrictions of \(A_0\) be the reference connections.
Denote \(Y_1 = \partial X_1 \setminus Y\),
\(Y_2 = \partial X_2 \setminus Y\),
\(\hat{\mathfrak{s}}_i = \hat{\mathfrak{s}}|_{X_i}\).
Fix gauge splittings
\(s,s_1,s_2\) and twistings \(\tau, \tau_1, \tau_2\) on \(X, X_1, X_2\).
Denote by \(s_{\mathbb{Z}}, s_{1,\mathbb{Z}}, s_{2,\mathbb{Z}}\)
the associated integral splittings
given by \autoref{prop:twistings-are-integral-splittings}.
We say they are \emph{compatible}
if \(s_{\mathbb{Z}}\) corresponds to \((s_{1,\mathbb{Z}}, s_{2,\mathbb{Z}})\)
under the following identification.
\begin{proposition}
[integral splittings on a~composite cobordism]
\label{prop:integral-splittings-on-a-composite-cobordism}
There is a~canonical identification between the set of integral
splittings on \(X\) and the set of pairs of integral splittings
on \(X_1\) and \(X_2\).
\end{proposition}
\begin{proof}
Choose an~integral splitting \(s\).
Take any \(a \in \mathcal{H}^1_D(X)\).
Denote by \(\tilde{a}_i\) its restriction to \(X_i\).
For each \(i\) there is a~unique
\(f_i \in L^2_2(\Omega^0(X_i))\)
such that \(\Delta f_i = 0\),
\(f_i|_Y = G_d \iota_Y^\ast(a)\),
and \(f_i|_{Y_i} = 0\).
The resulting
\(a_i = \tilde{a}_i - df_i\)
is in \(\mathcal{H}^1_D(X_i)\),
thus we get a~map
\begin{equation*}
R_{\mathcal{H}} : \mathcal{H}^1_D(X) \to
\mathcal{H}^1_D(X_1) \times \mathcal{H}^1_D(X_2)
\end{equation*}
sending \(a\) to \( (a_1,a_2)\).
Note that \(\mathcal{H}^1_D(X_i) \subset \Omega^1_{CC}(X_i)\)
and therefore \(R_{\mathcal{H}}\)
coincides with doing the gauge fixing of
\autoref{lem:split-gauge-fixing-in-4d}
on both components, i.e.,
\( a \mapsto (\proj_{CC}(a|_{X_1}), \proj_{CC}(a|_{X_2})) \).
The cohomology class \([a] \in H^1(X; \mathbb{R})\)
restricts to \( ([\tilde{a}_1],[\tilde{a}_2]) = ( [a_1], [a_2])
\in H^1(X_1;\mathbb{R}) \times H^1(X_2;\mathbb{R})\).
It follows that the composition
\begin{align*}
H^1(X_1;\mathbb{R})
&\times H^1(X_2;\mathbb{R})
\xrightarrow{\iota_\ast} H^1(X; \mathbb{R})
\xrightarrow{s^H} \mathcal{H}^1_D(X)
\\
& \xrightarrow{R_{\mathcal{H}}} \mathcal{H}^1_D(X_1) \times \mathcal{H}^1_D(X_2)
\to
H^1(X_1;\mathbb{R}) \times H^1(X_2;\mathbb{R})
\end{align*}
is the identity, where \(\iota_\ast\)
is the inverse of the restriction map
\(H^1(X; \mathbb{R})\to
H^1(X_1;\mathbb{R}) \times
H^1(X_2;\mathbb{R})\)
(invertible by Mayer\hyp{}Vietoris).
We can thus choose \( (s_1^H,s_2^H) \)
to be the composition \(R_{\mathcal{H}} \circ s^H \circ \iota_\ast\).
Since \(s^H\) is integral, so are \(s_i^H\),
and by \autoref{prop:homological-classification-of-integral-splittings}
there exist unique integral splittings
\(s_1, s_2\) inducing \(s_1^H, s_2^H\).
On the other hand, given integral splittings \(s_1, s_2\)
on \(X_1, X_2\), we can choose \(s^H\) to be
\( (R_{\mathcal{H}})^{-1} \circ (s_1^H,s_2^H) \circ (\iota_\ast)^{-1}\)
and by \autoref{prop:homological-classification-of-integral-splittings}
there is a~unique integral splitting \(s\) inducing \(s^H\).
\end{proof}
\autoref{prop:DSW-surjective} guarantees that whenever \(\partial X \neq \varnothing\),
the moduli \(\SWModuli{s}{X}\) is a~smooth Hilbert manifold,
and the same follows for \(\SWModuli{s_i}{X_i}\).
We want to include the case when \(X\) is a~closed manifold,
when it is well\hyp{}known
that to achieve surjectivity one, in general, needs to
perturb the metric on \(X\)
or perturb the Seiberg\hyp{}Witten equations.
Therefore, we \emph{assume} that for any \((A,\Phi) \in \SW^{-1}(0)\)
the operator \(D\SW_{(A,\Phi)}\) is onto.
\begin{remark}
The careful reader may notice that we do not assume any transversality
of the maps \(R_{\tau_i,Y}\)
and thus the fiber product may not \textit{a~priori} be a~manifold.
That it is a~manifold follows from the proof of the theorem.
What is not proven here, but may be useful in other contexts,
is that the transversality
is indeed equivalent to \(D\SW_{(A,\Phi)}\) being surjective for all
\((A,\Phi) \in \SWModuli{s}{X}\).
\end{remark}
\gluing
\begin{proof}
By \autoref{cor:dependence-on-twistings}
we can assume, without loss of generality,
that \(s = s_{\mathbb{Z}}\) and \(s_i = s_{i,\mathbb{Z}}\),
as well as \(\tau \equiv 1\) and \(\tau_i \equiv 1\).
We thus drop \(\tau\)'s from the notation entirely.
The plan is as follows. We will construct a~map
\begin{equation*}
F : \SWModuliFree{s}{X}
\to \SWModuliFree[1]{s_1}{X_1} \times \SWModuliFree[2]{s_2}{X_2},
\end{equation*}
and a~homotopy
\begin{equation*}
H: \SWModuliFree{s}{X} \times [0,1]
\to \mleft( \CoulThree{Y_1} \mright) \times
\mleft( \CoulThree{Y_2} \mright)
\end{equation*}
between \(R\) and
\( \mleft(R_{Y_1} \times R_{Y_2}\mright) \circ F\).
Then we will prove
that \(F\) intertwines the actions of
\(\GaugeSplit[s](X)\)
and \(\GaugeSplit[s_1](X_1) \times \GaugeSplit[s_2](X_2)\),
and that \(H\) is \(\GaugeSplit[s](X)\)\hyp{}invariant;
thus, both \(F\) and \(H\) descend to
\(\SWModuli{s}{X}\).
Furthermore, we will prove \(F\) and \(H\)
are continuous and smooth,
and that \(F\) has image in the fiber product.
Finally, we will show that \(F\)
is a~smooth immersion onto
\begin{math}
\SWModuliFree[1]{s_1}{X_1} \times_{Y} \SWModuliFree[2]{s_2}{X_2}.
\end{math}
The \(S^1\)\hyp{}equivariance of \(F\) and \(H\) will be apparent
from the construction.
{\bf Step 1.}
We start by constructing \(F\),
proving its smoothness and that its image lies in the fiber product.
Take \((A,\Phi) \in \SWModuli{s}{X}\).
We would like to simply restrict it to the components \(X_i\)
and then put it into the split Coulomb slice.
By \autoref{lem:split-gauge-fixing-in-4d}, modulo \(S^1\)
there is a~unique way of doing that using a~contractible gauge
transformation. Here we make a~different choice
than in \autoref{lem:split-gauge-fixing-in-4d},
requiring \(\int_{Y} f = 0\) instead of \(\int_X f = 0\).
Precisely, for \(a \in \SForms{1}{X_i}\)
choose \(\tilde{u}^s_a = e^{f_a}\)
such that \(f_a \in L^2_2(i \Omega^0(X_i))\), \(\int_{Y} f_a = 0\)
and
\begin{math}
a
- \left(\tilde{u}^s_a\right)^{-1} d \tilde{u}^s_a
\in \SForms[s]{1}{X_i}.
\end{math}
Define
\begin{equation*}
F(A,\Phi)
= \mleft( \tilde{u}^s_{(A-A_0)|_{X_1}} (A,\Phi)|_{X_1},
\tilde{u}^s_{(A-A_0)|_{X_2}} (A,\Phi)|_{X_2} \mright).
\end{equation*}
Since
\(A-A_0\) is in the double Coulomb slice,
\((A-A_0)|_{X_1}\) (resp. \((A-A_0)|_{X_2}\)) is already coclosed
on \(X_1\) (resp. \(X_2\)) and on \(Y_1\) (resp. \(Y_2\)),
moreover \((A-A_0)|_{X_1}\) and \((A-A_0)|_{X_2}\) agree on \(Y\).
Denoting the restriction to \(Y\) by \(b_{(A,\Phi)} =
\iota^\ast_{Y}(A-A_0)\),
\autoref{thm:restriction-is-smooth-for-solutions}
together with \autoref{thm:trace}
imply that \(b_{(A,\Phi)}\) is smooth and depends smoothly on
\((A,\Phi) \in \SWModuliFree{s}{X}\) as an~element of \(L^2_{3/2}\).
Notice that \(f_i = f_{(A-A_0)|_{X_i}}\)
can be decomposed as \(f_i = f_i^{\partial} + f_i^s\),
where \(f_i^\partial\)
are the unique solutions to
\begin{align}
\label{eqn:fi}
f_i^\partial|_{Y_i} = 0, \quad f_i^\partial|_{Y} = g, &\quad \Delta f_i^\partial = 0,
\end{align}
where \(g = G_d \Pi_d b_{(A,\Phi)}\)
depends smoothly on \((A,\Phi)\) as an~element of \(L^2_{5/2}\).
Moreover, the map \(g \mapsto f_i^\partial\) is linear and continuous
as a~map \(L^2_{s+1/2} \to L^2_{s+1}\) for \(s \geq 0\),
so \(f_i^\partial\) depend smoothly on \((A,\Phi)\) as~elements of \(L^2_3\).
Furthermore, since
\(a|_{X_i} - df_i^\partial \in L^2_1(i\Omega^1_{CC}(X_i))\),
\(f_i^s\) are the unique functions with \(df_i^s \in \mathcal{H}^1_D(X_i)\)
such that \(a|_{X_i} - df_i^\partial - d f_i^s \in L^2_1(i \Omega^1_s(X_i))\)
and \(f_i^s|_Y = 0\).
By \autoref{rmk:continuous-gauge-fixing-within-double-Coulomb-slice},
\(f_i^s\) depend continuously on \(a|_{X_i} - d f_i^\partial \in L^2_1\).
This proves that \(f_i \in L^2_3\) depend continuously
and linearly on \(g \in L^2_{5/2}\),
thus smoothly on \(g\), and hence smoothly on \( (A,\Phi)\).
This establishes the well\hyp{}definedness and smoothness of \(F\).
That its image lies in the fiber product follows directly from the construction.
{\bf Step 2.} We proceed to construct \(H\).
By \eqref{eqn:fi}, the functions
\(f_i\) are locally constant on \(Y_i\).
We also have
\begin{equation*}
R_{Y_i}(F(A,\Phi)) = e^{f_i|_{Y_i}} \cdot
R_{Y_i}(A,\Phi)
\end{equation*}
by the construction of \(F\). Thus we can define
\begin{equation*}
H(A,\Phi,t) = \mleft( e^{t f_1|_{Y_1}}, e^{t f_2|_{Y_2}} \mright)
R(A,\Phi)
\end{equation*}
which at \(t=0\) coincides with
\(R(A,\Phi)\)
and at \(t=1\) coincides with
\( (R_{Y_1},R_{Y_2}) \circ F(A,\Phi)\).
{\bf Step 3.} We now investigate the equivariance of \(F\)
under the actions of gauge groups.
Let \(u \in \GaugeSplit[s](X)\).
From \autoref{lem:split-gauge-fixing-in-4d} it follows that
there is exactly one contractible
\(\tilde{u}_i = e^{\tilde{f}_i}\in \GaugeId(X_i)\)
which puts \(-u^{-1}du\) into \(\SForms[s]{1}{X_i}\)
with \(\int_{Y} \tilde{f}_i = 0\).
Equivalently, this is the unique \(\tilde{u}_i = e^{\tilde{f}_i} \in \GaugeId(X_i)\)
with \(\int_{Y} \tilde{f}_i = 0\)
such that \(\tilde{u}_i u|_{X_i} \in \GaugeSplit[s_i](X_i)\).
Define \(u_i = \tilde{u}_i u|_{X_i}\).
Since \(\tilde{u}_i\) is contractible,
we have \([\iota_{X_i}^\ast(u^{-1}du)] = [u_i^{-1}du_i]\)
in \(H^1(X_i; 2 \pi i \mathbb{Z})\).
Therefore the map
\(u \mapsto (u_1,u_2)\)
provides the~canonical isomorphism
\begin{equation}
\label{eqn:gauge-group-isomorphism}
\GaugeSplit[s](X) \simeq \GaugeSplit[s_1](X_1) \times
\GaugeSplit[s_2](X_2)
\end{equation}
which agrees with the isomorphism coming from the Mayer\hyp{}Vietoris
sequence \(H^1(X; \mathbb{Z})
\simeq H^1(X_1;\mathbb{Z}) \times H^1(X_2;\mathbb{Z})\).
From the construction of \(F(A,\Phi)\)
it follows that \((u|_{X_1}^{-1},u|_{X_2}^{-1}) F(u(A,\Phi))\)
differs from \(F(A,\Phi)\)
exactly by the factor of \( (e^{\tilde{f}_1},e^{\tilde{f}_2}) \).
Thus \( (u_1^{-1},u_2^{-1}) F(u(A,\Phi)) = F(A,\Phi)\).
This proves that \(F\) commutes with the gauge group action
as identified in \eqref{eqn:gauge-group-isomorphism}.
{\bf Step 4.} We prove the invariance of \(H\) under \(\GaugeSplit[s](X)\).
By the construction of \(H\), and since \(u|_{\partial X} = 1\)
for \(u \in \GaugeSplit[s](X)\),
we have
\(H( u(A,\Phi), t) = (e^{t\tilde{f}_1|_{Y_1}},e^{t\tilde{f}_2|_{Y_2}}) H(A,\Phi,t)\)
where \(\tilde{f}_i\) are as in the previous paragraph.
Since \(R_{\mathcal{H}}(\mathcal{H}^1_D(X)) \subset (\im s_1^H) \times (\im
s_2^H)\) we get that \(R_{\mathcal{H}}(u^{-1}du)\)
is already in the split Coulomb slice
on \(X_1\) and \(X_2\).
Moreover, \(a_i = \iota^\ast_{X_i}(u^{-1} du)\)
is coclosed on \(X_i\) and \(Y_i\).
Since there are functions \(\hat{f}_i\) satisfying
\begin{align*}
\Delta \hat{f}_1 = 0, \quad \hat{f}_1|_{Y_1} = 0, & \quad \hat{f}_1|_Y = G_d \iota^\ast_Y (u^{-1}du), \\
\Delta \hat{f}_2 = 0, \quad \hat{f}_2|_{Y_2} = 0, & \quad
\hat{f}_2|_{Y} = G_d \iota^\ast_Y (u^{-1}du),
\end{align*}
the uniqueness of \(\tilde{f}_i\) implies \(\tilde{f}_i = \hat{f}_i\)
and therefore \(\tilde{f}_i|_{Y_i} = 0\).
Thus \(H(u(A,\Phi),t) = H(A,\Phi,t)\), as wished.
{\bf Step 5.} We show that \(F\) is bijective onto the fiber product,
following the argument in \cite{Lip2008}.
Let \( (A_i, \Phi_i) \in \SWModuliFree{s_i}{X_i}\)
such that \( R_{Y}(A_1,\Phi_1) = R_{Y}(A_2,\Phi_2)\).
These would give a~configuration on \(X\)
if the normal components of connections \(A_1\) and \(A_2\) agreed on
\(Y\).
Let \(h_1\,dt\) and \(h_2\,dt\) be the \(dt\)\hyp{}components of
\( (A_1-A_0)|_{Y}\) and \( (A_2-A_0)|_{Y}\).
We want to find harmonic functions \(f_i \in L^2_2(X_i;i \mathbb{R})\)
such that
\[ f_1|_{Y} = f_2|_{Y}, \]
\[ \partial_t f_1|_{Y} + h_1 = \partial_t f_2|_{Y} + h_2, \]
\[ f_1|_{Y_1} = 0, f_2|_{Y_2} = 0.\]
Take a~tubular neighborhood \( [-\varepsilon,\varepsilon] \times Y
\subset X\) of \(Y\).
Let \(\{\phi_\lambda\}_{\lambda}\) be an eigenbasis for \(\Delta_{Y}\)
and write \(h_2-h_1 = \sum_\lambda c_\lambda \phi_\lambda\).
Since \(h_2 - h_1 \in L^2_{1/2}\), we have
\( \sum_\lambda \lambda^{1/2} |c_\lambda|^2 < \infty\)
and thus the following are well\hyp{}defined as elements of
\( L^2_2([-\varepsilon,\varepsilon] \times Y; i \mathbb{R})\):
\begin{align*}
g_1 &= \frac 1 2
\sum_\lambda c_\lambda \lambda^{-1/2} e^{\lambda^{1/2} t} \phi_\lambda,
\\
g_2 &= \frac 1 2
\sum_\lambda c_\lambda \lambda^{-1/2} e^{-\lambda^{1/2} t} \phi_\lambda,
\end{align*}
which satisfy
\(\partial_t g_1|_{Y} - \partial_t g_2|_{Y} = \sum_\lambda c_\lambda \phi_\lambda = h_2-h_1\).
Let \(\rho \in C^\infty(X;\mathbb{R})\) be a~bump function supported in
\([-\varepsilon,\varepsilon] \times Y\) which is identically \(1\)
in a~neighborhood of \(Y\).
The configurations
\(e^{\rho g_1}(A_1,\Phi_1)\) and \(e^{\rho g_2} (A_2,\Phi_2)\)
patch to give an~\(L^2_1\) configuration \( (A',\Phi')\), but this is not necessarily
in the Coulomb slice because \(\rho g_i\) are not necessarily harmonic.
Take \(f \in L^2_2(X;i \mathbb{R})\) such that
\[f|_{Y_1} = 0, \quad f|_{Y_2} = 0, \quad
\Delta f = - \Delta(\rho g_1) - \Delta(\rho g_2).\]
Denote
\( (A'',\Phi'') = e^{f} (A',\Phi') \in \SWModuliFree{CC}{X} \).
Finally, by
\autoref{rmk:continuous-gauge-fixing-within-double-Coulomb-slice}
we can continuously deform \((A'',\Phi'')\) to a~configuration \((A,\Phi)\)
such that \(F(A,\Phi) = ( (A_1,\Phi_1), (A_2,\Phi_2) )\).
{\bf Step 6.}
We need to prove that \(F^{-1}\) constructed in the previous step is continuous.
Notice \( (g_1,g_2) \) as elements of \(L^2_2\)
depend continuously on \(h_2-h_1 \in L^2_{1/2}\)
which in turn depends continuously on \(A_1-A_0\) and \(A_2-A_0 \in L^2_{1/2}\).
Moreover, \(f \in L^2_3\) depends continuously on
\( (g_1,g_2) \in L^2_2 \).
If the multiplication \(L^2_2 \times L^2_1 \to L^2_1\) were continuous
on \(4\)\hyp{}manifolds, we would have shown that the map
\( ((A_1,\Phi_1),(A_2,\Phi_2)) \to (A,\Phi)\) which we constructed
is continuous.
Since this is not the case, we need to show that \( e^{g_i} \Phi_i \in L^2_1\)
depends continuously on the initial configurations.
We will prove it depends continuously on
\( h_2-h_1 \in L^2_{1/2}\) and \(\Phi_i \in L^2_1\).
Let \( (A_1',\Phi_1')\) and \( (A_2',\Phi_2') \)
be another choice of configurations,
and denote the corresponding harmonic functions on
\([-\varepsilon,0] \times Y\) and \( [0,\varepsilon] \times Y\)
by \(g'_1\), \(g'_2\).
Then
\begin{align*}
\| e^{g_1} \Phi_1 - e^{g'_1} \Phi_1' \|_{L^2_1}
& \leq \| e^{g'_1} (\Phi_1 - \Phi_1') \|_{L^2_1}
+ \| (e^{g_1} - e^{g'_1}) \Phi_1 \|_{L^2_1}
\\ &
\leq C(\| e^{g'_1} \|_{L^\infty} + \| e^{g'_1} \|_{L^2_2})
\| \Phi_1 - \Phi_1'\|_{L^2_1}
\\ &
+ C \|e^{g_1}-e^{g'_1}\|_{L^2_2}
\| \Phi_1\|_{L^2_1}
\\ &
+ C \|e^{g_1}-e^{g'_1}\|_{L^\infty([-\varepsilon,-\delta] \times Y)}
\| \Phi_1\|_{L^2_1([-\varepsilon,-\delta] \times Y)}
\\ &
+ C \|e^{g_1}-e^{g'_1}\|_{L^\infty([-\delta,0] \times Y)}
\| \Phi_1\|_{L^2_1([-\delta,0] \times Y)}
\end{align*}
for any \(\delta\).
One can choose \(\delta\) to have
\( \|\Phi_1\|_{L^2_1([-\delta,0] \times Y)}\) as small as one wants
while \( \|e^{g_1}-e^{g'_1}\|_{L^\infty([-\delta,0] \times Y)} \leq 2\).
Moreover, we have
\begin{equation*}
\|e^{g_1}-e^{g'_1}\|_{L^\infty([-\varepsilon,-\delta] \times Y)}
\leq \|g_1 - g'_1\|_{L^\infty([-\varepsilon,-\delta] \times Y)}
\leq C \| (h_1 - h_2) - (h_1' - h_2')\|_{L^2_{1/2}}
\end{equation*}
via a~direct computation (or by interior regularity estimates
following from \autoref{thm:garding}).
This finishes the proof that the inverse map is continuous.
{\bf Step 7.}
Finally, we prove that the differential of \(F\) is invertible.
Assume this is not the case,
so that there exists \( (A,\Phi) \in \SWModuliFree{s}{X}\)
such that for any \(\varepsilon>0\) there is
\( (A',\Phi') \in \SWModuliFree{s}{X}\)
such that \(0 < D = \| F (A,\Phi)
- F (A',\Phi') \|_{L^2_1} < 1\)
and
\( \| (A-A',\Phi-\Phi') \|_{L^2_1} \leq D \varepsilon\).
We get that \( \|g-g'\|_{L^2_{5/2}} \leq C_{5/2} D \varepsilon\)
and thus
\begin{align*}
\| f_i - f_i'\|_{L^2_3}
&\leq C \| g - g'\|_{L^2_{5/2}}
\\& \leq C C_{5/2} D \varepsilon.
\end{align*}
From this and \autoref{thm:multiplication} it follows that
\( D = \| F(A,\Phi) - F(A',\Phi') \|_{L^2_1}
\leq C'' \| (A-A',\Phi-\Phi') \|_{L^2_1} \leq C''D \varepsilon\)
for some \(C''\) depending on \(\|\Phi\|_{L^2_1}\).
Choosing \(\varepsilon = \frac{1}{1+C''}\)
gives the desired contradiction.
\end{proof}
\section{Introduction}
\label{Intro}
Let $k$ be a field, and $A_\bullet=\bigoplus_{n\geq0} A_n$ a $k$-algebra graded by $\mathbb{N}$.
The algebra $A_\bullet$ is quadratic if it is {\sl 1-generated} --- i.e., every element is a combination of products
of elements of $A_1$ ---, and its relations are generated by homogeneous relations of degree 2.
E.g., symmetric algebras and exterior algebras are quadratic.
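Equivalently (see, e.g., \cite{poliposi}), a quadratic algebra is one which can be presented as a quotient of a tensor algebra by a two-sided ideal generated in degree 2, i.e.,
\[A_\bullet\simeq \frac{T^\bullet(V)}{(\Omega)},\qquad T^\bullet(V)=\bigoplus_{n\geq0}V^{\otimes n},\]
with $V=A_1$ and $\Omega\subseteq V\otimes V$; for the symmetric algebra $\Omega$ is spanned by the elements $v\otimes w-w\otimes v$, while for the exterior algebra it is spanned by the elements $v\otimes v$, with $v,w\in V$.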
A quadratic algebra is called a {\sl Koszul algebra} if $k$ admits a resolution of free $\mathbb{N}$-graded right $A_\bullet$-modules such that, for every $n\geq0$, the $n$-th term of the resolution is generated by its subspace of degree $n$, and this subspace is finitely generated (see sec.~\ref{ssec:UK}).
Koszul algebras were introduced by S. Priddy in \cite{priddy}, and they have exceptionally nice
behavior in terms of cohomology (see \cite[Ch.~2]{poliposi}). The Koszul property is very restrictive; still, it arises in various areas of mathematics, such as representation theory, algebraic geometry, and combinatorics. Hence, Koszul algebras have become an important object of study.
Recently, some stronger versions of the Koszul property were introduced and investigated (see, e.g., \cite{con:UK,CTV,piont,con:K,MPPT,pal}).
One of these is the universal Koszul property (see Definition~\ref{defn:UK} below), which implies ``simple'' Koszulity.
Usually, checking whether a given quadratic algebra is Koszul is a rather hard problem.
Surprisingly, testing universal Koszulity may be easier, even though it is a more
restrictive property.
Quadratic algebras and Koszul algebras have a prominent role in Galois theory, too.
Given a field $\mathbb{K}$, for $p$ a prime number let $\mathcal{G}_{\mathbb{K}}$ denote the maximal pro-$p$ Galois group of $\mathbb{K}$ --- namely,
$\mathcal{G}_{\mathbb{K}}$ is the Galois group of the maximal $p$-extension of $\mathbb{K}$.
If $\mathbb{K}$ contains a root of 1 of order $p$ (and also $\sqrt{-1}$ if $p = 2$), then the celebrated Rost-Voevodsky
Theorem (cf. \cite{voev,weibel}) implies that the $\mathbb{F}_p$-cohomology algebra
$H^\bullet(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p) =\bigoplus_{n\geq0} H^n(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p)$
of the maximal pro-$p$ Galois group of $\mathbb{K}$, endowed with the graded-commutative cup product
\[H^s(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p) \times H^t(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p)\overset{\cup}{\longrightarrow} H^{s+t}(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p), \qquad s, t \geq 0,\]
is a quadratic $\mathbb{F}_p$-algebra.
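Here graded-commutativity means that $a\cup b=(-1)^{st}\, b\cup a$ for all $a\in H^s(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p)$ and $b\in H^t(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p)$.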
Koszul algebras were studied in the context of Galois theory by L. Positselski and A. Vishik (cf. \cite{posivisi,posi:K}, see also \cite{MPQT}), and Positselski conjectured that the algebra $H^\bullet(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p)$ is Koszul (cf. \cite{posi:number}).
Positselski's conjecture was strengthened by J. Min\'a\v{c} et al. (cf. \cite[Conj. 2]{MPPT}):
\begin{conj}\label{conjecture:intro}
Let $\mathbb{K}$ be a field containing a root of 1 of order $p$, and suppose that the quotient $\mathbb{K}^\times/(\mathbb{K}^\times)^p$ is finite.
Then the $\mathbb{F}_p$-cohomology algebra $H^\bullet(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p)$ of the maximal pro-$p$ Galois group of $\mathbb{K}$ is universally Koszul.
\end{conj}
Here $\mathbb{K}^\times$ denotes the multiplicative group $\mathbb{K}\smallsetminus\{0\}$, and by Kummer theory
$\mathbb{K}^\times /(\mathbb{K}^\times)^p$ is finite if and only if $\mathcal{G}_{\mathbb{K}}$ is a finitely generated pro-$p$ group.
In this paper we study universal Koszulity for the $\mathbb{F}_p$-cohomology algebra of pro-$p$ groups with at most two {\sl defining relations}.
A pro-$p$ group $G$ has $m$ defining relations, with $m\geq0$, if there is a minimal pro-$p$ presentation $F/R$ of $G$
--- i.e., $G\simeq F/R$ with $F$ a free pro-$p$ group and $R$ a closed normal subgroup contained in the Frattini subgroup of $F$ --- such that $m$ is the minimal number of generators of $R$ as closed normal subgroup of $F$.
We prove the following.
\begin{thm}\label{thm:intro}
Let $G$ be a finitely generated pro-$p$ group with at most two defining relations.
If the $\mathbb{F}_p$-cohomology algebra $H^\bullet(G,\mathbb{F}_p)$ is quadratic (and moreover if $a^2 = 0$ for every $a \in H^\bullet(G,\mathbb{F}_2)$,
if $p = 2$), then $H^\bullet(G,\mathbb{F}_p)$ is universally Koszul.
\end{thm}
The above result settles positively Conjecture~\ref{conjecture:intro} for fields whose maximal
pro-$p$ Galois group has at most two defining relations.
\begin{cor}\label{cor:intro}
Let $\mathbb{K}$ be a field containing a root of 1 of order $p$ (and also $\sqrt{-1}$ if $p = 2$),
and suppose that the quotient $\mathbb{K}^\times /(\mathbb{K}^\times)^p$ is finite.
If $\mathcal{G}_{\mathbb{K}}$ has at most two defining relations then the $\mathbb{F}_p$-cohomology algebra $H^\bullet(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p)$ is
universally Koszul.
\end{cor}
Note that the condition on the number of defining relations of $\mathcal{G}_{\mathbb{K}}$ may be formulated both in terms of the dimension of $H^2(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p)$ and in terms of the Brauer group of $\mathbb{K}$:
namely, $\mathcal{G}_{\mathbb{K}}$ has at most two defining relations if and only if $\dim(H^2(\mathcal{G}_{\mathbb{K}},\mathbb{F}_p))\leq2$, and if and only if the $p$-part of the Brauer group of $\mathbb{K}$ has rank at most two.
Theorem~\ref{thm:intro} cannot be extended to pro-$p$ groups with quadratic $\mathbb{F}_p$-cohomology
with more than two defining relations, as there are finitely generated pro-$p$ groups
with three defining relations whose $\mathbb{F}_p$-cohomology algebra is quadratic and Koszul, but not
universally Koszul (see Example~\ref{exam:square}).
Still, such examples are expected not to contradict Conjecture~\ref{conjecture:intro}, as these pro-$p$ groups are conjectured not to occur as maximal pro-$p$ Galois groups of fields (see Remark~\ref{rem:PZ}).
\section{Quadratic algebras and Koszul algebras}\label{sec:quadalgebras}
Throughout the paper every graded algebra $A_\bullet=\bigoplus_{n\in\mathbb{Z}}A_n$ is tacitly assumed to be a unitary associative algebra over the finite field $\mathbb{F}_p$, and non-negatively graded of finite-type, i.e., $A_0=\mathbb{F}_p$, $A_n = 0$ for $n < 0$ and $\dim(A_n)<\infty$ for $n\geq1$.
For a complete account on graded algebras and their cohomology, we direct the reader to the first chapters of \cite{poliposi} and of \cite{ldval}, and to \cite[\S~2]{MPQT}.
\subsection{Quadratic algebras}\label{ssec:quad}
A graded algebra $A_\bullet=\bigoplus_{n\geq0}A_n$ is said to be {\sl graded-commutative} if one has
\begin{equation}\label{eq:gradcomm}
b\cdot a = (-1)^{ij} a\cdot b \qquad\text{for every }a\in A_i , b\in A_j.
\end{equation}
In particular, if $p$ is odd then one has $a^2 = 0$ for all $a\in A_\bullet$ of odd degree, whereas if $p = 2$ then a graded-commutative algebra is commutative.
Furthermore, if $p = 2$ we call a commutative algebra $A_\bullet$ which satisfies $a^2 = 0$ for all $a\in A_\bullet$ a {\sl wedge-commutative} $\mathbb{F}_2$-algebra.
For a graded ideal $I$ of a graded algebra $A_\bullet$, $I_n$ denotes the intersection $I\cap A_n$ for every $n\geq0$, i.e., $I=\bigoplus_{n\geq0} I_n$.
For a subset $\Omega\subseteq A_\bullet$, $(\Omega)\unlhd A_\bullet$ denotes the two-sided graded ideal generated by $\Omega$.
Also, $A_+$ denotes the {\sl augmentation ideal} of $A_\bullet$, i.e., $A_+=\bigoplus_{n\geq1} A_n$.
Henceforth all ideals are assumed to be graded.
Given a finite vector space $V$, let $T_\bullet(V)=\bigoplus_{n\geq0}V^{\otimes n}$ denote the tensor algebra generated by $V$. The product of $T_\bullet(V)$ is induced by the tensor product, i.e., $ab = a\otimes b\in V^{\otimes s+t}$ for $a\in V^{\otimes s}$ , $b \in V^{\otimes t}$.
\begin{defn}\label{defn:quadratic}
A graded algebra $A_\bullet$ is said to be quadratic if one has an isomorphism
\[ A_\bullet\simeq T_\bullet(A_1)/(\Omega)\]
for some subset $\Omega\subseteq A_1\otimes A_1$. In this case we write $A_\bullet= Q(V, \Omega)$, where $V=A_1$.
\end{defn}
\begin{exam}\label{exam:quad}
Let $V$ be a finite vector space.
\begin{itemize}
\item[(a)] The tensor algebra $T_\bullet(V)$ and the trivial quadratic algebra $Q(V, V^{\otimes2})$ are quadratic algebras.
\item[(b)] The symmetric algebra $S_\bullet(V)$ and the exterior algebra $\Lambda_\bullet(V)$ are quadratic, as one has
$S_\bullet(V)=Q(V,\Omega_S)$ and $\Lambda_\bullet(V)=Q(V,\Omega_\wedge)$ with
\[\Omega_S = \{u \otimes v-v\otimes u \mid u, v\in V \}
\qquad\text{and}\qquad
\Omega_\wedge=\{u\otimes v+v\otimes u \mid u,v\in V \}.\]
\item[(c)] Let $\mathbb{F}_p\langle X\rangle$ be the free algebra generated by the indeterminates $X =\{X_1,\ldots,X_d\}$.
Then $\mathbb{F}_p\langle X\rangle$ is a graded algebra, with the grading induced by the subspaces of homogeneous polynomials.
If $\Omega=\{f_1,\ldots,f_m\}\subseteq\mathbb{F}_p\langle X\rangle$ is a set of homogeneous polynomials of degree 2, then $\mathbb{F}_p\langle X\rangle/(\Omega)$ is a quadratic algebra.
\end{itemize}
\end{exam}
\begin{exam}\label{exam:prod}
Let $A_\bullet=Q(A_1,\Omega_A)$ and $B_\bullet=Q(B_1,\Omega_B)$ be two quadratic algebras.
The direct product of $A_\bullet$ and $B_\bullet$ is the quadratic algebra $$A_\bullet\sqcap B_\bullet= Q(A_1\oplus B_1,\Omega),$$
with $\Omega=\Omega_A\cup \Omega_B\cup (A_1\otimes B_1)\cup(B_1\otimes A_1)$.
\end{exam}
\begin{exam}\label{exam:graph}
Let $\Gamma=(\mathcal{V},\mathcal{E})$ be a finite combinatorial graph (without loops) ---
namely $\mathcal{V}=\{v_1 ,\ldots, v_d\}$ is the set of vertices of $\Gamma$, and
$$\mathcal{E} \subseteq\{\{v, w\} \mid v, w \in \mathcal{V}, v\neq w\} = \mathcal{P}_2(\mathcal{V})\smallsetminus\Delta(\mathcal{V})$$
is the set of edges of $\Gamma$ ---, and let $V$ be the vector space with basis $\mathcal{V}$.
The {\sl exterior Stanley-Reisner algebra} $\Lambda_\bullet(\Gamma)$ associated to $\Gamma$ is the quadratic algebra
\[
\Lambda_\bullet(\Gamma)=\dfrac{\Lambda_\bullet(V)}{(v\wedge w\mid \{v,w\}\notin \mathcal{E})}.
\]
In particular, $\Lambda_\bullet(\Gamma)$ is graded-commutative (wedge-commutative if $p=2$), and if $\Gamma$ is complete (i.e., $\mathcal{E}=\mathcal{P}_2(\mathcal{V})\smallsetminus\Delta(\mathcal{V})$) then $\Lambda_\bullet(\Gamma)\simeq\Lambda_\bullet(V)$, whereas if $\mathcal{E}=\varnothing$, then $\Lambda_\bullet(\Gamma)\simeq Q(V,V^{\otimes2})$.
\end{exam}
\subsection{Koszul algebras and universally Koszul algebras}\label{ssec:UK}
A quadratic algebra $A_\bullet$ is said to be {\sl Koszul} if it admits a resolution
\[
\xymatrix{ \cdots\ar[r] & P(2)_\bullet\ar[r] & P(1)_\bullet\ar[r] & P(0)_\bullet \ar[r] & \mathbb{F}_p }
\]
of right $A_\bullet$-modules, where for each $n\in\mathbb{N}$, $P(n)_\bullet=\bigoplus_{i\geq0}P(n)_i$ is a free {\sl graded}
$A_\bullet$-module such that $P(n)_n$ is finitely generated and generates $P(n)_\bullet$ as a graded $A_\bullet$-module (cf. \cite[Def. 2.1.1]{poliposi} and \cite[\S~2.2]{MPQT}).
Koszul algebras have an exceptionally nice behavior in terms of cohomology.
Indeed, if a quadratic algebra $A_\bullet=Q(V,\Omega)$ is Koszul, then one has an isomorphism of quadratic algebras
\begin{equation}\label{eq:quaddual}
\bigoplus_{n\geq0}\mathrm{Ext}_{A_\bullet}^{n,n}(\mathbb{F}_p,\mathbb{F}_p)\simeq Q(V^\ast,\Omega^\perp),
\end{equation}
where $V^\ast$ denotes the $\mathbb{F}_p$-dual of $V$, and $\Omega^\perp\subseteq(V\otimes V)^{\ast}$ is the orthogonal of $\Omega\subseteq V\otimes V$ (cf. \cite{priddy}) --- since $V$ is finite, we identify $(V^\ast)^{\otimes2}=(V^{\otimes2})^\ast$ ---,
whereas $\mathrm{Ext}_{A_\bullet}^{i,j}(\mathbb{F}_p,\mathbb{F}_p)=0$ for $i\neq j$ (in fact, this is an equivalent definition of
the Koszul property).
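For instance, for $p$ odd the symmetric algebra $S_\bullet(V)=Q(V,\Omega_S)$ of Example~\ref{exam:quad} is Koszul, and $\Omega_S^\perp\subseteq V^\ast\otimes V^\ast$ is spanned by the elements $\phi\otimes\psi+\psi\otimes\phi$; hence \eqref{eq:quaddual} recovers the classical duality between the symmetric and the exterior algebra, $\bigoplus_{n\geq0}\mathrm{Ext}^{n,n}_{S_\bullet(V)}(\mathbb{F}_p,\mathbb{F}_p)\simeq \Lambda_\bullet(V^\ast)$.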
On the one hand, by \eqref{eq:quaddual} it is very easy to compute the $\mathbb{F}_p$-cohomology of a Koszul algebra.
On the other hand, in general it is extremely hard to establish whether a given quadratic algebra is Koszul.
For this reason, some ``enhanced forms'' of Koszulity --- which are stronger than ``simple'' Koszulity, but at the same time easier to check --- have been introduced by several authors.
We give now the definition of {\sl universal Koszulity} as introduced in \cite{MPPT}.
Given two ideals $I, J$ of a graded algebra $A_\bullet$, the {\sl colon ideal} $I : J$ is the ideal
\[ I : J = \{a\in A_\bullet\mid a\cdot J \subseteq I\}.\]
\begin{rem}\label{rem:colon}
Note that for every two ideals $I, J$ of $A_\bullet$, the colon ideal $I : J$ contains all $a\in A_\bullet$ such that
$a\cdot J = 0$, as $0 \in I$.
Moreover, if $A_\bullet$ is graded-commutative (and wedge-commutative, if $p = 2$), then for every $b\in J$
one has $b\in I : J$, as $b\cdot b = 0$.
\end{rem}
For a quadratic algebra $A_\bullet$, let $\mathcal{L}(A_\bullet)$ denote the set of all ideals of $A_\bullet$ generated by a subset of $A_1$, namely,
$$\mathcal{L}(A_\bullet) = \{I \unlhd A_\bullet \mid I=A_\bullet\cdot I_1 \}.$$
Note that both the trivial ideal $(0)$ and the augmentation ideal $A_+$ belong to $\mathcal{L}(A_\bullet)$.
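\begin{exam}\label{exam:colon}
For a concrete instance, let $A_\bullet=\Lambda_\bullet(V)$ with $V$ of dimension 2 and basis $\{v_1,v_2\}$, and take $I=(0)$ and $b=v_1$. An element $a=\alpha v_1+\beta v_2\in A_1$ satisfies $a\cdot v_1=-\beta\, v_1\wedge v_2$, which vanishes if and only if $\beta=0$, while $A_2\cdot (v_1)\subseteq A_{\geq3}=0$; hence $(0):(v_1)=(v_1)$, which lies in $\mathcal{L}(A_\bullet)$.
\end{exam}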
\begin{defn}\label{defn:UK}
A quadratic algebra $A_\bullet$ is said to be {\sl universally Koszul} if for every ideal $I\in \mathcal{L}(A_\bullet)$, and every $b\in A_1\smallsetminus I_1$, one has $I:(b)\in\mathcal{L}(A_\bullet)$.
\end{defn}
Universal Koszulity is stronger than Koszulity, since every quadratic algebra
which is universally Koszul is also Koszul (cf. \cite[\S~2.2]{MPPT}).
\begin{exam}\label{exam:UK}
\begin{itemize}
\item[(a)] Let $V$ be a vector space of finite dimension. Then both the trivial algebra $Q(V,V^{\otimes2})$ (by definition) and the exterior algebra $\Lambda_\bullet(V)$ (by \cite[Prop. 31]{MPPT}) are universally Koszul.
\item[(b)] If $A_\bullet$ and $B_\bullet$ are two quadratic universally Koszul algebras, then also
the direct product $A_\bullet\sqcap B_\bullet$ is universally Koszul (cf. \cite[Prop. 30]{MPPT}).
\end{itemize}
\end{exam}
\begin{exam}\label{exam:DemushkinUK}
For $V$ a finite vector space of even dimension $d$ and basis $\{v_1,\ldots,v_d\}\subseteq V$, let $A_\bullet$ be the quadratic algebra $A_\bullet=\Lambda_\bullet(V)/(\Omega)$, where
\[
\Omega=\left\{\begin{array}{c} v_1\wedge v_2-v_i\wedge v_{i+1}, \text{ for }i =3,5,\ldots,d-1, \\
v_i\wedge v_j,\text{ for }i<j,\ (i,j)\neq(1,2),(3,4),\ldots,(d-1,d)
\end{array}\right\}\subseteq \Lambda_2(V).\]
In particular, $A_2$ is generated by the image of $v_1 \wedge v_2$, and $A_n= 0$ for $n\geq3$.
Then $A_\bullet$ is isomorphic to the $\mathbb{F}_p$-cohomology algebra of a $d$-generated Demushkin pro-$p$ group
(cf. \cite[Def.~3.9.9]{nsw:cohn}), and thus it is universally Koszul by \cite[Prop.~29]{MPPT}.
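For instance, if $d=2$ then $\Omega$ is empty and $A_\bullet=\Lambda_\bullet(V)$, consistently with Example~\ref{exam:UK}--(a).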
\end{exam}
\begin{exam}\label{ex:RAAG no UK}
Let $\Gamma =(\mathcal{V},\mathcal{E})$ be a combinatorial graph without loops, and let $\Lambda_\bullet(\Gamma)$ be the exterior
Stanley-Reisner algebra associated to $\Gamma$.
Then $\Lambda_\bullet(\Gamma)$ is Koszul (cf. \cite{priddy}, see also \cite[\S~3.2]{papa} and \cite[\S~4.2.2]{thomas}).
Moreover, by \cite[Thm.~4.6]{CQ:RAAG} the algebra $\Lambda_\bullet(\Gamma)$ is universally Koszul if and only if $\Gamma$ has the diagonal property --- i.e., for any four vertices $v_1,\ldots,v_4\in\mathcal{V}$ such that
$$\{v_1 , v_2 \}, \{v_2 , v_3 \}, \{v_3 , v_4 \} \in \mathcal{E},$$
one has $\{v_1,v_3\}\in\mathcal{E}$ or $\{v_2,v_4\}\in\mathcal{E}$ (see, e.g., \cite{droms}).
\end{exam}
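In computational experiments the diagonal property is straightforward to test mechanically. The following short Python sketch (an illustration added here, not taken from \cite{CQ:RAAG}; the encoding of the graph is an arbitrary choice) enumerates the paths of length 3 and checks for the required diagonals; compare Example~\ref{exam:square} below.
\begin{verbatim}
from itertools import permutations

def has_diagonal_property(vertices, edges):
    # Edges are stored as frozensets, so that {v, w} == {w, v}.
    E = {frozenset(e) for e in edges}
    adj = lambda a, b: frozenset((a, b)) in E
    for v1, v2, v3, v4 in permutations(vertices, 4):
        if adj(v1, v2) and adj(v2, v3) and adj(v3, v4):
            if not (adj(v1, v3) or adj(v2, v4)):
                return False  # a path of length 3 with no diagonal
    return True

# A path of length 3 fails the property:
print(has_diagonal_property(["x", "y", "z", "t"],
                            [("x", "y"), ("y", "z"), ("z", "t")]))  # False
\end{verbatim}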
\section{Two-relator pro-$p$ groups}\label{sec:2rel}
Henceforth, every subgroup of a pro-$p$ group is meant to be closed with respect to the pro-$p$ topology, and generators are topological generators.
For (closed) subgroups $H, H_1, H_2$ of a pro-$p$ group $G$ and for every $n\geq 1$, $H^n$ denotes the subgroup of $G$ generated by the $n$-th powers of the elements of $H$, whereas $[H_1,H_2]$ is the subgroup of $G$ generated by the commutators
$$[g_1,g_2]=g_1^{-1}\cdot g_1^{g_2}=g_1^{-1}g_2^{-1}g_1g_2 ,$$
with $g_1\in H_1$ and $g_2\in H_2$.
\subsection{Cohomology of pro-$p$ groups}
For a pro-$p$ group $G$ we set
\[ G_{(2)}=G^p[G,G],\qquad G_{(3)}=\begin{cases}G^p[G,[G, G]] & \text{if }p\neq2, \\
G^4[G, G]^2[G,[G,G]]& \text{if } p= 2 \end{cases}\]
--- namely, $G_{(2)}$ and $G_{(3)}$ are the second and the third elements of the $p$-Zassenhaus filtration of $G$ (cf. \cite[\S~3.1]{MPQT}).
In particular, $G_{(2)}$ coincid{\bf e}s with the Frattini subgroup of $G$.
A short exact sequence of pro-$p$ groups
\begin{equation}\label{eq:pres}
\xymatrix{ \{1\}\ar[r] & R\ar[r] & F\ar[r] & G\ar[r] & \{1\} }
\end{equation}
with $F$ a free pro-$p$ group, is called a {\sl presentation} of the pro-$p$ group $G$.
If $R\subseteq F_{(2)}$, then the presentation \eqref{eq:pres} is {\sl minimal} --- roughly speaking, $F$ and $G$ have the ``same'' minimal generating system.
For a minimal presentation \eqref{eq:pres} of $G$, a set of elements of $F$ which generates minimally $R$ as normal subgroup is
called a set of {\sl defining relations} of $G$.
For a pro-$p$ group $G$ we shall denote the $\mathbb{F}_p$-cohomology groups $H^n(G,\mathbb{F}_p)$ simply by $H^n(G)$ for every $n\geq0$.
In particular, one has
\begin{equation}\label{eq:H1}
H^0(G)=\mathbb{F}_p\qquad \text{and}\qquad H^1(G)\simeq(G/G_{(2)})^\ast
\end{equation}
(cf. \cite[Prop.~3.9.1]{nsw:cohn}).
Moreover, a minimal presentation \eqref{eq:pres} of $G$ induces an exact sequence in cohomology
\begin{equation}\label{eq:5tes}
\xymatrix{ 0\ar[r] & H^1(G)\ar[r]^-{\Inf_{F,R}^1} & H^1(F)\ar[r]^-{\Res_{F,R}^1} & H^1(R)^{F} \ar[r]^-{\mathrm{trg}_{F,R}} &
H^2(G) \ar[r]^-{\Inf_{F,R}^2} & H^2(F)}
\end{equation}
(cf. \cite[Prop.~1.6.7]{nsw:cohn}) --- if $V$ is a continuous $G$-module for a pro-$p$ group $G$, then $V^G$ denotes the subspace of $G$-invariants.
Since \eqref{eq:pres} is minimal, by \eqref{eq:H1} the map $\Inf^1_{F,R}$ is an isomorphism.
Moreover, also the map $\mathrm{trg}_{F,R}$ is an isomorphism, as $H^2(F) = 0$ (see Proposition~\ref{prop:free} below), and its inverse induces an isomorphism $\phi$ of vector spaces
\begin{equation}\label{eq:H2}
H^2(G)\overset{\phi}{\longrightarrow}\left((R/R_{(2)})^\ast\right)^F=(R/R^p[R,F])^\ast.
\end{equation}
By \eqref{eq:H1}, $\dim(H^1(G))$ is the minimal number of generators of $G$, and by \eqref{eq:H2} $\dim(H^2(G))$ is the number of defining relations of $G$.
If $H^1(G)$ and $H^2(G)$ have both finite dimension, then $G$ is said to be {\sl finitely presented}.
The $\mathbb{F}_p$-cohomology of a pro-$p$ group comes endowed with the bilinear {\sl cup-product}
\[
H^i(G)\times H^j(G)\overset{\cup}{\longrightarrow} H^{i+j}(G),
\]
which is graded-commutative (cf. \cite[Ch.~I, \S~4]{nsw:cohn}).
The maximum positive integer $n$ such that $H^n(G)\neq0$ and $H^{n+1}(G)=0$ is called the cohomological dimension
of $G$, and it is denoted by $\cd(G)$ (cf. \cite[Def.~3.3.1]{nsw:cohn}).
By \eqref{eq:H2}, if $G$ is a free pro-$p$ group then $H^2(G)= 0$.
Also the converse is true (cf. \cite[Prop.~3.5.17]{nsw:cohn}).
\begin{pro}\label{prop:free}
A pro-$p$ group $G$ is free if and only if $\cd(G) = 1$.
\end{pro}
Let $G$ be a finitely generated pro-$p$ group with minimal presentation \eqref{eq:pres}.
We may identify $H^1(G)$ and $H^1(F)$ via the isomorphism $\Inf^1_{F,R}$.
Also, we may identify a basis $\mathcal{X}=\{x_1,\ldots,x_d\}$ of $F$ with its image in $G$.
Let $\mathcal{B}=\{a_1,\ldots,a_d\}$ be the basis of $H^1(G)$ dual to $\mathcal{X}$, i.e., $a_i(x_j)=\delta_{ij}$ for $i,j\in\{1,\ldots,d\}$. For every $r\in F_{(2)}$ one may write
\begin{equation}\label{eq:relations}
\begin{split}
r&=\prod_{i<j}[x_i,x_j]^{\alpha_{ij}}\cdot r',\qquad\text{if } p\neq2, \\
r&=\prod_{i=1}^dx_i^{2\alpha_{ii}}\prod_{i<j}[x_i,x_j]^{\alpha_{ij}}\cdot r',\qquad\text{if } p=2, \\
\end{split}\end{equation}
for some $r'\in F_{(3)}$, with $0\leq\alpha_{ij}<p$, and these numbers are uniquely determined by $r$.
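For instance, if $r=[x_1,x_2]$ then $\alpha_{12}=1$ and all the other exponents in \eqref{eq:relations} vanish.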
The shape of the defining relations of a pro-$p$ group and the behavior of the cup-product are related by the following (cf. \cite[Prop.~1.3.2]{vogel}).
\begin{pro}\label{prop:vogel}
Let $G$ be a finitely presented pro-$p$ group with minimal presentation \eqref{eq:pres}, and let $\mathcal{X}=\{x_1,\ldots,x_d\}$ and $\mathcal{B}=\{a_1,\ldots,a_d\}$ be as above.
Given a set of defining relations $\{r_1,\ldots,r_m\}\subseteq F_{(2)}$, for every $h=1,\ldots,m$ the isomorphism $\phi$ (see \eqref{eq:H2}) induces a morphism
\[\begin{split}
&\mathrm{tr}_h\colon H^2(G)\longrightarrow\mathbb{F}_p, \\ &\mathrm{tr}_h(b)= \phi(b)(r_h),
\end{split}\]
such that, for every $1\leq i\leq j\leq d$, one has $\mathrm{tr}_h(a_i\cup a_j)=-\alpha_{ij}$, where the $\alpha_{ij}$'s are the numbers in \eqref{eq:relations} with $r = r_h$.
\end{pro}
\begin{exam}\label{exam:onerel}
Let $G$ be a finitely generated one-relator pro-$p$ group, with minimal presentation \eqref{eq:pres} and defining relation $r$, and with $\mathcal{X}=\{x_1,\ldots,x_d\}$ and $\mathcal{B}=\{a_1,\ldots,a_d\}$ as above.
Since $\dim(H^2(G))=1$ by \eqref{eq:H2}, the algebra $H^\bullet(G)$ is quadratic, and wedge-commutative if $p = 2$, if and only if $H^2(G)$ is generated by some non-trivial $a_i\cup a_j$ (and also $a_i\cup a_i=0$ for all $i=1,\ldots,d$, if $p=2$) and $H^3(G)=0$: by Proposition~\ref{prop:vogel} this occurs if and only if $\alpha_{ij}\neq0$ for some $i,j$, with the $\alpha_{ij}$'s as in \eqref{eq:relations} (i.e., $r\notin F_{(3)}$), and also $\alpha_{ii}=0$ for every $i=1,\ldots,d$ if $p = 2$ (see \cite[Prop. 4.2]{cq:onerel} for the details).
In this case, one may choose $\mathcal{X}$ such that $\alpha_{1,2}=\alpha_{3,4}=\ldots=\alpha_{s-1,s}=1$, for some even $s\leq d$, and $\alpha_{ij}=0$ for all other couples $(i,j)$, so that one has an isomorphism of quadratic algebras
\[H^\bullet(G)\simeq A_\bullet\sqcap Q(V,V^{\otimes2}),\]
where $A_\bullet$ is the quadratic algebra as in Example~\ref{exam:DemushkinUK}, (with $A_1$ generated by $a_1,\ldots,a_s$, and $A_2$ generated by $a_1\cup a_2=\ldots=a_{s-1}\cup a_s$), and $V$ a finite (possibly trivial) vector space, generated by $a_i$ with $s+1\leq i\leq d$ (cf. \cite[Prop. 4.6]{cq:onerel}).
In particular, $H^\bullet(G)$ is universally Koszul by Example~\ref{exam:UK}--(b).
\end{exam}
\subsection{Two-relator pro-$p$ groups and cohomology}\label{ssec:2rel cohom}
Henceforth $G$ will be a finitely generated two-relator pro-$p$ group, with minimal presentation \eqref{eq:pres}.
Also, the set $\mathcal{X}=\{x_1,\ldots,x_d\}$ will denote a basis of $F$ (identified with its image in $G$), with $d=\dim(H^1(G))$, and $\mathcal{B}=\{a_1,\ldots,a_d\}$ will be the associated dual basis of $H^1(G)$.
For simplicity, we will omit the symbol $\cup$ to denote the cup-product of two elements of $H^\bullet(G)$.
For our convenience, we slightly modify the definition given in \cite[\S~1]{BGLV}.
\begin{defn}\label{defn:quaddef}
A two-relator pro-$p$ group $G$ is {\sl quadratically defined} if the cup-product induces an epimorphism
$H^1(G)^{\otimes2}\to H^2(G)$, and also $a\cdot a=0$ for every $a\in H^1(G)$ in the case $p=2$.
\end{defn}
By Proposition~\ref{prop:vogel}, $G$ is quadratically defined if and only if $r_1,r_2\in F_{(2)}\smallsetminus F_{(3)}$
for any set of defining relations $\{r_1,r_2\}\subseteq F_{(2)}$, and also $\alpha_{ii} =0$ for every $i=1,\ldots,d$ in the case $p=2$, where the $\alpha_{ii}$'s are the numbers in \eqref{eq:relations} with $r=r_1,r_2$ (see also \cite[Thm.~7.3]{MPQT}).
Set $I=\{1,\ldots,d\}$, and let $\succ$ denote the lexicographic order on $I^2=\{(i,j)\mid 1\leq i,j\leq d\}$ ---
namely, $(i,j)\succ (h, k)$ if $i>h$ or if $i = h$ and $j > k$.
If $G$ is quadratically defined, by Proposition~\ref{prop:vogel} and \cite[Rem.~2.5]{QSV} one may choose defining relations
$r_1,r_2\in F_{(2)}$ such that
\begin{equation}\label{eq:rel2}
\begin{split}
r_1\equiv &[x_1,x_2]\cdot\prod_{\substack{1\leq i<j\leq d\\(i,j)\succ(1,2)}}[x_i,x_j]^{\alpha_{ij}}\mod F_{(3)}, \\
r_2\equiv &[x_s,x_t]\cdot\prod_{\substack{1\leq i<j\leq d\\(i,j)\succ(s,t)}}[x_i,x_j]^{\beta_{ij}}\mod F_{(3)}, \\
\end{split}\end{equation}
for some $(s,t)\succ(1,2)$, and $0\leq\alpha_{ij},\beta_{ij}\leq p-1$, with $\alpha_{st} = 0$.
By Proposition~\ref{prop:vogel}, one has
\[\mathrm{tr}_1(a_1a_2)=1 \qquad\text{and}\qquad \mathrm{tr}_1(a_ia_j) = \alpha_{ij} ,\]
for $i\leq j$ and $(i,j)\succ(1, 2)$, and likewise
\[\mathrm{tr}_2(a_sa_t)=1 \qquad\text{and}\qquad \mathrm{tr}_2(a_ia_j) =\begin{cases} 0 , &\text{ if }(i, j) \prec (s, t),
\\ \beta_{ij}, &\text{ if }(i, j) \succ (s, t).
\end{cases}\]
Altogether, $\{a_1a_2,a_sa_t\}$ is a basis of $H^2(G)$, and one has relations
\begin{equation}\label{eq:relH2}
\begin{split}
a_i a_i &= 0, \\
a_j a_i &= -a_i a_j ,\\
a_i a_j &= \alpha_{ij}(a_1 a_2 ) + \beta_{ij} (a_s a_t ),\qquad i < j,
\end{split}\end{equation}
where we set implicitly $\alpha_{1,2}=\beta_{st}=1$ and $\beta_{ij}=0$ for $(i, j)\prec(s,t)$.
Finally, one has the following (cf. \cite[Thm.~1--2]{BGLV}).
\begin{pro}\label{prop:cd2}
Let $G$ be a finitely generated quadratically defined two-relator pro-$p$ group. Then $\cd(G) = 2$.
\end{pro}
As a consequence we obtain the following.
\begin{pro}\label{prop:2rel}
Let $G$ be a finitely generated two-relator pro-$p$ group. The following are equivalent:
\begin{itemize}
\item[(i)] $H^\bullet(G)$ is quadratic (and wedge-commutative, if $p = 2$);
\item[(ii)] $G$ is quadratically defined.
\end{itemize}
\end{pro}
\begin{proof}
Assume first that $H^\bullet(G)$ is quadratic (and wedge-commutative, if $p = 2$).
Then one has $H^n(G)=0$ for $n\geq3$, while the cup-product induces epimorphisms
\[\begin{split}
H^1(G)^{\otimes2}\longrightarrow\Lambda_2(H^1(G))\longrightarrow H^2(G), \qquad &\text{if }p\neq2, \\
H^1(G)^{\otimes2}\longrightarrow\dfrac{S_2(H^1(G))}{\langle a^2\mid a\in H^1(G)\rangle}\longrightarrow H^2(G),
\qquad &\text{if }p = 2, \end{split}
\]
and thus $G$ is quadratically defined.
Assume now that $G$ is quadratically defined, and let $r_1,r_2$ be defining relations as in \eqref{eq:rel2}.
By Proposition~\ref{prop:cd2}, for $n\geq3$ one has $H^n(G)=0$, whereas $H^2(G)$ is generated by $a_1 a_2$ and $a_s a_t$,
so that $H^\bullet(G)$ is 1-generated.
In fact, by the relations \eqref{eq:relH2} one has epimorphisms of graded algebras
$\psi_\ast\colon \Lambda_\bullet(H^1(G))\to H^\bullet(G)$ --- if $p = 2$, with an abuse of notation we set
\[\Lambda_\bullet(H^1(G))=\frac{S_\bullet(H^1(G))}{(a^2\mid a\in H^1(G))}\]
---, with $\Ker(\psi_n)=\Lambda_n(H^1(G))$ for $n\geq3$.
We claim that $$\Ker(\psi_2)\wedge H^1(G)=\Lambda_3(H^1(G)),$$ which implies that $\Ker(\psi_\ast)$ is the two-sided ideal
of $\Lambda_\bullet(H^1(G))$ generated by $\Ker(\psi_2)$.
By \eqref{eq:relH2}, $\Ker(\psi_2)$ is the subspace of $\Lambda_2(H^1(G))$ generated by the elements
\[b_{ij}:= a_i \wedge a_j - \alpha_{ij} (a_1 \wedge a_2 ) - \beta_{ij}(a_s \wedge a_t ),\qquad1\leq i < j \leq d.\]
First, suppose that $s=1$.
Then one has $b_{ij}=a_i\wedge a_j-a_1\wedge(\alpha_{ij}a_2+\beta_{ij}a_t)$ for every $i<j$, and thus
\begin{equation}\label{eq:proof0}
\begin{split}
a_1 \wedge b_{2,t} &= a_1 \wedge a_2 \wedge a_t - a_1\wedge a_1\wedge(\alpha_{2,t}a_2+\beta_{2,t}a_t)\\
&= a_1 \wedge a_2 \wedge a_t - 0 , \end{split}
\end{equation}
so that $a_1 \wedge a_2 \wedge a_t\in\Ker(\psi_2)\wedge H^1(G)$.
Now, for any value of $s\geq1$ and for every $h\geq3$, one has
\begin{equation}\label{eq:proof1}
\begin{split}
a_2 \wedge b_{1,h} &= a_2 \wedge a_1 \wedge a_h - \alpha_{1,h} \cdot a_2 \wedge (a_1 \wedge a_2 ) -\beta_{1,h}\cdot a_2\wedge (a_s \wedge a_t )\\
&= -a_1 \wedge a_2 \wedge a_h - 0 - \beta_{1,h}(a_2\wedge a_s \wedge a_t ). \end{split}
\end{equation}
If $s=1$ then by \eqref{eq:proof1} $a_1 \wedge a_2 \wedge a_h$ lies in $\Ker(\psi_2)\wedge H^1(G)$, as $a_2\wedge a_1\wedge a_t$ does.
If $s\geq2$ then $(1, h) \prec (s, t)$ --- hence $\beta_{1,h} = 0$, and \eqref{eq:proof1} yields
$a_1 \wedge a_2 \wedge a_h\in\Ker(\psi_2)\wedge H^1(G)$.
Similarly, for every $h = 1,\ldots, d$, $h\neq s,t$, one has
\begin{equation}\label{eq:proof2}
\begin{split}
a_t\wedge b_{s,h} &= a_t\wedge a_s\wedge a_h-\alpha_{s,h}\cdot a_t\wedge(a_1\wedge a_2)-\beta_{s,h}\cdot a_t\wedge(a_s\wedge a_t)\\
&= -a_s \wedge a_t \wedge a_h - \alpha_{s,h}(a_1 \wedge a_2 \wedge a_t ) - 0.\end{split}
\end{equation}
Thus, $a_s\wedge a_t\wedge a_h\in\Ker(\psi_2)\wedge H^1(G)$, as $a_1\wedge a_2\wedge a_t\in\Ker(\psi_2)\wedge H^1(G)$ by \eqref{eq:proof0} and \eqref{eq:proof1}.
Finally, for every $1\leq h < k < l \leq d$ one has
\begin{equation}\label{eq:proof3}
\begin{split}
a_h\wedge b_{kl}&= a_h\wedge a_k\wedge a_l-\alpha_{kl}\cdot a_h\wedge(a_1\wedge a_2)-\beta_{kl}\cdot a_h\wedge(a_s\wedge a_t)\\
&= a_h\wedge a_k\wedge a_l-\alpha_{kl}(a_1\wedge a_2\wedge a_h)-\beta_{kl}(a_s\wedge a_t\wedge a_h),
\end{split}
\end{equation}
and thus $a_h\wedge a_k \wedge a_l \in \Ker(\psi_2)\wedge H^1(G)$.
Therefore, $\Ker(\psi_2)\wedge H^1(G)=\Lambda_3(H^1(G))$, and this yields the claim.
\end{proof}
\subsection{Universally Koszul cohomology}\label{ssec:UK 2rel}
The next result shows that the $\mathbb{F}_p$-cohomology of a quadratically defined two-relator pro-$p$ group is universally Koszul.
\begin{thm}\label{thm:2rel}
Let $G$ be a finitely generated quadratically defined two-relator
pro-$p$ group. Then $H^\bullet(G)$ is universally Koszul.
\end{thm}
\begin{proof}
Set $A_\bullet= H^\bullet(G)$, and $d =\dim(A_1 )$.
By Proposition~\ref{prop:2rel}, $A_\bullet$ is quadratic and graded-commutative (wedge-commutative if $p = 2$), and $A_n = 0$
for $n \geq 3$.
Let $\mathcal{B} = \{a_1 ,\ldots, a_d\}$ be a basis of $A_1$ as in \S~\ref{ssec:2rel cohom}.
Thus, $\{a_1 a_2 , a_s a_t \}$ is a basis of $A_2$, and one has the relations \eqref{eq:relH2}.
Let $I$ be an ideal of $A_\bullet$ lying in $\mathcal{L}(A_\bullet )$, $I \neq A_+$, and $b \in A_1\smallsetminus I_1$, and set
$J = I:(b)$.
Since $A_n = 0$ for $n \geq 3$, one has $A_2 \cdot (b) = 0 \subseteq I$, and thus $J_2 = A_2$.
In order to show that $J \in \mathcal{L}(A_\bullet )$, we need to show that $A_2$ is generated by
$J_1$, i.e., $J_1 \cdot A_1 = A_2$.
Since $b^2 = 0$, one has $b \in J_1$.
One has three cases: $\dim(b \cdot A_1 ) = 0, 1, 2$.
Suppose first that $\dim(b \cdot A_1 ) = 0$, i.e., $ba = 0$ for every $a \in A_1$.
Then $b\cdot A_1=0 \subseteq I_2$, and hence $A_1 \subseteq J_1$. Therefore, $J_1 \cdot A_1 = A_1 \cdot A_1 = A_2$.
Suppose now that $\dim(b \cdot A_1 ) = 1$, i.e., $b \cdot A_1$ is generated by $\alpha a_1 a_2 + \beta a_s a_t$,
for some $\alpha , \beta \in \mathbb{F}_p$, with $\alpha$, $\beta$ not both 0.
Then
\[ a_1 b = \lambda_1 (\alpha a_1 a_2 + \beta a_s a_t ),\qquad
a_2 b = \lambda_2 (\alpha a_1 a_2 + \beta a_s a_t )\]
for some $\lambda_1 , \lambda_2 \in\mathbb{F}_p$.
If $\lambda_1 = 0$ then $a_1 b = 0 \in I_2$, and $a_1 \in J_1$; similarly, if $\lambda_2 = 0$ then $a_2 \in J_1$.
If $\lambda_1,\lambda_2\neq 0$, then $(a_1-\lambda_1/\lambda_2 a_2)b=0\in I_2$, and $(a_1-\lambda_1/\lambda_2 a_2)\in J_1$.
In both cases, $a_1 a_2 \in J_1 \cdot A_1$.
Likewise,
\[a_s b = \mu_1 (\alpha a_1 a_2 + \beta a_s a_t ),\qquad
a_t b = \mu_2 (\alpha a_1 a_2 + \beta a_s a_t )\]
for some $\mu_1,\mu_2\in\mathbb{F}_p$.
If $\mu_1=0$ then $a_sb=0\in I_2$, and $a_s\in J_1$; similarly, if $\mu_2=0$ then $a_t\in J_1$.
If $\mu_1,\mu_2\neq0$, then $(a_s-\mu_1/\mu_2a_t)b=0\in I_2$, and $(a_s-\mu_1/\mu_2a_t)\in J_1$.
In both cases, $a_s a_t \in J_1 \cdot A_1$.
Therefore, $A_2 \subseteq J_1 \cdot A_1$, and this concludes the case $\dim(b \cdot A_1 ) = 1$.
Finally, suppose that $\dim(b \cdot A_1 )= 2$, i.e., $b\cdot A_1 = A_2$.
Since $b \in J_1$, one has $A_2 = b \cdot A_1 \subseteq J_1 \cdot A_1$. This concludes the proof.
\end{proof}
Now we may prove Theorem~\ref{thm:intro}.
\begin{proof}[Proof of Theorem~\ref{thm:intro}]
Let $G$ be a finitely generated pro-$p$ group with at most two defining relations, and suppose that $H^\bullet(G)$ is quadratic, and moreover $a^2 = 0$ for every $a \in H^1(G)$ if $p = 2$.
If $G$ has no relations, then it is a free pro-$p$ group, and by Proposition~\ref{prop:free} $H^\bullet(G)\simeq Q(V,V^{\otimes2})$ for some finite vector space $V$.
Hence, Example \ref{exam:UK}--(a) yields the claim.
If $G$ is one-relator, then by \cite[Prop.~4.2]{cq:onerel} $G$ is as in Example~\ref{exam:onerel}, and thus $H^\bullet(G)$ is universally Koszul.
Finally, if $G$ is two-relator, we apply Theorem~\ref{thm:2rel}.
\end{proof}
The next example shows that one may not extend Theorem~\ref{thm:intro} to finitely
generated pro-$p$ groups with quadratic $\mathbb{F}_p$-cohomology, cohomological dimension equal to 2 and more than two defining relations.
\begin{exam}\label{exam:square}\rm
Let $G$ be a pro-$p$ group with minimal presentation
\begin{equation}\label{eq:RAAG}
G = \left\langle x, y, z, t \mid [x, y]x^{\lambda_1}y^{\mu_1} = [y, z]y^{\lambda_2}z^{\mu_2} = [z, t]z^{\lambda_3}t^{\mu_3}= 1\right\rangle,
\end{equation}
with $\lambda_i,\mu_i\in p\mathbb{Z}_p$ for $i=1,2,3$ (and moreover $\lambda_i,\mu_i\in 4\mathbb{Z}_2$ if $p=2$).
Then $G$ is a {\sl generalised right-angled Artin pro-$p$ group} (cf. \cite[\S~5.1]{QSV}) associated to the
graph $\Gamma$ with vertices $\mathcal{V} =\{x, y, z, t\}$ and edges $\mathcal{E}=\{\{x, y\},\{y,z\},\{z,t\}\}$ --- i.e.,
$\Gamma$ is a path of length 3, and $\Gamma$ does not have the diagonal property.
In particular, if all $\lambda_i$'s and $\mu_i$'s are 0, then $G$ is the pro-$p$ completion of the (discrete) right-angled Artin group associated to $\Gamma$.
The cohomology algebra $H^\bullet(G)$ of $G$ is isomorphic to the exterior Stanley-Reisner algebra $\Lambda_\bullet(\Gamma)$ (cf. \cite[Thm. F]{QSV}).
In particular, $H^\bullet(G)$ is Koszul and $H^3(G) = 0$, but $H^\bullet(G)$ is not universally Koszul by Example~\ref{ex:RAAG no UK}.
\end{exam}
\section{Maximal pro-$p$ Galois groups}\label{sec:Galois}
For a field $\mathbb{K}$, let $\mathcal{G}_{\mathbb{K}}$ denote the maximal pro-$p$ Galois group of $\mathbb{K}$.
If $\mathbb{K}$ contains a root of 1 of order $p$, then by Kummer theory one has an isomorphism
\begin{equation}\label{eq:Kummer}
\mathbb{K}^\times/(\mathbb{K}^\times)^p\simeq H^1(\mathcal{G}_{\mathbb{K}}),
\end{equation}
where $\mathbb{K}^\times=\mathbb{K}\smallsetminus\{0\}$ denotes the multiplicative group of $\mathbb{K}$.
Moreover, let $\mathrm{Br}_p(\mathbb{K})$
denote the $p$-part of the Brauer group $\mathrm{Br}(\mathbb{K})$ of $\mathbb{K}$ --- i.e., $\mathrm{Br}_p(\mathbb{K})$ is the subgroup
of $\mathrm{Br}(\mathbb{K})$ generated by all elements of order $p$.
Then one has an isomorphism
\begin{equation}\label{eq:brauer}
\mathrm{Br}_p(\mathbb{K})\simeq H^2(\mathcal{G}_{\mathbb{K}} )
\end{equation}
(cf. \cite[Thm.~6.3.4]{nsw:cohn}).
The {\sl mod-$p$ Milnor $K$-ring} of $\mathbb{K}$ is the graded $\mathbb{F}_p$-algebra
\[
K_\bullet^M(\mathbb{K})_{/p}=\bigoplus_{n\geq0}K_n^M(\mathbb{K})_{/p}=\frac{T_\bullet(\mathbb{K}^\times)}{(\Omega)}\otimes_{\mathbb{Z}}\mathbb{F}_p
\]
where $T_\bullet(\mathbb{K}^\times)$ is the tensor algebra over $\mathbb{Z}$ generated by $\mathbb{K}^\times$, and $(\Omega)$ the two-sided ideal of $T_\bullet(\mathbb{K}^\times)$ generated by the elements $\alpha\otimes(1-\alpha)$ with $\alpha\in\mathbb{K}^\times\smallsetminus\{1\}$ (see, e.g., \cite[Def.~6.4.1]{nsw:cohn}).
Thus $K_\bullet^M(\mathbb{K})_{/p}$ is quadratic, with $K_1^M(\mathbb{K})_{/p}=\mathbb{K}^\times/(\mathbb{K}^\times)^p$.
If $\mathbb{K}$ contains a root of 1 of order $p$, by the Rost-Voevodsky Theorem the isomorphism \eqref{eq:Kummer} induces an isomorphism of $\mathbb{F}_p$-algebras
\begin{equation}\label{eq:RV}
K_\bullet^M(\mathbb{K})_{/p}\overset{\sim}{\longrightarrow} H^\bullet(\mathcal{G}_{\mathbb{K}})
\end{equation}
(cf. \cite{voev,weibel}, see also \cite[\S~24.3]{efrat:book}), and thus $H^\bullet(\mathcal{G}_\mathbb{K})$ is quadratic.
The following is a well-known fact on the $\mathbb{F}_2$-cohomology of maximal pro-$2$ Galois groups of fields.
\begin{lem}\label{lemma:k2}
Let $\mathbb{K}$ be a field containing $\sqrt{-1}$. Then the $\mathbb{F}_2$-cohomology algebra $H^\bullet(\mathcal{G}_{\mathbb{K}})$ of the maximal pro-$2$ Galois group of $\mathbb{K}$ is wedge-commutative.
\end{lem}
\begin{proof}
By \eqref{eq:RV}, it suffices to show that $K_\bullet^M(\mathbb{K})_{/2}$ is wedge-commutative.
For $\alpha,\beta\in\mathbb{K}^\times$, set $\{\alpha\}=\alpha(\mathbb{K}^\times)^2\in \mathbb{K}^\times/(\mathbb{K}^\times)^2$ and let $\{\alpha,\beta\}$ denote the image of $\alpha\otimes\beta$ in $K_2^M(\mathbb{K})_{/2}$.
By definition, for every $\alpha\in\mathbb{K}^\times$ one has $\{\alpha\}\cdot \{\alpha\}=\{\alpha,-1\}\in K_2^M(\mathbb{K})_{/2}$.
Hence, if $\sqrt{-1}\in\mathbb{K}$ then $-1\in(\mathbb{K}^\times)^2$, i.e., $\{-1\}$ is trivial in $K_1^M(\mathbb{K})_{/2}=\mathbb{K}^\times/(\mathbb{K}^\times)^2$, and hence $\{\alpha\}\cdot\{\alpha\}=\{\alpha,-1\}$ is trivial in $K_2^M(\mathbb{K})_{/2}$.
\end{proof}
From Theorem~\ref{thm:intro} one deduces the following.
\begin{cor}\label{cor:Galois}
Let $\mathbb{K}$ be a field containing a root of 1 of order $p$ (and also $\sqrt{-1}$ if $p = 2$), and suppose that the quotient
$\mathbb{K}^\times/(\mathbb{K}^\times)^p$ is finite. If $\rk(\mathrm{Br}_p(\mathbb{K}))\leq2$, then $H^\bullet(\mathcal{G}_\mathbb{K})$ is universally Koszul.
\end{cor}
\begin{proof}
By \eqref{eq:Kummer} and \eqref{eq:H1}, the pro-$p$ group $\mathcal{G}_{\mathbb{K}}$ is finitely generated.
Moreover, by \eqref{eq:brauer} and \eqref{eq:H2} $\mathcal{G}_{\mathbb{K}}$ has at most two defining relations.
Also, $H^\bullet(\mathcal{G}_{\mathbb{K}})$ is quadratic by the Rost-Voevodsky Theorem.
If $\mathrm{Br}_p(\mathbb{K})$ is trivial, then $H^n(\mathcal{G}_{\mathbb{K}})=0$ for all $n\geq2$, namely, $\cd(\mathcal{G}_{\mathbb{K}})= 1$,
and $\mathcal{G}_{\mathbb{K}}$ is a free pro-$p$ group by Proposition~\ref{prop:free}.
If $\rk(\mathrm{Br}_p(\mathbb{K}))=1$, then $\mathcal{G}_{\mathbb{K}}$ is one-relator, and if $p = 2$ then $H^\bullet(\mathcal{G}_{\mathbb{K}})$ is wedge-commutative by Lemma~\ref{lemma:k2}.
Finally, if $\rk(\mathrm{Br}_p(\mathbb{K}))=2$, then $\mathcal{G}_{\mathbb{K}}$ is a two-relator pro-$p$ group.
If $p\neq2$, then $\mathcal{G}_{\mathbb{K}}$ is quadratically defined by Proposition~\ref{prop:2rel}.
If $p=2$, then $H^\bullet(\mathcal{G}_{\mathbb{K}})$ is wedge-commutative by Lemma~\ref{lemma:k2}, and by Proposition~\ref{prop:2rel} $\mathcal{G}_{\mathbb{K}}$ is quadratically defined.
Altogether, Theorem~\ref{thm:intro} yields the claim.
\end{proof}
The isomorphisms \eqref{eq:H2} and \eqref{eq:brauer} imply that Corollary~\ref{cor:Galois} is equivalent to Corollary~\ref{cor:intro}.
\begin{rem}\label{rem:PZ}\rm
Let $G$ be the pro-$p$ group with minimal presentation \eqref{eq:RAAG}, with $\lambda_i=\mu_i=0$ for $i=1,2,3$.
It was recently shown by I.~Snopce and P.~Zalesskii that $G$ does not occur as maximal pro-$p$ Galois group $\mathcal{G}_{\mathbb{K}}$ for any field $\mathbb{K}$ containing a root of 1 of order $p$ (cf. \cite{SZ}).
Therefore, Example~\ref{exam:square} for this case does not provide a counterexample to Conjecture~\ref{conjecture:intro}.
On the other hand, if $G$ is a pro-$p$ group with minimal presentation \eqref{eq:RAAG} where not all $\lambda_i$'s and $\mu_i$'s are 0, it is not known (but for some special cases, see \cite[\S~5.7]{QSV}) whether $G$ may occur as maximal pro-$p$ Galois group $\mathcal{G}_{\mathbb{K}}$ for some field $\mathbb{K}$ containing a root of 1 of order $p$.
We conjecture it does not for any choice of the $\lambda_i$'s and $\mu_i$'s; and therefore we expect Example~\ref{exam:square} not to provide counterexamples to Conjecture~\ref{conjecture:intro}.
\end{rem}
{\small \subsection*{Acknowledgements}
The author wishes to thank the anonymous referee for her/his useful comments.
This paper was inspired by a talk (a {\sl mathematical salad}) that F.W. Pasini delivered in Milan, Italy, in Dec. 2018, and also by the discussions with him and with J. Min\'a\v{c} and N.D. T\^an on Koszul algebras and Galois cohomology --- therefore, the author is grateful to all these people. Also, the author wishes to remark
that J.P. Labute's work on Demushkin groups, mild pro-$p$ groups and filtrations of
groups deeply influenced the author's way of thinking about the Koszul property in Galois cohomology.
}
\section{INTRODUCTION}
In survey sampling, it is a common practice to collect data on a large
number of items. Even when a sampled unit responds to the survey,
this unit may not respond to some items.
In this scenario, imputation can be used to create a complete data set by filling in missing values with plausible values to facilitate data analyses. The goal of imputation is three-fold:
First, by providing complete data, subsequent analyses are easy to implement and can achieve consistency among different users.
Second, imputation reduces the selection bias associated with only using the respondent set, which may not necessarily represent the original sample.
Third, the imputed data can incorporate extra information so that the resulting analyses are statistically efficient and coherent.
Combining information from several surveys or creating synthetic data from planned missingness are cases in point (\citealt{schenker07}).
When the imputed data set is released to the public, it should
meet the goal of multiple uses both for planned and unplanned parameters
\citep{haziza2009imputation}. In a typical survey situation, the imputers may know some of the parameters of interest at the time of imputation, but rarely know the full set of possible parameters to be estimated from the data.
Single imputation, such as hot deck imputation, regression imputation and
stochastic regression imputation, replaces each of the missing data
with one plausible value. Although
single imputation has been widely used, one drawback is that it does
not take into account of the full uncertainty of missing data and often falls short of multiple-purpose estimation.
Multiple imputation (MI) was proposed by \citet*{rubin1976inference}
to replace each missing value with multiple plausible values, so as to reflect the full uncertainty in the prediction of missing data.
Several authors (\citealt{rubin1987multiple}; \citealt{little2002statistical}; \citealt{schafer1997imputation}) have promoted MI as a standard approach for general-purpose estimation under item nonresponse in survey sampling.
Although the variance estimation formula of \citet*{rubin1987multiple}
is simple and easy to apply, it is not always consistent (\citealt{fay1992inferences};
\citealt{wang1998large}; \citealt{kim2006bias}). To use the MI variance estimation formula, the congeniality condition
of \citet*{meng1994multiple} needs to be met, which can be restrictive
for general-purpose inference. For example, \citet*{kim11}
pointed out that an MI procedure that is
congenial for mean estimation is
not necessarily congenial for proportion estimation.
Fractional imputation (FI) is another effective imputation tool for general-purpose estimation,
with the advantage of not requiring the congeniality condition. FI
was originally proposed by \citet*{kalton1984some} to reduce the variance
of single imputation methods by replacing each missing value with
several plausible values assigned different probabilities, reflected through fractional weights.
\citet*{kim2004fractional}, \citet*{fuller2005hot}, \citet*{durrant2005imputation},
\citet*{durrant2006using} discussed FI as a nonparametric imputation
method for descriptive parameters of interest in survey sampling. \citet*{kim11} and \citet*{Kim2014Fractionalhotdeck} presented FI under fully parametric model assumptions.
More generally, FI can also serve as a computational tool
for implementing the expectation step (E-step) in the EM algorithm (\citealt{wei1990monte};
\citealt{kim11}). When the conditional expectation in the E-step is
not available in a closed form, parametric FI of \citet*{kim11}
simplifies computation by drawing on the importance
sampling idea. Through fractional weights, FI can reduce the burden of iterative
computation, such as Markov Chain Monte Carlo, for evaluating
the conditional expectation associated with missing data.
\citet*{kimhong12} extended parametric FI to a more general class of incomplete data, including measurement error models.
Despite these advantages,
FI has not been widely used in applied research, owing to a lack of accessible
material that provides researchers with a comprehensive understanding
of this approach.
The goal of this paper is to bring more attention to FI by reviewing existing research on FI, introducing key ideas and methods, and highlighting some new development, mainly
in the context of survey sampling. This paper also provides guidance
on practical implementations and applications of FI.
This paper is organized as follows. Section 2 provides the basic
setup and Section 3 introduces FI under parametric model assumptions.
Section 4 discusses a nonparametric approach to FI, especially in the context of hot deck imputation. Section 5 introduces synthetic data imputation using FI in the context of two-phase sampling and statistical matching.
Section 6 deals with practical considerations and variations of FI, including imputation sizes, choices of proposal distributions and doubly robust FI.
Section 7 compares FI with MI in terms of efficiency of the point
estimator and the variance estimator. Section 8 demonstrates a
simulation study based on an actual data set. A discussion concludes
this paper in Section 9.
\section{BASIC SETUP}
Consider a finite population of $N$ units identified by a set of
indices $U=\{1,2,\cdots,N\}$ with $N$ known.
The $p$-dimensional study variable $y_{i}=(y_{i1},\cdots,y_{ip})$, associated with each
unit $i$ in the population,
is subject to missingness. We assume that the finite population at hand is a realization from an infinite population, called a \textit{superpopulation}. In the
superpopulation model, we often postulate a parametric distribution,
$f(y;\theta)$, with the parameter $\theta\in\Omega$. We can express the density for the joint distribution of $y$ as
\begin{equation}
f(y ; \theta)=f_{1}(y_{1} ;\theta_{1})f_{2}(y_{2}\mid y_{1};\theta_{2})\cdots f_{p}(y_{p}\mid y_{1},\cdots,y_{p-1};\theta_{p}), \label{eq: joint dist}
\end{equation}
where $\theta_{k}$ is the parameter in the conditional distribution
of $y_{k}$ given $y_{1},\cdots,y_{k-1}$.
Now let $A$ denote the set of indices for units in a sample selected
by a probability sampling mechanism. Each unit
is associated with a sampling weight, the inverse of the probability
of being selected to the sample, denoted by $w_{i}$.
We are interested in estimating $\eta$, defined as
a (unique) solution to the population estimating equation $\sum_{i=1}^{N}U(\eta;y_{i})=0$.
For example, a population mean of $y$ can be obtained by letting
$U(\eta;y_{i})=\eta-y_{i}$, a population proportion of $y$ less
than a threshold $c$ can be obtained by specifying $U(\eta;y_{i})=\eta-I_{\{y_{i}<c\}}$,
where $I$ is an indicator function, a population median of $y$
can be obtained by choosing $U(\eta;y_{i})=0.5-I_{\{y_{i}<\eta\}}$, and so on.
Under complete response, a consistent estimator of $\eta$ is obtained
by solving
\begin{equation}
\sum_{i\in A}w_{i}U(\eta;y_{i})=0.\label{eq:compEE}
\end{equation}
\citet*{godambe1986parameters}, \citet*{binder1994use} and \citet*{rao2002estimating} have rigorously investigated the estimator obtained from (\ref{eq:compEE}) under complex sampling.
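For instance, with $U(\eta;y_{i})=\eta-y_{i}$ the solution to (\ref{eq:compEE}) is the familiar weighted mean $\hat{\eta}=\sum_{i\in A}w_{i}y_{i}/\sum_{i\in A}w_{i}$, and with $U(\eta;y_{i})=\eta-I_{\{y_{i}<c\}}$ it is the corresponding weighted proportion.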
In the presence of missing data, first consider decomposing $y_{i}=(y_{obs,i},y_{mis,i})$, where
$y_{obs,i}$ and $y_{mis,i}$ are the observed and missing
part of $y_{i}$, respectively.
We assume that the response mechanism
is missing at random (MAR) in the sense of \citet*{rubin1976inference}.
That is, the probability of nonresponse does not depend on the missing
value itself. Under MAR,
a consistent estimator of $\eta$
can be obtained by solving the conditional estimating equation, given the observed data $y_{obs}=(y_{obs,1},\ldots,y_{obs,n})$,
\begin{equation}
\sum_{i\in A}w_{i}E\{U(\eta;y_{i})\mid y_{obs,i}\}=0,\label{eq:condEE}
\end{equation}
where the above conditional expectation is taken with respect to the prediction model (also called the imputation model),
\begin{equation}
f( y_{mis, i} \mid y_{obs, i}; \theta ) = \frac{ f( y_{obs,i}, y_{mis, i} ; \theta )} { \int f( y_{obs,i}, y_{mis, i} ; \theta ) d y_{mis, i} } ,
\label{4}
\end{equation}
which depends on the unknown parameter $\theta$. Imputation is thus a computational tool for computing the conditional expectation in (\ref{eq:condEE}) for arbitrary choices of the estimating function $U(\eta; y)$. The resulting conditional expectation using imputation can be called the imputed estimating function.
Table 1 presents a summary of Bayesian and frequentist approaches of statistical inference with missing data. In the Bayesian approach, $\theta$ is treated as a random variable and the reference distribution is the joint distribution of $\theta$ and the latent (missing) data, given the observed data. On the other hand, in the frequentist approach, $\theta$ is treated as fixed and the reference distribution is the conditional distribution of the latent data, conditional on the observed data, for a given parameter $\theta$. The learning algorithm, that is, the algorithm for updating information for parameters from observed data, for the Bayesian approach is data augmentation (\citealt{Tanner87}), while the learning algorithm for the frequentist approach is usually the EM algorithm.
\begin{table}[t]
\begin{center}
\caption{Comparison of two approaches of inference with missing data}
\begin{tabular}{ccc}
\hline%
& Bayesian & Frequentist \\
\hline
Model & Posterior distribution & Prediction model \\
& $f( \mbox{latent}, \theta \mid \mbox{Obs.} )$ & $f( \mbox{latent} \mid \mbox{Obs.}, \theta ) $\\
\hline
Learning algorithm& Data augmentation & EM algorithm \\
Prediction & Imputation(I)-step & Expectation(E)-step \\
Parameter update & Posterior(P)-step & Maximization(M)-step \\
\hline
Imputation & Multiple imputation & Fractional imputation \\
\hline
Variance estimation & Rubin's formula & Linearization \\
& & or replication \\
\hline
\end{tabular}
\end{center}
\end{table}
MI is a Bayesian imputation method and
the imputed estimating function is computed with respect to
the posterior predictive distribution,
$$ f( y_{mis, i} \mid y_{obs} ) = \int f( y_{mis, i} \mid y_{obs, i} ; \theta) p( \theta \mid y_{obs} ) d \theta,
$$
which is the average of the predictive distribution $ f( y_{mis, i} \mid y_{obs, i}; \theta ) $ over the posterior distribution of $\theta$.
On the other hand, in the frequentist approach, the conditional expectation in (\ref{eq:condEE}) is taken with respect to the prediction model (\ref{4}) evaluated at $\theta=\hat{\theta}$, a consistent estimator of $\theta$. For example,
one can use the pseudo MLE $\hat{\theta}$ of $\theta$ obtained
by solving the pseudo mean score equation (\citealt{louis82};
\citealt{pfeffermann1998weighting}),
\begin{equation}
\bar{S}(\theta)=\sum_{i\in A}w_{i}E\{S( {\theta}; y_i )\mid y_{obs,i}; \theta \}=0,\label{eq:mean-score}
\end{equation}
where $S(\theta; y_i)=\partial\log f(y_{i} ; \theta)/\partial\theta$.
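Note that for a fully observed unit the conditional expectation in (\ref{eq:mean-score}) reduces to $S(\theta;y_{i})$ itself, so that only the nonrespondents require the prediction model (\ref{4}).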
While the Bayesian approach to imputation, especially in the context of MI, is well studied in the literature, the frequentist approach to imputation is somewhat sparse. FI has been proposed to fill in this important gap. In FI, the conditional expectation in (\ref{eq:condEE}) is computed by a weighted mean of the imputed estimating functions
\begin{equation}
E\{U(\eta; y_{i}) \mid y_{obs,i}\}\cong\sum_{j=1}^{M}w_{ij}^{*}U(\eta; y_{obs,i},y_{mis,i}^{*(j)})
\label{1-3}
\end{equation}
where
$y_{mis,i}^{*(j)}$, for $j=1,\ldots,M$, are $M$ imputed values for $y_{mis,i}$
(if ${y}_{i}$ is completely observed, $y_{mis,i}^{*(j)}\equiv y_{mis,i}$),
$w_{ij}^*$ are the fractional weights that satisfy $w_{ij}^* \ge 0$, $\sum_{j=1}^M w_{ij}^* = 1$ and
$$ \sum_{i\in A}w_{i}\sum_{j=1}^{M}w_{ij}^{*}S(\hat{\theta} ; y_{obs,i},y_{mis,i}^{*(j)})=0.$$
Once the FI data are constructed, the FI estimator of $\eta$ is obtained by solving
\begin{equation}
\sum_{i\in A}w_{i}\sum_{j=1}^{M}w_{ij}^{*}U(\eta; y_{obs,i},y_{mis,i}^{*(j)})=0.
\label{1-5}
\end{equation}
In general, the FI method augments the original data set as
\begin{equation}
\mathcal{S}_{FI}= \left\{ \delta_i \left( w_i, y_i \right)+ (1-\delta_i) \left( w_iw_{ij}^*, y_{ij}^* \right); j=1,\ldots, M, i \in A \right\},
\label{1-6}
\end{equation}
where $\delta_i$ is the indicator of full response for $y_i$, and $y_{ij}^*=(y_{obs,i},y_{mis,i}^{*(j)})$.
If (\ref{1-3}) holds for an arbitrary $U$ function, the resulting estimator is approximately unbiased for a fairly large class of parameters,
which makes the imputation attractive for general-purpose estimation. \citet*{kim11} used the importance sampling technique to satisfy (\ref{1-3}) for general $U$ functions, which will be presented in the next section.
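To illustrate how the augmented data set (\ref{1-6}) enters (\ref{1-5}), the following Python sketch (a toy illustration with made-up numbers, not code from the papers cited above) stores one row per (unit, imputation) pair and solves (\ref{1-5}) for a mean and for a proportion, both of which admit closed-form solutions.
\begin{verbatim}
import numpy as np

# Toy augmented data set: unit 1 is fully observed (one row with
# fractional weight 1); unit 2 has M = 2 imputed values.
w  = np.array([2.0, 1.5, 1.5])   # sampling weights w_i
wf = np.array([1.0, 0.6, 0.4])   # fractional weights w_ij^*
y  = np.array([3.0, 2.0, 5.0])   # observed or imputed values y_ij^*
W  = w * wf                      # row weights w_i * w_ij^*

# Mean: U(eta; y) = eta - y gives the weighted mean of the rows.
eta_mean = np.sum(W * y) / np.sum(W)

# Proportion below c: U(eta; y) = eta - I(y < c).
c = 4.0
eta_prop = np.sum(W * (y < c)) / np.sum(W)

print(eta_mean, eta_prop)
\end{verbatim}
The same augmented rows serve both parameters; only the $U$ function changes, which is the sense in which FI supports general-purpose estimation.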
\section{PARAMETRIC FRACTIONAL IMPUTATION}
\label{sec:PFI}
\textit{Parametric Fractional Imputation} (PFI),
proposed by \citet*{kim11}, features a parametric model for fractional imputations, and parameters in the imputation model are estimated by a computationally efficient EM algorithm.
To compute the conditional estimating equation in (\ref{eq:condEE}) by PFI, for each missing value $y_{mis,i}$, generate $M$ imputed values, denoted by $\{y_{mis,i}^{*(1)},\ldots,y_{mis,i}^{*(M)}\}$
from a proposal distribution $h(y_{mis,i}\mid y_{obs,i})$. How to choose a
proposal distribution will be discussed in Section \ref{sub:Choice of h}.
Once the imputed values are generated from $h(\cdot)$, compute
\[
w_{ij}^{*}\propto\frac{f(y_{mis,i}^{*(j)}\mid y_{obs,i};\hat{\theta})}{h(y_{mis,i}^{*(j)}\mid y_{obs,i})},
\]
subject to $\sum_{j=1}^{M}w_{ij}^{*}=1$, as the fractional weights assigned to
$y_{ij}^* = (y_{obs,i},y_{mis,i}^{*(j)})$, where $\hat{\theta}$
is the pseudo MLE of $\theta$ to be determined by the EM algorithm below.
Since $\sum_{j=1}^{M}w_{ij}^{*}=1$, the above fractional weight is
the same as $w_{ij}^* = w_{ij}^{*} ( \hat{\theta})$, where
\begin{equation}
w_{ij}^{*} ( {\theta}) \propto\frac{f(y_{obs,i},y_{mis,i}^{*(j)}; {\theta})}{h(y_{mis,i}^{*(j)}\mid y_{obs,i})},
\label{9}
\end{equation}
which only requires the knowledge of the joint distribution $f(y ;\theta)$
and the proposal distribution $h$.
The pseudo MLE of $\theta$ can be computed by solving the imputed mean score equation,
\begin{equation}
\sum_{i\in A} w_i \sum_{j=1}^{M}w_{ij}^{*} (\theta) S({\theta};y_{obs,i},y_{mis,i}^{*(j)})=0.\label{eq:approximation1}
\end{equation}
To solve (\ref{eq:approximation1}), we can either use the Newton method or
the following EM algorithm:
\begin{description}
\item [{\textit{{I-step.}}}] For each missing value $y_{mis,i}$, $M$ imputed
values are generated from a proposal distribution $h(y_{mis,i}\mid y_{obs,i})$.
\item [{\textit{{W-step.}}}] Using the current value of the parameter
estimates $\hat{\theta}_{(t)}$, compute the fractional weights as
$w_{ij(t)}^{*}\propto f(y_{obs,i},y_{mis,i}^{*(j)} ;\hat{\theta}_{(t)})/h(y_{mis,i}^{*(j)}\mid y_{obs,i})$,
subject to $\sum_{j=1}^{M}w_{ij(t)}^{*}=1$.
\item [{\textit{{M-step.}}}] Update the parameter $\hat{\theta}_{(t+1)}$
by solving the imputed score equation, $$\sum_{i\in A}w_{i}\sum_{j=1}^{M}w_{ij(t)}^{*}S(\theta; y_{ij}^{*})=0,$$
where $y_{ij}^{*}=(y_{obs,i},y_{mis,i}^{*(j)})$ and $S(\theta; y)=\partial\log f(y ;\theta)/\partial\theta$
is the score function of $\theta$.
\item [\textit{{Iteration.}}] Set $t=t+1$ and go to the W-step. Stop if $\hat{\theta}_{(t+1)}$
meets the convergence criterion.
\end{description}
Here, the I-step is the imputation step, the W-step is the weighting step, and the M-step
is the maximization step. The I- and W-steps can be
combined to implement the E-step of the EM algorithm. Unlike the Monte Carlo EM (MCEM)
method, imputed values are not changed for each EM iteration -- only
the fractional weights are changed. Thus, the FI method has computational
advantages over the MCEM method. Convergence is achieved because the
imputed values are not changed. \citet*{kim11} showed
that given the $M$ imputed values, $y_{mis,i}^{*(1)},\ldots,y_{mis,i}^{*(M)}$,
the sequence of estimators $\{\hat{\theta}_{(0)},\hat{\theta}_{(1)},\ldots\}$
from the W- and M-steps converges to a stationary point
$\hat{\theta}_{M}^{*}$ for fixed $M$. The stationary point $\hat{\theta}_{M}^{*}$
converges to the pseudo MLE of $\theta$ as $M\rightarrow\infty$.
The resulting weight $w_{ij}^{*}$ after convergence is the fractional
weight assigned to $y_{ij}^{*}=(y_{obs,i},y_{mis,i}^{*(j)})$.
We may add an additional step to monitor the distribution of the fractional weights, so that
no extremely large fractional weights dominate.
Once the fractional imputed data is constructed from the above steps, it can be used to estimate other parameters of interest. That is, we can use (\ref{1-5}) to estimate $\eta$ from the FI data set.
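To fix ideas before the bivariate example, the following Python sketch illustrates the I-, W- and M-steps numerically. It is a self-contained toy (equal sampling weights, a single outcome model $y\mid x\sim N(\beta_0+\beta_1x,\sigma^2)$ with $y$ missing at random given $x$; all variable names are illustrative, and this is not the authors' software), with the respondent fit serving both as the starting value and as the proposal $h$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y | x ~ N(b0 + b1 x, sig2); y is missing at random given x.
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)
resp = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-x))  # response indicator
r, m = np.where(resp)[0], np.where(~resp)[0]

def dnorm(v, mu, sig2):
    return np.exp(-0.5 * (v - mu) ** 2 / sig2) / np.sqrt(2.0 * np.pi * sig2)

def wls(xv, yv, wv):
    # Weighted score equations for the normal regression model:
    # weighted least squares for beta, weighted mean square for sigma^2.
    X = np.column_stack([np.ones_like(xv), xv])
    beta = np.linalg.solve(X.T @ (wv[:, None] * X), X.T @ (wv * yv))
    sig2 = np.sum(wv * (yv - X @ beta) ** 2) / np.sum(wv)
    return beta, sig2

beta0, sig20 = wls(x[r], y[r], np.ones(r.size))  # respondent fit = proposal h

# I-step (done once): draw M imputed values per nonrespondent from h.
M = 50
mu_h = beta0[0] + beta0[1] * x[m]
ystar = mu_h[:, None] + np.sqrt(sig20) * rng.normal(size=(m.size, M))

beta, sig2 = beta0, sig20
for _ in range(100):
    # W-step: weights proportional to f(y* | x; theta_t) / h(y* | x).
    mu_t = beta[0] + beta[1] * x[m]
    wstar = dnorm(ystar, mu_t[:, None], sig2) / dnorm(ystar, mu_h[:, None], sig20)
    wstar /= wstar.sum(axis=1, keepdims=True)
    # M-step: solve the imputed score equation (weighted MLE).
    xx = np.concatenate([x[r], np.repeat(x[m], M)])
    yy = np.concatenate([y[r], ystar.ravel()])
    ww = np.concatenate([np.ones(r.size), wstar.ravel()])
    beta, sig2 = wls(xx, yy, ww)

print(beta, sig2)
\end{verbatim}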
We now consider a bivariate missing data example to illustrate
the use of the EM algorithm in FI.
\begin{example}
Suppose a probability sample consists
of $n$ units of $z_i = (x_{i},y_{1i},y_{2i})$ with sampling weight $w_{i}$,
where $x_i$ is always observed and
$y_{i}=(y_{1i},y_{2i})$ is subject to missingness. Let $A_{11}$, $A_{10}$, $A_{01}$, and $A_{00}$
be the partition of the sample based on the missing pattern, where
a subscript $1$/$0$ in the $k$-th position denotes that the $k$-th
$y$ item is observed/missing, respectively. For example, $A_{10}$ is the set of
the sample units with $y_{1i}$ observed and $y_{2i}$ missing.
The conditional expectation in (\ref{eq:condEE}) involves evaluating
the conditional distribution of $y_{mis,i}$ given the observed data
$x_{i}$ and $y_{obs,i}$ for each missing pattern, which is then decomposed into
\begin{multline*}
\sum_{i\in A}w_{i}E\{U(\eta;z_{i})\mid x_{i},y_{obs,i}\}=\sum_{i\in A_{11}}w_{i}U(\eta;x_{i},y_{1i},y_{2i})+\sum_{i\in A_{00}}w_{i}E\{U(\eta;x_{i},Y_{1i},Y_{2i})\mid x_{i}\}\\
+\sum_{i\in A_{01}}w_{i}E\{U(\eta;x_{i},Y_{1i},y_{2i})\mid x_{i},y_{2i}\}+\sum_{i\in A_{10}}w_{i}E\{U(\eta;x_{i},y_{1i},Y_{2i})\mid x_{i},y_{1i}\}.
\end{multline*}
Suppose the joint distribution in (\ref{eq: joint dist}) is
\begin{equation}
f(x, y_{1},y_{2};\theta)= f_x ( x; \theta_0) f_{1}(y_{1}\mid x;\theta_{1})f_{2}(y_{2}\mid x,y_{1};\theta_{2}).\label{eq:jointDistp=00003D2}
\end{equation}
From the full respondent sample in $A_{11}$, obtain $\hat{\theta}_{1(0)}$
and $\hat{\theta}_{2(0)}$, which are initial parameter estimates
for $\theta_{1}$ and $\theta_{2}$.
In the I-step, for each missing value $y_{mis,i}$, generate $M$
imputed values from $h(y_{mis,i}\mid x_{i},y_{obs,i})=f(y_{mis,i}\mid x_{i},y_{obs,i};\hat{\theta}_{(0)})$,
where
\begin{equation}
f(y_{mis,i}\mid x_{i},y_{obs,i};\hat{\theta}_{(0)})=\left\{ \begin{array}{ll}
f_{2}(y_{2i}\mid x_{i},y_{1i};\hat{\theta}_{2(0)}) & \mbox{ if }i\in A_{10}\\
f(y_{1i}\mid x_{i},y_{2i};\hat{\theta}_{(0)}) & \mbox{ if }i\in A_{01}\\
f(y_{1i},y_{2i}\mid x_{i};\hat{\theta}_{(0)}) & \mbox{ if }i\in A_{00}
\end{array}\right.\label{6-8}
\end{equation}
and
\begin{equation}
f(y_{1i}\mid x_{i},y_{2i};\hat{\theta}_{(0)})=\frac{f_{1}(y_{1i}\mid x_{i};\hat{\theta}_{1(0)})f_{2}(y_{2i}\mid x_{i},y_{1i};\hat{\theta}_{2(0)})}{\int f_{1}(y_{1i}\mid x_{i};\hat{\theta}_{1(0)})f_{2}(y_{2i}\mid x_{i},y_{1i};\hat{\theta}_{2(0)})dy_{1i}}.\label{6-10}
\end{equation}
Note that the marginal distribution of $x$, $f_x( x; \theta_0)$, is not used in (\ref{6-10}).
Except for some special cases such as when both $f_{1}$ and $f_{2}$ are
normal distributions, the conditional distribution in (\ref{6-10})
is not available in closed form. Thus, some computational tools such as the Metropolis-Hastings algorithm (\citealt{hastings1970monte}) or
SIR (Sampling
Importance Resampling, \citealt{smith92}) are needed to generate samples from
(\ref{6-10}) for $i\in A_{01}$. For example, the SIR consists of the following
steps:
\begin{enumerate}
\item Generate $B$ (say $B=100$) Monte Carlo samples, denoted by $y_{1i}^{*(1)}, \cdots, y_{1i}^{*(B)}$, from $f_{1}(y_{1i}\mid x_{i};\hat{\theta}_{1(0)})$.
\item Among the $B$ samples obtained from Step 1, select one sample with the selection probability proportional to $f_{2}(y_{2i}\mid x_{i},y_{1i}^{*(k)};\hat{\theta}_{2(0)})$, where $y_{1i}^{*(k)}$ is the $k$-th sample from Step 1 ($k=1, \cdots, B$).
\item Repeat Step 1 and Step 2 independently $M$ times to obtain $M$ imputed
values.
\end{enumerate}
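As an illustration only, the following Python sketch implements the three SIR steps above, assuming normal working models for $f_{1}$ and $f_{2}$; the model forms, function names and parameter values are hypothetical and serve only to make the interface concrete.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Illustrative working models (an assumption, not part of the method):
# f1: y1 | x  ~ N(b0 + b1*x, s1^2)
# f2: y2 | x, y1 ~ N(g0 + g1*x + g2*y1, s2^2)
def f2_density(y2, x, y1, theta2):
    g0, g1, g2, s2 = theta2
    resid = y2 - (g0 + g1 * x + g2 * y1)
    return np.exp(-0.5 * (resid / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))

def sir_impute_y1(x_i, y2_i, theta1, theta2, M=10, B=100):
    # Generate M imputed values of y1 for a unit in A_01 via SIR.
    b0, b1, s1 = theta1
    imputed = np.empty(M)
    for m in range(M):
        # Step 1: B draws from the proposal f1(y1 | x)
        y1_draws = rng.normal(b0 + b1 * x_i, s1, size=B)
        # Step 2: resample one draw with probability
        # proportional to f2(y2 | x, y1)
        p = f2_density(y2_i, x_i, y1_draws, theta2)
        imputed[m] = rng.choice(y1_draws, p=p / p.sum())
    return imputed

y1_star = sir_impute_y1(x_i=1.2, y2_i=0.7,
                        theta1=(0.0, 1.0, 1.0),
                        theta2=(0.0, 0.5, 0.5, 1.0))
\end{verbatim}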
Once we obtain $M$ imputed values of $y_{1i}$, we can use
\[
{h}(y_{1i}\mid x_{i},y_{2i})\propto f_{1}(y_{1i}\mid x_{i};\hat{\theta}_{1(0)})f_{2}(y_{2i}\mid x_{i},y_{1i};\hat{\theta}_{2(0)})
\]
as the proposal density in (\ref{6-8}). Since $\sum_{j=1}^{M}w_{ij}^{*}=1$,
we do not need to compute the normalizing constant in (\ref{6-10}). For $i \in A_{10}$, $M$ imputed values of $y_{2i}$ are generated from $f_2 ( y_{2i} \mid x_i, y_{1i}; \hat{\theta}_{2(0)} )$. For $i \in A_{00}$, $M$ imputed values of $y_{1i}$ are generated from $f_1( y_{1i} \mid x_i ; \hat{\theta}_{1(0)} )$ and then $M$ imputed values of $y_{2i}$ are generated from $f_2 ( y_{2i} \mid x_i, y_{1i}^* ; \hat{\theta}_{2(0)} )$.
In the W-step, the fractional weights are computed by
\[
w_{ij(t)}^{*}\propto\frac{f_{1}(y_{1i}^{*(j)}\mid x_{i};\hat{\theta}_{1(t)})f_{2}(y_{2i}^{*(j)}\mid x_{i},y_{1i}^{*(j)};\hat{\theta}_{2(t)})}{h(y_{mis,i}^{*(j)}\mid x_{i},y_{obs,i})}
\]
with $\sum_{j=1}^{M}w_{ij(t)}^{*}=1$, where $y_{1i}^{*(j)}=y_{1i}$
if $y_{1i}$ is observed and $y_{2i}^{*(j)}=y_{2i}$ if $y_{2i}$
is observed.
\end{example}
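To make the W-step of the example concrete, the following sketch computes the fractional weights for a single unit in $A_{01}$ at iteration $t$, reusing the illustrative normal working models of the SIR sketch above; all names are hypothetical.
\begin{verbatim}
import numpy as np

def normal_pdf(z, mean, sd):
    return np.exp(-0.5 * ((z - mean) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def w_step_A01(x_i, y2_i, y1_star, theta1_t, theta2_t, h_vals):
    # Fractional weights for a unit in A_01 at EM iteration t.
    # y1_star: (M,) imputed values of y1 (fixed across iterations);
    # h_vals:  (M,) proposal density h evaluated at the imputed values.
    b0, b1, s1 = theta1_t
    g0, g1, g2, s2 = theta2_t
    f1 = normal_pdf(y1_star, b0 + b1 * x_i, s1)
    f2 = normal_pdf(y2_i, g0 + g1 * x_i + g2 * y1_star, s2)
    w = f1 * f2 / h_vals
    return w / w.sum()  # normalize so the weights sum to one
\end{verbatim}
The M-step then solves the fractionally weighted score equation with these weights held fixed.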
The above example covers a broad range of applications in the
missing data literature, such as missing covariate problems, measurement
error models, generalized linear mixed models, and so on. \citet*{Yang2014SemiparametricInference} considered regression
analyses with missing covariates in survey data using FI, where in the current notation,
$f(y_{2}\mid x,y_{1})$ is a regression model with $y_{2}$ and $x$
fully observed and $y_{1}$ subject to missingness.
In generalized linear mixed models, $f(y_{2}\mid x,y_{1})$
is a generalized linear mixed model where $y_{1}$ is the latent random
effect. See \citet*{Yang2013parametric} for using FI to estimate
parameters in the generalized linear mixed models.
For variance estimation, note that the imputed estimator $\hat{\eta}_{FI}$ obtained from the imputed estimating equation (\ref{1-5}) depends on $\hat{\theta}$ obtained from (\ref{eq:approximation1}). To reflect this dependence, we can write $\hat{\eta}_{FI} = \hat{\eta}_{FI} ( \hat{\theta})$. To account for the sampling variability of $\hat{\theta}$ in the imputed estimator $\hat{\eta}_{FI}$,
either the linearization method or replication methods can be used. In the linearization method, the imputation model is needed in order to compute partial derivatives of the score functions. To avoid disclosing the imputation model,
replication methods are often preferred (\citealt{rao1992jackknife}). To
implement the replication variance estimation in FI, we
first obtain the $k$-th replicate pseudo MLE $\hat{\theta}^{[k]}$
of $\hat{\theta}$ by solving
\begin{equation}
\bar{S}^{*[k]}(\theta)\equiv\sum_{i\in A}w_{i}^{[k]}\sum_{j=1}^{M}w_{ij}^{*}(\theta)S(\theta;y_{ij}^{*})=0,
\label{13}
\end{equation}
where $w_{i}^{[k]}$ is the $k$-th replication weight and $w_{ij}^*( \theta)$ is defined in (\ref{9}).
To obtain $\hat{\theta}^{[k]}$ from (\ref{13}),
either the EM algorithm or the one-step Newton method can be used.
The EM algorithm can be implemented similarly as before. For the one-step Newton method, we have
$$ \hat{\theta}^{[k]} = \hat{\theta} - \left\{ \frac{ \partial }{ \partial \theta^T} \bar{S}^{*[k]}( \hat{\theta} )
\right\}^{-1}
\sum_{i\in A}w_{i}^{[k]}\sum_{j=1}^{M}w_{ij}^{*}(\hat{\theta})S(\hat{\theta};y_{ij}^{*}),
$$
where
\begin{align*}
\frac{ \partial }{ \partial \theta^T} \bar{S}^{*[k]}( {\theta} ) = &
\sum_{i\in A}w_{i}^{[k]}\sum_{j=1}^{M}w_{ij}^{*}(\theta)\dot{S}(\theta;y_{ij}^{*}) \\
& + \sum_{i\in A}w_{i}^{[k]}\sum_{j=1}^{M}w_{ij}^{*}(\theta) \left\{ S({\theta};y_{ij}^{*})- \sum_{l=1}^{M}w_{il}^{*}(\theta) S({\theta};y_{il}^{*}) \right\}^{\otimes 2}
\end{align*}
with $\dot{S}(\theta;y) = \partial S( \theta ; y) / \partial \theta^T$ and $B^{\otimes 2} = B B^T$.
Once $\hat{\theta}^{[k]}$ is obtained, we obtain the $k$-th replicate $\hat{\eta}^{[k]}$
of $\hat{\eta}$ by solving
\[
\sum_{i\in A}w_{i}^{[k]}\sum_{j=1}^{M}w_{ij}^{*[k]}U(\eta;y_{ij}^{*})=0
\]
for $\eta$, where $w_{ij}^{*[k]}= w_{ij}^* ( \hat{\theta}^{[k]})$.
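The one-step Newton update above is mostly matrix bookkeeping; the following is a minimal sketch, assuming the scores $S(\hat{\theta};y_{ij}^{*})$ and their derivatives have already been evaluated and stored as arrays (a hypothetical interface).
\begin{verbatim}
import numpy as np

def one_step_newton_replicate(theta_hat, w_rep, w_star, S, S_dot):
    # k-th replicate theta^[k] via one Newton step.
    # w_rep:  (n,) replication weights w_i^[k];
    # w_star: (n, M) fractional weights w*_ij(theta_hat);
    # S:      (n, M, d) scores S(theta_hat; y*_ij);
    # S_dot:  (n, M, d, d) score derivatives.
    Sbar = np.einsum('ij,ijd->id', w_star, S)    # within-unit mean score
    resid = S - Sbar[:, None, :]
    H = (np.einsum('i,ij,ijde->de', w_rep, w_star, S_dot)
         + np.einsum('i,ij,ijd,ije->de', w_rep, w_star, resid, resid))
    g = np.einsum('i,ij,ijd->d', w_rep, w_star, S)
    return theta_hat - np.linalg.solve(H, g)
\end{verbatim}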
\section{NONPARAMETRIC FRACTIONAL IMPUTATION}
\subsection{Fractional Hot Deck Imputation}
\textit{Hot deck imputation} uses observed responses from the sample
as imputed values. The unit with a missing value is called the\textit{
recipient} and the unit providing the value for the imputation is
called the \textit{donor}. \citet*{durrant2009imputation}, \citet*{haziza2009imputation}
and \citet*{andridge2010review} provided comprehensive overviews of
hot deck imputation in survey sampling. The attractive
features of hot deck imputation include the following.
First, unlike model-based imputation methods that generate artificial imputed values,
in hot deck imputation, only plausible values can be imputed, and
therefore distributional properties of the data are preserved. For
example, imputed values for categorical variables will also be categorical,
as observed from the respondents.
Second, compared to fully parametric methods, hot deck imputation makes
few or no distributional assumptions and is therefore more robust. For these reasons, hot deck imputation
is a widely used imputation method, especially in household surveys.
\textit{Fractional hot deck imputation} (FHDI) combines the ideas of FI and
hot deck imputation. It is efficient (due
to FI), and it inherits the aforementioned good properties of hot
deck imputation. \citet*{kim2004fractional}, \citet*{fuller2005hot},
and \citet*{Kim2014Fractionalhotdeck} considered FHDI for univariate missing data.
We now describe a multivariate
FHDI procedure to deal with missing data with an arbitrary missing pattern
(\citealt{im2015}).
We first consider categorical data. Let $\mathbf{z}=(z_{1},\ldots,z_{K})$
be the vector of study variables that take categorical values. Let
$\mathbf{z}_{i}=(z_{i1},\ldots,z_{iK})$ be the $i$-th realization
of $\mathbf{z}$. Let $\delta_{ij}$ be the response indicator variable
for $z_{ij}$. That is, $\delta_{ij}=1$ if $z_{ij}$ is observed and $\delta_{ij}=0$ otherwise.
Assume that the response mechanism is MAR. Based on
$\delta_{i}=(\delta_{i1},\dots,\delta_{iK})$,
the original observation $\mathbf{z}_{i}$ can be decomposed into
$(z_{obs,i},z_{mis,i})$, which are the observed and missing parts
of $\mathbf{z}_{i}$, respectively. Let $D_{i}=\{z_{mis,i}^{*(1)},\ldots,z_{mis,i}^{*(M_{i})}\}$
be the set of all possible values of $z_{mis,i}$; that is, $(z_{obs,i},z_{mis,i}^{*(j)})$
is one of the values actually observed among the respondents, for $j=1,\ldots,M_{i}$,
with $M_{i}>0$. If all of $M_{i}$ possible values are taken as the
imputed values for $z_{mis,i}$, the fractional weight assigned to
the $j$-th imputed value $z_{mis,i}^{*(j)}$ is
\begin{equation}
w_{ij}^{*}=\frac{\pi(z_{obs,i},z_{mis,i}^{*(j)})}{\sum_{k \in D_i}\pi(z_{obs,i},z_{mis,i}^{*(k)})},\label{eq:fhdi weights}
\end{equation}
where $\pi(\mathbf{z})$ is the joint probability of $\mathbf{z}$. If the joint probability
is nonparametrically modeled, it is computed by
\begin{equation}
\pi(\mathbf{z})=\frac{\sum_{i\in A}w_{i}\sum_{j\in D_{i}}w_{ij}^{*}I\{(z_{obs,i},z_{mis,i}^{*(j)})=\mathbf{z}\}}{\sum_{i\in A}w_{i}},\label{eq:nonparametric}
\end{equation}
where $z_{mis,i}^{*(j)}\equiv z_{mis,i}$
and $w_{ij}^{*}=M_{i}^{-1}$, for $j=1,\ldots,M_{i}$, if $\mathbf{z}_{i}$ is completely observed. To compute
(\ref{eq:fhdi weights}) and (\ref{eq:nonparametric}), the EM algorithm by weighting (\citealt{Ibrahim90})
can be used, with the initial values of the fractional weights being $w_{ij(0)}^{*}=M_{i}^{-1}$. Equations
(\ref{eq:fhdi weights}) and (\ref{eq:nonparametric}) correspond
to the E-step and M-step of the EM algorithm, respectively. The M-step
(\ref{eq:nonparametric}) can be changed if there is a parametric
model for the joint probability $\pi(\mathbf{z})$. For example, if the joint probability
can be modeled by a multinomial distribution with parameter $\alpha$, say $\pi(\mathbf{z};\alpha)$,
then the M-step replaces (\ref{eq:nonparametric}) with solving the imputed
score equation of $\alpha$ to update the estimate of $\alpha$.
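A brute-force sketch of this EM algorithm by weighting for categorical data is given below; it assumes that every missing pattern has at least one matching donor cell among the full respondents, and all names are hypothetical.
\begin{verbatim}
import numpy as np

def fhdi_em_categorical(z, w, n_iter=50):
    # z: (n, K) integer-coded data, with -1 marking a missing entry;
    # w: (n,) sampling weights.  Returns {cell tuple: probability}.
    full = z[(z >= 0).all(axis=1)]
    donors = [tuple(row) for row in np.unique(full, axis=0)]
    pi = {c: 1.0 / len(donors) for c in donors}
    # Candidate cells D_i: donor cells matching unit i's observed items.
    cands = [[c for c in donors
              if all(zi == ci for zi, ci in zip(row, c) if zi >= 0)]
             for row in z]
    for _ in range(n_iter):
        totals = {c: 0.0 for c in donors}
        for row_cands, wi in zip(cands, w):
            p = np.array([pi[c] for c in row_cands])
            p = p / p.sum()           # E-step: fractional weights
            for c, pj in zip(row_cands, p):
                totals[c] += wi * pj  # M-step accumulation
        s = sum(totals.values())
        pi = {c: t / s for c, t in totals.items()}
    return pi
\end{verbatim}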
For continuous data $\mathbf{y}=(y_{1},\ldots,y_{K})$, we consider
a discrete approximation. Discretize each continuous variable by dividing
its range into a small finite number of segments (for example, quantiles).
Let $z_{ik}$ denote the discrete version of $y_{ik}.$ Note that
$z_{ik}$ is observed only if $y_{ik}$ is observed. Let the support
of $\mathbf{z}$, denoted by $\{\mathbf{z}_{1},\ldots,\mathbf{z}_{G}\}$, be taken as
the sample support of $\mathbf{z}$ among the full respondents; these support points specify the donor cells. The joint probability of $\mathbf{z}$,
denoted by $\pi(\mathbf{z}_{g})$, for $g=1,\ldots,G$, can be obtained
by the EM algorithm for categorical missing data as described above.
As in the categorical missing data problem, let $D_{i}=\{z_{mis,i}^{*(1)},\ldots,z_{mis,i}^{*(M_{i})}\}$
be the set of all possible values of $z_{mis,i}$.
Using a finite mixture model, a nonparametric approximation of $f(y_{mis,i}\mid y_{obs,i})$
is
\begin{equation}
f(y_{mis,i}\mid y_{obs,i})\approx\sum_{j=1}^{M_{i}}P(\mathbf{z}= \mathbf{z}_{i}^{*(j)}\mid y_{obs,i})f(y_{mis,i}\mid \mathbf{z}_{i}^{*(j)}).
\label{17}
\end{equation}
Each $\mathbf{z}_{i}^{*(j)}=(z_{obs,i},z_{mis,i}^{*(j)})$ defines
an imputation cell. The approximation in (\ref{17}) is based on the assumption that
\begin{equation}
P( y_{mis} \mid y_{obs}, \mathbf{z} ) \cong P( y_{mis} \mid \mathbf{z} ),
\label{18}
\end{equation}
which requires (approximate) conditional independence between $y_{mis}$ and $y_{obs}$ given $\mathbf{z}$. Thus, we assume that the covariance structure between items is captured by the discrete approximation and that the within-cell errors can safely be assumed to be independent. Once the imputation cells are formed to satisfy (\ref{18}), we select $m_{g}$
imputed values for $y_{mis,i}$, denoted by $\mathbf{y}_{i}^{*(j)}=(y_{obs,i},y_{mis,i}^{*(j)})$,
for $j=1,\ldots,m_{g}$, randomly from the full respondents
in the same cell, with the selection probability proportional to the
sampling weights. The final fractional weight assigned to
$\mathbf{y}_{i}^{*(j)}$ is $w_{ij}^{*}=\hat{P}(z_{mis,i}^{*(j)}\mid y_{obs,i})m_{g}^{-1}$.
This FHDI procedure resembles a two-phase stratified sampling (\citealt{rao1973double}, \citealt{kim06b}),
where forming the imputation cells corresponds to stratification (phase one) and conducting hot deck imputation corresponds to stratified sampling (phase two). For more details, see \citet*{im2015}.
If we select all possible donors in the same cell, the resulting FI estimator is fully efficient in the sense that it does not introduce additional randomness due to hot deck imputation. Such fractional hot deck imputation is called fully efficient fractional imputation (FEFI). The FEFI option is currently available in PROC SURVEYIMPUTE in SAS (\citealt{SAS2015}).
\subsection{Nonparametric Fractional Imputation Using Kernels}
In real-data applications, nonparametric methods are preferred when little is known about the true underlying data model.
Hot deck imputation makes few or no distributional assumptions and therefore
is more robust than fully parametric methods. In what follows, we discuss
an alternative way of calculating the fractional weights that
links the FI estimator to some well-known nonparametric estimators, such as
Nadaraya-Watson kernel regression estimator (\citealt{nadaraya1964estimating}).
For simplicity, suppose we have bivariate data $(x_{i},y_{i})$ where
$x_{i}$ is completely observed and $y_{i}$ is subject to missingness.
Assume the missing data mechanism is MAR. Let $\delta_{i}$ be the
response indicator that takes the value one if $y_i$ is observed and takes zero otherwise. We are interested in estimating $\eta$, which is defined through $ E\{ U( \eta; X, Y) \}=0$. Let $A_R =\{ i \in A; \delta_i = 1 \}$ be the index set of respondents.
To calculate the conditional estimating equation (\ref{eq:condEE}) nonparametrically, we use the following fractional imputation: for each unit $i$ with $\delta_i=0$, $r = \left| A_R \right|$ imputed values of $y_i$
are taken from $A_R$, denoted by $y_i^{*(1)}, \cdots, y_i^{*(r)}$, and we compute the kernel-based
fractional weights
$w_{ij}^{*}= K_h (x_{i}-x_i^{*(j)})/\sum_{k\in A_R} K_h( x_{i}-x_i^{*(k)} ) $, where $K_h( \cdot) $ is the kernel function with bandwidth $h$ and $x_{i}^{*(j)}$ is the covariate
associated with $y_{i}^{*(j)}$.
The resulting FI estimating equation can be written as
\begin{equation}
\sum_{i \in A} w_i \left\{ \delta_i U( \eta; x_i, y_i) + (1-\delta_i) \sum_{j\in A_R} w_{ij}^* U( \eta; x_i, y_i^{*(j)}) \right\} =0,
\label{19b}
\end{equation}
where the nonparametric
fractional
weights measure the degrees of similarity based on the distance between
$x_{i}$ and $x_{i}^{*(j)}$.
The FI estimator uses $\hat{U}( \eta; x_{i})\equiv\sum_{j\in A_R}w_{ij}^{*} U( \eta; x_i, y_i^{*(j)} ) $
to approximate $E\{ U( \eta; x_i, y_{i}) \mid x_i \}$ nonparametrically. For fixed $\eta$,
$
\hat{U}( \eta; x_{i})$ is often
called the \textit{Nadaraya-Watson kernel regression estimator} of $E\{ U( \eta; x_i, y_{i}) \mid x_i\}$
in the nonparametric estimation framework. Note that this FI estimator does not
rely on any parametric model assumptions and so is nonparametric; however, it is not assumption-free, because it
makes an implicit assumption of continuity of $E\{ U(\eta;x,y)\mid x\}$ through the choice of kernels used to define
the ``similarity'' (\citealt{nadaraya1964estimating}). Notably, while the convergence of $\hat{U}( \eta; x_{i})$ to $E\{ U( \eta; x_i, y_{i}) \mid x_i\}$ is slower than $O_p(1/\sqrt{n})$, the solution $\hat{\eta}_{FI}$ to (\ref{19b}) satisfies $\hat{\eta}_{FI} - \eta = O_p(1/\sqrt{n})$ under some regularity conditions, as proved by \citet*{wang2009empirical} in the IID setup.
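A minimal sketch of the kernel-based fractional weights, using a Gaussian kernel as one possible choice of $K_{h}$:
\begin{verbatim}
import numpy as np

def kernel_fractional_weights(x_i, x_donors, h):
    # w*_ij = K_h(x_i - x_j) / sum_k K_h(x_i - x_k) over respondents.
    u = (x_i - x_donors) / h
    k = np.exp(-0.5 * u ** 2)  # Gaussian kernel (unnormalized)
    return k / k.sum()

# Imputed conditional estimating function for U(eta; x, y) = y - eta:
def u_hat(x_i, x_donors, y_donors, eta, h=0.5):
    w = kernel_fractional_weights(x_i, x_donors, h)
    return np.sum(w * (y_donors - eta))
\end{verbatim}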
Such kernel-based nonparametric fractional imputation is directly applicable to complex survey sampling scenarios.
More developments are expected by coupling FI with other nonparametric methods such as those using the nearest neighbor imputation method (\citealt{chen2001jackknife};
\citealt{kitamura2009variance}; \citealt{kim2011variance}) or predictive mean matching
(\citealt{vink2014predictive}).
\section{SYNTHETIC DATA IMPUTATION}
\label{proj3}
Synthetic imputation is a technique of creating imputed values for the unobserved items by incorporating information from other surveys.
For example, suppose that there are two independent surveys, called Survey 1 and Survey 2, and we observe $x_i$ from Survey 1 and $(x_i,y_i)$ from Survey 2. In this case, we may want to create synthetic values of $y_i$ in Survey 1 by first fitting a model relating $y$ to $x$ to the data from Survey 2 and then predicting the $y$ associated with each $x$ observed in Survey 1. Synthetic imputation is particularly useful when Survey $1$ is a large scale survey and item $y$ is very expensive to measure. \citet*{schenker07} reported several applications of synthetic imputation, using a model-based method to estimate parameters associated with variables not observed in Survey 1 but observed in a much smaller Survey 2. In one application, both self-reported health measurements $x_i$ and clinical measurements from physical examinations $y_i$ were observed for a small sample $A_2$ of individuals. In the much larger Survey 1, only the self-reported measurements $x_i$ were observed. Only the imputed or synthetic data from Survey 1 and the associated survey weights were released to the public.
The setup of two independent samples with common items is often called non-nested two-phase sampling. Two-phase sampling can be treated as a missing data problem, where the missingness is planned and the response probability is known.
\subsection{Fractional Imputation for Two-phase Sampling}
In two-phase sampling, suppose
we observe ${x}_i$ in the first-phase sample and observe $({x}_i, y_i)$ in the second-phase sample, where the second-phase sample
is not necessarily nested within the first-phase sample. Let $A_1$ and $w_{i1}$ be the set of indices and the set of sampling weights for the first-phase sample, respectively. Let $A_2$ and $w_{i2}$ be the corresponding sets for the second-phase sample. Assume a ``working'' model $m({x}_i; \beta)$ for $E(y \mid {x}_i)$.
For estimation of the population total of $y$, the two-phase regression estimator can be written as
\begin{equation}
\hat{Y}_{tp} = \sum_{i \in A_1} w_{i1} m ( {x}_i; \hat{\beta} ) +
\sum_{i \in A_2 } w_{i2} \{ y_i -m ( {x}_i; \hat{\beta} ) \},
\label{2-4-1}
\end{equation}
where the subscript "tp" stands for "two-phase", and $\hat{\beta}$ is estimated from the second-phase sample. The two-phase regression estimator is efficient if the working model is well-specified. The first term of (\ref{2-4-1}) is called the projection estimator. Note that if the second term of (\ref{2-4-1}) is equal to zero, the two-phase regression estimator is equivalent to the projection estimator.
Some asymptotic properties of the two-phase estimator and variance estimation methods have been discussed in \citet*{kim06b}, and \citet*{kimyu11b}. \citet*{kimrao12} discussed asymptotic properties of the projection estimator under non-nested two-phase sampling.
In a large scale survey, it is a common practice to produce estimates for domains. Creating an imputed data set for the first-phase sample, often called mass imputation, is one method for incorporating the second-phase information into the first-phase sample. \citet*{breidt96}
discussed the possibility of using imputation to get improved estimates for domains. \citet*{fuller03} investigated mass imputation in the context of two-phase sampling.
The FI procedure can be used to obtain the two-phase regression estimator in (\ref{2-4-1}) and, at the same time, improve domain estimation.
Note that the two-phase regression estimator (\ref{2-4-1}) can be written as
\begin{equation}
\hat{Y}_{FEFI} = \sum_{i \in A_1} \sum_{j \in A_2} w_{i1} w_{ij}^* y_{i}^{*(j)},
\label{2-4-2}
\end{equation}
where $y_{i}^{*(j)} = \hat{y}_i + \hat{e}_j $, $\hat{y}_i= m ( {x}_i; \hat{\beta} )$, $\hat{e}_j = y_j - \hat{y}_j $,
$ w_{ij}^* = w_{j2}/(\sum_{k \in A_2} w_{k2} ) ,$
and we assume $\sum_{i \in A_{1} } w_{i1} = \sum_{i \in A_2} w_{i2} $. The expression (\ref{2-4-2}) implies that we impute all the elements in the first-phase sample, including the elements that also belong to the second-phase sample. The estimator (\ref{2-4-2}) is computed using an augmented data set of $n_1 \times n_2$ records, where $n_1$ and $n_2$ are the sizes of $A_1$ and $A_2$, respectively, and the $\left(i,j \right)$-th record has an (imputed) observation $y_{i}^{*(j)} = \hat{y}_i+ \hat{e}_j$ with weight $w_{i1} w_{ij}^* $. That is, for each unit $i \in A_1$, we impute $n_2$ values of $y_{i}^{*(j)}$ with fractional weight $w_{ij}^*$. The method in (\ref{2-4-2}) imputes all the elements in $A_2$ and is called fully efficient fractional imputation (FEFI) method, according to \citet*{fuller2005hot}. The FEFI estimator is algebraically equivalent to the two-phase regression estimator of the population total of $y$, and can also provide consistent estimates for other parameters such as population quantiles.
If it is desirable to limit the number of imputations to a small value $m$ ($m<n_2$), FI using the regression weighting method in \citet*{fuller2005hot} can be adopted. We first select $m$ values of $y_{i}^{*(j)}$, denoted by $y_{i}^{**(1)}, \cdots, y_{i}^{**(m)}$, among the set of $n_2$ imputed values $\{ y_{i}^{*(j)}; j \in A_2 \}$ using an efficient sampling method. The fractional weights $\tilde{w}_{ij}^*$ assigned to the selected $y_{i}^{**(j)}$ are determined so that
\begin{equation}
\sum_{j=1}^m \tilde{w}_{ij}^{*} \left( 1 , y_{i}^{**(j)} \right) = \sum_{j \in A_2} w_{ij}^* \left( 1, y_{i}^{*(j)} \right)
\label{2-4-3}
\end{equation}
holds
for each $i \in A_1$.
The fractional weights satisfying (\ref{2-4-3}) can be computed using the regression weighting method or the empirical likelihood method; see Section 7.1 for details. The resulting FI data $ y_{i}^{**(j)}$ with weights $w_{i1} \tilde{w}_{ij}^{*} $ are constructed with $n_1 \times m$ records, which integrate the available information from the two phases. Replication variance estimation with FI, similar to \citet*{fuller2005hot}, can be developed. See Section 8.7 of \citet*{kim2013statistical}.
\subsection{Fractional Imputation for Statistical Matching}
Statistical matching is used to integrate two or more data sets when information available for matching records for individual participants across data sets is incomplete. Statistical matching can be viewed as a missing data problem where a researcher wants to perform a joint analysis of variables not jointly observed. Statistical matching techniques can be used to construct fully augmented data files to enable statistically valid data analysis.
\begin{table}[!hbtp]
\caption{\label{table9.1} A Simple Data Structure for Matching}
\begin{center}
\begin{tabular}{rccc}
\hline%
& $X$ & $Y_1$ & $Y_2$ \\
\cline{2-4}
Sample A & o & o & \\
Sample B & o & & o \\
\hline
\end{tabular}
\end{center}
\end{table}
To simplify the setup, suppose that there are two surveys, Survey A and Survey B, each containing a random sample with partial information about the population. Suppose that we observe $x$ and $y_1$ from the Survey A sample and observe $x$ and $y_2$ from the Survey B sample.
Table \ref{table9.1} illustrates a simple data structure for matching.
Without loss of generality, consider imputing $y_1$ in Survey B, since imputing $y_2$ in Survey A is symmetric.
Under this setup, we can use FI to generate $y_1$ from the conditional distribution of $y_1$ given the observations. That is, we generate $y_1$ from
\begin{equation}
f\left( y_1 \mid x, y_2 \right) \propto f \left( y_2 \mid x, y_1 \right) f \left( y_1 \mid x \right) .
\label{9-2}
\end{equation}
Of note, assumptions are needed to identify the parameters in the joint model. For example,
\citet*{kimberg2015} used an instrumental variable assumption to identify the model.
To generate $y_1$ from (\ref{9-2}), the EM algorithm by FI can be used.
For more details, see \citet*{kimberg2015}.
\section{FRACTIONAL IMPUTATION VARIANTS}
\subsection{The Choice of $M$ and Calibration Fractional Imputation}
The choice of the imputation size $M$ involves a tradeoff between statistical efficiency
and computational efficiency: a small $M$ may lead to large variability
in the Monte Carlo approximation, whereas a large $M$ increases the computational
cost. The magnitude of the imputation error is usually $O(1/\sqrt{M})$ and
can thus be reduced by increasing $M$. Hence, if computational power allows,
the larger $M$, the better.
In survey practice, a large imputation size
may not be desirable. Thus, instead of releasing a large number
of imputed values for each missing item to the public, a subset of the initial imputed
values can be selected to reduce the imputation size.
In this case, the FI procedure can be developed in three stages. The
first stage, called \textit{Fully Efficient Fractional Imputation} (FEFI),
computes the pseudo MLE of parameters in the superpopulation
model with sufficiently large imputation size $M$, say $M=1,000$.
The second stage
is the \textit{Sampling} Stage, which selects small $m$ (say, $m=10$) imputed values from the set
of $M$ imputed values. The third stage is \textit{Calibration
Weighting}, which involves constructing the final fractional weights
for the $m$ final imputed values to satisfy some calibration constraints. This procedure can be called \textit{Calibration
FI}.
The FEFI step is the same as in the previous section. In what follows,
we describe the last two stages in detail. In the Sampling Stage,
a subset of imputed values is selected to reduce the imputation size.
For each $i$, we have $M$ imputed values $y_{ij}^{*}=(y_{obs,i},y_{mis,i}^{*(j)})$
with their fractional weights $w_{ij}^{*}$. We treat $\mathbf{y}_{i}^{*}=\{y_{ij}^{*},j=1,\ldots,M\}$
as a weighted finite population with weight $w_{ij}^{*}$ and use
an unequal probability sampling method such as probability-proportional-to-size (PPS) sampling to select a sample of size $m$, say $m=10$,
from $\mathbf{y}_{i}^{*}$ using $w_{ij}^{*}$ as the selection probability.
Let $\tilde{y}_{i1}^{*},\ldots,\tilde{y}_{im}^{*}$ be the $m$ elements
sampled from $\mathbf{y}_{i}^{*}$.
The initial fractional weights for the sampled $m$ imputed values are given
by $\tilde{w}_{ij0}^{*}=m^{-1}.$ This set of fractional weights may
not necessarily satisfy the imputed score equation
\begin{equation}
\sum_{i\in A}w_{i}\sum_{j=1}^{m}\tilde{w}_{ij}^{*}S(\hat{\theta};\tilde{y}_{ij}^{*})=0,\label{eq:CalibrationFI}
\end{equation}
where $\hat{\theta}$ is the pseudo MLE of $\theta$ computed at the
FEFI stage. It is desirable for the solution to the imputed score equation with small $m$ to be equal to the pseudo MLE of $\theta$, which specifies the calibration constraints. At the Calibration Weighting
stage, the initial set of weights are modified to satisfy the constraint
(\ref{eq:CalibrationFI}). Finding the calibrated fractional weights
can be achieved by the regression weighting technique, by which the
fractional weights are constructed to satisfy (\ref{eq:CalibrationFI}) and $\sum_{j=1}^{m}\tilde{w}_{ij}^{*}=1$. The regression fractional weights
are constructed by
\begin{equation}
\tilde{w}_{ij}^{*}=\tilde{w}_{ij0}^{*}+\tilde{w}_{ij0}^{*}\Delta(S_{ij}^{*}-\bar{S}_{i}^{*}),\label{eq:calibration weights}
\end{equation}
where $S_{ij}^{*}=S(\hat{\theta};y_{ij}^{*})$, $\bar{S}_{i}^{*}=\sum_{j=1}^{m}\tilde{w}_{ij0}^{*}S_{ij}^{*},$ and
\[
\Delta=-\left\{\sum_{i\in A}w_{i}\sum_{j=1}^{m}\tilde{w}_{ij0}^{*}S_{ij}^{*}\right\}^{T}\left\{\sum_{i\in A}w_{i}\sum_{j=1}^{m}\tilde{w}_{ij0}^{*}(S_{ij}^{*}-\bar{S}_{i}^{*})^{\otimes 2}\right\}^{-1}.
\]
Note that some of the fractional weights computed by (\ref{eq:calibration weights})
can take negative values. To avoid negative weights, alternative
algorithms other than regression weighting should be used. For
example, the fractional weights of the form
\[
\tilde{w}_{ij}^{*}=\frac{\tilde{w}_{ij0}^{*}\exp(\Delta S_{ij}^{*})}{\sum_{k=1}^{m}\tilde{w}_{ik0}^{*}\exp(\Delta S_{ik}^{*})}
\]
are approximately equal to the regression fractional weights in (\ref{eq:calibration weights})
and are always positive.
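A sketch of the Calibration Weighting stage based on (\ref{eq:calibration weights}), assuming the scores $S(\hat{\theta};\tilde{y}_{ij}^{*})$ have been evaluated beforehand and stored as an array:
\begin{verbatim}
import numpy as np

def calibrated_fractional_weights(w, w0, S):
    # Regression fractional weights for calibration FI.
    # w:  (n,) sampling weights; w0: (n, m) initial weights (1/m each);
    # S:  (n, m, d) scores S(theta_hat; y~*_ij).
    Sbar = np.einsum('ij,ijd->id', w0, S)        # within-unit means
    resid = S - Sbar[:, None, :]
    num = np.einsum('i,ij,ijd->d', w, w0, S)
    den = np.einsum('i,ij,ijd,ije->de', w, w0, resid, resid)
    delta = -np.linalg.solve(den.T, num)         # Delta, as a vector
    return w0 * (1.0 + np.einsum('d,ijd->ij', delta, resid))
\end{verbatim}
Replacing the last line with the exponential form given above yields weights that are always positive.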
\subsection{The Choice of the Proposal Distribution\label{sub:Choice of h}}
PFI is based on sampling from an importance sampling density $h$
called the \textit{proposal distribution}. The choice of the proposal
distribution is somewhat arbitrary. However, with finite
samples and imputations, a well-specified proposal distribution may
improve the performance of the imputation estimator. There are a number
of ways to specify the proposal distribution and to assess the goodness
of specification.
For a planned parameter, e.g., $\eta$ being the population mean of $y$,
\citet*{kim11} showed that the optimal $h^{*}$, which makes the
Monte Carlo approximation variance of $\bar{y}_{i}^{*}\equiv\sum_{j=1}^{M}w_{ij}^{*}y_{ij}^{*}$
as small as possible, is given by
\[
h^{*}(y_{mis,i}|y_{obs,i})=f(y_{mis,i}|y_{obs,i},\hat{\theta})\times\frac{|y_{i}-\mathbb{E}\{y_{i}|y_{obs,i},\hat{\theta}\}|}{\mathbb{E}\{|y_{i}-\mathbb{E}(y_{i}|y_{obs,i},\hat{\theta})|\mid y_{obs,i},\hat{\theta}\}},
\]
where $\hat{\theta}$ is the MLE of $\theta$.
For general-purpose estimation, $\eta$ is often unknown at the time of
imputation. According to \citet*{fay1992inferences}, $h(y_{mis,i}|y_{obs,i})=f(y_{mis,i}|y_{obs,i};\hat{\theta})$
is then a reasonable choice in terms of statistical efficiency. For importance
sampling, since we do not know $\hat{\theta}$ at the outset of the EM algorithm,
we may want to have a good initial guess $\theta_{0}$ and use $h(y_{mis,i}|x_{i},y_{obs,i})=f(y_{mis,i}|x_{i},y_{obs,i};\theta_{0})$. If we do not have a good initial guess of the true value of
$\theta$, we can use a prior distribution $\pi(\theta)$ to obtain $h(y_{mis,i}|y_{obs,i})=\int f(y_{mis,i}|y_{obs,i};\theta)\pi(\theta)d\theta$.
We now discuss a special choice of the proposal distribution
$h$, based on the realized values of the variables having
missing values, which is akin to hot deck imputation. Without loss of generality, assume that $y_i$ is observed in the first $r$ elements, $y_i$ is missing in the remaining $(n-r)$ elements, and $x_i$ is completely observed in the sample.
Using the importance
sampling idea, we assign a fractional weight to donor $y_{j}$
($1\leq j\leq r$) for the missing item $y_{i}$ ($r+1\leq i\leq n$)
by choosing $h(y_{j})=f(y_{j}\mid\delta_{j}=1)$. In calculating
the fractional weights, we approximate $f(y_{j}\mid\delta_{j}=1)$
by its empirical approximation $r^{-1}\sum_{k=1}^{n}\delta_{k}f\left(y_{j}\mid x_{k}\right)$,
where $r$ is the number of respondents. The EM algorithm takes
the following steps:
\begin{description}
\item [\textit{{I-step}}] For each missing value $y_{i}$, $i=r+1,\ldots,n$, take
all values in $A_{R}=\{y_{1},\ldots,y_{r}\}$ as donors.
\item [\textit{{W-step}}] With the current estimate of $\theta$, denoted by $\hat{\theta}_{(t)}$, compute
the fractional weights by
\begin{equation}
w_{ij(t)}^{*}\propto\frac{f(y_{j}\mid x_{i};\hat{\theta}_{(t)})}{\sum_{k\in A_{R}}w_{k}f(y_{j}\mid x_{k};\hat{\theta}_{(t)})}\label{fwgt}
\end{equation}
\item [\textit{{M-step}}] Update the parameter $\hat{\theta}_{(t+1)}$ by solving
the following imputed score equation,
\[
\hat{\theta}_{(t+1)}:\text{solution to }
\sum_{i=1}^rS(\theta;x_i,y_i)+\sum_{i=r+1}^n\sum_{j=1}^rw_{ij(t)}^{*}S(\theta;x_i,y_j)=0.
\]
\item [\textit{{Iteration}}] Set $t=t+1$ and go to the W-step. Stop if $\hat{\theta}_{(t+1)}$
meets the convergence criterion.
\end{description}
The semiparametric fractional imputation (SFI) estimator of $\bar{Y}$ is
\[
\hat{\bar{Y}}_{SFI}=\frac{1}{n}\left\{ \sum_{i=1}^{r}y_{i}+\sum_{i=r+1}^{n}\sum_{j=1}^{r}w_{ij}^{*}y_{j}\right\} .
\]
\citet*{Kim2014Fractionalhotdeck} showed that the resulting estimator gains robustness: it is less sensitive to
departures from the assumed conditional regression model.
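A compact sketch of this hot-deck-type EM and the resulting SFI estimator, assuming (purely for illustration) the working model $y\mid x\sim N(\beta_{0}+\beta_{1}x,1)$ and equal weights:
\begin{verbatim}
import numpy as np

def normal_pdf(y, mean):
    return np.exp(-0.5 * (y - mean) ** 2) / np.sqrt(2 * np.pi)

def sfi_estimate(x, y, delta, n_iter=50):
    # Semiparametric FI with all respondents as donors;
    # illustrative model: y | x ~ N(b0 + b1*x, 1), unit weights.
    xr, yr = x[delta == 1], y[delta == 1]
    xm = x[delta == 0]
    b = np.polyfit(xr, yr, 1)[::-1]   # initial (b0, b1) from respondents
    for _ in range(n_iter):
        # W-step: w*_ij prop. to f(y_j|x_i) / sum_k f(y_j|x_k)
        num = normal_pdf(yr[None, :], b[0] + b[1] * xm[:, None])
        den = normal_pdf(yr[None, :], b[0] + b[1] * xr[:, None]).sum(axis=0)
        w = num / den
        w /= w.sum(axis=1, keepdims=True)
        # M-step: weighted least squares over observed + imputed pairs
        X = np.concatenate([xr, np.repeat(xm, len(yr))])
        Y = np.concatenate([yr, np.tile(yr, len(xm))])
        W = np.concatenate([np.ones_like(yr), w.ravel()])
        A = np.column_stack([np.ones_like(X), X])
        b = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * Y))
    y_bar = (yr.sum() + (w * yr[None, :]).sum()) / len(x)  # SFI estimator
    return y_bar, b
\end{verbatim}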
\subsection{Doubly Robust Fractional Imputation}
Suppose we have bivariate data $(x_{i},y_{i})$, where $x_{i}$ is
completely observed, $y_{i}$ is subject to missingness, and the missing data
mechanism is MAR.
Assume also an outcome regression
(OR) model, given by $E(y_{i}\mid x_{i})=m(x_{i};\beta_{0})$, and
the response propensity (RP) model, given by $P(\delta_{i}=1\mid x_{i},y_{i})=P(\delta_{i}=1\mid x_{i})=\pi(x_{i};\phi_{0})$.
Denote the set of respondents as $A_{R}=\{i:\delta_{i}=1\}$,
where $\delta_{i}$ is the response indicator of $y_i$. We are interested in
the population total $\eta = \sum_{i=1}^N y_i$.
Note that the OR and RP models are not both needed to construct consistent estimators of $\eta$. For example, $\hat{\eta}_1=\sum_{i \in A} w_i m(x_i;\hat{\beta})$, with $\hat{\beta}$ being a consistent
estimator of $\beta_0$, is consistent to $\eta$ under the OR model and $\hat{\eta}_2= \sum_{i\in A_R} w_i y_i/\pi(x_i;\hat{\phi})$, with
$\hat{\phi}$ being a consistent
estimator of $\phi_0$, is consistent to $\eta$ under the RP model.
An estimator of $\eta$ is doubly robust (DR) if it is consistent when either
the OR model or the RP model is correctly specified, but not necessarily both.
This property guards the estimator from possible model
misspecifications. The DR estimators have been extensively studied
in the literature, including \citet*{robins1994estimation}, \citet*{bang2005doubly}, \citet*{tan2006distributional},
\citet*{kang2007demystifying}, \citet*{cao2009improving}, and \citet*{kim2013doubly}.
We now discuss a fractional imputation estimator that has the double
robustness feature.
For each missing $y_{i}$, let $y_{ij}^{*}=\hat{y}_i+\hat{e}_{j}$ be the $j$-th imputed value from the donor $j\in A_{R}$, where
$\hat{y}_i=m(x_{i};\hat{\beta})$ with $\hat{\beta}$ fitted under the OR model and $\hat{e}_{j}=y_{j}-m(x_{j};\hat{\beta})$.
If $\sum_{ j \in A_{R}} w_j/\pi(x_{j};\hat{\phi})= \sum_{ i \in A} w_i $,
each unit $j\in A_{R}$ represents $1/\pi(x_{j};\hat{\phi})$ copies
of the sample. Then, the fractional weight $w_{ij}^{*}$ associated
with the $j$-th imputed value $y_{ij}^{*}$ is proportional to $\{1/\pi(x_{j};\hat{\phi})-1\}$
over the donor pool $A_{R}$ (minus one because $y_{j}$ itself counts
once), that is,
\begin{equation}
w_{ij}^{*}= \frac{w_j \{ 1/\pi(x_{j};\hat{\phi})-1 \}}{\sum_{k \in A} w_k\delta_{k}\{1/\pi(x_{k};\hat{\phi})-1\}}.\label{eq:dr_weight}
\end{equation}
Under this weight construction, the fractional imputation estimator
is given by
\begin{equation}
\hat{\eta}_{FI}=\sum_{i \in A} w_i \left[ \delta_{i}y_{i}+ (1-\delta_{i})\{\sum_{j=1}^{n}\delta_{j}w_{ij}^{*}y_{ij}^{*}\}\right].\label{eq:DRFI estimator}
\end{equation}
We show that the fractional
imputation estimator $\hat{\eta}_{FI}$ in (\ref{eq:DRFI estimator})
is doubly robust. First notice that $\hat{\eta}_{FI}$ is algebraically
equal to
\begin{equation}
\hat{\eta}_{FI}= \sum_{i \in A} w_i \left[ m(x_{i};\hat{\beta})+ \frac{\delta_{i}}{\pi(x_{i};\hat{\phi})}\{y_{i}-m(x_{i};\hat{\beta})\}\right].\label{eq:DR estimator}
\end{equation}
Let $\hat{\eta}_{n}=\sum_{i \in A} w_i y_{i}$ be the full sample estimator
of $\eta$; then
\[
\hat{\eta}_{FI}-\hat{\eta}_{n}=\sum_{i \in A} w_i \left\{ \frac{\delta_{i}}{\pi(x_{i};\hat{\phi})}-1 \right\}\{y_{i}-m(x_{i};\hat{\beta})\}.
\]
This is an asymptotically unbiased estimator of zero if either the OR model or
the RP model is correct, but not necessarily both. \citet*{kim2013doubly} discussed efficient estimation of $(\beta, \phi)$ in survey sampling.
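A minimal sketch of the doubly robust FI estimator (\ref{eq:DRFI estimator}), assuming that fitted model functions \texttt{m\_hat} and \texttt{pi\_hat} are supplied (their estimation is omitted here):
\begin{verbatim}
import numpy as np

def dr_fi_total(x, y, delta, w, m_hat, pi_hat):
    # Doubly robust FI estimator of the population total of y.
    # m_hat:  callable, fitted OR mean function m(x; beta_hat);
    # pi_hat: callable, fitted RP function pi(x; phi_hat).
    resp = delta == 1
    e_hat = y[resp] - m_hat(x[resp])              # donor residuals
    a = 1.0 / pi_hat(x[resp]) - 1.0
    w_star = w[resp] * a / np.sum(w[resp] * a)    # fractional weights
    imp = m_hat(x[~resp])[:, None] + e_hat[None, :]  # y*_ij
    return (np.sum(w[resp] * y[resp])
            + np.sum(w[~resp][:, None] * w_star[None, :] * imp))
\end{verbatim}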
\section{COMPARISON WITH MULTIPLE IMPUTATION}
\subsection{Statistical Efficiency}
In the presence of missing data under MAR, multiple imputation (MI)
is a popular method. It is thus
of interest to compare the behavior of these two methods. We start from a simple setting
with the complete data $z$ being randomly drawn from a population
whose density is $f(z;\theta)$, where $\theta\in\mathbb{R}^{d}$
is an unknown parameter to be estimated. Suppose that $m$ complete
data sets are created by imputing the missing data $z_{mis}$ from the posterior predictive distribution given the observed
data $z_{obs}$, namely
$f(z_{mis}\mid z_{obs})=\int f(z_{mis}\mid z_{obs};\theta)\pi(\theta\mid z_{obs})d\theta$,
where $\pi(\theta\mid z_{obs})$ is the posterior distribution of
$\theta$. The MI estimator of $\theta$, denoted by $\hat{\theta}_{MI}$,
is
\[
\hat{\theta}_{MI}=m^{-1}\sum_{k=1}^{m}\hat{\theta}^{(k)},
\]
where $\hat{\theta}^{(k)}$ is the MLE applied to the $k$-th
imputed data set. Rubin's formula is used for variance estimation
in MI,
\[
\hat{V}_{MI}(\hat{\theta}_{MI})=W_{m}+(1+m^{-1})B_{m},
\]
where $W_{m}=m^{-1}\sum_{k=1}^{m}\hat{V}^{(k)}$, $B_{m}=(m-1)^{-1}\sum_{k=1}^{m}(\hat{\theta}^{(k)}-\hat{\theta}_{MI})^{\otimes2}$,
and $\hat{V}^{(k)}$ is the variance estimator of $\hat{\theta}$
under complete response applied to the $k$-th imputed data set.
Of note, Bayesian MI is a simulation-based method and thus introduces additional
noise. This explains why the asymptotic variance of the MI estimator,
given by \citet*{wang1998large},
\begin{equation}
V_{MI}=\mathcal{I}_{obs}^{-1}+m^{-1}\mathcal{I}_{com}^{-1}\mathcal{I}_{mis}\mathcal{I}_{com}^{-1}+m^{-1}J^{T}\mathcal{I}_{obs}^{-1}J,\label{eq:V_MI}
\end{equation}
is strictly larger than the asymptotic variance
of the FI estimator
\begin{equation}
V_{FI}=\mathcal{I}_{obs}^{-1}+m^{-1}\mathcal{I}_{com}^{-1}\mathcal{I}_{mis}\mathcal{I}_{com}^{-1}, \label{eq:V_FI}
\end{equation}
where $\mathcal{I}_{com}=E\{S(\theta)^{\otimes2}\}$,
$\mathcal{I}_{obs}=E\{S_{obs}(\theta)^{\otimes2}\}$, $\mathcal{I}_{mis}=\mathcal{I}_{com}-\mathcal{I}_{obs}$,
$S(\theta)=S(Z;\theta)=\partial\log f(Z;\theta)/\partial\theta$ is
the score function of the complete-data log likelihood,
$S_{obs}(\theta)=E\{S(\theta)\mid Z_{obs}\}$ is the score function
of the observed-data log likelihood, and $J=\mathcal{I}_{mis}\mathcal{I}_{com}^{-1}$
is the fraction of missing information matrix (\citealt{rubin1987multiple},
Chapter 4). This difference between (\ref{eq:V_MI}) and (\ref{eq:V_FI})
can be sizable for a small $m$. Furthermore, for a large $m$, although
the MI estimator is efficient, the resulting inference is inefficient,
since Rubin's variance estimator of the MI estimator is only
weakly unbiased; that is, $\hat{V}_{MI}(\hat{\theta}_{MI})$ converges
in distribution, rather than in probability, to $V_{MI}$. This leads to much broader
confidence intervals and less powerful tests than a consistent variance
estimator would yield (\citealt{nielsen2003proper}).
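To see the size of the extra term in (\ref{eq:V_MI}) relative to (\ref{eq:V_FI}), one can evaluate both formulas numerically; the information matrices below are hypothetical.
\begin{verbatim}
import numpy as np

def mi_fi_asymptotic_variances(I_com, I_obs, m):
    # Evaluate V_MI and V_FI for given complete- and observed-data
    # information matrices.
    I_mis = I_com - I_obs
    I_obs_inv = np.linalg.inv(I_obs)
    I_com_inv = np.linalg.inv(I_com)
    J = I_mis @ I_com_inv        # fraction of missing information
    V_FI = I_obs_inv + (I_com_inv @ I_mis @ I_com_inv) / m
    V_MI = V_FI + (J.T @ I_obs_inv @ J) / m
    return V_MI, V_FI

I_com = np.array([[2.0, 0.3], [0.3, 1.5]])   # hypothetical values
I_obs = np.array([[1.2, 0.2], [0.2, 1.0]])
V_MI, V_FI = mi_fi_asymptotic_variances(I_com, I_obs, m=10)
# V_MI - V_FI equals the positive semi-definite term J' I_obs^{-1} J / m
\end{verbatim}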
For MI inference to be valid for general-purpose estimation, imputations
must be proper according to \citet*{rubin1987multiple}. A sufficient condition
is given by \citet*{meng1994multiple}. The so-called congeniality
condition, imposed on both the imputation model and the form of subsequent
complete-sample analyses, is quite restrictive for general-purpose
estimation. Otherwise, as discussed by Fay \citeyearpar{fay1992inferences,fay1996alternative},
\citet*{kott1995paradox}, \citet*{binder1996frequency}, \citet*{robins2000inference},
\citet*{nielsen2003proper}, and \citet{kim2006bias}, the MI variance estimator is not always consistent. \citet{kim2011variance}
pointed out that MI that is congenial for mean estimation is not necessarily
congenial for proportion estimation. \citet*{yang2015mi} showed that
the MI variance estimator can be positively or negatively biased
when the method of moments estimator is used as the complete-sample
estimator. In contrast, FI, as we discussed in section 4, does not require congeniality and
always results in a consistent variance estimator
for general-purpose estimation.
\subsection{Imputation under Informative Sampling}
Under informative sampling, the MAR assumption is subtle. We assume
that the response mechanism is MAR at the population level, now referred
to as population missing at random (PMAR), to be distinguished from the
concept of sample missing at random (SMAR). For simplicity, assume
$y$ is a one-dimensional variable that is subject to missingness, $\delta$
is its response indicator, and $I$ is the sample inclusion indicator. PMAR
assumes that $y\perp\delta\mid x$, that is, MAR holds at the population
level, $f(y\mid x)=f(y\mid x,\delta)$. On the other hand, SMAR assumes
$y\perp\delta\mid(x,I=1)$, that is, MAR holds at the sample level,
$f(y\mid x,I=1)=f(y\mid x,I=1,\delta)$. The two assumptions are not
testable empirically. The plausibility of these assumptions should
be judged by subject matter experts. Often, PMAR is more realistic,
because an individual's decision on whether or not to respond to a
survey depends on his or her own characteristics, rather than
on whether he or she happens to be in the sample.
For a noninformative sampling design, we have
$P(I=1\mid x,y)=P(I=1\mid x)$, under which PMAR
implies SMAR; for an informative sampling design, however, PMAR
does not necessarily imply SMAR. In such cases, using an imputation model fitted to the sample data for generating imputations
can result in biased estimation.
FI requires PMAR but not SMAR. Under PMAR, we have
$f(y\mid x,\delta=0)=f(y\mid x)$. Let $f(y\mid x;\beta)$ be a parametric
model of $f(y\mid x)$. The parameter $\beta$ can be consistently estimated
by solving (\ref{eq:mean-score}), even under informative sampling. Since
FI generates the imputations from $f(y\mid x;\hat{\beta})$, with a consistent estimator $\hat{\beta}$,
the resulting FI estimator is approximately unbiased (\citealt{berg2015}).
In contrast, MI tends to be problematic under informative sampling.
By using an augmented model, in which the imputation model is augmented
to include the sampling weights or some function of them,
as in $f(y\mid x,w)$, the MI point estimator was claimed to be approximately unbiased
(\citealt{rubin1996multiple,schenker2006multiple}). However,
as pointed out by
\citet*{berg2015}, this is not always true. For example, in the setup presented in Figure \ref{figure1}, $Y$ is conditionally independent of $\delta$ given $X$, but $Y$ is not conditionally independent of $\delta$ given $X$ and $I$. Augmenting $X$ with the sampling weights does not solve the problem: the existence of the latent variable $U$, which is correlated with both $I$ and $\delta$, makes SMAR unachievable.
\begin{figure}
\begin{center}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm,
thick,main node/.style={circle, draw,font=\sffamily\Large\bfseries}]
\node[main node] (X) {$X$};
\node[main node] (Y) [left of=X] {$Y$};
\node[main node](U)[right of = X]{$U$};
\node[main node](R)[above left of =X]{$\delta$};
\node[main node](I)[above right of = X]{$I$};
\path[every node/.style={font=\sffamily\small}]
(X) edge node [left] { } (Y)
(X) edge node [left] { } (R)
(X) edge node [left] { } (I)
(Y) edge node [left] { } (I)
(U) edge node [left] { } (I)
(U) edge node [left] { } (R) ;
\end{tikzpicture}
\end{center}
\caption{\label{figure1}A directed acyclic graph (DAG) for a setup where PMAR holds but SMAR does not hold. Variable $U$ is latent in the sense that it is never observed.}
\end{figure}
\section{SIMULATION STUDY }
We investigated the performance of FI compared to MI by a limited
simulation study using an artificial finite population generated from
real survey data. The pseudo finite population was generated from
a single month of the U.S. Census Bureau's Monthly Retail Trade Survey
(MRTS). Each month, the MRTS surveys a sample of about $12,000$ retail
businesses with paid employees to collect data on sales and inventories.
The MRTS is an economic indicator survey whose monthly estimates are
inputs to the Gross Domestic Product estimates. The MRTS sample design
is typical of business surveys, employing one-stage stratified sampling
with stratification based on major industry, further substratified
by the estimated annual sales. The sample design requires higher sampling
rates in strata with larger units than in strata with smaller units.
More details about MRTS can be found in \citet*{mulry2014detecting}.
The original population file contains $19,601$ retail businesses
stratified into $16$ strata, with a strata identifier ($h$), sales
($y$), and inventory values ($x$). For simulation purposes, we focus
on the first $5$ strata as a finite population, consisting of $7,260$
retail businesses. Figure \ref{fig:box plot} shows the scatter plot of
sales and inventory data by strata on a log scale.
We assumed the following superpopulation
model,
\begin{equation}
\log(\text{y}_{hi})=\beta_{0h}+\beta_{1h}\log(x_{hi})+\epsilon_{hi},\label{eq:model1}
\end{equation}
where $\beta_{0h}$ and $\beta_{1h}$ are
strata-specific parameters with $h$ being the strata identifier, and $\epsilon_{hi}\sim N(0,\sigma_{h}^{2})$. To assess the adequacy
of model (\ref{eq:model1}), we made some diagnostic plots.
Figure \ref{fig:model1}
shows the residual plot and the normal Q-Q plot for the fitted model
(\ref{eq:model1}). From the
residual plot, the constant variance assumption of $\epsilon_{hi}$
appears to be reasonable. From the normal Q-Q
plot, the normality assumption of $\epsilon_{hi}$
approximately holds.
\begin{figure}
\begin{centering}
\includegraphics[width=5.5in, height=3.8in, scale=0.5]{plot2.pdf}
\par\end{centering}
\caption{\label{fig:box plot}Scatter plot of log sales and log inventory data by strata}
\end{figure}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.48]{modeldiag1.pdf}
\par\end{centering}
\caption{\label{fig:model1}Regression model of $\log(y)$ against $\log(x)$
and strata indicator }
\end{figure}
To create missingness, we considered a univariate missingness pattern in which only $y$
has missing values. We generated the response indicator $\delta$
of $y$ according to
\[
\delta\sim Bernoulli(\pi),\ \pi=1/[1+\exp\{4-0.3\log(x)\}].
\]
Under this model, the missingness mechanism is MAR and the response rate
is about $0.6$.
The parameters of interest are the stratum mean of $y$, $\eta_{h}=\mu_{h}$
for $1\leq h\leq5$, and the population mean of $y$, $\eta_{6}=\mu$.
The true parameter values are $\eta_{1}=92.25$,
$\eta_{2}=67.90$, $\eta_{3}=18.24$, $\eta_{4}=13.01$, $\eta_{5}=5.92$, and
$\eta_{6}=20.40$. The estimation methods included $(i)$ Full,
the full sample estimator, which is used as a benchmark for comparison,
$(ii)$ MI, the multiple imputation estimator with imputation size
$M=100$, and $(iii)$ PFI, the parametric fractional imputation estimator
with imputation size $M=100$, where the model parameters are estimated
by the pseudo MLE solving the score equation (4).
To generate samples, we considered stratified sampling with simple
random sampling within strata (STSRS) without replacement. Table \ref{tab:The-sample-allocation}
shows strata sizes $N_{h}$, sample sizes $n_{h}$, and sampling weights.
The sampling weights range from $12.57$ to $47.41$.
The samples are generated $2,000$ times.
\begin{table}[h]
\caption{\label{tab:The-sample-allocation}The sample allocation in stratified
simple random sampling. }
\centering{}%
\begin{tabular}{cccccc}
\hline
Strata & 1 & 2 & 3 & 4 & 5\tabularnewline
\hline
Strata size $N_{h}$ & 352 & 566 & 1963 & 2181 & 2198\tabularnewline
Sample size $n_{h}$ & 28 & 32 & 46 & 46 & 48\tabularnewline
Sampling weight & 12.57 & 17.69 & 42.67 & 47.41 & 45.79\tabularnewline
\hline
\end{tabular}
\end{table}
For MI, we considered the imputation models in (\ref{eq:model1}). Because the sampling design is stratified
random sampling and the imputation model includes the stratum indicator
function, the sampling design becomes noninformative. We first imputed
$\log(y)$ from the posterior predictive distribution under (\ref{eq:model1}), given the observed data, and then transformed
the imputations back to the original scale of $y$. The implementation
of MI was carried out by the ``mice'' package in R. In each imputed
data set, we applied the following full-sample point estimators and
variance estimators: $\hat{\eta}_{1}=N^{-1}\sum_{h=1}^{H}N_{h}\bar{y}_{{h}}$
with $\bar{y}_{{h}}$ being the sample mean of $y$ in the $h$-th
stratum $S_{h}$, $\hat{V}(\hat{\eta})=N^{-2}\sum_{h=1}^{H}N_{h}^{2}(1-n_{h}/N_{h})s_{{h}}^{2}/n_{h}$
with $s_{{h}}^{2}=(n_{h}-1)^{-1}\sum_{i\in S_{h}}(y_{hi}-\bar{y}_{{h}})^{2}$. For
PFI, we considered the imputation model in (\ref{eq:model1}). The proposal distribution in
the importance sampling step is the imputation distribution evaluated
at initial parameter values estimated from the available data. In
PFI, for estimating model parameters, we obtained the pseudo MLEs
by solving the score equations weighted by sampling
weights, as in (4). After imputation, $\eta$ was estimated by (5)
by choosing $U$ to be the corresponding estimating function. We
used the delete-1 Jackknife replication method for variance estimation,
\[
\hat{V}_{R}(\hat{\eta})=\sum_{h=1}^{H}\frac{n_{h}-1}{n_{h}}\sum_{i\in S_{h}}(\hat{\eta}^{[i]}-\hat{\eta})^{2},
\]
where $\hat{\eta}^{[i]}$ is computed by omitting unit $i\in S_{h}$
and modifying the weights so that $w_{hj}$ is replaced by $n_{h}w_{hj}/(n_{h}-1)$
for all $j\in S_{h}$ and the weight remains the same for all other
$j$.
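A sketch of this delete-1 jackknife, assuming a user-supplied function that recomputes the point estimate from a weight vector; the weighted-mean estimator in the demonstration is only a stand-in for the full PFI recomputation.
\begin{verbatim}
import numpy as np

def delete1_jackknife(eta_fn, strata, w):
    # Delete-1 jackknife variance under stratified simple random sampling.
    # eta_fn: callable mapping a weight vector to the point estimate.
    eta_hat = eta_fn(w)
    v = 0.0
    for h in np.unique(strata):
        idx = np.flatnonzero(strata == h)
        n_h = len(idx)
        for i in idx:
            w_rep = w.copy()
            w_rep[idx] = w[idx] * n_h / (n_h - 1.0)  # reweight stratum h
            w_rep[i] = 0.0                           # delete unit i
            v += (n_h - 1.0) / n_h * (eta_fn(w_rep) - eta_hat) ** 2
    return v

# Demonstration with a weighted-mean estimator and toy data:
rng = np.random.default_rng(1)
strata = np.repeat([1, 2], [28, 32])
w = np.repeat([12.57, 17.69], [28, 32])
y = rng.normal(size=60)
v_hat = delete1_jackknife(lambda wts: np.sum(wts * y) / np.sum(wts),
                          strata, w)
\end{verbatim}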
Table \ref{tab:Numercal-Results-of} shows the numerical results.
The mean and variance are calculated as the Monte Carlo mean and variance
of the point estimates across the simulated sample data. The relative
bias of the variance estimator is calculated as $\{(ve-var)/var\}\times100\%$,
where $ve$ is the Monte Carlo mean of variance estimates and $var$ is
Monte Carlo variance of point estimates. In addition, $95\%$ confidence
intervals are calculated as $(\hat{\eta}-z_{0.975}\surd\hat{V},\hat{\eta}+z_{0.975}\surd\hat{V})$,
where $z_{0.975}$ is the $97.5\%$ quantile of the standard normal
distribution. The three estimators are essentially unbiased for point
estimation. The variances for PFI and MI are close for all parameters.
However, for inference, the validity of Rubin's variance estimator
relies on the congeniality condition (\citealt{meng1994multiple}),
which holds when MLEs are used as the full-sample estimator in MI,
but not for Method-of-Moments estimators (MMEs)
under MAR (\citealt{yang2015mi}). As shown in Table \ref{tab:Numercal-Results-of},
Rubin's variance estimator of the MI estimator is biased upward for
strata means and the population mean with relative bias $48.06\%$,
$30.53\%$, $23.05\%$, $23.02\%$, $16.96\%$ for $\hat{\mu}_{j,MI}$,
$1\leq j\leq5$ and $32.75\%$ for $\hat{\mu}_{MI}$. Under the log
normal distribution and MAR, the MMEs are not self-efficient and Rubin's
variance estimator is biased, which is consistent with the results
in \citet*{meng1994multiple} and \citet*{yang2015mi}. Among the strata, Stratum 1 has the largest
bias of the variance estimator, followed by Stratum 2, consistent with their smaller sample sizes compared
to the other strata.
In addition, the mean width of the MI confidence intervals is larger than that of FI.
For the population mean, we used the Horvitz\textendash Thompson
(HT) estimator as the full-sample estimator
instead of the MLE under log-normal distribution. It is well-known that
the HT estimator is robust but inefficient, which results in bias
in Rubin's variance estimator. The coverage of $95\%$ confidence interval reaches $98.3\%$ for
the population mean due to variance overestimation. In contrast, PFI
variance estimators applied to the HT estimator are essentially unbiased and provide
empirical coverages close to the nominal level.
\begin{sidewaystable}
\caption{\label{tab:Numercal-Results-of}Numerical Results of Point Estimation
(Mean and Var), Relative Bias (R.B.) of Variance Estimation, Mean Width
and Coverage of $95\%$ Confidence Intervals (C.I.s) under Stratified Simple Random Sampling over $2,000$
Samples. The estimation methods include (i) FULL: the full sample
estimator, (ii) MI: the multiple imputation estimator with imputation
size $M=100$, (iii) PFI, the parametric fractional imputation estimator
with imputation size $M=100$, where the model parameters are obtained
by the pseudo MLE. The parameters are $\eta_{1}=$Stratum 1 mean,
$\eta_{2}=$Stratum 2 mean, $\eta_{3}=$Stratum 3 mean, $\eta_{4}=$Stratum
4 mean, $\eta_{5}=$Stratum 5 mean, $\eta_{6}=$Population mean.}
\centering{}%
\begin{tabular}{cccccccccccccccc}
\hline
& \multicolumn{3}{c}{Mean} & \multicolumn{3}{c}{Var} & \multicolumn{3}{c}{R.B. ($\%$)} & \multicolumn{3}{c}{Mean Width of C.I.s} & \multicolumn{3}{c}{Coverage}\tabularnewline
& FULL & MI & PFI & FULL & MI & PFI & FULL & MI & PFI & FULL & MI & PFI & FULL & MI & PFI\tabularnewline
\hline
$\eta_{1}$ & 92.46 & 93.95 & 92.85 & 76.46 & 119.18 & 120.67 & 6.08 & \textcolor{black}{48.06} & 7.81 & 18.01 & \textcolor{black}{26.57} & \textcolor{black}{22.81} & 0.951 & \textcolor{black}{0.964} & 0.952\tabularnewline
$\eta_{2}$ & 67.72 & 68.40 & 67.76 & 40.05 & 60.91 & 59.53 & 6.55 & \textcolor{black}{30.53} & 3.26 & 13.07 & \textcolor{black}{17.83} & \textcolor{black}{15.68} & 0.943 & \textcolor{black}{0.954} & 0.946\tabularnewline
$\eta_{3}$ & 18.30 & 18.45 & 18.28 & 2.12 & 3.32 & 3.29 & -3.06 & \textcolor{black}{23.05} & -1.63 & 2.86 & \textcolor{black}{4.04} & \textcolor{black}{3.60} & 0.944 & \textcolor{black}{0.961} & 0.948\tabularnewline
$\eta_{4}$ & 13.03 & 13.12 & 13.00 & 1.02 & 1.77 & 1.76 & 0.51 & \textcolor{black}{23.02} & -4.28 & 2.03 & \textcolor{black}{2.95} & \textcolor{black}{2.60} & 0.946 & \textcolor{black}{0.962} & 0.943\tabularnewline
$\eta_{5}$ & 5.92 & 5.98 & 5.91 & 0.22 & 0.46 & 0.46 & 1.84 & \textcolor{black}{16.96} & -4.40 & 0.94 & \textcolor{black}{1.47} & \textcolor{black}{1.32} & 0.953 & \textcolor{black}{0.963} & 0.947\tabularnewline
$\eta_{6}$ & 20.42 & 20.63 & 20.42 & 0.70 & 1.11 & 1.10 & -3.36 & \textcolor{black}{32.75} & -3.97 & 1.65 & \textcolor{black}{2.42} & \textcolor{black}{2.06} & 0.952 & \textcolor{black}{0.983} & 0.953\tabularnewline
\hline
\end{tabular}
\end{sidewaystable}
\section{CONCLUDING REMARKS}
In survey sampling, MI and FI are two available
approaches to imputation for general-purpose estimation. In MI, Rubin's
variance estimation formula is recommended because of its simplicity,
but it requires the congeniality condition of \citet*{meng1994multiple}, which can be restrictive in practice. A merit of FI is that the congeniality
condition is not needed for consistent variance estimation.
When the sampling design is informative, MI can use an augmented model to make the sampling design noninformative. However, incorporating all design information into the model is not always possible (\citealt{reiter2006importance}) and valid inference under MI is not easy or sometimes impossible (\citealt{berg2015}).
In contrast, FI can handle informative sampling design easily as it incorporates sampling weights into estimation instead of modeling.
So far, we have presented FI
under the MAR case. Parametric FI can be adapted to a situation,
where the missing values are suspected to be missing not at random
(MNAR) (\citealt{kim2012parametric};
\citealt{Yang2013parametric}). A semiparametric FI using the exponential tilting model of \citet*{kimyu11} is also promising and is currently under development. Also,
FI can be used
to approximate observed log likelihood easily (\citealt{Yang2015likelihood-based}).
The approximation of the observed log likelihood can be directly applied
to model selections or model comparisons with missing data, such as
using Akaike Information Criterion or the Bayesian Information Criterion.
Further investigation on this topic will be worthwhile.
We conclude the paper with the hope that \textcolor{black}{continuing
efforts will be made into developing statistical methods and corresponding
computational programs (an R software package is in progress) for
FI, so as to make these methods accessible to a broader audience.}
|
1,116,691,497,896 | arxiv | \section{Introduction}
\label{sec:intro}
A rigid circular cylinder, when exposed to a fluid flow, exhibits a wide variety of dynamics depending not only on its mechanical properties and the flow characteristics, but also on its boundary conditions. For example, \citet{Strouhal_1878} and \citet{Rayleigh_1879} illustrated that the singing notes of thin wires and strings, subject to an air-stream, are a function of the relative velocity $U$ and the wire diameter $d_0$, independent of their elastic properties. Soon after, \citet{Benard_1908} showed experimentally the link between such \textit{Aeolian} tones and an array of vortices, now known as vortex streets, in a cylinder wake.
Later, it was also inferred that, when the wire was free to vibrate, the \textit{Aeolian} tones \textit{locked-in} with a natural tone. Since these pioneering accounts, many investigations on the cylinder wake characteristics \cite{Roshko1954_modRe, Roshko1961_highRe, Berger_AnnRevFlu1972, Williamson_AnnRevFlu1996}, under either elastically-mounted conditions \cite{FergusonParkinson_JEI1967, Taneda_JPhySocJap1968} or forced oscillations \cite{Koopmann_JFM1967, Griffin_JEI1972}, along with the corresponding \textit{Vortex-Induced-Vibration} (VIV) \cite{Sarpkaya_1979, Bearman_AnnRevFlu1984, Sarpkaya_1995, Sarpkaya_JFS2004, Williamson_2004, Bearman_JFS2011}, continue to contribute to a multitude of industrial applications involving structural design in both marine and civil engineering, energy harvesting, locomotion, mixing and transfer \cite[among others]{Blevins, Paidoussis_book1998, SummerFredsoe_2006, NaudascherRockwell_book2017}. In this wider context, the present work aims to shed light upon the phenomenology of \textit{Fluid-Structure Interactions} (FSI) arising from an unsteady flow over a flexible structure.
Often in applications and in nature, mechanical structures exposed to a fluid flow can be very flexible in order to alleviate flow-generated forces \cite{Koehl_AmZoo1984, Vogel_1984, Vogel_1989, Koehl_AnnRevEcoSys1996, Vogel_1994, DeLangre2008, Gosselin_JExpBot2019}. For example, a flexible structure exposed to a steady flow undergoes \textit{static} reconfiguration whereby the profile drag experienced by the former is reduced, in comparison with that of its rigid counterpart \cite{Alben2002, Gosselin2009, Luhar_Reconfig2011, Barsu2016, Alvarado_2017}. In turn, the resulting internal bending stresses and, furthermore, the modified flow angle of attack might alter both the structural dynamics and the wake characteristics. Such effects have only recently been considered for flow-bent cylinders to explore the associated FSI, namely, reduction in oscillation amplitude, multi-frequency response and \textit{lock-in} mode, to name a few \cite[and references therein]{Leclerc_deLangre_2018}. Also, a related topic, namely, the flutter of a flat plate, be it rigid or flexible, has received a great deal of attention during the past two decades \cite[for more]{Shelley_AnnRevFlu2011}. By contrast, studies on the VIV of a single cantilevered slender blade whose one end is anchored to the flow bed remain rare.
In this context, using single specimens of four different freshwater plants in laboratory flumes whose floor is covered with a uniform density of shorter artificial grass, \citet{Siniscalchi_Jhyd2013} correlated individual plant movement and its drag force fluctuations with upstream turbulence. They also remarked a spatial flapping-like movement in all plant species, with the propagation velocity of perturbations being comparable to the approach flow velocity. More recently, a series of works by \citet{Jin_PRF2018, Jin_PoF2018, Jin_TandemBlades_JFM2018, Jin_JFM2019} shed light on a rich phenomenology of \textit{Fluid-Structure Interactions} for dynamically \textit{reconfigured} flexible blades. Firstly, if $U_0$ is the average flow speed, $w_b$ the base width of flexible plates facing a channel flow and $\nu$ the liquid's kinematic viscosity, \citet{Jin_PRF2018} point out that, for Reynolds numbers $Re_b = U_0 w_b/\nu$ ranging from $3000$ to $3 \times 10^4$, moderately-long flexible structures (aspect ratio $L_b/w_b \in (5, 25)$) vibrate in the stream-wise direction at their natural frequency while the wake fluctuations beat at the frequency of vortex shedding. This implies that energy harvesting from the fluid-induced motion of flexible blades \cite{Zhu_JFS2012} might be controlled by properly tuning the structures' natural frequencies. On the other hand, \citet{Jin_PoF2018}, by analysing the role of tip geometry on moderately-short flexible plates (aspect ratio $L_b/w_b \in (2, 3)$), illustrated that the structural dynamics are governed by both wake fluctuations and non-linear modulations of structural bending. Furthermore, these cantilevered structures presented a maximum tip oscillation intensity at some critical Cauchy number, the number which compares the profile drag force experienced by the structure if it were rigid with the characteristic internal restoring force. Later, \citet{Jin_JFM2019} used a flexible blade of aspect ratio $4$ at different inclination angles to the incoming flow and highlighted the presence of three modes of tip oscillations, namely, fluttering, tip-twisting and orbital modes, which occur at successively increasing Cauchy number $C_y$. Orbital modes are characterized by large-amplitude coupled twisting and bending deformations, and they occur for sufficiently large inclination angles. Much is still to follow in the perspective of these studies, for instance, on the influence of the mass ratio.
Furthermore, long flexible structures in nature and in applications rarely occur as isolated objects. Artificial \textit{canopies} of flexible blades are therefore pertinent to studying the effect of wind on trees, terrestrial plant fields and aquatic vegetation \cite{DeLangre2008, Nepf_AnnRevFlu2012, Gosselin_JExpBot2019}. Indeed, a wide variety of mechanically activated phenomena arising through FSI in plants, or plant canopies, are crucial for sediment transport, water quality and the biodiversity of aquatic species. Among the well-known examples, \textit{honami} (Japanese: \textit{ho} $=$ crops and \textit{nami} $=$ wave) \cite{Finnigan_1979honami, Py2006, Gosselin2009} and \textit{monami} (Japanese: \textit{mo} $=$ aquatic plant) \cite{AckermanOkubo_1993monami, Ghisalberti2002, Singh2016, Tschisgale_JFM2021}, respectively, represent the coherent motion of crops and aquatic canopies when the flow resistance is sufficiently high. In these cases, the proposed mechanistic views generally involve the two-way coupling between flow vortices and the flexible canopy of plants \cite{DeLangre2008, Nepf_AnnRevFlu2012, Nikora_2012}. Furthermore, the velocity spectrum and eddies in the incoming flow modulate the motion of flexible structures \cite{Jin_PRE2016}. Also, there is now some evidence \cite[Chap. $5$]{Barsu_these2016} that the mechanical response of flexible blades in a channel flow might depend strongly on the Cauchy number $C_y$. It seems, therefore, important to know how vortices of different sizes interact with cantilevered flexible blades at various flow velocities and blade physical characteristics.
In the wake of these previous investigations, we study the motion of a thin flexible sheet when it encounters a regular array of vortices generated by B\'enard-K\'arm\'an vortex shedding behind a cylinder. Thereby, we seek to provide a few insights into the \textit{dynamic} reconfiguration (\ref{sec:SheetReconfig}), tip amplitude (\ref{sec:TipAmplitude}), tip frequency (\ref{sec:TipFreq}) and modes of oscillation (\ref{sec:Waves}), for a good range of Cauchy numbers. Such results may contribute not only to the physics of the structural dynamics of plant canopies that exhibit coherent motion such as \textit{honami} and \textit{monami}, but also to a novel kind of FSI which involves slender flexible objects exposed to coherent vortical structures.
\section{Materials, set-up and methods}
\label{sec:set-up}
Five different cases of low-density polyethylene (density $\rho_b = 920$ kg~m$^{-3}$ and mass ratio $0.93$) sheets are considered (see Table \ref{tab:PhysicalProperties}) by changing the sheet length $L_b$ and the thickness $e_b$. The latter is chosen to be always very small compared to all other dimensions. Here, it is pointed out that cutting extra-thin polyethylene sheets to prepare long blades is a delicate task, as the process often leads to some local plastic deformation. For the sake of simplicity, the sheet width $w_b = 15$ mm and sheet material are kept identical for all experiments discussed here and it is expected that the results are qualitatively similar for other materials since the Cauchy number
\begin{equation}
C_{y} = \dfrac{C_d \dfrac{1}{2} \rho U_0^2}{E}\left( \dfrac{ L_b^3 w_b}{I} \right) = 12 \left(\dfrac{C_d \frac{1}{2} \rho U_0^2}{E}\right) \left( \dfrac{ L_b^3}{e_b^3} \right),
\label{eq:CauchyNumber}
\end{equation}
which expresses the ratio of the flow drag and the elastic restoring forces, varies over a wide range, namely, between $\mathcal{O}(1)$ and $10^5$. Here, $U_0$ is the depth-averaged flow speed, $E$ is the Young's modulus, and the second moment of inertia for a thin flexible sheet is taken as $I = w_b e_b^3/12$ to obtain the second formula on the right-hand side. Throughout the present work, the profile drag coefficient $C_d$ of laterally-confined flexible sheets of width $w_b = 15$ mm is taken as $C_d = 6$, based on measurements in \citet[see fig.~$6$a]{Barsu2016}. Furthermore, an estimate for the natural frequency of a fixed-free cantilever beam in a fluid is
\begin{equation}
f_{ni} = a_{i}\sqrt{\dfrac{E I/w_b L_b^4 }{\rho_b e_b + {\pi C_M \rho w_b}/{4}}} = a_{i}\sqrt{\dfrac{E \left(e_b/L_b\right)^3}{12 \rho_b e_b L_b + {3\pi C_M \rho w_bL_b }}} ,
\label{Eq:FreqNat}
\end{equation}
where $a_i$ is a non-dimensional constant ($a_{1} = 0.56$ and $a_{2} = 3.5$ for the first- and second-order natural frequency, respectively) \cite{Blevins2015formulas, SummerFredsoe_2006} and $C_M$ is the added mass coefficient. This is taken as the undamped bending frequency of all sheets here, despite the fact that such a \textit{natural} frequency is usually attributed to beams in fluids that undergo small-amplitude vibrations about an equilibrium position, under the assumption that the beam deflections remain small compared to its length. Note that such a frequency might not be strictly relevant to the long flexible sheets considered in this study, since they exhibit large deflections when subject to the flow drag in the experiments presented here. Moreover, these sheets are, in fact, \textit{pre-tensioned} due to the presence of a mean flow drag.
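For reference, the two estimates above can be evaluated in a few lines of Python. The following is a minimal sketch for sheet S$442$, in which the added-mass coefficient ($C_M = 1$) is an assumption, since its value is not specified here; the output is therefore only indicative of the orders of magnitude listed in Table~\ref{tab:PhysicalProperties}.
\begin{verbatim}
import numpy as np

rho, rho_b = 1000.0, 920.0             # water and sheet density [kg/m^3]
C_d, C_M = 6.0, 1.0                    # drag and (assumed) added-mass coeff.
L_b, e_b, w_b = 84e-3, 0.19e-3, 15e-3  # sheet S442 dimensions [m]
E = 210e6                              # Young's modulus [Pa]

def cauchy(U0):
    """Cauchy number, eqn. (1), with I = w_b e_b^3 / 12."""
    return 12.0 * (C_d * 0.5 * rho * U0**2 / E) * (L_b / e_b)**3

def f_n(i):
    """Natural frequency of a fixed-free beam in a fluid, eqn. (2)."""
    a = {1: 0.56, 2: 3.5}[i]
    num = E * (e_b / L_b)**3
    den = 12.0 * rho_b * e_b * L_b + 3.0 * np.pi * C_M * rho * w_b * L_b
    return a * np.sqrt(num / den)

for U0 in (0.022, 0.088):              # depth-averaged speeds [m/s]
    print(f"U0 = {U0} m/s : C_y ~ {cauchy(U0):.0f}")
print(f"f_n1 ~ {f_n(1):.2f} Hz")       # ~0.25 Hz for the assumed C_M = 1
\end{verbatim}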
\begin{figure}
\begin{center}
\epsfig{file=ExpSetup_v3.eps,width=0.8\textwidth,keepaspectratio=true}
\end{center}
\caption{Schematic view of (a) the $2$-meter long water channel, along with (b) top and (c) front views showing how the cylindrical obstacle is set up in order to impose a vortex-street forcing on a flexible sheet downstream.}
\label{fig:SchemaManip}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{l|c|c|c|c|c|c|c}
\hline
&{} &{} &{} &{} &{} &{} \\
{ID} &{$L_b$} &{$e_b$} &{$E$} &{$C_y = \dfrac{C_d \frac{1}{2} \rho U_0^2}{E} \left( \frac{ L_b^3 w_b}{I} \right)$} &{$B_a = \dfrac{\Delta \rho g e_b}{E} \left( \frac{ L_b^3 w_b}{I} \right)$} &{$f_{n1}$} &{Symbol}\\
{} &{(mm)} &{(mm)} &{($\times 10^6$ Pa)} &{} &{} &{ (Hz) } &{} \\
\hline
&{} &{} &{} &{} &{} &{} &{} \\
$S442$ &{$84$ } &{$0.19$ } &{$210$} &{$5$ -- $88$} &{$0.55$} &{$0.286$} &{$\triangleleft$ }\\
$S1263$ &{$240$ } &{$0.19$ } &{$210$} &{$130$ -- $2000$} &{$12.8$} &{$0.035$} &{$\Box$}\\
$S1400$ &{$84$ } &{$0.06$ } &{$230$} &{$908$ -- $4200$} &{$8.3$} &{$0.042$} &{$\bigcirc$}\\
$S2000$ &{$200$ } &{$0.10$ } &{$250$} &{$2100$ -- $9800$} &{$32.1$} &{$0.018$} &{$\medwhitestar$}\\
$S4000$ &{$240$ } &{$0.06$ } &{$230$} &{$12500$ -- $98200$} &{$192.4$} &{$0.005$} &{$\Diamond$}\\
&{} &{} &{} &{} &{} &{} \\
\hline
\end{tabular}
\caption{Geometric and mechanical properties of thin low-density polyethylene sheets, along with the range of related Cauchy numbers attained in this work. All sheets are less dense than water (density $\rho_b = 920$ kg~m$^{-3}$) and their width ($w_b = 15$ mm) is kept constant throughout this work. The sheet ID also indicates its stiffness ratio, given by $L_b/e_b$.}
\label{tab:PhysicalProperties}
\end{center}
\end{table}
All experiments are performed in a long, narrow water channel as schematized in figure \ref{fig:SchemaManip} (a). Water from the pump ($700$ -- $2800$ lit/h) passes through a fine grid in the inlet and flows out into a narrow $2$-meter long channel of width $4$ cm and height $25$ cm. By properly adjusting the outlet gate and the horizontal slope of the channel, it is possible to maintain a free-surface flow of uniform water height across the entire channel. Sufficiently far from the inlet, at a little more than $1$-meter, a circular cylinder of diameter $d_0$ is fixed with its axis parallel to the floor, but perpendicular to the flow. Behind the cylinder (downstream), one of the thin, flexible and lighter-than-water rectangular sheets is fixed firmly to the channel bottom. Figures \ref{fig:SchemaManip} (b) -- (c) illustrate this set-up, wherein the cylinder and the sheet are exposed to a uniform channel flow directly orthogonal to the sheet's section $L_b w_b$.
Firstly, the cylinder and the sheet are placed at the furthest distance from both the inlet and the outlet. Secondly, if the sheet is too close to the cylinder, it will interfere with the latter's wake and, also, the vortex street may not be fully developed. Previous investigations \cite{Roshko1954_modRe, Szepessy1992_AR} on cylinder wakes suggest that stable vortex streets form rapidly downstream, at about $1.5$ times the cylinder diameter. Also, in the absence of a sheet, a quick qualitative study using a vertically-placed Ultrasonic Doppler Velocimetry probe (UDV, as in figure \ref{fig:SchemaManip}) indicated that the downstream evolution of the instantaneous vertical water velocity presents a maximum at about $3d_0$. Therefore, the sheet's foot is fixed at a distance $3d_0$ from the cylinder's center, so that a well-developed vortex street is set up in front of the sheet while the latter does not influence the cylinder's wake flow. Finally, a wide variety of choices is still possible for the cylinder's vertical location from the channel floor $h_0$ and, also, for the water height $h_w$. To avoid the effect of the free surface on the Vortex-Forced-Oscillations (VFO) of the sheet, and also for the sake of simplicity, the cylinder's vertical location $h_0$ is taken equal to the sheet height in a steady, uniform channel flow whose water height is kept identical throughout this work ($h_w = 22.1$~cm). Note that $h_w$ is at least $3$ times, and at most $30$ times, larger than $h_0$.
\begin{table}[H]
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
{} &{} &{} &{} \\
{$d_0$} &{$U_0 d_0 / \nu$} &{$f_v$} &{$f_v/f_n$}\\[3pt]
{(mm)} &{} &{(Hz)} &{}\\
\hline
{} &{} &{} &{} \\
{$10$ } &{$236$ -- $943$} &{$0.14$ -- $0.63$} &{$2$ -- $360$}\\
{$20$ } &{$472$ -- $1886$} &{$0.28$ -- $1.26$ } &{$1$ -- $210$}\\
{$40$ } &{$943$ -- $3773$} &{$0.37$ -- $2.13$ } &{$0.8$ -- $105$}\\
{} &{} &{} &{} \\
\hline
\end{tabular}
\caption{Characteristics of B\'enard-K\'arm\'an vortices for a range of water speeds $U_0 = 2.2$ -- $8.8$ cm~s$^{-1}$. The shedding frequencies were obtained using Ultrasonic Doppler Velocimetry (UDV). As compared to the cylinder Reynolds number $Re_{d} = U_0 d_0 / \nu$, the hydraulic Reynolds number $Re_{h} = U_0 D_h / \nu$ varies from $1730$ to $6920$, wherein the hydraulic diameter is $D_h = {4 h_w w_c}/{(2h_w + w_c)}$ and the water height $h_w = 22.1$ cm is kept constant for all experiments. The \textit{reduced} frequency $u = f_v/f_n \gtrsim 1$ for all cases considered here; it also increases almost as $C_y^{9/5}$ (not shown here).}
\label{tab:VortexCharacterics}
\end{center}
\end{table}
Indeed, a first series of experiments consists of measuring the sheet deflection $h_0$ (not presented here) in a fully-developed uniform water flow without vortices. These measurements are then used to fix the cylinder's vertical location for each run in the vortex-forced-vibration experiments. Thus, for each set of sheet physical properties and flow conditions, namely, the water speed $U_0$ and the cylinder diameter $d_0$, the latter's bottom is placed at a distance $h_0$ from the channel floor, such that $h_0$ is equal to the deflected height of a sheet in an equivalent \textit{steady}, uniform flow of the same flow rate and water height. In the range of depth-averaged flow speeds $U_0 = 2.2$ -- $8.8$ cm~s$^{-1}$ and diameters $d_0 = 10$, $20$, $40$ mm used here, the cylinder Reynolds number $Re_d = U_0 d_0/ \nu$ varies over a decade, between $236$ and $3773$.
\begin{figure}
\begin{center}
\epsfig{file=fUDV_Uh_Sth_h0d0_v3Bis.eps,width=1\textwidth,keepaspectratio=true}
\end{center}
\caption{B\'enard-K\'arm\'an vortex shedding frequency and the related Strouhal number: (a) $St_0 = f_v d_0/U_0$, based on the depth-averaged water speed $U_0$, and (b) $St_{h0} = f_v d_0/U_{h0}$, based on the cylinder center velocity $U_{h0}$ as computed from the classical \textit{Coles law} \cite{Coles1956law} for fully-developed channel flows.}
\label{fig:FreqBrut}
\end{figure}
An Ultrasonic Doppler Velocimetry (UDV) probe is used to obtain the vortex street characteristics at the center plane of the channel and at a fixed position downstream, equal to $3$ times the cylinder diameter. The probe measures the instantaneous vertical velocity component in the water flow at an acquisition frequency of about $2$ MHz for $3$ minutes. The range of shedding frequencies obtained varies between $0.14$ and $2.13$ Hz, as indicated in Table \ref{tab:VortexCharacterics} (see also figure \ref{fig:FreqBrut}).
Experimental Strouhal number $St_0 = f_v d_0/U_0$ data, based on the depth-averaged water speed $U_0$, are presented in figure \ref{fig:FreqBrut}(a). Here, the results are given in terms of the gap-to-diameter ratio ($h_0/d_0$), as is conventional in previous works which consider the influence of a planar wall on the vortex dynamics \cite{BearmanZdravkovich_JFM1978_WallEffect, Angrilli_1982JFE, Grass1984_BedInfluence, Taniguchi_ExpFlu1990}. Although the data do not display any general trend, it is observed that for the smallest diameter ($d_0 = 10$ mm) and $h_0/d_0 > 2$, the Strouhal number is approximately $0.22$. This value corresponds to the classical Strouhal number measurements in the absence of the channel floor at moderate cylinder Reynolds numbers \cite{Roshko1954_modRe, Williamson1998_St_ReSeries}. For larger diameters, namely $d_0 = 20$, $40$ mm, there is quite a scatter in the Strouhal number $St_0$, between $0.25$ -- $0.4$. Nonetheless, this dispersion can be understood if the local velocity $U_{h0}$ at the cylinder center is used. For this purpose, via the depth-averaged water speed $U_0$ and the water height $h_w$, the channel flow profile $U(y)$ was computed using the so-called \textit{Coles law} \cite{Coles1956law}, by taking commonly used empirical constants as in, for instance, \citet{Kirkgoz_JHE1997velocity}. And so, by taking $U_{h0} = U(y = h_0)$, the new \textit{rescaled} Strouhal number $St_{h0} = f_v d_0/U_{h0}$, shown in figure \ref{fig:FreqBrut}(b), presents a much smaller dispersion in the range of gap-to-diameter ratios studied here. Furthermore, the globally decreasing trend of the \textit{rescaled} Strouhal number $St_{h0}$ data in this work is similar to previous experimental \cite[{$*$}]{Angrilli_1982JFE, Price_JSFS2002} and $3$-D LES numerical investigations \cite[{$\star$}]{Sarkar_JFS2010}. As discussed in the introductory section of \citet{Sarkar_JFS2010}, the trend at small and moderate gap-to-diameter ratios ($h_0/d_0 \lesssim 4$) is either a growing or a decreasing function of $h_0/d_0$, depending on the cylinder Reynolds number, the boundary layer thickness at the channel floor and the presence of a free surface. In conclusion, the UDV-based results in the present study are consistent with previous works at moderate Reynolds numbers \cite{Angrilli_1982JFE, Price_JSFS2002, Sarkar_JFS2010}, provided a proper velocity scale is taken for the Strouhal number.
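To illustrate the rescaling used in figure~\ref{fig:FreqBrut}(b), the sketch below fits a log-wake (\textit{Coles}) profile to a given depth-averaged speed and rebuilds the Strouhal number with the local speed at the cylinder center. The constants $\kappa$, $B$ and the wake parameter $\Pi$ are textbook values assumed here, not necessarily those of \citet{Kirkgoz_JHE1997velocity}, and the flow case ($U_0$, $d_0$, $h_0$, $f_v$) is purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

nu, kappa, B, Pi = 1e-6, 0.41, 5.5, 0.2    # assumed profile constants
h_w = 0.221                                # water height [m]

def coles_U(y, u_tau):
    """Log-wake velocity profile U(y), dimensional form."""
    return u_tau * (np.log(y * u_tau / nu) / kappa + B
                    + (2.0 * Pi / kappa) * np.sin(np.pi * y / (2 * h_w))**2)

def u_tau_from_U0(U0, n=2000):
    """Friction velocity such that the depth average of U(y) equals U0."""
    y = np.linspace(1e-4, h_w, n)
    return brentq(lambda ut: coles_U(y, ut).mean() - U0, 1e-5, 0.1)

U0, d0, h0, f_v = 0.079, 0.04, 0.05, 0.56  # one illustrative case
U_h0 = coles_U(h0, u_tau_from_U0(U0))      # local speed at cylinder center
print(f"St_0  = {f_v * d0 / U0:.2f}")
print(f"St_h0 = {f_v * d0 / U_h0:.2f}")
\end{verbatim}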
\section{Sheet-tip dynamics due to B\'enard-K\'arm\'an vortices: general remarks}
\label{sec:TipDyn}
As discussed before, each experiment is set up by placing the cylinder base at a known distance $h_0$, equal to the \textit{static} deflection height. Note that the cylinder is located at about $1$-meter from the channel entrance. The cylinder then sheds a B\'enard-K\'arm\'an vortex street of known shedding frequency (see \ref{sec:TipFreq}), which encounters a flow-bent flexible sheet downstream. A high-resolution, full-frame digital camera (Sony $\alpha 7$) is used to image the Vortex-Forced-Oscillations (VFO) of the flexible sheet at a rate of $25$ images per second over a time period of $7$ to $8$ minutes (see supplementary videos). The resulting images are analysed using the open-source freeware \textit{ImageJ} \cite{Schindelin_NatureMethods2012fiji} and the algorithms therein for brightness thresholding \cite{Kapur_CompVision1985thresholding,Tsai_1985CompVisionthresholding}, edge detection, etc. Sample images and the corresponding edge detection (dots, pink) are shown in figures \ref{fig:ImgDetec}(a) and (b). Clearly, the contour of the sheet is well-detected. The sheet ``tip'' is taken as the center of the last identified sheet edge and, thereby, a robust sheet tip detection is obtained by these techniques. Such an identified sheet tip (\textcolor{blue} {$\ast$}), along with the manually demarcated sheet foot (\textcolor{red} {$\times$}), are also displayed in figures \ref{fig:ImgDetec}(a) and (b). This allows the tip position to be resolved to within a few tenths of a millimetre.
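As an indication of the processing chain, the sketch below reproduces the tip detection with scikit-image in place of \textit{ImageJ}; Otsu's threshold stands in for the entropy-based methods \cite{Kapur_CompVision1985thresholding, Tsai_1985CompVisionthresholding} actually used, the file name is hypothetical, and the flow is assumed to go from left to right in the image.
\begin{verbatim}
import numpy as np
from skimage import io, filters, measure

frame = io.imread("frame_0001.png", as_gray=True)  # hypothetical image file
# assumes the sheet appears brighter than the background
mask = frame > filters.threshold_otsu(frame)       # binarise the sheet
labels = measure.label(mask)                       # connected components
sheet = max(measure.regionprops(labels), key=lambda r: r.area)
coords = sheet.coords                              # (row, col) pixel list
# tip = centre of the right-most detected column (flow from left to right)
tip_col = coords[:, 1].max()
tip_row = coords[coords[:, 1] == tip_col, 0].mean()
print(f"sheet tip at pixel (x, y) = ({tip_col}, {tip_row:.1f})")
\end{verbatim}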
\begin{figure}[H]
\begin{center}
\epsfig{file=ImgDetection_317_212.eps,width=1\textwidth,keepaspectratio=true}
\end{center}
\caption{Two examples showing the results of sheet edge detection via \textit{ImageJ}, with insets displaying a zoom on the tip of the sheet. (a) S$442$: $C_y = 11.3$, $Re_h = 2.5 \times 10^3$, $Re_d = 1.4 \times 10^3$ and (b) S$4000$: $C_y = 7.83 \times 10^4$, $Re_h = 6.2 \times 10^3$, $Re_d = 3.4 \times 10^3$. Here, dots (pink) show the detected sheet edge, while an asterisk (blue) and a cross (red) indicate the sheet's tip and foot, respectively.}
\label{fig:ImgDetec}
\end{figure}
If $x_b(t)$ and $y_b(t)$ are respectively the horizontal and vertical position of the sheet tip, the corresponding horizontal and vertical fluctuations are then defined as
\begin{eqnarray}
\tilde{x}_b(t) &= &x_b(t) - \bar{x}_b,\\
\tilde{y}_b(t) &= &y_b(t) - \bar{y}_b,
\end{eqnarray}
where an over-bar indicates a time-averaged variable. Now consider the sheet-tip fluctuations for a few typical cases provided in figure \ref{fig:BTip}. The time evolution of the fluctuations (\textcolor{blue}{$\tilde{x}_b$}, \textcolor{red}{$\tilde{y}_b$}), in millimetres, is shown on the left, while the corresponding peak-normalised spectra of the tip's vertical position are provided on the right. For the sake of clarity, only a minute-long evolution is given. Note that these are raw data: a few outliers beyond $4$ standard deviations are eliminated, but no prior moving average is applied.
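For completeness, a minimal sketch of the spectral estimate used on the right of figure~\ref{fig:BTip} is given below; the input file is a hypothetical single-column trace of the tip's vertical position sampled at the $25$~Hz frame rate, and the segment length is an assumption of this illustration.
\begin{verbatim}
import numpy as np
from scipy.signal import welch

fs = 25.0                                  # camera frame rate [Hz]
y_b = np.loadtxt("tip_y.txt")              # hypothetical tip trace [mm]
y_t = y_b - y_b.mean()                     # fluctuation about the mean
y_t = y_t[np.abs(y_t) < 4.0 * y_t.std()]   # drop outliers beyond 4 sigma
f, Pyy = welch(y_t, fs=fs, nperseg=4096)   # long segments resolve low f_v
Pyy /= Pyy.max()                           # peak-normalised spectrum
print(f"dominant tip frequency f_b = {f[Pyy.argmax()]:.2f} Hz")
\end{verbatim}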
Figures \ref{fig:BTip}(a) \& (b) display the temporal tip response of the sheet S$442$ to B\'enard-K\'arm\'an vortices shed by the same cylinder ($d_0 = 40$ mm) at the lowest and highest average water speed, respectively. In the former case, at $U_0 = 3.1$ cm~s$^{-1}$, both vertical and horizontal oscillations are of the same order of magnitude and, also, they are synchronous. As the speed increases to $U_0 = 7.9$ cm~s$^{-1}$, the $y$-fluctuations present an almost proportionally-increased \textit{back-and-forth} amplitude, while the $x$-fluctuations are much smaller in magnitude. Clearly, the $x$-tip detection seems to be less robust for this case. Nonetheless, the vertical sheet tip position exhibits a reasonably continuous evolution, wherein its frequency of oscillation $f_b$ seems to be greater than in the case with the smaller water speed $U_0 = 3.1$ cm~s$^{-1}$. This is readily visible in the corresponding power spectral density shown on the immediate right of the same figures. Here, arrows are used to indicate the forcing frequency (vortex shedding) $f_v = 0.56$~Hz and the first natural frequency $f_{n1} = 0.29$~Hz, respectively: the sheet tip oscillates at the vortex shedding frequency for these two cases. Furthermore, both these examples correspond to the sheet for which the relevant Cauchy numbers ($C_y = 5$ -- $88$) are the smallest. In comparison, figure \ref{fig:BTip}(c) gives the tip fluctuations for S$4000$, at one of the largest Cauchy numbers ($C_y = 9 \times 10^{4}$). This sheet exhibits large-amplitude oscillations in the vertical direction, as big as the cylinder diameter $d_0 = 40$~mm, while the horizontal oscillations remain small ($\lesssim 4$ mm) as before. In particular, the \textit{back-and-forth} $y$-fluctuation amplitude is about $4$ times larger than for the case S$442$ at the same average water speed. The corresponding spectrum of the vertical tip position does not present a peak at the sheet natural frequency $f_{n1} = 5 \times 10^{-3}$~Hz. However, a large low-frequency peak ($0.15$~Hz), along with a smaller second peak at the vortex shedding frequency $f_v = 0.55$~Hz, are visible.
\begin{figure}[H]
\begin{center}
\epsfig{file=fftBladeTip_v2.eps,width=1\textwidth,keepaspectratio=true}
\end{center}
\caption{[\textit{Left}] Blade tip oscillations (in mm) of the ``stiffest'' sheet S$442$ at different average water speed (a) $U_0 = 3.1$ cm~s$^{-1}$, (b) $U_0 = 7.9$ cm~s$^{-1}$ as compared with the ``most flexible'' sheet S$4000$ at (c) $U_0 = 7.9$ cm~s$^{-1}$ due to vortices shed by a cylinder of diameter $d_0 = 40$ mm. (d) Same as (c) but for $d_0 = 10$ mm. [\textit{Right}] Power spectral density of the sheet tip's vertical fluctuations. Arrows indicate various frequencies, namely, the vortex-shedding frequency ($f_v$) and the first two sheet natural frequencies ($f_{n1}$ and $f_{n2}$).}
\label{fig:BTip}
\end{figure}
Qualitatively similar temporal characteristics are observed for the sheet tip when the diameter $d_0$ is decreased. For example, figure \ref{fig:BTip}(d) provides typical data at $d_0 = 10$~mm, to be compared with the equivalent case at the same water speed and sheet physical properties given in figure \ref{fig:BTip}(c). Firstly, the vertical oscillations are diminished almost in proportion to the diameter ratio. Secondly, the power spectral density presents a peak neither at the vortex shedding frequency $f_v = 0.55$~Hz, nor at the sheet natural frequency $f_{n1} = 5 \times 10^{-3}$~Hz, but instead at almost the same frequency ($0.16$~Hz $< f_v$) as for the case with the larger diameter $d_0 = 40$~mm.
In summary, at smaller average water speeds and stiffness ratios $L_b/e_b$, flexible sheets seem to exhibit a small-amplitude \textit{flutter}, {i.e.}, the $x$- and $y$-oscillations are comparable, and are also much smaller than the cylinder diameter $d_0$. Tip fluctuations are larger at higher speeds and bigger stiffness ratios $L_b/e_b$, leading to oscillations comparable to $d_0$. The peak-normalised power spectra suggest that the tip motion in the $y$ direction oscillates either at the vortex shedding frequency $f_v$, or at a frequency lower than $f_v$ as $d_0$ decreases or $L_b/e_b$ increases. Moreover, as seen in figure \ref{fig:ImgDetec}, a given sheet's local curvature can be either single-signed or multiple-signed, depending on the sheet stiffness and the vortex street's width. In the following, these general remarks are further analyzed.
\section{Time-averaged sheet reconfiguration}
\label{sec:SheetReconfig}
Related to the sheet tip dynamics is the mean sheet tip position. In the absence of a vortex street, at any chosen flow rate and constant water height, a sheet bends in the flow direction and exhibits a deflected height $h_b < L_b$. This leads to the well-known profile drag reduction, since the drag force experienced by a flexible sheet, $F_{d} = C_d \frac{1}{2} \rho U^2 h_b w_b$, is smaller compared to the profile drag $C_d \frac{1}{2} \rho U^2 L_b w_b$ if the same sheet were rigid (here, $C_d$ is the sheet drag coefficient). This drag reduction is often quantified using the so-called \textit{static} reconfiguration number $\mathcal{R} = h_b/L_b$ as a function of the flow speed $U_0$. It is now well-established that the drag reduction can be expressed as a power law, namely
$$F_{d} \propto \dfrac{1}{2} \rho U_0^{2+\mathcal{V}} A_f,$$
where $A_f = L_b w_b$ is the undeformed sheet frontal area and $\mathcal{V} < 0$ is known as the Vogel number~\cite{Vogel_1989}, such that $h_b/L_b \sim U_0^{\mathcal{V}}$. For long thin blades that are anchored to the flow bed at one end, the Vogel number is $\mathcal{V} = -2/3$~\cite{DeLangre2008}.
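In practice, the Vogel number is extracted here as a least-squares slope in log-log coordinates; the short sketch below illustrates the fit, with invented placeholder values standing in for the measured reconfiguration data.
\begin{verbatim}
import numpy as np

U_h = np.array([0.022, 0.035, 0.055, 0.088])  # water speed [m/s]
R = np.array([0.52, 0.40, 0.31, 0.23])        # h_b / L_b (placeholder data)
V, _ = np.polyfit(np.log(U_h), np.log(R), 1)  # slope = Vogel number
print(f"Vogel number V = {V:.2f}")            # ~ -0.6 for flexible sheets
\end{verbatim}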
\begin{figure}[H]
\begin{center}
\epsfig{file=DynReconfig_BuoyEffect_PRFv2_v5.eps,width=1\textwidth,keepaspectratio=true}
\end{center}
\caption{[TOP] Time-averaged sheet reconfiguration $\bar{ \mathcal{R} } = \bar{h}_b/L_b $ in the presence of B\'enard-K\'arm\'an vortices, as a function of the average water speed across the sheet height ($U_h$). It illustrates that the average Vogel number is $\mathcal{V} \approx -0.6 \pm 0.1$ for all cases, except for sheet $S442$ ($\triangleleft$). [BOTTOM] The same data expressed in terms of the \textit{local} Cauchy number $C_y^{h} = 12 \left({C_d \frac{1}{2} \rho U_h^2}/{E}\right) \left( { L_b^3}/{e_b^3}\right)$ and compared with continuous lines (green) as obtained from the bending beam model (eqn. \ref{eq:beamAdim}). The inset compares the average sheet deflection against the corresponding cylinder's vertical position, which is equal to the sheet height ($\pm 2$~mm) in a uniform flow in the absence of B\'enard-K\'arm\'an vortices.}
\label{fig:Reconfig}
\end{figure}
Since the time-averaged reconfiguration of a sheet should also provide a measure of the average drag reduction, if any, during the vortex-forced motion of thin flexible sheets, it is reasonable to define a \textit{dynamic} reconfiguration number $\bar{\mathcal{R}} = {\bar{h}_b}/{L_b}$, analogous to the \textit{static} reconfiguration number $\mathcal{R}$. In this context, figure \ref{fig:Reconfig} (top) presents the non-dimensional time-averaged deflected height $\bar{\mathcal{R}} = \bar{h}_b / L_b$ for each of the five sheets. Here, instead of the channel-depth-averaged water speed ($U_0 = 22$ -- $88$ mm~s$^{-1}$), we present our experimental results in terms of the average water speed across the time-averaged deflected sheet height $\bar{h}_b$, based on the classical \textit{Coles} law for the channel velocity profile $U(y)$, so that $U_h = \int_0^{\bar{h}_b} U(y) dy /\bar{h}_b $. Each symbol, namely, $\triangleleft$, $\Box$, $\bigcirc$, $\medwhitestar$, and $\Diamond$, represents a different sheet of increasing stiffness ratio ($L_b/e_b$), as provided in Table \ref{tab:PhysicalProperties}. Also, each column of figures corresponds to data for a particular cylinder diameter $d_0$. In all cases corresponding to a fixed $L_b/e_b$, we observe that the average reconfiguration number decreases when the flow speed increases. Furthermore, it is possible to associate a power law, and hence a Vogel number $\mathcal{V}$, with each stiffness ratio ($L_b/e_b$). Except for the sheet with the smallest stiffness ratio, all the other sheets present a Vogel number $\mathcal{V} \approx -0.6 \pm 0.1$. Similar values were previously obtained in the same water channel, but for submerged artificial canopies of kevlar sheets undergoing \textit{static} reconfiguration \cite[see fig $4$(a)]{Barsu2016}. Finally, we also observe a small decrease in the Vogel number for vortices shed by cylinders of increasing diameter.
\begin{figure}
\begin{center}
\epsfig{file=BladeAvgShape_CompilationLbEb_v2Bis.eps,width=0.85\textwidth,keepaspectratio=true}
\end{center}
\caption{Typical time-averaged sheet shape as compared to bending beam model (eqn. \ref{eq:beamAdim}), allowing for the effect of buoyancy in a steady, uniform channel flow.}
\label{fig:ReconfigCompa}
\end{figure}
Furthermore, the \textit{reconfigured} sheet height, say $\mathcal{R} = h_b/L_b$, is usually given as a function of the Cauchy number $C_y = {C_d \frac{1}{2} \rho U_0^2 L_b^3 w_b}/{EI}$~\cite{DeLangre2008, Luhar_Reconfig2011, Barsu2016, Leclercq_JFS2016, Gosselin_JExpBot2019}. Indeed, a low Cauchy number $C_y \ll 1$ represents the case of a sheet that undergoes very little deflection in the flow direction ($\mathcal{R} = h_b/L_b \approx 1$), since the drag force experienced by the sheet is sufficiently small with respect to the internal restoring force. Whereas the case of $C_y \gg 1$ indicates a flexible sheet with large deflection (or, equivalently, a small sheet reconfiguration number $\mathcal{R} \ll 1$) and hence a reduced overall sheet drag force compared to its rigid counterpart.
It is now well-established that a bending beam under large deflection, accounting for the \textit{local} flow drag and the Archimedes force due to buoyancy, provides a satisfactory mechanical model for the \textit{static} reconfiguration of flexible sheets \cite{Alben2002, Luhar_Reconfig2011, Barsu2016, Leclercq_JFS2016}. Since a single rectangular sheet resembles a bluff body, as in the above-mentioned works, the sheet's skin friction is assumed to be negligible here for the sake of simplicity\footnote{Although a recent work by \citet{Bhati2018} suggests that this is not always the case}. For a sheet bent by a steady flow, an expression for the restoring bending moment is simply given by the bending beam model for thin flexible sheets \cite{Chevalier1994}. If $s$ is the curvilinear coordinate along the sheet and $\theta(s)$ the \textit{local} sheet deflection, as represented in fig. \ref{fig:SchemaManip} (d), the restoring bending moment $\mathrm{M}(s)$ at any arbitrary distance $s$ from the sheet's foot should be given by
\begin{equation}
{{EI}\dfrac{d\theta}{d s}} = \int_{s}^{L_b} {\left( x(\xi) - x(s) \right)} dF_{\mathrm{A}} - \int_{s}^{L_b} {\left( \xi - s \right)} dF_\mathrm{D},
\label{eq:beam}
\end{equation}
where $dF_{\mathrm{A}} = \Delta \rho g \left( e_b w_b d \xi \right)$ is the \textit{local} Archimedes force and $dF_\mathrm{D} = C_d \frac{1}{2} \rho U^2 \sin ^2{\left(\theta(\xi)\right)} dA_{f}$ is the normal component of the \textit{local} profile drag, with $\Delta \rho = \rho - \rho_b$ the density difference between the fluid and the sheet, and $dA_f = w_b d \xi$ the \textit{local} frontal area of the \textit{reconfigured} sheet\footnote{Note that the effect of the sheet's curvature on the bending moment due to the drag force and the tensile stress along the length of the sheet are neglected for simplicity.}. The above equation can be further simplified by taking both $U(s) \equiv U_0$ and $EI$ to be invariant across the sheet.
Thereby
\begin{equation}
\dfrac{d^3 \theta}{d \tilde{s}^3} = B_a \left( \sin \theta \left(1 - \tilde{s}\right) \dfrac{d \theta}{d \tilde{s}} + \cos \theta \right) - C_y \sin^2 \theta,
\label{eq:beamAdim}
\end{equation}
where $\tilde{s}=s/L_b \in \left[ 0, 1 \right]$ is the non-dimensional curvilinear coordinate, $B_a = \Delta \rho g w_b e_b L_b^3/EI$ is the so-called Buoyancy number and $C_{y}= C_{d}\rho w_b L_b^3{U}^2_0/2EI$ is the Cauchy number. Typical values of these non-dimensional numbers are also provided in Table \ref{tab:PhysicalProperties}. This model equation can be readily solved by applying the boundary conditions at the sheet extremities, $\theta = \pi/2$ at $\tilde{s} = 0$, and $d\theta/ds = d^2\theta/ds^2 = 0$ at $\tilde{s} = 1$.
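As a numerical illustration, eqn.~(\ref{eq:beamAdim}) can be solved with a standard collocation boundary-value solver; the sketch below does so for one moderate parameter pair (roughly sheet S$442$ at a low speed) and recovers the reconfiguration number as the integral of $\sin\theta$. The initial guess is crude, and convergence at very large $C_y$ may require parameter continuation.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

B_a, C_y = 0.55, 10.0                  # illustrative parameter pair

def rhs(s, Y):
    th, dth, d2th = Y
    d3th = (B_a * (np.sin(th) * (1.0 - s) * dth + np.cos(th))
            - C_y * np.sin(th)**2)
    return np.vstack([dth, d2th, d3th])

def bc(Ya, Yb):
    # theta = pi/2 at the foot; free end: dtheta/ds = d2theta/ds2 = 0
    return np.array([Ya[0] - np.pi / 2.0, Yb[1], Yb[2]])

s = np.linspace(0.0, 1.0, 80)
Y0 = np.vstack([np.pi / 2.0 * (1.0 - 0.5 * s),   # crude initial guess
                -0.5 * np.ones_like(s), np.zeros_like(s)])
sol = solve_bvp(rhs, bc, s, Y0)
theta = sol.sol(s)[0]
R = np.sin(theta).mean()   # ~ integral of sin(theta) ds on the uniform grid
print(f"converged: {sol.success}, reconfiguration R = {R:.2f}")
\end{verbatim}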
Let us now investigate the \textit{dynamic} reconfiguration number as a function of the Cauchy number, as provided in figure \ref{fig:Reconfig} (bottom). As before, we use the average water speed $U_h$ across the time-averaged deflected sheet height $\bar{h}_b$ to express the data in terms of the \textit{local} Cauchy number $C_y^{h} = 12 {C_d \frac{1}{2} \rho U_h^2}/{E} \left(L_b/e_b\right)^3$. Symbols represent experiments and continuous lines are computed from the bending beam model in eqn.~(\ref{eq:beamAdim}) at the corresponding Cauchy and Buoyancy numbers. For a given sheet, say for instance S$442$ as represented by $\triangleleft$, the sheet reconfiguration decreases monotonically as $C_y$ increases. However, at a fixed Cauchy number $C_y$, the reconfiguration data from all sheets do not collapse on a single master curve. A closer observation reveals that this is due to buoyancy effects, which tend to increase the reconfiguration for sheets with a larger Buoyancy number $B_a = 12 {\Delta \rho g e_b}/{E} \left({L_b}/{e_b}\right)^3$. Despite the fact that the above-mentioned bending beam model is only valid for the case of a \textit{static} reconfiguration under a steady uniform flow, it predicts the trend with both the Cauchy number and the Buoyancy number for all cases, and displays a reasonable agreement for $C_y > \mathcal{O}(1)$. However, differences with the experimental data become visible as the diameter of the upstream cylinder increases (see also figure \ref{fig:ReconfigCompa} for comparisons between the sheet shapes and those computed from expression (\ref{eq:beamAdim})).
Here, it is inferred that the \textit{dynamic} reconfiguration curve is strikingly similar to that of the \textit{quasi-static} regime. This is further elucidated in the inset of figure \ref{fig:Reconfig} (bottom), wherein the sheet deflection height $h_0/L_b$ in the absence of vortices is compared with $\bar{ \mathcal{R} } = \bar{h}_b /L_b $. Firstly, the mean sheet tip position is situated just above the value obtained for the \textit{quasi-static} case. In all the cases studied here, the sheet deflects a little less in the presence of B\'enard-K\'arm\'an vortices. In other words, the vortices slightly ``lift-up'' the sheet's tip, up to approximately $5$ -- $10$\% of the sheet length. Note that this ``lift-up'' effect is less visible for large \textit{dynamic} reconfiguration $\bar{ \mathcal{R} } \ll 1$ and small vortices ($d_0 = 10$ mm), as inferred for the case S$4000$, denoted by $\Diamond$ in figure \ref{fig:Reconfig}(b). Secondly, it suggests that the \textit{dynamic} sheet reconfiguration is only slightly modified, notwithstanding the relatively wide variation of the tip oscillation amplitude observed in this study, as described in the following section. Finally, figure \ref{fig:ReconfigCompa} compares the time-averaged sheet shape with that computed via the bending beam model (eqn. \ref{eq:beamAdim}), indicating that the model provides a satisfactory estimate. The remaining differences might be due not only to the model's restriction that the average water speed is uniform across the sheet, but also to the fact that the thin polyethylene sheets used in this study present local plastic deformation, as already explained in section \ref{sec:set-up}. Note that the latter fact can also be inferred in the first column of figure \ref{fig:ReconfigCompa}, which provides comparisons for the case with the smallest tip oscillations, namely, the one at the slowest speed ($U_0 = 22$ mm s$^{-1}$) and the smallest cylinder diameter ($d_0 = 10$ mm).
\section{Sheet oscillation amplitude}
\label{sec:TipAmplitude}
\begin{figure}[H]
\begin{center}
\epsfig{file=AmplitudeVsCircCauchy_v2.eps,width=0.85\textwidth,keepaspectratio=true}
\end{center}
\caption{Tip oscillation amplitude $\delta_b$ as a function of the water speed $U_{h0}$ at the cylinder center for (a) $d_0 = 40$ mm, (b) $d_0 = 20$ mm and (c) $d_0 = 10$ mm, for five different sheet physical properties (see also Table \ref{tab:PhysicalProperties}). (d) All data from above, given here with respect to an equivalent vortex \textit{circulation} $U_{h0} d_0$, and (e) the normalised amplitude $\delta_b / d_0$ as a function of the Cauchy number. Note that $U_{h0} d_0$ is proportional to a \textit{local} cylinder Reynolds number $U_{h0} d_0 / \nu$.}
\label{fig:AmplitudeVsReynolds}
\end{figure}
The general observations evoked at the end of section \ref{sec:TipDyn} can be quantified by defining a proper expression for the oscillation amplitude. While it is possible to simply define the amplitude as the maximum excursion of the observed tip motion, a more robust choice is a statistical measure. For this purpose, it is useful to define the tip amplitude based on the standard deviation, as in
\begin{eqnarray}
\delta_b \equiv 2\sqrt{{\overline{\tilde{x}_b^2}} + {\overline{\tilde{y}_b^2}}} = 2\sqrt{\overline{ \left(x_b(t) - \bar{x}_b \right)^2} + \overline{\left( y_b(t) - \bar{y}_b \right)^2}}.
\label{eqn:TipAmp}
\end{eqnarray}
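In terms of the recorded tip traces, eqn.~(\ref{eqn:TipAmp}) amounts to twice the root-mean-square excursion; a short sketch with synthetic signals (the traces below are invented placeholders):
\begin{verbatim}
import numpy as np

def tip_amplitude(x_b, y_b):
    """Tip amplitude defined above: twice the r.m.s. excursion."""
    return 2.0 * np.sqrt(np.var(x_b) + np.var(y_b))

t = np.linspace(0.0, 60.0, 1500)            # one minute at 25 Hz
x_b = 1.0 * np.sin(2 * np.pi * 0.5 * t)     # synthetic traces [mm]
y_b = 4.0 * np.sin(2 * np.pi * 0.5 * t)
print(tip_amplitude(x_b, y_b))              # 2*sqrt(0.5 + 8.0) ~ 5.8 mm
\end{verbatim}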
Figures \ref{fig:AmplitudeVsReynolds}(a) -- (c) then display such an amplitude as a function of the local water speed $U_{h0} = U(y = h_0)$, computed from the \textit{Coles law} channel velocity profile as before in section \ref{sec:set-up}, for various cylinder diameters. Here, each symbol corresponds to a stiffness ratio as given in Table \ref{tab:PhysicalProperties}. As already remarked, figure \ref{fig:AmplitudeVsReynolds}(a), at a given $d_0 = 40$ mm, confirms that the tip fluctuation amplitude increases proportionally with the water speed $U_{h0}$ for a given sheet. In addition, sheets with a larger stiffness ratio $L_b/e_b$ show increasingly bigger amplitudes. In particular, note that the data for the sheets with approximately the same stiffness ratio ($\Box$, S$1263$ and $\bigcirc$, S$1400$) fall almost on the same linear trend line. Now, as the cylinder diameter is decreased, as in figures \ref{fig:AmplitudeVsReynolds}(b) and (c), similar behaviors are again observed, but the tip amplitudes $\delta_b$ are smaller as well. When all data are put together with respect to $U_{h0} d_{0}$, which is proportional to the \textit{local} cylinder Reynolds number, as in figure \ref{fig:AmplitudeVsReynolds} (d), it is clear that the tip amplitude increases not only with the sheet stiffness ratio $L_b/e_b$, but also with $U_{h0} d_{0}$. Finally, we present in figure \ref{fig:AmplitudeVsReynolds} (e) the non-dimensional tip oscillation amplitude $\delta_b / d_{0}$ as a function of the sheet Cauchy number $C_y$ given in eqn. \ref{eq:CauchyNumber}. Here, almost always, all data corresponding to a given cylinder diameter increase monotonically with the Cauchy number. It can, therefore, be inferred from these observations that the relevant first-order magnitude of $\delta_b / d_{0}$ might be captured by the Cauchy number, while the details of the vortex-laden flow influence the rest.
Note that the results in section \ref{sec:SheetReconfig} suggest that the time-averaged reconfiguration of all sheets here is essentially similar to the sheet reconfiguration in a steady, uniform flow. So, it is first assumed that $(i)$ the mean flow in the channel provides the average sheet deflection along the flow direction, and it is further expected that $(ii)$ the periodic excitation by B\'enard-K\'arm\'an vortices then provides the necessary vibrational energy for the flow-bent flexible sheet. In order to illustrate this effect, let us consider the toy-model shown in figure \ref{fig:TorsionModel}(b). It consists of a rigid flat plate supported by a torsional spring of stiffness, say $\mathcal{K}$, and exposed to a steady, uniform flow containing a regular array of B\'enard-K\'arm\'an vortices, each moving at some characteristic velocity in the direction of the steady flow. A simple model can be derived if we decompose the total work done by the vortex-laden flow on the flexible sheet into two distinct parts: $(i)$ the steady component of the flow leads to the average angular position, say $\bar{\xi}$, and $(ii)$ the periodic interaction between the vortices and the spring-supported flat plate results in torsional vibrations of the spring, and hence in the plate's angular position $\xi(t)$. Then, locally, the average profile drag-induced moment should be balanced by the average restoring moment in the deflected sheet $EI {d \theta}/{ds}$, where ${d \theta}/{ds}$ is the sheet's local curvature ($1/R_c$). As already observed in section \ref{sec:SheetReconfig}, at very large Cauchy number, which compares the drag force against the elastic restoring force, $\bar{h}_b \ll L_b$ and hence it can safely be assumed that $EI {d \theta}/{ds} \approx {EI}/{\bar{h}_b}$, since $R_c$ is approximately equal to the time-averaged sheet deflection $\bar{h}_b$ (see figure \ref{fig:TorsionModel}). And so, we obtain
\begin{eqnarray}
\dfrac{EI}{\bar{h}_b} &\sim &\left( C_d \dfrac{1}{2} \rho U_{0}^2 w_b \bar{h}_b \right) \times \bar{h}_b, \\ \notag
\Rightarrow \dfrac{\bar{h}_b}{L_b} &\sim &C_y^{-1/3},
\label{Eq:DynamicReconfig}
\end{eqnarray}
a result analogous to the well-known scaling for the \textit{static} reconfiguration number~\cite{DeLangre2008, Gosselin_JExpBot2019} that leads to drag reduction in flexible plates. When $C_y \ll 1$, on the other hand, ${\bar{h}_b} \approx {L_b}$. Note that the experimental data provided in the previous section, as in figure \ref{fig:Reconfig}, match fairly well with the large-Cauchy-number scaling law, irrespective of the Buoyancy number. In general, the above result can also be expressed as ${\bar{h}_b}/{L_b} \sim C_y^{\mathcal{V}/2}$, where the Vogel number is $\mathcal{V} = -2/3$ at $C_y \gg 1$~\cite{DeLangre2008}.
\begin{figure}
\begin{center}
\epsfig{file=TorsionalSpringModel_v3Bis.eps,width=0.7\textwidth,keepaspectratio=true}
\end{center}
\caption{Schematic of the flow and sheet parameters along with that of the torsional spring model.}
\label{fig:TorsionModel}
\end{figure}
Furthermore, we propose that the vibrational energy of the sheet is solely taken from the B\'enard-K\'arm\'an vortices, at some rate depending on the characteristics of the incoming unsteady flow, and over some time scale proportional to the shedding period $1/f_v$. Therefore, for the toy-model, we have
\begin{eqnarray}
\dfrac{1}{2} \mathcal{K} \left(\dfrac{\delta_b}{L_b}\right)^2\sim \left(\dfrac{1}{2} \rho U_{0}^3 w_b d_v\right) \dfrac{1}{f_v},
\label{Eq:BilanEnergie}
\end{eqnarray}
where the left-hand side is the vibrational energy of the torsional spring and the right-hand side is the product of the \textit{local} kinetic energy transfer rate, taken here as proportional to $1/2 \rho U_{0}^3 w_b d_v$, and the typical timescale during which the transfer takes place periodically, i.e., $1/f_v$. Here, $d_{v}$ is some typical lengthscale of the B\'enard-K\'arm\'an vortices. Also, in the above expression, the stiffness $\mathcal{K}$ of the \textit{pre-tensioned} torsional spring can be ascertained from the equilibrium condition that $\mathcal{K} \bar{\xi} \equiv EI {d \theta}/{ds} \approx {EI}/{\bar{h}_b}$, with $\bar{\xi} \approx {\bar{h}_b}/{L_b}$ when $C_y \gg 1$, since ${\bar{h}_b} \ll {L_b}$. Now, in terms of the Vogel number $\mathcal{V} = -2/3$ at $C_y \gg 1$ (or $\mathcal{V} = 0$ at $C_y \ll 1$), the \textit{dynamic} reconfiguration number is given by $\bar{h}_b/{L_b} \sim C_y^{\mathcal{V}/2}$. Hence, it can be deduced that $\mathcal{K} \sim \left({EI}/L_b\right) C_y^{-\mathcal{V}}$, and thereafter expression (\ref{Eq:BilanEnergie}) leads to
\begin{eqnarray}
\left(\dfrac{\delta_b}{L_b}\right)^2 &\sim & \left( \dfrac{\rho U_{0}^2 w_b L_b}{EI} \right) \left( \dfrac{U_0 d_v}{f_v} \right) C_y^{\mathcal{V}}, \\ \Rightarrow \dfrac{\delta_b}{d_v} &\sim &C_y^{n} {St_v}^{-1/2},
\label{Eq:AmpTipScaling}
\end{eqnarray}
where $St_v = f_v d_v /U_{0}$ is the Strouhal number based on a typical size of the eddies, and $n \equiv \left(1 + \mathcal{V}\right)/2$, i.e., $n = 1/6$ for sufficiently large Cauchy numbers $C_y \gg 1$, and $n = 1/2$ otherwise.
In fact, figure \ref{fig:AmplitudeVsReynolds}(e) could be seen as an illustration of the above result, if one assumes a constant Strouhal number $St_h$, a uniform velocity profile $U_{h0} = U_0$ and a typical length scale $d_v$ of the vortices equal to the cylinder diameter $d_0$. However, as inferred in section \ref{sec:set-up}, $St_h$ varies between $0.42$ and $0.22$ depending on the cylinder diameter, which also implies that the typical vortex size and strength vary with the ratio $h_0/d_0$. Also, the channel flow is not uniform. Thus, it is expected that the amplitude data should show less dispersion if some details of the B\'enard-K\'arm\'an vortices shed by the cylinders are known.
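To check the two regimes of eqn.~(\ref{Eq:AmpTipScaling}) on rescaled data, the exponents can be read off as log-log slopes; the sketch below uses invented placeholder values for the compensated amplitude $\left(\delta_b/d_v\right) St_v^{1/2}$, and the regime-splitting threshold is an assumption of this illustration.
\begin{verbatim}
import numpy as np

C_y = np.array([5e0, 5e1, 5e2, 5e3, 5e4])   # Cauchy numbers (placeholder)
A = np.array([0.9, 2.8, 6.0, 9.0, 13.0])    # (delta_b/d_v)*St^0.5 (placeholder)
lo, hi = C_y < 1e2, C_y >= 1e2              # rigid-like / flexible regimes
n_lo = np.polyfit(np.log(C_y[lo]), np.log(A[lo]), 1)[0]
n_hi = np.polyfit(np.log(C_y[hi]), np.log(A[hi]), 1)[0]
print(f"exponent, small C_y: {n_lo:.2f} (expected 1/2)")
print(f"exponent, large C_y: {n_hi:.2f} (expected 1/6)")
\end{verbatim}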
Proper Particle Image Velocimetry (PIV) measurements in a flow domain which includes both the cylinder and the entire deflected sheet, not to mention the recirculation zone behind the sheet, are a huge task and beyond the scope of the present work. For the purpose of this work, a set of PIV measurements in the region immediately downstream of the cylinder was undertaken. Not all configurations were considered; in this study, only six pairs of ($h_0$, $U_0$) were chosen for each cylinder diameter $d_0$. Nonetheless, these parameters cover the wide range of values for $h_0$, $U_0$ and $d_0$ presented in figure \ref{fig:AmplitudeVsReynolds}. A high-speed camera with a resolution of $1024$ px $\times$ $1024$ px is used to capture images of the particle-seeded flow at a frame rate of $125$ fps. For the measurements, tracer particles with a density of $1005$ kg~m$^{-3}$ and diameters of $50$ $\mu$m and $80$ $\mu$m are added to the flow. A system of mirrors spreads a laser beam into a thin laser sheet which, depending on the cylinder diameter $d_0$, covered from $6$ to $10$ times $d_0$. Standard recommendations \cite{Adrian_2011PIV} were followed for seeding, lighting and the relevant post-processing using the \textit{DaVis Lavision} software. Finally, to obtain the frequency associated with the B\'enard-K\'arm\'an vortex street, a Fast Fourier Transform is performed on the instantaneous vertical velocity given by the PIV measurements. The frequency associated with the maximum spectral density is considered to be the vortex shedding frequency. To decrease the error, this process is repeated for each streamwise location on the centreline behind the cylinder to compute an average shedding frequency $f_{PIV}$.
\begin{figure}
\begin{center}
\epsfig{file=fUDV_vs_fPIV_Circulation_UCyl_v2.eps,width=1\textwidth,keepaspectratio=true}
\end{center}
\caption{(a) Comparison between the shedding frequency measured using Particle Image Velocimetry ($f_{PIV}$) and UDV ($f_{UDV}$), and (b) the product $\omega_m d_0$ against the cylinder centreline velocity $U_{h0}$, where the \textit{average} maximum vorticity $\omega_m$ is obtained from the histogram of the absolute maxima in the instantaneous vorticity profile at a fixed stream-wise location $x = 3 d_0$.}
\label{fig:PIV_vs_UDV}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=AmplitudeRescaledVsCauchyU0_v2Bis.eps,width=0.6\textwidth,keepaspectratio=true}
\end{center}
\caption{All data from figure \ref{fig:AmplitudeVsReynolds}(e), expressed in terms of the \textit{rescaled} oscillation amplitude of the sheet's free end $\delta_b /d_0$, as given by eqn.~(\ref{Eq:AmpTipScaling}), versus the Cauchy number $C_y = 12 \left({C_d \frac{1}{2} \rho U_0^2}/{E}\right) \left({ L_b^3}/{e_b^3} \right)$. Two distinct regimes are visible here.}
\label{fig:AmplitudeVsCyBis}
\end{figure}
Such PIV measurements are compared with the UDV-measured shedding frequency $f_{UDV} = f_v$ in figure \ref{fig:PIV_vs_UDV}(a). The scatter plot also provides colored data points, from black to bright yellow, which represent the ratio $h_0/d_0$, from $6$ to $0.25$, respectively. Clearly, all data fall between the trend lines $f_{PIV} = f_{UDV}$ and $f_{PIV} = 0.8f_{UDV}$. For larger frequencies, irrespective of $h_0/d_0$, the equality is less pronounced. Furthermore, the instantaneous vorticity field can be computed from the measured velocity field at $x = 3d_0$ from the cylinder center. The absolute value of the instantaneous vorticity profile presents a maximum at a vertical position corresponding to either a counter-clockwise, or a clockwise, rotating vortex. Now, at each time step, the absolute maximum vorticity, including its sign, is counted in order to build up a histogram (not provided here). In general, such a histogram displays two peaks, each representing the most-likely maximum vorticity of the clockwise and counter-clockwise vortices. The half-distance between these peaks is then taken as the \textit{average} maximum vorticity $\omega_m$ contained in the shed vortices for a given set of experimental flow conditions, namely, $h_0$, $d_0$ and $U_0$. Figure \ref{fig:PIV_vs_UDV}(b) displays the product $\omega_m d_0$ as a function of the water speed at the cylinder centreline $U_{h0}$. Again, despite the variations in $h_0/d_0$, it is observed that, for a given cylinder diameter, $\omega_m d_0 \propto U_{h0}$. So, by assuming that the maximum vorticity in B\'enard-K\'arm\'an vortices is $U_{h0}/d_v$, where $d_v$ is some typical size of the vortex core, it is then possible to estimate from figure \ref{fig:PIV_vs_UDV}(b) the ratio $d_v/d_0$ for different cylinder sizes $d_0$.
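A sketch of this $\omega_m$ extraction is given below, for hypothetical PIV velocity fields $u$, $v$ of shape (time, $y$, $x$); the bin count and the peak-picking rule are assumptions of this illustration.
\begin{verbatim}
import numpy as np

def average_max_vorticity(u, v, dy, dx, ix):
    """omega_m from the signed extrema of the vorticity profile at x-index ix."""
    om = np.gradient(v, dx, axis=2) - np.gradient(u, dy, axis=1)
    prof = om[:, :, ix]                              # (n_t, n_y) profiles
    idx = np.abs(prof).argmax(axis=1)                # extremum location
    peaks = prof[np.arange(prof.shape[0]), idx]      # signed maxima, per frame
    hist, edges = np.histogram(peaks, bins=50)
    centres = 0.5 * (edges[:-1] + edges[1:])
    neg, pos = centres < 0, centres > 0              # clockwise / counter-clockwise
    w_cw = centres[neg][hist[neg].argmax()]          # most likely negative peak
    w_ccw = centres[pos][hist[pos].argmax()]         # most likely positive peak
    return 0.5 * (w_ccw - w_cw)                      # half-distance between peaks
\end{verbatim}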
Thereby, all the ingredients necessary to test the proposed scaling in eqn.~(\ref{Eq:AmpTipScaling}) are now obtained. Figure \ref{fig:AmplitudeVsCyBis} presents the same tip amplitude data given in figure \ref{fig:AmplitudeVsReynolds}(e), but rescaled in terms of the experimentally obtained values of the Strouhal number $St_h$, the cylinder centreline velocity $U_{h0}$ and the characteristic vortex length scale $d_v/d_0$ from figure \ref{fig:PIV_vs_UDV}(b). Clearly, the experimental data ranging over all Cauchy numbers investigated here are regrouped around two distinct trend lines, namely $C_y^{1/2}$ and $C_y^{1/6}$, corresponding to the cases of moderately small and large Cauchy numbers, respectively, as expressed by eqn.~(\ref{Eq:AmpTipScaling}). In the former case, the sheet reconfiguration is sufficiently small, so that it represents the vibration of a rigid sheet forced by a vortex street, whereas, in the latter case, the sheets can be considered to be flexible. These results strongly suggest that the torsional spring model contains the essential mechanism to explain the observed vortex-forced vibration of thin sheets. They also imply that, analogous to drag reduction via reconfiguration under an external flow, a flexible sheet experiences a smaller vibration amplitude compared to that of a rigid sheet when excited by a B\'enard-K\'arm\'an vortex street.
\section{Sheet beating frequency}
\label{sec:TipFreq}
\begin{figure}
\begin{center}
\epsfig{file=freq_pColorPlot_wSymbols_v2.eps,width=1\textwidth,keepaspectratio=true}
\end{center}
\caption{Power spectral density for all experimental cases as a function of the forcing frequency $f_v$, corresponding to the imposed periodic shedding of B\'enard-K\'arm\'an vortices. The frequency content of [TOP] the vertical velocity behind the cylinder, as measured using UDV, and [BOTTOM] the vertical fluctuations of the sheet's free end. The dashed line is given simply to show the trend, while symbols are provided for the sake of reference only (see figure \ref{fig:AmplitudeVsCyBis} for the corresponding sheet physical properties).}
\label{fig:PSD_FreqUDV}
\end{figure}
Figure \ref{fig:PSD_FreqUDV} (a) displays a color plot of the power spectral density of the UDV-measured instantaneous vertical velocity at the cylinder mid-span $z = d_0/2$ and at a distance $3d_0$ downstream of the cylinder. As already mentioned, the UDV measurements were acquired at $2$ MHz for $3$ minutes. Colors, from bright yellow to black, represent the normalised spectra of the $y$-component velocity field across the cylinder diameter. Here, the spectrum at each $y$-coordinate ($y \in [h_0, h_0+d_0]$) is first computed and an average in the discrete Fourier space is then taken to obtain the normalised spectra. The latter present a maximum (bright yellow, in figure \ref{fig:PSD_FreqUDV} (a)) at a frequency $f$ which is, by definition, equal to the vortex shedding frequency $f_v$.
\begin{figure}
\begin{center}
\epsfig{file=freq_fB_vs_fUDV_fullpageBis.eps,width=0.9\textwidth,keepaspectratio=true}
\end{center}
\caption{A detailed view of the measured sheet peak frequency as a function of the vortex shedding frequency $f_v$ for each case. Symbols denote the same cases as in figure \ref{fig:AmplitudeVsReynolds}. The symbol size at each forcing frequency $f_v$ is proportional to the normalized power spectral density. Here, the continuous line ({\color{OliveGreen}{------}}) represents $f_b = f_v$, while the dot-dash line ($\textcolor{red}{-\cdot-}$) and the dotted line ($\textcolor{red}{\cdots}$) indicate the sheet's first and second natural frequencies $f_{n1}$ and $f_{n2}$, respectively, and ($\textcolor{blue}{- - -}$) is the flat-plate shedding frequency based on the sheet \textit{deflected} height, $f_h = 0.145 U_h/\bar{h}_b$. Also, in each case, pink bands indicate the forcing frequency range.}
\label{fig:FreqUDVVsFreqBladeDetails}
\end{figure}
Similarly, figure \ref{fig:PSD_FreqUDV} (b) presents the power spectral density of the sheet-tip position $y_b(t)$. For the sake of comparison, the $x$-axis is kept the same as before. This figure therefore illustrates the energy content of the sheet-tip oscillation for each \textit{forcing} frequency, equal to the shedding frequency of the B\'enard-K\'arm\'an vortices $f_v$. Note that the sheet tip position spectra are not quite the same as the $y$-component velocity power spectra discussed just before. Clearly, there are many cases where the power spectrum is wide for a fixed $f_v$: the sheet's vibrational energy is distributed across different frequencies and the dominant frequency $f_b$ varies from case to case. Nonetheless, the dominant frequency of the sheet tip fluctuations is observed, in general, to be lower than the vortex shedding frequency $f_v$, as already briefly inferred in section \ref{sec:TipDyn}. Also, a second peak is often visible for some cases in figure \ref{fig:PSD_FreqUDV} (b).
To further elucidate the distribution of vibrational energy in the sheet-tip vertical motion, figure \ref{fig:FreqUDVVsFreqBladeDetails} provides a few dominant tip frequencies $f_b$, i.e., those frequencies corresponding to the prominent peaks in the corresponding power spectral density, for various imposed vortex shedding frequencies. The size of the data points is proportional to the normalized power spectra. In this figure, the data in each row correspond to a single sheet's characteristics, in ascending order of the sheet stiffness ratio $L_b/e_b$, while each column denotes data from experiments with the same cylinder diameter $d_0$, increasing from left to right. Here, the continuous line (green) is provided to indicate the cases when $f_b = f_v$, i.e., the cases where the observed beating frequency of the sheet is equal to the forcing frequency due to the B\'enard-K\'arm\'an vortices. It can now be inferred from the data displayed in the last column, for experiments with the largest cylinder ($d_0 = 40$~mm), that the dominant frequency in the sheet-tip fluctuations is simply equal to the forcing frequency $f_v$, irrespective of the Cauchy number $C_y = 12 \left({C_d \frac{1}{2} \rho U_0^2}/{E}\right) \left({ L_b^3}/{e_b^3} \right)$. This is in contrast with the case of $d_0 = 10$~mm, wherein $f_b < f_v$ for all sheets and multiple prominent peaks appear in the power spectra of the tip's vertical fluctuation $\tilde{y}_b (t)$. For some cases, namely, S$442$ and S$1263$, the prominent frequencies are about the first, or second, natural sheet frequency, respectively, as given by eqn.~(\ref{Eq:FreqNat}). Whereas, for the other cases, the data are scattered around the dashed line (blue) which denotes the expression $f_h = 0.145 U_0/\bar{h}_b$, the flat-plate shedding frequency \cite{Blevins} based on the sheet deflected height $\bar{h}_b$, as known from the \textit{dynamic} sheet reconfiguration. We also observe similar dynamical regimes for the intermediate cylinder size $d_0 = 20$~mm: (i) for the most rigid sheet S$442$ and for $C_y < 20 $, the dominant sheet frequency is seen to be the forcing frequency $f_v$, with a few dominant peaks at the sheet's first natural frequency $f_{n1}$; (ii) for the most flexible sheets, namely, S$2000$ and S$4000$, and for $C_y > 10^3$, the sheets display oscillations at the vortex shedding frequency of an inclined flat plate, given by $f_h = 0.145 U_0/\bar{h}_b$; and (iii) for moderate stiffness ratios and Cauchy numbers, the sheet oscillates either at one of its natural frequencies, which is close to the forcing frequency $f_v$, or at the inclined-plate shedding frequency $f_h$. Finally, it is pointed out here that no conclusive evidence for a critical Cauchy number, nor for a critical length scale of the vortices, is observed in these data. Nonetheless, these observations strongly suggest that, in general, there is a transition from the forced-vortex-synchronous sheet-tip oscillation regime ($f_b = f_v$) to either a regime wherein the sheet-tip oscillations resemble the classical lock-in mode, or a regime wherein the tip vibrations are induced by the sheet's own wake characteristics. The transition between these dynamical oscillation modes should depend on the relative size of the B\'enard-K\'arm\'an vortices and the stiffness ratio.
\section{Modal oscillations : flutter and traveling wave modes}
\label{sec:Waves}
\begin{figure}
\begin{center}
\epsfig{file=WaveSpeedDemo_518_fullpage.eps,width=0.9\textwidth,keepaspectratio=true}
\end{center}
\caption{(a) Spatio-temporal evolution of the normalized vertical displacement $\tilde{y}(s, t)/2\sqrt{\overline{\tilde{y}(s, t)^2}}$ for S$2000$ at $U_0 = 8.8$~cm s$^{-1}$, (b) Comparison between the vertical blade displacements at two different points on the sheet, namely, $\tilde{y}(s = 0.9L_b, t)$ and $\tilde{y}(s = 0.3L_b, t)$, illustrating a time lag $\Delta t_\delta > 0$ and (c) Evolution of the time lag $\Delta t_\delta$, as given by the cross-correlation between $\tilde{y}(s = 0.9L_b, t)$ and $\tilde{y}(s, t)$, for various values of $s \lesssim L_b$. The speed $U_w = 10.8$~cm s$^{-1}$ at which the vertical fluctuations propagate towards the sheet-tip is obtained by a linear data fit ($---$).}
\label{fig:PhaseLag}
\end{figure}
\begin{figure}
\begin{center}
\epsfig{file=WaveSpeedVsCauchy_v7Bis.eps,width=0.5\textwidth,keepaspectratio=true}
\end{center}
\caption{(a) The speed $U_w$ at which $y$-fluctuations travel towards the sheet-tip as a function of the time-averaged reconfiguration number $\left\langle{\mathcal{R}}\right\rangle = \left\langle{h}\right\rangle_b/L_b$ and (b) the normalized \textit{wave} speed $U_w / U_{h0}$ versus the Cauchy number $C_y$, where $U_{h0}$ is the water speed at the cylinder centre.}
\label{fig:TraveillingWaveSpeed}
\end{figure}
In section \ref{sec:TipDyn}, on the general remarks on sheet-tip dynamics, it was pointed out that certain sheet-tips exhibit mild \textit{flutter}-like \textit{back-and-forth} motion while others present strong vertical oscillations (see figure \ref{fig:BTip}). Indeed, the instantaneous shapes of the stiffest sheet S$442$ and the least stiff sheet S$4000$ in figure \ref{fig:ImgDetec} are distinctly different. In the former case, the local curvature does not change its sign throughout the length of the sheet, while the latter presents a wave-like shape. This is the subject matter of this section.
Figure \ref{fig:PhaseLag} (a) displays a color plot of the normalized \textit{local} vertical oscillation amplitude $\tilde{y}(s, t)/{\delta} (s,t)$, where ${\delta} (s,t) = 2\sqrt{\overline{\tilde{y}(s, t)^2}}$, at different fixed points on the sheet as indicated by the curvilinear coordinate $s$, so that $\delta (s =L_b,t) = \delta_b$, the tip amplitude defined in eqn.~(\ref{eqn:TipAmp}). Data presented in figure \ref{fig:PhaseLag}(a) correspond to the entire observation period ($\sim 7$ mins), but data close to the sheet foot are not presented, as the image detection near the channel bottom is poor, except for the sheet anchoring point. Bright yellow represents the upward motion ($\tilde{y}(s, t) >0$) of the blade with respect to its time-averaged position, and vice versa ($\tilde{y}(s, t) < 0$) for the darker colors. A careful observation then indicates that there is a lag between the upward fluctuations at the sheet-tip ($s = L_b$) and those at points located at $s < L_b$. The same is true for downward fluctuations as well. This lag is readily visible in figure \ref{fig:PhaseLag} (b), which compares the $y$-fluctuations at $s = 0.9L_b$ and $s = 0.3L_b$ (see also, supplementary video \textit{III}). In general, two oscillation states are observed in the video. In the first regime, which resembles a low-amplitude \textit{flutter}, the blade moves \textit{back-and-forth} about a mean reconfigured position; in the second regime, \textit{transverse waves} originate at some point $s < L_b$ and move along the sheet's length towards its free-end. Figure \ref{fig:PhaseLag} (c) presents the measured time lag $\Delta t_\delta (s)$ between the sheet $y$-fluctuations at some arbitrary point $s \in [0, L_b]$, i.e., $\tilde{y}(s, t)$, and the sheet-tip motion $\tilde{y}(s = L_b, t)$. The time lag $\Delta t_\delta$ at a point on the sheet increases linearly with its distance from the sheet-tip. Indeed, this corresponds to a constant speed at which the transverse waves progress towards the sheet-tip, which in this case is $U_w = 3.8$~cm~s$^{-1}$, somewhat greater than the depth-averaged water speed $U_0 = 2.2$~cm~s$^{-1}$.
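The wave speed can be estimated from such lag measurements in a few lines. The sketch below (assuming the fluctuation signals are available as a NumPy array; the function and variable names are ours, not from the original processing scripts) computes the time lag of each station by cross-correlation and fits a line to recover $U_w$:
\begin{verbatim}
import numpy as np

def wave_speed(y, s, fs):
    """y: (n_points, n_samples) fluctuations; s: coordinates (m);
    fs: sampling frequency (Hz). Returns the transverse wave speed."""
    ref = y[-1]                          # reference signal near the tip
    lags = []
    for yi in y:
        xc = np.correlate(yi - yi.mean(), ref - ref.mean(), mode="full")
        lags.append((np.argmax(xc) - (len(ref) - 1)) / fs)
    # lag grows linearly with distance from the tip: slope = -1/U_w
    slope, _ = np.polyfit(s, np.array(lags), 1)
    return -1.0 / slope
\end{verbatim}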
Indeed, it is possible to systematically compute this lag for each of the cases studied here. Figure \ref{fig:TraveillingWaveSpeed} displays such measurements of the transverse wave speed $U_w$. Data corresponding to $U_w = 0$ show no time lags, i.e., $\Delta t_\delta = 0$. Clearly, almost all data points corresponding to the stiffest sheet S$442$ present \textit{flutter}-like oscillations, as $U_w = 0$. In contrast, when the Cauchy number $C_y \gtrsim 10^2$, the transverse wave speed $U_w > 0$. This implies that all other sheets display \textit{wavy} modal oscillations with transverse waves which advance in the flow direction. Note that these observations are valid for all cylinder diameters $d_0$ in our experiments; the shortest sheets in this study are about $84$~mm long and the largest cylinder is about $40$~mm in diameter. When $C_y$ is sufficiently large, $\bar{h}_b \lesssim 0.5 L_b$ (see figure \ref{fig:ReconfigCompa}). For these cases, most of the bending stress in the sheet is concentrated at the foot, where the local radius of curvature is approximately $\bar{h}_b$. Thereby, an effective free-end of length $l \sim L_b - \bar{h}_b$ is available for wave-like sheet motion, provided this length scale is at least comparable with, or greater than, the typical size of a vortex shed from the cylinder. This suggests that transverse waves appear only when the Cauchy number is sufficiently large, so that the typical vortex size is smaller than the length of the ``stress-free'' end of a flexible sheet. Furthermore, these two regimes can be identified with those found for the non-dimensional oscillation amplitude $\delta_b/d_0$: the flutter mode occurs when $\delta_b/d_0 \propto C_y^{1/2}$, whereas the flag-like oscillations occur in the regime where $\delta_b/d_0 \propto C_y^{1/6}$. Finally, in figure \ref{fig:TraveillingWaveSpeed}, the observed wave speed $U_w$ varies between $1$ and $3$ times the local water speed at the cylinder centre $U_{h0}$; as the Cauchy number increases, it decreases and tends towards $U_w/U_{h0} \sim 1$.
\section{Conclusions}
\label{sec:conclusions}
The motion of an \textit{isolated} quasi-2D artificial sheet subject to a transverse water flow that advects a periodic array of B\'enard-K\'arm\'an vortices, shed by a cylinder upstream, is experimentally investigated. Thin polyethylene sheets of varying lengths and thicknesses ($L_b = 84$ -- $240$ mm; $e_b = 0.06$ -- $0.19$ mm) and three different cylinder diameters $d_0 = 10$, $20$ and $40$ mm are used for this purpose in a long, narrow water channel. Each experiment consists of rigidly anchoring the sheet to the channel bottom and then systematically exciting its free-end by vortices shed by a cylinder upstream. The forcing frequency is $f_v = 0.14$--$2.1$~Hz for different depth-averaged water speeds $U_0 = 22$ -- $88$~mm~s$^{-1}$, so that the cylinder Reynolds number varies between $Re_{d} \equiv \rho U_0 d_0/\mu = 240$ -- $3800$.
Our experiments show that the time-averaged reconfiguration of a thin sheet follows qualitatively the same scaling with the Cauchy number $C_y = 12 \left({C_d \frac{1}{2} \rho U_{0}^2}/{E}\right) \left( { L_b^3 }/{e_b^3} \right)$ as in the case of a thin sheet that bends in a uniform, steady flow so that its time-averaged profile drag is reduced. A simple bending beam model for steady flow which takes into account the drag force, and also the buoyancy force, as in \cite{Luhar_Reconfig2011}, provides a reasonably good match with observations, if the flow speed $U_0$ is replaced by a local speed based on the depth-averaged steady velocity profile given by \textit{Coles'} law \cite{Coles1956law}. Hence, the mean sheet \textit{dynamic} reconfiguration number is very similar to its \textit{static} counterpart, as if the flow were steady. In addition, this suggests that the \textit{average} drag force $\bar{F}_d = \frac{1}{2} C_d \rho U_0^2 \bar{h}_b w_b \propto U_0^{2+\mathcal{V}}$. Here, $\bar{h}_b \propto L_b U_0^{\mathcal{V}}$ is the time-averaged sheet deflection and $\mathcal{V}$ is the so-called Vogel number ($\mathcal{V} < 0$ for drag reduction), which is observed to be $\mathcal{V} \approx -0.6 \pm 0.1$ in the present work.
For a given blade thickness ($e_b$) and length ($L_b$), the oscillation amplitude ($\delta_b$) of the sheet tip increases with the Reynolds number $Re_{d}$. It is also demonstrated that the Cauchy number is the appropriate parameter to scale all data in terms of the non-dimensional amplitude $\delta_b/d_0$. The underlying mechanism that controls sheet-tip oscillations is then analyzed via a toy model, consisting of a torsional spring-mounted rigid flat plate subject to an external flow which is decomposed into a uniform steady flow and a regular array of vortices. If the former is taken to control the \textit{average} sheet reconfiguration and the latter is assumed to provide the necessary work for the forced vibration of the sheet, it is then shown that the rescaled oscillation amplitude $\delta_b/d_v \sim C_y^{(1+\mathcal{V})/2} St_v ^{-1/2}$, where $d_v$ is the typical length scale of the vortex core and $St_v = f_v d_v/U_{0}$ is the \textit{modified} Strouhal number. In particular, for a relatively rigid sheet wherein the \textit{average} drag force provides only a small sheet deflection, i.e., when $\mathcal{V} \approx 0$, $\delta_b/d_0 \sim C_y^{1/2}$. This corresponds to the cases where the Cauchy number is moderate. In this case, the sheet vibrates like a rigid curved plate, as its local curvature does not change sign over the sheet's entire length. On the other hand, at large Cauchy number $C_y > 10^2$, for a relatively flexible sheet wherein the \textit{average} drag force results in a strong sheet reconfiguration, i.e., when $\mathcal{V} \approx -2/3$ \cite{DeLangre2008}, the non-dimensional vibration amplitude of the sheet free-end scales as $\delta_b/d_0 \sim C_y^{1/6}$. In this case, the sheet exhibits modal oscillations like a flapping flag: transverse waves appear which travel towards the sheet free-end. The forward speed of such waves can reach up to three times the \textit{local} flow speed at the cylinder centreline $U_{h0}$, depending on the Cauchy number.
Regarding the beating frequency ($f_b$) of the sheet free-end, three dynamical regimes are observed in this study: (i) tip oscillations follow the forcing frequency $f_v$ corresponding to vortex shedding from the upstream cylinder; (ii) the tip oscillation frequency is related to vortex shedding behind a free inclined rigid plate of frontal height equal to the \textit{average} sheet deflection $\bar{h}_b$, such that $f_b \sim 0.145 U_{h}/\bar{h}_b$; and (iii) sheet vibrations occur at one of its natural frequencies $f_n$, near the forcing frequency. When the cylinder diameter $d_0 = 40$ mm, the sheet tip oscillates at the forcing frequency $f_v$, irrespective of the Cauchy number studied here ($C_y < 10^5$). For smaller cylinders, the sheet displays either \textit{flow-induced vibration} controlled by its wake characteristics as in (ii), or a \textit{lock-in} motion at its natural frequency as in (iii). Furthermore, this transition possibly depends on the \textit{average} sheet reconfiguration and the sheet thickness as well.
As mentioned in the classical book on flow-induced vibration by \citet{Blevins}, studies on flow-induced oscillations can often be classified into two general categories depending on the incoming flow, namely, steady or unsteady flow. Our work is a sub-category of the latter case, wherein we provide a case study of interactions between eddies and a flexible sheet. Indeed, more work is necessary to understand, and hence predict, the above-mentioned dynamical regimes via flow visualization and PIV measurements of the flow around the sheet and in its wake. Moreover, the pertinence of these results for a canopy of flexible structures like plants (artificial or natural), and for different mass ratios as well, is left for future investigations.
\section*{Acknowledgements}
The authors thank St\'{e}phane Martinez from Universit\'{e} Claude-Bernard Lyon$1$ for his technical support in building and maintaining the experimental set-up. PIV measurements were collected during an internship by Christophe Lehmann. We also acknowledge Emily M\"{a}usel and Cl\'{e}ment Pierrot-Minot (CP-M) for helping us measure some of the sheets' physical and mechanical properties. CP-M and JSJ thank Karine Bruy\`{e}re for her kind support and guidance with the linear-displacement facility (INSTRON $8802$) at Ifsttar-TS$2$, LBMC (Lyon-Bron) to estimate the tensile strength of materials used in this work.
This work has benefited from a joint French-German funding support, namely, the DFG-ANR project \textit{ESCaFlex} (ANR-$16$-CE$92$-$0020$, DFG grant $634058$).
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{R}{ecent} \yty{years have witnessed the great success of convolution neural networks (CNNs) applied to image recognition \cite{Krizhevsky2012, Szegedy2015, He2016}, object detection \cite{Girshick2014, Girshick2015, Ren2015} and semantic segmentation \cite{Long2015, Noh2016, Li2017}. The visual tracking community also \abc{sees} an increasing number of trackers \cite{Song2017, Nam2016, Wang2015, Bertinetto2016, Guo2017} adopting deep learning models to boost their performance.} Among them are two dominant tracking strategies. One is the {\em tracking-by-detection} scheme that online trains an object appearance classifier \cite{Song2017, Nam2016} to distinguish the target from the background. The model is first learned using the initial frame, and then fine-tuned using the training samples generated in the subsequent frames based on the newly predicted bounding box. The other scheme is {\em template matching}, which adopts either the target patch in the first frame \cite{Bertinetto2016, Tao2016} or the previous frame \cite{Held2016} to construct the matching model. To handle changes in the target appearance,
the template built in the first frame may be interpolated by the recently generated object template with a small learning rate \cite{Valmadre2017}.
The main difference between these two strategies is that tracking-by-detection maintains the target's appearance information in the weights of the deep neural network, thus requiring online fine-tuning with stochastic gradient descent (SGD) to make the model adaptable,
while in contrast, template matching stores the target's appearance in the object template, which is generated by feed-forward computations. Due to the computationally expensive model updating required in tracking-by-detection, the speed of such methods is usually slow, e.g.\
\cite{Song2017, Nam2016, Nam2016-1} run at about 1 fps,
although they do achieve state-of-the-art tracking accuracy.
Template matching methods, however, are fast
because there is no need to update the parameters of the neural networks. Recently, several trackers \cite{Bertinetto2016, Guo2017, Yang2017, He2018, Wang2018} adopt fully convolutional Siamese networks as the matching model, which demonstrate promising results and real-time speed. However, there is still a large performance gap between template-matching models and tracking-by-detection, due to the lack of an effective method for adapting to appearance variations online.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{weight_illustration.pdf}
\end{center}
\vspace{-6mm}
\caption{Example of template updating on the Basketball video: the control gate signals change along with the appearance variations. When there are large appearance changes, the allocation gate approaches 1, which means a new memory slot is overwritten.
\abc{When there are only small appearance variations in the object template, the read gate is close to 1, which indicates that the most recently read memory slot will be updated.}
See Section \ref{memwrite} for detailed explanations.
}
\vspace{-5mm}
\label{fig:1}
\end{figure*}
In this paper, we propose a dynamic memory network, where the target information is stored and recalled from external memory, to maintain the variations of object appearance for template-matching (See an example in Figure \ref{fig:1}).
Unlike tracking-by-detection where the target's information is stored in the weights of neural networks and \yty{therefore the capacity of the model is fixed by the number of parameters}, the model capacity of our memory networks can be easily enlarged by increasing the size of external memory, which is useful for memorizing long-term appearance variations.
Since aggressive template updating is prone to overfit recent frames and the initial template is the most reliable one,
we use the initial template as a conservative reference of the object and a residual template,
obtained from retrieved memory, to adapt to the appearance variations.
During tracking, the residual template is
gated channel-wise and
combined with the initial template to form the \yty{positive matching template.}
The channel-wise gating of the residual template controls how much each channel of the retrieved template should be added to the initial template, which can be interpreted as a feature/part selector for adapting the template.
\abc{Besides the positive template, a second ``negative'' memory unit stores templates of potential distractor objects. The negative template is used to cancel out non-discriminative channels (corresponding to object parts) in the positive template, yielding the final template, which is convolved with the search image features to get the response map.}
\yty{The reading and writing process of the positive and negative memories, as well as the channel-wise gate vector for the residual template, is controlled by an LSTM (Long Short-Term Memory) whose input is based on the search feature map.}
As the target position is at first unknown in the search image, we adopt an attention mechanism to locate the object roughly in the search image, thus leading to a soft representation of the target for the input to the LSTM controller. This helps to retrieve the most-related template in the memory. \yty{In addition, we further improve the tracking performance by adding an auxiliary classification loss at the end of the CNN feature extractor, which is aimed at improving the tracker's robustness to appearance variations. \abc{The tracking and classification losses serve complementary roles} -- learning features through similarity matching facilitates their ability of precise localization, while training features on the auxiliary classification problem
provides semantic information for tracking robustness.}
The whole framework is differentiable and therefore can be trained end-to-end with SGD. In summary, the contributions of our work are:
\begin{compactitem}
\item We design a dynamic memory network for visual tracking. An external memory block, which is controlled by an LSTM with attention mechanism, allows adaptation to appearance variations.
\item We propose gated residual template learning to generate the final matching template, which effectively controls the amount of appearance variations in retrieved memory that is added to each channel of the initial matching template.
This prevents excessive model updating, while retaining the conservative information of the target.
\yty{\item We propose a negative template memory for storing and retrieving distractor templates, which are used to cancel the response peaks due to distractor objects, thus alleviating
drift problems caused by distractors.}
\yty{\item We add an auxiliary classification branch
after the feature extraction block, \abc{which trains the features to also contain semantic information. This increases the robustness of the features to variations in object appearances,
and boosts the tracking performance.}}
\item We extensively evaluate our algorithm on large scale datasets OTB and VOT. Our trackers perform favorably against state-of-the-art tracking methods while possessing real-time speed.
\end{compactitem}
\tyy{The remainder of the paper is organized as follows. In Section 2, we briefly review related work.
In Section 3, we describe our proposed tracking methods, and in Section 4 we present implementation details. We perform extensive experiments on OTB and VOT datasets in Section 5.}
\section{Related Work}
\abc{In this section, we review related work on tracking-by-detection, tracking by template-matching, memory networks and multi-task learning.}
\abc{A preliminary version of our work appears in ECCV 2018 \cite{Yang2018}. This paper contains additional improvements in both methodology and experiments, including:
1) \tyy{we propose a negative memory unit that stores distractor templates to cancel out wrong responses from the object template;
2) we design an auxiliary classification loss to facilitate the tracker's robustness to
appearance changes;
3) we conduct comprehensive experiments on the VOT datasets, including VOT-2015, VOT-2016 and VOT-2017.}}
\subsection{Tracking by Detection}
\yty{Tracking-by-detection treats object tracking as a detection problem within \ytyy{an} ROI image, where an online learned classifier is used to distinguish the target from the background.
The difficulty
\ytyy{of} updating the classifier to adapt to appearance variations is that the bounding box predicted on each frame may not be accurate, which produces degraded training samples and thus gradually causes the tracker to drift.
Numerous algorithms have been designed to mitigate the sample ambiguity caused by inaccurate predicted bounding boxes. \cite{Grabner2008} formulates the online model learning process in a semi-supervised fashion by combining a given prior and the trained classifier. \cite{Babenko2011} proposes a multiple instance learning scheme to solve the problem of inaccurate examples
for online training. Instead of only focusing on facilitating the training process of the tracker,
\cite{Kalal2012} decomposes the tracking task into three parts---tracking, learning and detection, where an optical flow tracker is used for frame-to-frame tracking and an online trained detector is adopted to re-detect the target when drifting occurs.
}
\yty{With the widespread use of CNNs in the computer vision community, many methods \cite{li2018deep} have applied CNNs as the classifier to localize the target.
\cite{Wang2015} uses two fully convolutional neural networks to estimate the target's bounding box, including a GNet that captures category information and an SNet that classifies the target from the background. \cite{Nam2016} presents a multi-domain learning framework to learn the shared representation of objects from different sequences. Motivated by Dropout \cite{Srivastava2014}, BranchOut \cite{Han2017} adopts multiple branches of fully connected layers, from which a random subset are selected for training, which regularizes the neural networks to avoid overfitting. Unlike these tracking-by-detection algorithms, which need costly stochastic gradient descent (SGD) updating, our method runs completely feed-forward and adapts to the object's appearance variations through a memory writing process, \abc{thus achieving real-time performance.}}
\subsection{Tracking by Template-Matching} Matching-based methods have recently gained popularity due to their fast speed and
\tyy{promising} performance.
The most notable is the fully convolutional Siamese network (SiamFC) \cite{Bertinetto2016}. Although it only uses the first frame as the template, SiamFC achieves competitive results and fast speed. The key deficiency of SiamFC is that it lacks an effective model for online updating.
To address this, \cite{Valmadre2017} updates the model using linear interpolation of new templates with a small learning rate, but only
sees modest improvements in accuracy.
RFL (Recurrent Filter Learning) \cite{Yang2017} adopts a convolutional LSTM for model updating, where the forget and input gates control the linear combination of the historical target information (\emph{i.e.}, memory states of the LSTM) and the object's current template automatically. Guo \emph{et al.} \cite{Guo2017} propose a dynamic Siamese network with two general transformations for target appearance variation and background suppression. \ytyy{He \emph{et al.} \cite{He2018} design two branches of Siamese networks with a channel-wise attention mechanism aiming to improve the robustness and discrimination ability of the matching network.}
To further improve the speed of SiamFC, \cite{Huang2017}
reduces the feature computation cost for easy frames, by using deep reinforcement learning to train policies for early stopping the feed-forward calculations of the CNN when the response confidence is high enough.
SINT \cite{Tao2016} also uses Siamese networks for visual tracking and has higher accuracy, but runs much slower than SiamFC (2 fps vs 86 fps) due to the use of a deeper CNN (VGG16) for feature extraction, and optical flow for its candidate sampling strategy. \tyyy{\cite{chi2017dual} proposes a dual deep network by exploiting hierarchical features of CNN layers for object tracking.} Unlike other template-matching models that use sliding windows or random sampling to generate candidate image patches for testing, GOTURN \cite{Held2016} directly regresses the coordinates of the target's bounding box by comparing the previous and current image patches. \ytyy{Despite its fast speed and advantage on handling scale and aspect ratio changes}, its tracking accuracy is much lower than other state-of-the-art trackers.
Different from existing matching-based trackers where the capacity to adapt is limited by the neural network size, we use SiamFC
as the baseline feature extractor and
add
an addressable memory, whose memory size is independent of the neural networks and thus can be easily enlarged as memory requirements of a tracking task increase.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{framework.pdf}
\end{center}
\vspace{-5mm}
\caption{The pipeline of our tracking algorithm. The green rectangle is the candidate region for target searching. The \textit{Feature Extraction} blocks for the object image and search image share the same architecture and parameters. An attentional LSTM extracts the target's information on the search feature map, which guides the memory reading process to retrieve a matching template. The residual template is combined with the initial template, to obtain \yty{a positive template.
\abc{A negative template is read from the negative memory and combined with the positive template to cancel responses from distractor objects.
The final template is convolved with the search feature map to obtain the response map.}
The newly predicted bounding box is then used to crop the object's \abc{feature map}
for writing to the positive memory. A negative template is extracted from the search feature map using the response score and written to negative memory.}
}
\vspace{-3mm}
\label{fig:2}
\end{figure*}
\subsection{Memory Networks} The recent use of convolutional LSTM for visual tracking \cite{Yang2017} shows that memory states
are useful for object template management over long timescales. Memory networks are typically used to solve simple logical reasoning problems in natural language processing (NLP), e.g., question answering and sentiment analysis. The pioneering works include NTM (Neural Turing Machine) \cite{Graves2014} and MemNN (Memory Neural Networks) \cite{Weston2015}. They both propose an addressable external memory with reading and writing mechanisms -- NTM focuses on problems of sorting, copying and recall, while MemNN aims at language and reasoning tasks. MemN2N
\cite{Sukhbaatar2015} further improves MemNN by removing the supervision of supporting facts, which makes it trainable in an end-to-end fashion. Based on
NTM,
\cite{Graves2016} proposes DNC (Differentiable Neural Computer), which uses a different access mechanism to alleviate the memory overlap and interference problems.
Recently, NTM has also been applied to one-shot learning \cite{Santoro2016} by redesigning the method for reading and writing memory, and has shown promising results at
encoding and retrieving new information quickly.
\ytyy{\cite{Liu2017} also proposes a memory-augmented tracking algorithm, which obtains limited performance and lower speed (5 fps) due to two reasons.
First, in contrast to our method, it performs dimensionality reduction of the object template (from $20\times20\times256$ to $256$) when storing it into memory, resulting in a loss of spatial information for template matching. Second, it extracts multiple patches centered on different positions of the search image to retrieve the proper memory, which is not efficient compared with our attention scheme.}
Our proposed memory model differs from the aforementioned memory networks in the following aspects. First, for the question answering problem, the input of each time step is a sentence,
\emph{i.e.}, a sequence of feature vectors (each word corresponds to one vector) that needs an embedding layer (usually RNN) to obtain an internal state. In contrast, for object tracking, the input is a search image that needs a feature extraction process (usually CNN) to get a more abstract representation. Furthermore, for object tracking, the target's position in the search image patch is unknown, and here we propose an attention mechanism to highlight the target's information when generating the read key for memory retrieval.
Second, the dimension of feature vectors stored in memory for NLP is relatively small (50 in MemN2N vs.~6$\times$6$\times$256=9216 in our case).
Directly using the original template for address calculation is time-consuming. Therefore we apply an average pooling on the feature map to generate a template key for addressing, which is efficient and effective experimentally.
Furthermore, we apply channel-wise gated residual template learning for model updating, and redesign the memory writing operation to be more suitable for visual tracking.
\ytyy{\subsection{Multi-task learning}
Multi-task learning has been successfully used in many applications of machine learning, ranging from natural language processing \cite{collobert2008unified} and speech recognition \cite{deng2013new} to computer vision \cite{girshick2015fast}. \cite{caruana1997multitask} estimates the street direction in an autonomous driving car by predicting various characteristics of the road, which serves as an auxiliary task. \cite{zhang2012convex} introduces auxiliary tasks of estimating head pose and facial attributes to boost the performance of facial landmark detection, while \cite{li2015heterogeneous} boosts the performance of a human pose estimation network by adding human joint detectors as auxiliary tasks. Recent works combining object detection and semantic segmentation \cite{yao2012describing, He2017}, as well as image depth estimation and semantic segmentation \cite{eigen2015predicting, kendall2018multi}, also demonstrate the effectiveness of multi-task learning on improving the generalization ability of neural networks. Observing that the CNN learned for object similarity matching lacks the generalization ability of invariance to appearance variations, we propose to add an auxiliary task, object classification, to regularize the CNN so that it learns object semantics.}
\section{Dynamic Memory Networks for Tracking}
In this section, we propose a dynamic memory network with reading and writing mechanisms for visual tracking.
The whole framework is shown in Figure \ref{fig:2}.
Given the search image, first features are extracted with a CNN.
The image features are input into an attentional LSTM, which controls memory reading and writing.
A residual template is read from the \yty{positive memory} and combined with the initial template learned from the first frame, forming the \yty{ positive template. Then a negative template is retrieved from the negative memory to cancel parts of the positive template through a channel-wise gate, forming the final template.} The final template is convolved with the search image features to obtain the response map, and the target bounding box is predicted.
The new target's template is cropped using the predicted bounding box, features are extracted and then written into \yty{positive} memory for model updating. \yty{The negative template is extracted from the search feature map based on the response map. Responses whose score is greater than a threshold and which are far from the target's center are considered negative (distractor) templates for negative memory writing.}
\subsection{Feature Extraction}
Given an input image $I_t$ at time $t$, we first crop the frame into a search image patch $S_t$ with a rectangle that is computed from the previous predicted bounding box \yty{as in \cite{Bertinetto2016}}.
Then it is encoded into a high level representation $f(S_t)$, which is a spatial feature map, via a fully convolutional neural network (FCNN). In this work, we use the FCNN structure from SiamFC \cite{Bertinetto2016}.
After getting the predicted bounding box, we use the same feature extractor to compute the new object template for \yty{positive memory} writing.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{att_net.pdf}
\end{center}
\vspace{-5mm}
\caption{\textbf{Left}: Diagram of attention network. \textbf{Right}: Visualization of attentional weights map: for each pair, \abc{(top)} search images and ground-truth target box, and \abc{(bottom)} attention maps over search image. For visualization, the attention maps are resized using bicubic interpolation to match the size of the original image.}
\label{fig:3}
\vspace{-4mm}
\end{figure}
\subsection{Attention Scheme}\label{attention}
Since the object information in the search image is needed to retrieve the related template for matching, but the object location is unknown at first, we apply an attention mechanism to make the input to the LSTM concentrate more on the target.
We define $F_{t,i} \in \mathbb{R}^{n \times n \times c}$ as the $i$-th $\mathit{n\times n\times c}$ square patch on $F_t=f(S_t)$ in a sliding window fashion.\footnote{We use $6\times6\times256$, which is the same size of the matching template.}
Each square patch covers a certain part of the search image. An attention-based weighted sum of these square patches can be regarded as a soft representation of the object, which can then be fed into the LSTM to generate a proper read key for memory retrieval. However, the size of this soft representation is still too large to feed directly into the LSTM.
To further reduce the size of each square patch,
we first adopt an average pooling with $n\times n$ filter size on $F_t$,
\begin{align}
\textbf{f}_t = \text{AvgPooling}_{n\times n}(F_t)
\end{align}
and $\mathbf{f}_{t,i} \in \mathbb{R}^{c}$ is the feature vector
for the $i$th patch.
The attended feature vector is then computed as the weighted sum of the feature vectors,
\begin{align}
\mathbf{a}_t = \sum_{i=1}^{L}\alpha_{t,i}\mathbf{f}_{t,i}
\end{align}
where $L$ is the number of square patches, and the attention weights $\alpha_{t,i}$ are calculated by a softmax,
\begin{align}
\alpha_{t,i} = \frac{\exp(r_{t,i})}{\sum_{k=1}^{L}\exp(r_{t,k})}
\end{align}
where
\begin{align}
r_{t,i} = W^a \text{tanh}(W^h \mathbf{h}_{t-1}+W^f \mathbf{f}_{t,i}+b)
\end{align}
is an attention network (\ytyy{Figure \ref{fig:3}: Left}), which takes the previous hidden state $\mathbf{h}_{t-1}$ of the LSTM and a square patch $\mathbf{f}_{t,i}$ as input. $W^a, W^h, W^f$ and $b$ are weight matrices and biases for the network.
By comparing the target's historical information in the previous hidden state with each square patch, the attention network can generate attentional weights that have higher values on the target and smaller values for surrounding regions. Figure \ref{fig:3} (right) shows example search images with attention weight maps. Our attention network consistently focuses on the target, which is beneficial when retrieving memory for template matching.
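A minimal NumPy sketch of this soft attention follows (the weight shapes and names are ours; in the actual tracker these operations are trainable layers optimized end-to-end):
\begin{verbatim}
import numpy as np

def soft_attention(f, h_prev, Wa, Wh, Wf, b):
    """f: (L, c) pooled patch vectors; h_prev: previous LSTM hidden state."""
    r = np.array([Wa @ np.tanh(Wh @ h_prev + Wf @ fi + b) for fi in f])
    alpha = np.exp(r - r.max())
    alpha /= alpha.sum()               # softmax attention weights
    return alpha @ f                   # attended feature vector a_t
\end{verbatim}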
\subsection{LSTM Memory Controller}
For each time step, the LSTM controller takes the attended feature vector $\mathbf{a}_t$, obtained by the attention module, and the previous hidden state $\mathbf{h}_{t-1}$ as input, and outputs the new hidden state $\mathbf{h}_t$ to calculate the memory control signals, including read key, read strength, bias gates, and decay rate (discussed later).
The internal architecture of the LSTM uses the standard model, while the output layer is modified to generate the control signals.
In addition, we also use layer normalization \cite{Ba2016} and dropout regularization \cite{Srivastava2014} for the LSTM. The initial hidden state $\mathbf{h}_0$ and cell state $\mathbf{c}_0$
are
obtained by passing the initial target's feature map through one $n\times n$ average pooling layer and two separate fully-connected layers with tanh activation functions, respectively.
\subsection{Memory Reading}\label{read}
Memory is retrieved by computing a weighted sum of all memory slots with the read weight vector, which is determined by the cosine similarity between the read key and the memory keys. This aims at retrieving the most related template stored in memory. \yty{Since the memory reading processes for the positive and negative memories are similar, we only describe the positive case.}
Suppose $\mathbf{M}_t \in \mathbb{R}^{N\times n \times n \times c}$ represents the memory module, such that $\mathbf{M}_t(j) \in \mathbb{R}^{n \times n \times c}$ is the template stored in the $j\text{th}$ memory slot and $N$ is the number of memory slots.
The LSTM controller outputs the read key $\mathbf{k}_t \in \mathbb{R}^{c}$ and read strength $\beta_t \in [1,\infty)$,
\begin{align}
\mathbf{k}_t = & W^k\mathbf{h}_{t}+b^k, \\
\beta_t = & 1+\log(1+\exp(W^\beta \mathbf{h}_{t}+b^\beta)),
\end{align}
where
$W^k, W^\beta, b^k, b^\beta$ are the weight matrices and biases.
The read key $\mathbf{k}_t$ is used for matching the contents in memory, while the read strength $\beta_t$ indicates the reliability of the generated read key.
Given the read key and read strength, a \textit{read weight} $\mathbf{w}^r_t\in \mathbb{R}^{N}$ is computed for memory retrieval,
\begin{align}
\mathbf{w}^r_t(j) =\frac{\exp{\{C(\mathbf{k}_t, \mathbf{k}_{\mathbf{M}_t(j)})}\beta_t\}}{\sum_{j'} \exp{\{C(\mathbf{k}_t, \mathbf{k}_{\mathbf{M}_t(j')})}\beta_t\}},
\end{align}
where $\mathbf{k}_{\mathbf{M}_t(j)} \in \mathbb{R}^{c}$ is the memory key generated by a $n\times n$ average pooling on $\mathbf{M}_t(j)$. $C(\mathbf{x}, \mathbf{y})$ is the cosine similarity between vectors,
$C(\mathbf{x},\mathbf{y})= \frac{\mathbf{x} \cdot \mathbf{y}}{\|\mathbf{x}\|\|\mathbf{y}\|}$.
Finally, the template is retrieved from memory as a weighted sum,
\begin{align}
\mathbf{T}^{\text{retr}}_t=\sum_{j=1}^N\mathbf{w}^r_t(j)\mathbf{M}_t(j).
\end{align}
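The read operation thus amounts to a softmax over cosine similarities, sharpened by the read strength. A compact sketch (NumPy; naming ours):
\begin{verbatim}
import numpy as np

def read_memory(M, k, beta):
    """M: (N, n, n, c) memory; k: (c,) read key; beta: read strength."""
    keys = M.mean(axis=(1, 2))                     # n x n average pooling
    cos = keys @ k / (np.linalg.norm(keys, axis=1)
                      * np.linalg.norm(k) + 1e-8)  # cosine similarities
    w = np.exp(beta * cos)
    w /= w.sum()                                   # read weights
    return np.tensordot(w, M, axes=1), w           # retrieved template
\end{verbatim}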
\subsection{Residual Template Learning}\label{residual}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{res_feat.pdf}
\end{center}
\vspace{-5mm}
\caption{\textbf{Left}: The feature channels respond to target parts: images are reconstructed from conv5 of the CNN used in our tracker. Each image is generated by accumulating reconstructed pixels from the same channel. The input image is shown in the top-left. \textbf{Right}: Channel visualizations of a retrieved template along with their corresponding residual gate values in the left-top corner.}
\label{fig:6-1}
\vspace{-4mm}
\end{figure}
Directly using the retrieved template for similarity matching is prone to overfitting to recent frames.
Instead, we learn a residual template by multiplying the retrieved template with a channel-wise gate vector and adding it to the initial template to capture appearance changes. Therefore, our \yty{positive} template is formulated as,
\begin{align}
\mathbf{T}^{\text{pos}}_t = \mathbf{T}_0+ \mathbf{r}_t\odot \mathbf{T}^{\text{retr}}_t,
\end{align}
where $\mathbf{T}_0$ is the initial template and $\odot$ is channel-wise multiplication.
$\mathbf{r}_t\in \mathbb{R}^c$ is the \textit{residual gate} produced by the LSTM controller,
\begin{align}
\mathbf{r}_t = \sigma (W^r\mathbf{h}_{t}+b^r),
\end{align}
where $W^r, b^r$ are the weights and bias, and $\sigma$ represents sigmoid function.
The \textit{residual gate} controls how much each channel of the retrieved template is added to the initial template, which can be regarded as a form of feature selection.
By projecting different channels of a target feature map to pixel-space using deconvolution, as in \cite{Zeiler2014}, we find that the channels focus on different object parts (Figure \ref{fig:6-1}: Left).
\ytyy{To show the behavior of residual learning, we also visualize the retrieved template along with its residual gates in Figure \ref{fig:6-1} (right). The channels that correspond to regions with appearance changes (the bottom part of the face is occluded) have higher residual gate values, demonstrating that the residual learning scheme adapts the initial template to appearance variations. In addition, channels corresponding to previous target appearances are also retrieved from memory (e.g., 6th row, 5th column; the nose and mouth are both visible meaning that they are not occluded), demonstrating that our residual learning does not overfit to recent frames.}
Thus, the channel-wise feature residual learning has the advantage of updating different object parts separately.
Experiments in Section \ref{abla} show that this yields a big performance improvement.
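In code, the gated residual update is a one-liner on top of the memory read (a sketch; the sigmoid gate parameters $W^r, b^r$ belong to the LSTM controller's output layer):
\begin{verbatim}
import numpy as np

def positive_template(T0, T_retr, h, Wr, br):
    """T0, T_retr: (n, n, c) templates; h: LSTM hidden state."""
    r = 1.0 / (1.0 + np.exp(-(Wr @ h + br)))  # residual gate in (0,1)^c
    return T0 + r[None, None, :] * T_retr     # channel-wise gated residual
\end{verbatim}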
\subsection{Distractor Template Canceling and Final Template}
\yty{As shown in Section \ref{residual}, the feature channels respond to different object parts. The channels of the positive template that are similar to those of a distractor template are considered not discriminative. Thus, we propose to cancel \abc{these}
feature channels of the positive template via a canceling gate \abc{to obtain the final template,}
\begin{align}
\mathbf{T}^{\text{final}}_t = \mathbf{T}^{\text{pos}}_t- \mathbf{c}_t\odot \mathbf{T}^{\text{neg}}_t,
\end{align}
where $\mathbf{T}^{\text{neg}}_t$ is the distractor (negative) template which is retrieved from negative memory (as in Section \ref{read}), and $\mathbf{c}_t$ is the canceling gate produced by comparing the positive and negative templates,
\begin{align}
\mathbf{c}_t = \sigma (W^c\text{tanh}(W^{pos}_{1\times 1\times c}*\mathbf{T}^{\text{pos}}_t+W^{neg}_{1\times 1\times c}*\mathbf{T}^{\text{neg}}_t+b^c))
\end{align}
where
$W^{pos}, W^{neg}$ are \abc{$1\times 1\times c$ convolution filters}, $\{W^c, b^c\}$ are the weights and bias, and $*$ is the convolution operation. This process weakens the weight of non-discriminative channels when forming the final response map, leading to an emphasis on discriminative channels.} \ytyy{To demonstrate the effect of distractor template canceling, we show the responses generated with distractor template canceling (MemDTC) and without it (MemTrack)
in Figure \ref{fig:6-2}. The response maps generated by MemDTC are less cluttered than those of MemTrack, and MemDTC effectively suppresses responses from similar looking distractors (i.e., the other runners).}
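A sketch of the canceling step is given below. The $1\times1$ convolutions act as per-pixel matrix multiplications along the channel axis; the spatial average pooling before the gate projection is our simplifying assumption about how the $(n, n, c)$ map is reduced to a $c$-dimensional gate:
\begin{verbatim}
import numpy as np

def final_template(T_pos, T_neg, Wp, Wn, Wc, bc):
    """T_pos, T_neg: (n, n, c); Wp, Wn, Wc: (c, c) channel projections."""
    z = np.tanh(T_pos @ Wp + T_neg @ Wn)      # 1x1 convolutions + tanh
    g = z.mean(axis=(0, 1)) @ Wc + bc         # reduce to a channel gate
    c_gate = 1.0 / (1.0 + np.exp(-g))         # canceling gate c_t
    return T_pos - c_gate[None, None, :] * T_neg
\end{verbatim}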
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\linewidth]{response_comparision.pdf}
\end{center}
\vspace{-5mm}
\caption{Example responses comparing tracking with distractor template canceling (MemDTC) and without (MemTrack).}
\label{fig:6-2}
\vspace{-4mm}
\end{figure}
\tyy{The response map is generated by convolving the search feature map with the final template as the filter, which is equivalent to calculating the correlation score between the template and each translated sub-window of the search feature map in a sliding window fashion. The displacement of the target from the last frame to the current frame is calculated by multiplying the position of the maximum score, relative to the center of the response map, by the feature stride. The size of the bounding box is determined by searching at multiple scales of the feature map. This is done in a single feed-forward computation by assembling all scaled images as a mini-batch, which is very efficient in modern deep learning libraries. }
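The matching step itself is a plain cross-correlation; a single-scale sketch (SciPy; the feature stride of 8 follows the SiamFC backbone, and the naming is ours):
\begin{verbatim}
import numpy as np
from scipy.signal import correlate

def response_map(F_search, T_final, stride=8):
    """F_search: (H, W, c) search features; T_final: (n, n, c) template."""
    R = sum(correlate(F_search[..., k], T_final[..., k], mode="valid")
            for k in range(T_final.shape[-1]))
    dy, dx = np.unravel_index(R.argmax(), R.shape)
    cy, cx = (R.shape[0] - 1) / 2, (R.shape[1] - 1) / 2
    return R, ((dy - cy) * stride, (dx - cx) * stride)  # displacement
\end{verbatim}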
\subsection{Positive Memory Writing} \label{memwrite}
\yty{The image patch with the new position of the target is used for positive memory writing.}
The new object template $\mathbf{T}^{\text{new}}_t$ is computed using the feature extraction CNN. There are three cases for memory writing: 1) when the new object template is not reliable (e.g.\ contains a lot of background), there is no need to write new information into memory; 2) when the new object appearance does not change much compared with the previous frame, the memory slot that was previously read should be updated;
3) when the new target has a large appearance change, a new memory slot should be overwritten.
To handle these three cases, we define the \textit{write weight} as
\begin{align}
\mathbf{w}^w_t =g^s\mathbf{0}+g^r\mathbf{w}^r_t + g^a\mathbf{w}^a_t,
\end{align}
where $\mathbf{0}$ is the zero vector, $\mathbf{w}^r_t$ is the read weight, and $\mathbf{w}^a_t$ is the allocation weight, which is responsible for allocating a new position for memory writing.
The \tyy{skip gate} $g^s$, read gate $g^r$ and allocation gate $g^a$, are produced by the LSTM controller with a softmax function,
\begin{align}
[g^s, g^r, g^a] = \text{softmax}(W^g \mathbf{h}_{t}+b^g),
\end{align}
where $W^g, b^g$ are the weights and biases. Since $g^s+g^r+g^a=1$, these three gates govern the interpolation between the three cases. If $g^s=1$, then $\mathbf{w}^w_t=\mathbf{0}$ and nothing is written.
If $g^r$ or $g^a$ have higher value, then the new template is either used to update the old template (using $\mathbf{w}^r_t$) or written into newly allocated position (using $\mathbf{w}^a_t$). The \textit{allocation weight} is calculated by,
\begin{align}\label{alloc}
\mathbf{w}^a_t(j)=
\begin{cases}
1, &\text{if } j=\displaystyle \mathop{\mathrm{argmin}}_{j} \mathbf{w}^u_{t-1}(j)\\
0, &\text{otherwise}
\end{cases}
\end{align}
where $\mathbf{w}^u_t$ is the \textit{access vector},
\begin{align}
\mathbf{w}^u_t = \lambda \mathbf{w}^u_{t-1} + \mathbf{w}^r_t + \mathbf{w}^w_t,
\end{align}
which indicates the frequency of memory access (both reading and writing), and $\lambda$ is a decay factor. Memory slots that are accessed infrequently will be assigned new templates. \yty{As is shown in Figure \ref{fig:1}, our memory network is able to learn the \abc{appropriate behavior for effectively updating or allocating new templates} to handle appearance variations.}
The writing process is performed with a \textit{write weight} in conjunction with an \textit{erase factor} for clearing the memory,
\begin{align}
\mathbf{M}^p_{t+1}(j) = \mathbf{M}^p_{t}(j)\left(1-\mathbf{w}^w_t(j)e^w\right)+\mathbf{w}^w_t(j)\,e^w\,\mathbf{T}^{\text{new}}_t,
\end{align}
where
$e^w$ is the \textit{erase factor} computed by
\begin{align}
e^w = d^rg^r+g^a,
\end{align}
and $d^r \in [0,1]$ is the \textit{decay rate} produced by the LSTM controller,
\begin{align}
d^r = \sigma (W^d\mathbf{h}_{t}+b^d),
\end{align}
where
$W^d$, $b^d$ are the weights and bias. If $g^r=1$ (and thus $g^a=0$), then $d^r$ serves as the decay rate for updating the template in the memory slot (Case 2). If $g^a=1$ (and $g^r=0$), $d^r$ has no effect on $e^w$, and thus the memory slot will be erased before writing the new template (Case 3). Figure \ref{fig:4} shows the detailed diagram of the positive memory reading and writing process.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{mem_access.pdf}
\end{center}
\vspace{-4mm}
\caption{Diagram of \yty{the positive memory access mechanism, including the reading and writing processes.}
}
\vspace{-4mm}
\label{fig:4}
\end{figure}
\subsection{Negative Memory Writing}
\label{text:negmemwrite}
\yty{For the negative memory, the distractor templates for memory writing are extracted from the search feature map based on their response scores. Those with a high score peak that are far away from the target are considered \ytyy{distractor} templates. \tyy{Following the notation of Section \ref{attention}, $F_{t,i}$ is the $i$-th $\mathit{n\times n\times c}$ square patch on the search feature map $F_t$.}
\abc{The set of distractor templates is defined as the top-$K$ templates (based on response score),}
\begin{align}
\begin{split}
{\mathcal S}_{\text{dis}}=\{F_{t,i} \mid D(i,*) >\tau, R(F_{t,i}) > \gamma R(F_{t,*}), \\
F_{t,i} \in \argmax_{F_t}^K R(F_t) \},
\end{split}
\end{align}
\abc{where $R(F_{t,i})$ is the response score for $F_{t,i}$, $D(i,*)$ is the spatial distance between the centers of the two templates, and $F_{t,*}$ is the template with maximum response score.}
\abc{The operator $\argmax^K$ returns the set with the top-$K$ values.} $\tau$ is a distance threshold, and $\gamma$ is a score ratio threshold.
Note that if there are no distractor templates satisfying the above criterion, a zero-filled template will be written into the negative memory, which will have no effect on forming the final template.}
\yty{As the distractor template is usually temporary and changes frequently, we simplify the memory writing process by only using an allocation weight as the write weight. Thus the negative memory writing \abc{is similar to a memory queue}
\begin{align}
\mathbf{M}^n_{t+1}(j) = \mathbf{M}^n_{t}(j)\prod_{k=1}^{K}\left(1-\mathbf{w}^{na}_{t,k}(j)\right)+\sum_{k=1}^{K}\mathbf{w}^{na}_{t,k}(j)\,\mathbf{T}^{\text{dis}}_{t,k},
\end{align}
where $\mathbf{T}^{\text{dis}}_{t,k} \in \mathcal S_{dis}$ stands for the \textit{k}-th distractor template selected based on response score. $\mathbf{w}^{na}_{t,k}$ is the allocation weight for negative memory, \tyy{which is calculated by
\begin{align}
\mathbf{w}^{na}_{t,k}(j)=
\begin{cases}
1, &\text{if } j= p(k), p(k)\in \mathcal S_{alloc} \\
0, &\text{otherwise}
\end{cases}
\end{align}
where
$\mathcal S_{alloc} = \displaystyle \mathop{\mathrm{argmin}}_{j}^K \mathbf{w}^u_{t-1}(j)$
represents the top-$K$ newly allocated memory positions.}}
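A sketch of the distractor selection feeding this queue-style write (the grid-indexed patch array and all names are ours):
\begin{verbatim}
import numpy as np

def select_distractors(R, patches, K=2, tau=4, gamma=0.7):
    """R: (H, W) response map; patches: (H, W, n, n, c) templates."""
    iy, ix = np.unravel_index(R.argmax(), R.shape)   # target peak
    cand = [(R[i, j], i, j)
            for i in range(R.shape[0]) for j in range(R.shape[1])
            if np.hypot(i - iy, j - ix) > tau
            and R[i, j] > gamma * R[iy, ix]]
    cand.sort(reverse=True)                          # rank by score
    return [patches[i, j] for _, i, j in cand[:K]]   # may be empty
\end{verbatim}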
\subsection{Auxiliary Classification Loss}
\yty{As is stated in \cite{Ma2015}, robust visual tracking needs both fine-grained details to accurately localize the target and semantic information to be robust to appearance variations caused by deformation or occlusion. Features learned with similarity matching are mainly focused on precise localization of the target. Thus, we propose to add an auxiliary classification branch after the last layer of the CNN feature extractor to guide the networks to learn complementary semantic information.
The classification branch contains a fully-connected layer with 1024 neurons and ReLU activations, followed by a fully-connected layer with 30 neurons (there are 30 categories in the training data) and a softmax function. The final loss for optimization is composed of two parts, the matching loss and the classification loss,
\tyy{\begin{align}
L(R, R^*, p, p^*) = L_{\text{mch}}(R, R^*) + \kappa L_{cls}(p, p^*),
\end{align}
where $R, R^*$ are the predicted response and groundtruth response. $p, p^*$ are the predicted probability and the groundtruth class of the object. $L_{\text{mch}}$ is an element-wise sigmoid cross entropy loss as in \cite{Bertinetto2016},
\begin{align}
L_{\text{mch}}(R, R^*) = \frac{1}{|\mathcal{D}|}\sum_{u\in \mathcal{D}} \ell(R_u, R_u^*),
\end{align}
where $\mathcal{D}$ is the set of positions in the score map, and $\ell(\cdot)$ is the sigmoid cross entropy loss.
$L_{cls}$ is the softmax cross entropy loss. $\kappa$ is a balancing factor between the two losses. Note that the classification branch will be removed during testing.}
}
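A sketch of the joint loss in the TF~1.x style used for our implementation (tensor names are ours):
\begin{verbatim}
import tensorflow as tf  # TF 1.x API

def total_loss(R, R_star, logits, p_star, kappa=0.05):
    """R, R_star: response logits/labels; logits, p_star: class scores."""
    l_mch = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=R_star, logits=R))
    l_cls = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=p_star,
                                                   logits=logits))
    return l_mch + kappa * l_cls     # kappa balances the two terms
\end{verbatim}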
\section{Implementation Details}
We adopt an Alex-like CNN as in SiamFC \cite{Bertinetto2016} for feature extraction, where the input image sizes of the object and search images are 127$\times$127$\times$3 and 255$\times$255$\times$3, respectively. \tyy{We use the same strategy for cropping the search and object images as \cite{Bertinetto2016}, where some context margins around the target are added when cropping the object image. Specifically, given the newly predicted bounding box $\{x_t, y_t, w_t, h_t\}$ (center x, center y, width, height) in frame $t$, the cropping ROI for the object image patch is calculated by
\begin{align}
x_t^o &= x_t, \quad y_t^o = y_t, \quad
w_t^o = h_t^o = \sqrt{(c+w_t)(c+h_t)},
\end{align}
where $c = \delta*(w_t+h_t)$ is the context length and $\delta = 0.5$ is the context factor. For frame $t+1$, the ROI cropping for the search image patch is computed by
\begin{align}
\begin{split}
x_{t+1}^s &= x_t, \quad y_{t+1}^s = y_t, \\
w_{t+1}^s &= h_{t+1}^s = \frac{255-127}{127}\,w_t^o+w_t^o.
\end{split}
\end{align}
Note that the cropped object image patch and search image patch are then resized to 127$\times$127$\times$3 and 255$\times$255$\times$3, respectively, to match the network input. }
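The two ROI computations above reduce to a few lines (sketch; boxes are in center format and are resized to the network input sizes afterwards):
\begin{verbatim}
def crop_rois(x, y, w, h, delta=0.5):
    """(x, y, w, h): current box (center x, center y, width, height)."""
    c = delta * (w + h)                    # context length
    wo = ho = ((c + w) * (c + h)) ** 0.5   # object patch ROI (square)
    ws = hs = (255 - 127) / 127 * wo + wo  # search patch ROI
    return (x, y, wo, ho), (x, y, ws, hs)
\end{verbatim}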
The whole network is trained offline from scratch on the VID dataset (object detection from video) of ILSVRC \cite{ILSVRC15}; training takes about one day.
Adam \cite{kingma2014adam} optimization is used with mini-batches of 8 video clips of length 16. The initial learning rate is 1e-4 and is multiplied by 0.8 every 10k iterations. The video clip is constructed by
uniformly sampling frames (while keeping the temporal order) from each video. This aims to diversify the appearance variations in one episode for training, which can simulate fast motion, fast background changes, jittering objects, and low frame rates.
We use data augmentation, including small image stretch and translation for the target image and search image.
The dimension of the memory states in the LSTM controller is 512 and the retain probability used in dropout for the LSTM is 0.8. \yty{The numbers of positive and negative memory slots are $N_{pos}=8, N_{neg}=16$. The distance threshold and score ratio threshold are $\tau = 4, \gamma=0.7$, and the number of selected \ytyy{distractor} templates is $K=2$. The balancing factor for the auxiliary loss is $\kappa = 0.05$.} The decay factor used for calculating the access vector is $\lambda=0.99$.
At test time, the tracker runs completely feed-forward and no online fine-tuning is needed. We locate the target based on the upsampled response map as in SiamFC \cite{Bertinetto2016}, and handle scale changes by searching for the target over three scales $1.05^{[-1,0,1]}$. To smooth the scale estimation and penalize large displacements, we update the object scale with the new one using an exponential smoothing factor of 0.5, and dampen the response map with a cosine window using a weighting factor of 0.19.
Our algorithm is implemented in Python with the TensorFlow toolbox \cite{abadi2016tensorflow}, and tested on a computer with a four-core Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz and a single NVIDIA GTX 1080 Ti with 11GB RAM. \yty{It runs at about 50 fps for MemTrack and MemTrack*, and about 40 fps for MemDTC and MemDTC*.}
\section{Experiments}
We evaluate our preliminary tracker \cite{Yang2018}
which only has the positive memory network (MemTrack), as well as three improved versions: MemTrack with the auxiliary classification loss (MemTrack*), MemTrack with \ytyy{distractor} template canceling (MemDTC), and MemDTC with the auxiliary classification loss (MemDTC*). We conduct experiments on five challenging datasets: OTB-2013 \cite{Wu2013}, OTB-2015 \cite{Wu2015}, VOT-2015 \cite{Kristan2015}, VOT-2016 \cite{Kristan2016} and VOT-2017 \cite{Kristan2017}. We follow the standard protocols, and evaluate using precision and success plots, as well as area-under-the-curve (AUC), on the OTB datasets. We also present the distance precision rate, overlap success rate and center location error on OTB for completeness. For the VOT datasets, we use the toolbox\footnote{\href{https://github.com/votchallenge/vot-toolkit}{https://github.com/votchallenge/vot-toolkit}} provided by the VOT committee to generate the results.
\subsection{OTB datasets}
On OTB-2013 and OTB-2015, we compare our proposed trackers with 12 recent {\em real-time} methods ($\geq$ 15 fps): \ytyy{SiamRPN \cite{Li2018}, DSiamM \cite{Guo2017}, PTAV \cite{Fan2017}}, CFNet \cite{Valmadre2017}, LMCF \cite{Wang2017}, ACFN \cite{Choi2017}, RFL \cite{Yang2017}, SiamFC \cite{Bertinetto2016}, SiamFC* \cite{Valmadre2017}, Staple \cite{Bertinetto2016-1}, DSST \cite{Danelljan2014}, and KCF \cite{Henriques2015}. To further demonstrate our tracking accuracy, on OTB-2015, we also compare with another 8 recent state-of-the-art trackers that do not run at real-time speed: CREST \cite{Song2017}, CSR-DCF \cite{Lukezic2017}, MCPF \cite{Zhang2017}, SRDCFdecon \cite{Danelljan2016}, SINT \cite{Tao2016}, SRDCF \cite{Danelljan2015}, HDT \cite{Qi2016}, HCF \cite{Ma2015}.
The OTB-2013 \cite{Wu2013} dataset contains 51 sequences with 11 video attributes and two evaluation metrics, which are center location error and overlap ratio. The OTB-2015 \cite{Wu2015} dataset is the extension of OTB-2013 to 100 sequences, and is thus more challenging. We conduct the ablation study mainly on OTB-2015 since it contains OTB-2013.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{realtime-cvpr13.pdf}
\end{center}
\vspace{-5mm}
\caption{Precision and success plots on OTB-2013 for real-time trackers.}
\vspace{-4mm}
\label{fig:8}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{realtime-tb100.pdf}
\end{center}
\vspace{-5mm}
\caption{Precision and success plots on OTB-2015 for real-time trackers.}
\label{fig:9}
\vspace{-5mm}
\end{figure}
\begin{table*}
\begin{center}
\bgroup
\def\arraystretch{1.25}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{ccccccccccccccccccc}
\hline
& & MemDTC* &MemDTC & MemTrack* & MemTrack & SiamRPN& DSiamM& PTAV& LMCF & ACFN & SiamFC &SiamFC* & RFL & Staple & CFNet & KCF & DSST \\
\hline
\multirow{2}{*}{DP @10 (\%) ($\uparrow$)} & I & 75.9 &74.8 & \underline{76.4} & 73.9 & \textbf{76.9}&75.8&74.7&73.6 & 74.4 & 69.8 & 71.4 & 63.0 & 69.5 & 66.9 & 59.2 & 61.9\\
& II &\underline{72.5} &71.6 & \textbf{72.7} & 69.5 &71.3&65.7&70.7& 67.4 & 68.1 & 63.9 & 67.1 & 63.5 & 67.6 & 66.0 & 55.7 & 58.1\\
\multirow{2}{*}{DP @20 (\%) ($\uparrow$)} & I &\underline{88.4} & 86.6 & 87.1 & 84.9 & \underline{88.4}&\textbf{89.1}&87.9&84.2 & 86.0 & 80.9 & 80.6 & 78.6 & 79.3 & 78.5 & 74.0 & 74.0\\
& II &\underline{84.8} & 84.5 & 84.4 & 82.0 &\textbf{85.1}&81.5&84.1& 78.9 & 79.9 & 77.1 & 76.9 & 77.8 & 78.4 & 77.7 & 69.6 & 68.0\\
\multirow{2}{*}{DP @30 (\%) ($\uparrow$)} & I &91.5 & 89.7 & 90.2 & 88.7 & \textbf{92.0}&\underline{91.7}&90.3&87.0 & 89.1 & 83.9 & 85.0 & 82.8 & 81.1 & 81.7 &78.3 & 76.4\\
& II &\underline{88.1} & 87.8 & 87.9 & 85.6 &\textbf{88.8}&86.8&88.0& 81.8 & 84.4 & 81.1 & 80.9 & 81.7 & 80.6 & 81.3 & 74.0 & 71.3\\
\hline
\multirow{2}{*}{OS @0.3 (\%) ($\uparrow$)} & I&90.5 &90.2 & 89.7 & 88.9 & \textbf{92.0}&\underline{92.0}&89.3&86.0 & 86.5 & 83.9 & 84.9 & 82.9 &80.0 &81.0 &73.0 &74.5\\
& II& 87.7 & \underline{88.0} & 87.9 & 86.6 &\textbf{89.5}&86.3&87.8& 79.8 & 81.9 & 81.2 & 80.9 & 82.5 &79.8 &81.5 &68.1 &69.1\\
\multirow{2}{*}{OS @0.5 (\%) ($\uparrow$)} & I& \underline{84.7}& 84.6 & 84.5 & 80.9 & \textbf{85.7}&84.1&81.3&80.0 & 75.0 & 77.9 & 78.3 & 74.3 &75.4 &75.2 &62.3 &67.0\\
& II&80.6 & 80.3 & \underline{80.8} & 78.3 &\textbf{81.9}&76.0&76.8& 71.9 & 69.2 & 73.0 & 73.6 & 73.0 &70.9 &73.7 &55.1 &60.1\\
\multirow{2}{*}{OS @0.7 (\%) ($\uparrow$)} & I&59.0 & \underline{60.0} & \textbf{60.4} & 57.3 & 56.8&56.1&59.9&56.5 & 49.0 & 55.1 & 55.0 & 48.0 &57.9 &51.3 &39.3 &51.8\\
& II&55.7 & \underline{55.7} & \textbf{56.2} & 54.6 &53.9&48.5&53.4& 50.2& 45.1 & 50.3 & 50.8 & 47.7 &51.7 &50.1 &35.3 &46.0\\
\hline
\multirow{2}{*}{CLE (pixel) ($\downarrow$)} & I&16.9& 21.5 & 19.1 & 27.6 &\underline{14.2}&\textbf{13.8}&19.3& 23.8 & 18.7 & 29.7 & 35.2 & 35.7 &30.6 &40.3 &35.5&41.4\\
& II&\underline{20.3} & 21.8 & 22.1 & 27.8 &\textbf{19.2}&22.8&19.8& 39.0 & 25.2 & 33.1 & 35.9 & 35.8 &31.4 & 34.8 &44.7 &50.3\\
\hline
\end{tabular}}
\egroup
\end{center}
\caption{Comparison results on OTB-2013 (I) and OTB-2015 (II). DP @$n$ is the distance precision rate at the threshold of $n$ pixels and OS @$s$ is the overlap success rate at the threshold of $s$ overlap ratio. CLE is center location error. The best result is bolded, and second best is underlined. The up arrows indicate higher values are better for that metric, while down arrows mean lower values are better.}
\label{tb:1}
\vspace{-9mm}
\end{table*}
\subsubsection{Comparison to real-time trackers}
Figure \ref{fig:8} shows the one-pass evaluation results against recent real-time trackers on OTB-2013. Our newly proposed trackers MemDTC, MemTrack* and MemDTC* achieve the three best AUC scores on the success plot, all outperforming our earlier work MemTrack \cite{Yang2018}. On the precision plot with center location error, these three trackers also surpass MemTrack by a large margin.
Compared with SiamFC \cite{Bertinetto2016}, which is the baseline for matching-based methods without online updating, the proposed MemDTC*, MemDTC and MemTrack* achieve an improvement of 9.3\%, 7.0\% and 7.7\% on the precision plot, and 8.9\%, 8.4\% and 8.6\% on the success plot.
Our methods also outperform SiamFC*, the improved version of SiamFC \cite{Valmadre2017} that performs online updating by linearly interpolating the old filter with the new one using a small learning rate.
This indicates that our dynamic memory networks can handle object appearance changes better than simply interpolating new templates with old ones.
Figure \ref{fig:9} presents the precision and success plots of recent real-time trackers on OTB-2015. Our newly proposed trackers outperform all other methods \ytyy{on the success plot in terms of AUC score.} Specifically, our methods perform much better than RFL \cite{Yang2017}, which uses the memory states of an LSTM to maintain the object appearance variations. This demonstrates the effectiveness of using an external addressable memory to manage object appearance changes, compared with using the LSTM memory, which is limited by the size of the hidden states.
Furthermore, the proposed MemDTC*, MemDTC and MemTrack* improve over the baseline
SiamFC \cite{Bertinetto2016} by 10.0\%, 9.6\% and 9.5\% on the precision plot, and 9.6\%, 9.5\% and 10.0\% on the success plot.
Our trackers \ytyy{MemTrack* and MemDTC*} also outperform the three most recently proposed trackers, \ytyy{SiamRPN \cite{Li2018}, DSiamM \cite{Guo2017} and PTAV \cite{Fan2017}}, on AUC score.
Table \ref{tb:1} shows the quantitative results of the distance precision (DP) rate at different pixel thresholds (10, 20, 30), the overlap success (OS) rate at different overlap ratios (0.3, 0.5, 0.7), and the center location error (CLE) on both OTB-2013 (I) and OTB-2015 (II). \ytyy{Our improved trackers MemDTC*, MemDTC and MemTrack* consistently outperform our earlier work MemTrack \cite{Yang2018} on all measures. In addition, they also perform well when the success condition is stricter (DP @10 and OS @0.7), indicating that their estimated bounding boxes are more accurate.}
Figure \ref{fig:11} further shows the AUC scores of real-time trackers on OTB-2015 under different video attributes, including out-of-plane rotation, occlusion, motion blur, fast motion, in-plane rotation, out of view, background clutter and low resolution. \ytyy{Our MemTrack* outperforms other trackers on motion blur, fast motion and out of view, demonstrating its ability to adapt to appearance variations using multiple memory slots. In addition, our MemTrack also shows superior accuracy on the low-resolution attribute.}
Figure~\ref{fig:12} shows qualitative results of our trackers compared with 6 real-time trackers.
\subsubsection{Comparison to non-real-time trackers}
Figure \ref{fig:10} presents the comparison with 8 recent state-of-the-art {\em non-real-time} trackers in terms of AUC score (left), and the AUC score vs.~speed (right) of all trackers. Our newly proposed trackers MemDTC*, MemDTC and MemTrack*, which run in real-time (40 fps for MemDTC* and MemDTC, 50 fps for MemTrack*), outperform CREST \cite{Song2017}, MCPF \cite{Zhang2017} and SRDCFdecon \cite{Danelljan2016}, which all run at $\sim$1 fps.
Moreover, our earlier work MemTrack also surpasses SINT, which is another matching-based method with optical flow as motion information, in terms of both accuracy and speed.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{slow-tb100.pdf}
\end{center}
\vspace{-5mm}
\caption{(left) Success plot on OTB-2015 comparing our real-time methods with recent {\em non-real-time} trackers. (right) AUC score vs.~speed with recent trackers.}
\label{fig:10}
\vspace{-5mm}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{realtime-attri-tb100.pdf}
\end{center}
\vspace{-1mm}
\caption{The success plots on OTB-2015 for eight challenging attributes: out-of-plane rotation, occlusion, motion blur, fast motion, in-plane rotation, out of view, background clutter and low resolution.}
\label{fig:11}
\vspace{-1mm}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{qualitative.jpg}
\end{center}
\vspace{-1mm}
\caption{Qualitative results of our MemTrack, along with SiamFC \cite{Bertinetto2016}, RFL \cite{Yang2017}, CFNet \cite{Valmadre2017}, Staple \cite{Bertinetto2016-1}, LMCF \cite{Wang2017}, ACFN \cite{Choi2017} on eight challenge sequences. From left to right, top to bottom: \textit{board, bolt2, dragonbaby, lemming, matrix, skiing, biker, girl2}.}
\label{fig:12}
\vspace{-1mm}
\end{figure*}
\subsection{VOT datasets}
The VOT-2015 dataset \cite{Kristan2015} contains 60 video sequences with per-frame annotated visual attributes. Objects are marked with rotated bounding boxes to better fit their shapes. The VOT-2016 dataset \cite{Kristan2016} uses the same sequences as in VOT-2015 but re-annotates the ground truth bounding boxes in an automatic way with per-frame segmentation masks. The VOT-2017 dataset \cite{Kristan2017} replaces the least challenging sequences in VOT-2016 with 10 new videos and manually fixes the bounding boxes that were incorrectly placed by automatic methods introduced in VOT-2016.
Tracker performance is evaluated using three metrics: expected average overlap (EAO), accuracy, and robustness. The expected average overlap is computed by averaging the average overlap on a large set of sequence clips with different predefined lengths for all videos. The accuracy measures how well the bounding box estimated by the tracker fits the ground truth box and the robustness measures the frequency of tracking failure during tracking.
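As a rough illustration of the per-frame overlap underlying both accuracy and EAO, the following sketch (our own; it uses axis-aligned boxes for simplicity, whereas VOT ground truth uses rotated boxes) computes the intersection-over-union:
\begin{verbatim}
def overlap(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Accuracy averages the overlap over successfully tracked frames.
preds = [(10, 10, 50, 50), (12, 11, 50, 50)]
gts = [(11, 10, 48, 52), (14, 12, 48, 52)]
print(sum(overlap(p, g) for p, g in zip(preds, gts)) / len(preds))
\end{verbatim}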
\subsubsection{VOT-2015 Results}
In total, 41 trackers were submitted to VOT-2015 and 21 baseline trackers were contributed by the VOT-2015 committee. Table \ref{tb:2} presents the detailed comparison with the 25 best-performing methods \abcn{(according to EAO, expected average overlap)}.
Our newly proposed tracker MemDTC* achieves the third and fourth places in terms of accuracy and EAO, respectively.
Note that MDNet \cite{Nam2016} achieves the best score on all metrics, but is much slower than our MemTrack*. Our methods also run much faster (higher EFO\footnote{\tyyy{EFO (equivalent filter operations) is a measure of speed generated automatically by the VOT toolkit; it is similar to fps but is a relative value. For example, SiamFC gets 32 EFO (VOT) vs.~86 fps (original) in Table \ref{tb:4}, while ours is 24 EFO (VOT) vs.~50 fps (original).}}) than DeepSRDCF \cite{Danelljan2016-2} and EBT \cite{Zhu2016}, which are slightly better than our MemTrack*. Moreover, SRDCF \cite{Danelljan2015}, LDP \cite{Kristan2015} and sPST \cite{Hua2015}, which outperform our MemTrack, do not run at real-time speed. Figure \ref{fig:13} shows the accuracy-robustness ranking plot (left) and the EAO vs.~EFO
plot (right) on the VOT-2015 dataset. Our methods perform favorably against state-of-the-art trackers in terms of both accuracy and robustness (see upper right corner), \abcn{while maintaining real-time speed ($\sim$20 EFO).} \tyyy{Finally, MemTrack and MemTrack* have slightly worse EAO than the baseline SiamFC on VOT-2015, which could be an artifact of the noisy ground-truth bounding boxes in VOT-2015.
On VOT-2016, which contains the same videos as VOT-2015 and has improved ground-truth annotations, MemTrack and MemTrack* outperform SiamFC by a large margin (see Section 5.2.2).}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\linewidth]{vot15_rankingplot.pdf}
\includegraphics[width=0.47\linewidth]{vot2015_fps.pdf}
\includegraphics[width=0.9\linewidth]{vot15_legend.pdf}
\end{center}
\vspace{-3mm}
\caption{The AR rank plots and \abcn{EAO vs. EFO}
for VOT-2015. Our methods are colored with red, while the top-10 methods are marked with blue. Others are colored with gray.
}
\vspace{-2mm}
\label{fig:13}
\end{figure}
\begin{table}
\small
\begin{center}
\bgroup
\def\arraystretch{1.15}
\begin{tabular}{ccccc }
\hline
\multirow{2}{*}{\textbf{ }}
& EAO ($\uparrow$) & Acc.($\uparrow$) & Rob. ($\downarrow$) & EFO ($\uparrow$) \\
\hline
MDNet & \first{0.3783} & \first{0.6033} & \first{0.6936} & 0.97\\
DeepSRDCF & \second{0.3181} & 0.5637 & \third{1.0457} & 0.26 \\
EBT & \third{0.3130} & 0.4732 & \second{1.0213} & 2.74\\
\textbf{MemDTC*} & 0.3005 & 0.5646 & 1.4610 & \rt{22.18}\\
\textbf{MemDTC} & 0.2948 & 0.5509 & 1.6365 & \rt{21.95}\\
SiamFC\footnotemark & 0.2889 & 0.5335 & - & - \\
SRDCF & 0.2877 & 0.5592 & 1.2417 &1.36 \\
\textbf{MemTrack*} & 0.2842 & 0.5573 &1.6768 & \rt{\second{26.24}}\\
LDP & 0.2785 & 0.4890 & 1.3332 &5.17\\
sPST & 0.2767 & 0.5473 & 1.4796 & 1.16\\
\textbf{MemTrack} & 0.2753 & 0.5582 & 1.7286 & \rt{\third{26.11}}\\
SC-EBT & 0.2548 & 0.5529 & 1.8587 &1.83\\
NSAMF & 0.2536 & 0.5305 & 1.2921 &6.81\\
Struck & 0.2458 & 0.4712 & 1.6097 &3.52\\
RAJSSC & 0.2420 & \third{0.5659} & 1.6296 &2.67\\
S3Tracker & 0.2403 & 0.5153 & 1.7680 & \rt{20.04}\\
SumShift & 0.2341 & 0.5169 & 1.6815 & \rt{23.55}\\
SODLT & 0.2329 & 0.5607 & 1.7769 &1.14\\
DAT & 0.2238 & 0.4856 & 2.2583 &14.87\\
MEEM & 0.2212 & 0.4993 & 1.8535&3.66 \\
RobStruck & 0.2198 & 0.4793 & 1.4724 &1.67\\
OACF & 0.2190 & \second{0.5751} & 1.8128 &9.88\\
MCT & 0.2188 & 0.4703 & 1.7609 &3.98\\
HMMTxD & 0.2185 & 0.5278 & 2.4835 &2.17\\
ASMS & 0.2117 & 0.5066 & 1.8464&\rt{\first{142.26}} \\
MKCF+ & 0.2095 & 0.5153 & 1.8318 &1.79\\
TRIC-track & 0.2088 & 0.4618 & 2.3426 &0.03\\
AOG & 0.2080 & 0.5067 & 1.6727 &1.26\\
SME & 0.2068 & 0.5528 & 1.9763 &5.77\\
MvCFT & 0.2059 & 0.5220 & 1.7220 &11.85\\\hline
\end{tabular}
\egroup
\end{center}
\caption{Results on VOT-2015. The evaluation metrics include expected average overlap (EAO), accuracy value (Acc.), robustness value (Rob.) and equivalent filter operations (EFO). The top three performing trackers are colored with red, green and blue respectively. The up arrows indicate higher values are better for that metric, while down arrows mean lower values are better.
\abcn{Real-time methods ($>$15 EFO) are underlined.}
}
\label{tb:2}
\vspace{-8mm}
\end{table}
\footnotetext{Results are obtained from the original SiamFC paper \cite{Bertinetto2016}.}
\subsubsection{VOT-2016 Results} In total, 48 tracking methods were submitted to the VOT-2016 challenge and 22 baseline algorithms were provided by the VOT-2016 committee and associates. Table \ref{tb:3} summarizes the detailed comparison with the top 25 performing trackers. Overall, CCOT \cite{Danelljan2016-1} achieves the best EAO, SSAT \cite{Kristan2016} obtains the best accuracy value, and TCNN \cite{Nam2016-1} outperforms all other trackers on robustness value. Our MemDTC* ranks 5th on EAO, and surpasses the other variants MemTrack, MemTrack* and MemDTC by a large margin. Benefiting from deeper networks such as VGG, SSAT and MLDF achieve better EAO than MemDTC*, which however comes at the cost of considerable computation, leading to non-real-time speed. It is worth noting that the proposed MemDTC* runs much faster than the trackers that rank ahead of it. Figure \ref{fig:14} shows the accuracy-robustness ranking plot (left) and the EAO vs.~EFO plot (right) on the VOT-2016 dataset. Our algorithm MemDTC* achieves a better robustness rank than the other three variants MemDTC, MemTrack* and MemTrack.
As reported in VOT-2016, the SOTA bound is an EAO of 0.251, which all our trackers exceed. In addition, our trackers outperform the baseline matching-based trackers SiamAN (SiamFC with AlexNet), SiamRN (SiamFC with ResNet) and RFL \cite{Yang2017} on all evaluation metrics.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\linewidth]{vot16_rankingplot.pdf}
\includegraphics[width=0.49\linewidth]{vot2016_fps.pdf}
\includegraphics[width=0.9\linewidth]{vot16_legend.pdf}
\end{center}
\vspace{-3mm}
\caption{The AR rank plots and EAO vs. EFO
for VOT-2016. See the caption of Figure \ref{fig:13} for description of the colors.}
\label{fig:14}
\vspace{-4mm}
\end{figure}
\begin{table}
\small
\begin{center}
\bgroup
\def\arraystretch{1.15}
\begin{tabular}{ccccc}
\hline
\multirow{2}{*}{\textbf{ }}
& EAO ($\uparrow$) & Acc.($\uparrow$) & Rob. ($\downarrow$) & EFO ($\uparrow$) \\
\hline
CCOT & \first{0.3305} & 0.5364 & \second{0.8949} & 0.51\\
TCNN & \second{0.3242} & \third{0.5530} & \first{0.8302} &1.35\\
SSAT & \third{0.3201} & \first{0.5764} & 1.0462 &0.80\\
MLDF & 0.3093 & 0.4879 & 0.9236 &2.20\\
\textbf{MemDTC*} & 0.2976 & 0.5297 & 1.3106 &\rt{22.30}\\
Staple & 0.2940 & 0.5406 & 1.4158 &14.43\\
DDC & 0.2928 & 0.5363 & 1.2656 &0.16\\
EBT & 0.2909 & 0.4616 & 1.0455 &2.87\\
SRBT & 0.2897 & 0.4949 & 1.3314 &2.90\\
STAPLE+ & 0.2849 & \second{0.5537} & 1.3094&\rt{18.12} \\
DNT & 0.2781 & 0.5136 & 1.2004 &1.88\\
SiamRN & 0.2760 & 0.5464 & 1.3617 &7.05\\
DeepSRDCF & 0.2759 & 0.5229 & 1.2254 &0.38\\
SSKCF & 0.2747 & 0.5445 & 1.4299 &\rt{\second{44.06}}\\
\textbf{MemTrack} & 0.2723 & 0.5273 & 1.4381 &\rt{\third{24.60}}\\
\textbf{MemTrack*} & 0.2713 & 0.5378 & 1.4736 & \rt{24.22}\\
\textbf{MemDTC} & 0.2679 & 0.5109 & 1.8287 & \rt{21.42}\\
SHCT & 0.2654 & 0.5431 & 1.3902 &0.54\\
MDNet\_N & 0.2569 & 0.5396 & \third{0.9123} &0.69\\
FCF & 0.2508 & 0.5486 & 1.8460 &2.39\\
SRDCF & 0.2467 & 0.5309 & 1.4332 & 1.99\\
RFD\_CF2 & 0.2414 & 0.4728 & 1.2697 &1.20\\
GGTv2 & 0.2373 & 0.5150 & 1.7334 &0.52\\
SiamAN & 0.2345 & 0.5260 & 1.9093 &11.93\\
DPT & 0.2343 & 0.4895 & 1.8509 &4.03\\
deepMKCF & 0.2320 & 0.5405 & 1.2271 &1.89\\
HMMTxD & 0.2311 & 0.5131 & 2.1444 &4.99\\
NSAMF & 0.2267 & 0.4984 & 1.2536 &6.61\\
ColorKCF & 0.2257 & 0.5003 & 1.5009 &\rt{\first{111.39}}\\\hline
\end{tabular}
\egroup
\end{center}
\caption{Results on VOT-2016.
See the caption of Table \ref{tb:2} for more information.
}
\vspace{-5mm}
\label{tb:3}
\end{table}
\subsubsection{VOT-2017 Results}
In total, 38 valid entries were submitted to the VOT-2017 challenge and 13 baseline trackers were contributed by the VOT-2017 committee and associates.
Table \ref{tb:4} shows the comparison results of the top 25 performing trackers, as well as our proposed methods.
The winner of VOT-2017 is LSART \cite{Sun2018}, which utilizes a weighted cross-patch similarity kernel for kernelized ridge regression and a fully convolutional neural network with spatially regularized kernels. However, due to heavy model fusion and the use of deeper networks (VGGNet), it runs at $\sim$2 fps. ECO \cite{Danelljan2017}, which ranks fourth on EAO, improves CCOT \cite{Danelljan2016-1} in both performance and speed by introducing a factorized convolution operator. However, the speed of ECO is still far from real-time, even though it is faster than CCOT. CFWCR \cite{Kristan2017} adopts ECO as the baseline and further boosts it by using more layers for feature fusion. CFCF \cite{Gundogdu2018} also utilizes ECO as the baseline tracker and improves it by selecting different layer features (first, fifth and sixth) of a fully convolutional network newly trained on ILSVRC \cite{ILSVRC15} as the input to ECO. In addition, Gnet \cite{Kristan2017} integrates GoogLeNet features into SRDCF \cite{Danelljan2015} and ECO \cite{Danelljan2017}.
The trackers that rank ahead of our MemDTC* are usually based on either ECO or SRDCF with deeper networks (like VGG), and are thus not real-time. Our tracker therefore performs favorably against these top performing trackers, while retaining real-time speed. Furthermore, our MemDTC* outperforms both our earlier work MemTrack \cite{Yang2018} and the baseline tracker SiamFC \cite{Bertinetto2016}. Figure \ref{fig:15} shows the accuracy-robustness ranking plot (left) and the EAO vs.~EFO plot (right) on the VOT-2017 dataset. Our methods perform favorably against state-of-the-art trackers in terms of both accuracy and robustness, \abcn{and achieve the best performance among real-time trackers.}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\linewidth]{vot17_rankingplot.pdf}
\includegraphics[width=0.49\linewidth]{vot2017_fps.pdf}
\includegraphics[width=0.9\linewidth]{vot17_legend.pdf}
\end{center}
\vspace{-3mm}
\caption{The AR rank plots and \abc{EAO vs. EFO}
for VOT-2017. Our methods are colored with red while the top 10 methods are marked with blue. Others are colored with gray.}
\label{fig:15}
\vspace{-4mm}
\end{figure}
\begin{table}
\small
\begin{center}
\bgroup
\def\arraystretch{1.15}
\begin{tabular}{ccccc}
\hline
\multirow{2}{*}{\textbf{ }}
& EAO ($\uparrow$) & Acc.($\uparrow$) & Rob. ($\downarrow$) & EFO ($\uparrow$) \\
\hline
LSART & \first{0.3211} & 0.4913 & \first{0.9432} &1.72\\
CFWCR & \second{0.2997} & 0.4818 & 1.2103 &1.80\\
CFCF & \third{0.2845} & 0.5042 & 1.1686 &0.85\\
ECO & 0.2803 & 0.4806 & \third{1.1167} &3.71\\
Gnet & 0.2723 & 0.4992 & \second{0.9973} &1.29\\
MCCT & 0.2679 & \second{0.5198} & 1.1258 &1.32\\
CCOT & 0.2658 & 0.4887 & 1.3153 &0.15\\
\textbf{MemDTC*} & 0.2651 & 0.4909 & 1.5287 & \rt{21.12}\\
CSRDCF & 0.2541 & 0.4835 & 1.3095 &8.75\\
\textbf{MemDTC} & 0.2504 & 0.4924 & 1.7730 & \rt{20.49}\\
SiamDCF & 0.2487 & 0.4956 & 1.8659 &10.73\\
MCPF & 0.2477 & 0.5081 & 1.5903 &0.42\\
CRT & 0.2430 & 0.4613 & 1.2367 &3.24\\
\textbf{MemTrack} & 0.2427 & 0.4935 & 1.7735 & \rt{24.27}\\
\textbf{MemTrack*} & 0.2416 & 0.5025 & 1.8058 & \rt{24.74}\\
ECOhc & 0.2376 & 0.4905 & 1.7737 & \rt{17.71}\\
DLST & 0.2329 & 0.5051 & 1.5667 &1.89\\
DACF & 0.2278 & 0.4498 & 1.3211 & \rt{21.96}\\
CSRDCFf & 0.2257 & 0.4712 & 1.3905 & \rt{15.05}\\
RCPF & 0.2144 & 0.5001 & 1.5892 &0.42\\
UCT & 0.2049 & 0.4839 & 1.8307 &12.20\\
SPCT & 0.2025 & 0.4682 & 2.1547 &4.40\\
ATLAS & 0.1953 & 0.4821 & 2.5702 &5.21\\
MEEM & 0.1914 & 0.4548 & 2.1111 &4.12\\
FSTC & 0.1878 & 0.4730 & 1.9235 &0.96\\
SiamFC & 0.1876 & 0.4945 & 2.0485 &\rt{\third{31.89}}\\
SAPKLTF & 0.1835 & 0.4764 & 2.2002 & \rt{31.65}\\
ASMS & 0.1687 & 0.4868 & 2.2496 & \rt{\first{130.02}}\\
Staple & 0.1685 & \third{0.5194} & 2.5068 &\rt{\second{47.01}}\\\hline
\end{tabular}
\egroup
\end{center}
\caption{Results on VOT-2017.
See the caption of Table \ref{tb:2} for more information.
}
\label{tb:4}
\vspace{-5mm}
\end{table}
\subsection{Ablation Studies}\label{abla}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{ablation-tb100.pdf}
\end{center}
\vspace{-4mm}
\caption{Ablation studies: (left) success plots of different variants of our tracker on OTB-2015; (right) success plots for different memory sizes \{1, 2, 4, 8, 16\} for positive memory slot, \yty{\{1, 2, 4, 8, 16, 32\} for negative memory slot} on OTB-2015. }
\vspace{-4mm}
\label{fig:7}
\end{figure}
Our preliminary tracker MemTrack \cite{Yang2018} contains three important components: 1) an attention mechanism, which calculates the attended feature vector for memory reading; 2) a dynamic memory network, which maintains the target's appearance variations; and 3) residual template learning, which controls the amount of model updating for each channel of the template. To evaluate their separate contributions to our tracker, we implement several variants of our method and verify them on OTB-2015 dataset.
The results of the ablation study are presented in Figure \ref{fig:7} (left).
We first design a variant of MemTrack without the attention mechanism (MemTrack-NoAtt), which averages all $L$ feature vectors to get the feature vector $\mathbf{a}_t$ for the LSTM input.
Mathematically, it changes Eq.~(2) to $\mathbf{a}_t = \frac{1}{L}\sum_{i=1}^{L}\mathbf{f}^*_{t,i}$.
MemTrack-NoAtt decreases performance (see Figure \ref{fig:7} left), which shows the benefit of using attention to roughly localize the target in the search image.
We also design a naive strategy that simply writes the new target template sequentially into the memory slots as a queue (MemTrack-Queue). When the memory is fully occupied, the oldest template is replaced with the new template. The retrieved template is generated by averaging all templates stored in the memory slots.
This simple approach cannot produce good performance (Figure \ref{fig:7} left), which shows the necessity of our dynamic memory network. \yty{We also devise a hard template reading scheme (MemTrack-HardRead), \emph{i.e.}, retrieving the single template with the maximum cosine similarity, to replace the soft weighted-sum reading scheme.
This design decreases the performance (Figure \ref{fig:7} left), most likely because the non-differentiable reading leads to an inferior model.}
To verify the effectiveness of gated residual template learning, we design another variant of MemTrack that removes the channel-wise residual gates (MemTrack-NoRes), \emph{i.e.}, directly adding the retrieved and initial templates to obtain the final template.
Our gated residual template learning mechanism boosts the performance (Figure \ref{fig:7} left), as it helps to select the correct residual channel features for template updating. The reading and combination schemes compared here are sketched below.
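To make these schemes concrete, the following sketch (our own simplification; shapes, names and the softmax weighting are assumptions, not the released code) contrasts the soft weighted-sum read, the hard max-similarity read, and the gated residual combination:
\begin{verbatim}
import numpy as np

def soft_read(memory, key):
    """Soft read: cosine-similarity softmax weights over all slots."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    w = np.exp(sims) / np.exp(sims).sum()
    return w @ memory                  # weighted sum of templates

def hard_read(memory, key):
    """Hard read (MemTrack-HardRead): single most similar template."""
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    return memory[np.argmax(sims)]

def gated_residual(initial, retrieved, gate):
    """Channel-wise gated residual; MemTrack-NoRes is gate = 1."""
    return initial + gate * retrieved

# Toy usage with 8 slots of 4-dimensional (flattened) templates.
mem = np.random.randn(8, 4)
key = np.random.randn(4)
final = gated_residual(np.ones(4), soft_read(mem, key),
                       gate=np.array([0.9, 0.1, 0.5, 0.0]))
\end{verbatim}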
\yty{In this paper, we improve MemTrack with two techniques: distractor template canceling and an auxiliary classification loss. Our newly proposed methods MemTrack*, MemDTC and MemDTC* consistently outperform our earlier work MemTrack (Figure~\ref{fig:7} left).} \ytyy{Without the auxiliary classification loss, MemDTC outperforms MemTrack on both the precision and success plots of OTB-2013/2015, which demonstrates the effectiveness of the distractor template canceling strategy. When using the auxiliary classification loss, MemDTC* has slightly better performance than MemTrack* on OTB-2013, but slightly worse performance on OTB-2015. It is possible that the discrimination ability of the feature extractor (only a 5-layer CNN) limits the performance gain of the auxiliary loss. We also note that MemTrack* achieves slightly worse EAO than MemTrack on VOT-2016/2017, while MemDTC* is better than MemDTC on VOT-2015/2016/2017. On these datasets, the auxiliary task does not further improve the performance unless the distractor template canceling scheme is used.}
We also investigate the effect of memory size on tracking performance. Figure \ref{fig:7} (right) shows the success plot on OTB-2015 using different numbers of memory slots. For MemTrack, tracking accuracy increases with the memory size and saturates at 8 memory slots. Considering the runtime and memory usage, we choose 8 as the default number of positive memory slots. \yty{For our improved tracker MemDTC, we keep the number of positive memory slots fixed at 8, and vary the number of negative memory slots. The tracking performance increases with the number of negative memory slots, \abc{and saturates at} 16.}
\section{Conclusion}
In this paper, we propose a dynamic memory network with an external addressable memory block for visual tracking, aiming to adapt matching templates to object appearance variations.
An LSTM with an attention scheme controls the memory access by parameterizing the memory interactions. We develop channel-wise gated residual template learning to form the \yty{positive} matching model, which preserves the conservative information present in the initial target, while providing online adaptability for each feature channel. \yty{To alleviate the drift problem caused by distractor targets, we devise a distractor template canceling scheme \abc{that inhibits channels in the final template that are not discriminative.} Furthermore, we improve the tracking performance by introducing an auxiliary classification loss branch after the feature extractor, aiming to learn semantic features that complement the features trained by similarity matching.} Once the offline training process is finished, no online fine-tuning is needed, which leads to real-time speed. Extensive experiments on standard tracking benchmarks demonstrate the effectiveness of our proposed trackers.
\section*{Acknowledgments}
This work was supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region, China
(CityU 11200314 and CityU 11212518). We are grateful to NVIDIA Corporation for the donation of the Tesla K40 GPU used for this research.
\bibliographystyle{IEEEtran}
|
1,116,691,497,898 | arxiv |
\section{Problem statement}
Symmetries play a paramount role in nature and are foundational
aspects of both theoretical physics and deep learning. Knowing
symmetries of a physical problem is often the first step towards its
solution. In machine learning, models that exploit symmetries of
their data domain are most successful at their tasks, as exemplified
by the success of convolutional neural networks for a variety of
problems \cite{DLBook}. Recently, many deep learning papers have
explored the concept of symmetry using tools from theoretical physics,
e.g.~\cite{mallat2016,Bronstein:2017aa,cohen2018, higgins2018}.
In most cases, these works assumed that the symmetries of the problem
were manifest and built neural architecture that respected those
symmetries, in order to learn more efficiently. In some situations,
however, the symmetries of the problem are hidden from us, and much of
the work is done to uncover those symmetries. A famous example is Kepler's inference from astronomical data that planetary orbits form ellipses with the sun at one focus. The fact that orbits in an inverse square law of force generically close is a consequence of a subtle symmetry of the problem that gives rise to the conserved Laplace--Runge--Lenz vector \cite{goldstein}.
Here, we present a data--driven way to learn such symmetries.
While we concentrate here on models of Hamiltonian mechanics, we hope that the tools we develop will inspire research into symmetry learning more generally.
Learning a symmetry means learning a transformation from the original physical variables to a new set of variables in which the symmetry is manifest. Neural models describing bijective mappings are the subject of recent work on normalizing flows \cite{Rezende:2015aa,NICE,realNVP} and RevNets \cite{Gomez:2017aa,Jacobsen:2018aa}. Normalizing flows are usually constructed to have tractable Jacobians. In Hamiltonian mechanics \emph{symplectic} (or \emph{canonical}) transformations have a special role \cite{Saletan:1998aa}. Such transformations are volume preserving but have further restrictions (see \cref{eq:symp_cond} below), and so require new network architectures. Hamiltonian time evolution is itself a symplectic transformation, so these methods may be applied to neural variational inference with Hamiltonian Monte Carlo (HMC) \cite{neal}, discussed in several recent papers \cite{Salimans:2014aa,Wolf:2016aa,Levy:2017aa,Caterini:2018aa,Hoffman:2019aa}. \textbf{Note added:} Following submission of this work, the closely related preprint \cite{Greydanus:2019aa} appeared.
In this work, we will focus on integrable models, which have nontrivial symmetries, as a test case for symmetry discovery. The organization of the remainder of this paper is as follows. In the next section, we introduce some concepts from classical integrable systems that we will use. \Cref{sec:deep} introduces our new architecture and learning algorithm, while \cref{sec:exp} contains experiments on three integrable models. Finally, \cref{sec:disc} provides a discussion of the results. Supplementary details are contained in the appendices.
\section{Classical integrable systems}
\label{sec:2}
\subsection{Hamiltonian dynamics and canonical transformations}
Classical mechanics is the realm of Hamiltonian dynamics \cite{Saletan:1998aa}. In the simplest case that we address here, motion occurs in a phase space $\mathbb{R}^{2n}$ of positions $q\in \mathbb{R}^{n}$ and momenta $p\in \mathbb{R}^{n}$. The dynamics is governed by Hamilton's equations, which are derived from a Hamiltonian function $H:\mathbb{R}^{2n}\to\mathbb{R}$. For $x = (q,p)$ these read:
\begin{align}
\label{eq:eom}
\dot{x} = \Omega \nabla_x H\, ,\quad
\Omega =
\begin{pmatrix}
0 & \mathbf{1}_n\\
-\mathbf{1}_n & 0
\end{pmatrix}
\,.
\end{align}
For example, the harmonic oscillator with unit frequency is described
by $H = ( p^2 + q^2 )/2$, and the equations of motion $\dot q = p$, $\dot p = -q$ describe circular orbits in the phase plane.
The skew-symmetric matrix $\Omega$ is called a symplectic form, and allows
one to define symplectic (or canonical) transformations of phase
space, as those maps $f$ whose Jacobian matrix $J_f$ is an element of
the linear symplectic group $\text{Sp}_{2n}(\mathbb{R})$ at each point of its domain:
\begin{align}\label{eq:symp_cond}
J_f^T\Omega J_f = \Omega \,.
\end{align}
Since $\det(J_f) = +1$, volume is conserved. \Cref{eq:symp_cond} is however much more restrictive: the sum of (signed) areas in each $q_j-p_j$ plane is preserved (\cref{fig:1d_example}).
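As a quick numerical illustration of \cref{eq:symp_cond} (our own sketch, not part of any released code), one can check whether the Jacobian of a candidate map satisfies the symplectic condition via finite differences:
\begin{verbatim}
import numpy as np

def omega(n):
    """The symplectic form on R^{2n} in the (q, p) convention."""
    I = np.eye(n)
    return np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

def is_symplectic(f, x, eps=1e-6, tol=1e-4):
    """Check J_f^T Omega J_f = Omega at the point x."""
    d = x.size
    J = np.empty((d, d))
    for j in range(d):
        dx = np.zeros(d); dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    Om = omega(d // 2)
    return np.allclose(J.T @ Om @ J, Om, atol=tol)

# A rotation in the (q, p) plane is symplectic for n = 1:
rot = lambda x: np.array([x[0]*np.cos(1.0) - x[1]*np.sin(1.0),
                          x[0]*np.sin(1.0) + x[1]*np.cos(1.0)])
print(is_symplectic(rot, np.array([0.3, -0.7])))  # True
\end{verbatim}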
Given $u,v$ scalar-valued functions on phase space, their Poisson
bracket is defined as $\{ u, v\} \equiv (\nabla_x u)^\top \Omega \nabla_x v$,
and is a symplectic invariant. With this notation, Hamilton's
equations read $\dot{x} = \{ x, H \}$, and time evolution is itself a
symplectic transformation \cite{Arnold:2013aa}.
\begin{figure}[hbtp]
\centering
\includegraphics[width=\columnwidth]{kepler_planes}
\caption{Projections of the original (blue) and transformed (orange) trajectories of
the Kepler Hamiltonian (\cref{eq:HKepler} with $k=-1$) onto the three
$q_j-p_j$ phase planes. Note that the sum of enclosed areas is the same for the original and transformed trajectories.}
\label{fig:1d_example}
\end{figure}
\subsection{Integrable models}
A conserved quantity of a dynamical system is constant in time, and
thus Poisson--commutes with the Hamiltonian, and constitutes a symmetry
of the problem. For example, in the celebrated Kepler problem
describing the motion of two planets attracted by a gravitational
force which depends only on the distance between the planets, $H$
commutes with the angular momentum, which generates rotations of the
relative coordinate, and is a conserved quantity of the dynamics
\cite{goldstein}.
\emph{Integrable} systems are those which have a number of mutually
commuting and independent integrals of the motion that equals $n$,
half the phase space dimension. The Liouville--Arnold theorem states
that (compact) motion is confined to tori parametrized by angles
$\varphi_1,\dots,\varphi_n$ and there exists a symplectic
transformation $\mathcal{T}^{-1}$ from the original coordinates $q,p$
to new coordinates $\varphi,I$, where $I$ are called actions and are
the conserved quantities of the problem \cite{Arnold:2013aa}. In the
action angle coordinates, \cref{eq:eom} therefore reads:
\begin{align}
\label{eq:eom_action_angle}
\dot{\varphi} = \partial_IK = \text{const.}\,,\quad
\dot{I} = -\partial_\varphi K = 0
\,,
\end{align}
where the transformed Hamiltonian
\begin{align}
\label{eq:K}
K = H \circ \mathcal{T}
\,,
\end{align}
is independent of the angles. Finding explicit action-angle transformations is
a challenging task, and while a lot of progress has been made
constructing integrable systems from algebraic or geometric
principles \cite{babelon2003}, there is no general algorithm to
construct higher integrals of the motion given an integrable
Hamiltonian. Learning such a transformation is the goal of this work.
We will work with the Cartesian coordinates $(\hat{q}_i =
\sqrt{2I_i}\cos(\varphi_i), \hat{p}_i = \sqrt{2I_i}\sin(\varphi_i))$ and denote
by $T$ the symplectic map
\begin{align}
\label{eq:T}
T : (\hat{q}, \hat{p}) \mapsto (q, p) \,.
\end{align}
For example, action-angle variables for the harmonic oscillator are
the symplectic polar coordinates $(\arctan(p/q), (p^2+q^2)/2)$, so that $T$ is the identity, and the
only conserved quantity is the energy.
In general $T$ will be such that complex trajectories in the $(q,p)$
phase space get mapped to circles in the $(\hat{q}_i, \hat{p}_i)$ planes
where $\varphi_i$ is the angular coordinate and $I_i$ is half the squared
radius of the circle.
In this work we will learn neural parametrizations of $T$ for three paradigmatic
integrable models: (1) the Kepler model of two planets interacting with
gravitational force \cite{goldstein}; (2) the Neumann model of $n$
oscillators with positions in $\mathbb{R}^n$ constrained to the
$(n-1)$--dimensional sphere \cite{babelon2003}; (3) the Calogero-Moser (CM)
model of a chain of $n$ particles with inverse square interaction potential \cite{MOSER:1976aa}. The Hamiltonians and their conserved quantities are described in \cref{sec:Ham}.
\section{Deep symplectic flows} \label{sec:deep}
We have reformulated the task of learning symmetries in integrable
models as that of learning the map $T$ which transforms a circular trajectory $(\hat{q}_i(t),\hat{p}_i(t))$ to the complex trajectory of the original model $(q_i(t),p_i(t))$. We now describe how to parametrize and learn such a transformation.
\subsection{Parametrization}
\label{sec:3.1}
Here we will adapt recent results on normalizing flows to provide
symplectic versions of popular invertible layers such as additive
coupling \cite{NICE}, batch normalization \cite{realNVP} and invertible linear
transformations \cite{glow}. Our parametrization of $T$ is given by
stacking $m$ blocks of these three layers.
We now describe each layer.
\subsubsection{Symplectic additive coupling}
The additive coupling layer introduced in \cite{NICE}
partitions the inputs $z = (z_A, z_B)$
and outputs $x = (x_A,x_B)$ with $x_A = z_A, x_B = z_B + \text{NN}(x_A)$,
where the shift function $\text{NN}$ is an arbitrary neural network. If we
now identify $A,B$ subsystems as $q,p$ respectively, we have the following
layer $L : (q,p) \mapsto (Q,P)$:
\begin{align}
(Q, P) = (q, p + \text{NN}(q))\, ,\quad
(q, p) = (Q, P - \text{NN}(Q))\,.
\end{align}
Symplecticity of the transformation further imposes irrotationality:
\begin{align}
\label{eq:irrot}
\partial_i \text{NN}_j = \partial_j \text{NN}_i\, .
\end{align}
This constraint may be handled by setting $\text{NN}(q) = \nabla
F(q)$, where $F:\mathbb{R}^n\to \mathbb{R}$ is parametrized by a neural network. This gives the traditional leapfrog update used in HMC.
While conceptually simple, this approach is computationally
expensive, requiring $O(n^2)$ time in the backward pass of the
network. A cheaper approach is to use a multilayer perceptron with three layers and constrained weight matrices $W^a$, $a=1,2,3$. In \cref{sec:imlp} we show that \cref{eq:irrot} is satisfied if
\begin{align}\label{eq:weight_cond}
W^3 = W^{1\top}\, ,\quad
W^{2} = \mathrm{diag}(w^2_1, \dots, w^2_{n_2})\, .
\end{align}
We call this architecture irrotational MLP. The analysis can be
generalized, but we took this simple architecture for most of our
experiments. Geometrically, $W^1$ embeds $q$ into a higher dimensional
space and $W^{1\top}$ maps the embedding back to the original space
after it has been scaled by $W^2$, whose sign controls whether the map
is orientation preserving.
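A minimal sketch of the symplectic additive coupling with an irrotational MLP shift, under the weight tying of \cref{eq:weight_cond} and written here with a single nonlinearity for brevity; the layer sizes, the $\tanh$ activation and all names are our own illustrative choices:
\begin{verbatim}
import numpy as np

class IrrotationalMLP:
    """NN(q) = W1^T (w2 * tanh(W1 q)). Its Jacobian,
    W1^T diag(w2 * tanh'(W1 q)) W1, is symmetric, so the
    irrotationality condition holds by construction."""
    def __init__(self, n, hidden, rng):
        self.W1 = rng.normal(scale=0.1, size=(hidden, n))
        self.w2 = rng.normal(scale=0.1, size=hidden)  # diagonal middle

    def __call__(self, q):
        return self.W1.T @ (self.w2 * np.tanh(self.W1 @ q))

def coupling(q, p, nn):       # (q, p) -> (Q, P)
    return q, p + nn(q)

def coupling_inv(Q, P, nn):   # exact inverse
    return Q, P - nn(Q)

rng = np.random.default_rng(0)
nn = IrrotationalMLP(n=3, hidden=16, rng=rng)
q, p = rng.normal(size=3), rng.normal(size=3)
Q, P = coupling(q, p, nn)
assert np.allclose(coupling_inv(Q, P, nn), (q, p))
\end{verbatim}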
\subsubsection{Symplectic Linear}
The additive coupling leaves $q$ unchanged, and we introduce the
symplectic linear layer to mix $p,q$ so that deeper additive couplings act
on all phase space coordinates.
To parametrize a symplectic matrix $S\in \text{Sp}_{2n}(\mathbb{R})$,
we use the pre--Iwasawa decomposition \cite{deGosson}:
\begin{align}
S =
NAK=
\begin{pmatrix}
\mathbf{1} & 0 \\
M & \mathbf{1}
\end{pmatrix}
\begin{pmatrix}
L^\top & 0 \\
0 & L^{-1}
\end{pmatrix}
\begin{pmatrix}
X & -Y \\
Y & X
\end{pmatrix}
\, ,
\end{align}
with
\begin{align}
&M = M^\top
\,,\quad
&X^\top Y = Y^\top X \, ,\quad
X^\top X + Y^\top Y = \mathbf{1}\,.
\end{align}
To parametrize $K$, we note that it is the realification of the
unitary $X + iY$ and can be written as a product of
Householder reflections, parametrized by a vector $v$:
\begin{align}
R_v = \mathbf{1} - 2 \frac{v v^\dagger}{||v||^2}\in {\rm U}_n\,,
\end{align}
and a diagonal matrix of phases
\begin{align}
U = \mathrm{diag}(e^{i\phi_i})\,.
\end{align}
We refer to \cite{Cabrera2010,Tomczak16} for background on Householder
reflections.
Note that the complexity of applying both $R_v$ and $U$ to a vector
$(q,p)$ is $O(n)$. To keep the complexity of the whole layer $O(n)$, we
take $r = O(1)$ Householder reflections and further take
$L=\mathrm{diag}(L_i)$ and $M = \mathrm{diag}(M_i)$.
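As an illustration of how the $K$ factor acts in practice, the following sketch (ours; the names are assumptions) applies the realification of $R_v\,\mathrm{diag}(e^{i\phi_j})$ to $(q,p)$ by viewing $z = q + ip$ as a complex vector, and checks symplecticity numerically:
\begin{verbatim}
import numpy as np

def apply_K(q, p, v, phi):
    """Realification of the unitary R_v diag(e^{i phi}) on z = q + i p."""
    z = (q + 1j * p) * np.exp(1j * phi)              # diagonal phases
    z = z - 2 * v * (np.vdot(v, z) / np.vdot(v, v))  # Householder reflection
    return z.real, z.imag                            # O(n) cost overall

# Build the 2n x 2n matrix column by column and verify symplecticity.
n = 3
rng = np.random.default_rng(1)
v = rng.normal(size=n) + 1j * rng.normal(size=n)
phi = rng.normal(size=n)
E, Z = np.eye(n), np.zeros(n)
cols = [np.concatenate(apply_K(E[:, j], Z, v, phi)) for j in range(n)]
cols += [np.concatenate(apply_K(Z, E[:, j], v, phi)) for j in range(n)]
S = np.stack(cols, axis=1)
Om = np.block([[np.zeros((n, n)), np.eye(n)],
               [-np.eye(n), np.zeros((n, n))]])
assert np.allclose(S.T @ Om @ S, Om)
\end{verbatim}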
\subsubsection{Zero center}
The zero center layer is defined by the transformation:
\begin{align}
\begin{cases}
Q = q - \mu^q + \alpha\\
P = p - \mu^p + \beta
\end{cases}\, ,
\end{align}
where $\mu$ is the mean, computed during training as the batch mean
and during testing from a weighted moving average accumulated during
training, as in the batch normalization layer -- see \cite{realNVP}
for its usage in normalizing flows. $\alpha$, $\beta$ are learnable
offsets. The zero center layer normalizes its input, which allows one to
study deeper architectures; it is a restricted version of batch
normalization compatible with symplectic invariance. (The full
version of batch normalization also scales by the variance of each
feature and hence does not preserve areas.)
\subsection{Learning algorithm}
\label{sec:3.2}
\begin{figure*}[hbtp]
\centering
\begin{tikzpicture}
\node[] at (0,0) {
\includegraphics[width=\textwidth]{viz_neumann-trim}};
\begin{scope}[xshift=-6.5cm,yshift=1cm]
\newcommand\dx{1.32}
\node[] at (-\dx,0) {\textsc{\tiny Input}};
\foreach \i in {0,3,6,9}
{
\node[] at (\dx * \i,0) {\textsc{\tiny ZeroCenter}};
\node[] at (\dx * \i + \dx,0) {\textsc{\tiny Linear}};
\node[] at (\dx * \i + 2*\dx,0) {\textsc{\tiny Additive}};
}
\end{scope}
\end{tikzpicture}
\caption{Visualization of the transformations done by each layer of
$T$ along a given $q_j-p_j$ phase plane for the Neumann model.
The input points (left) belong to a cycle of the Liouville--Arnold
torus. Colors refer to which quadrant of the input plane a point
comes from.}
\label{fig:viz_neumann}
\end{figure*}
According to the discussion in \cref{sec:2}, the map $T^{-1}$ is
determined by requiring that the original trajectory $(q_i(t),p_i(t))$ is
mapped to circles $(\hat{q}_i(t),\hat{p}_i(t))$. If the trajectory is
sampled at $\tau$ time steps $t_k$, such a $T$ minimizes the
following loss, which encourages the distance from the origin of
neighbouring points to be the same:
\begin{align}
\label{eq:loss}
\ell =
\frac{1}{n \tau}
\sum_{k=1}^{\tau}
|| r_{k} - r_{k+1} ||^2\,,\quad
r_{k} =
\hat{q}(t_k)^2 +
\hat{p}(t_k)^2 \,.
\end{align}
A non-invertible or non-volume preserving network
could minimize \cref{eq:loss} trivially by collapsing the trajectories
to zero or very small volumes in the transformed phase space: the symplecticity of the transformation is essential.
We therefore consider a learning algorithm that
takes as input a batch of trajectories and minimizes the loss above
averaged over the batch. Practically, we compute the trajectories by
solving the original equations of motion using a Runge-Kutta solver,
and we perform stochastic gradient descent parameter updates using the
Adam optimizer. We randomly shuffle the trajectory points at every epoch,
which ensures that we compare distant points in \cref{eq:loss}, so that
all deviations from a circular shape are penalized in the same way.
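A sketch of the circularity loss of \cref{eq:loss} together with the epoch-wise shuffling, written in plain NumPy for clarity (the boundary term is treated cyclically here; the implementation in the repository referenced below may differ):
\begin{verbatim}
import numpy as np

def circularity_loss(q_hat, p_hat):
    """Eq. (loss) for arrays of shape (tau, n): r_k collects the
    squared radii in each phase plane; differences between
    consecutive points are penalized."""
    r = q_hat**2 + p_hat**2                 # shape (tau, n)
    diffs = r - np.roll(r, -1, axis=0)      # r_k - r_{k+1}, cyclic
    tau, n = r.shape
    return np.sum(diffs**2) / (n * tau)

def epoch_loss(q_hat, p_hat, rng):
    # Shuffling compares distant points, penalizing all deviations
    # from a circle in the same way.
    perm = rng.permutation(q_hat.shape[0])
    return circularity_loss(q_hat[perm], p_hat[perm])

rng = np.random.default_rng(0)
phi = np.linspace(0, 2*np.pi, 128, endpoint=False)[:, None]
q, p = np.cos(phi), np.sin(phi)             # a perfect circle, n = 1
print(epoch_loss(q, p, rng))                # ~0 for circular data
\end{verbatim}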
\section{Experiments} \label{sec:exp}
\label{sec:4}
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.45\columnwidth]{neumann_t2c_cut_resized}\includegraphics[width=0.45\columnwidth]{CM_t2c_cut}
\caption{Pull-back of trajectories (thin blue line) under the learned map
$T^{-1}$. Each model has a phase space of dimension $2n=6$,
and a single $q-p$ phase plane is selected here for
illustration.}%
\label{fig:2}
\end{figure}
We now present the results of experiments on the three models defined
in \cref{sec:Ham}. The code used is available at
\textbf{\texttt{\url{https://github.com/rbondesan/CanonicalFlows/}}},
and we refer to \cref{sec:details} for details of network
architecture and other training hyperparameters.
We first discuss the representation capacity of deep symplectic
flows. \Cref{fig:2} shows the pull-back of the trajectories under a
network $T^{-1}$ composed of $m=4$ blocks defined in \cref{sec:3.1}.
The parameters in $T^{-1}$ are learned by running the algorithm of
\cref{sec:3.2} to convergence and feeding it with a single trajectory
sampled at $\tau=128$ time steps. This shows that our model and
algorithm can learn the action--angle map for both periodic (Kepler
and Calogero-Moser) and quasi-periodic (Neumann) models.
We next investigate the generalization of our learning algorithm. By
this we mean how well the learned symplectic flow can map unseen
trajectories to circles. We present here results for the Neumann model
with $n=3$ oscillators. We consider a batch of trajectories, one for
each radius $r=2,3,\dots,8$ of the sphere on which the positions of
the oscillators move. \Cref{tab:neumann} reports the loss of
\cref{eq:loss} evaluated over these trajectories for a symplectic flow
$T$ learned by considering only the subset $r=3,5,7$ as training
data. While the points in the training set have the smallest loss values
compared to neighbouring radii, the fact that the loss is of the same
order across this range of trajectories indicates that the model
generalizes beyond the training points. To further substantiate this
claim, we show in \cref{fig:neumann-2-seen-unseen-traj} the pull-back
of the trajectories under $T^{-1}$ for both the trajectories seen and
unseen by the training algorithm.
\begin{table}[t]
\caption{Test loss $\ell'=\ell\times 10^{5}$ for trajectories in
the Neumann model at radius $r$. Bold text denotes radii of training trajectories.}
\label{tab:neumann}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{sc}
\begin{tabular}{rccccccccr}
\toprule
$r$ & 2 & \textbf{3} & 4 & \textbf{5} & 6 & \textbf{7} & 8 \\
\midrule
$\ell'$ &
6.5 &
\textbf{3.7} &
23.2 &
\textbf{12.4} &
117.5 &
\textbf{23.4} &
141.6 \\
\bottomrule
\end{tabular}
\end{sc}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\begin{figure*}[hbtp]
\centering
\includegraphics[width=\textwidth]{neumann-2-seen-unseen-traj}
\caption{Pull back of trajectories (thin blue line) for the Neumann
model at radius $r$, indicated by the figure titles. Bold text
denotes radii of training trajectories. A single $q-p$ phase plane
is selected for illustration.}
\label{fig:neumann-2-seen-unseen-traj}
\end{figure*}
The map $T$ thus learned can be used as a generative process for the
physical system trajectories, as illustrated in
\cref{fig:viz_neumann}. Interestingly, this allows one to visualize
the points in phase space that correspond to a given Liouville--Arnold
torus. Further, by varying the circle radii one can interpret the effect
of the learned symmetries on the system under consideration.
\section{Discussion}\label{sec:disc}
The learning algorithm discussed so far relies on being able to solve
the equations of motion. This was done here by numerical integration,
and adds questions of convergence and stability of the ODE solver on
top of those of the learning algorithm. We remark that there are two
possible ways to improve this. The first is to exploit
integrability of the models to solve the motion analytically, which is
however not a trivial matter and typically requires uncovering a Lax
pair formulation of the problem \cite{babelon2003}. The second is to
use a different learning algorithm. For example, one could minimize a
loss that encourages the transformed Hamiltonian to be
angle-independent (recall \cref{eq:eom_action_angle}), or one could
minimize the Kullback--Leibler (KL) divergence between the canonical
density associated to the transformed Hamiltonian and that of a base
distribution. Both alternatives are analyzed in \cref{sec:alt}.
While we have concentrated here on integrable models, we expect our
methods to be applicable to find almost conserved quantities in models
close to integrable ones, such as the celebrated
Fermi--Pasta--Ulam--Tsingou chains \cite{Gallavotti:2007aa}. We thus
expect the deep learning approach to classical mechanics presented
here to be of practical relevance for solving physical problems, for
example by finding integration schemes with smaller discretization
errors.
\newpage
|
1,116,691,497,899 | arxiv | \section{Motivation}
Does an isolated quantum many-body system that is prepared in a non-thermal initial state relax to thermal equilibrium? As we know from our everyday experience, many physical systems can very successfully be described by a thermal state. On the other hand, the time-reversal symmetry that results from the unitarity of quantum mechanics seems to make the relaxation to thermal states impossible in an isolated system~\cite{Polkovnikov11}. This seemingly simple question thus addresses the fundamental relation between the macroscopic description of statistical mechanics and the microscopic quantum world. It has been highly contested since the 1920s~\cite{Neumann29} and important theoretical advances have been achieved over the years~\cite{Srednicki94,Rigol2008,Polkovnikov11,Eisert2014}. Variations of this question play important roles in such diverse fields as cosmology, high-energy physics and condensed matter~\cite{Kofman1994,Podolsky2006,Braun-Munzinger2001,Berges2004,Eckstein2009,Moeckel2010}. However, it has only been through the recent experimental progress in manipulation of ultracold quantum gases that this question has become within reach of detailed experimental investigations~\cite{Langen15b}. In the following we will present a series of such experiments, which we performed using ultracold one-dimensional Bose gases. The versatility of these gases allowed us to realize several textbook-like non-equilibrium phenomena, which provide important insights into the dynamics of quantum many-body systems.
\section{One-dimensional Bose gases}
\label{sec:1d_bose_gases}
Over the last years, one-dimensional (1D) Bose gases have proven to be a versatile testbed for the study of quantum many-body physics in and out of equilibrium. The great interest in these systems stems from several key properties. From the theorist's perspective 1D Bose gases offer a rich variety of interesting many-body effects, while still being tractable with reasonable effort~\cite{Cazalilla2011,Castin04}. On the experimental side their realization using cold atomic gases offers precise control over many system parameters, as well as highly-effective means to probe their dynamics~\cite{Bloch2008}. In this first chapter, we will briefly outline important aspects of the description of 1D Bose gases. For more detailed accounts we refer the reader to Refs. \cite{Cazalilla2011,LangenThesis,Schaff14}.
The experimental realization of a 1D Bose gas follows the familiar procedure based on laser and evaporative cooling that is also used for the production of Bose-Einstein condensates from three-dimensional (3D) Bose gases \cite{Davis1995b,Anderson1995}. However, creating an effectively 1D system in a 3D world requires extremely asymmetric traps with a very tight confinement in all but one spatial direction. The general aim of this tight confinement is to raise the energy splitting between the ground and first excited state in the two tightly-confined directions, such that all relevant energy scales of the trapped gas lie below it. For a harmonic trap this means that the temperature $T$ and the chemical potential $\mu$ fulfill $k_BT,\mu\ll\hbar \omega_\perp$, with $k_B$ denoting Boltzmann's constant and $\hbar$ the reduced Planck constant. This realizes a situation where the dynamics along the radial directions can be integrated out, leaving the dynamics along the weakly-confined axial direction described by an effective 1D model. Contact interactions in this 1D model can be parametrized by an effective scattering potential with the interaction strength~\cite{Olshanii1998}
\begin{equation}
g = 2\hbar a_s \omega_\perp.\label{eq:g1D}
\end{equation}
Here, $a_s$ is the s-wave scattering length of the gas. Note that this description assumes that microscopic scattering processes still have a 3D character, which is the case as long as the s-wave scattering length is small compared to the ground state width of the tight radial confinement, i.e. $a_s\ll \sqrt{\hbar/m\omega_\perp}$, with $m$ denoting the mass of the atoms. Interesting effects like confinement-induced resonances can occur when this assumption is no longer valid~\cite{Olshanii1998,Haller2010}.
Such highly-anisotropic trap configurations can be created in strongly-focussed optical dipole traps~\cite{Dettmer2001,Billy2008,Serwane2011}, optical lattices \cite{Paredes04,Kinoshita06,Morsch2006,Bloch2008} or in magnetic micro traps \cite{Folman2002,Reichel2011}. In our experiments, we rely on the latter because micro traps, as we will see below, allow for a particularly precise and convenient preparation of non-equilibrium states. Typical trap frequencies in our setup are $\omega_\perp = 2\pi\cdot 2\,$kHz in the tightly-confining radial directions and $\omega_\mathrm{ax} = 2\pi\cdot 10\,$Hz in the weakly-confining axial direction. The 1D Bose gas is created in this trap by evaporative cooling of an elongated 3D thermal cloud through the condensation crossover and then further into the 1D regime.
While the preparation of an ultracold 1D Bose gas is similar to the one of an ultracold 3D Bose gas, significantly different physics arise once the gas enters the 1D regime. The Mermin-Wagner theorem~\cite{Mermin1966} tells us that no off-diagonal long-range order can emerge due to the enhanced role of fluctuations in 1D. Consequently, there is no macroscopic occupation of the lowest momentum mode even at $T=0$. Thus no true Bose-Einstein condensation is possible. Instead a large number of distinct degenerate regimes emerges \cite{Petrov2000,Kheruntsyan2003}, which might or might not share some of the familiar features of a Bose-Einstein condensate.
In the homogeneous limit the system is described by the Lieb-Liniger Hamiltonian~\cite{Lieb63}
\begin{align}
\hat H = \frac{\hbar^2}{2m} \int dz&\,\frac{\partial\hat\Psi^\dagger(z)}{\partial z}\frac{\partial \hat\Psi(z)}{\partial z}\,+\nonumber\\
&+\frac{g}{2}\int dz\, dz^\prime\, \hat\Psi^\dagger(z)\hat\Psi^{\dagger}(z^\prime)\delta(z-z^\prime)\hat\Psi(z^\prime)\hat\Psi(z)\label{eq:LiebLiniger},
\end{align}
where the $\hat\Psi(z)$ denote bosonic field operators. The Lieb-Liniger Hamiltonian is a prime example of a so-called integrable model \cite{Lieb63,Lieb63b,Yang69,SutherlandBook}. Such models are characterized by a large number of conserved quantities and have historically been an important topic in mathematical physics. Experiments with 1D Bose gases can thus provide a link between the corresponding deep mathematical insights and physical reality. Most notably, the conserved quantities have a profound influence on the non-equilibrium dynamics of these systems, which makes them particularly interesting for the study of relaxation and thermalization processes~\cite{Rigol2007,Caux13}.
The interaction strength in Eq.~\eqref{eq:LiebLiniger} can be parameterized by the Lieb-Liniger parameter $\gamma = m g/\hbar^2n_\mathrm{1d}$. Notably, the interaction strength increases for decreasing particle densities $n_\mathrm{1d}$. For $\gamma \gg 1$ the gas is in the strongly-interacting Tonks-Girardeau regime~\cite{Paredes04,Kinoshita04}. All experiments presented in these notes are performed with $\gamma \ll 1$, where the gas is a weakly-interacting quasi-condensate. In this regime density fluctuations are suppressed and the density distribution is similar to the Thomas-Fermi profile of a BEC. However, the phase fluctuates strongly along the length of the system.
The suppression of density fluctuations allows us to employ a generalized version of the well-known Bogoliubov expansion even though there is no macroscopically occupied mode~\cite{Mora2003}. To that end, we express the field operators in terms of density and phase operators
\begin{equation}
\hat\Psi(z) = e^{i\hat\theta(z)} \sqrt{n_\mathrm{1d} +
\hat n(z)},\label{eq:field_operator}
\end{equation}
which satisfy the bosonic commutation relation
\begin{equation}
[\hat n(z),\,\hat\theta(z^\prime)] = i\delta(z-z^\prime).\label{eq:commutator}
\end{equation}
Inserting this definition into the Hamiltonian in Eq.~\eqref{eq:LiebLiniger} leads to a quadratic model describing the low-energy limit of the system. The result is known as the Luttinger liquid Hamiltonian
\begin{equation}
\hat H = \frac{\hbar c}{2} \int dz \bigg[\frac{K}{\pi} \bigg(\frac{\partial\hat\theta(z)}{\partial z}\bigg)^2 + \frac{\pi}{K} \, \hat n(z)^2\bigg] = \sum_k \hbar \omega_k \hat a^\dagger_k \hat a_k.
\label{eq:luttinger}
\end{equation}
The parameters in this Hamiltonian are the speed of sound $c = \sqrt{g n_\mathrm{1d}/m}$ and the Luttinger parameter $K =\sqrt{n_\mathrm{1d}(\hbar\pi)^2/4gm}$. The corresponding eigenmodes are non-interacting phonons with momentum $k$, linear dispersion relation $\omega_k = ck$ and energies $\hbar \omega_k$. The creation and annihilation operators $\hat a_k$ and $\hat a^\dagger_k$ define the phonon occupation number $\hat n_k = \hat a^\dagger_k\hat a_k$. They are directly related to the Fourier components of density and phase via \begin{align}
\hat n_k &\sim\left(\hat a_k(t)+\hat a_{-k}^\dagger(t)\right) & \hat \theta_k &\sim\left(\hat a_k(t)-\hat a_{-k}^\dagger(t)\right).
\end{align}
One therefore also speaks of the phase and density quadrature of a phonon. Finally, we note that, besides cold atoms, the Luttinger liquid Hamiltonian also plays an important role in both bosonic and fermionic condensed matter systems~\cite{Bockrath1998,Blumenstein2011,Jompol2009a,Deshpande2010}.
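To get a feeling for the orders of magnitude involved, the following minimal Python sketch evaluates $\gamma$, $c$, $K$ and the lowest phonon frequencies from the formulas above. All numerical inputs (a $^{87}$Rb gas with an assumed line density, and a 1D coupling obtained from the standard approximation $g\approx2\hbar\omega_\perp a_s$) are illustrative assumptions, not parameters quoted from the experiments in these notes.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34        # J s
m    = 1.44316060e-25         # kg, mass of 87Rb
a_s  = 5.2e-9                 # m, approximate 87Rb scattering length
omega_perp = 2 * np.pi * 2e3  # rad/s, assumed transverse trap frequency
n1d  = 50e6                   # 1/m, assumed line density (50 atoms/micron)

g = 2 * hbar * omega_perp * a_s                     # 1D coupling (approx.)
gamma = m * g / (hbar**2 * n1d)                     # Lieb-Liniger parameter
c = np.sqrt(g * n1d / m)                            # speed of sound
K = np.sqrt(n1d * (hbar * np.pi)**2 / (4 * g * m))  # Luttinger parameter

print(f"gamma = {gamma:.1e} (quasi-condensate regime needs gamma << 1)")
print(f"c = {c*1e3:.2f} mm/s, K = {K:.1f}")

# linear dispersion omega_k = c*k for the lowest modes of a 100 um box
k = 2 * np.pi * np.arange(1, 6) / 100e-6
print("omega_k/2pi [Hz]:", np.round(c * k / (2 * np.pi), 1))
\end{verbatim}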
\section{Creating a non-equilibrium state}
\label{sec:creating_non_equ_state}
As we have already noted above, the main tool in all experiments that are presented in these notes is a magnetic micro trap. This micro trap is realized using an atom chip \cite{Reichel2011}, a collection of current-carrying gold wires, which are micro-fabricated onto a silicon substrate. Apart from the possibility to create traps with the necessary aspect ratio to reach the 1D regime (magnetic field gradients scale as $1/r^2$ with the distance $r$ to the current-carrying structure, and micro traps allow the positioning of the atoms at very small distances $r\sim100\,\mu$m), the atom chip also allows for precise dynamical control over the trap parameters. For example, the initial harmonic trap can be transformed transversely into a double-well potential. This is realized by radio-frequency (RF) dressing of the magnetic sub-states of the atoms \cite{Schumm2005}. The RF fields are applied through additional wires on the chip, which, due to their proximity to the atoms, allow for very high RF field amplitudes and precise control over the field polarization.
We use this technique to coherently split a single 1D Bose gas into two halves, thereby creating a non-equilibrium state~\cite{Kitagawa10,Kitagawa11}. The process of splitting is performed fast compared to the axial dynamics in the system so that $t_\mathrm{split} < \xi_\mathrm{h}/c = \hbar/\mu$. Here $\xi_\mathrm{h} = \hbar/mc$ is the healing length, $c = \sqrt{\mu/m}$ the speed of sound and $\mu$ the chemical potential. The fast splitting assures that no correlations can build up along the axial direction such that the splitting happens independently at each point in the gas. The process can be intuitively pictured as a local beam splitter where each atom is independently distributed into the left or right half of the new system. The corresponding probability distribution for the local number of particles $N$ on each side is therefore binomial
\begin{equation}
P(N_l,N_r) = \binom{N_l + N_r}{N_l} p_1^{N_l}(1-p_1)^{N_r},
\end{equation}
with $p_1 = 1/2$ for a balanced splitting process. The resulting fluctuations in one half of the system are thus given by $\mathrm{Var} [N_{l,r}] = N \, p_1 \, (1-p_1)$, which translates into $\langle |\Delta N|^2\rangle = N/4$ for $\Delta N = (N_l - N_r)/2$ in the balanced case. \Fig{splitting_initial_conditions_binomial_atomnumber} illustrates this process.
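The beam-splitter picture is easy to check numerically. The following sketch (with an assumed atom number) draws binomial splitting outcomes and verifies that the variance of $\Delta N=(N_l-N_r)/2$ approaches $N/4$; it illustrates the statistics only, not the actual splitting dynamics.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)
N = 4000          # assumed total atom number
shots = 100000    # number of simulated splitting events

N_l = rng.binomial(N, 0.5, size=shots)  # atoms ending up in the left gas
N_r = N - N_l
dN = (N_l - N_r) / 2

print("Var[dN] =", dN.var())   # sample variance of the simulated imbalance
print("N/4     =", N / 4)      # binomial prediction for p_1 = 1/2
\end{verbatim}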
\begin{figure}[tb]
\centering
\includegraphics[width=0.85\textwidth]{images/splitting_initial_conditions_binomial_atomnumber_corr.pdf}
\caption{\textbf{Local number fluctuations.} \textbf{(a)} Schematic representation of number and phase fluctuations in a 1D Bose gas. The boxes indicate a coarse graining on the length scale of the healing length. \textbf{(b)} The splitting distributes the atoms on each of these grid points binomially between the two wells. This results in number fluctuations with a variance of $N/4$ (see text) in each gas. These fluctuations correspond to an energy which is added to the relative degrees of freedom of the system during the splitting. Figure adapted from~\cite{LangenThesis}.}\label{fig:splitting_initial_conditions_binomial_atomnumber}
\end{figure}
Once we can speak of two spatially separated systems we can perform a variable transformation to anti-symmetric and symmetric degrees of freedom, which will help us to better describe the quantum state after the splitting. In the following these will also be referred to as relative and common degrees of freedom. Starting from the density and phase fluctuations in the left and right halves (denoted by $\hat n_{l,r}(z)$ and $\hat\theta_{l,r}(z)$, respectively) we find
\begin{equation}
\hat\phi(z) = \hat\theta_r(z)-\hat\theta_l(z) \quad\textrm{,}\quad \hat\phi_\mathrm{com}(z)=\frac{\hat\theta_r(z)+\hat\theta_l(z)}{2}
\label{eq:relativephase}
\end{equation}
for the phase, and
\begin{equation}
\hat\nu(z)=\frac{\hat n_r(z)-\hat n_l(z)}{2} \quad\textrm{,}\quad \hat\nu_\mathrm{com}(z)=\hat n_r(z)+\hat n_l(z)
\end{equation}
for the density. The usefulness of this approach becomes clear as we return to the shot noise, which now only enters in the relative number fluctuations
\begin{equation}
\langle\hat\nu(z)\hat\nu(z^\prime)\rangle= \frac{n_\mathrm{1d}}{2} \delta(z-z^\prime).
\end{equation}
Here, $n_\mathrm{1d}$ denotes the mean density in a single gas after splitting, which results in the additional factor of $2$ as compared to the binomial fluctuations that were introduced above. Transforming these fluctuations into momentum space gives
\begin{equation}
\langle{\hat\nu_k\hat\nu_{k^\prime}}\rangle= \frac{n_\mathrm{1d}}{2} \delta_{k,-k^\prime}.\label{eq:densityfluct}
\end{equation}
From the commutation relation in Eq.~\eqref{eq:commutator}, we see that the corresponding shot noise introduced to the phase quadrature of the relative modes scales as $1/n_\mathrm{1d}$ and is therefore negligible.
Returning to the Luttinger Hamiltonian (Eq. \eqref{eq:luttinger}), we can identify the amount of energy that is introduced into each individual phononic mode during the splitting process as $g n_\mathrm{1d}/2$, which is typically significantly smaller than the thermal energy of the initial gas. Moreover, as we have just shown, this energy is only stored in the density quadrature of the relative degrees of freedom, while it should be equipartitioned between phase and density quadrature in thermal equilibrium.
The situation is different for the common degrees of freedom, which inherit all thermal excitations that were present in the initial gas before the splitting. The state created by splitting is thus also out of equilibrium in this respect, as the common degrees of freedom contain a lot of thermal energy, while the relative degrees of freedom only contain quantum shot noise.
In experiment, the equilibrium situation can be realized by transforming the harmonic trap into a double well while the gas is still thermal. Further independent evaporative cooling in both wells then results in two degenerate gases with no knowledge of each other, which corresponds exactly to thermal equilibrium. The experiment thus enables the unique possibility to contrast non-equilibrium and thermal states in identical settings.
\section{Probing the quantum state}
\label{sec:probing_the_quantum_state}
Information about the system and its dynamics after the splitting is extracted using standard absorption imaging~\cite{Ketterle99} after releasing the system from the trap. If only a single gas is present it simply expands in time-of-flight (TOF), while a pair of condensates expands, overlaps and forms a matter-wave interference pattern~\cite{Schaff14}. The resulting cloud is subsequently illuminated by a resonant laser beam, casting a shadow that is imaged onto a CCD camera. This method is destructive; therefore, many identical realizations are necessary to probe a time evolution. It is important to note that the tight transversal confinement of the 1D gases leads to a very rapid radial expansion, which results in an immediate dilution of the system. Therefore, interaction effects in the expansion are negligible and the absorption images enable comprehensive insights into the properties of the initial trapped system.
A schematic overview of imaging probes employed in our experiment is shown in \fig{imaging_directions}. In the following we will give a short overview of the insights into the dynamics of the quantum state, which are gained through these probes.
\begin{figure}[tb]
\centering
\includegraphics[width=0.95\textwidth]{images/imaging_doublewell.png}
\caption{\textbf{(a)} Overview of the available probes in our setup. The transversal probe is primarily used to measure temperature by extracting the density ripple patterns forming in TOF (see \sect{density_ripples}). The vertical probe images the full matter-wave interference pattern containing the entire relative phase field of the two gases (see \sect{full_distribution_functions} and \ref{sec:phase_correlation_functions}). The longitudinal probe records the interference pattern integrated along the 1D direction. It can also be used to measure the number balance by intentionally separating the clouds using a momentum transfer during the trap switch-off. \textbf{(b)} Examples of interference patterns measured with the vertical imaging system right after the splitting ($t=0\,$ms) and after time evolution ($t>0\,$ms). The bending of the fringes reflects the randomization of the relative phase during the dynamics. Figure adapted from~\cite{LangenThesis}.
}
\label{fig:imaging_directions}
\end{figure}
\subsection{Density ripples}
\label{sec:density_ripples}
As we have discussed above, fluctuations play a central role in the physics of 1D Bose gases. It is thus essential that our method allows the probing of a single realization of a 1D Bose gas. In this way, repeating the experiment many times not only gives access to the dynamics but also to the statistical distribution of the fluctuations. It is thus possible to obtain a much deeper insight into the quantum states than would be possible if only mean values of observables could be measured.
A single quasi-condensate that is released and expands in TOF forms strong density speckles along the 1D axis (see \fig{density_ripples}a). These speckles are a direct consequence of the fluctuating phase in the trapped system. In fact, the corresponding gradient $\nabla\theta(z)$ can be interpreted as a velocity field. During the expansion, this stochastic velocity field is converted into density modulations, producing a characteristic speckle pattern atop the average density profile. Analyzing the correlations in these patterns and comparing them to simulated results obtained from an Ornstein-Uhlenbeck stochastic process allows us to determine the temperature of the gas~\cite{Imambekov2009,Manz2010}, as shown in \fig{density_ripples}b. This is a powerful tool that also works well for 2D systems~\cite{Mazets2012,Langen2013b}. In the experiments it is primarily used to characterize the initial gas before the splitting. However, it can also be used for the study of the evaporative cooling process~\cite{Grisins2014,Rauer2015} or thermalization (see \sect{long_term_evolution}).
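A minimal numerical illustration of the idea behind this thermometry is sketched below: a random thermal phase profile is attached to a flat density profile and the order parameter is propagated freely in TOF (interactions neglected, as argued above). The phase statistics (a simple random-walk model with coherence length $\lambda_T=\hbar^2 n_\mathrm{1d}/(m k_B T)$) and all numerical values are assumptions for illustration; the quantitative analysis of Refs.~\cite{Imambekov2009,Manz2010} is considerably more involved.
\begin{verbatim}
import numpy as np

hbar, m, kB = 1.054571817e-34, 1.44316060e-25, 1.380649e-23
n1d, T = 50e6, 60e-9                  # assumed density [1/m], temperature [K]
lamT = hbar**2 * n1d / (m * kB * T)   # thermal phase-coherence length

L, M = 100e-6, 4096
z = np.linspace(0, L, M, endpoint=False)
dz = z[1] - z[0]

rng = np.random.default_rng(1)
# thermal phase as a random walk (one common convention):
# <[theta(z+dz)-theta(z)]^2> = dz/lamT
theta = np.cumsum(rng.normal(0.0, np.sqrt(dz / lamT), M))
psi0 = np.sqrt(n1d) * np.exp(1j * theta)    # in-situ order parameter

t = 16e-3                                   # TOF time
k = 2 * np.pi * np.fft.fftfreq(M, d=dz)
kick = np.exp(-1j * hbar * k**2 * t / (2 * m))
psi_t = np.fft.ifft(np.fft.fft(psi0) * kick)  # free 1D propagation

density = np.abs(psi_t)**2
print("ripple contrast:", density.std() / density.mean())
\end{verbatim}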
\begin{figure}[tb]
\centering
\includegraphics[width=0.85\textwidth]{images/ripples_g2.pdf}
\caption{In a TOF measurement the in-situ phase fluctuations transform into density speckle patterns \textbf{(a)}. The correlations in these patterns are used to extract the temperature by fitting them with simulated data \textbf{(b)}. The insets show typical density ripple patterns with the displayed correlations. Figure adapted from~\cite{LangenThesis}.}
\label{fig:density_ripples}
\end{figure}
\subsection{Phase correlation functions}
\label{sec:phase_correlation_functions}
The interference pattern of two quasi-condensates as depicted in the lower panel of \fig{imaging_directions} provides a powerful probe for the dynamics of the system. In our case the relative phase fluctuates along the length of the system. In general, the position of the fringes in an interference pattern is determined by this relative phase between the two interfering waves. The meandering fringe pattern in the images thus directly reflects the local in situ relative phase, which can be reconstructed by fitting each local pixel row in the interference pattern with a sinusoidal function.
Right after the splitting the two halves of the system are almost perfectly phase correlated, as the shot noise energy is introduced only into the density quadrature, but not the phase. The relative phase is almost zero and the fringes are straight. Over time this coherence is lost and the fringe patterns become more random. This loss of coherence is due to a dephasing of the phononic modes in the relative degrees of freedom. To analyze this process it is instructive to study the correlation function of the relative phase field
\begin{align}
C(z,z^\prime) &= \frac{\langle\hat\Psi_l^\dagger(z)\hat\Psi_r(z)\hat\Psi_r^\dagger(z^\prime)\hat\Psi_l(z^\prime)\rangle}{\langle|\hat\Psi_r(z)|^2\rangle\langle|\hat\Psi_l(z^\prime)|^2\rangle} \simeq \langle e^{i\hat\phi(z)-i\hat\phi(z^\prime)}\rangle.
\label{eq:pcf}
\end{align}
Here, $\hat\Psi_{l,r}$ correspond to the field operators of the left and right gas and $z$ and $z^\prime$ are two points along the axial direction of the system. In the last step we have assumed that density fluctuations can be neglected, which is a very good approximation in the quasi-condensate regime. In the experiment, the expectation value is realized by averaging over many identical realizations.
For the coherent phase field right after splitting the correlation function is close to one over all relative
distances $\bar z = z - z^\prime$. After approximately $15\,$ms the system settles into a steady state, where correlations decay exponentially with $\bar z$. For a 1D Bose gas this exponential decay corresponds to thermal correlations, with the characteristic length scale of the decay $\lambda$ being directly related to the temperature $T$ via $\lambda=\hbar^2 n_\mathrm{1d}/(m k_B T)$. However, while showing characteristic thermal-like correlations, the relaxed state is markedly different from thermal equilibrium, as its temperature $k_B T_\mathrm{eff} = g n_\mathrm{1d}/2$ can be identified with the shot noise energy that was introduced during the splitting process. It is thus significantly smaller than the initial temperature $T$ of the system. At the same time, the common degrees of freedom still show a temperature comparable to $T$. The system has thus not fully thermalized, but rather reached a prethermalized state \cite{Gring2012, Kuhnert2013a}, where it already exhibits certain thermal-like features such as a temperature. The physical reason behind this is that common and relative degrees of freedom fully decouple in the low-energy limit for a balanced splitting. No energy can be exchanged, so the system can never fully forget its initial state.
Microscopically this dephasing process can be well understood within the Luttinger description. All energy is initially stored in the density quadrature and all phonons are initialized in phase. During the time evolution the energy of each mode oscillates between density and phase with the momentum-dependent frequency $\omega_k$, which eventually leads to a dephasing. The thermal nature arises from the occupations of the modes. Because of the linear dispersion relation we find that the splitting prepares the relative degrees of freedom with occupation numbers that decay as $1/k$ for increasing momentum $k$. All modes thus obtain the same amount of energy from shot noise, which, after dephasing, makes the state indistinguishable from a thermal state with the corresponding temperature.
More insights can be obtained by studying the details of the correlation functions during the relaxation process. Their evolution is plotted in \fig{phase_correlation_functions}a~\cite{Langen2013}. For a given point in time the correlations decay exponentially up to a certain crossover distance $z_c$, beyond which the long-range order of the initial state prevails. The evolution of this crossover point, plotted in \fig{phase_correlation_functions}b, is linear, revealing that the exponentially decaying correlations spread through the system in light-cone-like dynamics with a characteristic velocity. This process is driven by the dephasing of the phononic modes of the initial state. Short-wavelength modes dephase faster than long-wavelength modes, leading to the characteristic spread. The velocity can be identified with the speed of sound of the phonons, which thus act as carriers of information in the system. This observation provides a direct connection between the establishment of thermal properties and the propagation of correlations in a quantum many-body system. The underlying principles are even more general and also govern the distribution of entanglement, with profound implications, e.g. for quantum information science and computer simulations of complex materials~\cite{Lieb72,Cheneau12,Eisert2010}.
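The light cone already emerges in a simple classical mode sum. In the sketch below each relative mode carries the same energy, so that the dephased phase spectrum scales as $1/k^2$, and all modes start in the density quadrature; the relative-phase variance then behaves as $\sum_k(1-\cos k\bar z)\sin^2(ckt)/k^2$, growing linearly with $\bar z$ up to $\bar z\approx 2ct$ and saturating beyond. The overall normalization, the parameter values and the crude crossover detection are placeholders.
\begin{verbatim}
import numpy as np

L, c = 100e-6, 2e-3        # assumed system length [m] and sound speed [m/s]
j = np.arange(1, 2000)
k = 2 * np.pi * j / L      # phonon momenta of the homogeneous system

def dphi_var(zbar, t):
    # relative-phase variance (arbitrary units): equal energy per mode,
    # all modes initialized in the density quadrature
    return np.sum((1 - np.cos(k * zbar)) * np.sin(c * k * t)**2 / k**2)

t = 5e-3
zbar = np.linspace(1e-6, 60e-6, 120)
var = np.array([dphi_var(zb, t) for zb in zbar])

# crude detection of the crossover: the slope of the variance drops
# once zbar exceeds the light cone
slope = np.gradient(var, zbar)
z_c = zbar[np.argmax(slope < 0.5 * slope[0])]
print(f"expected 2ct = {2*c*t*1e6:.0f} um, detected z_c ~ {z_c*1e6:.0f} um")
\end{verbatim}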
\begin{figure}[tb]
\centering
\includegraphics[width=0.60\textwidth]{images/lightcone_pcfs.pdf}
\caption{\textbf{(a)} Measured phase correlation functions (circles) of the evolution following the splitting together with the Luttinger liquid predictions (solid lines) taking the trap as well as the finite optical resolution into account. The color encodes time going from blue (1 ms after the splitting) to red (9 ms). The green exponential curve is the prediction for the final prethermalized steady state. \textbf{(b)} Evolution of the crossover distance $z_c$ between the exponentially decaying correlations and the plateau with long range order. The linear behavior shows that the thermal correlations appear locally and spread through the system in a light-cone-like fashion. Figure adapted from~\cite{LangenThesis}.}
\label{fig:phase_correlation_functions}
\end{figure}
\subsection{Full distribution functions}
\label{sec:full_distribution_functions}
Another powerful technique to analyze the correlation properties during the relaxation dynamics and especially in the steady state is the full distribution function (FDF) of the interference contrast. To introduce the contrast as an observable we define the operator
\begin{equation}
\label{eq:interference_operator}
\hat A(L)= \int_{-L/2}^{L/2} dz\, \hat\Psi_l{}^\dagger(z,t)\hat\Psi_r{}(z,t),
\end{equation}
which corresponds to the interference term of the bosonic field operators integrated over a length $L$. The magnitude of $\hat A(L)$ is related to the integrated contrast of the interference patterns via $\langle{C^2(L)}\rangle=\langle|\hat A(L)|^2\rangle/n_\mathrm{1d}^2 L^2$. Experimentally, the distribution of the squared contrast normalized by the mean squared contrast, $\alpha = C^2/\langle{C^2}\rangle$, is less prone to systematic errors and therefore favorable. Recording the shot-to-shot fluctuations of this quantity gives us the full distribution function $P(\alpha)d\alpha$, i.e. the probability to observe a contrast in the interval $[\alpha, \alpha + d\alpha]$. The FDFs therefore contain the information about all even moments of the interference operator \eqref{eq:interference_operator} defined above
\begin{equation}
\frac{\langle|\hat {A}|^{2m}\rangle}{\langle|\hat {A}|^{2}\rangle^m}=\langle\alpha^m\rangle=\int_0^\infty P(\alpha) \alpha^m d\alpha.
\end{equation}
Thus, they contain much more information about the quantum state than the two-point correlation function introduced earlier.
\Fig{contrast_and_fdfs}a shows the evolution of the mean squared contrast as a function of time, and \Fig{contrast_and_fdfs}b a comparison of the FDFs of the prethermalized state discussed in \sect{phase_correlation_functions} with the predictions of the Luttinger liquid model. In \fig{contrast_and_fdfs}c the FDFs of a system of two independent condensates in thermal equilibrium are plotted for comparison. Due to the low effective temperature of the prethermalized state its distributions are peaked over long integration lengths, while those of the much hotter thermal state in \fig{contrast_and_fdfs}c decay exponentially over all observed length scales. This illustrates the fact that the steady state reached after splitting is not the thermal equilibrium of the system.
\begin{figure}[htb]
\centering
\includegraphics[width=1\textwidth]{images/contrast_and_fdfs.pdf}
\caption{Contrast dynamics and full distribution functions of a coherently split 1D Bose gas. (a) Measured values of the mean squared contrast for various integration lengths $L$ (points). From top to bottom: $L = 18,40,60,100\,\mu$m. The lines show the results of a Luttinger liquid calculation for these integration lengths. (b) Full distribution functions after relaxation to the prethermalized state. The solid red lines show theoretical equilibrium distributions with an effective temperature of $T_\mathrm{eff} = 14\,$nK, which is significantly lower than the true initial temperature of the gas ($T = 120\,$nK). The prethermalized nature of the state is clearly revealed by comparing it to the vastly different thermal equilibrium situation shown in (c), which can be prepared by creating two completely independent 1D Bose gases. Figure adapted from Refs.~\cite{Gring2012,Kuhnert2013a}.}
\label{fig:contrast_and_fdfs}
\end{figure}
\section{Generalized Gibbs ensemble}
\label{sec:gge}
The fact that the phonon occupations of the system are preserved during the dynamics is deeply rooted in the integrability of the underlying model. Each relative mode acts like a harmonic oscillator that does not interact with, but dephases with respect to, the rest of the system. This is a general feature of an integrable quantum system, where multiple non-trivial quantities are conserved, severely restricting the system's dynamics. This was strikingly visualized in a landmark experiment by Kinoshita et al.~\cite{Kinoshita06}, which realized the quantum analog of the well-known (classical) Newton's cradle. Even after thousands of collisions between its constituents such a system will not reach thermal equilibrium, simply because the momenta are conserved and can thus never reach the values given by the Bose-Einstein distribution.
Nevertheless, it has been conjectured that such systems still relax to a maximum entropy state which is given by the density matrix of a so-called generalized Gibbs ensemble (GGE)~\cite{Rigol2007}
\begin{equation}
\hat\rho = \frac{1}{Z}e^{-\sum \lambda_j \hat I_j}.
\end{equation}
Here, $Z$ is the partition function, $\hat I_j$ are the operators of the conserved quantities and $\lambda_j$ the corresponding Lagrange multipliers. If only energy is conserved, this density matrix reduces to the well-known canonical or Gibbs ensemble, with temperature being the only Lagrange multiplier. If many more conserved quantities exist, like the phonon occupations in the Luttinger liquid model, many generalized temperatures, one for each conserved quantity, are necessary to maximize entropy.
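For the bosonic modes at hand this can be made fully explicit: choosing $\hat I_j=\hat n_j$, the GGE expectation of a mode occupation is $\langle\hat n_j\rangle = 1/(e^{\lambda_j}-1)$, so every conserved occupation fixes its multiplier as $\lambda_j=\ln(1+1/\langle\hat n_j\rangle)$, or equivalently a mode temperature $k_B T_j = \hbar\omega_j/\lambda_j$. The short sketch below illustrates this inversion; the occupations and mode frequencies are assumed values, not measured ones.
\begin{verbatim}
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23
c, L = 2e-3, 100e-6                # assumed sound speed and system length
j = np.arange(1, 6)
omega = c * 2 * np.pi * j / L      # linear phonon dispersion

n_occ = np.array([40.0, 25.0, 12.0, 8.0, 5.0])  # assumed occupations

lam = np.log(1 + 1 / n_occ)        # Lagrange multipliers for I_j = n_j
T_j = hbar * omega / (kB * lam)    # equivalent mode temperatures

for jj, l, T in zip(j, lam, T_j):
    print(f"mode {jj}: lambda = {l:.3f}, T_j = {T*1e9:.1f} nK")
\end{verbatim}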
In our case the occupation numbers of all modes are the conserved quantities. However, the prethermalized state that we have studied so far is a special case of this ensemble, as all temperatures are identical due to the equipartition of energy during the splitting process. To demonstrate the presence of a GGE it is thus necessary to change the splitting process such that different modes exhibit different temperatures. The results are shown in Fig. \ref{fig:GGE}. Again, the relative phase correlation function can be used to characterize the dynamical states of the system. While we were previously showing only one coordinate of this function, plotting the full function provides straightforward insights into the new occupation numbers. The correlation functions show a trivial maximum on the diagonal ($z_1 = z_2$), which arises due to the fact that every point is perfectly correlated with itself. However, a second maximum arises on the anti-diagonal ($z_1 = -z_2$), indicating that points that are located symmetrically around the center of the system are more strongly correlated. In a simplified model, this implies that modes which are symmetric around the center are more strongly occupied than modes which are anti-symmetric around the center. A more detailed analysis of the relaxed state allows us to extract all mode occupations that are necessary to describe the state~\cite{LangenGGEarxiv}. Given these extracted occupation numbers, the dephasing model also provides a detailed description of the dynamics, which proves that the conserved quantities were indeed set during the splitting process.
Most importantly, these observations visualize, both experimentally and theoretically, how the unitary evolution of our quantum many-body system connects to a steady state that can be described by a thermodynamical ensemble.
\begin{figure}[htb]
\centering
\includegraphics[width=1\textwidth]{images/fig2.png}
\caption{Relaxation dynamics of a coherently split 1D Bose gas with different populations for different modes. Two-point correlation functions $C(z,z')$ for increasing evolution time, showing maxima on the diagonal and the anti-diagonal. The experimental observations (top row) are in very good agreement with the theoretical model (bottom row) demonstrating the presence of many different temperatures in the system. Figure adapted from~\cite{LangenThesis,LangenGGEarxiv}.}
\label{fig:GGE}
\end{figure}
\section{Dynamics beyond prethermalization}
\label{sec:long_term_evolution}
In sections~\ref{sec:probing_the_quantum_state} and \ref{sec:gge} of these notes we demonstrated that the 1D Bose gases realized in experiment do not relax to thermal equilibrium but to a prethermalized state that can be described by a generalized Gibbs ensemble. This behavior is rooted in the integrability of the Lieb-Liniger model and its low-energy approximation, the Luttinger liquid model. However, the 1D Bose gas realized in our experiments is only nearly integrable. On the one hand, radially excited states can affect the 1D dynamics; on the other hand, the harmonic trap breaks the integrability of the Lieb-Liniger model (while integrability is still retained in the trapped Luttinger liquid model~\cite{Geiger2014}).
It has been conjectured that in this case the observed prethermalized state is only an intermediate steady state on the way to thermal equilibrium, its lifetime being directly related to the degree of integrability breaking~\cite{Kollar11,Stark13}. The analysis of this scenario in the context of classical mechanics has culminated in the important Kolmogorov-Arnold-Moser (KAM) theorem~\cite{Kolmogorov1954}. No complete analogue of this theorem has so far been found in quantum mechanics~\cite{Brandino2014}. Alternatively, a different behaviour has also been suggested, namely that the quasi-particles of the experimentally realized 1D Bose gas could be unaffected by the radial states~\cite{MazetsPrivateComm}, leaving the gas fully integrable. Experimental investigations into this effect are ongoing in our and other groups~\cite{WeissPrivateComm}.
However, even within the coherent dynamics the long-term evolution of the system is expected to show a rich variety of effects, which we will discuss in the following.
\subsection{Recurrences}
\label{sec:recurrences}
We have shown in the previous chapters that the unitary quantum evolution of a 1D Bose gas can lead to the establishment of thermal properties. This does not mean that a true thermal state was reached, but rather that the expectation values of certain observables became indistinguishable from the corresponding thermal values. In this way the predictions of statistical and quantum mechanics are reconciled.
However, in a finite system such as the trapped system we are dealing with, unitarity is still expected to result in observable consequences, as it forces the dynamics to be periodic. The important question is how long the timescale of this periodic behaviour will be. In the context of our experiments, periodic behavior would correspond to a rephasing of the phonons (and thus a reestablishment of coherence) after a finite time, which would be observable as a phase correlation function close to one, $C(\bar z)=1$, over all distances $\bar z$.
In a homogeneous system the time between these recurrences can be estimated as $t_{rec} = L/2c$, which corresponds to twice the time to reach the perfectly dephased prethermalized state. For typical parameters $t_{rec}\sim 30\,$ms. Surprisingly, no signs of these recurrences are observed in experiment.
The reason for this lies in the mode structure of the trapped system. While in the homogeneous case the mode energies are equally spaced $\omega_k = ck$, the modes in a harmonically trapped condensate are described by Legendre polynomials~\cite{Petrov2004}. This leads to the modified dispersion relation
\begin{equation}
\omega_j = \omega_\mathrm{ax} \sqrt{j(j+1)/2},
\end{equation}
where $\omega_\mathrm{ax}$ is the trap frequency of the axial harmonic confinement and $j$ is the mode index.
For the given parameters the incommensurate mode frequencies shift the first significant revival in the trapped case to about $200\,$ms, which is challenging to study in experiment~\cite{Geiger2014}. \Fig{revival_pcfs} shows a comparison of the phase correlation dynamics after splitting for the homogeneous and the trapped case. While the initial dephasing dynamics is very similar in both traps, the revival structure is quite different, as expected from the dispersion relations. A classical analogy for these dynamics is the behavior of a collection of uncoupled pendula of different lengths, which only rephase if their frequencies are commensurate.
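The qualitative difference between the two spectra can be explored with a toy rephasing measure that averages the dephased mode contributions $\sin^2(\omega_j t/2)$ and scans for times at which all modes simultaneously return close to their initial quadrature. The mode cutoff, the $1/j$ weighting and the scan window are ad-hoc assumptions; the full analysis, including the trapped mode functions, is given in Ref.~\cite{Geiger2014}.
\begin{verbatim}
import numpy as np

omega_ax = 2 * np.pi * 7.0         # axial trap frequency [rad/s]
j = np.arange(1, 21)               # lowest 20 modes (assumed cutoff)

omega_hom  = omega_ax * j                            # commensurate ladder
omega_trap = omega_ax * np.sqrt(j * (j + 1) / 2)     # trapped dispersion

t = np.linspace(1e-3, 0.3, 30000)
w = 1 / j                          # stronger weight on long wavelengths

def dephasing(omega):
    # 0 if all modes are rephased, > 0 otherwise
    return (np.sin(np.outer(t, omega) / 2)**2 * w).sum(axis=1) / w.sum()

for name, om in [("homogeneous", omega_hom), ("trapped", omega_trap)]:
    d = dephasing(om)
    i = np.argmin(d[t > 0.05]) + np.searchsorted(t, 0.05)
    print(f"{name}: best rephasing at t = {t[i]*1e3:.0f} ms "
          f"(residual {d[i]:.2f})")
\end{verbatim}
For the commensurate ladder this measure returns to zero at multiples of $2\pi/\omega_\mathrm{ax}\approx143\,$ms, while the trapped spectrum only admits partial rephasings.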
To actually measure recurrences in experiment it would be beneficial to trap atoms in a box shaped potential. Flat bottom traps were recently realized for 3D and 2D systems \cite{Gaunt2013, Chomaz2014}. They are a powerful tool to investigate Bose-Einstein condensation or the Kibble-Zurek mechanism unperturbed by trap effects.
\begin{figure}[htb]
\centering
\includegraphics[width=1\textwidth]{images/revivals.png}
\caption{Time evolution of the relative phase correlation function for the homogeneous (left, a and c) and trapped (right, b and d) systems. The color-scale indicates the degree of correlation (red: high correlation, blue: low correlation). The top row illustrates the relaxation to the prethermalized state. In the homogeneous case, the initial state is re-established at times which are multiples of the system length divided by the characteristic velocity. In the trapped case, the recurrences are only partial and the more complex structure is due to the incommensurate ratios of the mode frequencies. In this time window ($0-300\,$ms), the strongest recurrence is observed at $202\,$ms (here, $\omega_\mathrm{ax} = 2\pi\cdot 7\,$Hz). Reproduced from~\cite{Geiger2014}.}
\label{fig:revival_pcfs}
\end{figure}
\subsection{Imbalanced splitting}
\label{sec:imbalance}
Another relaxation mechanism that is captured by the low-energy description is the dephasing due to imbalances in the splitting process. In practice, the two wells of the double-well potential can never be perfectly balanced during the splitting process. This leads to relative fluctuations of the overall number of atoms in each well. The gas which ends up with more atoms is characterized by a slightly higher chemical potential and speed of sound. These relative differences lead to a dephasing of the two gases with a characteristic velocity $c^\prime = (c_l - c_r)/2$~\cite{LangenThesis,Kitagawa11}. If the atom number difference between the two gases is very small this process will thus be much slower than the initial relaxation to the prethermalized state. However, on long time-scales it will lead to a state in which common and relative degrees of freedom share the same temperature. For an observer, the state will thus be indistinguishable from thermal equilibrium, highlighting again the importance of dephasing and the role of observation for the understanding of thermalization.
\section{Application: Interferometry with squeezed states}
We will end with an application of the well-characterized matter-wave interferometer that we have introduced during the course of these notes. With this we aim to indicate how the fundamental research on non-equilibrium dynamics might also have immediate technological impact in the near future.
The binomial splitting of a single gas that we discussed above is only a good approximation in the limit of non-interacting atoms. Even for weak interactions, such as the ones present here for $^{87}$Rb, the splitting has to be very fast to reach the binomial splitting limit. For a slower splitting, interactions will start to play a role and lead to the development of correlations between the atoms. These correlations are a valuable resource for precision measurement devices.
Experimentally, the speed of the splitting can easily be controlled using the atom chip. While an infinitely fast splitting leads to a relative atom number variance of $N$, the variance resulting from a splitting taking a finite time is reduced by the so-called squeezing factor $\xi_N^2 = \sigma_N^2/N$, where $\sigma_N$ is the standard deviation of the relative number distribution. The slower the splitting, the lower the factor $\xi_N$ and thus the stronger the squeezing. The corresponding spin squeezing factor $\xi_S = \xi_N/\langle \cos\phi \rangle$ ($\phi$ again denoting the relative phase) can be understood as an entanglement witness, i.e. an observable that signals the presence of genuine multi-particle entanglement~\cite{Sorenson01}. The presence of this entanglement in the states created by the splitting leads to a gain in measurement precision which cannot be achieved with classical states~\cite{Gross10}.
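Both factors are simple moments of the measured data. Given simultaneously recorded atom numbers and relative phases, they could be estimated along the following lines; the synthetic Gaussian data, the assumed atom number and the assumed squeezing level merely stand in for real measurements.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
N, shots = 2000, 400               # assumed mean atom number, repetitions

xi_N_true = 10**(-6 / 20)          # assume 6 dB of number squeezing
n_rel = rng.normal(0, xi_N_true * np.sqrt(N), shots)  # N_l - N_r per shot
phi = rng.normal(0, 0.3, shots)    # relative phase per shot [rad]

xi_N = np.sqrt(n_rel.var() / N)        # xi_N^2 = Var(N_l - N_r) / N
xi_S = xi_N / np.mean(np.cos(phi))     # spin squeezing factor

print(f"xi_N^2 = {20*np.log10(xi_N):+.1f} dB")
print(f"xi_S^2 = {20*np.log10(xi_S):+.1f} dB (< 0 dB witnesses entanglement)")
\end{verbatim}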
To actually utilize number squeezing in our setup we need to devise a full interferometric sequence. Apart from the splitting process already described at length, we need two further building blocks to achieve this. The first is a mechanism to introduce a relative phase shift between the two arms of the interferometer, which emulates the measurement signal in a possible application. Experimentally we realize this by tilting the double-well potential after splitting, so that the two gases experience a different gravitational potential and accumulate a phase difference. Varying the time $t_\phi$ for which the system is kept in this state controls the overall phase shift. As a second building block, we need to employ a recombiner that allows for measurements of the relative atom number in the two arms of the interferometer (in analogy to the second beam splitter in an optical Mach-Zehnder interferometer). This can be achieved by accelerating the two gases onto each other while keeping a barrier between them that is small enough to allow for inter-well tunneling. In this process the relative phase is mapped to a relative population difference, just like in the case of two wave packets that simultaneously impinge on a semi-reflective barrier from different sides. In addition, the relative phase $\phi$ between the two wells can also be measured using the standard matter-wave interference procedure that was already described in \sect{probing_the_quantum_state}.
\Fig{tarik} shows the experimentally observed population imbalance of a squeezed initial state as a function of the phase accumulation time $t_\phi$. The fringe contrast of the average values is damped due to phase diffusion. Naively, this phase diffusion would be expected to be much more severe, but the presence of a long-lived prethermalized state limits its deteriorating effects. Comparing the observed decay time to the one expected for a coherent state of $\xi_N = 1$ (dashed line) illustrates the gain in interferometric precision when using a squeezed input state. The best spin squeezing achieved in this setup is $\xi_S^2 = -7.8 \pm 0.8\,$dB \cite{Berrada2013}, corresponding to genuine multi-particle entanglement of $150$ atoms. This result could in the future be improved by the use of optimized splitting ramps~\cite{Grond2010}, and outlines the way for interferometric sensing of local forces in atom chip configurations.
\begin{figure}[htb]
\centering
\includegraphics[width=1\textwidth]{images/tarik_2.png}
\caption{Output signal of the integrated Mach-Zehnder interferometer. The normalized population difference $z = (N_l-N_r)/(N_l+N_r)\equiv n/N_t$ between the two wells is measured as a function of time $t_\phi$. It exhibits interference fringes and a damping due to phase diffusion. Grey dots: imbalance of individual experimental realizations; black dots: ensemble average $\langle z \rangle$; red curve: theoretical prediction taking into account phase diffusion; dashed black line: expected signal for a classical coherent state without squeezing. Reproduced with permission from~\cite{Berrada2013}.}
\label{fig:tarik}
\end{figure}
\section{Conclusion}
The relaxation of isolated quantum many-body systems is a major unsolved problem connecting statistical and quantum physics. Understanding such relaxation processes remains a challenge despite considerable efforts.
Experiments with ultracold quantum gases (in general) and 1D Bose gases (in particular) allow the realization and manipulation of well-controlled and truly isolated quantum systems. As we have shown, this provides unique opportunities to study and understand non-equilibrium phenomena. For example, the results discussed in these notes demonstrate for the first time several characteristic aspects of these dynamics, including the existence of a stable, thermal-like prethermalized state and its dynamical, light-cone-like emergence. Furthermore, the connection of the prethermalized state with generalized statistical ensembles, and thus of the unitary quantum evolution and statistical mechanics was highlighted. The progress in this field is rapid and we expect it to continue to have profound implications for our understanding of isolated quantum many-body systems.
\section{Acknowledgements}
This work was supported by the EU (SIQS and ERC advanced grant Quantum-Relax). B.R. and T.S. acknowledge the support by the Austrian Science Fund (FWF) through the Doctoral Program CoQuS (W1210) and through the SFB FoQuS.
\bibliographystyle{bibtex/varenna}
\section{Introduction}
Polynomial interpolation is the problem of constructing a polynomial $p$, belonging
to a finite-dimensional polynomial subspace, that agrees with a given function $f$ on a given data set.
Univariate polynomial interpolation has a well-developed theory,
while the multivariate one is very problematic, since a multivariate
interpolation polynomial is determined not only by the cardinality but
also by the geometry of the data set, cf. \cite{dBo94, GS00:2}.
As an elegant form of multivariate approximation, ideal interpolation
provides a natural link between multivariate polynomial interpolation and
algebraic geometry\cite{She2009}. The study of
ideal interpolation was initiated by Birkhoff \cite{Bir1979}
and continued by several authors \cite{GS00:2,
dBo2005, She2009, LZD2011}.
Actually, ideal interpolation is an \emph{ideal projector} on the polynomial ring, namely a projector whose
kernel is an ideal. When the kernel of an ideal projector $P$ is the vanishing ideal of
a certain finite nonempty set $\Xi$ in $\mathbb{R}^d$, $P$ is a \emph{Lagrange
projector} on $\mathbb{R}[\bm{x}]:=\mathbb{R}[x_1,\ldots,x_d]$, the
polynomial ring in $d$ variables over $\mathbb{R}$, which
provides the Lagrange interpolation on $\Xi$. Obviously, $P$ is
finite-dimensional since its range is a $\#\Xi$-dimensional subspace
of $\mathbb{R}[\bm{x}]$. Lagrange projectors are standard
examples of ideal projectors.
It is well-known that every univariate ideal projector is a
\emph{Hermite projector}, namely the pointwise limit
of a sequence of Lagrange projectors. This inspired Carl de Boor\cite{dBo2005} to
conjecture that every finite-dimensional linear operator on
$\mathbb{C}[\bm{x}]$ is an ideal projector if and only if it is Hermite.
However, Boris Shekhtman\cite{BS2006} disproved this conjecture when the dimension $d\geq 3$.
In the same paper, Shekhtman also showed that the conjecture is
true for bivariate complex projectors with the help of Fogarty
Theorem (see \cite{Foga1968}). Later, using linear algebra tools
only, de Boor and Shekhtman\cite{deBoorShe2008} reproved the same result. Furthermore,
Shekhtman\cite{BS2008} completely analyzed the bivariate ideal projectors
which are onto the space of polynomials of degree less than $n$ over
the real or complex field, and verified the conjecture in this
particular case.
Let $P$ be an ideal projector that only interpolates a function and its partial derivatives. Obviously, many classical multivariate interpolation projectors, which have applications in many fields of mathematics and science, cf.\cite{Lor1992}, are examples of such $P$. Naturally, we wonder whether de Boor's conjecture is true for $P$ or not.
In this paper, a positive answer is offered to this question by Theorem \ref{mianthm} of Section \ref{mainresult}, which states that there exists a positive $\eta\in \mathbb{R}$ such that $P$ is the pointwise limit of a sequence of Lagrange projectors that are perturbed from $P$ by up to $\eta$ in magnitude; the proof of the theorem is postponed to Section \ref{proof}, the last section of the paper. A further natural question is how to determine the value of $\eta$. In Section \ref{mainresult} we propose an algorithm for computing the value of such an $\eta$ when the range of the Lagrange projectors is spanned by the Gr\"{o}bner \'{e}scalier of their kernels w.r.t. the lexicographic order. Section 4 is then dedicated to some examples illustrating the algorithm. The next section, Section 2, serves as a preparation for the rest of the paper.
\section{Preliminaries}\label{s:pre}
In this section, we will introduce some notation and review some
basic facts related to ideal projectors. For more details, we refer
the reader to \cite{dBo2005, She2009, deboor2006}.
Throughout the paper, we use $\mathbb{N}_0$ to stand for the monoid of
nonnegative integers and boldface type for tuples with their entries
denoted by the same letter with subscripts, for example,
$\bm{\alpha}=(\alpha_1,\ldots, \alpha_d)$.
Henceforward, we use $\leq$ to denote the usual product order on
$\mathbb{N}_0^d$, that is,
for arbitrary
$\bm{\alpha}$, $\bm{\beta}\in\mathbb{N}_0^d$, $\bm{\alpha}\leq \bm{\beta}$ if and only if $\alpha_i\leq \beta_i, i=1,\ldots, d$.
A finite nonempty set $\mathfrak{\Delta}\subset \mathbb{N}_0^d$ is called \emph{lower} if
for every $\bm{\alpha}\in \mathfrak{\Delta}$, $\bm{0}\leq\bm{\beta}\leq \bm{\alpha}$ implies $\bm{\beta}\in \mathfrak{\Delta}$.
A \emph{monomial} ${\bm{x}}^{\bm{\alpha}}\in \mathbb{R}[\bm{x}]$ is
a power product of the form $x_1^{\alpha_1}\cdots x_d^{\alpha_d}$
with $\bm{\alpha}\in \mathbb{N}_0^d$. Thus, a \emph{polynomial} $p$ in
$\mathbb{R}[\bm{x}]$ can be expressed as a linear combination of monomials from $\mathrm{Supp}(p)$, the support of $p$, as follows,
\begin{equation}\label{pform}
p=\sum\limits_{\bm{\alpha}}\widehat{p}({\bm{\alpha}})
{\bm{x}}^{{\bm{\alpha}}}
\end{equation}
where $\widehat{p}({\bm{\alpha}})\in \mathbb{R}\backslash\{0\}$. For
$\bm{i}\in \mathbb{N}_0^d$ and $p \in \mathbb{R}[\bm{x}]$, if there
exists a monomial ${\bm{x}}^{\bm{\alpha'}}$ in $\mathrm{Supp}(p)$ such that $\bm{\alpha'} < {\bm{i}}$, then we
denote this fact as $p <_m \bm{i}$.
Let $P$ be a finite-dimensional ideal projector on $\mathbb{R}[\bm{x}]$.
The range and the kernel of $P$ are denoted by $\mathrm{ran}P$ and $\mathrm{ker}P$ respectively.
Furthermore, $P$ has a dual
projector $P'$ on $\mathbb{R}'[\bm{x}]$, the algebraic dual of $\mathbb{R}[\bm{x}]$, whose
range can be described as
$$\mathrm{ran}P'=\{\lambda \in \mathbb{R}'[\bm{x}]: \mathrm{ker}P\subset \mathrm{ker}\lambda \},$$
which is the set of interpolation conditions matched by $P$. Assume that $\Lambda\subset\mathbb{R}'[\bm{x}]$ is an
$\mathbb{R}$-basis for $\mathrm{ran}P'$, then
$$\mathrm{ker}\Lambda:=\{f\in \mathbb{R}[\bm{x}]: \lambda(f)=0, \forall\ \lambda\in \Lambda\}=\mathrm{ker}
P.$$
We denote by
$\mathbb{T}^d$ the monoid of all monomials in $\mathbb{R}[\bm{x}]$.
For each fixed monomial order $\prec$ on $\mathbb{T}^d$,
a nonzero polynomial $f \in \mathbb{R}[{\bm{x}}]$ has a unique \emph{leading
monomial} $\mathrm{LM}_{\prec}(f)$, which is the $\prec$-greatest monomial appearing in $f$ with nonzero coefficient.
According to \cite{Mor2009}, the monomial set
$$
\mathcal{N}_\prec(\mathrm{ker}\Lambda):=\{{\bm{x}}^{\bm{\alpha}}\in \mathbb{T}^d:
\mathrm{LM}_{\prec}(f)\nmid {\bm{x}}^{\bm{\alpha}}, \forall f \in \mathrm{ker}\Lambda\}
$$
is
the \emph{Gr\"{o}bner \'{e}scalier} of $\mathrm{ker}\Lambda$ w.r.t. $\prec$.
We denote by
$\mathrm{ran}_{\prec}P$ the range of $P$ spanned by the Gr\"{o}bner \'{e}scalier of $\mathrm{ker}\Lambda$ w.r.t. $\prec$.
When $P$ is a Lagrange projector, we have $\mathrm{ker}\Lambda=\mathcal {I}(\Xi)$, the vanishing ideal of some finite nonempty set $\Xi\subset \mathbb{R}^d$.
In 1995, Cerlienco and Mureddu\cite{CM1995} proposed a purely combinatorial algorithm named MB for computing the Gr\"{o}bner \'{e}scalier of $\mathcal {I}(\Xi)$ w.r.t. some lexicographical order on $\mathbb{T}^d$, which is denoted by $\prec_{lex}$ here. Later, Felszeghy, R\'{a}th, and R\'{o}nyai\cite{FRR2006} provided a faster algorithm, the lex game algorithm, by building a rooted tree $T(\Xi)$ of $d$ levels from $\Xi$ in the following way:
\begin{itemize}
\item The nodes on each path from the root to a leaf
are labeled with the coordinates of a point.
\item The root is regarded as the $0$-th level with no label, its
children are labeled with the $d$-th coordinates of the points,
their children with the
$(d-1)$-th coordinates, and so forth.
\item If two points have same $k$
ending coordinates, then their corresponding paths coincide until
level $k$.
\end{itemize}
Given finite nonempty point sets $\Xi^{(1)}$, $\Xi^{(2)}\subset \mathbb{R}^d$ with
$\#\Xi^{(1)}=\#\Xi^{(2)}$. It was shown in \cite{FRR2006} that if $T(\Xi^{(1)})$ and $T(\Xi^{(2)})$ have the same structure, then $\mathcal {N}_{\prec_{lex}}(\mathcal {I}(\Xi^{(1)}))=\mathcal {N}_{\prec_{lex}}(\mathcal {I}(\Xi^{(2)}))$.
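A minimal Python sketch of this construction is given below. It builds the nested tree by reading coordinates from the last one to the first, and compares two trees via their shapes as unordered rooted trees, which is one convenient way to formalize ``same structure''. This only illustrates the data structure; the actual extraction of the Gr\"{o}bner \'{e}scalier from the tree is the subject of the lex game algorithm of \cite{FRR2006}.
\begin{verbatim}
def build_tree(points):
    """Rooted tree of d levels: level k holds the (d-k+1)-th coordinates;
    points sharing their last k coordinates share a path down to level k."""
    root = {}
    for p in points:
        node = root
        for coord in reversed(p):      # d-th coordinate first
            node = node.setdefault(coord, {})
    return root

def shape(node):
    """Shape of a rooted tree as a nested sorted tuple (labels discarded)."""
    return tuple(sorted(shape(child) for child in node.values()))

Xi1 = [(0, 0), (1, 0), (0, 1)]
Xi2 = [(2, 5), (3, 5), (7, 6)]         # same tree structure as Xi1
print(shape(build_tree(Xi1)) == shape(build_tree(Xi2)))   # prints True
\end{verbatim}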
\section{Main results}\label{mainresult}
Let
$$\delta_{\bm{\xi}}: \mathbb{R}[\bm{x}]\rightarrow \mathbb{R}: f\mapsto f(\bm{\xi})$$
denote the evaluation functional at the point
$\bm{\xi}=(\xi_1,\ldots,\xi_d)\in \mathbb{R}^d$, and let
$$\mathrm{D}^{\bm{\alpha}}: \mathbb{R}[\bm{x}]\rightarrow \mathbb{R}[\bm{x}]: f\mapsto \frac{\partial ^{\bm{\alpha}}}{\partial
{\bm{x}}^{\bm{\alpha}}}f:=\frac{\partial ^{\alpha_1+\cdots+\alpha_d}}{\partial
x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}}f$$ be the differential operator with
respect to $\bm{\alpha}=(\alpha_1,\ldots, \alpha_d)\in \mathbb{N}_0^d$ with $\mathrm{D}^{\bm{0}}=\mathrm{I}$, the identity operator on $\mathbb{R}[\bm{x}]$.
\begin{defn}\label{HermiteProjectorde}
Let $P$ be a finite-dimensional ideal projector on $\mathbb{R}[\bm{x}]$. If there exist distinct points
$\bm{\xi}^{(1)},\ldots,\bm{\xi}^{(\mu)}\in \mathbb{R}^d$ and their associated lower
sets
$\mathfrak{\Delta}^{(1)},\ldots,\mathfrak{\Delta}^{(\mu)}\subset
\mathbb{N}_0^d$ such that
\begin{equation}\label{HermiteProjector}
\mathrm{ran}P'=\mathrm{Span}_{\mathbb{R}}\{\delta_{\bm{\xi}^{(k)}}\circ \mathrm{D}^{\bm{\alpha}}: {\bm{\alpha}}\in \mathfrak{\Delta}^{(k)}, 1\leq
k\leq \mu\},
\end{equation}
namely $P$ only interpolates a function and its partial derivatives,
then we call $P$ an \emph{ideal projector of type partial
derivative}.
\end{defn}
As typical examples, Hermite projectors of type \emph{total degree} and of type \emph{coordinate degree} are both ideal projectors of type partial derivative, cf. \cite{Lor2000}.
\begin{lem}\label{delta0condition}
Let $\bm{\xi}^{(1)},\ldots,\bm{\xi}^{(\mu)}\in \mathbb{R}^d$ be
distinct points, and let
$\mathfrak{\Delta}^{(1)},\ldots,\mathfrak{\Delta}^{(\mu)}\subset \mathbb{N}_0^d$ be their associated lower
sets. Set
\begin{equation}\label{eta}
\eta_0:=\min\left\{\frac{\|\bm{\xi}^{(k)}-\bm{\xi}^{(l)}\|_2}{\|{\bm{\alpha}}-{\bm{\alpha}}'\|_2}:
\bm{\alpha} \in \mathfrak{\Delta}^{(k)}, \bm{\alpha}' \in
\mathfrak{\Delta}^{(l)},\bm{\alpha}\neq \bm{\alpha}', 1\leq k<l\leq \mu
\right\}.
\end{equation}
Then for arbitrary nonzero $h\in (-\eta_0, \eta_0)\subset \mathbb{R}$, the
point set
\begin{equation}\label{pointset}
\Xi_h:=\left\{\bm{\xi}^{(k)}+ h \bm{\alpha}: \bm{\alpha}\in
\mathfrak{\Delta}^{(k)}, 1\leq k\leq \mu\right\}
\end{equation}
exactly consists of
$\sum\limits_{i=1}^{\mu}\#\mathfrak{\Delta}^{(i)}$ distinct points.
\end{lem}
\begin{pf}
Suppose that there exist $\bm{\alpha}\in\mathfrak{\Delta}^{(k)}$ and $\bm{\alpha}'\in\mathfrak{\Delta}^{(l)}$
with $1\leq k<l\leq\mu$ such that $\bm{\xi}^{(k)}+ h\bm{\alpha}=\bm{\xi}^{(l)}+ h \bm{\alpha}'$, which implies that
$\bm{\alpha}\neq \bm{\alpha}'$ by $\bm{\xi}^{(k)}\neq \bm{\xi}^{(l)}$. Consequently, taking norms on both sides of $h(\bm{\alpha}-\bm{\alpha}')=\bm{\xi}^{(l)}-\bm{\xi}^{(k)}$, we have
$$|h|=\frac{\|\bm{\xi}^{(k)}-\bm{\xi}^{(l)}\|_2}{\|\bm{\alpha}-\bm{\alpha}'\|_2}\geq \eta_0,
$$ which is in direct contradiction to the hypothesis that $0<|h|<\eta_0$. \qed
\end{pf}
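Formulas (\ref{eta}) and (\ref{pointset}) translate directly into code. The following Python sketch computes $\eta_0$ for given interpolation sites and lower sets and then builds the perturbed point set $\Xi_h$; the function and variable names are ours.
\begin{verbatim}
import itertools
import numpy as np

def eta0(sites, lowersets):
    # minimum of ||xi^(k)-xi^(l)|| / ||alpha-alpha'|| over k < l
    # and alpha in Delta^(k), alpha' in Delta^(l) with alpha != alpha'
    ratios = [np.linalg.norm(np.subtract(x, y)) /
              np.linalg.norm(np.subtract(a, b))
              for (x, Dx), (y, Dy) in
              itertools.combinations(zip(sites, lowersets), 2)
              for a in Dx for b in Dy if a != b]
    return min(ratios) if ratios else np.inf

def Xi_h(sites, lowersets, h):
    # the perturbed point set {xi^(k) + h*alpha}
    return [tuple(np.add(x, np.multiply(h, a)))
            for x, D in zip(sites, lowersets) for a in D]

sites = [(0.0, 0.0), (1.0, 1.0)]
lowersets = [[(0, 0), (1, 0), (0, 1)]] * 2
e0 = eta0(sites, lowersets)                  # here e0 = 1.0
pts = Xi_h(sites, lowersets, e0 / 2)
print(e0, len(pts) == len(set(pts)))         # distinct for 0 < |h| < eta0
\end{verbatim}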
Lemma \ref{delta0condition} opens up the possibility of
intuitively perturbing
an ideal projector of type
partial derivative into a sequence of Lagrange projectors.
\begin{defn}\label{LagrangeProjectorde}
Let $P$ be an ideal projector of type partial derivative on $\mathbb{R}[\bm{x}]$ with
$\mathrm{ran}P'$ described by (\ref{HermiteProjector}).
For an
arbitrary fixed $h\in \mathbb{R}$ with $0<|h|<\eta_0$ where $\eta_0$
is as in (\ref{eta}), define $P_h$ to be the Lagrange projector on
$\mathbb{R}[\bm{x}]$ with
\begin{equation}\label{LagrangeProjector}
\mathrm{ran}
P'_h=\mathrm{Span}_{\mathbb{R}}\{\delta_{\bm{\xi}^{(k)}+ h \bm{\alpha}}:
\bm{\alpha}\in \mathfrak{\Delta}^{(k)}, 1\leq k\leq \mu \}.
\end{equation}
Then $P_h$ is called an \emph{$h$-perturbed Lagrange projector of
$P$}.
\end{defn}
\begin{rmk}\label{dualbasis}
It is easy to see from (\ref{HermiteProjector}) and (\ref{LagrangeProjector}) that
$$\bm{\lambda}:=(\delta_{\bm{\xi}^{(k)}}\circ \mathrm{D}^{\bm{\alpha}}: \bm{\alpha}\in
\mathfrak{\Delta}^{(k)},~k=1,\ldots,\mu) \in
{(\mathbb{R}'[\bm{x}])}^n$$
and
$$\bm{\lambda}_h:=(\delta_{\bm{\xi}^{(k)}+ h
\bm{\alpha}}: \bm{\alpha}\in \mathfrak{\Delta}^{(k)}, k=1,\ldots,\mu) \in
{(\mathbb{R}'[\bm{x}])}^n$$
form $\mathbb{R}$-bases for $\mathrm{ran} P'$ and $\mathrm{ran} P'_h$ respectively, where $n=\sum_{k=1}^\mu \#\mathfrak{\Delta}^{(k)}$.
Moreover, an ordering $\prec_\lambda$ for the entries of $\bm{\lambda}$ and $\bm{\lambda}_h$ will be defined as follows: We say $\delta_{\bm{\xi}^{(k)}}\circ \mathrm{D}^{\bm{\alpha}}\prec_\lambda \delta_{\bm{\xi}^{(k')}}\circ \mathrm{D}^{\bm{\alpha'}}$ or $\delta_{\bm{\xi}^{(k)}+ h
\bm{\alpha}}\prec_\lambda \delta_{\bm{\xi}^{(k')}+ h\bm{\alpha'}}$ if
$$
k<k', \quad \mbox{or}\quad k=k' \mbox{ and } \bm{\alpha} \prec \bm{\alpha'},
$$
where $\prec$ is an arbitrary monomial order on $\mathbb{N}_0^d$.
\end{rmk}
We are now ready to give one of our main theorems, Theorem \ref{mianthm}, which states that every
ideal projector of type partial derivative on $\mathbb{R}[\bm{x}]$ is the pointwise limit of Lagrange projectors, namely that Carl de Boor's conjecture is true for this type of ideal projector.
\begin{thm}\label{mianthm}
Let $P$ be an ideal projector of type partial derivative on $\mathbb{R}[\bm{x}]$ with
$\mathrm{ran}P'$ described by \eqref{HermiteProjector}, and let \textup{(}$P_h$, $0<|h|<\eta_0$\textup{)} be a sequence of $h$-perturbed Lagrange projectors of $P$, where $\eta_0$ is as in \textup{(\ref{eta})}. Then the following statements hold:
\begin{enumerate}
\item[\textup{(i)}] There exists a positive $\eta\in \mathbb{R}$ such that
$$
\mathrm{ran}P_h=\mathrm{ran}P, \quad \forall 0<|h|<\eta\leq\eta_0.
$$
\item[\textup{(ii)}] $P$ is the pointwise limit of $P_h, 0<|h|<\eta,$ as $h$ tends to zero.
\end{enumerate}
\end{thm}
The proof of Theorem \ref{mianthm} will be provided in Section \ref{proof}. Actually, with a similar methodology, we can easily prove the following theorem, which is a more general version of Theorem
\ref{mianthm}.
\begin{thm}\label{corthm}
Let $P$ be an ideal projector of type partial derivative from $C^{\infty}(\mathbb{R}^d)$ onto $\mathrm{ran}P$, then
there exist Lagrange projectors $P_h$ onto $\mathrm{ran}P$ such that
for all $f \in C^{\infty}(\mathbb{R}^d)$, $P f$ is the limit of
$P_h f$ as $h$ tends to zero.
\end{thm}
Now, after introducing Definition \ref{borderbasisdef}, we have an immediate corollary of Theorem \ref{mianthm}.
\begin{defn}\label{borderbasisdef}\textup{\cite{She2009}}
Let $P$ be an ideal projector from $\mathbb{R}[\bm{x}]$
onto $\mathrm{ran}P$ with $\dim\mathrm{ran}P=n$. Assume that
$\bm{q}=(q_1,\ldots ,q_{n})\in {\mathbb{R}[\bm{x}]}^{n}$ is an
$\mathbb{R}$-basis for $\mathrm{ran} P$, and the border set
$\partial \bm{q}$ of $\bm{q}$ is defined by
$$\partial \bm{q}:=\{1,x_k q_l,k=1,\ldots,d,l=1,\ldots,n\}\setminus \{q_1,\ldots,q_{n}\}.$$
Then the set of polynomials
$$\{f-P f: f\in \partial \bm{q} \}$$
forms a \emph{border basis} for $\mathrm{ker}P$, which is
called a $\bm{q}$-\emph{border basis} for $\mathrm{ker}P.$
\end{defn}
\begin{cor}\label{miancor}
Let $P$ be an ideal projector of type partial derivative on $\mathbb{R}[\bm{x}]$, and let $\bm{q}$ be an $\mathbb{R}$-basis for $\mathrm{ran}P$. Then there exists a Lagrange projector $P_h$ onto $\mathrm{ran}P$ such that the $\bm{q}$-border basis for
$\mathrm{ker}P$ is the limit of $\bm{q}$-border basis for
$\mathrm{ker}P_h$ as $h$ tends to zero.
\end{cor}
Theorem \ref{mianthm} tells us that every ideal projector of
type partial derivative is the pointwise limit of Lagrange
projectors. Unfortunately, the converse statement is not true in general as the following example illustrates.
\begin{exmp}
Let $(P_h, 0<|h|<1)$ be a sequence of Lagrange projectors
with
\begin{align*}
\mathrm{ran} P_h&=\mathrm{Span}_{\mathbb{R}}\{1,x_1,x_2,x_1^2,x_1 x_2,x_2^2\},\\
\mathrm{ran} P'_h&=\mathrm{Span}_{\mathbb{R}}
\{\delta_{(0,0)}, \delta_{(0,h)}, \delta_{(h,0)}, \delta_{(1,1)}, \delta_{(1,1+h)}, \delta_{(1+h,1)}\},
\end{align*}
and let $P$ be an ideal projector with
\begin{align*}
\mathrm{ran} P'=\mathrm{Span}_{\mathbb{R}}\bigg\{&\delta_{(0,0)}\circ \mathrm{D}^{(0,0)},
\delta_{(0,0)}\circ \mathrm{D}^{(1,0)}, \delta_{(0,0)}\circ \mathrm{D}^{(0,1)},\\
&\delta_{(1,1)}\circ \mathrm{D}^{(0,0)}, \delta_{(1,1)}\circ \mathrm{D}^{(1,0)},
\delta_{(1,1)}\circ \mathrm{D}^{(0,1)}\bigg\}.
\end{align*}
However, $\{1,x_1,x_2,x_1^2,x_1
x_2,x_2^2\}$ cannot form an $\mathbb{R}$-basis for $\mathrm{ran} P$.
Hence, $(P_h, 0<|h|<1)$ cannot converge pointwise to $P$ as $h$ tends to zero.
\end{exmp}
Consider the bijection
\begin{align*}
u:\mathbb{R}^d\times \mathbb{N}_0^d&\longrightarrow
{(\mathbb{R}\times\mathbb{N}_0)}^d\\
(\bm{\xi},\bm{\alpha})&\longmapsto((\xi_1, \alpha_1),\ldots,(\xi_d, \alpha_d)).
\end{align*}
Let $\bm{\xi}^{(1)},\ldots,\bm{\xi}^{(\mu)}\in\mathbb{R}^d$ be distinct points and $\mathfrak{\Delta}^{(1)},\ldots,\mathfrak{\Delta}^{(\mu)}\subset
\mathbb{N}_0^d$ be lower sets. Then
\begin{equation}\label{Hermitetree}
\Omega:=\{u(\bm{\xi}^{(k)},\bm{\alpha}): \bm{\alpha}\in \mathfrak{\Delta}^{(k)},
k=1, \ldots, \mu\}\subset{(\mathbb{R}\times\mathbb{N}_0)}^d
\end{equation}
is called an \emph{algebraic multiset}. As mentioned in \cite{CM1995}, the MB algorithm can be applied to
the algebraic multiset $\Omega$ to obtain
the Gr\"{o}bner \'{e}scalier of
the ideal
$$\{p\in \mathbb{R}[\bm{x}]: \delta_{\bm{\xi}^{(k)}} \circ \mathrm{D}^{\bm{\alpha}}(p)=0, \bm{\alpha}\in \mathfrak{\Delta}^{(k)}, \ 1\leq k\leq \mu\}$$
w.r.t. lexicographic order.
Recall from Section \ref{s:pre} how to build a $d$-level tree $T(\Xi)$ from a finite nonempty set $\Xi\subset \mathbb{R}^d$. If the space $\mathbb{R}^d$ is replaced by ${(\mathbb{R}\times\mathbb{N}_0)}^d$, it is easy to see that we can also build a $d$-level tree $T(\Omega)$ from the algebraic multiset $\Omega$ following the same rules, which makes the lex game algorithm applicable and leads to the following useful lemma.
\begin{lem}\label{mainlem}
Let $P$ be an ideal projector of type partial derivative with
$\mathrm{ran}P'$ as in \textup{(\ref{HermiteProjector})}, and let
$P_h$ be a perturbed
Lagrange projector of $P$. Let algebraic multiset
$\Omega\subset{(\mathbb{R}\times\mathbb{N}_0)}^d$ be as in \textup{(\ref{Hermitetree})} and $\Xi_h\subset \mathbb{R}^d$ be as in \textup{(\ref{pointset})}.
If the rooted trees $T(\Omega)$ and $T(\Xi_h)$ have
the same structure, then
$$\mathrm{ran}_{\prec_{lex}}P=\mathrm{ran}_{\prec_{lex}} P_h.$$
\end{lem}
Next, we can proceed with another main theorem of this paper.
\begin{thm}\label{lexthm}
Let $P$ be an ideal projector of type partial derivative with
$\mathrm{ran}P'$ as in \textup{(\ref{HermiteProjector})},
and let $\textup{(}P_h, 0<|h|<\eta\textup{)}$ be a sequence
of $h$-perturbed Lagrange projectors of $P$, where $\eta$ is
obtained through Algorithm \textup{\ref{cond}} below.
If the range of $P_h$ is $\mathrm{ran}_{\prec_{lex}}P_h$, then
the sequence $(P_h,
0<|h|<\eta )$
converges pointwise to the ideal projector $P$, as $h$ tends to zero.
\end{thm}
\begin{alg}\label{cond}(The range for $|h|$)
\vskip 3mm
\textbf{Input}: Distinct points
$\bm{\xi}^{(1)},\ldots,\bm{\xi}^{(\mu)}\in \mathbb{R}^d$ and lower
sets $\mathfrak{\Delta}^{(1)},\ldots,\mathfrak{\Delta}^{(\mu)}\subset \mathbb{N}_0^d$.\\
\indent\textbf{Output}: A positive number $\eta\in \mathbb{R}$ or $\infty$.\\
\indent\textbf{Step 1} Construct algebraic multiset $\Omega$ from $\bm{\xi}^{(1)},\ldots,\bm{\xi}^{(\mu)}$ and $\mathfrak{\Delta}^{(1)},\ldots,\mathfrak{\Delta}^{(\mu)}$ following (\ref{Hermitetree}), and then build rooted tree $T(\Omega)$ from $\Omega$ in the way introduced in Section \ref{s:pre}.\\
\indent\textbf{Step 2} Suppose that the first level nodes of $T(\Omega)$ are labeled with
the points of set $\mathcal {L}_{1}\subset \mathbb{R}\times\mathbb{N}_0$. \\
\indent\indent\textbf{Step 2.1} If $\#\mathcal {L}_{1}=1$, then $\eta\leftarrow \infty$.\\
\indent\indent\textbf{Step 2.2} If every point in $\mathcal {L}_1$ has the same first coordinate or the same second coordinate, then $\eta\leftarrow \infty$. \\
\indent\indent\textbf{Step 2.3} Otherwise, set
\begin{align*}
\eta\leftarrow\min\Bigg\{\frac{
|\xi_d^{(i)}-\xi_d^{(j)}|}{|\alpha_d^{(i)}-\alpha_d^{(j)}|}:&
\xi_d^{(i)}\neq\xi_d^{(j)}, \alpha_d^{(i)}\neq\alpha_d^{(j)}, \\ &(\xi_d^{(i)}, \alpha_d^{(i)})\mbox{ and }(\xi_d^{(j)}, \alpha_d^{(j)})\in \mathcal{L}_1\Bigg\}.
\end{align*}
\indent\textbf{\textup{Step 3}} Set $k\leftarrow 2$.\\
\indent\textbf{\textup{Step 4}} Suppose that the $k$-th level nodes are labeled respectively with the points of sets
$\mathcal {L}_{k}^{(1)}, \ldots, \mathcal {L}_{k}^{(\nu)}\subset\mathbb{R}\times\mathbb{N}_0$, where for each
$1\leq l\leq \nu$, the nodes labeled with the points in $\mathcal {L}_{k}^{(l)}$ share the same parent.
For each $l=1, \ldots, \nu$ with $\#\mathcal {L}_{k}^{(l)}\geq 2$, do the following steps. \\
\indent\indent\textbf{\textup{Step 4.1}} Set
\begin{align*}
\eta'\leftarrow\min\Bigg\{\frac{
|\xi_{d-k+1}^{(i)}-\xi_{d-k+1}^{(j)}|}{|\alpha_{d-k+1}^{(i)}-\alpha_{d-k+1}^{(j)}|}:&
\xi_{d-k+1}^{(i)}\neq\xi_{d-k+1}^{(j)}, \alpha_{d-k+1}^{(i)}\neq\alpha_{d-k+1}^{(j)},\\
&(\xi_{d-k+1}^{(i)}, \alpha_{d-k+1}^{(i)})\mbox{ and }(\xi_{d-k+1}^{(j)}, \alpha_{d-k+1}^{(j)})\in \mathcal {L}_{k}^{(l)}\Bigg\}.
\end{align*}
\indent\indent\textbf{\textup{Step 4.2}} If $\eta'<\eta$, then $\eta\leftarrow \eta'$.\\
\indent\textbf{\textup{Step 5}} If $k=d$, then return $\eta$ and stop.
Otherwise set $k\leftarrow k+1$, continue with {\textup{Step 4}}.
\end{alg}
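Before turning to the proof, we remark that Algorithm \ref{cond} is straightforward to implement. The following Python sketch computes $\eta$; the tree $T(\Omega)$ is represented implicitly by grouping the elements of $\Omega$ on their trailing coordinates, and the data layout (tuples of $(\xi_m,\alpha_m)$ pairs) is an illustrative choice of ours rather than part of the formal development.
\begin{verbatim}
from itertools import combinations

def eta_range(points, lower_sets):
    """Sketch of Algorithm 1: the bound eta for |h|.

    points     : list of d-tuples of floats (the distinct xi^(k))
    lower_sets : list of collections of d-tuples of ints (the Delta^(k))
    """
    d = len(points[0])
    # Build the algebraic multiset Omega: each element is the d-tuple
    # of pairs ((xi_1, alpha_1), ..., (xi_d, alpha_d)).
    omega = []
    for xi, delta in zip(points, lower_sets):
        for alpha in delta:
            omega.append(tuple((xi[m], alpha[m]) for m in range(d)))
    eta = float("inf")  # Steps 2.1/2.2: eta stays infinite if no ratio exists
    for k in range(1, d + 1):
        # Level-k nodes share a parent iff their trailing k-1 coordinates
        # agree; the node labels are the pairs at coordinate d-k+1.
        groups = {}
        for p in omega:
            groups.setdefault(p[d - k + 1:], set()).add(p[d - k])
        for labels in groups.values():
            for (x1, a1), (x2, a2) in combinations(labels, 2):
                if x1 != x2 and a1 != a2:
                    eta = min(eta, abs(x1 - x2) / abs(a1 - a2))
    return eta

# For the data of Example 1 in the next section,
# eta_range([(0, 0), (1, 1)], [{(0, 0), (1, 0), (0, 1)}] * 2) returns 1.0.
\end{verbatim}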
\begin{pf}
To prove this theorem, by Lemma \ref{mainlem} and Theorem \ref{mianthm},
it suffices to
show that the rooted trees $T(\Xi_h), 0<|h|<\eta$, and $T(\Omega)$ have
the same structure, where $\Xi_h$ is as in
(\ref{pointset}) and $\Omega$ is as in (\ref{Hermitetree}).
Now, with the notation in Algorithm \textup{\ref{cond}}, we will use induction on the number
of levels $k$ of the rooted tree to prove this.
When $k=1$, assume that
there exist some $(\xi^{(i)}_d, \alpha^{(i)}_d)$ and $(\xi_d^{(j)}, \alpha_d^{(j)})\in \mathcal {L}_1$ such that
$\xi_d^{(i)}+h \alpha_d^{(i)}= \xi_d^{(j)}+h
\alpha_d^{(j)}$. The same argument as in Lemma \ref{delta0condition} shows that $|h|=
|\xi_d^{(i)}-\xi_d^{(j)}|/|\alpha_d^{(i)}-\alpha_d^{(j)}|$ with $\alpha_d^{(i)}\neq\alpha_d^{(j)}$ and $\xi_d^{(i)}\neq
\xi_d^{(j)}$, which contradicts
\begin{align*}
|h|<\min\Bigg\{\frac{
|\xi_d^{(i)}-\xi_d^{(j)}|}{|\alpha_d^{(i)}-\alpha_d^{(j)}|}:&
\xi_d^{(i)}\neq\xi_d^{(j)}, \alpha_d^{(i)}\neq\alpha_d^{(j)}, \\ &(\xi_d^{(i)}, \alpha_d^{(i)})\mbox{ and }(\xi_d^{(j)}, \alpha_d^{(j)})\in \mathcal{L}_1\Bigg\}.
\end{align*}
Hence, the first levels of
$T(\Xi_h), 0<|h|<\eta$, and $T(\Omega)$
have the same structure.
Suppose that the first $k-1$ levels of $T(\Xi_h), 0<|h|<\eta$, and $T(\Omega)$ have the same structure. Assume that there exist some
$1\leq l\leq \nu$ and $(\xi_{d-k+1}^{(i)}, \alpha_{d-k+1}^{(i)}), (\xi_{d-k+1}^{(j)}, \alpha_{d-k+1}^{(j)})\in\mathcal {L}_{k}^{(l)}$
such that
$\xi_{d-k+1}^{(i)}+h \alpha_{d-k+1}^{(i)}= \xi_{d-k+1}^{(j)}+h
\alpha_{d-k+1}^{(j)}$. Since $(\xi_{d-k+1}^{(i)}, \alpha_{d-k+1}^{(i)})$ and $(\xi_{d-k+1}^{(j)},
\alpha_{d-k+1}^{(j)})$ have a common parent, it is easy to see that $|h|=|\xi_{d-k+1}^{(i)}-\xi_{d-k+1}^{(j)}|/|\alpha_{d-k+1}^{(i)}-\alpha_{d-k+1}^{(j)}|$ with $\alpha_{d-k+1}^{(i)}\neq \alpha_{d-k+1}^{(j)}$ and $\xi_{d-k+1}^{(i)}\neq
\xi_{d-k+1}^{(j)}$, which contradicts the fact that
\begin{align*}
|h|<\min\Bigg\{\frac{
|\xi_{d-k+1}^{(i)}-\xi_{d-k+1}^{(j)}|}{|\alpha_{d-k+1}^{(i)}-\alpha_{d-k+1}^{(j)}|}:&
\xi_{d-k+1}^{(i)}\neq\xi_{d-k+1}^{(j)}, \alpha_{d-k+1}^{(i)}\neq\alpha_{d-k+1}^{(j)},\\
&(\xi_{d-k+1}^{(i)}, \alpha_{d-k+1}^{(i)})\mbox{ and }(\xi_{d-k+1}^{(j)}, \alpha_{d-k+1}^{(j)})\in \mathcal {L}_{k}^{(l)}\Bigg\}.
\end{align*}
Therefore, the first $k$ levels of $T(\Xi_h), 0<|h|<\eta$, and $T(\Omega)$ have the same structure.
\qed
\end{pf}
\section{Examples}\label{examples}
In this section, we will present several examples to illustrate Theorem \ref{lexthm}.
\begin{exmp}\label{ex1}
Assume that $P_h$ is a Lagrange projector with
$$\mathrm{ran} P'_h=\mathrm{Span}_{\mathbb{R}}\{\delta_{(0,0)}, \delta_{(h,0)}, \delta_{(0,h)}, \delta_{(1,1)}, \delta_{(1+h,1)}, \delta_{(1,1+h)}\}.$$
Construct the rooted tree of the algebraic multiset
\begin{align*}
\Omega=\{&((0, 0), (0, 0)), ((0, 1), (0, 0)), ((0, 0), (0, 1)), \\
&((1, 0), (1, 0)), ((1, 1), (1, 0)), ((1, 0), (1, 1))\}.
\end{align*}
$T(\Omega)$ is illustrated in Figure \textup{\ref{eg1}}.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=8cm, height=5cm]{tree1.eps}
\caption{$T(\Omega)$ of Example \ref{ex1}}\label{eg1}
\end{center}
\end{figure}
By Algorithm \ref{cond}, we obtain $\eta=1$.
From Theorem \ref{lexthm}, we can conclude that
the sequence $(P_h, 0<|h|<1)$ of projectors onto $\mathrm{Span}_{\mathbb{R}}\{1, x_2, x_1, x_2^2, x_1 x_2, x_2^3\}$ converges pointwise to a Hermite projector $P$ with
\begin{align*}
\mathrm{ran} P'=\mathrm{Span}_{\mathbb{R}}\{&\delta_{(0,0)}\circ \mathrm{D}^{(0,0)}, \delta_{(0,0)}\circ \mathrm{D}^{(1,0)}, \delta_{(0,0)}\circ \mathrm{D}^{(0,1)},
\delta_{(1,1)}\circ \mathrm{D}^{(0,0)}, \delta_{(1,1)}\circ \mathrm{D}^{(1,0)},\\
&\delta_{(1,1)}\circ \mathrm{D}^{(0,1)}\},
\end{align*}
as $h$ tends to zero.
\end{exmp}
\begin{exmp}\label{ex2}
Assume that $P_h$ is a Lagrange projector with
\begin{align*}
\mathrm{ran} P'_h=\mathrm{Span}_{\mathbb{R}}\{&\delta_{(0,0,0)}, \delta_{(h,0,0)}, \delta_{(0,h,0)},\delta_{(0,0,h)},
\delta_{(1,1,1)}, \\
&\delta_{(1+h,1,1)}, \delta_{(1,1+h,1)},
\delta_{(1,1,1+h)}\}.
\end{align*}
Construct the rooted tree of the algebraic multiset
\begin{align*}
\Omega=\{&((0, 0), (0, 0), (0, 0)), ((0, 1), (0, 0), (0, 0)), ((0, 0), (0, 1), (0, 0)),\\
&((0, 0), (0, 0), (0, 1)), ((1, 0), (1, 0), (1, 0)), ((1, 1), (1, 0), (1, 0)), \\
&((1, 0), (1, 1), (1, 0)), ((1, 0), (1, 0), (1, 1))\}.
\end{align*}
$T(\Omega)$ is illustrated in Figure \textup{\ref{eg2}}.
\begin{figure}[!htbp]
\begin{center}
\includegraphics[width=8cm, height=5cm]{tree2.eps}
\caption{$T(\Omega)$ of Example \ref{ex2}}\label{eg2}
\end{center}
\end{figure}
By Algorithm \ref{cond}, we compute $\eta=1$.
From Theorem \ref{lexthm}, we can conclude that
the sequence $(P_h, 0<|h|<1)$ of projectors onto
$\mathrm{Span}_{\mathbb{R}}\{1, x_3, x_2, x_1, x_3^2, x_2
x_3, x_1 x_3, x_3^3\}$ converges pointwise to a Hermite projector $P$
with
\begin{align*}
\mathrm{ran} P'=\mathrm{Span}_{\mathbb{R}}\{&\delta_{(0,0,0)}\circ \mathrm{D}^{(0,0,0)}, \delta_{(0,0,0)}\circ \mathrm{D}^{(1,0,0)}, \delta_{(0,0,0)}\circ \mathrm{D}^{(0,1,0)}, \delta_{(0,0,0)} \circ \mathrm{D}^{(0,0,1)},
&\delta_{(1,1,1)}\circ \mathrm{D}^{(0,0,0)}, \delta_{(1,1,1)}\circ \mathrm{D}^{(1,0,0)},
\delta_{(1,1,1)}\circ \mathrm{D}^{(0,1,0)}, \delta_{(1,1,1)} \circ \mathrm{D}^{(0,0,1)}\},
\end{align*}
as $h$ tends to zero.
\end{exmp}
Finally, we select test functions
\begin{align*}
f_1(x_1,x_2)&=1+(1-x_1)^4+(1-x_2)^4,\\
f_2(x_1,x_2,x_3)&=1+(1-x_1)^2+(1-x_2)^2+(1-x_3)^2
\end{align*}
to illustrate the pointwise convergence of ideal projectors of type
partial derivative in the above examples.
For Example \ref{ex1}, when
$h=1/10, 1/100, 1/1000, \ldots$, we have
\begin{align*}
P_{\frac{1}{10}}f_1=&3-\frac{385039}{99000}x_2-\frac{3439}{1000}x_1+\frac{719}{150}x_2^2+ \frac{86}{25} x_1 x_2-\frac{1438}{495}x_2^3,\\
P_{\frac{1}{100}}f_1=&3-\frac{39984109399}{9999000000}x_2-\frac{3940399}{1000000}x_1+\frac{970199}{165000}x_2^2+\frac{9851}{2500} x_1 x_2\\
&-\frac{970199}{249975}x_2^3,\\
P_{\frac{1}{1000}}f_1=&3-\frac{571426287284857}{142857000000000}x_2-\frac{3994003999}{1000000000}x_1+\frac{997001999}{166500000}
x_2^2\\
&+\frac{998501}{250000} x_1 x_2-\frac{142428857}{35714250}x_2^3,\\
\cdots&\\
P f_1=&3-4 x_2-4 x_1+6 x_2^2+ 4 x_1 x_2-4 x_2^3.
\end{align*}
For Example \ref{ex2},
\begin{align*}
P_{\frac{1}{10}}f_2=&4-\frac{829}{495}x_3-\frac{19}{10}x_2-\frac{19}{10}x_1-\frac{7}{3}x_3^2+2 x_2 x_3+ 2 x_1 x_3+\frac{80}{99}x_3^3,\\
P_{\frac{1}{100}}f_2=&4-\frac{989299}{499950}x_3-\frac{199}{100}x_2-\frac{199}{100}x_1-\frac{37}{33} x_3^2+2 x_2 x_3+ 2x_1 x_3\\
&+\frac{800}{9999}x_3^3,\\
P_{\frac{1}{1000}}f_2=&4-\frac{998992999}{499999500}x_3-\frac{1999}{1000}x_2-\frac{1999}{1000}x_1-\frac{337}{333}x_3^2+2
x_2 x_3+ 2 x_1 x_3\\
&+\frac{8000}{999999}x_3^3,\\
\cdots&\\
P f_2=&4-2 x_3-2 x_2-2 x_1-x_3^2+2 x_2 x_3+ 2 x_1 x_3.
\end{align*}
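The coefficient sequences above can be reproduced numerically: for each $h$ one solves the Lagrange interpolation conditions at the perturbed points and reads off the coefficients w.r.t. the basis of $\mathrm{ran}P_h$. A minimal NumPy sketch for Example \ref{ex1} follows; it is the floating-point counterpart of the exact computation above.
\begin{verbatim}
import numpy as np

def f1(x1, x2):
    return 1 + (1 - x1)**4 + (1 - x2)**4

# Basis of ran P_h = Span{1, x2, x1, x2^2, x1*x2, x2^3} (Example 1).
basis = [lambda x1, x2: np.ones_like(x1),
         lambda x1, x2: x2,
         lambda x1, x2: x1,
         lambda x1, x2: x2**2,
         lambda x1, x2: x1 * x2,
         lambda x1, x2: x2**3]

for h in (1e-1, 1e-2, 1e-3, 1e-4):
    pts = np.array([(0, 0), (h, 0), (0, h),
                    (1, 1), (1 + h, 1), (1, 1 + h)], dtype=float)
    A = np.column_stack([b(pts[:, 0], pts[:, 1]) for b in basis])
    c = np.linalg.solve(A, f1(pts[:, 0], pts[:, 1]))
    print(f"h = {h:g}: {np.round(c, 4)}")

# The printed coefficients approach (3, -4, -4, 6, 4, -4), i.e.
# P f1 = 3 - 4*x2 - 4*x1 + 6*x2^2 + 4*x1*x2 - 4*x2^3.
\end{verbatim}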
\section{Proof of Theorem \ref{mianthm}}\label{proof}
First of all, we need to relate forward differences of multivariate polynomials to their partial derivatives. The following formula is quite useful for this purpose.
\begin{lem}\label{L1}
Let $i, m\in \mathbb{N}_0$ with $i \geq m >0$. Then
\begin{equation}\label{L1g}
\sum\limits_{j=0}^{i-1}(-1)^j {i \choose j}(i-j)^m=\left\{
\begin{array}{ll}
i!, &m=i; \\
0, & m<i.
\end{array}
\right.
\end{equation}
\end{lem}
\begin{pf}
The proof can be completed by induction on $m$.\qed
\end{pf}
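Although the induction is routine, the identity (\ref{L1g}) is also easy to check by direct computation, e.g. with the short Python script below (illustrative only, not part of the proof).
\begin{verbatim}
from math import comb, factorial

for i in range(1, 11):
    for m in range(1, i + 1):
        s = sum((-1)**j * comb(i, j) * (i - j)**m for j in range(i))
        assert s == (factorial(i) if m == i else 0)
print("identity verified for all 0 < m <= i <= 10")
\end{verbatim}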
\begin{lem}\label{uni}
Let $\xi, h\in \mathbb{R}$ with $h\neq 0$, and let $i, \alpha\in \mathbb{N}_0$. Then for every monomial $x^\alpha$ in $\mathbb{R}[x]$,
\begin{equation}\label{unishi}
\sum\limits_{j=0}^i (-1)^j {i \choose j} \delta_{\xi+h(i-j)}x^\alpha=\left\{
\begin{array}{ll}
h^{i}\delta_{\xi}\circ\mathrm{D}^ix^\alpha, & \alpha\leq i; \\
h^{i}\delta_{\xi}\circ\mathrm{D}^ix^\alpha+O(h^{i+1}), &\alpha
> i,
\end{array}
\right.
\end{equation}
where the remainder $O(h^{i+1})$ is a polynomial in $h$.
\end{lem}
\begin{pf}
From the theory of finite differences (see, for example, \cite{Ame1977}) we know that
$$\Delta^i \delta_\xi f(x)=\sum\limits_{j=0}^i (-1)^j {i \choose j} \delta_{\xi+h(i-j)}f(x)=
h^{i}\delta_{\xi}\circ \mathrm{D}^if(x)+O(h^{i+1}),$$
where $\Delta$ is the forward difference operator and $f(x)\in C^i(\mathbb{R})$. Substituting $x^\alpha$ for $f(x)$ in this equation yields (\ref{unishi}) immediately. Moreover, by Lemma \ref{L1}, we can
easily check that the remainder $O(h^{i+1})$ in (\ref{unishi}) is a
polynomial in $h$. This completes the proof. \qed
\end{pf}
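Numerically, the two cases of (\ref{unishi}) behave as follows: for $\alpha\leq i$ the forward difference reproduces $h^{i}\delta_{\xi}\circ\mathrm{D}^{i}x^{\alpha}$ exactly (up to rounding), while for $\alpha>i$ the discrepancy shrinks like $h^{i+1}$. The sketch below illustrates this and is not part of the formal development.
\begin{verbatim}
from math import comb, factorial

def forward_diff(xi, h, i, alpha):
    """Left-hand side of (unishi): i-th forward difference of x**alpha."""
    return sum((-1)**j * comb(i, j) * (xi + h * (i - j))**alpha
               for j in range(i + 1))

def point_deriv(xi, i, alpha):
    """delta_xi composed with D^i, applied to x**alpha."""
    if alpha < i:
        return 0.0
    return factorial(alpha) // factorial(alpha - i) * xi**(alpha - i)

xi, i = 1.3, 2
for alpha in (1, 2, 5):
    for h in (1e-2, 1e-3):
        err = forward_diff(xi, h, i, alpha) - h**i * point_deriv(xi, i, alpha)
        print(alpha, h, err)  # ~0 for alpha <= i, O(h**(i+1)) for alpha > i
\end{verbatim}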
The conclusion of Lemma \ref{uni} carries over to the multivariate case as follows.
\begin{lem}\label{polyn}
Suppose that $h\in\mathbb{R}\backslash\{0\}$,
$\bm{\xi}=(\xi_1,\ldots,\xi_d) \in \mathbb{R}^d$, and
$\bm{i}=(i_1,\ldots,i_d) \in \mathbb{N}_0^d$. Then for an arbitrary
monomial $\bm{x}^{\bm{\alpha}}$ in $\mathbb{R}[\bm{x}]$, we have
\begin{eqnarray}\label{polyngongshi}
\sum\limits_{\bm{0}\leq\bm{j}\leq \bm{i}} (-1)^{\bm{j}}
{\bm{i}\choose \bm{j}} \delta_{\bm{\xi}+h
(\bm{i}-\bm{j})}{\bm{x}}^{\bm{\alpha}} =\left\{\begin{array}{ll}
h^{\|\bm{i}\|_1}
\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}\bm{x}^{\bm{\alpha}}+O(h^{\|\bm{i}\|_1+1}),
&
\bm{i}< \bm{\alpha};\\
h^{\|\bm{i}\|_1}\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}\bm{x}^{\bm{\alpha}}, &
\mbox{otherwise},
\end{array}
\right.
\end{eqnarray}
where $(-1)^{\bm{j}}=(-1)^{j_1}\cdots(-1)^{j_d}$ and ${\bm{i}\choose \bm{j}}={i_1\choose j_1}\cdots {i_d\choose j_d}$ provided that $\bm{j}=(j_1, \ldots, j_d)$.
\end{lem}
\begin{pf}
First, it follows from Lemma \ref{uni} that for every $1\leq
k\leq d$
\begin{equation}\label{danbianyuan}
\sum\limits_{j_k=0}^{i_k} (-1)^{j_k} {i_k \choose j_k}\delta_{\xi_k+h(i_k-j_k)}x_k^{\alpha_k}
=\left\{
\begin{array}{ll}
h^{i_k}\delta_{\xi_k}\circ \mathrm{D}^{i_k}x_k^{\alpha_k}, & \alpha_k\leq i_k; \\
h^{i_k}\delta_{\xi_k}\circ \mathrm{D}^{i_k}x_k^{\alpha_k}+O(h^{i_k+1}),
&\alpha_k>i_k.
\end{array}
\right.
\end{equation}
Further, we observe that
\begin{equation}\label{hebin2}
\prod\limits_{k=1}^d\delta_{\xi_k}\circ \mathrm{D}^{i_k}x_k^{\alpha_k}=\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}\bm{x}^{\bm{\alpha}}
\end{equation}
and
\begin{equation}\label{hebin}
\sum\limits_{\bm{0}\leq\bm{j}\leq \bm{i}} (-1)^{\bm{j}}
{\bm{i}\choose \bm{j}} \delta_{\bm{\xi}+h
(\bm{i}-\bm{j})}{\bm{x}}^{\bm{\alpha}}
=\prod_{k=1}^d\left(\sum\limits_{j_k=0}^{i_k}(-1)^{j_k} {i_k\choose j_k}
\delta_{\xi_k+h (i_k-j_k)}{x_k^{\alpha_k}}\right).
\end{equation}
Finally, we distinguish three cases to prove that the right-hand sides of (\ref{hebin}) and (\ref{polyngongshi}) are equal to each other, which will complete the proof.
Case 1: $\bm{\alpha}\leq \bm{i}$.
Using (\ref{danbianyuan}) and (\ref{hebin2}), it is straightforward to
verify that
$$\prod_{k=1}^d\left(\sum\limits_{j_k=0}^{i_k}(-1)^{j_k}
{i_k\choose j_k} \delta_{\xi_k+h (i_k-j_k)}{x_k^{\alpha_k}}\right)=\prod_{k=1}^d
h^{i_k}\delta_{\xi_k}\circ \mathrm{D}^{i_k}x_k^{\alpha_k}=h^{\|\bm{i}\|_1} \delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}\bm{x}^{\bm{\alpha}}.$$
Case 2: $\bm{i}\not<\bm{\alpha}$ and
$\bm{\alpha}\not\leq \bm{i}$.
In this case, there must exist some
$1\leq k, l\leq d$ such that $\alpha_k< i_k$ and
$i_l<\alpha_l$. Thus, it is easily checked that
$$\prod_{k=1}^d\left(\sum\limits_{j_k=0}^{i_k}(-1)^{j_k}
{i_k\choose j_k} \delta_{\xi_k+h (i_k-j_k)}{x_k^{\alpha_k}}\right)=h^{\|\bm{i}\|_1} \delta_{\bm{\xi}}\circ \mathrm{D}^{\bm{i}}\bm{x}^{\bm{\alpha}}=0.$$
Case 3: $\bm{i}< \bm{\alpha}$.
Let $l=\max\{k: i_k< \alpha_k, 1\leq k\leq d\}$. Then, applying (\ref{danbianyuan}) and
(\ref{hebin2}), we deduce that
\begin{align*}
&\prod_{k=1}^d\left(\sum\limits_{j_k=0}^{i_k}(-1)^{j_k} {i_k\choose j_k}
\delta_{\xi_k+h (i_k-j_k)}{x_k^{\alpha_k}}\right)\\
=&\prod_{k=1}^l\left(\sum\limits_{j_k=0}^{i_k}(-1)^{j_k}
{i_k\choose j_k} \delta_{\xi_k+h (i_k-j_k)}{x_k^{\alpha_k}}\right)
\prod_{k=l+1}^d\left(\sum\limits_{j_k=0}^{i_k}(-1)^{j_k}
{i_k\choose j_k} \delta_{\xi_k+h (i_k-j_k)}{x_k^{\alpha_k}}\right)\\
=&\prod_{k=1}^l \left( h^{i_k}\delta_{\xi_k}\circ
\mathrm{D}^{i_k}x_k^{\alpha_k}+O(h^{i_k+1})\right)\prod_{k=l+1}^d
h^{i_k}\delta_{\xi_k}\circ \mathrm{D}^{i_k}x_k^{\alpha_k}\\
=&h^{\|\bm{i}\|_1}
\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}\bm{x}^{\bm{\alpha}}+O(h^{\|\bm{i}\|_1+1}),
\end{align*}
where the empty product is understood to be 1. \qed
\end{pf}
Equation (\ref{polyngongshi}) makes a connection between
the forward difference calculus and the differential calculus for multivariate monomials. From Lemma \ref{uni}, it follows that the remainder $O(h^{\|\bm{i}\|_1+1})$ in (\ref{polyngongshi}) is a polynomial in $h$. Equipped with these facts, we can establish the relationship between forward differences and partial derivatives of
multivariate polynomials, which plays an important
role in the proof of Theorem \ref{mianthm}.
\begin{cor}\label{polynomialcor}
Let $\bm{i}, h, \bm{\xi}$ be as in \emph{Lemma \ref{polyn}} and $p\in \mathbb{R}[\bm{x}]\backslash \{0\}$. Then
\begin{eqnarray}\label{henxinc}
\frac{1}{ h^{\|\bm{i}\|_1}}\sum\limits_{\bm{0}\leq
\bm{j}\leq\bm{i}} (-1)^{\bm{j}}
{\bm{i}\choose \bm{j}}\delta_{\bm{\xi}+h (\bm{i}-\bm{j})} p
=\left\{
\begin{array}{ll}
\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}p +O(h), & p<_m\bm{i};\\
\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}p , & \hbox{otherwise}.
\end{array}
\right.
\end{eqnarray}
\end{cor}
\begin{pf}
Assume that the nonzero polynomial $p$ has the form (\ref{pform}). Since
$$\sum\limits_{\bm{0}\leq \bm{j}\leq\bm{i}} (-1)^{\bm{j}}
{\bm{i}\choose \bm{j}}\delta_{\bm{\xi}+h (\bm{i}-\bm{j})} p
=\sum\limits_{\bm{\alpha}}\widehat{p}({{\bm{\alpha}}})\sum\limits_{\bm{0}\leq
\bm{j}\leq\bm{i}} (-1)^{\bm{j}}
{\bm{i}\choose \bm{j}}\delta_{\bm{\xi}+h (\bm{i}-\bm{j})}
{\bm{x}}^{{\bm{\alpha}}}$$ and
$$\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}p=\sum\limits_{\bm{\alpha}}\widehat{p}({{\bm{\alpha}}})
\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}} {\bm{x}}^{{\bm{\alpha}}},$$
we get
\begin{align*}
\sum\limits_{\bm{0}\leq \bm{j}\leq\bm{i}} (-1)^{\bm{j}}
{\bm{i}\choose \bm{j}}\delta_{\bm{\xi}+h (\bm{i}-\bm{j})} p
=\left\{
\begin{array}{ll}
{ h^{\|\bm{i}\|_1}} \delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}p +O( h^{\|\bm{i}\|_1+1}), & p <_m \bm{i};\\
{ h^{\|\bm{i}\|_1}}\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}p , & \hbox{otherwise},
\end{array}
\right.
\end{align*}
which leads to the corollary immediately.\qed
\end{pf}
Now, we are ready to prove Theorem \ref{mianthm}.
\vskip 8pt
\noindent\textsc{Proof of Theorem \ref{mianthm}.} We adopt the notation of Definition \ref{HermiteProjectorde} and Remark \ref{dualbasis}. Let $\bm{q}=(q_1,q_2,\ldots,q_n)$ be an $\mathbb{R}$-basis for $\mathrm{ran}P$. Without loss of generality, we assume that the entries of $\bm{\lambda}$ and
$\bm{\lambda}_h$ are ordered in ascending order w.r.t. $\prec_\lambda$ and then denoted by $o_1, \ldots, o_n$ and $o'_1, \ldots, o'_n$ respectively. For convenience, we define the $n \times n$ matrices
$$
\bm{\lambda}^T \bm{q}=(o_i q_j)_{1\leq i, j\leq n}, \quad \bm{\lambda}^T_h \bm{q}=(o'_i q_j)_{1\leq i, j\leq n},
$$
and, for every $q \in \mathbb{R}[\bm{x}]$, the $n \times 1$ vectors
$$
\bm{\lambda}^T q=(o_i q)_{1\leq i\leq n}, \quad \bm{\lambda}^T_h q=(o'_i q)_{1\leq i\leq n}.
$$
By Corollary \ref{polynomialcor}, equation (\ref{henxinc}) can be rewritten as
$$
\delta_{\bm{\xi}}\circ\mathrm{D}^{\bm{i}}p=\left\{
\begin{array}{ll}
\frac{1}{ h^{\|\bm{i}\|_1}}\sum\limits_{\bm{0}\leq \bm{j}\leq\bm{i}}
(-1)^{\bm{j}} {\bm{i}\choose \bm{j}}\delta_{\bm{\xi}+h (\bm{i}-\bm{j})}
p+O(h), & p <_m \bm{i}; \\
\frac{1}{ h^{\|\bm{i}\|_1}}\sum\limits_{\bm{0}\leq \bm{j}\leq\bm{i}}
(-1)^{\bm{j}} {\bm{i}\choose \bm{j}}\delta_{\bm{\xi}+h (\bm{i}-\bm{j})}
p, & \hbox{otherwise,}
\end{array}
\right.
$$
which implies that, for fixed $1\leq k \leq \mu$ and $\bm{i} \in
\mathfrak{\Delta}^{{(k)}}$, $\delta_{\bm{\xi}^{(k)}}\circ\mathrm{D}^{\bm{i}}p$ can be
expressed as a linear combination of $\{\delta_{\bm{\xi}^{(k)}+h \bm{l}} p: \bm{l}\in \mathfrak{\Delta}^{(k)}\}\cup \{O(h)\}$ since $\mathfrak{\Delta}^{{(k)}}$ is lower; moreover, the
coefficient of each $ \delta_{\bm{\xi}^{(k)}+h \bm{l}}p$ in this combination is independent of $p\in \mathbb{R}[\bm{x}]$. Thus, it turns out that there exists a nonsingular matrix $T_p$ of order $n$ such that
\begin{equation}\label{juzhen}
\left[\widehat{\bm{\lambda}_h^T \bm{q}}\Big| \widehat{\bm{\lambda}_h^T
q}\right]:=T_p\left[\bm{\lambda}^T_h \bm{q}| \bm{\lambda}^T_h
q\right]=\left[\bm{\lambda}^T \bm{q}| \bm{\lambda}^T q\right]+\left[E_h|
\bm{\epsilon}_{h}\right],
\end{equation}
where each entry of $[E_h|\bm{\epsilon}_{h}]$ is either $0$ or
$O(h)$. As a consequence, the linear systems
$$\left(\widehat{\bm{\lambda}_h^T \bm{q}}\right) \bm{x}=\widehat{\bm{\lambda}_h^T
q}\quad \mbox{and} \quad\left(\bm{\lambda}^T_h \bm{q}\right) \bm{x}=\bm{\lambda}^T_h q$$
are equivalent, namely they have the same set of solutions.
(i) From (\ref{juzhen}), it follows that each entry of
matrix $\widehat{\bm{\lambda}_h^T \bm{q}}$ converges to its
corresponding entry of matrix $\bm{\lambda}^T \bm{q}$ as $h$ tends
to zero, which implies that
$$\lim\limits_{h\rightarrow
0}\det\left(\widehat{\bm{\lambda}_h^T \bm{q}}\right)=\det \left(\bm{\lambda}^T
\bm{q}\right).$$
Since $\det (\bm{\lambda}^T \bm{q})\neq 0$, there exists $\eta>0$ such that
$$\det\left(\widehat{\bm{\lambda}_h^T \bm{q}}\right) \neq 0, \quad0<|h|<\eta.$$
Notice that (\ref{juzhen}) directly implies $\mathrm{rank}
\left(\widehat{\bm{\lambda}_h^T \bm{q}}\right)=\mathrm {rank}\left(\bm{\lambda}_h^T
\bm{q}\right)$; hence
$$
\mathrm{ran}P_h=\mathrm{Span}_\mathbb{R}\bm{q}, \quad 0<|h|<\eta,
$$
follows, i.e., $\bm{q}$ forms an $\mathbb{R}$-basis for $\mathrm{ran}P_h$. Since $\bm{q}$ is also a basis for $\mathrm{ran}P$, we have
$$
\mathrm{ran}P=\mathrm{ran}P_h,\quad 0<|h|<\eta.
$$
(ii) Suppose that $\widetilde{\bm{x}}_h$ and $\widetilde{\bm{x}}$ are the unique solutions of the nonsingular linear systems
\begin{equation}\label{Leq}
(\bm{\lambda}_h^T \bm{q})\bm{x}=\bm{\lambda}_h^T q
\end{equation}
and
\begin{equation}\label{Heq}
(\bm{\lambda}^T \bm{q})\bm{x}=\bm{\lambda}^T q
\end{equation}
respectively, where $0<|h|<\eta$. It is easy to see that
$$
P_h q=\bm{q}\widetilde{\bm{x}}_h\quad \mbox{and}\quad Pq=\bm{q}\widetilde{\bm{x}}.
$$
Note that, as $h\rightarrow 0$, $P$ is the pointwise limit of $P_h$ if and only if $P q$ is the coefficientwise limit of $P_h q$ for all $q\in \mathbb{R}[\bm{x}]$. Therefore, it is sufficient to show that for every $q\in \mathbb{R}[\bm{x}]$, the solution vector of system (\ref{Leq}) converges to that of system (\ref{Heq}) when $h$ tends to zero, namely
$$
\lim_{h\rightarrow 0} \bm{\widetilde{x}}_h= \bm{\widetilde{x}}.
$$
By (\ref{juzhen}), the linear system
\begin{equation}\label{widehat}
\left(\widehat{\bm{\lambda}_h^T
\bm{q}}\right)\bm{x}=\widehat{\bm{\lambda}_h^T q}
\end{equation}
can be rewritten as
$$\left(\bm{\lambda}^T \bm{q}+E_h\right)\bm{x}=\left(\bm{\lambda}^T
q+\bm{\epsilon}_{h}\right).
$$
Since system (\ref{widehat}) is equivalent to system (\ref{Leq}), $\widetilde{\bm{x}}_h$ is also the unique solution of it. Consequently, applying the perturbation analysis of the sensitivity
of linear systems (see, for example, \cite{Matrixcompu1996}, p.~80ff), we have
$$\left\|\widetilde{\bm{x}}_h-\bm{\widetilde{x}}\right\|\leq\left\|{(\bm{\lambda}^T \bm{q})}^{-1}\right\|\left\|\bm{\epsilon}_h-E_h\bm{\widetilde{x}}\right\|+O(h^2).$$
Since each entry of vector $\bm{\epsilon}_h-E_h\bm{\widetilde{x}}$ is
either $0$ or $O(h)$, it follows that $\lim\limits_{h\rightarrow
0}\|\widetilde{\bm{x}}_h-\bm{\widetilde{x}}\|= 0$, or, equivalently,
$\lim\limits_{h\rightarrow 0}\widetilde{\bm{x}}_h=\bm{\widetilde{x}}$, which completes the proof of the theorem. \qed
\bibliographystyle{elsarticle-num}
\section{Introduction}
\IEEEPARstart{I}{n} this work, we are interested in
answering the following question -- {\em Is there an optimal
way to combine multi-view and multi-date satellite images,
and noisy training labels derived from OpenStreetMap (OSM)
\cite{osm} for the task of semantically labeling
buildings and roads on the ground over large geographic
regions (100 km$^2$)}? Note that labeling points on the
ground is more challenging than labeling pixels in images
because the former requires that we first map each point on
the ground to the correct pixel in each image. This is only
possible if (1) the multi-date and multi-view images are
not only aligned with one another but are also aligned well
in an absolute sense to the real world; and (2) if we have
accurate knowledge of the heights of the points on the
ground.
Before summarizing our main contributions, to give the
reader a glimpse of the power of the approach presented in
this study, we show some sample results in
Fig. \ref{fig:building-sv-mv}.
\begin{table*}[h]
\setlength{\tabcolsep}{0.07cm}
\begin{center}
\begin{tabular}{|M{1.4cm}|M{0.22\linewidth}|M{0.22\linewidth}|M{0.22\linewidth}|M{0.22\linewidth}|}
\hline
Single-View Training (Baseline) & \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 1\linewidth]{building-sv-eg1.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 1\linewidth]{building-sv-eg4.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 1\linewidth]{building-sv-eg3.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 1\linewidth]{building-sv-eg2.jpg}} \\
\hline
\hline
Multi-View Training (Proposed Approach) & \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 1\linewidth]{building-mv-eg1.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 1\linewidth]{building-mv-eg4.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 1\linewidth]{building-mv-eg3.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 1\linewidth]{building-mv-eg2.jpg}} \\
\hline
\end{tabular}
\end{center}
\captionof{figure}{To illustrate the power of our
approach, the buildings in the bottom row were extracted
using multi-view training for
semantic labeling. Compare with the top row, where the
training is based on single views. Building points are
marked in translucent \textcolor{blue}{blue}.}
\label{fig:building-sv-mv}
\end{table*}
Towards answering the aforementioned question, we put forth
the following contributions:
\begin{enumerate}
\setlength\itemsep{1pt}
\item We present a novel multi-view training paradigm that
yields improvements in the range \textit{4-7\% in the
per-class IoU} (Intersection over Union)
metric. \textit{ Our evaluation directly demonstrates that
updating the weights of the convolutional neural network
(CNN) by simultaneously learning from multiple views of
the same scene can help alleviate the burden of noisy
training labels.}
\item We present a direct comparison between training
classifiers on 8-band true orthophoto images vis-a-vis
training them on the original off-nadir images captured by
the satellites. The fact that we use OSM training labels
poses challenges for the latter approach, as it necessitates
the need to transform labels from geographic coordinates
into the off-nadir image-pixel coordinates. Such a
transformation requires that we have knowledge of the
heights of the points. The comparison presented in this
study is unlike most published work in the literature that
use pre-orthorectified single-view images.
Additionally, we have \textit{released our software for creating
true orthophotos, for public use}. Interested researchers
can download this software from the link at \cite{gwarp}.
\item In order to make the above comparison possible, we
present a true end-to-end automated framework that aligns
large multi-view, multi-date images (each containing about
$43008 \times 38000$ pixels), constructs a high-resolution
accurate Digital Surface Model (DSM) over a 100 km$^2$
area (which is needed for establishing correspondences
between the pixels in the off-nadir images and points on
the ground), and learns from noisy OSM labels \textbf{
without any additional human supervision}.
\end{enumerate}
\begin{figure*}[h]
\centering \subfloat[]{
\includegraphics[width=0.47\linewidth,height=0.4\linewidth]{ohio_dsm3.jpg}
\label{fig:dsm_ohio_1}
} \subfloat[]{
\includegraphics[width=0.47\linewidth,height=0.4\linewidth]{ohio_dsm4.jpg}
\label{fig:dsm_ohio_2}
}\\
\subfloat[]{
\includegraphics[width=0.47\linewidth,height=0.4\linewidth]{ucsd_dsm4.jpg}
\label{fig:dsm_cali_1}
}
\subfloat[]{
\includegraphics[width=0.47\linewidth,height=0.4\linewidth]{ucsd_dsm5.jpg}
\label{fig:dsm_cali_2}
}
\caption{We have uploaded as Supporting Material the
\textbf{flyby videos} and the images of the DSMs for two
large areas, a 120 {\boldmath km$^2$} area from Ohio and
a 62 {\boldmath km$^2$} area from California. The flyby
videos can also be viewed at the link at
\cite{flyby}. The top two images depict two small
sections from the Ohio DSM, and the bottom two images depict two small sections
from the California DSM. The DSM depictions have been
colored according to the elevation values within the
boundaries of each section.}
\label{fig:dsm_example}
\end{figure*}
For our study, we use WorldView-3 (WV3) \cite{WV3} images
collected over two regions in Ohio and California, USA. We
use 32 images for each region. The images were collected
across a span of 2 years under varying conditions. Automatic
alignment and DSM construction are carried out for both
regions. Smaller sections of these DSMs are shown in
Fig. \ref{fig:dsm_example}.
The rest of this manuscript is organized as follows. In Section
\ref{sec:lit_review}, we briefly review relevant
literature. Section \ref{sec:system_overview} provides
details on aligning images, creating large-area DSMs, and
deriving training labels from OSM. Section
\ref{sec:mvapproach} presents different approaches for
training and inference using CNNs. Section
\ref{sec:offnadir} discusses a strategy for using training
labels derived from OSM to label off-nadir
images. Experimental evaluation is described in Section
\ref{sec:evaluation}. Concluding remarks are presented in
Section \ref{sec:conclusion}.
\section{Literature Review}
\label{sec:lit_review}
State-of-the-art approaches that demonstrate the use of
labels derived from OSM for finding roads and/or buildings
in overhead images include the studies described in
\cite{polymapper}, \cite{roadtracer},
\cite{bastani2018machine}, \cite{Chu_2019_ICCV},
\cite{yang2019road}, \cite{park2019refining},
\cite{Etten_2020_WACV}, \cite{mosinska2019joint},
\cite{yang2019road_v2}, \cite{eth_cnn}, \cite{resunet},
\cite{saito}, \cite{forez}, \cite{mnih_hybrid},
\cite{DeepVGI}, \cite{OSMDeepOD} and
\cite{DeepOSM}. Many of these approaches use some
category of neural networks as part of their
machine-learning frameworks. For instance, while the study
described in \cite{polymapper} uses a CNN backbone to
extract keypoints that are subsequently input to a
recurrent neural network (RNN) to extract building
polygons and road networks, the approach presented in
\cite{roadtracer} constructs a road network in an
iterative fashion by using a CNN to detect the next road
segment given the previously extracted road network. The
work discussed in \cite{park2019refining} builds upon the
approach in \cite{roadtracer} by using a generative
adversarial network (GAN) \cite{gans} to further refine
the outputs. In addition, the recent contributions in
\cite{deeproadmapper}, \cite{dlinknet},
\cite{batra2019improved}, \cite{singh2018self},
\cite{rotich2018using}, \cite{rotich2018resource} and
\cite{costea2018roadmap} use datasets with precise training
labels for semantic labeling of overhead imagery. All these
approaches use single-view images that are usually
pre-orthorectified.
Some examples of popular datasets for semantic labeling of
overhead imagery with manually-generated and/or
manually-corrected training labels can be found in
\cite{deepglobe}, \cite{spacenet}, \cite{jhu_us3d},
\cite{jhu_urban3d}, \cite{jhu_new3d}, \cite{datafusion},
\cite{datafusioncon1}, \cite{isprs2d}, \cite{torontocity}
and \cite{dota}. The dataset presented in
\cite{jhu_us3d} provides satellite images, airborne LiDAR,
and building labels (derived from LiDAR) that are manually
corrected. The DeepGlobe dataset \cite{deepglobe} provides
satellite images and precise labels (annotated by experts)
for land cover classification and road and building
detection. The study described in \cite{jhu_new3d}
combines multi-view satellite imagery and large-area DSMs
(obtained from commercial vendors) \cite{jhu_urban3d} with
building labels that are initialized using LiDAR from the
HSIP 133 cities data set \cite{GRID}. The IEEE GRSS Data
Fusion Contest dataset \cite{datafusion},
\cite{datafusioncon1} provides true ortho images, LiDAR
and hyperspectral data along with precise groundtruth
labels for 17 local climate zones. A summary of the
top-performing algorithms on this dataset can be found in
\cite{datafusioncon2}.
We will restrict our discussion of prior contributions that
use information from multiple views to CNN-based
approaches. Variants of multi-view CNNs have been proposed
primarily for segmentation of image-sequences and video
frames, and for applications such as 3D shape
recognition/segmentation and 3D pose
estimation. State-of-the-art examples include the approaches
described in \cite{mv-paper1}, \cite{mv-paper2},
\cite{mv-paper3}, \cite{mv-paper4}, \cite{mv-paper5},
\cite{mv-paper6}, \cite{mv-paper8}, \cite{mv-paper9},
\cite{mv-paper10}, \cite{mv-paper11}, \cite{mv-paper12},
\cite{mv-paper13} and \cite{mv-paper14}. These contributions
share one or more of the following attributes: (1) They
synthetically generate multiple views by either projecting
3D data into different planes, or by viewing the same image
at multiple scales; (2) They extract features from multiple
views, concatenate/pool such features and/or enforce
consistency checks between the features; (3) They use only a
few views (of the order of 5 or less). For instance,
while the study described in \cite{mv-paper1} improves
semantic segmentation of RGB-D video sequences by
enforcing consistency checks after projecting the
sequences into a reference view at training time, the
approach presented in \cite{mv-paper3} estimates 3D hand
pose by first projecting the input point clouds onto
three planes, subsequently training CNNs for each plane
and then fusing the output predictions.
With respect to the field of remote-sensing, multi-date
satellite images have been used for applications such as
change detection. For instance, the study described in
\cite{changedetect} demonstrates unsupervised
change-detection between a single pair of images with deep
features extracted using a cycle-consistent GAN
\cite{CycleGan2017}. However, there do not exist many studies
that use CNNs for labeling of multi-view and multi-date
satellite images. A relevant contribution is the one
described in \cite{dfc_winner} that won the 2019 IEEE GRSS
Data Fusion Contest for Multi-View Semantic Stereo
\cite{dfc2019}. The work in \cite{danesfield} also uses
off-nadir WV3 images for semantic labeling. Both these
approaches still treat the different views of the same scene
on the ground independently during training. To the best of
our knowledge, no true multi-view approach for semantic segmentation
using satellite images existed prior to the work reported here.
We also include a brief review of the literature related to
constructing DSMs from satellite images. Fully automated
approaches for constructing DSMs from satellite images have
been discussed in \cite{ohio_3d}, \cite{largescale_isprs},
\cite{nasa-ames}, \cite{ozge_comparison}, \cite{s2p}, \cite{demsculpt} and
\cite{isprs_ptcld_example}. While the studies described
in \cite{largescale_isprs}, \cite{nasa-ames} and
\cite{s2p} process pairs of images to construct multiple
pairwise point clouds that are subsequently fused to
construct a dense DSM, the contribution in
\cite{ozge_comparison} compares such approaches with an
alternative approach that divides the 3D scene into
voxels, projects each voxel into all the images and
subsequently reasons about the probability of occupancy of
each voxel using the corresponding pixel features from all
the images. {\em In all of these contributions, the DSMs
that are constructed cover relatively small
areas.} The large-area DSM contribution in
\cite{Pleiades} is based on a small number of in-track
images that are typically captured seconds or minutes apart
by the Pl\'eiades satellite. In addition to the aforementioned contributions, the study in \cite{rvlstereo} provides a dataset containing stereo-rectified images and the associated groundtruth disparity maps for different world regions, which can be used for benchmarking stereo-matching algorithms.
\section {A Framework for Large-Area Image
Alignment, DSM Creation, and Generating Training Samples from OSM}
\label{sec:system_overview}
\begin{figure*}[h]
\centering
\includegraphics[width=0.7\linewidth]{Overview_jstars_1.jpg}
\caption{Overview of our framework. The three inputs
are shown in orange-colored boxes. All outputs
produced by the system are shown in green-colored
boxes. The modules in blue-colored ellipses operate on
a tile-wise basis.}
\label{fig:overview}
\end{figure*}
As stated in the Introduction section, our goal is to generate
accurate semantic labels for the points on the ground (as
opposed to the pixels in the images). Solving this problem
requires correcting the positioning errors in the satellite
cameras and estimating accurate elevation information for
each point on the ground --- since only then can we
accurately establish the relationship between the pixels in
the images and the points on the ground. This will also
enable us to establish correspondences between the pixels of
the multiple views of the same scene.
Therefore, an important intermediate step in our processing
chain is the calculation of the DSM. To the best of our
knowledge, there is no public contribution that discusses a
complete framework for automatic alignment and creation of
{\em large-area} DSMs over a 100 km$^2$ region using
satellite images taken as far apart as 2 years. Because of
the role played by high-quality large-area DSMs in our
framework, we have highlighted this part of the framework in
the Introduction and shown some sample results in
Fig. \ref{fig:dsm_example}.
An overview of the overall framework presented in this study
is shown in Fig. \ref{fig:overview}. The system has three
inputs that are shown by the orange colored boxes: (1)
panchromatic and 8-band multispectral satellite
images;
(2) the metadata associated with the images; and (3) the OSM
vectors. After the CNN is trained in the manner described
in the rest of this manuscript, the framework directly outputs
semantic labels for the world points.
In the rest of this section, we will briefly describe the
major components of the framework, apart from the
machine-learning component. These components are described
in greater detail in the Appendices.
\begin{LaTeXdescription}%
\setlength\itemsep{2pt}
\item[Tiling and Image Alignment:] The notion of a tile is
used only for aligning the images and for constructing a
DSM. For the CNN-based machine-learning part of the
system, we work directly with the whole images and with
the OSM for the entire area of interest. Tiling is made
necessary by the following two considerations: (1) The
alignment correction parameters for a full satellite image
cannot be assumed to be the same over the entire image;
(2) The computational requirements for image-to-image
alignment and DSM construction become too onerous for
full-sized images. We have included evidence for the need
for tiling in Appendix \ref{sec:image_align}. On a
related note, the study reported in \cite{chipcluster}
describes an approach that divides a large region into
smaller chips for the purpose of land cover clustering.
After tiling, the images are aligned with
bundle-adjustment algorithms, which is a standard practice
for satellite images. Alignment in this context means
calculating corrections for the rational polynomial
coefficients (RPCs) of each image.
\item[DSM Construction:] A DSM is constructed from the
disparity map generated by the hierarchical tSGM algorithm
\cite{rothermel2012sure}. Stereo matching is only applied
to those pairs that pass certain prespecified criteria
with respect to differences in the view angles, sun
angles, time of acquisition, etc., subject to the
maximization of the azimuth angle coverage. The disparity
maps and corrected RPCs are used to construct pairwise
point clouds. Since the images have already been aligned,
the corresponding point clouds are also aligned and can be
fused without any further 3D alignment. Tile-level DSMs
are merged into a large-area DSM.
\item[Generating Training Samples:] The training data is
generated by using an $F \times F$ window to randomly
sample the images after they have been pansharpened and
orthorectified using the DSM. We refer to such an $F \times F$
window on the ground as a ground-window.
The parameter $F$ is empirically set to 572 in our experiments\footnote{Note that the value of $F$ can change depending on the resolution of the images. $F$ should be chosen such that the windows are large enough to capture sufficient spatial context around the objects of interest.}. Subsequently, the OSM vectors are converted to raster format with
the same resolution as in the orthorectified images. Thus
there is a label for each geographic point in the
orthorectified images. The OSM roads are thickened to have
a constant width of 8m. Since the images
are aligned with sub-pixel accuracy and are
orthorectified, the training samples from the multiple
images that view the same ground-window correspond to
one another on a point-by-point basis, thereby giving us
multi-view training data; a minimal sketch of this sampling step is given after this list.
\end{LaTeXdescription}
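The sampling step in the last item can be summarized by the following sketch. It assumes that the pansharpened, orthorectified views and the rasterized OSM labels have already been loaded as mutually aligned arrays; the array layout and the function name are illustrative choices, not part of a released API.
\begin{verbatim}
import numpy as np

def sample_ground_windows(views, labels, F=572, n_samples=1000, seed=None):
    """Yield aligned F x F ground-windows for multi-view training.

    views  : array (M, C, H, W) -- orthorectified, mutually aligned views
    labels : array (H, W)       -- rasterized OSM labels on the same grid

    Because the views are true orthophotos on a common grid, the M
    crops returned for a window correspond point-by-point.
    """
    rng = np.random.default_rng(seed)
    M, C, H, W = views.shape
    for _ in range(n_samples):
        r = int(rng.integers(0, H - F + 1))
        c = int(rng.integers(0, W - F + 1))
        yield views[:, :, r:r + F, c:c + F], labels[r:r + F, c:c + F]
\end{verbatim}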
\section{Multi-View Training and Inference}
\label{sec:mvapproach}
\subsection{Motivation for our Proposed Approach}
\label{sec:motiv}
Our multi-view training framework is motivated by the
following factors:
\begin{LaTeXdescription}
\setlength\itemsep{2pt}
\item[Convenience:] With newer and better single-view CNNs
being designed so frequently, it would be convenient if the
multi-view fusion module could be designed as an add-on to
an existing pretrained architecture. This would make it easy
to absorb the latest improvements in the single-view
architectures directly into the multi-view fusion
framework. We won't have to rethink the feature
concatenation for each new single-view CNN
architecture. Additionally, we want to efficiently train the
single-view weights in parallel across multiple GPUs and
carry out fusion on a single GPU.
\item[Multi-Date Images:] The satellite images could have
been collected years apart under different illumination and
atmospheric conditions. Thus, our task is very different
from traditional multi-view approaches that work with 3D
shapes or images captured by moving a (handheld)
camera around the same
scene.
\item[Varying Number of Views:]
The number of views covering a ground-window can vary from
1 to the total number of available images (32 in our case). This causes
practical challenges in backpropagating gradients
when using CNNs that assume the availability of a fixed
number of views for concatenating features. At the same
time, we do not want to exclude windows that are covered by
less than a specified number of views. Our goal is to use
all available training data and all available views for
every ground-window.
\end{LaTeXdescription}
\subsection{Multi-View Fusion Module}
\label{sec:mvcnn}
\begin{figure*}[h]
\centering
\includegraphics[width=1\linewidth]{MultiViewTraining_v2.jpg}
\caption{Overview of Multi-View Training}
\label{fig:mvtrain}
\end{figure*}
\begin{figure*}[h]
\centering
\subfloat[]{
\includegraphics[width=1\linewidth]{Strategy-2a.jpg}
}\\
\subfloat[]{
\includegraphics[width=1\linewidth]{Strategy-2b.jpg}
}
\caption{Two choices for Multi-View Fusion. At top is MV-A
in which the weights of the MV Fusion layer are
different for each channel of each view. At bottom is
MV-B where the weights of the MV Fusion layer are shared
by all the channels of a view.}
\label{fig:mv-ab}
\end{figure*}
Fig. \ref{fig:mvtrain} shows an overview of our multi-view
training framework where we propose that the multi-view
information be aggregated at the predictions stage. In this
sense, our approach is related to the strategies discussed
in \cite{mv-paper9} and \cite{mv-paper14}. While the
contribution in \cite{mv-paper9} considers the ``RGB'' and
the depth channel of the same RGB-D image as two ``views''
(which is a much simpler case), the 3D shape segmentation
approach in \cite{mv-paper14} synthetically generates
multiple-views of the same 3D object. In contrast, the
significantly more complex nature of our data makes our
problem very different from these
tasks.
The multi-view fusion module shown in Fig. \ref{fig:mvtrain}
can be added to any existing/pretrained single-view CNN. We
experimented with different choices for this module and
present two that gave good performance yields. These are
shown in Fig.~\ref{fig:mv-ab} and we denote them as MV-A
(Multi-View-A) and MV-B (Multi-View-B), respectively. Both
MV-A and MV-B consist of a single block of weights with
kernel size, stride and padding set to 1.
In the following discussion, $V$ denotes a subset of views
for a single ground-window. $N$ is the number of views in
$V$. $H$ and $W$ are the height and width of a single view
respectively. $M$ is the maximum number of possible views
for a ground-window. $C_L$ is the number of target classes.
As shown in Fig. \ref{fig:mvtrain}, the Single-View (SV) CNN
outputs a tensor of shape $(C_L,H,W)$ for each of the $N$ views,
which are concatenated along the batch axis to yield a
tensor of shape $(N,C_L,H,W)$, which we denote as
$T^N_{MV}$. This tensor is then inserted into a larger tensor
which we denote as $T_{MV}$. Each view has a fixed index in
$T_{MV}$. Missing views are filled with zeros. The
difference between MV-A and MV-B can now be explained as
follows.
\begin{LaTeXdescription}
\setlength\itemsep{2pt}
\item[MV-A:] In this case, $T^N_{MV}$ is reshaped into a tensor
of shape $(1,N \times C_L,H,W)$. It is then inserted into
$T_{MV}$ which is of shape $(1,M \times C_L,H,W)$. $T_{MV}$
is then input to the MV-A module which subsequently outputs
a tensor of shape $(1,C_L,H,W)$. MV-A thus contains a total
of $M \times C_L$ trainable weights, one for each channel of
each view.
\item[MV-B:] In this case, $T^N_{MV}$ is first reshaped into a
tensor of shape $(C_L,N,H,W)$. It is then inserted into
$T_{MV}$ which is of shape $(C_L,M,H,W)$. $T_{MV}$ is then
input to the MV-B module which subsequently outputs a tensor
of shape $(C_L,1,H,W)$. MV-B thus contains a total of $M$
trainable weights, one for each view. The first and second
axis of this tensor are swapped to yield a tensor of shape
$(1,C_L,H,W)$ which is then used to calculate the loss. A code sketch of both modules is given after this list.
\end{LaTeXdescription}
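A minimal PyTorch realization of the two modules is sketched below. For readability we keep the views on a separate tensor axis instead of performing the reshapes described above, which is mathematically equivalent; the initialization is an illustrative choice.
\begin{verbatim}
import torch
import torch.nn as nn

class MVFusionA(nn.Module):
    """MV-A: M * C_L trainable weights, one per channel of each view."""
    def __init__(self, M, C_L):
        super().__init__()
        self.weight = nn.Parameter(torch.full((M, C_L, 1, 1), 1.0 / M))

    def forward(self, T_MV):
        # T_MV: (M, C_L, H, W); missing views are zero-filled slices.
        return (self.weight * T_MV).sum(dim=0, keepdim=True)  # (1,C_L,H,W)

class MVFusionB(nn.Module):
    """MV-B: M trainable weights, one per view, shared by all channels."""
    def __init__(self, M):
        super().__init__()
        self.weight = nn.Parameter(torch.full((M, 1, 1, 1), 1.0 / M))

    def forward(self, T_MV):
        # T_MV: (M, C_L, H, W); missing views are zero-filled slices.
        return (self.weight * T_MV).sum(dim=0, keepdim=True)  # (1,C_L,H,W)
\end{verbatim}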
\subsection{Multi-View Loss Function}
\label{sec:loss}
The total loss is defined as
\begin{align}
L = \alpha \cdot L_{SV} + \beta \cdot L_{MV}
\label{eqn:loss}
\end{align}
where $L_{SV}$ represents the single-view loss, $L_{MV}$
represents the multi-view loss and $\alpha$ and $\beta$ are
scalars used to weight the two loss functions. The
single-view loss is calculated as follows.
\begin{align}
L_{SV} = \frac{1}{N} \sum_{i=1}^{N} \text{CE$_i(G_i,T_i)$}
\end{align}
where CE$_i$ is the pointwise cross-entropy loss for the
$i^{th}$ view, $N$ is the number of views in a subset $V$
of views that cover a single ground-window and $T_i$ is the
output tensor of the SV CNN for the $i^{th}$ view. To
calculate CE$_i$, we mask the OSM labels for the
ground-window with the occlusion mask of the $i^{th}$
view. This masked ground-truth is denoted by $G_i$ in the
equation above. Note that this mask is implicitly computed
during the process of true orthorectification. The gradients
of $L_{SV}$ are not backpropagated at these masked
points. What this means is that for each individual view,
$L_{SV}$ only focuses on portions of the ground-window that
are visible in that view.
The pointwise cross-entropy loss between two probability
distributions $A$ and $B$, each defined over $C_L$ classes,
is calculated as follows.
\begin{align}
\text{CE}(A,B) = -\sum_p\sum^{C_L}_{j=1} A(p,j) \cdot log(B(p,j))
\label{eqn:celoss}
\end{align}
where $p$ refers to a single point. $A(p,j)$ is the
probability that point $p$ belongs to class $j$ as defined
by $A$. $B(p,j)$ is the probability that point $p$ belongs
to class $j$ as defined by $B$.
The multi-view loss is calculated as follows.
\begin{align}
L_{MV} = \text{CE}(G,P_{MV})
\label{eqn:mvloss}
\end{align}
where CE$(G,P_{MV})$ is the pointwise cross-entropy loss for
the ground-window. This is calculated using the unmasked OSM
label $G$ and the output $P_{MV}$ of the MV Fusion
module. $P_{MV}$ can be viewed as a final probability
distribution that is estimated by fusing the individual
probability distributions that are output by the SV CNN for
each of the $N$ views. We can denote $P_{MV}$ as a function
$f(T_1,T_2,...,T_N)$ where $f$ depends upon the architecture
of the MV Fusion module. Note that $f$ is differentiable.
Thus, Eq. \ref{eqn:mvloss} can be rewritten as
\begin{align}
L_{MV} = \text{CE}(G,f(T_1,T_2,...,T_N))
\label{eqn:mvloss1}
\end{align}
Substituting the expression for the $\text{CE}$ loss from
Eq. \ref{eqn:celoss} into Eq. \ref{eqn:mvloss1}, we get the
following expression for $L_{MV}$.
\begin{align}
L_{MV} = -\sum_p\sum^{C_L}_{j=1} G(p,j) \cdot log(f(T_1,T_2,...,T_N)(p,j))
\label{eqn:mvloss2}
\end{align}
Note that \textit{$L_{MV}$ is not linearly separable over
the views in V.} In other words, unlike $L_{SV}$, we
cannot separate it into a sum of losses for each view. Thus,
$L_{MV}$ captures the predictions of the network in an
ensemble sense over multiple views covering a
ground-window. When backpropagating the gradients of $L$,
the gradients from $L_{MV}$ are influenced by the relative
differences between the predictions for each view, and this
in turn translates into better weight-updates. Moreover, by
using $L_{MV}$, the network is shown labels for all portions
of the ground including those that are missing in some views
of $V$. This enables the network to make better decisions
about occluded regions using multiple views.
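The total loss can be assembled as in the following PyTorch sketch. We treat the SV CNN outputs as logits and rely on the framework's cross-entropy; the occlusion masks are the ones computed implicitly during true orthorectification, and the ignore_index convention is an illustrative choice.
\begin{verbatim}
import torch
import torch.nn.functional as F

def total_loss(T_views, masks, G, P_MV, alpha=1.0, beta=1.0,
               class_weights=None, ignore_index=255):
    """L = alpha * L_SV + beta * L_MV (a sketch of the loss above).

    T_views : (N, C_L, H, W) per-view logits from the SV CNN
    masks   : (N, H, W) bool, True where the ground is visible in a view
    G       : (H, W) long, rasterized OSM labels for the ground-window
    P_MV    : (1, C_L, H, W) fused logits from the MV Fusion module
    """
    N = T_views.shape[0]
    L_SV = T_views.new_zeros(())
    for i in range(N):
        # Occluded points of the i-th view receive ignore_index, so no
        # gradient of L_SV flows through them (the masked labels G_i).
        G_i = torch.where(masks[i], G, torch.full_like(G, ignore_index))
        L_SV = L_SV + F.cross_entropy(T_views[i:i + 1], G_i.unsqueeze(0),
                                      weight=class_weights,
                                      ignore_index=ignore_index)
    L_SV = L_SV / N
    # L_MV uses the unmasked labels: the fused prediction is supervised
    # on the whole ground-window, including points occluded in some views.
    L_MV = F.cross_entropy(P_MV, G.unsqueeze(0), weight=class_weights)
    return alpha * L_SV + beta * L_MV
\end{verbatim}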
\subsection{Strategies for Multi-View Training and Inference}
\label{sec:mvtrain}
\subsubsection{Approaches for Data-Loading}
The term ``data-loading'' refers to how the data samples are
grouped into batches and input to the CNN. We use two
different data-loading approaches.
\begin{LaTeXdescription}
\setlength\itemsep{2pt}
\item[Single-View Data-Loading (SV DATALOAD):] This is a
conventional data-loading strategy where a single training
batch can contain views of different ground-windows. The
batch size is constant and only depends on the available
GPU memory. SV DATALOAD uses all the available data.
\item[Multi-View Data-Loading (MV DATALOAD):]
Under this strategy, a training batch consists solely of
views that cover the same ground-window. The number of
such views can vary from window to window. However, due to
memory constraints, we cannot load all 32 views onto the
GPUs simultaneously. As a workaround, we use the
following approach. Let $|Q|$ denote a pre-specified
number of views that can fit into the GPU memory, $R$
denote the set of available views for a ground-window and
$|R|$ denote the total number of views in $R$. If
$|R| < |Q|$, we skip loading this ground-window. If
$|R| > |Q|$, we randomly split $R$ into a collection of
overlapping subsets $\{Q_j\}$, such that each $Q_j$ has
$|Q|$ views and $\cup Q_j = R$ where $\cup$ denotes the
union operator. The tensor $T_{MV}$ that is input to the
MV Fusion module is reset to zero before inputting each
$Q_j$ to the CNN. Note that \textit{this random split has
the added advantage that the CNN sees a different
collection of views for the same ground-window in
different epochs, which should help it to learn better.}
\end{LaTeXdescription}
The design of MV DATALOAD is motivated by our
observation that if we allow the batch size to change significantly
for every ground-window (based on the corresponding number
of available views), it significantly slows down the rate
of convergence. Therefore, we exclude ground-windows with
less than $|Q|$ views, and for the remaining windows we make
sure that every subset $Q_j$ has $|Q|$ views. \textit{This
enforces a constant batch size of $|Q|$}, resulting in
faster convergence.
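The random split used by MV DATALOAD can be implemented as follows; the rule for padding the last subset is one natural way to enforce the constant batch size $|Q|$.
\begin{verbatim}
import random

def split_views(R, Q):
    """Split the available views R of one ground-window into overlapping
    subsets Q_j with exactly Q views each and union(Q_j) == R (a sketch).

    Returns [] when fewer than Q views are available (window skipped)."""
    if len(R) < Q:
        return []
    views = list(R)
    random.shuffle(views)  # yields a different split in every epoch
    subsets = [views[i:i + Q] for i in range(0, len(views), Q)]
    deficit = Q - len(subsets[-1])
    if deficit > 0:
        # Reuse already-assigned views so that the last subset also has
        # exactly Q views; the subsets therefore overlap.
        subsets[-1].extend(views[:deficit])
    return subsets
\end{verbatim}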
\subsubsection{Training Strategies}
We use the following different strategies to train the CNNs.
\\\\
\noindent \textbf{Single-View Training (SV TRAIN):} In this strategy,
the SV CNN is trained independently of the MV Fusion
module. We apply the SV DATALOAD approach to use all
available data. One can also interpret this as setting
$\beta = 0$ in Eq. \ref{eqn:loss} and freezing the weights
of the MV Fusion module.
We now define three different multi-view training strategies
as follows.
\begin{LaTeXdescription}%
\setlength\itemsep{2pt}
\item[MV TRAIN-I:] We first train the SV CNN using SV
TRAIN. Subsequently, we use MV DATALOAD to only train the
MV Fusion module by setting $\alpha=0$ in
Eq. \ref{eqn:loss}, and by freezing the weights of the SV
CNN. Hence, $L_{MV}$ only affects the weights of the MV
Fusion module and does not affect the SV CNN.
\item[MV TRAIN-II:] We first train the SV CNN using SV
TRAIN. Subsequently, both the pretrained SV CNN and the MV
Fusion module are trained together using MV DATALOAD and
the total loss as defined in Eq. \ref{eqn:loss}. Thus, the
$L_{MV}$ loss influences the weight updates of the SV CNN
as well. In practice, we lower the initial learning rate
of the SV CNN as it has already been trained and we only
want to fine-tune its weights.
\item[MV TRAIN-III:] In this strategy, we do not pretrain
the SV CNN, but rather train both the SV CNN and the MV
Fusion module together from scratch using the total loss
$L$ (Eq. \ref{eqn:loss}), and MV DATALOAD. This has the
disadvantage that the network never sees ground-windows
with less than $|Q|$ views, where $|Q|$ is a
user-specified parameter. One might expect this reduction
in the amount of training data to negatively impact
performance, especially given the sparse nature of the OSM
labels. Our experimental evaluation confirms this.
\end{LaTeXdescription}
To make a decision on when to stop training, a common
practice in machine-learning is to use a validation
dataset. However, in our case the validation data is also
drawn from OSM (to avoid any human intervention), and is
therefore noisy. To handle this, we make the following
proposal. We train a network until the training loss stops
decreasing. At the end of every epoch, we measure the IoU
using the validation data. For inference, we save the
network weights from two epochs -- one with the smallest
validation loss and the largest validation IoU, and the
other with the smallest training loss and an IoU that is
within an acceptable range of the largest validation IoU (to
reduce the chances of overfitting to the training data). We
denote the former as EPOCH-MIN-VAL (E$_{\text{MIN-VAL}}$)
and the latter as EPOCH-MIN-TRAIN (E$_{\text{MIN-TRAIN}}$)
respectively.
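In practice this bookkeeping amounts to tracking two checkpoints per training run, e.g. as in the sketch below; the IoU tolerance used to accept an E$_{\text{MIN-TRAIN}}$ candidate is an illustrative parameter.
\begin{verbatim}
def update_checkpoints(weights, epoch, train_loss, val_loss, val_iou,
                       best, iou_tol=0.02):
    """Track E_MIN-VAL and E_MIN-TRAIN during training (a sketch).

    best : dict with keys min_val_loss, max_val_iou, min_train_loss,
           e_min_val, e_min_train; iou_tol is an illustrative tolerance.
    """
    if val_loss < best["min_val_loss"]:
        best.update(min_val_loss=val_loss, e_min_val=(epoch, weights))
    best["max_val_iou"] = max(best["max_val_iou"], val_iou)
    # E_MIN-TRAIN: smallest training loss whose validation IoU stays
    # within an acceptable range of the largest validation IoU so far.
    if (train_loss < best["min_train_loss"]
            and val_iou >= best["max_val_iou"] - iou_tol):
        best.update(min_train_loss=train_loss, e_min_train=(epoch, weights))
    return best
\end{verbatim}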
\subsubsection{Inference}
To establish a baseline, we use a SV CNN trained with the SV
TRAIN strategy defined above, and merge the predictions from
overlapping views via majority voting. We will denote this
approach as SV CNN + VOTE. We also implemented an
alternative strategy of simply averaging the predicted
probabilities across overlapping views, which produced
nearly identical results to majority voting. For the sake of
brevity, we only report the results from SV CNN + VOTE as
the baseline.
Inference using the SV CNN + MV Fusion module is noticeably
faster than SV CNN + VOTE, because the former combines
multi-view information directly on the GPU. For inference,
the MV DATALOAD approach can be used with a single minor
modification. Instead of resetting the $T_{MV}$ tensor to
zeros before inputting each subset $Q_j$ of $R$, it is only
reset to zeros once for each ground-window. This means that the
final prediction for a ground-window is still made using all
the views.
\section{Semantic Segmentation Using Off-Nadir Images}
\label{sec:offnadir}
Up to now, our discussion has focused on using true
orthophotos for semantic segmentation. However, for many
applications, it would be useful to directly train CNNs on
the off-nadir images. Even for labeling world points, it
would be interesting to compare the approach from the
previous section vis-a-vis first training CNNs on the
original off-nadir images, and subsequently orthorectifying
the predicted labels. However, this would require a way to
project the OSM training labels from geographic coordinates
into the off-nadir images. Most prior OSM-based studies in
the literature are ill-equipped to carry out such a
comparison because they use pre-orthorectified images. Our
end-to-end automated pipeline, which includes the ability to
create large-area DSMs, enables us to solve the problem
stated above in the manner described below.
Since each building and buffered road-segment is represented
by a polygon in OSM, we use the following procedure to
create smooth labels in the off-nadir images. For a specific
polygon and a specific off-nadir image,
\begin{enumerate}
\item We obtain the longitude and latitude coordinates of
the vertices of the polygon from OSM.
\item Using the longitude and latitude coordinates, we find
the corresponding height values of the vertices from the
DSM.
\item Using the RPC equations and the latitude, longitude
and height coordinates, we project each vertex into the
off-nadir image.
\item Subsequently all the pixels contained inside a
projected polygon are marked with the correct
label. Portions of the polygon that fall outside the image
are ignored.
\end{enumerate}
The above procedure is repeated for every polygon and
off-nadir image. In practice, the polygons can be projected
independently of one another in parallel. This method is
very fast, but does come at a cost. Consider an example of
projecting a polygon representing a building-roof into an
off-nadir image. If the DSM height for a corner of this
polygon is incorrect, then, because we first project vector
data into the image and subsequently rasterize it, the
projected shape of the entire building-roof label could
become distorted. \textit{Thus, the noise in the DSM has
greater impact on the noise in the training labels when
using off-nadir images vis-a-vis using true orthophotos.}
A possible alternative strategy is to first map each pixel
in each off-nadir image into its longitude and latitude
coordinates, and subsequently check if this point lies
inside an OSM polygon. However, inverse projection needs an
iterative solution and cannot be done directly with the RPC
equations. Such a strategy will be significantly slower than
our adopted method.
\section{Experimental Evaluation and Results}
\label{sec:evaluation}
We use two datasets to evaluate the different
components of our framework. The first dataset consists of
32 WV3 images covering a 120 km$^2$ region in Ohio and the
second dataset consists of 32 WV3 images covering a 62
km$^2$ region in California. The latter is part of the
publicly available Spacenet \cite{spacenet}
repository. Building and road label data is downloaded from
the OSM website. \emph{No other preprocessing is done before
feeding the data to our framework}. Alignment and
large-area DSM construction are evaluated using both
datasets. For an extensive quantitative assessment of the
performances of the different semantic segmentation
strategies, we divided the 120 km$^2$ region in Ohio into a
109 km$^2$ region for training, a 1 km$^2$ region for
validation, and an unseen 10 km$^2$ region for
inference. The unseen region contains precise manual
annotations.
The last region is ``unseen'' because no training or
validation samples fall inside it.
We select the popular U-Net \cite{unet} as the SV CNN
because it is lightweight and has been used in many prior
studies with overhead imagery \cite{resunet},
\cite{yang2019road}, \cite{yang2019road_v2}. The U-Net is
modified to accept 8 band data, and we add
batch-normalization \cite{batchnorm} layers. Since OSM
labels are sparse, we weight the cross entropy losses with
the weights set to 0.2, 0.4 and 0.4 for the background,
building and road classes respectively. Training is done
using 4 NVIDIA Gtx-1080 Ti GPUs. Due to GPU memory
constraints, the parameter $|Q|$ for MV DATALOAD is set to
16.
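As an illustration, the weighted loss can be set up as
follows in PyTorch. The use of an ignore index for masked
``nodata'' pixels is an assumption about the implementation,
not a detail stated above.
\begin{verbatim}
import torch
import torch.nn as nn

# class weights to counter label sparsity:
# background, building, road
class_weights = torch.tensor([0.2, 0.4, 0.4])
criterion = nn.CrossEntropyLoss(weight=class_weights,
                                ignore_index=255)

# logits:  (B, 3, H, W) from the modified 8-band U-Net
# targets: (B, H, W) labels, with 255 marking masked pixels
# loss = criterion(logits, targets)
\end{verbatim}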
We will present the results of the semantic-segmentation
studies in the main body of the manuscript. Quantitative
evaluation of the image-to-image alignment and inter-tile
DSM alignment are included in Appendix
\ref{sec:align_qual}.
\subsection{Single-View vs Multi-View CNNs}
We have carried out experiments with different combinations
of CNNs, training strategies and inference
models.
For clarity, we present the most interesting results in this
manuscript.
The relevant notations have already been
defined in Section \ref{sec:mvtrain}.
To assist the reader, we will explain the notation used in
the tables below with an example. Consider the first row in
Table \ref{table:sv-mv}. This row corresponds to the case of
training a Single-View CNN using SV TRAIN. At inference
time, the EPOCH-MIN-VAL weights are used and the predictions
from different views are merged using majority voting.
\begin{table}[h]
\renewcommand{\arraystretch}{1.5}
\caption{Comparison of SV TRAIN vs MV TRAIN-II}
\label{table:sv-mv}
\centering
\begin{tabular}{|p{2.05cm}|p{1.675cm}|p{1.175cm}|p{1cm}|p{0.6cm}|}
\hline
CNN & Training & Inference & \multicolumn{2}{|c|}{IoU}\\
\hline
& & & Buildings & Roads\\
\hline
SV CNN + VOTE & SV TRAIN & E$_{\text{MIN-VAL}}$ & 0.75 & 0.57 \\
\hline
SV CNN + MV-A & MV TRAIN-II & E$_{\text{MIN-VAL}}$ & \textbf{0.79} & 0.55 \\
\hline
SV CNN + MV-B & MV TRAIN-II & E$_{\text{MIN-VAL}}$ & \textbf{0.80} & 0.57\\
\hline
\hline
SV CNN + VOTE & SV TRAIN & E$_{\text{MIN-TRAIN}}$ & 0.75 & 0.56 \\
\hline
SV CNN + MV-A & MV TRAIN-II & E$_{\text{MIN-TRAIN}}$ & 0.73 & \textbf{0.6}\\
\hline
SV CNN + MV-B & MV TRAIN-II & E$_{\text{MIN-TRAIN}}$ & 0.73 & \textbf{0.64}\\
\hline
\end{tabular}
\end{table}
Table \ref{table:sv-mv} shows the best gains that we get by
using multi-view training and inference, vis-a-vis
single-view training and majority voting. The first three
rows correspond to running inference using the EPOCH-MIN-VAL
weights. Using MV TRAIN-II to train the SV CNN + MV-B
network, we outperform the baseline with a $5\%$ increase in
the IoU for the building class, while performing comparably
with the baseline for the road class. With the MV-A module,
the IoU for the building class improves by $4\%$, but that of
the road class decreases by $2\%$.
The noise in the training and validation labels for roads is
much more than that for buildings because we assume a
constant width of 8 m for all roads, and because the
centerlines of roads (as marked in OSM) are often not along
their true centers. To handle this, in Section
\ref{sec:mvtrain}, we proposed to also save the network
weights for the epoch with the minimum training loss and
good validation IoU. By using the validation IoU, we
reduce the chances of these network weights being overfitted
to the data. Our intuition is borne out by the last three
rows of Table \ref{table:sv-mv}. When compared to the
baseline, using MV TRAIN-II with the SV CNN + MV-A and the
SV CNN + MV-B networks increases the IoU for the road class
by $4\%$ and $8\%$ respectively while slightly lowering the
building IoU by $2\%$. It is interesting to note that in
contrast, EPOCH-MIN-VAL and EPOCH-MIN-TRAIN perform
comparably for the SV TRAIN strategy. Based on these
results, we conclude that \textit{the MV TRAIN-II strategy
is a good approach for multi-view training and the MV-B
Fusion module yields the maximum gains.} We recommend
using EPOCH-MIN-VAL for segmenting buildings, and
EPOCH-MIN-TRAIN for segmenting roads.
We should point out that the SV CNN + VOTE baseline is trained on the
same data as the SV CNN + MV Fusion module (trained with MV
TRAIN II), and therefore the improvements are not due to data
augmentation.
\subsection{Does Multi-View Training Improve the Single-View
CNN?}
To obtain additional insights into how multi-view training
improves accuracy, we carry out two ablation studies using
the SV CNN + MV-B network because it yielded the maximum
gains with the MV TRAIN-II strategy.
For the first study, we freeze the pretrained SV CNN and
only train the MV-B module using the MV TRAIN-I
strategy. The corresponding IoU scores are reported in the
first two rows of Table \ref{table:mv-I-ablation}. Comparing
these two rows with the baseline (SV CNN + VOTE) shown in
Table \ref{table:sv-mv}, we see that we do not get any
noticeable improvements. Remember that in MV TRAIN-I, the
multi-view loss ($L_{MV}$) only modifies the weights of the
MV Fusion module. This points to the need for allowing
$L_{MV}$ to influence the weights of the SV CNN as well, as
is done by MV TRAIN-II.
\begin{table}[h]
\begin{center}
\caption{Impact of multi-view training on the
single-view CNN}
\label{table:mv-I-ablation}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|p{2.05cm}|p{1.675cm}|p{1.175cm}|p{1cm}|p{0.6cm}|}
\hline
CNN & Training & Inference & \multicolumn{2}{|c|}{IoU}\\
\hline
& & & Buildings & Roads\\
\hline
SV CNN + MV-B & MV TRAIN-I & E$_{\text{MIN-VAL}}$ & 0.75 & 0.57 \\
\hline
SV CNN + MV-B & MV TRAIN-I & E$_{\text{MIN-TRAIN}}$ & 0.75 & 0.57 \\
\hline
SV$_{\text{(MV)}}$ + VOTE & MV TRAIN-II & E$_{\text{MIN-VAL}}$ & \textbf{0.80} & 0.55 \\
\hline
SV$_{\text{(MV)}}$ + VOTE & MV TRAIN-II & E$_{\text{MIN-TRAIN}}$ & 0.74 & \textbf{0.62} \\
\hline
SV CNN + MV-B & MV TRAIN-II & E$_{\text{MIN-VAL}}$ & \textbf{0.80} & 0.57\\
\hline
SV CNN + MV-B & MV TRAIN-II & E$_{\text{MIN-TRAIN}}$ & 0.73 & \textbf{0.64}\\
\hline
\end{tabular}
\end{center}
\end{table}
For the second study, we take the best performing SV CNN +
MV-B network that was trained using the MV TRAIN-II strategy
and remove the MV-B module from it. We denote this SV CNN as
SV$_{\text{(MV)}}$ CNN. We run inference using this
SV$_{\text{(MV)}}$ CNN and merge the predictions from
overlapping views using majority voting. The corresponding
IoUs are shown in the third and fourth rows of Table
\ref{table:mv-I-ablation}. Comparing these two rows with the
baseline SV CNN + VOTE in Table \ref{table:sv-mv}, we see
that \textbf {multi-view training has significantly improved
the performance of the SV$_{\text{(MV)}}$ network itself,
without any increase in the number of trainable
parameters.} This indicates that intelligently training a
SV CNN using all the available views for a scene can
alleviate the effect of noise in the training labels,
without changing the original architecture of the SV
CNN.
We reproduce the IoUs of the complete SV CNN + MV-B network
trained with MV TRAIN-II, in the fifth and sixth rows of
Table \ref{table:mv-I-ablation}. Comparing the 3$^{rd}$ and
5$^{th}$ rows, and the 4$^{th}$ and 6$^{th}$ rows, we see
that the MV Fusion module does provide an additional $2\%$
improvement in the IoU for the road class, over the
SV$_{\text{(MV)}}$ network.
\subsection{The Need for Using a Combination of SV DATALOAD and MV DATALOAD}
As another experiment, when we employ the MV TRAIN-III
strategy to train the SV CNN + MV-B network from scratch
using the MV DATALOAD method, the IoU for the building class
drops down significantly to 0.62, when compared to the
baseline in Table \ref{table:sv-mv}. This is as expected
because in this case, the network is trained with fewer
training samples. It never sees ground-windows with fewer
than $|Q|$ views. Therefore, it is important that the
network be trained with as much non-redundant data as
possible and with multi-view constraints, as is done by
using a combination of SV DATALOAD and MV DATALOAD in MV
TRAIN-II.
\subsection{Comparison to Prior State-of-the-Art}
\label{sec:prior_comp}
For a fair comparison, we consider the most relevant prior
state-of-the-art studies that use multi-view off-nadir
images for semantic segmentation. The work presented in
\cite{dfc_winner} discusses the entry that won the 2019 IEEE
GRSS Data Fusion Contest for Multi-view Semantic
Stereo. This approach trains single-view networks using both
WV3 images and DSMs over a small 10-20 km$^2$ region with
\textit{ precisely annotated human labels} and reports an
IoU of about 0.8 for the building class. The performance gains
come from training the network on DSMs, which helps it segment
buildings more accurately. Our best IoU for the building
class is also 0.8, but we use only noisy training labels
that are automatically derived from a much larger 100 km$^2$
region. It is possible that by adding the DSMs as inputs to
our network, we could further improve the
IoU.
Our IoU for the building class is noticeably better than
that reported by the work in \cite{danesfield}, which trains
single-view CNNs on WV3 images and OSM labels covering 1-2
km$^2$.
Most of the other studies in the
literature use single-view pre-orthorectified images. It
should be pointed out that our multi-view training strategy
could be applied to any of those network architectures.
\subsubsection{Using DeepLabv3+ as the SV CNN}
For another comparison, we change the SV CNN from a
U-Net to a pretrained DeepLabv3+ (DLabv3) CNN with a
WideResNet38 trunk \cite{deeplabv3} that is one of the top
performers on the CityScapes benchmark dataset
\cite{Cordts2016Cityscapes}. We modify the first layer to
accept 8 bands. With this modification, the network has
$\sim$137 million trainable parameters whereas in comparison
the U-Net only has $\sim$31 million parameters.
We first train the network using SV TRAIN.
Due to the size of the network, we set the batch size to 12
for SV TRAIN. At inference time we use majority voting. This
strategy is denoted as SV$_{(\text{DLabv3})}$ + VOTE. For
the multi-view training, we append the MV-B Fusion module to
the network and then employ the MV TRAIN-II strategy. Recall
that this is the best performing strategy for the
U-Net. $|Q|$ is set to 12 for the MV TRAIN-II strategy.
In Table \ref{table:deeplab-mv-train-II} we show the IoUs
for the DeepLabv3+ experiments. Firstly, we note that
SV$_{(\text{DLabv3})}$ + VOTE achieves an IoU of $0.828$ and
$0.553$ for the building and road classes respectively. It
should be kept in mind that this network has already been
trained on a large amount of precise labels from the
CityScapes dataset \cite{Cordts2016Cityscapes}.
What is interesting is that these numbers are comparable to
the corresponding IoUs of $0.80$ and $0.57$ for the U-Net +
MV-B network trained with MV TRAIN-II, despite the fact that
\textit{the U-Net has significantly fewer trainable
parameters than the DeepLabv3+ network and that it has
been trained only on noisy labels}.
The second row of Table \ref{table:deeplab-mv-train-II} has
the entry for running inference using the EPOCH-MIN-VAL
weights of the SV$_{(\text{DLabv3})}$ + MV-B network after
being trained with MV TRAIN-II. Compared to the
SV$_{(\text{DLabv3})}$ + VOTE, the building IoU goes down by
2.8$\%$ whereas the road IoU goes up by 5$\%$. The mean IoU
goes up by $1\%$ when we use multi-view training. One
possible reason for this small improvement is that we are
already at the limits of how much a CNN can learn, given the
extent of noise in the system. Another possibility is that
since the DeepLabv3+ network is much bigger than the U-Net,
and since the multi-view features are fused at the end, one
can expect the influence of the multi-view loss ($L_{MV}$)
on the earlier layers of the DeepLabv3+ network to be
reduced when compared to the case of the U-Net. We might get
better results by fusing the multi-view data at an earlier
stage in the network. This needs further
investigation.
\begin{table}[h]
\begin{center}
\caption{Comparison of SV TRAIN with MV TRAIN-II when Using
DeepLabv3+ as the SV CNN}
\label{table:deeplab-mv-train-II}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|p{2.3cm}|p{1.675cm}|p{1cm}|p{1cm}|p{0.6cm}|}
\hline
CNN & Training & Inference Model & \multicolumn{2}{c|}{IoU}\\
\hline
& & & Buildings & Roads\\
\hline
SV$_{(\text{DLabv3})}$ + VOTE & SV TRAIN & E$_{\text{MIN-VAL}}$ & 0.828 & 0.553 \\
\hline
SV$_{(\text{DLabv3})}$ + MV-B & MV TRAIN-II & E$_{\text{MIN-VAL}}$ & 0.800 & 0.605 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Training on True Orthophotos vs on Off-Nadir Images}
In Section \ref{sec:offnadir}, we have described our
framework for creating training labels to directly train a
SV CNN on the off-nadir images. For evaluation, we use this
trained CNN to label those portions of the off-nadir images
that correspond to the ``unseen'' inference region. For a
fair comparison, the predicted labels are then
orthorectified so that the evaluation is done in the same
orthorectified space. Predictions from overlapping images
are merged via majority voting.
When the off-nadir images and projected OSM labels are used
for training, both the EPOCH-MIN-VAL and EPOCH-MIN-TRAIN
weights yield IoU scores of 0.73 and 0.55 for the building
and road classes respectively. These scores are $2\%$ lower
than the corresponding numbers for the SV CNN + VOTE that is
trained on true orthophotos. As mentioned in Section
\ref{sec:offnadir}, one possible reason for this reduced IoU
might be the increased error in the OSM labels when
projected into the off-nadir images. Another reason could be
that the CNN finds it difficult to separate the building
walls from the roofs in the off-nadir images. In contrast,
vertical building walls are not present in true
orthophotos. Nevertheless, we have demonstrated that it is
possible to train a CNN on off-nadir images using noisy
labels, and obtain decent IoU scores. Such a CNN can be
directly used with new off-nadir images without having to
align or orthorectify the images. Multi-view training using
off-nadir images is also possible, albeit more challenging,
which we leave for future work.
\begin{figure*}[h]
\centering
\subfloat[Ortho View]
{
\includegraphics[width=0.4\linewidth, height = 0.4\linewidth]{example_pred_10_img.jpg}
}
\subfloat[Predicted Labels]
{
\includegraphics[width=0.4\linewidth, height = 0.4\linewidth]{example_pred_10.jpg}
}\\
\subfloat[Ortho View]
{
\includegraphics[width=0.4\linewidth, height = 0.4\linewidth]{example_pred_5_img.jpg}
}
\subfloat[Predicted Labels]
{
\includegraphics[width=0.4\linewidth, height = 0.4\linewidth]{example_pred_5.jpg}
}\\
\caption{Examples of orthorectified images and semantic
labels output by our pipeline. Buildings are marked in
\textcolor{blue}{blue} and roads are marked in
\textcolor{magenta}{magenta}.}
\label{fig:qual_eval_1}
\end{figure*}
\subsection{Qualitative Results}
In Figs. \ref{fig:qual_eval_1}, \ref{fig:qual_eval_2} and
\ref{fig:qual_eval_3}, we show some typical examples of
semantic labels output by our CNN. In addition, Fig.
\ref{fig:building-sv-mv} highlights how multi-view training
can help the CNN to segment challenging buildings such as
residential buildings which are often occluded by trees,
roofs made of highly reflective surfaces, and small
buildings. With respect to segmentation of roads, parking
lots pose a difficult challenge because their shapes and
spectral signatures are very similar to those of true
roads. However, multi-view training is able to learn from
the differences caused by the absence and presence of
vehicles in images captured on different dates, and this is
illustrated in Fig. \ref{fig:road-sv-mv}.
\begin{table*}[h]
\setlength{\tabcolsep}{0.07cm}
\begin{center}
\begin{tabular}{|M{1.5cm}|M{0.25\linewidth}|M{0.25\linewidth}|M{0.25\linewidth}|}
\hline
Single-View Training & \raisebox{-0.5\height}{\includegraphics[width=1\linewidth, height = 0.8\linewidth]{road-sv-eg1.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 0.8\linewidth]{road-sv-eg2.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 0.8\linewidth]{road-sv-eg3.jpg}}\\
\hline
\hline
Multi-View Training & \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 0.8\linewidth]{road-mv-eg1.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 0.8\linewidth]{road-mv-eg2.jpg}}
& \raisebox{-.5\height}{\includegraphics[width=1\linewidth, height = 0.8\linewidth]{road-mv-eg3.jpg}}\\
\hline
\end{tabular}
\end{center}
\captionof{figure}{Examples illustrating how multi-view
training helps to distinguish parking lots from true
roads. Predicted road labels are marked in
\textcolor{magenta}{magenta}}
\label{fig:road-sv-mv}
\end{table*}
\begin{figure*}[htbp!]
\centering
\subfloat[Ortho View]
{
\includegraphics[width=0.4\linewidth, height = 0.4\linewidth]{example_pred_9_img.jpg}
}
\subfloat[Predicted Labels]
{
\includegraphics[width=0.4\linewidth, height = 0.4\linewidth]{example_pred_9.jpg}
}\\
\subfloat[Ortho View]
{
\includegraphics[width=0.4\linewidth, height = 0.4\linewidth]{example_pred_7_img.jpg}
}
\subfloat[Predicted Labels]
{
\includegraphics[width=0.4\linewidth, height = 0.4\linewidth]{example_pred_7.jpg}
}\\
\caption{Examples of orthorectified images and semantic
labels output by our pipeline. Buildings are marked in
\textcolor{blue}{blue} and roads are marked in
\textcolor{magenta}{magenta}.}
\label{fig:qual_eval_2}
\end{figure*}
\begin{figure*}[htbp!]
\centering
\subfloat[Ortho View]
{
\includegraphics[width=0.4\linewidth, height = 0.25\linewidth]{example_pred_8_img.jpg}
}
\subfloat[Predicted Labels]
{
\includegraphics[width=0.4\linewidth, height = 0.25\linewidth]{example_pred_8.jpg}
}\\
\subfloat[Ortho View]
{
\includegraphics[width=0.4\linewidth, height = 0.25\linewidth]{example_pred_6_img.jpg}
}
\subfloat[Predicted Labels]
{
\includegraphics[width=0.4\linewidth, height = 0.25\linewidth]{example_pred_6.jpg}
}\\
\caption{Examples of orthorectified images and semantic
labels output by our pipeline. Buildings are marked in
\textcolor{blue}{blue} and roads are marked in
\textcolor{magenta}{magenta}.}
\label{fig:qual_eval_3}
\end{figure*}
\section{Conclusions}
\label{sec:conclusion}
We have presented a novel multi-view training paradigm that
significantly improves the accuracy of semantic labeling
over large geographic areas. The proposed approach
intelligently exploits information from multi-view and
multi-date images to provide robustness against noise in the
training labels. Our approach also speeds up inference, with
minimal increase in the GPU memory
requirements. Additionally, we have demonstrated that it is
possible to use OSM training data to reliably segment
large-area geographic regions and off-nadir satellite images
without any human supervision. While we have focused on
end-to-end automatic labeling of geographic areas, the ideas
put forth in this study can be incorporated into other
multi-view semantic-segmentation applications. Our research
opens up exciting possibilities for multi-view training in
related deep-learning tasks such as object detection and
panoptic segmentation.
With respect to semantic segmentation, one possible
direction of future research is to design an architecture
for multi-stage fusion of information from multiple
views. More precisely, the features from different views
could be combined at multiple layers of a CNN to yield
possible improvements in accuracy. On a related note, one could also conduct a study to determine which layers of the SV CNN are influenced by the multi-view loss. Another exciting
possibility would be to develop a multi-view framework for
off-nadir images. This would require the use of lookup
arrays to map between the pixels (in different images)
that correspond to the same world point, and to correctly
backpropagate gradients. Yet another research direction would be
to create normalized DSMs and input them to the multi-view
CNN framework. With respect to large-area image alignment
and DSM creation, it might be advantageous to investigate
the use of a second stage of alignment to resolve the generally small
alignment differences between neighboring tiles. In
addition, it should be possible to model the
errors in the image-alignment and stereo-matching
algorithms and subsequently use these models to
construct more accurate DSMs. We plan to use our
end-to-end automated framework to carry out these studies
as part of future research.
\appendices
\section{Alignment of Full-Sized Satellite Images}
\subsection{Tiling}
The WorldView-3 images we have used in this study are
typically $43008 \times 38000$ pixels in size and cover a
ground area of 147 $km^2$. In general,
images of this size must be broken into {\em image patches},
with each image patch covering a {\em tile} on the ground.
This is made necessary by the following three
considerations:
\begin{itemize}
\item As we describe in Appendix \ref{sec:image_align}, the
corrections to the camera model calculated for
high-precision alignment of the images with one another
cannot be assumed to be constant across an entire
satellite image.
\item
The image alignment algorithms usually start with the
extraction of {\em tie points} from the images. Tie points
are the corresponding key points (like those yielded by
interest operators like SIFT and SURF) in pairs of images.
The computational effort required for extracting the tie
points goes up quadratically as the size of the images
increases since the key points must be compared across
larger images.
\item
The run-time memory requirements of modern stereo matching
algorithms, such as those based on semi-global matching
(SGM), can become much too onerous for full sized satellite
images.
\end{itemize}
Based on our experience with WV3 images, we divide the
geographic area into overlapping tiles where each tile
consists of a central 1 $km^2$ main area and a 300 m overlap
with each of the four adjoining tiles. This makes for a
total area of 2.56 $km^2$ for each tile.\footnote{A more
accurate way to refer to a tile would be that it exists on
a flat plane that is tangential to the WGS ellipsoid model
of the earth. This definition does not depend on whether
the underlying terrain is flat or hilly.} The image
patches that cover tiles of this size are typically of size
$5300 \times 5300$ in pixels.
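As a quick sanity check of these numbers, the tile geometry
translates into image-patch sizes as follows. The ground
sample distance used below is an assumed nominal value for
WV3 panchromatic imagery and is not stated above.
\begin{verbatim}
core_km, overlap_km = 1.0, 0.3
side_km = core_km + 2 * overlap_km       # 1.6 km per side
tile_area_km2 = side_km ** 2             # 2.56 km^2
gsd_m = 0.3                              # assumed nominal WV3 pan GSD
patch_pixels = side_km * 1000 / gsd_m    # ~5333, i.e. ~5300 x 5300
\end{verbatim}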
Note that the notion of a tile is used only for aligning the
images and for constructing a DSM. This DSM is needed to
orthorectify the satellite images in order to bring them
into correspondence with OSM and with one another. For the
CNN-based machine-learning part of the system, we work
directly with the whole images and with the OSM for the
entire geographic area of interest.
\subsection {Image-to-Image Alignment}
\label{sec:image_align}
Aligning the satellite images that cover a geographic area
means that if we were to project a hypothetical ground point
into each of the images, the pixels thus obtained would
correspond to their actually recorded positions with
sub-pixel precision. If this exercise were to be carried
out for WV3 images without first aligning them, the
projections in each of the satellite images could be off by
as much as 7 pixels from their true locations.
One needs a camera model for the images in order to
construct such projections and, for the case of WV3
images, the camera model comes in the form of rational
polynomial coefficients (RPCs).
It was shown by Grodecki and Dial \cite{Grodecki} that the
residual errors in the RPCs, on account of small
uncertainties in the measurements related to the position
and the attitude of a satellite, can be corrected by adding a
{\em constant bias} to the projected pixel coordinates of
the ground points, provided the area of interest on the
ground is not too large. We refer to this as the {\em
constant bias assumption} for satellite image
alignment. We have tested the constant bias assumption
mentioned above and verified its validity for image patches
of size $5300 \times 5300$ (in pixels) for the WV3
images. Fig. \ref{fig:var_bias} presents evidence that the
constant bias assumption fails for a full-sized satellite
image.
\begin{figure*}
\subfloat[]
{
\includegraphics[width=1\linewidth,height=0.4\linewidth]{using_100_bias.jpg}
}\\%\vspace{0.07in}
\subfloat[]
{
\includegraphics[width=1\linewidth,height=0.4\linewidth]{using_local_bias.jpg}
}
\caption[An example to show why one cannot use a constant
bias correction for a full-sized image]{An example to show
why one cannot use a constant bias correction for a
full-sized image. At top is the ortho view of a portion
of a pairwise point cloud for the constant bias
assumption. At bottom is the same for tile-based bias
corrections. The points have been colored using the
color from the images}
\label{fig:var_bias}
\end{figure*}
\subsection{Tile-Based Alignment of Large-Area Satellite Images}
\label{sec:tiept_switch}
In order to operate on a large-area basis, we had to extend
the standard approach of bundle adjustment that is used to
align images. The standard approach consists of: (1)
extracting the key points using an operator like SIFT/SURF;
(2) establishing correspondences between the key points in
pairs of images on the basis of the similarity of their
descriptor vectors; (3) using RANSAC to reject the outliers
in the set of correspondences (we refer to the surviving
correspondences as the {\em pairwise tie points}); and (4)
estimating the optimum bias corrections for each of the
images by the minimization of a reprojection-error based
cost function.
We have extended the standard approach by: (1) augmenting
the {\em pairwise tie points} with {\em multi-image tie
points}; and (2) adding an $L_2$ regularizer to the
reprojection-error based cost function. In what follows, we
start with the need for {\em multi-image tie points}.
Our experience has shown that doing bundle adjustment with
the usual pairwise tie points does not yield satisfactory
results when the sun angle is just above the horizon or when
there is significant snow-cover on the ground. Under these
conditions, the decision thresholds one normally uses for
extracting the key points from the images often yield an
inadequate number of key points. And if one were to lower the
decision thresholds, while that does increase the number of
key points, it also significantly increases the number of
false correspondences between them.
In such images, one gets better results overall by
extracting what we refer to as {\em multi-image tie points}.
The main idea in multi-image tie-point extraction is to
construct a graph of the key points detected with lower
decision thresholds and subsequently identify the key
points that correspond to the same putative world point
across multiple images, as opposed to just two
images.\footnote{The multi-image tie-point extraction module
was developed by Dr. Tanmay Prakash.} Unfortunately,
multi-image tie points are computationally more expensive
than pairwise tie points --- roughly three times more
expensive. Therefore, they must be used only when needed.
We have developed a ``detector'' that automatically
identifies the tiles that need the extra robustness provided
by the multi-image tie points. The detector is based on the
rationale that the larger the extent to which each image
shares key-point correspondences with {\em all} the other images,
the more accurate the alignment is likely to be. This
rationale is implemented by constructing an attributed graph
in which each vertex stands for an image and each edge for
the number of key-point correspondences between a pair of
images. If we denote the largest connected component in this graph by
$C$, the extent to which each node in $C$ is connected with
all the other nodes in the same component can then be measured
by the following ``density'':
\begin{equation}
D(C) = \frac{2|E_c|}{|C|(|C|-1)}
\end{equation}
where $|E_c|$ and $|C|$ are the total numbers of edges and
vertices in $C$, respectively. The detection
for the need for multi-image tie points is carried out by
first applying a threshold to $|C|$ and then to $D(C)$. This
detection algorithm is described in detail in
Fig. \ref{alg:image_align}. The algorithm is motivated by
the observation that a dense tie point graph based on
pairwise tie points is indicative of good alignment.
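A compact sketch of this detector is shown below, using
NetworkX for the graph bookkeeping. The threshold values
shown are placeholders for the user-specified thresholds of
Fig. \ref{alg:image_align}, not the values used in our
implementation.
\begin{verbatim}
import networkx as nx

def alignment_quality_ok(tie_graph, S, k=0.8, D_min=0.5):
    # tie_graph: one vertex per image patch, one edge per pair
    # of patches that share pairwise tie points; S = number of
    # image patches to be aligned
    C = max(nx.connected_components(tie_graph), key=len)
    n = len(C)
    m = tie_graph.subgraph(C).number_of_edges()
    if n < k * S:        # too few patches were aligned at all
        return False
    if m == n - 1:       # C is a tree: pairs may not co-align
        return False
    D = 2 * m / (n * (n - 1))
    return D >= D_min    # else rerun with multi-image tie points
\end{verbatim}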
After the tie points --- pairwise or multi-image --- have
been identified in all the image patches for a given tile,
we apply sparse bundle adjustment (SBA) to them to align the
image patches. The implementation of SBA includes an
$L_2$-regularization term that is added to the
reprojection-error based cost function because it
significantly increases the overall global accuracy of the
alignment. The only remaining issue with regard to the
alignment of the images is inter-tile alignment which we
discuss in Appendix \ref{sec:merge_dsm}.
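For concreteness, one plausible form of the regularized
objective is
\begin{equation*}
\min_{\{\boldsymbol{\delta}_i\}, \{\mathbf{X}_k\}} \
\sum_{k} \sum_{i \in V(k)}
\big\| \text{Proj}_{\text{RPC}_i}(\mathbf{X}_k) +
\boldsymbol{\delta}_i - \mathbf{x}_{ik} \big\|^2
\ + \ \lambda \sum_{i} \| \boldsymbol{\delta}_i \|^2
\end{equation*}
where $\mathbf{X}_k$ is the estimated world point for tie
point $k$, $V(k)$ is the set of image patches in which it is
observed, $\mathbf{x}_{ik}$ is its measured pixel location in
image patch $i$, $\boldsymbol{\delta}_i = (\Delta s_i, \Delta
l_i)$ is the constant bias correction for image patch $i$,
and $\lambda$ controls the strength of the $L_2$
regularizer. We stress that this is an illustrative form that
is consistent with the constant bias assumption; the exact
objective used in our implementation may differ in its
parameterization.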
\begin{figure*}
\begin{boxedalgorithmic}
\STATE \hspace{0.2\columnwidth} \large{An Algorithm to Detect the Need for Multi-Image Tie Points}
\par\noindent\rule[0.5\columnsep]{0.95\columnwidth}{0.01pt}
\normalsize
\linespread{1.2}\selectfont
\STATE $S$ -- Total number of image patches to be aligned\\
\textbf{Step 1: Run alignment using pairwise tie points}\\
$T_p$ -- Tie-point graph returned by alignment using pairwise tie points\\
$V$ -- Set of all image patches $\{v_i\}$. Each image patch is a vertex of $T_p$\\
$E$ -- Set of all edges $\{e_{ij}\}$. $e_{ij}$ is an edge between the vertices $v_i$ and $v_j$ with a weight equal to the number of tie points between $v_i$ and $v_j$\\
$k$, $D_{min}$ -- User-specified thresholds\\
$AQ$ -- Flag set to True if alignment is of satisfactory quality. Otherwise set to False.\\
\textbf{Evaluate alignment quality}
\begin{enumerate}
\item Find the largest connected component $C$ of $T_p$. \\$|C|$ is the number of image patches in $C$. $|E_c|$ is the number of edges in $C$.
\item Check how many image patches have been aligned.\\ If $|C| < k \cdot S$, where $0 < k < 1$, $AQ \gets \textit{False}$. Return AQ
\item Check if $C$ is a tree, i.e., if $|E_c| == |C| - 1$,
$AQ \gets \textit{False}$. Return AQ \\ Explanation --
The pushbroom camera model can be closely approximated
by an affine camera model, i.e., the camera rays are
almost parallel. Therefore, if $C$ is a tree, then for
each pair of image patches, the two image patches might
be well aligned with each other. However, distinct pairs
might not be aligned with one another.
\item Check the sparsity of $C$. $D(C)$ is the density of $C$. $D(C) =
\frac{2|E_c|}{|C|(|C|-1)}$\\
If $D(C) < D_{min}$, $AQ \gets \textit{False}$. Return AQ
\end{enumerate}
\textbf{Step 2: If $AQ == \textit{False}$, rerun alignment using multi-image tie points}
\end{boxedalgorithmic}
\caption{An algorithm to detect the need for multi-image tie
points}
\label{alg:image_align}
\end{figure*}
\section{Creating a Tile-Level DSM}
\subsection{Stereo Matching}
\label{sec:pairwise_dsm}
As a first step towards constructing a DSM, stereo matching
is carried out in a pairwise manner. Similar to the study
reported in \cite{s2p}, pairs of images are selected based
on heuristics such as the difference in view angles,
difference in sun angles, time of acquisition, absolute view
angle, etc. In addition, images are selected to cover as
wide an azimuth-angle distribution as possible. We err on
the side of caution and select a minimum of 40 and a maximum
of 80 pairs per tile. For each selected pair, the images
are rectified using the approach described by the study in
\cite{oh2010piecewise}.
For stereo matching, we use the hierarchical tSGM algorithm
\cite{rothermel2012sure} with some enhancements to improve
matching accuracy and speed. Specifically, we modify the
penalty parameters in the matching cost function as
described by the work in \cite{zbontar2016stereo}. We
noticed that this improves accuracy near the edges of
elevated structures. The second improvement is to use the
SRTM (Shuttle Radar Topography Mission) DEM (Digital
Elevation Model) \cite{srtm} that provides coarse terrain
elevation information at a low resolution (30 m). This DEM
does not contain the heights of buildings. We use the DEM to
better initialize the disparity search range for every point
in the disparity volume through a novel procedure that we
refer to as ``DEM-Sculpting''. Additional details regarding ``DEM-Sculpting'' can be found in \cite{demsculpt}.
This improves accuracy and speeds up stereo
matching. Additionally we use a guided bilateral filter for
post-processing. With these additions, the matching
algorithm is able to handle varying landscapes across a
large area.
\subsection{Pairwise Point-Cloud Creation and Fusion}
\label{sec:dsm_fusion}
The disparity maps and corrected RPCs are then used to
construct pairwise point clouds. Since the images have
already been aligned, the corresponding point clouds are
also aligned and can be fused without any further 3D
alignment. At each grid point in a tile, the median of the
top $Y$ values is retained as the height at that point,
where $Y$ is an empirically chosen parameter. Subsequently,
median filtering and morphological and boundary based
hole-filling techniques are applied.\footnote{The
point-cloud generation and fusion modules as used in our
framework were developed by John Papadakis from Applied
Research Associates (ARA).}
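A minimal sketch of the fusion rule at a single grid point
follows; the value of $Y$ shown is illustrative, since the
actual value is chosen empirically.
\begin{verbatim}
import numpy as np

def fuse_heights(candidates, Y=5):
    # candidates: all elevations from the pairwise point clouds
    # that fall into this grid cell
    if candidates.size == 0:
        return np.nan             # a hole, filled in post-processing
    top = np.sort(candidates)[-Y:]
    return float(np.median(top))  # median of the top-Y values
\end{verbatim}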
\subsection{Merging Tile-Level DSMs}
\label{sec:merge_dsm}
On account of the high absolute alignment precision achieved
by using the $L_2$ regularization term in the bundle
adjustment logic, our experience shows that nothing further
needs to be done for merging the tile-level DSMs into a
larger DSM. To elaborate, the statistics of the differences
in the elevations at the tile boundaries are shown in Table
\ref{tbl:tile_edges} in Appendix \ref{sec:inter-tile}. We
see that the median absolute elevation difference at the
tile boundaries is less than 0.5 m -- an error that is much
too small to introduce noticeable errors in
orthorectification.
We crop out the center 1 $km^2$ region from each DSM tile
and place it in the coordinate frame of the larger DSM. This
sidesteps the need to resolve any noise-induced variations
in the overlapping regions.
\section{Generating Training Data using Pansharpened Images
and OSM}
\subsection{Pansharpening and Orthorectification}
Using the fused DSM as the elevation map, the system is now
ready for orthorectifying the satellite images
that cover the geographic area. Orthorectification maps the
pixel values in the images to their corresponding ground
points in the geographic area of
interest. What the system actually orthorectifies are the
pansharpened versions of the images --- these being the
highest resolution panchromatic images (meaning grayscale
images) that are assigned multispectral values from the
lower resolution multispectral data.
Orthorectification of an off-nadir image can lead to
``nodata'' regions on the ground if the pixels corresponding
to those regions are occluded in the image by tall
structures. Our system automatically delineates such regions
with a mask that is subsequently used during training of the
CNN to prevent gradients at those points from being
backpropagated. Each orthorectified image is resampled at a
resolution of 0.5 m. More details on orthorectification can
be found in Appendix \ref{sec:gwarp++}.
\subsection {Aligning OSM with Orthorectified Images}
\label{sec:osm_align_train_data_gen}
This module addresses the noise arising from any
misalignments between the OSM and the orthorectified
images. Our framework incorporates the following two
strategies to align the OSM with the orthorectified images:
\begin{enumerate}
\item Using Buildings: First, the system subtracts the DEM
from the constructed DSM to extract coarse building
footprints. Subsequently, these building footprints are
used to align the orthorectified images with the OSM using
Normalized Cross Correlation (NCC). This strategy has
proved useful in areas with inadequate OSM road labels.
\item Using Roads: First, the system uses the ``Red Edge''
and ``Coastal'' bands to calculate the Non-Homogeneous
Feature Difference (NHFD) \cite{nhfd}, \cite{nhfd_2} for
each point in the orthorectified image and subsequently
applies thresholds to the NHFD values to detect coarse
road footprints. The NHFD is calculated using the formula:
\begin{equation}
\text{NHFD} = \frac{\text{Red Edge} - \text{Coastal}}{\text{Red Edge} + \text{Coastal}}
\end{equation}
Subsequently, these (obviously noisy) road footprints are
aligned with the OSM roads using NCC; a short sketch of the
NHFD computation is given after this list. The system uses
this strategy in rural areas that may not contain the
buildings needed for the previous approach to work.
\end{enumerate}
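As promised above, a short sketch of the NHFD computation
follows. The small constant guarding against division by
zero and the suggested form of the thresholding are
illustrative assumptions; the thresholds actually used are
data-dependent.
\begin{verbatim}
import numpy as np

def nhfd(red_edge, coastal, eps=1e-6):
    # per-pixel NHFD from the "Red Edge" and "Coastal" bands
    return (red_edge - coastal) / (red_edge + coastal + eps)

# coarse road footprints are then obtained by applying
# data-dependent thresholds to the NHFD image, e.g.
# road_mask = (nhfd_img > t_low) & (nhfd_img < t_high)
\end{verbatim}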
After alignment, the OSM vectors are converted to raster
format with the same resolution as in the orthorectified
images. Thus there is a label for each geographic point in
the orthorectified images. The OSM roads are thickened to
have a constant width of 8 m.
Fig. \ref{fig:osm_align} shows misaligned and aligned OSM
vectors. It should be noted that some residual
alignment error does persist. We plan to improve this module
by aligning each building/road separately.
\begin{figure*}
\centering{}
\subfloat[]
{\centering
\includegraphics[width=0.4\linewidth,height=0.25\linewidth]{unaligned_osm_2.jpg}
}
\subfloat[]
{
\centering
\includegraphics[width=0.4\linewidth,height=0.25\linewidth]{aligned_osm_2.jpg}
}
\caption{This figure shows typical results obtained by
aligning the orthorectified images with OSM. What is
shown in \textcolor{red}{red} at left are the unaligned
OSM vectors, and what is in \textcolor{blue}{blue} at
right are the aligned versions of the same.}
\label{fig:osm_align}
\end{figure*}
\section{True Orthorectification Using gwarp++}
\label{sec:gwarp++}
We can orthorectify the pansharpened images using the fused
DSM as the elevation map. Orthorectification is the process
of mapping the pixel values in the images to their
corresponding points in the geographic area of interest.
There is an important distinction to be made between
orthorectification and true orthorectification. If a LiDAR
point cloud or DSM is not available, the common practice is
to orthorectify images by using a DEM as the source of
elevation information. Since a DEM does not contain the
heights of elevated structures (buildings, trees, etc.),
such an orthorectified view will not represent a true nadir
view of the ground. For instance, the vertical walls of
buildings will be visible in such a view. To create a true
ortho view, we need to take the heights of the elevated
structures into account. While doing so, we need to detect
those portions of the scene that are occluded by taller
structures. Obviously these occluded portions will vary
depending upon the satellite view angle.
To the best of our knowledge, there are no open-source
utilities to create true ortho images using RPCs and DSMs at
this time. Therefore, we have developed a utility, which we
have named ``gwarp++'', to create full-sized true ortho
images quickly and efficiently. Interested researchers can
download the ``gwarp++'' software from the link at
\cite{gwarp}. We will now provide a brief overview of
``gwarp++''.
\begin{figure*}
\begin{boxedalgorithmic}
\STATE \hspace{0.15\columnwidth} \large{An Algorithmic
Description of ``gwarp++'' for True Orthorectification}
\par\noindent\rule[0.5\columnsep]{0.95\columnwidth}{0.01pt}
\normalsize
\linespread{1.25}\selectfont
\STATE $\phi$ -- Latitude, $\lambda$ -- Longitude, $h$ -- height\\
$\phi_{\text{step}}$ -- Latitude step size,
$\lambda_{\text{step}}$ -- Longitude step size,
$h_{\text{step}}$ -- Height step size\\
Image$_{\text{Patch}}$ -- Off-Nadir image patch\\
$L$ -- Length of Image$_{\text{Patch}}$ in pixels, $W$ -- Width of Image$_{\text{Patch}}$ in pixels\\
$LT \gets Zeros(W,L)$ \COMMENT{// Initialize lookup table to zeros}\\
OutArray -- Output array for the orthorectified grid\\
\textbf{Step 1: Find the extents of the tile spanned by the image patch}\\
$(\phi_{\text{min}}, \lambda_{\text{max}})$ -- Top-left corner of the tile\\
$(\phi_{\text{max}}, \lambda_{\text{min}})$ -- Bottom-right corner of the tile\\
\textbf{Step 2: Project points into Image$_{\text{Patch}}$ and update $LT$}\\
\FOR{$\phi = \phi_{\text{min}} \ ; \ \phi \ \leq \phi_{\text{max}} \ ; \ \phi = \phi + \phi_{\text{step}}$}
{
\FOR{$\lambda = \lambda_{\text{max}} \ ; \ \lambda \ \geq \lambda_{\text{min}} \ ; \ \lambda \ = \lambda - \lambda_{\text{step}}$}
\STATE{
$h \gets \text{DSM}(\phi, \lambda)$\\
$h_{\text{ground}} \gets \text{DEM}(\phi, \lambda)$\\
\FOR{$h' = \ h \ ; \ h' \geq h_{\text{ground}} \ ; \ h' = h' - h_{\text{step}}$}
\STATE{
$(s,l) \gets$ Proj$_{\text{RPC}}(\phi, \lambda, h')$ \COMMENT{// Proj$_{\text{RPC}}$ denotes the RPC equations used to project the 3D point into the image}\\
\IF{$LT(s,l) < h'$} \STATE{ $LT(s,l) \gets h'$ }\ENDIF
}\ENDFOR
}\ENDFOR
}\ENDFOR\\
\textbf{Step 3: Create OutArray with a second pass over the grid}\\
\FOR{$\phi = \phi_{\text{min}} \ ; \ \phi \ \leq
\phi_{\text{max}} \ ; \ \phi = \phi +
\phi_{\text{step}}$} {
\FOR{$\lambda = \lambda_{\text{max}} \ ; \ \lambda \
\geq \lambda_{\text{min}} \ ; \ \lambda \ = \lambda -
\lambda_{\text{step}}$}\STATE{
$h \gets \text{DSM}(\phi, \lambda)$\\
$(s,l) \gets$ Proj$_{\text{RPC}}(\phi, \lambda, h)$\\
\IF{$LT(s,l) > h + \gamma$}
\STATE{OutArray$(\phi, \lambda) \gets $ NODATA}
\ELSE
\STATE{OutArray$(\phi, \lambda) \gets $ Image$_{\text{Patch}}(s,l)$ \COMMENT{// Can also interpolate values}
}\ENDIF
}\ENDFOR
}\ENDFOR
\end{boxedalgorithmic}
\caption{An algorithmic description of ``gwarp++'' for true
orthorectification}
\label{alg:gwarp++}
\end{figure*}
We will first discuss the case of orthorectifying an image
patch (that belongs to a single tile) with the help of a
DSM. Consider two points W$_1 = ( \phi_1, \lambda_1, h_1 )$
and W$_2 = ( \phi_2, \lambda_2, h_2 )$ that both project to
the same pixel coordinates in the image patch. $\phi$,
$\lambda$ and $h$ denote the latitude, longitude and height
coordinates respectively. If $h_2 > h_1$, it means that
W$_1$ is occluded by W$_2$. This is the core idea that
``gwarp++'' uses to detect the occluded ``nodata'' points.
Now, consider a single world point W $=(\phi, \lambda, h )$,
where $h$ is the height value from the DSM. Let
$h_{\text{ground}}$ be the corresponding height value in the
DEM. The DEM gives us a rough estimate of the height of the
ground. It is possible to use more sophisticated techniques,
such as the one described by the study in \cite{csf}, to
directly estimate the elevation of the ground from the
DSM. The DEM is sufficient for our application. Instead of
just projecting W into the image patch, ``gwarp++''
projects a set of points
\begin{align*}
\mathcal{W'} &= \big\{ (\phi, \lambda, h') \big\} \\
\forall \ h' &\in [\, h, \ h - h_{\text{step}}, \ h -
2 \cdot h_{\text{step}}, \ldots, \ h_{\text{ground}} \,]
\end{align*}
where $h_{\text{step}}$ is a user-defined step size.
$\mathcal{W'}$ is therefore a set of points sampled along
the vertical line from W to the ground. To understand the
motivation for doing this, it might help to consider the
case when W is the corner of the roof of a building. In that
case, $\mathcal{W'}$ is the set of points along the
corresponding vertical building edge from W to the
ground. If we apply this procedure to all the points on the
roof of a building, we will end up projecting the entire
building into the image patch.
We now describe the implementation of ``gwarp++'' below. The
algorithm is summarized in Fig. \ref{alg:gwarp++}, and a
simplified code sketch is given after the list.
\begin{enumerate}
\item ``gwarp++'' starts out by dividing the tile into a 2D
grid of world points. The grid is 2D in the sense that
only the longitude and the latitude coordinates are
considered. The extents of this grid can be determined in
an iterative fashion by using the RPC equations and the
pixel coordinates of the corners of the image patch. The
distance between the points of this grid is a user-defined
parameter.
\item Using the height values from the DSM, for each point
in the grid, ``gwarp++'' projects a set of points into
the image patch as explained above. For each pixel in the
image patch, a lookup table ``$LT$'' stores the maximum
height, with the maximum being computed across all the
points that project into this pixel. This procedure is
repeated for all the points in the 2D grid.
\item At this stage, for each point J in the 2D grid, we
know three things:
\begin{itemize}
\item $h_J$ -- The DSM height value at J
\item $(s,l)$ -- The pixel into which J projects after
assigning J an elevation value of $h_J$
\item $LT(s,l)$ -- The maximum height of a world point
that projects into $(s,l)$
\end{itemize}
If $LT(s,l) \ > h_J$, we can conclude that J is occluded by
some other world point that has a height value of
$LT(s,l)$.
\item Therefore, using a second pass over all the points of
the 2D grid, ``gwarp++'' marks the occluded points
with a ``NODATA'' value. In practice, to account for
quantization errors and the noise in the DSM,
``gwarp++'' checks if $LT(s,l) \ > h_J \ + \ \gamma$
where $\gamma$ is chosen appropriately.
\end{enumerate}
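As promised above, the following is a simplified, dense
re-implementation of the two-pass idea for a single image
patch. For brevity, the DSM grid is indexed directly, the
image is assumed to be single-band, and the RPC projection is
a placeholder callable that maps a grid cell and a height to
integer pixel coordinates; the actual ``gwarp++'' utility is
written in C++ and is far more careful about sampling and
interpolation.
\begin{verbatim}
import numpy as np

def true_ortho(dsm, dem, img, rpc_project, h_step=0.5, gamma=1.0):
    H, W = dsm.shape
    LT = np.zeros(img.shape[:2])      # max height per image pixel
    out = np.full((H, W), np.nan)     # NaN plays the role of NODATA
    # Pass 1: project each grid point, and the points on the
    # vertical line below it, into the image and update LT
    for i in range(H):
        for j in range(W):
            for h in np.arange(dsm[i, j], dem[i, j], -h_step):
                s, l = rpc_project(i, j, h)
                if 0 <= l < LT.shape[0] and 0 <= s < LT.shape[1]:
                    LT[l, s] = max(LT[l, s], h)
    # Pass 2: a grid point is occluded if a strictly taller
    # point projects into the same pixel (gamma absorbs DSM noise)
    for i in range(H):
        for j in range(W):
            s, l = rpc_project(i, j, dsm[i, j])
            if 0 <= l < LT.shape[0] and 0 <= s < LT.shape[1]:
                if LT[l, s] <= dsm[i, j] + gamma:
                    out[i, j] = img[l, s]
    return out
\end{verbatim}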
To orthorectify the full-sized image, we orthorectify each
image patch using its corrected RPCs and the large-area
DSM. The orthorectified image patches are then mosaiced into
a full-sized orthorectified image during which the
overlapping portions between the image patches are
discarded.
``gwarp++'' is written in C++. It has the nice property
of being massively parallel since the projection for each
point can be carried out independently and since each tile
can be processed independently. This parallelism is
exploited at both stages. For each image patch, OpenMP
\cite{openmp} is used to process the points in parallel. And
the different image patches are themselves orthorectified in
parallel by different virtual machines running on a
cloud-based framework.
For our application, each full-sized orthorectified image is
resampled at a resolution of 0.5 m. Furthermore, the occluded
points are delineated with a mask that is subsequently used
during training of the CNN to prevent gradients at those
points from being backpropagated.
\subsection{Accuracy of ``gwarp++''}
We conclude our discussion on true orthorectification with a
few remarks on the accuracy of the orthorectified images
produced by ``gwarp++''.
\textbf{3D vs 2.5D:} For each point W, ``gwarp++''
considers points along the vertical line from W to the
ground. This is not a good strategy for buildings that
possess more exotic shapes such as spherical water towers or
buildings with walls that slope inwards. In these cases
``gwarp++'' can incorrectly mark some points as occluded
points. The only way to handle such cases is by using a 3D
point cloud instead of a 2.5D DSM, which is beyond the scope
of our discussion.
\textbf{Error Propagation:} Errors in the RPCs and errors in
the DSM will translate into errors in the orthorectified
images. However, in our application, these errors are
largely drowned out by the errors in the OSM
labels. Nevertheless, it might be useful to study how these
errors propagate, which we leave for future work.
\section{Quantitative Evaluation of Alignment}
\label{sec:align_qual}
\subsection{Image-to-Image Alignment}
We use multiple metrics to evaluate the quality of
alignment. Table \ref{table:align_reproj} shows the average
reprojection error across tiles (and images) for both
regions, before and after alignment. Average reprojection
error goes down from 5-7 pixels to $\text{0.3}$ pixels for
both regions.
\begin{table}[H]
\setlength{\tabcolsep}{4pt}
\begin{center}
\caption{Average reprojection error in pixels across tiles and images in Ohio and California}
\label{table:align_reproj}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c|c|c|c|}
\hline
Region & & Mean & Variance \\
\hline\hline
\multirow{2}{4em}{Ohio}
& Unaligned & 6.70 & 0.180 \\
\cline{2-4}
& Aligned & {\bf 0.30} & 0.003 \\
\hline
\multirow{2}{4em}{California}
& Unaligned & 5.71 & 0.280 \\
\cline{2-4}
& Aligned & {\bf 0.32} & 0.001 \\
\hline
\end{tabular}
\end{center}
\end{table}
Since pushbroom sensors can be closely approximated by
affine cameras with parallel rays, reprojection error alone
does not give the complete picture. For our second metric,
we manually annotate tie points in 31 out of 32 images over
a 1 km$^2$ region in Ohio and in all 32 images over a 2
km$^2$ region in California. Within these regions, we
measure the pairwise alignment errors for all possible pairs
of images and report them in Table
\ref{table:align_pairwise}. One can observe that most of the
pairs are aligned with subpixel error. This is a much harder
metric than the mean reprojection error. It is important to
use this metric especially since stereo matching requires
subpixel alignment accuracy. The good quality of alignment
across the large region is also reflected in the high
quality of the DSM and the semantic labeling metrics.
\begin{table}[H]
\begin{center}
\caption{Pairwise alignment error statistics using manually annotated groundtruth for Ohio and California}
\label{table:align_pairwise}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c|p{2cm}|p{2cm}|p{1.5cm}|}
\hline
Region & No. of pairs with error $<1$ pixel & No. of pairs with error $<2$ pixels & Total No. of pairs \\
\hline\hline
Ohio & 417 & 455 & 465 \\
\hline
California & 484 & 496 & 496 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Inter-Tile Alignment}
\label{sec:inter-tile}
Due to the high absolute alignment precision achieved by
using the $L_2$ regularization term in the bundle-adjustment
logic, it turns out that the tile-level DSMs are
well aligned with one another. Table \ref{tbl:tile_edges}
shows the statistics of the differences in the elevations at
the tile boundaries. It can be seen that the median absolute
elevation difference at the tile boundaries is less than 0.5
m -- an error that is much too small to introduce noticeable
errors in orthorectification.
\begin{table}[H]
\caption{Median of the absolute differences in elevation, and median of the
rms value of the differences in elevation at the tile boundaries}
\label{tbl:tile_edges}
\renewcommand{\arraystretch}{1.5}
\begin{center}
\begin{tabular}{|c|p{2.8cm}|p{3cm}|}
\hline
Region & Median absolute Z diff & Median RMS of Z diff \\
\hline\hline
Ohio & 0.42 m & 0.72 m \\
\hline
California & 0.47 m & 0.79 m \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{A Distributed Workflow for Stereo Matching and DSM
Creation}
\label{sec:dist_stereo_matching}
Creating DSMs for a 100 km$^2$ region is the most
computationally-intensive and the slowest module in the
framework shown in Fig.~\ref{fig:overview}. It is also the
module that is most likely to cause ``out-of-memory''
errors. Therefore, we need to carefully choose some specific
design attributes for this module, which we will highlight
in this section.
We can leverage the inherent parallelism in stereo matching
and in DSM creation by intelligently distributing the tasks
across a cloud computing system. The steps for distributed
stereo matching and DSM creation are enumerated below and
shown in Fig.~\ref{fig:distributed_stereo}.
\begin{enumerate}
\item A captain virtual machine (VM) prepares a list of the
selected stereo pairs of image patches for each tile. This
is done for all the tiles at the beginning. All the tiles
are added to a queue. All the lists are stored on a shared
Network Attached Storage (NAS).
\item The captain sends a message to all the worker VMs to
start. The captain also assumes the role of a worker at
this step.
\item \label{item:step2} For the first tile in the queue,
the workers request a pair of image patches to
process. Safeguards are imposed to ensure that each worker
gets a unique pair.
\item Each worker attempts to create a pairwise point cloud
and subsequently reports the status of its task. Each
worker then requests the next unprocessed stereo pair
for the current tile. Successfully processed stereo pairs
are marked as done.
\item If there are no more unprocessed stereo pairs for this
tile then:
\begin{enumerate}[label=(\roman*)]
\item The current tile is removed from the queue. All the
idle workers except for the captain and the large VMs
move on to the next tile in the queue, i.e., to step
\ref{item:step2}. By a large VM, we mean a VM with more
memory and a larger number of CPUs.
\item All the stereo pairs for which point-cloud creation
failed are processed for a second time by the remaining
workers. Even if processing fails again, these stereo
pairs are marked as done.
\end{enumerate}
\item At this stage, all the selected stereo pairs for the
current tile are marked as done. The large VMs join their
smaller counterparts on the next tile, i.e., at step
\ref{item:step2}. The captain alone starts the process of
fusing the multiple pairwise point clouds into a single
fused DSM for the current tile. After this, the captain
also proceeds to join the other VMs in step
\ref{item:step2}.
\end{enumerate}
\begin{figure*}[htbp!]
\centering
\includegraphics[width=1\linewidth,height=0.5\linewidth]{distributed_stereo.jpg}
\caption{An example to illustrate our distributed
stereo-matching and DSM-creation workflow. In this
example, there are only 2 tiles and 3 selected stereo
pairs for each tile. There are only 3 VMs, a captain, a
small VM and a large VM. T indicates the time
stamp. Notice how at T = 3, two of the VMs have moved
onto Tile 2 whereas the captain stays back to finish
processing Tile 1.}
\label{fig:distributed_stereo}
\end{figure*}
A graphic illustration of the above workflow is shown in
Fig.~\ref{fig:distributed_stereo}. For the sake of clarity,
in this illustration, we assume that there are only 2 tiles
and that there are only 3 selected stereo pairs for each
tile. We also assume that there are only 3 VMs, a captain, a
small VM and a large VM.
\subsection{Advantages of this Distributed Workflow}
\begin{itemize}
\item No VM remains idle except during the last processing
stage of the very last tile.
\item Failed pairs are processed twice to handle
``out-of-memory'' errors.
\item The intensive process of creating a fused DSM is
carried out on the most powerful captain VM.
\item Note that we could have opted to use a simpler
workflow where all the VMs wait for a fused DSM to be
created before proceeding to the next tile. However, our
workflow reduces the processing time by a number of
days.
For an example, assume that there are 10 VMs and 100
tiles. Also assume that each stereo pair takes 20 minutes
to process, that we process 80 pairs per tile and that the
point-cloud fusion takes 60 minutes. If all the workers
waited for a tile-level DSM to be created before moving
on to the next tile, then it would take
$ \frac{(\frac{80 \times 20}{10} + 60)}{60}
\approx $ 3 hours and 40 minutes to finish processing a
single tile. For 100 tiles it would take $\approx$ 15 days
and 6 hours.
Our workflow takes
$ \frac{(\frac{80 \times 20}{10})}{60}
\approx $ 2 hours and 40 minutes for a single tile. This
is because while the captain is fusing the point clouds
for a tile, the other VMs will be processing the next
tile. For 100 tiles it would take $\approx$ 11 days and 2
hours, roughly saving us 4 days of processing time.
\end{itemize}
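The short script below reproduces the arithmetic of the example
above; it encodes only the stated assumptions (10 VMs, 100 tiles, 20
minutes per pair, 80 pairs per tile, 60 minutes of fusion).
\begin{verbatim}
# Back-of-the-envelope timing comparison from the example above.
vms, tiles = 10, 100
pair_min, pairs_per_tile, fusion_min = 20, 80, 60

stereo_min = pairs_per_tile * pair_min / vms      # 160 min per tile

naive_h = (stereo_min + fusion_min) / 60 * tiles  # wait for fusion
ours_h = stereo_min / 60 * tiles                  # fusion overlaps

for label, hours in [("naive", naive_h), ("ours", ours_h)]:
    days, rem = divmod(hours, 24)
    print(label, int(days), "days", round(rem, 1), "hours")
# naive: 15 days 6.7 hours;  ours: 11 days 2.7 hours (~4 days saved)
\end{verbatim}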
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
In this paper, we describe a procedure for automatically explaining logical and perceptual abstractions encoded by individual neurons in deep networks. Prior work in neural network interpretability has found that neurons in models trained for a variety of tasks learn human-interpretable concepts, e.g.\ faces or parts-of-speech, often without explicit supervision \cite{bau2017network,dalvi2019one,dalvi2019neurox,olah2020zoom}.
Yet many existing interpretability methods are limited to ad-hoc explanations based on manual inspection of model visualizations or inputs \cite{dalvi2019one,nguyen2016synthesizing,olah2020zoom,simonyan2013deep,zeiler2014visualizing,zhou2014object}.
To instead automate explanation generation, recent work \cite{bau2017network,dalvi2019neurox} has proposed to use labeled ``probing datasets'' to explain neurons by identifying concepts (e.g. \emph{dog} or \emph{verb}) closely aligned with neuron behavior.
However, the atomic concepts available in probing datasets
may be overly simplistic explanations of neurons. A neuron might robustly respond to images of dogs without being exclusively specialized for dog detection; indeed, some have noted the presence of \emph{polysemantic} neurons in vision models that detect multiple concepts \cite{fong2018net2vec,olah2020zoom}. The extent to which these neurons have learned meaningful perceptual abstractions (versus detecting unrelated concepts) remains an open question.
More generally, neurons may be more accurately characterized not just as simple detectors, but rather as operationalizing complex decision rules composed of multiple concepts (e.g.\ \emph{dog faces, cat bodies, and car windows}). Existing tools are unable to surface such compositional concepts automatically.
We propose to generate explanations by searching for logical forms defined by a set of composition operators over primitive concepts (Figure~\ref{fig:overview}).
Compared to previous work \cite{bau2017network}, these explanations serve as better approximations of neuron behavior, and identify behaviors that help us answer a variety of interpretability questions across vision and natural language processing (NLP) models. First, what kind of logical concepts are learned by deep models in vision and NLP? Second, do the quality and interpretability of these learned concepts relate to model performance? Third, can we use the logical concepts encoded by neurons to control model behavior in predictable ways?
We find that:
\begin{enumerate}
\item Neurons learn compositional concepts: in \textbf{image classification}, we identify neurons that learn meaningful perceptual abstractions (e.g.\ \emph{tall structures}) and others that fire for unrelated concepts. In natural language inference (\textbf{NLI}), we show that shallow heuristics (based on e.g.\ gender and lexical overlap) are not only learned, but reified in individual neurons.
\item Compositional explanations help predict model accuracy, but interpretability is not always associated with accurate classification: in \textbf{image classification}, human-interpretable abstractions are \emph{correlated} with model performance, but in \textbf{NLI}, neurons that reflect shallower heuristics are \emph{anticorrelated} with performance.
\item Compositional explanations allow users to predictably manipulate model behavior: we can generate crude ``copy-paste'' adversarial examples based on inserting words and image patches to target individual neurons, in contrast to black-box approaches \cite{alzantot2018generating,szegedy2014intriguing,wallace2019universal}.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{procedure.pdf}
\caption{Given a set of inputs (a) and scalar neuron activations (b) converted into binary masks (c), we generate an explanation via beam search, starting with an inventory of primitive concepts (d), then incrementally building up more complex logical forms (e). We attempt to maximize the IoU score of an explanation (f); depicted is the IoU of $M_{483}(\x)$ and \texttt{(water OR river) AND NOT blue}.}
\label{fig:overview}
\vspace{-1em}
\end{figure}
\section{Generating compositional explanations}
Consider a neural network model $f$ that maps inputs $\x$ to vector representations $r \in \mathbb{R}^d$. $f$ might be a prefix of a convolutional network trained for image classification or a sentence embedding model trained for a language processing task. Now consider an individual neuron $f_n(\x) \in \mathbb{R}$ and its activation on a set of concrete inputs (e.g.\ ResNet-18 \cite{he2016deep} layer 4 unit 483; Figure~\ref{fig:overview}a--b). How might we explain this
neuron's behavior in human-understandable terms?
The intuition underlying our approach is shared with the NetDissect procedure of \citet{bau2017network}; here we describe a generalized version. The core of this intuition is that a good explanation is a \emph{description} (e.g.\ a named category or property) that identifies the same inputs for which $f_n$ activates.
Formally, assume we have a space of pre-defined atomic \emph{concepts} $C \in \mathcal{C}$ where each concept is a function $C : \x \mapsto \{0, 1\}$ indicating whether $\x$ is an instance of $C$. For image pixels, concepts are image segmentation masks; for the \emph{water} concept, $C(\x)$ is 1 when $\x$ is an image region containing water (Figure~\ref{fig:overview}d).
Given some measure $\delta$ of the similarity between neuron activations and concepts, NetDissect explains the neuron $f_n$ by searching for the concept $C$ that is most similar:
\begin{align}
\textsc{Explain-NetDissect}(n) = \argmax_{C \in \mathcal{C}} \delta(n, C).
\label{eq:nd}
\end{align}
While $\delta$ can be arbitrary, \citet{bau2017network} first \emph{threshold} the continuous neuron activations $f_n(\x)$ into binary masks $M_n(\x) \in \{0, 1\}$ (Figure~\ref{fig:overview}c).
This can be done \emph{a priori} (e.g.\ for post-ReLU activations, thresholding above 0), or by dynamically thresholding above a neuron-specific percentile. We can then compare binary neuron masks and concepts with the Intersection over Union score (IoU, or Jaccard similarity; Figure~\ref{fig:overview}f):
\begin{align}
\delta(n, C) \triangleq \IoU(n, C) = {\big[\sum_{\x} \mathbbm{1}(M_n(\x) \land C(\x))\big]} ~\big/~ {\big[\sum_{\x} \mathbbm{1}(M_n(\x) \lor C(\x)) \big]}.
\end{align}
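For concreteness, the IoU between a binarized neuron mask and a
concept mask can be computed as in the following minimal sketch
(NumPy; the variable names are our own, not part of any released
code):
\begin{verbatim}
import numpy as np

def iou(neuron_mask, concept_mask):
    """IoU of two boolean arrays of identical shape, e.g. a
    thresholded activation map vs. a segmentation mask."""
    intersection = np.logical_and(neuron_mask, concept_mask).sum()
    union = np.logical_or(neuron_mask, concept_mask).sum()
    return intersection / union if union > 0 else 0.0
\end{verbatim}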
\paragraph{Compositional search.}
The procedure described in Equation~\ref{eq:nd} can only produce explanations from the fixed, pre-defined concept inventory $\mathcal{C}$.
Our main contribution is to combinatorially expand the set of possible explanations to include \emph{logical forms} $\LC$ defined inductively over $\mathcal{C}$
via composition operations such as disjunction (\textsc{Or}), conjunction (\textsc{And}), and negation (\textsc{Not}), e.g.\ $(L_1~\textsc{And}~L_2)(\x) = L_1(\x) \land L_2(\x)$ (Figure~\ref{fig:overview}e). Formally, if $\Omega_\eta$ is the set of $\eta$-ary composition functions, define $\LC$:
\begin{enumerate}
\item Every primitive concept is a logical form: $\forall C \in \mathcal{C}$, we have $C \in \LC$.
\item Any composition of logical forms is a logical form: $\forall \eta,\; \omega \in \Omega_\eta,\; (L_1, \dots, L_\eta) \in \LC^\eta$, where $\LC^\eta$ is the set of $\eta$-tuples of logical forms in $\LC$, we have $\omega(L_1, \dots, L_\eta) \in \LC$.
\end{enumerate}
Now we search for the best logical form $L \in \LC$:
\begin{align}
\textsc{Explain-Comp}(n) = \argmax_{L \in \LC} \IoU(n, L).
\label{eq:comp}
\end{align}
The $\argmax$ in Equation~\ref{eq:comp} ranges over a structured space of compositional expressions, and has the form of an inductive program synthesis problem \cite{kitzelmann2009inductive}.
Since we cannot exhaustively search $\LC$, in practice we limit ourselves to formulas of maximum length $N$, by iteratively constructing formulas from primitives via beam search with beam size $B = 10$. At each step of beam search, we take the formulas already present in our beam,
compose them with new primitives,
measure IoU of these new formulas, and keep the top $B$ new formulas by IoU, as shown in Figure~\ref{fig:overview}e.
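A minimal sketch of this search is given below; representing formulas
as nested tuples and restricting attention to the three binary
composition operators is our own simplification for illustration, not
the exact released implementation.
\begin{verbatim}
import numpy as np

def beam_search(neuron_mask, primitives, max_len=10, beam_size=10):
    """primitives: dict of concept name -> boolean mask array."""
    def iou(mask):
        union = np.logical_or(neuron_mask, mask).sum()
        inter = np.logical_and(neuron_mask, mask).sum()
        return inter / union if union > 0 else 0.0

    # Start the beam with the best-scoring primitive concepts.
    beam = sorted(((name, m) for name, m in primitives.items()),
                  key=lambda fm: iou(fm[1]), reverse=True)[:beam_size]
    for _ in range(max_len - 1):
        cands = list(beam)
        for formula, m in beam:          # compose with primitives
            for name, cm in primitives.items():
                cands.append(((formula, "AND", name), m & cm))
                cands.append(((formula, "OR", name), m | cm))
                cands.append(((formula, "AND NOT", name), m & ~cm))
        beam = sorted(cands, key=lambda fm: iou(fm[1]),
                      reverse=True)[:beam_size]
    return beam[0]                       # best (formula, mask) pair
\end{verbatim}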
\section{Tasks}
The procedure we have described above is model- and task-agnostic. We apply it to two tasks in vision and NLP: first, we investigate a scene recognition task explored by the original NetDissect work \cite{bau2017network},
which allows us to examine compositionality in a task where neuron behavior is known
to be reasonably well-characterized by atomic labels.
Second, we examine \emph{natural language inference} (NLI): an example of a seemingly challenging NLP task which has recently come under scrutiny due to models' reliance on shallow heuristics and dataset biases \cite{gardner2020evaluating,gururangan2018annotation,kaushik2020learning,mccoy2019right,poliak2018hypothesis,wallace2019universal}. We aim to see whether compositional explanations can uncover such undesirable behaviors in NLI models.
\begin{wrapfigure}{r}{0.4\textwidth}
\vspace{-.5em}
\centering
\includegraphics[width=0.4\textwidth]{broden.pdf}
\caption{Example concepts from the Broden dataset \cite{bau2017network}, reproduced with permission.}
\label{fig:broden_examples}
\vspace{-1em}
\end{wrapfigure}
\paragraph{Image Classification.} NetDissect \cite{bau2017network} examines whether a convolutional neural network trained on a scene recognition task has learned detectors that correspond to meaningful abstractions of objects.
We take the final 512-unit convolutional layer of a
ResNet-18 \cite{he2016deep} trained on the Places365 dataset \cite{zhou2017places}, probing for concepts in the ADE20k scenes dataset \cite{zhou2017scene} with atomic concepts $\mathcal{C}$ defined by annotations in the Broden dataset \cite{bau2017network}. There are 1105 unique concepts in ADE20k, categorized by Scene, Object, Part, and Color (see Figure~\ref{fig:broden_examples} for examples).
Broden has pixel-level annotations, so for each input image $\mathbf{X} \in \mathbb{R}^{H \times W}$, inputs are indexed by pixels $(i, j)$: $\mathbf{x}_{i, j} \in \mathcal{X}$. Let $f_n(\mathbf{x}_{i, j})$ be the activation of the $n$th neuron at position $(i, j)$ of the image $\mathbf{X}$, after the neuron's activation map has been bilinearly upsampled from layer dimensions $H_l \times W_l$ to the segmentation mask dimensions $H \times W$.
Following \cite{bau2017network}, we create neuron masks $M_n(x)$ via dynamic thresholding: let $T_n$ be the threshold such that $P(f_n(\x) > T_n) = 0.005$ over all inputs $\x \in \mathcal{X}$. Then $M_n(\x) = \mathbbm{1}(f_n(\x) > T_n)$.
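In NumPy, this dynamic thresholding amounts to a single quantile
computation (a sketch; \texttt{acts} stands for the upsampled
activations $f_n(\x)$ of one neuron over all inputs):
\begin{verbatim}
import numpy as np

acts = np.random.randn(100000)      # stand-in for f_n(x) over all x
T = np.quantile(acts, 1 - 0.005)    # threshold s.t. P(f_n(x) > T) = 0.005
mask = acts > T                     # binary neuron mask M_n(x)
\end{verbatim}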
For composition, we use operations \textsc{And} ($\land$), \textsc{Or} ($\lor$), and \textsc{Not} ($\lnot$), leaving more complex operations (e.g. relations like \textsc{above} and \textsc{below}) for future work.
\paragraph{NLI.}
Given premise and hypothesis sentences, the task of NLI is to determine whether the premise \emph{entails} the hypothesis, \emph{contradicts} it, or neither (\emph{neutral}).
We investigate a BiLSTM baseline architecture proposed by \cite{bowman2016fast}.
A bidirectional RNN encodes both the premise and hypothesis to form 512-d representations.
Both representations, and their elementwise product and difference, are then concatenated
to form a 2048-d representation that is fed through a multilayer perceptron (MLP) with two 1024-d layers with ReLU nonlinearities and a final softmax layer.
This model is trained on the Stanford Natural Language Inference (SNLI) corpus \cite{bowman2015large} which consists of 570K sentence pairs.
Neuron-level explanations of NLP models have traditionally analyzed how RNN hidden states detect word-level features as the model passes over the input sequence \cite{bau2019identifying,dalvi2019one}, but in most NLI models, these RNN features are learned early and are often quite distant from the final sentence representation used for prediction.
Instead, we analyze the MLP component, probing the 1024 neurons of the penultimate hidden layer for sentence-level explanations, so our inputs $\x$ are premise-hypothesis pairs.
We use the SNLI validation set as our probing dataset (10K examples). As our features, we take the Penn Treebank part of speech tags (labeled by SpaCy\footnote{\url{https://spacy.io/}}) and the 2000 most common words appearing in the dataset. For each of these we create 2 concepts that indicate whether the word or part-of-speech appears in the premise or hypothesis.
Additionally, to detect whether models are using lexical overlap heuristics \cite{mccoy2019right}, we define 4 concepts indicating that the premise and hypothesis have more than 0\%, 25\%, 50\%, or 75\% overlap, as measured by IoU between the unique words.
For our composition operators, we keep \textsc{And}, \textsc{Or}, and \textsc{Not}; in addition, to capture the idea that neurons might fire for groups of words with similar meanings, we introduce the unary \textsc{Neighbors} operator. Given a word feature $C$, let the \emph{neighborhood} $\mathcal{N}(C)$ be the set of 5 closest words $C'$ to $C$, as measured by their cosine distance in GloVe embedding space \cite{pennington2014glove}. Then,
$\textsc{Neighbors}(C)(\x) = \bigvee_{C' \in \mathcal{N}(C)} C'(\x)$ (i.e.\ the logical \textsc{Or} across all neighbors).
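A sketch of this operator follows; the \texttt{glove} dictionary and
the per-example concept mask arrays are hypothetical stand-ins:
\begin{verbatim}
import numpy as np

def neighbors(word, glove, k=5):
    """k closest words to `word` by cosine distance in GloVe space;
    `glove` maps word -> embedding vector."""
    v = glove[word] / np.linalg.norm(glove[word])
    dist = {w: 1 - np.dot(u, v) / np.linalg.norm(u)
            for w, u in glove.items() if w != word}
    return sorted(dist, key=dist.get)[:k]

def neighbors_concept(word, glove, concept_masks):
    """NEIGHBORS(C)(x): OR of the word concepts over the neighborhood."""
    masks = [concept_masks[w] for w in neighbors(word, glove)
             if w in concept_masks]
    return np.logical_or.reduce(masks)
\end{verbatim}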
Finally, since these are post-ReLU activations, instead of dynamically thresholding we simply define our neuron masks $M_n(\x) = \mathbbm{1}(f_n(\x) > 0)$. There are many ``dead'' neurons in the model,
and some neurons fire more often than others; we limit our analysis to neurons that activate reliably across the dataset, defined as being active at least 500 times (5\%) across the 10K examples probed.
\section{Do neurons learn compositional concepts?}
\begin{figure}[t]
\centering
\begin{minipage}[t]{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{by_length.pdf}
\caption{Distribution of IoU versus max formula length. The line indicates mean IoU. $N = 1$ is equivalent to NetDissect \cite{bau2017network}; IoU scores steadily increase as max formula length increases.}
\label{fig:iou_by_length}
\end{minipage}\hspace{1em}
\begin{minipage}[t]{.48\textwidth}
\centering
\includegraphics[width=\linewidth]{bullring.pdf}
\caption{NetDissect \cite{bau2017network} assigns unit 106 the label {\color{darkred} \texttt{bullring}}, but in reality it detects general sports fields, except football fields, as revealed by the {\color{darkblue} \textbf{length 3}} and {\color{darkgreen} \textbf{length 10}} explanations.}
\label{fig:bullring}
\end{minipage}
\vspace{-1em}
\end{figure}
\paragraph{Image Classification.}
Figure~\ref{fig:iou_by_length} (left) plots the distribution of IoU scores for the best concepts found for each neuron as we increase the maximum formula length $N$. When $N = 1$, we get \textsc{Explain-NetDissect}, with a mean IoU of 0.059; as $N$ increases, IoU increases up to 0.099 at $N = 10$, a statistically significant 68\% increase ($p = 2 \times 10^{-9}$).
We see diminishing returns after length 10, so we conduct the rest of our analysis with length 10 logical forms.
The increased explanation quality suggests that our compositional explanations indeed detect behavior beyond simple atomic labels: Figure~\ref{fig:bullring} shows an example of a \emph{bullring} detector which is actually revealed to detect fields in general.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{vision_examples.pdf}
\caption{Image classification explanations categorized by {\color{darkgreen} \textbf{semantically coherent}} abstraction (a--b) and specialization (c), and {\color{darkred} \textbf{unrelated}} polysemanticity (d). For clarity, logical forms are length $N = 3$.}
\label{fig:vision_examples}
\vspace*{2\floatsep}
\centering
\includegraphics[width=\textwidth]{nli_examples.pdf}
\caption{NLI length 5 explanations. For each neuron, we show the explanation (e.g.\ \texttt{pre:x} indicates \texttt{x} appears in the premise), IoU, class weights $w_\text{\{entail,neutral,contra\}}$, and activations for 2 examples.}
\label{fig:nli_examples}
\vspace{-1em}
\end{figure}
We can now answer our first question from the introduction: are neurons learning meaningful abstractions,
or firing for unrelated concepts? Both happen: we manually inspected
a random sample of 128 neurons in the network and their length 10 explanations, and found that \textbf{69\%} learned some meaningful combination of concepts, while \textbf{31\%} were \emph{polysemantic}, firing for at least some unrelated concepts.
The 88 ``meaningful'' neurons fell into 3 categories (examples in Figure~\ref{fig:vision_examples}; more in Appendix~\ref{app:vision_extra}; Appendix~\ref{app:vision_uniqueness} reports concept uniqueness and granularity across formula lengths):
\begin{enumerate}
\item 50 (57\%) learn a perceptual \textbf{abstraction} that is also lexically coherent, in that the primitive words in the explanation are semantically related (e.g.\ to \emph{towers} or \emph{bathrooms}; Figure~\ref{fig:vision_examples}a).
\item 28 (32\%) learn a perceptual \textbf{abstraction} that is \emph{not} lexically coherent, as the primitives are not obviously semantically related. For example, \texttt{cradle OR autobus OR fire escape} is a vertical rails detector, but we have no annotations of vertical rails in Broden (Figure~\ref{fig:vision_examples}b).
\item 10 (11\%) have the form $L_1 \; \texttt{AND NOT} \; L_2$, which we call \textbf{specialization}. They detect more specific variants of Broden concepts (e.g.\ \texttt{(water OR river) AND NOT blue}; Figure~\ref{fig:vision_examples}c).
\end{enumerate}
The observation that IoU scores do not increase substantially past length 10 corroborates the finding of \cite{fong2018net2vec}, who also note that few neurons detect more than 10 unique concepts in a model. Our procedure, however, allows us to more precisely characterize whether these neurons detect abstractions or unrelated disjunctions of concepts, and identify more interesting cases of behavior (e.g.\ \emph{specialization}).
While composition of Broden annotations explains a majority of the abstractions learned,
there is still considerable unexplained behavior. The remaining behavior could be due to noisy activations, neuron misclassifications, or detection of concepts absent from Broden.
\paragraph{NLI.}
NLI IoU scores reveal a similar trend (Figure~\ref{fig:iou_by_length}, right): as we increase the maximum
formula length, we account for more behavior, though scores continue increasing past length 30. However, short explanations are already useful: Figure~\ref{fig:nli_examples}, Figure~\ref{fig:nli_adversarial} (explained later), and Appendix~\ref{app:nli_extra} show example length 5 explanations, and Appendix~\ref{app:nli_uniqueness} reports on the uniqueness of these concepts across formula lengths. Many neurons correspond to simple decision rules based mostly on lexical features: for example, several neurons are \emph{gender sensitive} (Unit 870), and activate for \emph{contradiction} when the premise, but not the hypothesis, contains the word \texttt{man}. Others fire for verbs that are often associated with a specific label, such as \texttt{sitting}, \texttt{eating}, or \texttt{sleeping}. Many of these words have high \emph{pointwise mutual information} (PMI) with the class prediction; as noted by \cite{gururangan2018annotation}, the two words with the highest PMI with \emph{contradiction} are \texttt{sleeping} (15) and \texttt{nobody} (39, Figure~\ref{fig:nli_adversarial}). Still others (99) fire when there is high lexical overlap between premise and hypothesis, another heuristic in the literature \cite{mccoy2019right}. Finally, there are neurons that are not well explained by this feature set (473). In general, we have found that many of the simple heuristics \cite{gururangan2018annotation,mccoy2019right} that make NLI models brittle to out-of-distribution data \cite{gardner2020evaluating,kaushik2020learning,wallace2019universal} are actually reified as individual features in deep representations.
\section{Do interpretable neurons contribute to model accuracy?}
\label{sec:interp}
\begin{wrapfigure}{r}{0.45\textwidth}
\vspace{-2em}
\centering
\includegraphics[width=0.45\textwidth]{iou_v_perf.pdf}
\caption{Top: neuron IoU versus model accuracy over inputs where the neuron is active for vision (length 10) and NLI (length 3). Bottom: Pearson correlation between these quantities versus max formula length.}
\label{fig:iou_v_perf}
\vspace{-1em}
\end{wrapfigure}
A natural question to ask
is whether it is empirically desirable to have more (or less) interpretable neurons, with respect to the kinds of concepts identified above.
To answer this, we measure the performance of the entire model on the task of interest when the neuron is activated.
In other words, for neuron $n$, what is the model accuracy on predictions for inputs where $M_n(\x) = 1$?
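Concretely, this per-neuron conditional accuracy is a one-line
restriction (a sketch; the per-input arrays are hypothetical):
\begin{verbatim}
import numpy as np

def accuracy_when_active(active, preds, labels):
    """Model accuracy restricted to inputs where M_n(x) = 1.
    active: boolean mask; preds/labels: per-input arrays."""
    idx = np.flatnonzero(active)
    return (preds[idx] == labels[idx]).mean()
\end{verbatim}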
In \textbf{image classification}, we find that the more interpretable the neuron (by IoU), the more accurate the model is when the neuron is active
(Figure~\ref{fig:iou_v_perf}, left; $r = 0.31$, $p < 10^{-13}$); the correlation increases as the formula length increases and we are better able to explain neuron behavior. Given that we are measuring abstractions over the human-annotated features deemed relevant for scene classification, this suggests, perhaps unsurprisingly, that neurons that detect more interpretable concepts are more accurate.
However, when we apply the same analysis to the \textbf{NLI} model, the \emph{opposite} trend
occurs: neurons that we are better able to explain are \emph{less} accurate (Figure~\ref{fig:iou_v_perf}, right; $r = -0.60$, $p < 10^{-8}$). Unlike in vision, most sentence-level logical descriptions recoverable by our approach are spurious by definition, as they are too simple compared to the true reasoning required for NLI. If a neuron can be accurately summarized by simple deterministic rules, this suggests the neuron is making decisions based on spurious correlations, which is reflected by the lower performance. Analogously, the more \emph{restricted} our feature set (by maximum formula length), the better we capture this anticorrelation. One important takeaway is that the ``interpretability'' of these explanations is not \emph{a priori} correlated with performance, but rather dependent on the concepts we are searching for: given the right concept space, our method can identify behaviors that may be correlated \emph{or} anticorrelated with task performance.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{intervention_examples.pdf}
\caption{``copy-paste'' adversarial examples for vision. For each {\textcolor{darkred}{\textbf{scene}}} (with 3 example images at bottom), the neurons that contribute most (by \textcolor{darkblue}{\textbf{connection weight}}) are shown, along with their length 3 explanations. We target the \textbf{bold} explanations to crudely modify an input image and change the prediction towards/away from the scene. In the top-right corner, the left-most image is presented to the model (with predictions from 4 models shown); we modify the image to the right-most image, which changes the model prediction(s).}
\label{fig:vision_adversarial}
\vspace*{1.5\floatsep}
\includegraphics[width=\linewidth]{nli_adversarial.pdf}
\caption{``copy-paste'' adversarial examples for NLI. Taking an example from SNLI, we construct an {\color{darkpurple} \textbf{adversarial (adv)}} premise or hypothesis which changes the true label and results in an \emph{incorrect} model prediction (original label/prediction {\color{darkpurple} $\smash{\advarrow}$} adversarial label/prediction).}
\label{fig:nli_adversarial}
\vspace{-1.1em}
\end{figure}
\section{Can we target explanations to change model behavior?}
Finally, we see whether compositional explanations allow us to manipulate model behavior. In both models, we have probed the final hidden representation before a final softmax layer produces the class predictions. Thus, we can measure a neuron's contribution to a specific class with the weight between the neuron and the class, and see whether constructing examples that activate (or inhibit) these neurons leads to corresponding changes in predictions. We call these ``copy-paste'' adversarial examples to differentiate them from standard adversarial examples involving imperceptible perturbations \cite{szegedy2014intriguing}.
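Measuring a neuron's contribution to a class is then a single row
lookup in the final weight matrix (a sketch; \texttt{W} is a
hypothetical \texttt{(num\_classes, num\_neurons)} array):
\begin{verbatim}
import numpy as np

def top_contributing_neurons(W, class_idx, k=5):
    """Neurons with the largest weight into `class_idx` in the
    final softmax layer; W has shape (num_classes, num_neurons)."""
    w = W[class_idx]
    order = np.argsort(w)[::-1]          # most positive first
    return [(int(n), float(w[n])) for n in order[:k]]
\end{verbatim}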
\paragraph{Image Classification.} Figure~\ref{fig:vision_adversarial} shows some Places365 classes along with the
neurons that most contribute to the class as measured by the connection weight. In many cases, these connections are sensible; water, foliage, and rivers contribute to a \emph{swimming hole} prediction; houses, staircases, and fire escapes (objects) contribute to \emph{fire escape} (scene).
However, the explanations in \textbf{bold} involve polysemanticity or spurious correlations.
In these cases, we found it is possible to construct a ``copy-paste'' example which uses the neuron explanation to predictably alter the prediction.\footnote{Appendix~\ref{app:sensitivity} tests sensitivity of these examples to size and position of the copy-pasted subimages.} In some cases, these adversarial examples are generalizable across networks besides the probed ResNet-18, causing the same behavior across AlexNet \cite{krizhevsky2012imagenet}, ResNet-50 \cite{he2016deep}, and DenseNet-161 \cite{huang2017densely}, all trained on Places365.
For example, one major contributor to the \emph{swimming hole} scene (top-left) is a neuron that fires for non-blue water; making the water blue switches the prediction to \emph{grotto} in many models.
The consistency of this misclassification suggests that models are detecting underlying biases in the training data. Other examples include a neuron contributing to \emph{clean room} that also detects ice and igloos; putting an igloo in a corridor causes a prediction to shift from \emph{corridor} to \emph{clean room}, though this does not occur across models, suggesting that this is an artifact specific to this model.
\paragraph{NLI.} In NLI, we are able to trigger similar behavior by targeting spurious neurons
(Figure~\ref{fig:nli_adversarial}). Unit 39 (top-left) detects the presence of \texttt{nobody} in
the hypothesis as being highly indicative of \emph{contradiction}. When creating an adversarial example by adding \emph{nobody} to the original hypothesis, the true label shifts from \emph{entailment} to \emph{neutral}, but the model predicts \emph{contradiction}.
Other neurons predict \emph{contradiction} when \texttt{couch}-related words (Unit 133) or \texttt{sitting} (Unit 15) appear in the hypothesis, and can be similarly targeted.
Overall, these examples are reminiscent of the image-patch attacks of \cite{carter2019activation}, adversarial NLI inputs \cite{alzantot2018generating,wallace2019universal}, and the data collection process for recent \emph{counterfactual} NLI datasets \cite{gardner2020evaluating,kaushik2020learning}, but instead of searching among neuron visualizations or using black-box optimization to synthesize examples, we use explanations as a transparent guide for crafting perturbations by hand.
\section{Related Work}
\paragraph{Interpretability.}
Interpretability in deep neural networks has received considerable attention over the past few years.
Our work extends existing work on generating explanations for individual neurons in deep representations \cite{bau2019identifying,bau2017network,dalvi2019one,dalvi2019neurox,fong2018net2vec,olah2020zoom}, in contrast to analysis or probing methods that operate at the level of entire representations (e.g.\ \cite{andreas2017translating,hewitt2019structural,pimentel2020information}). Neuron-level explanations are fundamentally limited, since they cannot detect concepts distributed across multiple neurons, but this has advantages: first, neuron-aligned concepts offer evidence for representations that are \emph{disentangled} with respect to concepts of interest; second, they inspect unmodified ``surface-level'' neuron behavior, avoiding recent debates on how complex representation-level probing methods should be \cite{hewitt2019designing,pimentel2020information}.
\paragraph{Complex explanations.}
In generating logical explanations of model behavior, one related work is the Anchors procedure of \cite{ribeiro2018anchors}, which finds conjunctions of features that ``anchor'' a model's prediction in some local neighborhood in input space.
Unlike Anchors, we do not explain local model behavior, but rather globally consistent behavior of neurons across an entire dataset. Additionally, we use not just conjunctions, but more complex compositions tailored to the domain of interest.
As our compositional formulas increase in complexity, they begin to resemble
approaches to generating \emph{natural language} explanations of model decisions
\cite{andreas2017translating,camburu2018snli,hendricks2016generating,hendricks2018grounding,rajani2019explain}.
These methods primarily operate at the representation level, or describe rationales for individual model predictions.
One advantage of our logical explanations is that they are directly grounded in features of the data and have explicit measures of quality (i.e.\ IoU), in contrast to language explanations generated from black-box models that themselves can be uninterpretable and error-prone: for example, \cite{hendricks2018grounding} note that naive language explanation methods often mention evidence not directly present in the input.
\paragraph{Dataset biases and adversarial examples.}
Complex neural models are often
\emph{brittle}: they fail to generalize to out-of-domain data \cite{barbu2019objectnet,gardner2020evaluating,kaushik2020learning,recht2019imagenet} and are susceptible to adversarial attacks where inputs are subtly modified in a way that causes a model to fail catastrophically
\cite{ribeiro2018semantically,szegedy2014intriguing,wallace2019universal}. This may be due in part to biases in dataset collection \cite{barbu2019objectnet,gururangan2018annotation,poliak2018hypothesis,recht2019imagenet}, and models fail on datasets which eliminate these biases \cite{barbu2019objectnet,gardner2020evaluating,kaushik2020learning,recht2019imagenet}.
In this work we suggest that these artifacts are learned to the degree that we can identify specific detectors for spurious features in representation space,
enabling ``copy-paste'' adversarial examples constructed solely based on the explanations of individual neurons.
\section{Discussion}
We have described a procedure for obtaining compositional explanations of neurons in deep representations.
These explanations more precisely characterize the behavior learned by neurons, as shown through higher measures of explanation quality (i.e.\ IoU) and qualitative examples of models learning perceptual abstractions in vision and spurious correlations in NLI.
Specifically, these explanations (1) identify abstractions, polysemanticity, and spurious correlations localized to specific units in the representation space of deep models; (2) can disambiguate higher versus lower quality neurons in a model with respect to downstream performance; and (3) can be targeted to create ``copy-paste'' adversarial examples that predictably modify model behavior.
Several unanswered questions emerge:
\begin{enumerate}
\item We have limited our analysis in this paper to neurons in the penultimate hidden layers of our networks. Can we probe other layers, and better understand how concepts are formed and composed between the intermediate layers of a network (cf.\ \cite{olah2020zoom})?
\item Does \emph{model pruning} \cite{hinton2015distilling} more selectively remove the ``lower quality'' neurons identified by this work?
\item To what extent can the programs implied by our explanations serve as drop-in approximations of neurons, thus obviating the need for feature extraction in earlier parts of the network? Specifically, can we distill a deep model into a simple classifier over binary concept detectors defined by our neuron explanations?
\item If there is a relationship between neuron interpretability and model accuracy, as Section~\ref{sec:interp} has suggested, can we use neuron interpretability as a regularization signal during training, and does encouraging neurons to learn more interpretable abstractions result in better downstream task performance?
\end{enumerate}
\section*{Reproducibility}
Code and data are available at \url{github.com/jayelm/compexp}.
\section*{Broader Impact}
Tools for model introspection and interpretation are crucial for better understanding the
behavior of black-box models, especially as they make increasingly important decisions in high-stakes societal applications. We believe that the explanations generated in this paper can help unveil richer concepts that represent spurious correlations and potentially problematic biases in models, thus helping practitioners better understand the decisions made by machine learning models.
Nonetheless, we see two limitations with this method as it stands: (1) it currently requires technical expertise to implement, limiting usability by non-experts; (2) it relies on annotated datasets which may be expensive to collect, and may be biased in the kinds of features they contain (or omit). If a potential feature of interest is not present in the annotated dataset, it cannot appear in an explanation. Both of these issues can be ameliorated with future work in (1) building easier user interfaces for explainability, and (2) reducing data annotation requirements.
In high stakes cases, e.g.\ identifying model biases, care should also be taken to avoid relying too heavily on these explanations as causal proof that a model is encoding a concept, or assuming that the absence of an explanation is proof that the model does not encode the concept (or bias).
We provide evidence that neurons exhibit surface-level behavior that is well-correlated with human-interpretable concepts, but by themselves, neuron-level explanations cannot identify the full array of concepts encoded in representations, nor establish definitive causal chains between inputs and decisions.
\begin{ack}
Thanks to David Bau, Alex Tamkin, Mike Wu, Eric Chu, and Noah Goodman for helpful comments and discussions, and to anonymous reviewers for useful feedback.
This work was partially supported by a gift from NVIDIA under the NVAIL grant program.
JM is supported by an NSF Graduate Research Fellowship and the Office of Naval Research Grant
ONR MURI N00014-16-1-2007.
\end{ack}
\small
\section{Introduction}
Almost all statements that have been made in the other chapters of this review \cite{chapIntro} about the duality and integrability of string theory on $\mathrm{AdS}_5\times S^5$ and $\mathcal{N}=4$ Yang-Mills theory in four dimensions, also hold in an appropriately adapted form for a second example of the AdS/CFT correspondence. This example has been known since June 2008 \cite{Aharony:2008ug}, and it is as concrete as the ``old'' one. Because the involved space-times are of one less dimension, this correspondence is often referred to as AdS$_4$/CFT$_3$ to distinguish it from the more established AdS$_5$/CFT$_4$.\footnote{Since December 2009, also an AdS$_3$/CFT$_2$ correspondence has been discussed in the context of integrability \cite{Babichenko:2009dk}.}
In the AdS$_5$/CFT$_4$ case, we had IIB superstring theory on $\mathrm{AdS}_5\times S^5$ with self-dual RR 5-form flux $F^{(5)} \sim N$ through $\mathrm{AdS}_5$ and $S^5$. This is now replaced by:
\begin{eqnarray} \label{eqn:IIA-theory}
\parbox{100mm}{
IIA superstring theory on $\mathrm{AdS}_4\times\mathrm{CP}^3$\\
with RR four-form flux $F^{(4)} \sim N $ through $\mathrm{AdS}_4$\\
and RR two-form flux $F^{(2)} \sim k$ through a $\mathrm{CP}^1\subset\mathrm{CP}^3$.
}
\end{eqnarray}
On the gauge theory side, we had $\mathcal{N}=4$ superconformal Yang-Mills theory with coupling $g_{\mathrm{YM}}$ and gauge group $\grp{U}(N)$ on $\mathbbm{R}^{1,3}$. Now this is replaced by ABJM theory:
\begin{eqnarray} \label{eqn:ABJM-theory}
\parbox{100mm}{
$\mathcal{N}=6$ superconformal Chern-Simons-matter theory\\
with gauge group $\grp{U}(N)\times\grp{U}(N)$ on $\mathbbm{R}^{1,2}$\\
and Chern-Simons levels $k$ and $-k$.
}
\end{eqnarray}
Both theories are controlled by two and only two parameters, $k$ and $N$, which take integer values. These parameters determine all other quantities like coupling constants and the effective string tension. In ABJM theory, the Chern-Simons level $k$ acts like a coupling constant. The fields can be rescaled in such a way that all interactions are suppressed by powers of $\frac{1}{k}$, i.e.\ large $k$ is the weak coupling regime.
One can take a planar, or 't Hooft, limit which is given by
\begin{eqnarray} \label{eqn:tHooft}
k,N \to \infty \comma \lambda \equiv \frac{N}{k} = \mathrm{fixed} \; .
\end{eqnarray}
It is in this limit where integrability shows up and which is therefore the focus of this review. On the gravity side, the string coupling constant and effective tension are given by\footnote{There are corrections to the second relation at two loops in the sigma model \cite{Bergman:2009zh}.}
\begin{eqnarray} \label{eqn:stringparameter}
g_s \sim \lrbrk{\frac{N}{k^5}}^{1/4} = \frac{\lambda^{5/4}}{N}
\comma
\frac{R^2}{\alpha'} = 4\pi\sqrt{2\lambda}
\; ,
\end{eqnarray}
where $R$ is the radius of $\mathrm{CP}^3$ and \emph{twice} the radius of $\mathrm{AdS}_4$. These relations are qualitatively the same as in the AdS$_5$/CFT$_4$ context. In the planar limit $g_s$ goes to zero and thus the strings do not split or join. At small 't Hooft coupling, the background is highly curved and the string is subject to large quantum fluctuations. At large 't Hooft coupling, the background is weakly curved which renders the sigma-model weakly coupled and the string behaves classically.
The first equation in \eqref{eqn:stringparameter} contains a hint that the duality is about more than the relationship between \eqref{eqn:IIA-theory} and \eqref{eqn:ABJM-theory}. If we are not in the 't Hooft limit but if we let $N \gg k^5$, then the string coupling $g_s$ becomes large. However, strongly coupled IIA string theory is M-theory. Indeed, ABJM theory \eqref{eqn:ABJM-theory} at arbitrary value of $k$ and $N$ is dual to \cite{Aharony:2008ug}
\begin{eqnarray} \label{eqn:M-theory}
\parbox{85mm}{
M-theory on $\mathrm{AdS}_4 \times S^7/\mathbbm{Z}_k$\\
with four-form flux $F^{(4)} \sim N $ through $\mathrm{AdS}_4$.
}
\end{eqnarray}
In other words, ABJM theory is the world-volume theory of a stack of $N$ M2 branes moving on $\mathbbm{C}^4/\mathbbm{Z}_k$ \cite{Aharony:2008ug}. The duality of \eqref{eqn:IIA-theory} and \eqref{eqn:ABJM-theory} is really only a corollary of this more general M/ABJM duality in the limit where $k^5 \gg N$ and where therefore M-theory is well approximated by weakly coupled IIA string theory on an $\mathrm{AdS}_4 \times \mathrm{CP}^3$ background\footnote{$\mathrm{CP}^3$ arises from writing $S^7$ as $S^1$ fibered over $\mathrm{CP}^3$ and by identifying the circle as the M-theory direction, which shrinks to zero size under the orbifold action of $\mathbbm{Z}_k$ in the large $k$ limit.}. The lecture notes \cite{Klebanov:2009sg} discuss the general M/ABJM correspondence. However, in the planar limit \eqref{eqn:tHooft}, where $k$ and $N$ grow large with equal powers, we are always in the IIA regime. Thus, by concentrating on the question of integrability we are only concerned with IIA/ABJM. An extended and largely self-contained review of the AdS$_4$/CFT$_3$ correspondence is forthcoming \cite{newreview}.
\paragraph{Overview.} In a nutshell, the differences between AdS$_5$/CFT$_4$ and AdS$_4$/CFT$_3$, see \tabref{tab:nutshell}, are: The first duality involves theories that are invariant under the supergroup $\grp{PSU}(2,2|4)$ and therefore are maximally supersymmetric (32 supercharges), while the theories in the second duality are $\grp{OSp}(6|4)$-symmetric, a group which contains ``only'' 24 supercharges. After gauge fixing, the symmetry groups reduce to two and one copy of $\grp{SU}(2|2)$, respectively. The \emph{sixteen} elementary excitations in the 5/4d case transform in the representation $(2|2)_L\otimes(2|2)_R$ of the residual symmetry group, while there are only \emph{eight} elementary excitations in the 4/3d case which transform in the representation
\begin{eqnarray} \label{eqn:A+B}
(2|2)_{A\mathrm{-particles}}\oplus(2|2)_{B\mathrm{-particles}} \; .
\end{eqnarray}
In \secref{sec:ABJMtoInt} and \secref{sec:IIAtoInt} we will show how these two types of particles arise from the gauge and string theory degrees of freedom, respectively.
\begin{table}%
\begin{center}
\begin{tabular}{l|c|c}
& AdS$_5$/CFT$_4$ & AdS$_4$/CFT$_3$ \\[1mm] \hline
&& \\[-3mm]
Global symmetry & $\grp{PSU}(2,2|4)$ & $\grp{OSp}(6|4)$ \\[2mm]
\raisebox{11mm}[0mm]{Dynkin diagram}
& \raisebox{6mm}[0mm]{\includegraphics[scale=0.4]{dynkin-psu224}}
& \includegraphics[scale=0.4]{dynkin-osp64} \\[3mm]
Residual symmetry & $\grp{SU}(2|2)_L\times\grp{SU}(2|2)_R$ & $\grp{SU}(2|2)$ \\[3mm]
Representations & $(2|2)_L\otimes(2|2)_R = 16\, \mathrm{d.o.f}$ & $(2|2)_A\oplus(2|2)_B = 8\, \mathrm{d.o.f}$
\end{tabular}
\end{center}
\caption{\textbf{Comparison of symmetries.} The Dynkin diagram of $\grp{PSU}(2,2|4)$ contains two $\grp{SU}(2|2)$ branches which represent the residual symmetries, and exactly one momentum carrying root which we marked by shading it gray. This indicates that all 16 elementary excitations transform in a single irreducible representation with one fundamental index in each $\grp{SU}(2|2)$. The Dynkin diagram of $\grp{OSp}(6|4)$ contains only one $\grp{SU}(2|2)$ branch, but two momentum carrying roots. Correspondingly, the 8 elementary excitations transform in two copies of the fundamental representation of $\grp{SU}(2|2)$.}
\label{tab:nutshell}
\end{table}
Another difference between the two dualities is that the interpolation between weak and strong coupling in AdS$_4$/CFT$_3$ is much more intricate. Take e.g.\ the magnon dispersion relation, which due to the underlying $\grp{SU}(2|2)$ symmetry is fixed in either duality to be of the form \cite{Beisert:2005tm} (see also \cite{Berenstein:2008dc})
\begin{eqnarray} \label{eqn:general-disp-rel}
E(p) = \sqrt{Q^2 + 4 h^2(\lambda) \sin^2\tfrac{p}{2}} \; ,
\end{eqnarray}
where $Q$ is the magnon R-charge and where the function $h(\lambda)$ is \emph{not} fixed by symmetry. The fundamental magnon in AdS$_5$/CFT$_4$ has charge $Q=1$, while in AdS$_4$/CFT$_3$ it has $Q=\sfrac{1}{2}$. In the AdS$_5$/CFT$_4$ case the function $h(\lambda)$ turned out to be simply $\sqrt{\lambda}/4\pi$, which can be argued to arise from S-duality \cite{Berenstein:2009qd}. In the present case there is no such argument and indeed the function $h$ happens to be quite non-trivial. The weak and strong coupling asymptotics are given by
\begin{eqnarray} \label{eqn:general-h-expansion}
h(\lambda) = \begin{cases}
\lambda \Bigsbrk{ 1 + c_1 \lambda^2 + c_2 \lambda^4 + \ldots } & \mbox{for $\lambda\ll 1$} \; , \\[2mm]
\sqrt{\frac{\lambda}{2}} + a_1 + \frac{a_2}{\sqrt{\lambda}} + \ldots & \mbox{for $\lambda\gg 1$} \; ,
\end{cases}
\end{eqnarray}
where the leading terms were deduced in \cite{Minahan:2008hf,Gaiotto:2008cg} and \cite{Gaiotto:2008cg,Grignani:2008is}, respectively. In fact, the $\lambda$-dependence of many other quantities like the S-matrix, the Bethe ansatz, the Zhukowsky map, the universal scaling function, etc., are also related between the AdS$_5$/CFT$_4$ and the AdS$_4$/CFT$_3$ correspondence by appropriately replacing $\lambda$ by $h(\lambda)$. Despite this fact, the subleading terms seem to be scheme dependent.
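As a quick consistency check of these asymptotics, one may insert the leading terms of \eqref{eqn:general-h-expansion} into the dispersion relation \eqref{eqn:general-disp-rel} for the fundamental magnon with $Q=\sfrac{1}{2}$:
\begin{eqnarray}
E(p) = \sqrt{\tfrac{1}{4} + 4 h^2(\lambda) \sin^2\tfrac{p}{2}}
\approx \begin{cases}
\tfrac{1}{2} + 4 \lambda^2 \sin^2\tfrac{p}{2} & \mbox{for $\lambda\ll 1$} \; , \\[2mm]
\sqrt{2\lambda}\, \abs{\sin\tfrac{p}{2}} & \mbox{for $\lambda\gg 1$} \; ,
\end{cases}
\end{eqnarray}
which reproduces the two-loop gauge theory dispersion relation and the giant magnon energy, respectively; the scheme ambiguities alluded to above reside in the subleading coefficients $c_i$ and $a_i$.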
The first indication of this scheme dependence was the observation that string theory computations \cite{McLoughlin:2008ms,Alday:2008ut,Krishnan:2008zs} of the one-loop energy shift of the spinning folded string (encoded in the universal scaling function) gave an answer that differed from the prediction of the conjectured Bethe equations \cite{Gromov:2008qe}. Two possible, but mutually exclusive, resolutions were proposed. In \cite{Gromov:2008fy}, an algebraic curve inspired regularization was used to sum the string frequencies which changed the string theory result so that it agreed with the one from the Bethe equations. Conversely, in \cite{McLoughlin:2008he}, it was shown that in the string regularization scheme, the function $h(\lambda)$ receives a one-loop correction ($a_1$ in \eqref{eqn:general-h-expansion}), and when using this contribution in the Bethe equations then the string theory result was reproduced. A similar comparison of the worldsheet and the algebraic curve computations for the circular spinning string and the analysis of different prescriptions for summing frequencies was carried out in \cite{Bandres:2009kw}. The interplay between the summation prescriptions and the constant term in the strong-coupling expansion of $h(\lambda)$ was further explored in the context of giant magnons and their dispersion relation in \cite{Abbott:2010yb}.
It does not seem that consensus has been reached in the literature as to how this puzzle should be resolved. However, it is probably wrong to attach too much importance to the function $h(\lambda)$ as it is an \emph{unphysical} object. In order to compare the results of different calculations of the same quantity, one should rather eliminate $h(\lambda)$ (resp.\ $\lambda$) from this quantity in favor of a \emph{physical reference observable} that has been computed within the same scheme.
\section{\texorpdfstring{$\mathcal{N}=6$}{N=6} Chern-Simons matter theory}
\label{sec:ABJM-theory}
\paragraph{Field content.} ABJM theory is a three-dimensional superconformal Chern-Simons theory with product gauge group $\grp{U}(N)\times\hat{\grp{U}}(N)$ at levels $\pm k$ and specific matter content. The quiver diagram visualizing the fields of the theory and their gauge representations is drawn in \figref{fig:ABJM-quiver}. The entire field content is given by two gauge fields $A_\mu$ and $\hat{A}_\mu$, four complex scalar fields $Y^A$, and four Weyl-spinors $\psi_A$. The matter fields are $N\times N$ matrices transforming in the bi-fundamental representation of the gauge group.
\begin{figure}%
\begin{center}
\includegraphics[scale=1]{ABJM-quiver}%
\end{center}
\caption{\textbf{Quiver diagram of ABJM theory.} The arrows indicate the representations of the fields under the gauge groups. The arrows are drawn from a fundamental to an anti-fundamental representation.
}
\label{fig:ABJM-quiver}%
\end{figure}
\paragraph{Global symmetries.} The global symmetry group of ABJM theory, for Chern-Simons level%
\footnote{We are ignoring the symmetry enhancement to $\grp{OSp}(8|4)$ at $k=1$ and $k=2$, because for the purpose of discussing integrability we have to work in the 't Hooft limit where $k$ is large.} %
$k>2$, is given by the orthosymplectic supergroup $\grp{OSp}(6|4)$ \cite{Aharony:2008ug,Bandres:2008ry} and the ``baryonic'' $\grp{U}(1)_b$ \cite{Aharony:2008ug}. The bosonic components of $\grp{OSp}(6|4)$ are the R-symmetry group $\grp{SO}(6)_R\cong\grp{SU}(4)_R$ and the 3d conformal group $\grp{Sp}(4)\cong\grp{SO}(2,3)$. The conformal group contains the spacetime rotations $\grp{SO}(3)_r \cong \grp{SU}(2)_r$ and dilatations $\grp{SO}(2)_\Delta \cong \grp{U}(1)_\Delta$. The fermionic part of $\grp{OSp}(6|4)$ generates the $\mathcal{N} = 6$ supersymmetry transformations. The baryonic charge $\grp{U}(1)_b$ is $+1$ for bi-fundamental fields, $-1$ for anti-bi-fundamental fields, and $0$ for adjoint fields. The representations in which the fields transform under these symmetries are listed in \tabref{tab:ABJM-fields}. For more details about the $\grp{OSp}(6|4)$ group theory in this context see \cite{Papathanasiou:2009en}. Finally, the model also possesses a discrete, parity-like symmetry. This might be surprising since the Chern-Simons action is not invariant but changes sign under a canonical parity transformation. The trick to make the model parity invariant is to accompany the ``naive'' parity transformation by the exchange of the two gauge group factors. The total transformation is a symmetry because the Chern-Simons terms for the two gauge group factors have opposite signs.
\begin{table}%
\begin{center}
\begin{tabular}{c|cccccc}
& $\grp{U}(N)$ & $\hat{\grp{U}}(N)$ & $\grp{SU}(4)_{R}$ & $\grp{SU}(2)_{r}$ & $\grp{U}(1)_\Delta$ & $\grp{U}(1)_b$ \\ \hline
$A_\mu$ & $\rep{N}^2$ & $\rep{1}$ & $\rep{1}$ & $\rep{3}$ & $1$ & $0$ \\
$\hat{A}_\mu$ & $\rep{1}$ & $\rep{N}^2$ & $\rep{1}$ & $\rep{3}$ & $1$ & $0$ \\
$Y^A$ & $\rep{N}$ & $\bar{\rep{N}}$ & $\rep{4}$ & $\rep{1}$ & $\sfrac{1}{2}$ & $1$ \\
$\psi_{A}$ & $\rep{N}$ & $\bar{\rep{N}}$ & $\bar{\rep{4}}$ & $\rep{2}$ & $1$ & $1$
\end{tabular}
\end{center}
\caption{\textbf{Representations of ABJM fields.}
}
\label{tab:ABJM-fields}
\end{table}
\paragraph{Action.} The ABJM action was first spelled out in full detail in \cite{Benna:2008zy} in $\mathcal{N}=2$ superspace and in component form. An $\mathcal{N}=3$ \cite{Buchbinder:2008vi}, an $\mathcal{N}=1$ \cite{Mauri:2008ai}, and an $\mathcal{N}=6$ \cite{Cederwall:2008xu} superspace version are also known. The component action using the conventions of \cite{Benna:2008zy} reads
\begin{eqnarray} \label{eqn:ABJM-component-action}
\mathcal{S} \eq \frac{k}{4\pi} \int d^3x\: \Bigsbrk{
\levi^{\mu\nu\lambda} \mathop{\mathrm{tr}} \bigbrk{
A_\mu \partial_\nu A_\lambda + \tfrac{2i}{3} A_\mu A_\nu A_\lambda
- \hat{A}_\mu \partial_\nu \hat{A}_\lambda - \tfrac{2i}{3} \hat{A}_\mu \hat{A}_\nu \hat{A}_\lambda
}
\nl \hspace{20mm}
- \mathop{\mathrm{tr}} (D_\mu Y)^\dagger D^\mu Y
- i \mathop{\mathrm{tr}} \psi^\dagger \slashed{D} \psi
- V_{\mathrm{ferm}} - V_{\mathrm{bos}}
} \; ,
\end{eqnarray}
where the sextic bosonic and quartic mixed potentials are
\begin{eqnarray}
V_{\mathrm{bos}} \eq - \frac{1}{12} \mathop{\mathrm{tr}} \Bigsbrk{
Y^A Y_A^\dagger Y^B Y_B^\dagger Y^C Y_C^\dagger
+ Y_A^\dagger Y^A Y_B^\dagger Y^B Y_C^\dagger Y^C
\nl\hspace{11mm}
+ 4 Y^A Y_B^\dagger Y^C Y_A^\dagger Y^B Y_C^\dagger
- 6 Y^A Y_B^\dagger Y^B Y_A^\dagger Y^C Y_C^\dagger
} \; . \\
V_{\mathrm{ferm}} \eq \frac{i}{2} \mathop{\mathrm{tr}} \Bigsbrk{
Y_A^\dagger Y^A \psi^{\dagger B} \psi_B
- Y^A Y_A^\dagger \psi_B \psi^{\dagger B}
+ 2 Y^A Y_B^\dagger \psi_A \psi^{\dagger B}
- 2 Y_A^\dagger Y^B \psi^{\dagger A} \psi_B
\nl\hspace{10mm}
- \levi^{ABCD} Y_A^\dagger \psi_B Y_C^\dagger \psi_D
+ \levi_{ABCD} Y^A \psi^{\dagger B} Y^C \psi^{\dagger D}
} \; .
\end{eqnarray}
The covariant derivative acts on bi-fundamental fields as
\begin{eqnarray} \label{eqn:cov-derivative}
D_\mu Y = \partial_\mu Y + i A_\mu Y - i Y \hat{A}_\mu \; ,
\end{eqnarray}
while on anti-bi-fundamental fields it acts with $A_\mu$ and $\hat{A}_\mu$ interchanged. According to the M-theory interpretation, this theory describes the low-energy limit of $N$ M2 branes probing a $\mathbbm{C}^4/\mathbbm{Z}_k$ singularity. The three-dimensional spacetime of ABJM theory is the world-volume of those M2 branes. For conventions and further details we refer to \cite{Benna:2008zy}.
\paragraph{Perturbation theory and 't Hooft limit.} Note that the Chern-Simons level occurs in \eqref{eqn:ABJM-component-action} as an overall factor of the entire action. Alternatively, one can rescale the fields in such a way that all quadratic terms come without any factors of $k$ and interactions of order $n$ come with $\frac{1}{k^{n/2-1}}$. Either way, this shows that $g_{\mathrm{CS}}^2 \equiv \frac{1}{k}$ acts like a coupling constant of ABJM theory, quite similar to $g_{\mathrm{YM}}^2$ in $\mathcal{N}=4$ SYM, though of course $k$ has to be an integer to preserve non-abelian gauge symmetry. As announced in the introduction, the theory can be restricted to the planar sector by taking the 't Hooft limit \eqref{eqn:tHooft} which introduces the effective coupling
\begin{eqnarray} \label{eqn:ABJM-coupling}
\lambda \equiv g_{\mathrm{CS}}^2 N = \frac{N}{k} \; .
\end{eqnarray}
In this limit the theory becomes integrable \cite{Minahan:2008hf} (see also \cite{Gaiotto:2008cg,Bak:2008cp}) in the same sense as we are used to in planar $\mathcal{N}=4$ SYM theory and as we will discuss below.
\paragraph{Gauge group.} The model can be generalized to have gauge group $\grp{U}(M)_{k}\times\grp{U}(N)_{-k}$ \cite{Aharony:2008gk}. This generalization goes by the name ABJ theory. The M-theory interpretation is given by $\min(M,N)$ M2 branes allowed to move freely on $\mathbbm{C}^4/\mathbbm{Z}_k$ and $\abs{M-N}$ fractional M2 branes stuck to the singularity. The gauge theory action is formally the same as in \eqref{eqn:ABJM-component-action}, except that the matter fields are now given by rectangular matrices. Thus two 't Hooft couplings can be defined by
\begin{eqnarray} \label{eqn:ABJ-coupling}
\lambda = \frac{M}{k} \comma \hat{\lambda} = \frac{N}{k} \; ,
\end{eqnarray}
and it becomes possible to take different planar limits depending on the ratio of $\lambda$ and $\hat{\lambda}$. On the other hand, the generalized parity invariance of the ABJM theory is explicitly broken, because now the two gauge group factors cannot be exchanged anymore.
\paragraph{Deformation.} It is possible to introduce independent Chern-Simons levels $k$ and $\hat{k}$ for the two gauge groups $\grp{U}(N)$ and $\hat{\grp{U}}(N)$ that do not sum to zero. This generalized theory possesses less supersymmetry and less global symmetry. It is proposed to be dual to a type IIA background with the Romans mass $F_0 = k + \hat{k}$ turned on \cite{Gaiotto:2009mv}. This modification, however, seems to break integrability \cite{Forcella:2009gm}.
\section{From ABJM theory to the integrable model}
\label{sec:ABJMtoInt}
\paragraph{Spin-chain picture.} The integrability of the planar ABJM theory is best described in terms of an integrable $\grp{OSp}(6|4)$ spin-chain which represents single trace operators \cite{Minahan:2008hf}. A qualitative difference between the case at hand and the case of $\mathcal{N}=4$ SYM is that the ABJM spin-chain is an ``alternating spin-chain.'' Because the matter fields are in bi-fundamental representations of the product gauge group $\grp{U}(N)\times\hat{\grp{U}}(N)$, gauge invariant operators need to be built from products of fields that transform alternatingly in the representations $(\rep{N},\rep{\bar{N}})$ and $(\rep{\bar{N}},\rep{N})$, e.g.
\begin{eqnarray} \label{eqn:spin-chain-vacuum}
\mathop{\mathrm{tr}}( Y^1 Y_4^\dagger Y^1 Y_4^\dagger \cdots ) \; .
\end{eqnarray}
Thus, the spin-chain has even length and the fields on the odd sites are distinct from the ones on the even sites. On the odd sites, we can have any of the 4$_B$+8$_F$ fields $Y^A$, $\psi_{A\alpha}$, and on the even sites, we can have any of the 4$_B$+8$_F$ fields $Y^\dagger_A$, $\psi^{\dagger A}_\alpha$. We can also act with an arbitrary number of derivatives $D_\mu = D_{\alpha\beta}$ onto the fields, but derivatives do not introduce extra sites. Also field strength insertions do not count as extra sites as they can be written as anti-symmetrized derivatives.
\paragraph{Spin-chain excitations.} In the spin-chain description, the ABJM fields are distinguished according to whether they represent the vacuum (or ``down spin''), or elementary or multiple excitations. A convenient and common choice for the vacuum spin-chain is the BPS operator \eqref{eqn:spin-chain-vacuum}, i.e. $Y^1$ is the vacuum on the odd sites, and $Y_4^\dagger$ is the vacuum on the even sites.
Selecting a vacuum breaks the $\grp{OSp}(6|4)$ symmetry group of ABJM theory down to $\grp{SU}(2|2)\times\grp{U}(1)_{\mathrm{extra}}$ which becomes the symmetry group of the spin-chain model \cite{Minahan:2008hf,Gaiotto:2008cg}. The bosonic components of this $\grp{SU}(2|2)$ are $\grp{SU}(2)_G \times \grp{SU}(2)_r \times \grp{U}(1)_E$, where $\grp{SU}(2)_G$ is the unbroken part of $\grp{SU}(4)_R$, $\grp{SU}(2)_r \cong \grp{SO}(3)_r$ is the spacetime rotation group, and $\grp{U}(1)_E$ is the spin-chain energy $E=\Delta - J$ which itself is a combination of the conformal dimension $\Delta$ and a broken $\grp{SU}(4)_R$ generator $J$. The charges of the fields under these groups are listed and explained in \tabref{tab:ABJM-spin-chain-charges}.
\begin{table}%
\begin{center}
\begin{scriptsize}
\begin{tabular}{l|c|ccc|cc|c}
& $\grp{SU}(4)_R$ & $\grp{SU}(2)_{G'}$ & $\grp{SU}(2)_{G}$ & $\grp{U}(1)_{\mathrm{extra}}$ & $\grp{U}(1)_\Delta$ & $\grp{SU}(2)_r$ & $\grp{U}(1)_E$ \\
& $[p_1,q,p_2]$ & $J$ & & & $\Delta$ & $s$ & $E=\Delta-J$ \\ \hline
$Y^1$ & $[\,1\,,\,0\,,\,0\,]$ & $+1/2$ & 0 & $+1$ & $1/2$ & $0$ & $0$ \\
$Y^2$ & $[-1,1,0]$ & 0 & $+1/2$ & $-1$ & $1/2$ & $0$ & $1/2$ \\
$Y^3$ & $[0,-1,1]$ & 0 & $-1/2$ & $-1$ & $1/2$ & $0$ & $1/2$ \\
$Y^4$ & $[0,0,-1]$ & $-1/2$ & 0 & $+1$ & $1/2$ & $0$ & $1$ \\ \hline
$\psi_{1\pm}$ & $[-1,0,0]$ & $-1/2$ & 0 & $-1$ & $1$ & $\pm1/2$ & $3/2$ \\
$\psi_{2\pm}$ & $[1,-1,0]$ & 0 & $-1/2$ & $+1$ & $1$ & $\pm1/2$ & $1$ \\
$\psi_{3\pm}$ & $[0,1,-1]$ & 0 & $+1/2$ & $+1$ & $1$ & $\pm1/2$ & $1$ \\
$\psi_{4\pm}$ & $[\,0\,,\,0\,,\,1\,]$ & $+1/2$ & 0 & $-1$ & $1$ & $\pm1/2$ & $1/2$ \\ \hline
$D_{0}$ & $[\,0\,,\,0\,,\,0\,]$ & $0$ & $0$ & $0$ & $1$ & $0$ & $1$ \\
$D_{\pm}$ & $[\,0\,,\,0\,,\,0\,]$ & $0$ & $0$ & $0$ & $1$ & $\pm1$ & $1$ \\ \hline
$Y^\dagger_1$ & $[-1,0,0]$ & $-1/2$ & 0 & $-1$ & $1/2$ & $0$ & $1$ \\
$Y^\dagger_2$ & $[1,-1,0]$ & 0 & $-1/2$ & $+1$ & $1/2$ & $0$ & $1/2$ \\
$Y^\dagger_3$ & $[0,1,-1]$ & 0 & $+1/2$ & $+1$ & $1/2$ & $0$ & $1/2$ \\
$Y^\dagger_4$ & $[\,0\,,\,0\,,\,1\,]$ & $+1/2$ & 0 & $-1$ & $1/2$ & $0$ & $0$ \\ \hline
$\psi^{\dagger1\pm}$ & $[\,1\,,\,0\,,\,0\,]$ & $+1/2$ & 0 & $+1$ & $1$ & $\pm1/2$ & $1/2$ \\
$\psi^{\dagger2\pm}$ & $[-1,1,0]$ & 0 & $+1/2$ & $-1$ & $1$ & $\pm1/2$ & $1$ \\
$\psi^{\dagger3\pm}$ & $[0,-1,1]$ & 0 & $-1/2$ & $-1$ & $1$ & $\pm1/2$ & $1$ \\
$\psi^{\dagger4\pm}$ & $[0,0,-1]$ & $-1/2$ & 0 & $+1$ & $1$ & $\pm1/2$ & $3/2$
\end{tabular}
\end{scriptsize}
\end{center}
\caption{\textbf{Charges of fields.} The R-symmetry group $\grp{SO}(6)_R\cong\grp{SU}(4)_R$ splits up into $\grp{SU}(2)_{G'} \times \grp{SU}(2)_G \times \grp{U}(1)_{\mathrm{extra}}$, and the conformal group $\grp{Sp}(2,2)\cong\grp{SO}(2,3)$ splits up into $\grp{U}(1)_\Delta \times \grp{SU}(2)_r$. The symmetry group of the spin-chain is $\grp{SU}(2|2) \times \grp{U}(1)_{\mathrm{extra}} \supset \grp{SU}(2)_G \times \grp{SU}(2)_r \times \grp{U}(1)_E \times \grp{U}(1)_{\mathrm{extra}}$. The $\grp{U}(1)_J$ generator $J = \frac{p_1+q+p_2}{2}$ is the Cartan generator of $\grp{SU}(2)_{G'}$, and the $\grp{U}(1)_E$ generator $E$ is given by the difference $\Delta-J$.}
\label{tab:ABJM-spin-chain-charges}
\end{table}
By construction, the ground state spin-chain \eqref{eqn:spin-chain-vacuum} has energy $E = \Delta - J = 0$. This spin-chain can be excited by replacing one of the vacuum fields by a different field or by acting with a covariant derivative. This procedure increases the energy in quanta of $\delta E = 1/2$ by a total amount that can be read off from the last column in \tabref{tab:ABJM-spin-chain-charges}. If the energy increases by $1/2$, then the excitation is an elementary one. We find that the elementary excitations on the odd and even sites are given by
\begin{subequations} \label{eqn:spin-chain-particles}
\begin{eqnarray}
\mbox{``$A$''-particles:} && (Y^2,Y^3|\psi_{4+},\psi_{4-}) \; , \label{eqn:spin-chain-A-particle} \\[1mm]
\mbox{``$B$''-particles:} && (Y^\dagger_3,Y^\dagger_2|\psi^{\dagger1}_+,\psi^{\dagger1}_-) \; , \label{eqn:spin-chain-B-particle}
\end{eqnarray}
\end{subequations}
respectively \cite{Gaiotto:2008cg}. These are the two multiplets anticipated in \eqref{eqn:A+B}. All other fields correspond to composite excitations and are listed in \tabref{tab:multi-excitations}.
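As a quick check of this counting, consider replacing a vacuum field $Y^1$ on an odd site by $Y^2$ or by $Y^4$. Reading off the charges from \tabref{tab:ABJM-spin-chain-charges} gives
\begin{eqnarray}
E(Y^2) = \Delta - J = \sfrac{1}{2} - 0 = \sfrac{1}{2}
\comma
E(Y^4) = \sfrac{1}{2} - \bigbrk{-\sfrac{1}{2}} = 1 \; ,
\end{eqnarray}
so $Y^2$ is an elementary excitation, while $Y^4$ carries two quanta of energy and indeed appears as a double excitation in \tabref{tab:multi-excitations}.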
\begin{table}%
\begin{center}
\begin{tabular}{l|l|l}
& Multi-excitation & made of \\ \hline
Double excitations & $Y^\dagger_1 {\gray Y^1} \:,\; Y^4 {\gray Y^\dagger_4}$ & $Y^2 Y^\dagger_2 \pm Y^3 Y^\dagger_3$ \\
& $\psi_2 {\gray Y^\dagger_4} \:,\; \psi^{\dagger3} {\gray Y^1}$ & $\psi_4 Y^\dagger_2 \pm Y^3 \psi^{\dagger1}$ \\
& $\psi_3 {\gray Y^\dagger_4} \:,\; \psi^{\dagger2} {\gray Y^1}$ & $\psi_4 Y^\dagger_3 \pm Y^2 \psi^{\dagger1}$ \\ \hline
Triple excitations & $\psi_1 {\gray Y^\dagger_4 Y^1}$ & $Y^2 \psi^{\dagger1} Y^3$ \\
& $\psi^{\dagger4} {\gray Y^1 Y^\dagger_4}$ & $Y^\dagger_2 \psi_4 Y^\dagger_3$ \\
& $D_\mu {\gray Y^1 Y^\dagger_4}$ & $\psi_4 \gamma_\mu \psi^{\dagger1}$
\end{tabular}
\end{center}
\caption{\textbf{Multi-excitations.} In order to determine which elementary excitations a composite is made out of, one needs to compare their $\grp{SU}(2|2) \times \grp{U}(1)_{\mathrm{extra}}$ charges. E.g. for the triple excitation $\psi_1$ one can check that the charges of $\psi_1$ together with the two background fields $Y^1 Y^\dagger_4$ coincide with the charges of the three elementary excitations $Y^2 \psi^{\dagger1} Y^3$.}
\label{tab:multi-excitations}
\end{table}
\paragraph{Subsectors.} A subsector is a set of fields which is closed under the action of the spin-chain Hamiltonian, i.e. there is no mixing between spin-chains from within a subsector and spin-chains from outside. The subsectors of ABJM theory above the vacuum \eqref{eqn:spin-chain-vacuum} are listed in \tabref{tab:subsectors}. To prove that these sectors are closed to all orders in perturbation theory, one defines a positive semi-definite charge $P = n_1 p_1 + n_2 q + n_3 p_2 + n_4 \Delta + n_5 s + n_6 b \ge 0$ from the eigenvalues of all operators that commute with the spin-chain Hamiltonian $E=\Delta - J$. These are the 5 Cartan generators of $\grp{OSp}(6|4)$ and the baryonic charge $U(1)_b$. The set of fields with $P=0$ constitutes a closed subsector. Different subsectors are obtained by different choices for the numbers $n_i$.
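To illustrate, one admissible choice, using the charges listed in \tabref{tab:ABJM-spin-chain-charges}, is
\begin{eqnarray}
P = 2\Delta - p_1 - 2q - p_2 \; ,
\end{eqnarray}
which is readily checked to be non-negative on all fields and to vanish precisely on $Y^1$, $Y^2$, $Y^\dagger_3$, $Y^\dagger_4$, thus carving out the $\grp{SU}(2)\times\grp{SU}(2)$ sector of \tabref{tab:subsectors}.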
\begin{table}%
\begin{center}
\begin{tabular}{l|lll}
Subsector & Vacuum & Single & Double \\ \hline
Vacuum & $Y^1$ $Y^\dagger_4$ \\
$\grp{SU}(2)\times\grp{SU}(2)$ & $Y^1$ $Y^\dagger_4$ & $Y^2$ $Y^\dagger_3$ \\
$\grp{OSp}(2|2)$ & $Y^1$ $Y^\dagger_4$ & $\psi_{4+}$ $\psi^{\dagger1}_+$ & $D_+$\\
$\grp{OSp}(4|2)$ & $Y^1$ $Y^\dagger_4$ & $Y^2$ $\psi_{4+}$ $Y^\dagger_3$ $\psi^{\dagger1}_+$ & $D_+$ $\psi_{3+}$ $\psi^{\dagger2}_+$ \\ \hline
$\grp{SU}(2)$ & $Y^1$ $Y^\dagger_4$ & $Y^2$ \\
$\grp{SU}(1|1)$ & $Y^1$ $Y^\dagger_4$ & $\psi_{4+}$ \\
$\grp{SU}(2|1)$ & $Y^1$ $Y^\dagger_4$ & $Y^2$ $\psi_{4+}$ \\
$\grp{SU}(3|2)$ & $Y^1$ $Y^\dagger_4$ & $Y^2$ $Y^3$ $\psi_{4+}$ $\psi_{4-}$
\end{tabular}
\end{center}
\caption{\textbf{Subsectors.} This list of closed subsectors above the vacuum $\mathop{\mathrm{tr}}( Y^1 Y_4^\dagger Y^1 Y_4^\dagger \cdots )$ is complete, although a specific subsector can be realized also by other fields. That would correspond to a different embedding of the sector into the full theory. Note that there is no closed $\grp{SL}(2)$ sector that is made only out of derivatives as we had in $\mathcal{N}=4$ SYM. This is because derivatives are double excitations of fermions with the above choice of vacuum. However, it is also possible to consider closed subsectors based on a different vacuum. There is, for instance, an $\grp{SL}(2)$ sector built from derivatives onto the vacuum $\mathop{\mathrm{tr}}(Y^1\psi^{\dagger 1})^L$ \cite{Zwiebel:2009vb}, which was studied e.g.\ in \cite{Beccaria:2009ny,Beccaria:2009wb}.}
\label{tab:subsectors}
\end{table}
\paragraph{Spin-chain Hamiltonian.} The spin-chain Hamiltonian has been computed for various subsectors, at various loop orders, and with a variety of methods and approximations. The first results were obtained in the $\grp{SU}(4)$ sector\footnote{This sector is closed at two-loop order but not beyond.} at two\footnote{There is no contribution to the Hamiltonian at an odd number of loops as in three dimensions no such Feynman diagram is logarithmically divergent.} loops \cite{Minahan:2008hf,Bak:2008cp} where the spin-chain Hamiltonian reads
\begin{eqnarray}
H = \frac{\lambda^2}{2} \sum_{l=1}^{2L} \bigbrk{ 2 - 2 P_{l,l+2} + P_{l,l+2} K_{l,l+1} + K_{l,l+1} P_{l,l+2} } \; ,
\end{eqnarray}
with $P_{l,m}$ and $K_{l,m}$ being the permutation and the trace operator, respectively, and $2L$ being the length of the spin-chain. This Hamiltonian has been proven to be integrable by means of an algebraic Bethe ansatz \cite{Minahan:2008hf,Bak:2008cp}. In the $\grp{SU}(2)\times\grp{SU}(2)$ sector, independently studied in \cite{Gaiotto:2008cg}, the trace operators annihilate the states and the Hamiltonian reduces to the sum of two decoupled Heisenberg XXX$_{1/2}$ Hamiltonians, one acting on the even sites and one acting on the odd sites. The only coupling between these two sublattices comes from the cyclicity condition, which says that the \emph{total} momentum of all excitations has to be zero (mod $2\pi$), not the momentum on the even and odd sites individually. Nevertheless, the Hamiltonians continue to be decoupled up to six-loop order \cite{Gromov:2008qe}.
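Explicitly, dropping the trace terms in the Hamiltonian above gives
\begin{eqnarray}
H = \lambda^2 \sum_{l=1}^{2L} \bigbrk{ 1 - P_{l,l+2} }
= \lambda^2 \sum_{j=1}^{L} \bigbrk{ 1 - P^{\mathrm{odd}}_{j,j+1} }
+ \lambda^2 \sum_{j=1}^{L} \bigbrk{ 1 - P^{\mathrm{even}}_{j,j+1} } \; ,
\end{eqnarray}
where the odd and even sites have been relabeled by $j=1,\ldots,L$; each sum is a ferromagnetic Heisenberg XXX$_{1/2}$ Hamiltonian on a chain of length $L$.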
The extension of the two-loop Hamiltonian to the full theory was derived in \cite{Zwiebel:2009vb} and \cite{Minahan:2009te}. The integrability in the $\grp{OSp}(4|2)$ sector was proved by means of a Yangian construction \cite{Zwiebel:2009vb}.
The generalization to ABJ theory at two loops was studied in the scalar sector \cite{Bak:2008vd} and in the full theory \cite{Minahan:2009te}, which at this perturbative order amounts to replacing $\lambda^2$ in the ABJM result by $\lambda\hat{\lambda}$, cf.\ \eqref{eqn:ABJ-coupling}. That means that the absence of parity in ABJ theory is not visible at two-loop order.
Beyond two loops only the dispersion relation, i.e.\ the eigenvalue of the Hamiltonian on spin-chains with a single excitation, is known to date. It is of the general form \eqref{eqn:general-disp-rel}. The expansion of the interpolating function $h$ to four-loop order was computed for the ABJM and the ABJ theory in \cite{Minahan:2009aq,Minahan:2009wg,Leoni:2010tb} with the result
\begin{eqnarray} \label{eqn:h-func-ABJ}
h^2(\lambda,\hat{\lambda}) = \lambda\hat{\lambda} - (\lambda\hat{\lambda})^2 \Biggsbrk{
\frac{2\pi^2}{3} + \frac{\pi^2}{6} \biggbrk{\frac{\lambda-\hat{\lambda}}{\sqrt{\lambda\hat{\lambda}}}}^2
} \; ,
\end{eqnarray}
where the ABJM expression is obtained from this by setting the two 't Hooft couplings equal to each other. We see that $h(\lambda,\lambda)$ is of the form \eqref{eqn:general-h-expansion} with $c_1=-\pi^2/3$. Note that \eqref{eqn:h-func-ABJ} is invariant under the exchange of $\lambda$ and $\hat{\lambda}$, even though ABJ theory lacks manifest parity invariance. The fact that parity is not broken in the spin-chain picture is \emph{not} a consequence of integrability, because as shown in \cite{Bak:2008vd} there are integrable but parity-breaking spin-chain Hamiltonians already at two loops. Alternative explanations for the non-visibility of parity breaking were proposed in \cite{Bak:2008vd}. In ABJ theory one can also study the limit $\lambda \gg \hat{\lambda}$ \cite{Minahan:2009aq}. In this limit, the Hamiltonian of the $\grp{SU}(2)\times\grp{SU}(2)$ sector is, at any loop order, proportional to two decoupled Heisenberg spin-chain Hamiltonians \cite{Minahan:2009aq}. An exact expression for the $\lambda$-dependent prefactor, which gives a prediction for the function $h(\lambda,\hat{\lambda})$ in the limit $\hat{\lambda} \ll \lambda$, has been conjectured in \cite{Minahan:2010nn}. Very recently, an all-order guess for $h^2(\lambda)$ in the case $\lambda=\hat{\lambda}$ was made \cite{Leoni:2010tb}, which is in line with the available weak- and strong-coupling data.
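To make the coefficient $c_1$ quoted above explicit, one sets $\hat{\lambda}=\lambda$ in \eqref{eqn:h-func-ABJ} and expands:
\begin{eqnarray}
h^2(\lambda,\lambda) = \lambda^2 - \frac{2\pi^2}{3}\,\lambda^4
\qquad \Rightarrow \qquad
h(\lambda,\lambda) = \lambda \biggbrk{ 1 - \frac{\pi^2}{3}\,\lambda^2 + \ldots } \; .
\end{eqnarray}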
At six loops only a subset of Feynman diagrams has been evaluated, namely those which move the impurities along the spin-chain by the maximal amount that is possible at this loop order \cite{Bak:2009tq}. The contributions from this subset to the dilatation operator are consistent with the corresponding spin-chain being integrable \cite{Bak:2009tq}.
Non-planar contributions to the two-loop dilatation operator have also been computed in the $\grp{SU}(2)\times\grp{SU}(2)$ sector \cite{Kristjansen:2008ib}. The degeneracy of the dimensions of parity pairs at the planar level, which is a signature of integrability, is lifted by the non-planar contributions \cite{Kristjansen:2008ib}. At the non-planar level, one can also observe the breaking of parity in the ABJ theory already at two loops \cite{Caputa:2009ug}.
\section{Superstrings on \texorpdfstring{$\mathrm{AdS}_4 \times \mathrm{CP}^3$}{AdS4xCP3}}
\label{sec:IIA-theory}
\paragraph{String background.} $\mathrm{AdS}_4 \times \mathrm{CP}^3$ with two- and four-form fluxes turned on is a solution to IIA supergravity that preserves 24 out of 32 supersymmetries \cite{Nilsson:1984bj}, i.e.\ unlike $\mathrm{AdS}_5 \times S^5$ it is not maximally supersymmetric. The $\mathrm{AdS}_4 \times \mathrm{CP}^3$ superspace geometry has been constructed in \cite{Gomis:2008jt}. The fermionic coordinates $\Theta^{1..32} = \bigbrk{\vartheta^{1..24},\upsilon^{1..8} }$ split into 24 coordinates $\vartheta$, which correspond to the unbroken supersymmetries of the background, and eight coordinates $\upsilon$ corresponding to the broken supersymmetries.
\paragraph{Green-Schwarz action.} Although formal expressions for the Green-Schwarz superstring action exist for any type II supergravity background \cite{Grisaru:1985fv}, in practice it is generically hopeless to find exact expressions for the supervielbeins. Nevertheless, utilizing the connection to M-theory on $\mathrm{AdS}_4 \times S^7$, all functions that are required to write down the Nambu-Goto form of the action, in particular the supervielbeins and the NS-NS two-form superfield, were explicitly spelled out in \cite{Gomis:2008jt}. Two different $\kappa$-gauge-fixed versions of the action were given in \cite{Grassi:2009yj} and \cite{Uvarov:2009hf}. The latter version was obtained by a double dimensional reduction of the action of the supermembrane on $\mathrm{AdS}_4\times S^7$.
\paragraph{Coset action.} A less complete, but sometimes more pragmatic, approach to strings on $\mathrm{AdS}_4 \times \mathrm{CP}^3$ has earlier been taken in \cite{Arutyunov:2008if} and \cite{Stefanski:2008ik}. The observation is that $\mathrm{AdS}_4$ is the coset $\grp{SO}(2,3)/\grp{SO}(1,3)$ and $\mathrm{CP}^3$ is the coset $\grp{SO}(6)/\grp{U}(3)$, and that $\grp{SO}(2,3)\times\grp{SO}(6)$ is the bosonic subgroup of $\grp{OSp}(6|4)$. Thus the idea is to write the superstring action as a sigma-model on the supercoset
\begin{eqnarray} \label{eqn:osp-coset}
\frac{\grp{OSp}(6|4)}{\grp{SO}(1,3)\times \grp{U}(3)} \; ,
\end{eqnarray}
analogously to the $\grp{PSU}(2,2|4)/\grp{SO}(1,4)\times\grp{SO}(5)$ coset model for superstrings on $\mathrm{AdS}_5 \times S^5$ \cite{Metsaev:1998it}, which itself was inspired by the WZW-type action for strings in flat space \cite{Henneaux:1984mh}. Again it is possible to define a $\mathbbm{Z}_4$ grading \cite{Berkovits:1999zq} of the (complexified) algebra \cite{Arutyunov:2008if,Stefanski:2008ik}, and when this grading is used to split up the current one-form $A = -g^{-1}dg = A^{(0)} + A^{(1)} + A^{(2)} + A^{(3)}$, constructed from a parametrization of the coset representatives $g$, then the coset action is given by
\begin{eqnarray} \label{eqn:cosetaction}
\mathcal{S} = - \frac{R^2}{4\pi\alpha'} \int\!d\sigma\,d\tau \:
\mathop{\mathrm{str}} \Bigsbrk{ \sqrt{-h}\, h^{\alpha\beta} \, A^{(2)}_\alpha A^{(2)}_\beta
+ \kappa \levi^{\alpha\beta} \, A^{(1)}_\alpha A^{(3)}_\beta } \; .
\end{eqnarray}
The explicit form of this sigma-model action can look quite different depending on the choice of coset representative and the choice of gauge \cite{Arutyunov:2008if,Stefanski:2008ik,Uvarov:2008yi,Zarembo:2009au}.
\paragraph{Fermions, $\kappa$-symmetry and singular configurations.} There is a subtle problem with the coset action \eqref{eqn:cosetaction}. The supercoset \eqref{eqn:osp-coset} has only 24 fermionic directions, which is the number of supersymmetries preserved by the background. However, independent of how many supersymmetries are preserved, the Green-Schwarz superstring always requires two Majorana-Weyl fermions with a total number of 32 degrees of freedom. Thus the coset model misses 8 fermions and can therefore not be equivalent to the GS string! This problem did not exist in the case of $\mathrm{AdS}_5 \times S^5$ because that background is maximally supersymmetric and the corresponding supercoset has 32 fermionic directions.
It has been argued that the eight missing fermions $\upsilon$ are part of the 16 fermionic degrees of freedom that are unphysical anyway due to $\kappa$-gauge symmetry, i.e.\ one should think of the coset action on \eqref{eqn:osp-coset} as an action in which $\kappa$-symmetry has been partially gauge-fixed. Of the remaining 24 fermions $\vartheta$, a further 8 should then be unphysical. For this interpretation to be correct, the rank of $\kappa$-symmetry of the coset action must be 8. This is in fact true for generic bosonic configurations \cite{Arutyunov:2008if,Stefanski:2008ik}, unfortunately, however, not for strings that move only in the AdS part of the background, in which case the rank of $\kappa$-symmetry is 12 \cite{Arutyunov:2008if}. This means that on such a ``singular configuration'' the coset model is a truncation of the GS string where instead of removing 8 unphysical fermions (from 32 to 24), 4 physical fermions have been put to zero, while 4 unphysical fermions have been retained. A similarly singular configuration from the point of view of the coset model is given by the worldsheet instanton in $\mathrm{CP}^3$ of the Wick-rotated theory \cite{Cagnazzo:2009zh}.
The upshot is that the coset model is generically equivalent to the GS string, but \emph{not} on singular configurations. The consequence is that these singular configurations cannot be quantized semi-classically within the coset description.
\paragraph{Near plane-wave expansion.} One method for dealing with a curved RR-background at the quantum level is to take a Penrose limit of the geometry, which leads to a solvable plane-wave background, and then to include curvature corrections perturbatively. Penrose limits of the $\mathrm{AdS}_4\times\mathrm{CP}^3$ background were studied in \cite{Nishioka:2008gz,Gaiotto:2008cg,Grignani:2008is,Astolfi:2009qh,Grignani:2009ny}. The near plane-wave Hamiltonian was derived in a truncation\footnote{This truncation is not consistent and the absence of the fermions yields divergences, which were regularized using $\zeta$-function regularization. Up to so-called ``non-analytic'' terms, the result is correct.} to the bosonic sector in \cite{Astolfi:2008ji,Sundin:2008vt}, for a sector including fermions in \cite{Sundin:2009zu}, and for the full theory in \cite{Astolfi:2009qh}. Taking the Penrose limit of the $\mathrm{AdS}_4 \times \mathrm{CP}^3$ geometry in the AdS-light-cone gauge \cite{Uvarov:2009hf}, one ends up with a trivial plane-wave, namely flat space \cite{Uvarov:2009nk}. In this case, not just the near-flat-space Hamiltonian but the exact Hamiltonian is known \cite{Uvarov:2009nk}.
\paragraph{Pure spinors.} The pure spinor formulation of the superstring on $\mathrm{AdS}_4\times\mathrm{CP}^3$ was developed in \cite{Fre:2008qc,Bonelli:2008us,D'Auria:2008cw}. This approach is suitable for the covariant quantization of the string.
\section{From \texorpdfstring{$\mathrm{AdS}_4 \times \mathrm{CP}^3$}{AdS4xCP3} to the integrable model}
\label{sec:IIAtoInt}
\paragraph{Evidence for integrability.} The purely bosonic sigma-model on $\mathrm{AdS}_4 \times \mathrm{CP}^3$ is integrable at the classical level, though quantum corrections spoil the integrability \cite{Abdalla:1982yd,Abdalla:1984en}. For the super-coset model, classical integrability has also been proven \cite{Arutyunov:2008if,Stefanski:2008ik}. The Lax connection found in \cite{Bena:2003wd} for the $\mathrm{AdS}_5\times S^5$ case as a means of writing the equations of motion in a manifestly integrable form is directly applicable here. Moreover, the absence of particle production in the coset sigma-model has been shown explicitly for bosonic amplitudes at tree-level \cite{Kalousios:2009ey}. However, we know that the full GS string is more than the coset model. Evidence for the classical integrability of the complete $\mathrm{AdS}_4\times\mathrm{CP}^3$ superstring was recently given in \cite{Sorokin:2010wn} by constructing a Lax connection that was shown to be flat for (a) strings that move in a certain subspace that is different from the coset model and (b) the full theory to at least second order in fermions. Different integrable reductions of the sigma model have also been studied \cite{Ahn:2008hj,Rashkov:2008rm,Dukalski:2009pr}.
\paragraph{Matching $\mathbf{AdS_4\times CP^3}$ to ABJM theory.} The metric on $\mathrm{AdS}_4\times \mathrm{CP}^3$ has the two factors
\begin{eqnarray} \label{eqn:metric-AdS4CP3}
ds^2 = R^2 \Bigsbrk{ \tfrac{1}{4} ds^2_{\mathrm{AdS}_4} + ds^2_{\mathrm{CP}^3}} \; ,
\end{eqnarray}
where $R$ is the radius of $\mathrm{CP}^3$ which is twice the radius of $\mathrm{AdS}_4$. This relative size is demanded by supersymmetry and comes out automatically when one starts from the coset action \eqref{eqn:cosetaction}. The radius $R$ is related to the 't Hooft coupling $\lambda$ of ABJM theory by \eqref{eqn:stringparameter}. In global coordinates the metric for $\mathrm{AdS}_4$ reads
\begin{eqnarray} \label{eqn:metric-AdS4}
ds^2_{\mathrm{AdS}_4} = -\cosh^2\rho \, dt^2 + d\rho^2 + \sinh^2\rho \bigbrk{ d\theta^2 + \sin^2\theta\, d\varphi^2}
\end{eqnarray}
with coordinate ranges $\rho=0\ldots\infty$, $t=-\infty\ldots\infty$, $\theta=0\ldots\pi$, and $\varphi=0\ldots2\pi$. The metric on $\mathrm{CP}^3$ is the standard Fubini-Study metric and can be written as
\begin{eqnarray} \label{eqn:metric-CP3}
ds^2_{\mathrm{CP}^3} \eq d\xi^2 + \cos^2\xi\sin^2\xi \Bigsbrk{ d\psi + \sfrac{1}{2}\cos\theta_1\,d\varphi_1 - \sfrac{1}{2} \cos\theta_2\,d\varphi_2 }^2 \nl
+ \sfrac{1}{4} \cos^2\xi \Bigsbrk{ d\theta_1^2 + \sin^2\theta_1 \, d\varphi_1^2 }
+ \sfrac{1}{4} \sin^2\xi \Bigsbrk{ d\theta_2^2 + \sin^2\theta_2 \, d\varphi_2^2 } \; .
\end{eqnarray}
The coordinates $(\theta_1,\varphi_1)$ and $(\theta_2,\varphi_2)$ parameterize two two-spheres, the angle $\xi=0\ldots\frac{\pi}{2}$ determines their radii, and the angle $\psi=0\ldots2\pi$ corresponds to the $\grp{U}(1)_R$ isometry.
\begin{table}
\begin{center}
\begin{tabular}{l|l|l}
field & mass & dispersion relation \\ \hline
$t$, $\psi$ & $0$ & $\omega_n = n$ \\
$x_{1,2,3}$, $\xi$ & $\kappa$ & $\omega_n = \sqrt{\kappa^2 + n^2}$ \\
$\theta_{1,2}$, $\varphi_{1,2}$ & $\kappa/2$ & $\omega_n = \sqrt{(\kappa/2)^2 + n^2} \pm \kappa/2$
\end{tabular}
\end{center}
\caption{\textbf{Spectrum of fluctuations about the point-like string.} Two linear combinations of $\theta_{1,2}$ and $\varphi_{1,2}$ possess the dispersion relation with $+\kappa/2$, and two other linear combinations the one with $-\kappa/2$.}
\label{tab:fluctuations-point}
\end{table}
The background admits the five commuting Killing vectors
\begin{eqnarray} \label{eqn:killing-vectors}
E = -i \partial_t
\comma
S = -i \partial_\varphi
\comma
J_{\varphi_1} = -i \partial_{\varphi_1}
\comma
J_{\varphi_2} = -i \partial_{\varphi_2}
\comma
J_{\psi} = -i \partial_\psi
\end{eqnarray}
leading to the five conserved charges: the worldsheet energy $E$, the AdS-spin $S$ and the $\mathrm{CP}^3$ momenta $J_{\varphi_1}$, $J_{\varphi_2}$, and $J_\psi$. Note that this is one conserved charge less than in the $\mathrm{AdS}_5 \times S^5$ case where there are two AdS-spins. This shows that $\mathrm{AdS}_4\times\mathrm{CP}^3$ is less symmetric. The charges \eqref{eqn:killing-vectors} are one choice of Cartan generators of $\grp{SO}(3,2)\times\grp{SU}(4)$. The angular momenta $J_{\varphi_1}$ and $J_{\varphi_2}$ correspond to the Cartan generators of two $\grp{SU}(2)$ subgroups that on the gauge theory side transform $(Y^1,Y^2)$ and $(Y^3,Y^4)$, respectively. The angular momentum $J_\psi$ is the $\grp{U}(1)_R$ generator. Thus, the angular momenta are related to the charges in \tabref{tab:ABJM-spin-chain-charges} according to
\begin{eqnarray} \label{eqn:string-gauge-charges}
J_{\varphi_1} = \sfrac{1}{2} p_1
\comma
J_{\varphi_2} = \sfrac{1}{2} p_2
\comma
J_\psi = q + \sfrac{1}{2} (p_1 + p_2)
\; .
\end{eqnarray}
These relations are important for identifying classical strings with gauge theory operators. They also suggest a parametrization of $\mathrm{CP}^3$ inside $\mathbbm{C}^4$ in terms of the embedding coordinates
\begin{align}
y^1 & = \cos\xi \, \cos\tfrac{\theta_1}{2} \, e^{ \, i(+\varphi_1+\psi)/2 } &
y^3 & = \sin\xi \, \cos\tfrac{\theta_2}{2} \, e^{ \, i(+\varphi_2-\psi)/2 } \\
y^2 & = \cos\xi \, \sin\tfrac{\theta_1}{2} \, e^{ \, i(-\varphi_1+\psi)/2 } &
y^4 & = \sin\xi \, \sin\tfrac{\theta_2}{2} \, e^{ \, i(-\varphi_2-\psi)/2 } \nonumber
\end{align}
which can be identified one-to-one with the scalar fields $Y^A$ of ABJM theory.
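As a consistency check, note that
\begin{eqnarray}
\abs{y^1}^2 + \abs{y^2}^2 + \abs{y^3}^2 + \abs{y^4}^2 = \cos^2\xi + \sin^2\xi = 1 \; ,
\end{eqnarray}
so the $y^A$ parametrize the unit $S^7 \subset \mathbbm{C}^4$, and $\mathrm{CP}^3$ is obtained by modding out the overall phase rotation $y^A \to e^{i\alpha} y^A$.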
\paragraph{Worldsheet spectrum.} In order to relate the string description to the spin-chain picture, we need to quantize the worldsheet theory. It is only known how to do this by semiclassical means, i.e.\ by expanding the string about a classical solution and quantizing the fluctuations. As can be seen from the charges, the classical string solution that corresponds to the vacuum spin-chain, or in other words to the gauge theory operator $\mathop{\mathrm{tr}} (Y^1 Y^\dagger_4)^L$ (with $L$ large so that the string becomes classical), is a point-like string that moves along the geodesic parametrized by $t = \kappa \tau$, $\psi = \kappa \tau$, located at the center of $\mathrm{AdS}_4$ ($\rho=0$) and the equator of $\mathrm{CP}^3$ ($\xi=\pi/4$), and furthermore sitting at the north pole of the first sphere ($\theta_1=0$) and at the south pole of the other sphere ($\theta_2=\pi$). Expanding the fields in fluctuations of order $\lambda^{-1/4}$ yields the mass spectrum given in \tabref{tab:fluctuations-point}.
The massless fluctuations $\tilde{t}$ and $\tilde{\psi}$ can be gauged away, i.e. set to zero. This is the usual light-cone gauge, $t+\psi \sim \tau$, with one light-cone direction in $\mathrm{AdS}_4$ and one in $\mathrm{CP}^3$. We are left with 4 light excitations ($\theta_{1,2}$, $\varphi_{1,2}$) from $\mathrm{CP}^3$ and 4 heavy excitations of which one ($\xi$) comes from $\mathrm{CP}^3$ and the other three ($x_{1,2,3}$) from $\mathrm{AdS}_4$. For the eight physical fermions the same pattern is found: 4 light excitations of mass $\kappa/2$ and 4 heavy excitations of mass $\kappa$.
These worldsheet modes transform in definite representations of the residual symmetry group $\grp{SU}(2|2)\times\grp{U}(1)_{\mathrm{extra}}$ that is left after fixing the light-cone gauge \cite{Bykov:2009jy}. The light fields form two $(2|2)$-dimensional supermultiplets \cite{Zarembo:2009au}
\begin{subequations} \label{eqn:worldsheet-particles}
\begin{eqnarray}
\mbox{``$A$''-particles:} && (X^a,\psi_\alpha) \; , \label{eqn:worldsheet-A-particle} \\[1mm]
\mbox{``$B$''-particles:} && (X^\dagger_a,\psi^{\dagger \alpha}) \; , \label{eqn:worldsheet-B-particle}
\end{eqnarray}
\end{subequations}
where $a=1,2$ and $\alpha=1,2$ are $\grp{SU}(2)_{G}\times\grp{SU}(2)_{r}$ indices. The doublet of complex scalars $X^a$ is a combination of $\theta_{1,2}$ and $\varphi_{1,2}$, and the fermions are written in terms of a complex spinor $\psi_\alpha$. These two supermultiplets correspond precisely to the $A$- and $B$-particles \eqref{eqn:spin-chain-particles} in the spin-chain picture, respectively!
The heavy fields form one $(1|4|3)$-dimensional supermultiplet $(\xi,\chi^a_\alpha, x_{1,2,3})$ \cite{Zarembo:2009au}. The bosonic components are literally the coordinates used above, and the fermionic component is a doublet of Majorana spinors. These heavy fields, however, do not count as independent excitations in the spin-chain description; they are rather an artifact of the above analysis, which is done at infinite coupling $\lambda$. When going to finite coupling they ``dissolve'' into two light particles \cite{Zarembo:2009au}. At the technical level this is seen by looking at which particle poles appear in Green's functions at \emph{not} strictly infinite coupling \cite{Zarembo:2009au,Sundin:2009zu}. The first observation is that in the free theory the pole for the heavy particles with mass $\kappa$ coincides with the branch point of the branch cut that accounts for the pair production of two light modes with mass $\tfrac{\kappa}{2}$ each. When interactions are turned on, i.e. when $1/\sqrt{\lambda}$ corrections are considered, the pole moves into the branch cut, and the statement is that the exact propagator has a branch cut only.
\paragraph{Giant magnons.} As we have just seen, the worldsheet fluctuations match the spin-chain excitations, but only as far as their charges are concerned. The dispersion relation of the worldsheet excitations is relativistic rather than periodic as in \eqref{eqn:general-disp-rel}. In order to see the periodic dispersion relation also on the string theory side, macroscopically many quanta must be excited. The results are classical string solutions known as giant magnons \cite{Hofman:2006xt}, or dyonic giant magnons \cite{Dorey:2006dq,Chen:2006gea} if they have at least two non-zero angular momenta. The dispersion relations of all dyonic giant magnons are of the form \eqref{eqn:general-disp-rel} for appropriate values of $Q$.
The variety of giant magnons in $\mathrm{CP}^3$ is somewhat larger than in $S^5$. The simplest types are obtained by embedding the HM giant magnon \cite{Hofman:2006xt} into subspaces of $\mathrm{CP}^3$ \cite{Gaiotto:2008cg} (see also \cite{Abbott:2008qd}). There are two essentially different choices: one may either pick a proper two-sphere inside $\mathrm{CP}^3$ or a two-sphere with antipodes identified. According to these subspaces the former choice leads to what is called the $\mathrm{CP}^1$ ($\cong S^2$) giant magnon \cite{Gaiotto:2008cg} and the latter choice to the so-called $\mathrm{RP}^2$ ($\cong S^2/\mathbbm{Z}_2$) giant magnon \cite{Gaiotto:2008cg,Grignani:2008is}.
The $\mathrm{RP}^2$ giant magnon is in fact a threshold bound state of two HM giant magnons, one inside each of the $S^2$s parametrized by $(\theta_1,\varphi_1)$ and $(\theta_2,\varphi_2)$ in \eqref{eqn:metric-CP3} \cite{Grignani:2008is}. Therefore this kind of giant magnon is sometimes referred to as the $S^2\times S^2$ magnon or as the $\grp{SU}(2)\times\grp{SU}(2)$ magnon. This is, however, somewhat misleading as the two constituent magnons do not move independently.
The dyonic generalization of the $\mathrm{CP}^1$ giant magnon moves in a $\mathrm{CP}^2$ subspace of $\mathrm{CP}^3$ and was found for momentum $p=\pi$ in \cite{Abbott:2009um} and for general momenta in \cite{Hollowood:2009sc}. This giant magnon does not have an analogue in $\mathrm{AdS}_5\timesS} % {\mathbbm{S}^5$. The $\mathrm{CP}^2$ dyonic giant magnons are in one-to-one correspondence with the elementary spin chain excitations \eqref{eqn:spin-chain-particles}: the polarizations of the giant magnons match the flavors of the excitations \cite{Hatsuda:2009pc}. In \cite{Hatsuda:2009pc} it has also been shown, that the classical phase shifts in the scattering of these dyonic giant magnons are consistent
with the S-matrix proposed by \cite{Ahn:2008aa}. The general scattering solutions of $N$ giant magnons have also been known since very recently \cite{Kalousios:2010ne}, in fact for the much wider context of giant magnons on $\mathrm{CP}^n$, $\grp{SU}(n)$ and $S} % {\mathbbm{S}^n$ \cite{Hollowood:2009tw}.
The dyonic generalization of the $\mathrm{RP}^2$ giant magnon moves in an $\mathrm{RP}^3$ subspace of $\mathrm{CP}^3$ and was found in \cite{Ahn:2008hj}. This giant magnon is the CDO dyonic giant magnon on $S^3$ \cite{Chen:2006gea} embedded into $\mathrm{RP}^3$. It can be regarded as a composite of two $\mathrm{CP}^2$ dyonic magnons with equal momenta \cite{Hatsuda:2009pc}. Finally, by the dressing method one can also find a two-parameter one-charge solution \cite{Hollowood:2009sc,Kalousios:2009mp,Suzuki:2009sc}.
\section{Solving \texorpdfstring{AdS$_4$/CFT$_3$}{AdS4/CFT3} using integrability}
\label{sec:integrability}
In this section, we will briefly discuss those aspects of the methods employed to solve the AdS$_4$/CFT$_3$ model that differ from the ones in the AdS$_5$/CFT$_4$ case. For an introduction to these tools, we refer to the other chapters of this review. For the Bethe ansatz see \cite{chapABA}, for the S-matrix see \cite{chapSMat}, for the algebraic curve see \cite{chapCurve}, and for the thermodynamic Bethe ansatz and the Y-system see \cite{chapTBA,chapTrans}.
\paragraph{Asymptotic Bethe equations.} The Bethe equations for the two-loop $\grp{SU}(4)$ sector were derived within the algebraic Bethe ansatz scheme in \cite{Minahan:2008hf}, where the extension of the Bethe equations to the full theory, though still at one loop, was also conjectured. The form of these equations is quite canonical, and the couplings between the Bethe roots are encoded in the Dynkin diagram of $\grp{OSp}(6|4)$, see \tabref{tab:nutshell}. The all-loop extension of the Bethe equations was conjectured in \cite{Gromov:2008qe}.
The fact that we now have two types of momentum carrying roots---call them $u$ and $v$---means that the conserved charges are given by sums over all roots of both of these kinds
\begin{eqnarray} \label{eqn:BA-charges}
Q_n = \sum_{j=1}^{K_u} q_n(u_j) + \sum_{j=1}^{K_v} q_n(v_j) \; ,
\end{eqnarray}
where $q_n$ is the charge carried by a single root. The spin-chain energy, or anomalous dimension, or string light-cone energy, is the second charge $E = h(\lambda) Q_2$. The other Bethe roots---call them $r$, $s$, and $w$---are auxiliary roots and influence the spectrum only indirectly through their presence in the Bethe equations.
The $\grp{SU}(2)\times\grp{SU}(2)$ sector is given by exciting only the momentum-carrying roots. The $\grp{SU}(4)$ sector uses the roots $u$, $v$, $r$, though this sector is only closed at two loops. The four components of an $A$-particle, cf.\ \eqref{eqn:spin-chain-particles} and \eqref{eqn:worldsheet-particles}, correspond to the states with one $u$ root and excitation numbers $\{K_r,K_s,K_w\} = \{0,0,0\}$, or $\{1,0,0\}$, or $\{1,1,0\}$, or $\{1,1,1\}$ for the auxiliary roots. The same holds for the $B$-particle if the $u$-root is replaced by one of type $v$. This accounts for all light excitations. The heavy excitations are given by a stack of one of each kind of the momentum-carrying roots. This is the Bethe ansatz way of seeing that the heavy excitations are compounds.
This Bethe ansatz has been put to a systematic test by comparing the predicted eigenvalues to the direct diagonalization of the spin-chain Hamiltonian for various length-4 and length-6 states at two loops \cite{Papathanasiou:2009zm}.
\paragraph{S-Matrix.} It has been shown that the proposed all-loop Bethe ansatz can be derived from an exact two-particle S-matrix \cite{Ahn:2008aa}. The alternating nature of the spin-chain naturally breaks the S-matrix up into pieces: interactions between two $A$-particles, between two $B$-particles, and between one of each kind \cite{Ahn:2008aa}, where each piece is proportional to the old and famous $\grp{SU}(2|2)$ S-matrix \cite{Beisert:2005tm,Arutyunov:2006yd} from AdS$_5$/CFT$_4$. Crossing symmetry relates $AA$- and $BB$- to $AB$-scattering and therefore does not fix the overall scalar factor for any of them uniquely. A solution that is consistent with the Bethe equations was given in \cite{Ahn:2008aa} and uses the BES dressing phase \cite{Beisert:2006ez}.
This S-matrix does not have poles that correspond to the heavy particles, which is in line with them not being asymptotic states. The heavy particles occur, however, as intermediate states. That is seen from the fact that they appear as internal lines in the Feynman diagrams that are used to derive the worldsheet S-matrix from scattering amplitudes \cite{Zarembo:2009au}.
The S-matrix has the peculiarity that the scattering of $A$- and $B$-particles is reflectionless \cite{Ahn:2008tv}. Though at first unexpected, this property has been confirmed perturbatively at weak \cite{Ahn:2009zg} and at strong coupling \cite{Zarembo:2009au}. This reflectionlessness would follow straightforwardly if the two sums in \eqref{eqn:BA-charges} were individually conserved \cite{Ahn:2009tj}.
\paragraph{Algebraic curve.} The algebraic curve for the AdS$_4$/CFT$_3$ duality was constructed from the string coset sigma-model in \cite{Gromov:2008bz}. It is a ten-sheeted Riemann surface $q(x)$ whose branches---or quasi-momenta---are pairwise related $q_{1,2,3,4,5}=-q_{10,9,8,7,6}$. The physical domain is defined for spectral parameter $\abs{x}>1$. The values of the quasi-momenta within the unit circle are related to their values outside it by an inversion rule \cite{Gromov:2008bz}. Branch cut and pole conditions are identical to the ones in the AdS$_5$/CFT$_4$ case. The Virasoro constraints demand that the quasi-momenta $q_1,\ldots,q_4$ all have a pole with the same residue at $x=1$ and another one at $x=-1$, while the quasi-momentum $q_5$ cannot have a pole at $x=\pm1$.
For a given algebraic curve, the charges of the corresponding string solution are encoded in the large $x$ asymptotics. E.g.\ the curve
\begin{eqnarray} \label{eqn:vacuum-curve}
q_{1}(x) = \ldots = q_{4}(x) = \frac{L}{2g} \frac{x}{x^2-1}
\comma
q_5(x) = 0
\end{eqnarray}
carries the charges $(\Delta_0,S,J_{\varphi_1},J_{\varphi_2},J_{\psi}) = (L,0,\frac{L}{2},\frac{L}{2},L)$ and $\delta\Delta = 0$ of $\mathop{\mathrm{tr}}(Y^1 Y^\dagger_4)^L$ and thus corresponds to the vacuum. String excitations are represented by additional poles that connect the various branches. A dictionary between the polarizations of the excitations and the different branch connections is given in \cite{Gromov:2008bz}. The light modes can be recognized as those which connect a non-trivial sheet with a trivial sheet in \eqref{eqn:vacuum-curve}, and the heavy modes are those which connect two non-trivial sheets.
\paragraph{Thermodynamic Bethe ansatz and Y-system.} The Y-system for the $\grp{OSp}(6|4)$ spin-chain was conjectured along with the corresponding equations for AdS$_5$/CFT$_4$ in \cite{Gromov:2009tv}. A derivation of the Y-system, i.e. writing down the asymptotic Bethe ansatz at finite temperature for the mirror theory, formulating the string hypothesis, and Wick rotating back to the original theory, was performed in \cite{Bombardelli:2009xz} and \cite{Gromov:2009at}, and a modification of the original conjecture was found.
\section*{Acknowledgements}
I am very happy to thank T.~McLoughlin and O.~Ohlsson Sax for numerous very helpful discussions. Part of this review was written while I was still affiliated with the Princeton Center for Theoretical Science whom I thank for their support.
\phantomsection
\addcontentsline{toc}{section}{\refname}
\section{Introduction}
Rust is a statically typed programming language designed to improve performance and security by solving problems that C/C++ developers have long struggled with: memory errors and concurrent programming. Rust provides a way to import other libraries into a project, primarily through Rust's package manager, Cargo. These third-party libraries, known in the Rust ecosystem as crates, are imported from the central open-source repository crates.io\cite{1cratesIo}, and Cargo helps build code and download and compile third-party dependencies. Cargo is almost indispensable for managing third-party dependencies when writing more complex Rust programs, but the Cargo ecosystem still faces problems with malicious dependencies\cite{2pfretzschner2017identification,3duan2020towards,4ohm2020towards,5gkortzis2021software,6ferreira2021containing,7prana2021out,8jafari2021dependency,9lauinger2018thou,zerouali2019impact}, vulnerability propagation\cite{10decan2018impact,11kikas2017structure,12liu2022demystifying}, poor compatibility\cite{8jafari2021dependency,13moller2020detecting,14hafner2021node}, license violations\cite{15decan2019package,16qiu2021empirical,decan2019empirical,zerouali2018empirical}, technical lag of dependencies\cite{cox2015measuring,decan2018evolution,chinthanet2019lag}, and difficulty managing dependencies\cite{decan2016github,catuogno2017secure}.
Security vulnerabilities in the third-party dependencies of open-source software are one of the most pressing issues facing the open-source software supply chain. Discovering and fixing vulnerabilities in packages can take a long time, and during this time, vulnerabilities can propagate to dependent packages, which poses a significant potential security risk to the open-source software supply chain. The December 2021 outbreak of the open-source vulnerability in log4j\cite{log4j} is a stark example of such attacks, which exploit the increasing use of open source in the software development process, facilitated by dependency managers that automatically resolve, download, and install hundreds of open-source packages throughout the software lifecycle.
Previous research has focused on vulnerability propagation in the ecosystems of npm\cite{zimmermann2019small,6ferreira2021containing,8jafari2021dependency,cogo2021empirical}, RubyGems\cite{15decan2019package,11kikas2017structure,decan2017empirical}, Maven\cite{soto2021comprehensive,asyrofi2020ausearch}, PyPI\cite{decan2016topology,imminni2016spyse,valiev2018ecosystem,wang2020watchman}, Packagist\cite{15decan2019package}, etc. Very little work has been done on vulnerability propagation in the Cargo ecosystem, so in this study we focus on the security of dependencies in the Cargo ecosystem. Most other studies of vulnerability propagation in ecosystems consider only direct dependencies, and fewer scholars consider transitive dependencies. Moreover, those studies do not compare the resolved transitive dependencies with the actual official resolution rules, which limits the accuracy of their results. Our study considers not only direct and transitive dependencies but also the actual official resolution rules, so it is more accurate and reasonable to study vulnerability propagation in the Cargo ecosystem through this approach. We have conducted data mining and analysis of the official package registry of Cargo and combined it with the known security vulnerabilities of Rust\cite{GitHubAdvisoryDatabase} published on GitHub to build a knowledge graph of Cargo dependency vulnerabilities in the Neo4j\cite{Neo4j} graph database, taking into account the official dependency resolution rules and the Semantic\cite{Semantic} versioning system. Our results can help researchers better understand the vulnerability propagation problem in the Cargo package ecosystem, help the Rust community improve package review mechanisms, and reduce software supply chain attacks against the Rust language.
The challenges faced in this paper include the following:
\begin{itemize}
\item {\verb|Data Analysis|}: The Cargo ecosystem's official package registry crates.io has released over 70,000 packages, each containing several different versions on average, and each version carries its own dependency information. Together these form a large and complex dependency network, and some packages may be discarded after release, making it extremely difficult to obtain, process, and analyze the data.
\item{\verb|Building a knowledge graph|}: After obtaining data about the Cargo package ecosystem from crates.io, a knowledge graph must be constructed based on the correlations between the data. The structure of the knowledge graph should accurately reflect the relationships between dependencies and provide data support for the subsequent dependency resolution algorithm, so designing the structure of the dependency-vulnerability knowledge graph is difficult.
\item{\verb|Dependency resolution algorithm design|}: The dependency resolution algorithm is the key to accurately resolving vulnerability propagation over Cargo's dependency-vulnerability knowledge graph and to determining vulnerability propagation paths. It is challenging to design an algorithm that satisfies the official dependency resolution rules while also accounting for rules such as semantic versioning.
\item{\verb|Propagation path determination|}: The dependency relationships in the Cargo package ecosystem are very complex, and it is difficult to build a knowledge graph of dependency vulnerabilities and resolution algorithms that accurately compute the actual paths of vulnerability propagation.
\end{itemize}
The main contributions of this paper are as follows.
\begin{enumerate}
\item [(1)] For the first time, a dependency-vulnerability knowledge graph is constructed for the Cargo ecosystem, filling a gap in the field. The graph contains 570,563 nodes and 4,023,703 edges, covering all libraries and versions in the Cargo ecosystem within a specific time frame.
\item [(2)] A new algorithm for parsing the dependency-vulnerability knowledge graph is proposed, which obtains transitive dependency relationships consistent with an actual installation without installing the corresponding library. Our proposed parsing algorithm takes only the name and version number of the library to be resolved; following the official dependency resolution rules, it recursively computes the transitive dependency relationships and saves them as JSON data (a minimal sketch of this recursive idea is shown after this list).
\item [(3)] Based on the Cargo dependency-vulnerability knowledge graph and our proposed parsing algorithm, we conducted the first large-scale empirical study on vulnerability propagation paths, propagation scope, vulnerability characteristics, and vulnerability propagation factors in the Cargo ecosystem. Our study shows that the security vulnerabilities in the Cargo ecosystem are mainly memory-related. For 18\% of the libraries affected by a vulnerability, the latest version is still affected by it. The proportion of versions affected by vulnerability propagation in the whole Cargo ecosystem is 19.78\%, and the proportion of libraries affected by vulnerability propagation is 28.61\%.
\item [(4)] Based on our findings, we propose feasible, practically implementable strategies for preventing vulnerability propagation, addressed respectively to the administrators of the Cargo community, the developers of the Cargo package manager, and the owners of libraries.
\end{enumerate}
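The following is a minimal, self-contained sketch of the recursive resolution idea referenced in contribution (2), written in Rust using the \verb|semver| crate. The registry contents, type layout, and function names are illustrative only; the full algorithm presented in Section V additionally handles yanked releases, feature flags, and dependency kinds.
\begin{verbatim}
use semver::{Version, VersionReq};
use std::collections::{BTreeMap, HashMap};

// name -> list of (published version, declared (name, requirement) pairs)
type Registry = HashMap<String, Vec<(Version, Vec<(String, VersionReq)>)>>;

// Mirror Cargo's preference for the greatest version in the allowed range.
fn pick(reg: &Registry, name: &str, req: &VersionReq) -> Option<Version> {
    reg.get(name)?.iter()
        .filter(|(v, _)| req.matches(v))
        .map(|(v, _)| v.clone())
        .max()
}

// Recursively collect the transitive dependency set of `name` at `version`.
fn resolve(reg: &Registry, name: &str, version: &Version,
           out: &mut BTreeMap<String, Version>) {
    if out.get(name) == Some(version) {
        return; // already resolved; also guards against dependency cycles
    }
    out.insert(name.to_string(), version.clone());
    let deps = reg[name].iter()
        .find(|(v, _)| v == version)
        .map(|(_, d)| d.clone())
        .unwrap_or_default();
    for (dep_name, dep_req) in deps {
        if let Some(v) = pick(reg, &dep_name, &dep_req) {
            resolve(reg, &dep_name, &v, out);
        }
    }
}

fn main() {
    let mut reg: Registry = HashMap::new();
    reg.insert("quote".into(), vec![(
        Version::parse("1.0.16").unwrap(),
        vec![("proc-macro2".into(), VersionReq::parse("1.0").unwrap())],
    )]);
    reg.insert("proc-macro2".into(), vec![
        (Version::parse("1.0.27").unwrap(), vec![]),
        (Version::parse("1.0.36").unwrap(), vec![]),
    ]);
    let mut out = BTreeMap::new();
    resolve(&reg, "quote", &Version::parse("1.0.16").unwrap(), &mut out);
    println!("{:?}", out); // {"proc-macro2": 1.0.36, "quote": 1.0.16}
}
\end{verbatim}
In the full system, the in-memory registry is replaced by queries against the knowledge graph, and the resulting dependency set is serialized to JSON.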
The rest of the paper is organized as follows. Section II presents background information on the Cargo package management mechanism and its vulnerability propagation. Section III discusses the progress of existing work on other ecosystems and the Cargo ecosystem. Section IV presents the idea of constructing the Cargo dependency-vulnerability knowledge graph in this paper. Section V presents the design and implementation of the dependency-vulnerability knowledge graph parsing algorithm. Section VI presents the empirical study of vulnerability propagation in the Cargo ecosystem. Section VII describes the impact of this paper on the Cargo ecosystem and the Rust security community and the limitations of this paper. Section VIII summarizes the main contents of this paper.
\section{Motivation \& Background}
This section describes Cargo's package management mechanism and some of the rules by which Cargo performs dependency resolution. It also describes the problems with current approaches to studying package management ecosystems, why we chose a knowledge graph, and why we wrote this article.
\subsection{Motivation}
Previous analyses of vulnerability propagation in package-manager ecosystems have mainly used static dependency analysis\cite{10decan2018impact,alfadel2021empirical}. If only the declared relationships between dependencies are statically resolved, such an analysis cannot accurately determine the scope of vulnerability propagation in the package management system, because many library version constraints only give an upper limit, but not a lower limit; in this case we can hardly say that all versions below the upper limit will be affected by the vulnerability. Static dependency resolution therefore has great limitations in calculating the scope of vulnerability propagation. A further problem with existing research is that it does not combine dependencies in the development environment with those in the production environment to analyze the actual propagation of vulnerabilities. To solve these problems, we propose a new dependency resolution algorithm that combines the official dependency resolution rules with static dependency resolution. Our dependency-vulnerability knowledge graph parsing algorithm allows more accurate vulnerability propagation analysis than previous studies, and it better addresses the false positives in version ranges caused by constraints that give only an upper limit, because it computes the concrete list of versions affected by the target vulnerability during actual resolution instead of merely reporting a version range. This is our motivation for writing this article.
\subsection{Why choose Knowledge Graph?}
Inspired by Liu et al.\cite{12liu2022demystifying}, we find that a knowledge graph is a good way to visualize information flow: it can express the correlations between different versions of dependencies and vulnerabilities in the Cargo ecosystem, and it can connect structured and unstructured data in the ecosystem to break down the information silos between these sources. Through a knowledge graph we can connect libraries, their different versions, and vulnerability data into a dependency-vulnerability knowledge graph that links upstream and downstream dependencies. Using this graph as a starting point, we then propose a dependency resolution algorithm that can better calculate the actual propagation of vulnerabilities in the current ecosystem. This is why we chose the knowledge graph as our research tool.
\subsection{Cargo dependency resolution rules}
Cargo allows users to specify the version of dependencies via the \emph{Cargo.toml} file. Let us take the example of quote\cite{quote}, a third-party package that has been downloaded 98 million times by the Rust community, to analyze how Cargo specifies the version of dependencies. If we specify quote = "1.0.16" in the \emph{Cargo.toml} file, 1.0.16 appears to be a specific version number, but it actually represents a version range that allows compatible updates under the constraints of the Semantic versioning system. Table 1 analyzes Cargo's compatibility conventions using the version requirement 1.0.16 as an example.
\begin{table}[]
\setlength{\tabcolsep}{5ex} \centering
\caption{Cargo version compatibility convention example analysis}
\begin{tabular}{cc}
\hline
\textbf{Version Requirements} & \textbf{Version range} \\ \hline
1.0.16 & \textgreater{}= 1.0.16, \textless 2.0.0 \\
1.0 & \textgreater{}= 1.0.0, \textless{}2.0.0 \\
1 & \textgreater{}= 1.0.0, \textless{}2.0.0 \\
0.0.16 & \textgreater{}= 0.0.16, \textless{}0.0.17 \\
0.0 & \textgreater{}= 0.0.0, \textless{}0.1.0 \\
0 & \textgreater{}= 0.0.0, \textless{}1.0.0 \\ \hline
\end{tabular}
\end{table}
Cargo uses Semantic to constrain the compatibility between different versions of a package. Cargo uses the leftmost non-zero number of the version to determine compatibility; e.g., version numbers 1.0.16 and 1.1.16 are considered compatible, and Cargo considers it safe to update within the compatible range, but updates outside the compatibility range, for example from 1.0.16 to 2.0.0, are not allowed. Table 2 gives the syntax of Cargo's version requirements for dependencies.
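The caret semantics of Table 1 can be checked directly with the \verb|semver| crate, on which Cargo's own version handling is based. The following is a minimal sketch; the requirement strings are taken from Table 1:
\begin{verbatim}
use semver::{Version, VersionReq};

fn main() {
    // A bare requirement "1.0.16" is interpreted as the caret range
    // >=1.0.16, <2.0.0 (first row of Table 1).
    let req = VersionReq::parse("1.0.16").unwrap();
    assert!(req.matches(&Version::parse("1.0.16").unwrap()));
    assert!(req.matches(&Version::parse("1.1.16").unwrap()));  // compatible
    assert!(!req.matches(&Version::parse("2.0.0").unwrap()));  // breaking
    assert!(!req.matches(&Version::parse("1.0.15").unwrap())); // too low

    // With leading zeros the leftmost non-zero digit moves right, so
    // "0.0.16" only admits >=0.0.16, <0.0.17 (fourth row of Table 1).
    let req0 = VersionReq::parse("0.0.16").unwrap();
    assert!(req0.matches(&Version::parse("0.0.16").unwrap()));
    assert!(!req0.matches(&Version::parse("0.0.17").unwrap()));
    println!("all Table 1 checks passed");
}
\end{verbatim}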
\begin{table}[]
\setlength{\tabcolsep}{3ex} \centering
\caption{Cargo's version requirement syntax for dependencies}
\label{tab:freq}
\begin{tabular}{ccc}
\hline
\textbf{Symbols} & \textbf{Example} & \textbf{Version range} \\ \hline
Caret & \textasciicircum{}1.0.16 & \textgreater{}=1.0.16,\textless{}2.0.0 \\
Tilde & $\sim$1.2 & \textgreater{}=1.2.0,\textless{}1.3.0 \\
Wildcard & 1.* & \textgreater{}=1.0.0,\textless{}2.0.0 \\
Exact & =1.0.16 & =1.0.16 \\
Comparison & \textgreater{}1.2 & \textgreater{}=1.3.0 \\
Multiple & \textgreater{}=1.3,\textless{}1.5 & \textgreater{}=1.3.0,\textless{}1.5.0 \\ \hline
\end{tabular}
\end{table}
When Cargo encounters multiple packages specifying dependencies on a common package, it first determines whether the versions of the dependencies specified by the multiple packages conform to the Semantic compatibility convention. If they do, it uses the greatest version currently available in the compatibility range. If they do not conform to the Semantic compatibility convention, Cargo builds two separate copies of the dependency, but this may introduce a resolution error. Many of the versions in Cargo are pre-releases, which Cargo does not usually use. To use a pre-release, the user must specify the pre-release version explicitly, which often means that it is unstable. The Semantic version requirement is not the only constraint considered by Cargo's dependency resolver; it also takes into account the features of the package, the type of dependency, the resolver version, and many other rules.
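The ``leftmost non-zero number'' rule that decides whether two requested versions can be unified into a single copy can be sketched as follows (an illustrative helper written against the \verb|semver| crate, not Cargo's actual implementation):
\begin{verbatim}
use semver::Version;

// Compare the leftmost non-zero component, mirroring the rule Cargo
// uses when deciding whether two resolved versions may be unified.
fn semver_compatible(a: &Version, b: &Version) -> bool {
    if a.major != b.major { return false; }
    if a.major != 0 { return true; }        // leftmost non-zero: major
    if a.minor != b.minor { return false; }
    if a.minor != 0 { return true; }        // leftmost non-zero: minor
    a.patch == b.patch                      // 0.0.x releases never unify
}

fn main() {
    let v = |s: &str| Version::parse(s).unwrap();
    assert!(semver_compatible(&v("1.0.16"), &v("1.1.16"))); // one shared copy
    assert!(!semver_compatible(&v("1.0.16"), &v("2.0.0"))); // two copies built
    assert!(!semver_compatible(&v("0.1.0"), &v("0.2.0")));  // 0.x minor breaks
    println!("compatibility checks passed");
}
\end{verbatim}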
\section{Related work}
In this section, we will discuss security risks in the package ecosystem and work related to dependency resolution and management.
\subsection{Security risks in package ecosystem}
Many scholars have empirically analyzed the evolution of dependencies in package ecosystems over time, examining how existing package dependencies affect the ecosystem. Kikas et al.\cite{11kikas2017structure} analyzed the structure and evolution of dependency networks in the JavaScript, Ruby, and Rust ecosystems, and their results revealed significant differences between language ecosystems. At the same time, their study shows that the ecosystems' vulnerability to the removal of the most popular packages is increasing.
Li et al.\cite{li2022empirical} investigated how yanked releases are used in the Cargo ecosystem, as well as the reasons for and frequency of their use. Their findings show that from 2014 to 2020, the percentage of yanked releases kept increasing, and that package owners yank releases for reasons other than revoking flawed releases. They also found that 46\% of packages used yanked releases, which resulted in 1.4\% of releases in the ecosystem having unresolved dependencies.
Evans et al.\cite{evans2020rust} conducted a large-scale empirical study of the use of unsafe Rust in real-world Rust libraries and applications. Their study showed that the unsafe keyword is used in less than 30\% of Rust libraries, but that more than half of the libraries cannot be fully statically checked by the Rust compiler because unsafe code is hidden somewhere in their call chain. Bae et al.\cite{bae2021rudra} present RUDRA, a tool for analyzing and reporting potential memory-safety vulnerabilities in unsafe Rust. They extend their analysis to all packages hosted in the Rust package registry: RUDRA can scan the entire registry and identify unknown memory-safety vulnerabilities within 6.5 hours.
Decan et al.\cite{15decan2019package} empirically compare the semantic versioning compliance of Cargo, npm, Packagist, and Rubygems and examine how this compliance evolves over time. They explore the extent to which ecosystem-specific features or policies affect the degree of compliance and present a ``wisdom of the crowds'' assessment to help package maintainers decide which type of versioning constraints they should impose on their dependencies.
Chinthanet et al.\cite{chinthanet2021lags} empirically investigated the fix releases of packages from 231 npm projects on GitHub to determine the possible lag between vulnerable releases and their fixes, and their study lays the groundwork for mitigating lag in the ecosystem. Zerouali et al.\cite{zerouali2018empirical} proposed a technical-lag model and validated it on the npm package manager; using this model, they analyzed the history of update times and technical lags for over 500,000 packages, considering both development and runtime dependencies, and studied direct as well as transitive dependencies.
\subsection{Dependency Analysis}
Liu et al.\cite{12liu2022demystifying} proposed a knowledge-graph-based dependency solution in which they parse the dependencies of the npm ecosystem into trees and investigate, on a large scale, the security threats posed by vulnerabilities in dependency trees. By precisely parsing the dependency trees with the official dependency resolution rules, they conducted an ecosystem-wide empirical study of vulnerability propagation in dependency trees and its evolution over time.
Zimmermann et al.\cite{zimmermann2019small} study the security risks for npm users by systematically analyzing the dependencies between packages, the maintainers responsible for these packages, and publicly reported security issues, examining the possibility of running vulnerable and malicious code due to third-party dependencies. Their study finds that a single package may affect a large portion of the entire ecosystem, that a few maintainer accounts can inject malicious code into most packages, and that a lack of maintenance can cause packages to be vulnerable to attacks, while they give several mitigation techniques to face this problem.
Abate et al.\cite{abate2020dependency} review the idea of making dependency resolution a particular concern in package manager implementations, and by surveying the dependency resolution capabilities in state-of-the-art package managers, they argue that schemes such as SAT-based dependency resolution are being widely used, and present some new challenges for dependency resolution.
Catuogno et al.\cite{catuogno2017secure} addressed the problem of enforcing software dependencies in the package manager to prevent malicious users from forcing the system to install arbitrary packages. They also performed an experimental evaluation of their protocol, which updates the key material on the target device in a non-interactive manner; this key update allows decrypting further packages that depend on the new installation.
Pashchenko et al.\cite{pashchenko2018vulnerable} proposed a counting method that avoids over-inflation by carefully analyzing deployed dependencies, aggregating dependencies by project, and distinguishing discontinued dependencies. Their work addresses the over-inflation present in academic and industrial approaches to vulnerable dependencies in OSS software and satisfies the need of industrial practice for a proper allocation of development and audit resources; they found that the vast majority of vulnerable dependencies can be fixed by updating to a newer version.
\section{Dependency-vulnerability knowledge graph construction}
In this section, we combine the characteristics of the dependency information in the Cargo software registry crates.io and the security vulnerabilities already disclosed in the Rust language to construct a Cargo dependency vulnerability knowledge graph to aid our subsequent vulnerability propagation research.
This paper relies on terms from the Rust software registry \emph{crates.io} and uses them in the graph database Neo4j, as described in Table 4.
To build the Cargo Dependency Vulnerability Knowledge Graph, we obtained the data of the library from the official Rust registry \emph{crates.io} and the disclosed vulnerabilities with CVE numbers from the GitHub Advisory. We then correlated these data according to specific dependencies through the graph database Neo4j.
\subsection{Dataset Source}
\textbf{Get library metadata.} We first monitored the GitHub repository corresponding to the official Rust software registry \emph{crates.io}, and obtained 75922 library metadata by cleaning and organizing the data with a Python script. The main field information of the metadata is shown in Table 5.
\textbf{Get the public CVE vulnerabilities of the Rust language.} We obtained the known Rust-language vulnerabilities with CVE numbers disclosed on GitHub Advisory through a Python script, for a total of 351 vulnerabilities; the main fields contained in the vulnerabilities are shown in Table 6.
\textbf{Incremental updates of library metadata and CVE vulnerability data.}
\emph{crates.io}, the software registry corresponding to the Cargo package manager, and the CVE vulnerability data corresponding to its libraries are constantly updated over time. To let our dependency-vulnerability knowledge graph capture these updates and stay current, we used the Python requests library to fetch crates.io's data sources from the GitHub repository and the publicly available Rust-language vulnerability data from the GitHub Advisory every 2 hours, comparing each snapshot against the existing hash table. Our dataset is updated whenever the data has changed.
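A minimal sketch of this polling loop is shown below; the URL is a placeholder of ours, since the real sources are the crates.io GitHub repository and the GitHub Advisory.
\begin{verbatim}
# Sketch of the incremental-update loop described above.
import hashlib
import time
import requests

SOURCE_URL = "https://example.org/crates-dump.json"  # placeholder
known_hash = None

while True:
    payload = requests.get(SOURCE_URL, timeout=60).content
    digest = hashlib.sha256(payload).hexdigest()
    if digest != known_hash:        # data changed since last poll
        known_hash = digest
        with open("dataset.json", "wb") as f:
            f.write(payload)        # trigger the graph update here
    time.sleep(2 * 60 * 60)         # poll every 2 hours
\end{verbatim}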
\subsection{Knowledge graph construction}
\textbf{The library metadata is associated with known CVE vulnerability data through the graph database Neo4j.} The library, library\_version, and CVE vulnerability data are stored as nodes, and the edges include has, library\_affects, version\_affects, and version\_depends. To better illustrate the connection between these nodes and relationships, refer to Figure 1, which describes the basic structure of Cargo's dependency-vulnerability knowledge graph.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{n2.png}
\caption{Cargo Dependency Vulnerability Knowledge Graph Structure Diagram}
\end{figure}
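The following Py2neo sketch illustrates how one library, one of its versions, and one CVE could be wired together using the node labels and relationship types of Table 4; the connection URI, the credentials, and the \texttt{num} property name are placeholders of our own.
\begin{verbatim}
# Sketch: create one library, one version and one CVE node,
# plus the relationships of Figure 1 (values from Table 6).
from py2neo import Graph, Node, Relationship

graph = Graph("bolt://localhost:7687",
              auth=("neo4j", "password"))   # placeholder

lib = Node("library", name="frontier")
ver = Node("library_version", name="frontier", num="0.1.0")
cve = Node("cve", value="CVE-2022-21685", severity="MODERATE")

graph.create(Relationship(lib, "has", ver))
graph.create(Relationship(cve, "library_affects", lib))
graph.create(Relationship(cve, "version_affects", ver))
\end{verbatim}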
\textbf{Cargo Dependency-vulnerability Knowledge Graph data statistics.} In order to build an accurate Cargo dependency-vulnerability knowledge graph, we obtained 75922 library metadata records from the official Rust software registry in January 2022 and 351 publicly disclosed Rust-language vulnerabilities with CVE numbers from the GitHub Advisory, which yield 494290 library\_version nodes. The graph contains 4023703 relationships, including 491744 has, 351 library\_affects, 6730 version\_affects and 3524878 version\_depends relationships; the specific data are summarised in Table 3.
\begin{table}[]
\setlength{\tabcolsep}{5ex} \centering
\caption{Cargo Dependency Vulnerability Knowledge Graph Statistics}
\scalebox{1.2}{
\begin{tabular}{cc}
\hline
\textbf{Nodes/Relationships} & \textbf{Statistics} \\ \hline
library & 75922 \\
cve & 351 \\
library\_version & 494290 \\
has & 491744 \\
library\_affects & 351 \\
version\_affects & 6730 \\
version\_depends & 3524878 \\ \hline
\end{tabular}
}
\end{table}
\begin{table*}[]
\setlength{\tabcolsep}{5ex} \centering
\caption{Cargo Dependency Vulnerability Knowledge Graph Glossary}
\label{tab:freq}
\scalebox{1}{
\begin{tabular}{ccc}
\hline
\textbf{Terminology} & \textbf{Description} & \textbf{Type} \\ \hline
library & Represents a separate software component in crates.io that can be referenced by other components. & Node \\
library\_version & Represents a certain version of a library. & Node \\
cve & Represents publicly disclosed vulnerabilities in the Rust language that have a CVE number. & Node \\
has & library-\textgreater{}library\_version means that library has this version. & Relationship \\
library\_affects & cve-\textgreater{}library represents a cve vulnerability that affects this library. & Relationship \\
version\_affects & cve-\textgreater{}library\_version represents the cve vulnerability that affects the library\_version. & Relationship \\
version\_depends & library\_version-\textgreater{}library represents the dependencies needed for a library\_version. & Relationship \\ \hline
\end{tabular}
}
\end{table*}
\begin{table*}[]
\setlength{\tabcolsep}{5ex} \centering
\caption{ Example of library metadata field information (using abort as an example)}
\begin{tabular}{ccc}
\hline
\textbf{Field Name} & \textbf{Value} & \textbf{Description} \\ \hline
id & abort & Database id \\
created\_at & 2018-01-09T17:32:09.879845+00:00 & The time when the library was published to crates.io. \\
description & Abnormal termination (stable, no\_std) & Basic descriptive information about the library. \\
downloads & 3506 & Total number of downloads \\
max\_stable\_version & 0.1.3 & The most stable version number \\
max\_version & 0.1.3 & Maximum version number \\
name & abort & name \\
newest\_version & 0.1.3 & Latest version number. \\
recent\_downloads & 1972 & Number of recent downloads. \\
updated\_at & 2021-01-12T22:27:17.016095+00:00 & Last updated \\ \hline
\end{tabular}
\end{table*}
\begin{table*}[]
\setlength{\tabcolsep}{5ex} \centering
\caption{CVE-2022-21685 Vulnerability Field Information}
\begin{tabular}{ccc}
\hline
\textbf{Field Name} & \textbf{Value} & \textbf{Description} \\ \hline
databaseId & 9045 & Database ID of the vulnerability record \\
severity & MODERATE & Severity \\
cvss & 0.0 & CVSS Scores \\
publishedAt & 2022-01-14T21:03:36Z & Release Time \\
summary & Integer underflow in Frontier & Vulnerability Overview \\
updatedAt & 2022-01-15T00:03:46Z & Update time \\
value & CVE-2022-21685 & Vulnerability Number \\
vulnerableVersionRange & \textless{}= 0.1.0 & Range of versions affected by the vulnerability \\
firstPatchedVersion & null & First patch version \\
ecosystem & RUST & Ecosystem \\
package\_name & frontier & Impacted package names \\ \hline
\end{tabular}
\end{table*}
\section{Dependency-vulnerability knowledge graph parsing algorithm}
In this section, we design and implement a dependency-vulnerability knowledge graph parsing algorithm based on the Cargo dependency-vulnerability knowledge graph constructed above. In this algorithm, we consider the static analysis method and take into account the parsing rules of Cargo in actual operation to ensure the accuracy of our parsing algorithm as much as possible.
\subsection{Algorithm design}
Existing studies mainly address dependency transfer relationships in ecosystems such as npm and Maven. Most of them adopt a static analysis approach without considering the official resolution rules, resulting in a low accuracy rate when resolving transitive dependencies.
The hardware side of the experimental platform in this paper includes a Linux server with a 3090A GPU; the software side includes the Neo4j graph database, Python v3.6.4, Py2neo v2021.2.3, and Semantic v2.13.0. Semantic is a library implementing semantic versioning that helps us determine which versions in a dependency range satisfy the rules. Neo4j is a high-performance NoSQL graph database that stores structured data as a graph rather than in tables; it can also be seen as a high-performance graph engine with all the features of a mature database and the advantages of being embeddable, fast, and lightweight.
Our parsing algorithm takes two inputs: the JSON metadata corresponding to the Cargo dependency-vulnerability knowledge graph, and the package name and version number to be parsed; it outputs JSON data containing the parent-child node relationships. The algorithm finds the node corresponding to the specified package name and version number and then iterates over all the dependencies required by this node. Following the official parsing rules, we mainly consider whether a dependency is a development dependency, whether the optional flag is false, whether its features are in the standard feature set, and so on. We determine the default version of each dependency through Semantic, resolve transitive dependencies by recursion, and finally determine the hierarchy from the parent-child relationships between dependencies, saving the data in JSON format. The core of our algorithm is shown in Algorithm 1.
\begin{algorithm}
\SetKwData{Left}{left}
\SetKwData{This}{this}
\SetKwData{LibraryNode}{LibraryNode}
\SetKwData{DependencyData}{DependencyData}
\SetKwData{CargoMap}{CargoMap}
\SetKwData{Up}{up}
\SetKwFunction{Union}{Union}
\SetKwFunction{OfficialParsingRules}{OfficialParsingRules}
\SetKwFunction{DependencyResolution}{DependencyResolution}
\SetKwData{ParsingList}{ParsingList}
\SetKwFunction{FindNode}{FindNode}
\SetKwFunction{FindCompress}{FindCompress}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{Graph.json,Name of package,Version number}
\Output{Dependency tree metadata}
\BlankLine
\LibraryNode$\leftarrow$\FindNode{$Name,Version$}\;
\ParsingList$\leftarrow$[]\;
\CargoMap$\leftarrow$\{\}\;
\SetKwProg{Fn}{Function}{:}{\KwRet}
\Fn{\DependencyResolution{$Name,Version$}}{
\For{$depend\leftarrow DependencyData[0]$ \KwTo $DependencyData[len-1]$}
{
\If{\OfficialParsingRules {$depend$} == True}{
\ParsingList.append(depend)\;
\CargoMap[name] $\leftarrow$ depend[version]\;
\DependencyResolution(depend[name],depend[version])
}\Else{continue}
}
}
\SetKwProg{Fn}{Function}{:}{\KwRet}
\Fn{\OfficialParsingRules{$depend$}}{
\If{depend[type] == 'dev'}{
\KwRet False\;
}
\If{depend[optional] == 'true'}{
\KwRet False\;
}
\If{depend[features] not in depend['features']['std']}{
\KwRet False\;
}
\Else{
\KwRet True\;
}
}
\caption{Knowledge Graph Parsing}
\label{algo_disjdecomp}
\end{algorithm}
\subsection{Algorithm Implementation}
We relied on the knowledge graph parsing algorithm constructed above to experiment with the top 100 downloaded packages in the official Rust package registry \emph{crates.io}, and achieved good parsing results. We will take the most downloaded package rand v0.8.5 of crates.io as an example to further illustrate the parsing results of our parsing algorithm.
By querying \emph{crates.io} we can see that the package \emph{rand} has been downloaded a total of 115,833,961 times since its release, and from the number of downloads, we can also see that this package is widely used. The result of our parsing algorithm is shown in Figure 2.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{n1.png}
\caption{The result of the parsing algorithm for rand}
\end{figure}
Our parsing algorithm can resolve both the direct and the transitive dependencies of rand, and determines, from each dependency's version range, the version number that conforms to Cargo's official parsing rules, i.e., the maximum version number within the allowed range. We tested the actual installation by manually specifying package names and version numbers in the \emph{Cargo.toml} file, parsed the installed dependencies in the \emph{Cargo.lock} file with a Python script, and verified the validity of our algorithm by comparing a large amount of data.
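A sketch of this validation step, assuming the algorithm's output is saved as a JSON list of \texttt{\{name, version\}} records (the file names are placeholders) and the \texttt{toml} Python library is available:
\begin{verbatim}
# Sketch: compare the (name, version) pairs produced by
# Algorithm 1 with the packages pinned in Cargo.lock.
import json
import toml

with open("parsed_tree.json") as f:      # output of Algorithm 1
    parsed = {(d["name"], d["version"]) for d in json.load(f)}

lock = toml.load("Cargo.lock")           # Cargo's own resolution
actual = {(p["name"], p["version"])
          for p in lock.get("package", [])}

print("missing from our result:", actual - parsed)
print("extra in our result:   ", parsed - actual)
\end{verbatim}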
\section{Vulnerability propagation study}
The dependency hierarchy between components in the Cargo ecosystem can lead to the propagation of vulnerabilities along the software supply chain. This section identifies the propagation paths of vulnerabilities in the Cargo ecosystem. It analyzes the possible propagation of vulnerabilities based on the Cargo dependency-vulnerability knowledge graph and parsing algorithm constructed above.
\subsection{Propagation path determination}
We take the libraries involved in the disclosed, CVE-numbered vulnerabilities in the Cargo ecosystem as starting points, combine them with our dependency tree parsing algorithm, search in reverse for all the libraries whose dependency trees contain a vulnerable library, and mark their paths and save them to a specified JSON file. We applied this method to all 351 obtained CVE vulnerabilities. In order to verify the accuracy of our vulnerability propagation paths, we also performed manual proofreading to ensure the consistency of the propagation paths calculated by the Python program with the actual propagation paths. When calculating the propagation paths we considered not only static parsing rules but also the official resolution rules, to keep the experimental environment consistent with how vulnerabilities actually propagate.
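The first step of this reverse lookup can be expressed as a Cypher query over the knowledge graph; the sketch below (placeholder credentials, and a \texttt{num} version property that we assume for illustration) finds every library\_version that directly depends on a vulnerable library such as \emph{beef}.
\begin{verbatim}
# Sketch: reverse lookup of direct dependents of a
# vulnerable library, using the relationship types above.
from py2neo import Graph

graph = Graph("bolt://localhost:7687",
              auth=("neo4j", "password"))   # placeholder

query = """
MATCH (v:library_version)-[:version_depends]->
      (l:library {name: $name})
RETURN v.name AS dependent, v.num AS version
"""
for record in graph.run(query, name="beef"):
    print(record["dependent"], record["version"])
# recursing on each dependent yields the full path
\end{verbatim}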
To better illustrate an actual vulnerability propagation path, we take the vulnerability CVE-2020-36442 as an example, as shown in Figure 3. This vulnerability affects the Rust library \emph{beef}, with affected versions \textless{}0.5.0. Our parsing algorithm not only gives the specific range of affected \emph{beef} versions but also the actual list of affected versions, together with the propagation path through the transitive dependency \emph{audiotags}. A traditional static analysis method might include the library \emph{allaudiotags} in the propagation path, but our parsing algorithm found that \emph{allaudiotags} depends on version 0.2.7182 of \emph{audiotags}, which is not affected by the vulnerability; in the figure we therefore connect it by dashed lines, while the solid boxes mark the actual propagation path. Our algorithm can thus determine the actual vulnerability propagation path more accurately, which provides an accurate data basis for the vulnerability propagation study below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{n4.png}
\caption{One of the propagation paths of CVE-2020-36442 vulnerability}
\end{figure}
\subsection{Statement of the Problem}
In order to accurately find the factors and characteristics of vulnerability propagation in the Cargo ecosystem, we conducted a large-scale evaluation of the libraries in crates.io based on the dependency-vulnerability knowledge graph parsing algorithm proposed in this paper. We launched a detailed investigation of the following issues.
\subsubsection{\textbf{RQ1: What are the characteristics of vulnerabilities in the Cargo ecosystem?}}
Using the 351 publicly disclosed Rust-language vulnerabilities as a data basis, we studied vulnerabilities in the Cargo ecosystem in terms of both their main types and their severity. The main types reflect what kinds of vulnerabilities currently affect the Cargo ecosystem, which can warn developers using the ecosystem to avoid being affected by such vulnerabilities, while the severity reflects how seriously vulnerabilities threaten libraries in the current Cargo ecosystem. The vulnerability types described in this article are mainly taken from the main types given by the CWE\cite{CWE} community.
\textbf{The main types of vulnerabilities.} In order to understand the main types of vulnerabilities propagated in the Cargo ecosystem, we computed type statistics for the acquired vulnerabilities, matching them against the NVD database by CVE number using crawler technology. The TOP 10 vulnerability types are shown in Table 7. From this table we can see that the vulnerability types affecting the security of the Cargo ecosystem mainly include use of memory after it has been released, use of uninitialized resources, improper memory buffer boundary operations, double free, and so on. The vulnerabilities currently affecting the security of the Cargo ecosystem are thus mainly memory-related. This is an interesting finding, because one of the advertised features of the Rust language is that memory is safer compared to other languages, yet most of the vulnerabilities in the Cargo ecosystem affect memory safety. Users must therefore be careful when managing and using third-party libraries through Cargo, to prevent downloading vulnerable libraries that could lead to memory safety problems in their projects.
\begin{table*}[]
\setlength{\tabcolsep}{5ex} \centering
\caption{Top 10 Cargo ecosystem vulnerability types}
\scalebox{1}{\begin{tabular}{ccc}
\hline
\textbf{Type of vulnerability (CWE)} & \textbf{Description} & \textbf{Number of appearances} \\ \hline
CWE-416 & Use after release & 26 \\
CWE-908 & Use of uninitialized resources & 25 \\
CWE-119 & Inappropriate restrictions on operations within memory buffer boundaries & 20 \\
CWE-415 & Double Release & 19 \\
CWE-787 & Cross-border memory writing & 16 \\
CWE-362 & Inappropriate concurrent execution of shared resources & 15 \\
CWE-77 & Improper escape handling of special elements used in commands & 14 \\
CWE-400 & Uncontrolled resource consumption & 9 \\
CWE-476 & Null pointer dereference & 8 \\
CWE-125 & Cross-border memory reading & 8 \\ \hline
\end{tabular}}
\end{table*}
\textbf{Severity of vulnerabilities.} The severity of the 351 known vulnerabilities in the Cargo ecosystem was computed from the GitHub Advisory, where the severity proportions are LOW (0.05), MODERATE (0.205), HIGH (0.460), and CRITICAL (0.328); high-risk and critical vulnerabilities together account for 0.788. The specific results are shown in Figure 4. We also analyzed the distribution of CVSS scores and found that the scores of vulnerabilities in the Cargo ecosystem are mainly distributed in the interval [4.7, 9.8], as shown in Figure 5.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{n5.png}
\caption{Cargo Ecosystem Vulnerability Severity Ratio Distribution}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth]{cvss2.eps}
\caption{Cargo ecosystem vulnerability CVSS score distribution}
\end{figure}
\textbf{Proportion without patches.} We studied whether the obtained vulnerabilities have patches and found that 0.2678 of the 351 vulnerabilities have none. The proportion of unpatched vulnerabilities in the Cargo ecosystem is thus very high; once hackers implement targeted software supply chain attacks against these vulnerabilities, serious harm may result.
\subsubsection{\textbf{RQ2: Can vulnerable dependencies be fixed by updating the version?}}
We conducted a comparative study of the version ranges known to be affected by vulnerabilities in the Cargo ecosystem against the latest versions of the affected libraries. We found that 82\% of the libraries affected by vulnerabilities could avoid them by updating to a new version, while for 18\% the latest version is still affected. To better illustrate this type of problem, we selected five instances where the vulnerabilities have been published for years yet the latest version of the library is still affected, as shown in Table 8. Our statistics show that many of the latest versions of libraries in the Cargo ecosystem are still affected by vulnerabilities; these libraries can be downloaded and used through Cargo, which opens the possibility of software supply chain attacks.
\begin{table*}[]
\setlength{\tabcolsep}{3ex} \centering
\caption{Example of the latest version of library still affected by the vulnerability}
\scalebox{1}{
\begin{tabular}{ccccc}
\hline
\textbf{CVE-ID} & \textbf{Affected library} & \textbf{Range of versions affected} & \textbf{Latest version} & \textbf{Vulnerability Announcement Time} \\ \hline
CVE-2016-10933 & portaudio & \textless{}=0.7.0 & 0.7.0 & 2019-8-26 \\
CVE-2020-35900 & array-queue & 0.3.3 & 0.3.3 & 2020-12-31 \\
CVE-2021-30456 & id-map & 0.2.1 & 0.2.1 & 2021-4-7 \\
CVE-2021-29936 & adtensor & \textless{}=0.0.3 & 0.0.3 & 2021-4-1 \\
CVE-2020-36204 & im & \textless{}=15.0.0 & 15.0.0 & 2021-1-26 \\ \hline
\end{tabular}
}
\end{table*}
\subsubsection{\textbf{RQ3: What is the scope of vulnerability propagation?}}
This paper studies the propagation of the 351 known vulnerabilities in the Cargo ecosystem. Based on the dependency-vulnerability knowledge graph parsing algorithm and the propagation path determination method proposed above, our calculations find 246 libraries containing known vulnerabilities, with a total of 6731 affected versions. 21722 libraries are affected by the propagation of vulnerabilities through transitive dependencies, involving a total of 97779 affected versions. This means that 28.61\% (21722/75922) of the libraries and 19.78\% (97779/494290) of the versions in the entire Cargo ecosystem are affected by vulnerability propagation. From these percentages we can see that the impact of vulnerability propagation in the Cargo ecosystem is not negligible.
We counted the five CVE vulnerabilities most seriously affecting the security of the Cargo ecosystem, including the libraries directly affected by these vulnerabilities, the libraries affected through propagation, and the total number of affected versions, as shown in Table 9.
\begin{table*}[]
\setlength{\tabcolsep}{3ex} \centering
\caption{The five most serious CVE vulnerabilities affecting the security of the Cargo ecosystem}
\begin{tabular}{cccc}
\hline
\textbf{CVE-ID} & \textbf{Direct impact library} & \textbf{Total number of libraries spreading} & \textbf{Total number of versions influenced} \\ \hline
CVE-2021-32715 & hyper & 2592 & 19898 \\
CVE-2021-45710 & tokio & 3630 & 18461 \\
CVE-2019-25010 & failure & 3252 & 16923 \\
CVE-2018-20997 & openssl & 622 & 5144 \\
CVE-2020-35916 & image & 876 & 5066 \\ \hline
\end{tabular}
\end{table*}
Our research found that a known CVE vulnerability may affect only one library at the time of public disclosure, yet the propagation scope of that library in the open-source software supply chain can be enormous, and the impact through propagation may be overwhelming. Imagine a scenario where a hacker manages to obtain credentials and pushes a malicious version of a library that sits in the dependency tree of a popular \emph{crates.io} package: once the malicious library enters that dependency tree, all users of the popular library are affected.
\subsubsection{\textbf{RQ4: Is the version containing the vulnerability being deprecated?}}
The yank mechanism is provided by Cargo to let developers deprecate a release that has been published to the software registry crates.io, much as package managers provide deprecation mechanisms for discouraging the use of certain features in a library, such as API methods. However, in our research we found that, among the libraries whose latest versions are still affected by vulnerabilities, the percentage whose latest version has yanked set to true is only 1.7\%. In other words, a high percentage of libraries in the Cargo ecosystem are not deprecated even though their latest versions have security vulnerabilities. This shows that release-level deprecation of vulnerable libraries is still lacking in the Cargo ecosystem.
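Whether the latest release of a crate has been yanked can be checked against the public crates.io HTTP API; the sketch below assumes the response carries a \texttt{versions} array ordered from newest to oldest, each entry with \texttt{num} and \texttt{yanked} fields.
\begin{verbatim}
# Sketch: check whether the newest release of a crate
# is yanked, via the crates.io HTTP API.
import requests

def latest_is_yanked(crate):
    url = "https://crates.io/api/v1/crates/" + crate
    data = requests.get(url, timeout=30).json()
    newest = data["versions"][0]     # newest release first
    return newest["num"], newest["yanked"]

print(latest_is_yanked("rand"))      # e.g. ('0.8.5', False)
\end{verbatim}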
\subsubsection{\textbf{RQ5: What are the main factors that cause the propagation of vulnerabilities in the Cargo ecosystem?}}
In this paper, the vulnerability propagation study in the Cargo dependency vulnerability knowledge graph found that the main factors causing the propagation of vulnerabilities in the Cargo ecosystem are the following.
\textbf{Ignoring the impact of transitive dependencies.} Many developers only focus on the security of the packages they download and resolve through Cargo, but ignore the security of the transitive dependencies that come with those packages. The security impact caused by direct dependencies is far smaller than the risk caused by transitive ones: when determining the spread of vulnerabilities, we found that the propagation caused by transitive dependencies is about ten times greater than that caused by direct dependencies.
\textbf{Not updated to the latest version.} After a vulnerability is discovered, developers are usually notified first so that they can fix it, and it may only be disclosed through public vulnerability databases such as NVD after a specified period. Responsible developers therefore tend to address security vulnerabilities by releasing an updated version, so a large share of the vulnerability propagation in the Cargo ecosystem is caused by users not promptly updating libraries that contain vulnerabilities.
\textbf{Libraries with known vulnerabilities are not being addressed.} Our research found that the latest versions of 60 of crates.io's libraries are affected by security vulnerabilities that were published several years ago. The crates.io community did not deprecate these vulnerable libraries or take other release-level measures, and users receive no warning message when installing them through Cargo. The failure of the community maintainers to deal effectively with these vulnerable libraries is therefore an essential factor in the spread of vulnerabilities.
\section{Discussion}
In this section, we discuss our findings and the implications of these findings for the Rust language, library maintainers, and the Cargo community. At the same time, we give measures on how to mitigate these software supply chain attacks against the Cargo ecosystem for the characteristics of vulnerability propagation in the Cargo ecosystem, and finally, conclude with the limitations of this paper.
\subsection{Impact}
We look at the factors that contribute to the propagation of vulnerabilities in the Cargo ecosystem and hope to provide some valuable insights for managers, users, and owners of libraries in the Cargo community while contributing to the future development of the Cargo ecosystem.
\textbf{Advice to the administrators of the Cargo community.} Monitor in real time the vulnerability information that mainstream security vulnerability databases disclose about libraries on crates.io, so that vulnerability disclosures are noticed as early as possible, and notify the maintainers of affected libraries so that a fixed version can be released quickly. Before a fixed version is released, any user downloading the library through Cargo should immediately receive a warning that it contains known vulnerabilities, and users who have already included it in their projects can be reminded by email. If the library owner fails to fix the vulnerability within a specified period, a mechanism could take the library down. We also note that developers in the Rust community have proposed a code review system for the Cargo package manager, \emph{cargo-crev}, a tool that helps Rust users assess the quality and trustworthiness of their package dependencies, so we recommend that the Cargo community increase support for such tools.
\textbf{Advice to Cargo users.} When installing a library through Cargo, users should not only confirm whether the library itself contains known CVE vulnerabilities but also check, through the \emph{Cargo.lock} file, whether its transitive dependency network contains known vulnerabilities; for a large project this is not easy to do manually. Users should also ensure that the installed library is the latest version, because commonly used libraries are generally upgraded to a fixed version before a vulnerability is publicly disclosed.
Meanwhile, Cargo officially provides the cargo-audit tool, which checks dependencies against the RustSec vulnerability database. Users can apply this tool to the libraries listed in the \emph{Cargo.lock} file of their projects to guard against known vulnerabilities.
\textbf{Advice to library owners.} If the owner receives a bug or vulnerability message from the crates.io community or open-source repository, the owner should be able to locate the scope of the vulnerability quickly and fix it and release a new version to \emph{crates.io} as soon as possible. Suppose the vulnerability cannot be fixed in the short term. In that case, the owner can also contact the administrators of the Cargo community to assist with the process of preventing software supply chain attacks through these vulnerabilities.
\subsection{Limitations}
Due to limited computing power, we only calculated the general picture of vulnerability propagation and did not perform deeper calculations for the nodes involved in the longest propagation paths. However, multi-level propagation is relatively rare in our experiments: we ignored only those propagation paths involving more than 100 versions, and such cases account for only 1.78\% of the total. The impact on the overall experimental results is therefore small, and the impact on our conclusions is negligible.
The vulnerability data may not fully cover the existing vulnerabilities in the Cargo ecosystem. However, the existing vulnerabilities indicate the problems in the Cargo ecosystem, so the impact on the overall findings of the study is relatively small.
In future work, we will increase the computing power available and optimize the time complexity of our algorithm to investigate multi-level propagation of vulnerabilities. We will also collect more vulnerabilities related to the Cargo ecosystem from NVD, CVE, RustSec and other vulnerability databases for further research.
\section{Conclusion}
In order to empirically study the propagation of vulnerabilities in the Cargo ecosystem, this paper fills a gap in the field by constructing a Cargo dependency-vulnerability knowledge graph containing 570,563 nodes and 4,023,703 edges, based on the Cargo dependency parsing rules, the 75,922 libraries of the Cargo ecosystem, and the known security vulnerabilities of the Rust language. From the constructed knowledge graph, we propose for the first time a dependency-vulnerability knowledge graph parsing algorithm that takes Cargo's official resolution rules into account and can accurately simulate the actual dependency resolution of a given library and version number without installing the library. Based on this algorithm, we study the vulnerability propagation problem in the Cargo ecosystem, analyze the propagation paths, and give both the scope of vulnerability propagation in the Cargo ecosystem and the factors that may lead to it. Finally, we propose measures for the Cargo community to mitigate these problems.
\bibliographystyle{IEEEtran}
\section{Introduction}
The brain \footnote{The Brain is wider than the Sky,\\ For, put them side by side,\\ The one the other will include\\ With ease, and you beside.\\ Emily Dickinson, Complete Poems. 1924 (1830-1886).} is the most complex organ
in the human body. It contains approximately $10^2$ billion neurons and $10^3$
trillion synaptic connections, where each neuron can be connected to up to
$10^4$ other neurons \cite{gerstner02}. The neuron is the basic working unit of
the brain and it is responsible for carrying out the communication and the
processing of information within the brain \cite{sporns05}. Those tasks are
achieved through spatio-temporal patterns of neuronal firing that depend
on the neurons' own dynamics and on the way they are networked.
Towards the goal to understand the brain, over the past several years,
mathematical models have been introduced to emulate neuronal firing patterns. A
simple model that has been considered to describe neuronal spiking is based on
the cellular automaton \cite{viana14,borges15}. This model uses discrete state
variables, coordinates and time \cite{wolfram83}. Another proposed bursting
behaviour mo\-del is a simplification of the neuron model described by
differential equations, where the state variables are continuous, while the
coordinates and the time are discrete \cite{batista12,lameu16a,lameu16b}.
Recently, Girardi-Schappo et al. \cite{schappo17} proposed a map that
reproduces neuronal excitatory and autonomous behaviour that are observed
experimentally.
Differential equations have also been used to model neuronal patterns
\cite{abbott99,batista14,baptista10}. The integrate-and-fire model was
developed by Lapicque in 1907 \cite{lapicque07} and it is still widely used.
But one of the most successful and celebrated mathematical models using
differential equations was proposed by Hodgkin and Huxley in $1952$
\cite{hodgkin52}. The Hodgkin-Huxley model explains the ionic mechanisms
related to propagation and initiation of action potentials, i.e., the
characteristic potential pulse that propagates in the neurons. In $1984$,
Hindmarsh and Rose \cite{hindmarsh84} developed a model that simulates bursts
of spikes. The phenomenological Hindmarsh-Rose model may be seen as a
simplification of the Hodgkin-Huxley model.
Hodgkin-Huxley neuron networks have been successfully used as a mathematical
model to describe processes occurring in the brain. An important brain activity
phenomenon is the neuronal synchronisation. This phenomenon is related to
cognitive functions, memory processes, perceptual and motor skills, and
information transfer
\cite{baptista10,baptista08,baptista08plos,cris15,uhlhaas09}.
There has been much work on neuronal synchronisation. Temporal synchronisation
of neuronal activity happens when neurons are excited synchronously, namely
assemblies of neurons fire simultaneously \cite{baptista10,melloni07}. Recently,
Borges and collaborators \cite{borges17} modelled spiking and bursting
synchronous behaviour in a neuronal network. They showed that not only
synchronisation, but also the kind of synchronous behaviour depends on the
coupling strength and neuronal network connectivity. Studies showed that phase
synchronisation is related to information transfer between brain areas at
different frequency bands \cite{fell11}. Neuronal synchronisation can be
related to brain disorders, such as epilepsy and Parkinson's disease.
Parkinson's disease is associated with synchronised oscillatory activity in
some specific part of the brain \cite{rubchinsky12}. Based on that, Lameu et
al. \cite{lameu16} proposed interventions in neuronal networks to provide a
procedure to suppress pathological rhythms associated with forms of
synchronisation.
In this review, we focus the attention on the weakly and strongly synchronous
states in dependence with brain plasticity. Brain plasticity, also known as
neuroplasticity, is a fundamental mechanism for neuronal adaptation in response
to changes in the environment or to new situations \cite{benett64}. In 1890,
James \cite{james90} proposed that the interconnection among the neurons in the
brain and so the functional behaviour carried on by neurons are not static.
Experimental evidence of plasticity was demonstrated by Lashley in 1923
\cite{lashley23} through experiments on monkeys. Scientific evidence of
anatomical brain plasticity was published in 1964 by Bennett et al.
\cite{bennett64} and Diamond et al. \cite{diamond64}.
In the field of theoretical neuroscience, Hebb \cite{hebb49} wrote his ideas in
words that inspired mathematical modelling related to synaptic plasticity
\cite{gerstner12}. According to Hebbian theory, the synaptic strength increases
when a presynaptic neuron participates in the firing of a postsynaptic neuron,
in other words, neurons that fire together, also wire together. The Hebbian
plasticity led to the modelling of spike timing-dependent plasticity (STDP)
\cite{markram12,borges16}. It was possible to obtain the STDP function for
excitatory synapses by means of synaptic plasticity experiments performed by Bi
and Poo \cite{poo98}. The STDP function for inhibitory synapses was reported in
experimental results in the entorthinal cortex by Haas et al. \cite{haas06}.
In this review, we show results that allow one to understand the relation
between spike synchronisation and synaptic plasticity, and the dependence of
both on the non-trivial topology that STDP induces in the brain. We consider
an initial all-to-all network, where the neuronal network is built by
connecting neurons by means of excitatory and inhibitory synapses. We show
that the transition from weakly synchronous to strongly synchronous states
depends on the neuronal network architecture, as well as on how the network
evolves towards a non-trivial topology under STDP. When the strength of the
inhibitory connections is of the same order as that of the excitatory
connections, the final topology in the plastic brain presents the rich-club
phenomenon, where neurons with high degree connectivity towards neurons of
the same presynaptic group (either excitatory or inhibitory) become strongly
connected to neurons of the other postsynaptic group. When the strength of
the inhibitory synapses becomes reasonably larger than that of the excitatory
connections, the final topology still has all the features of a non-trivial
topology, but neurons only sparsely connect to other neurons.
The structure of the review is the following. In Section $2$, we introduce the
Hodgkin-Huxley model for a neuron and the synchronisation dynamics of neuronal
networks. Section $3$ presents the Hebbian rule and the spike-timing dependent
plasticity (STDP) in excitatory and inhibitory syna\-pses. In Section $4$, we
show the effects of the synaptic plasticity on the network topology and
synchronous behaviour. Finally, in the last Section, we draw the conclusions.
\section{Hodgkin-Huxley Neuronal Networks}
\subsection{Neurons}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=9cm]{fig1.eps}
\caption{Schematic illustration showing the three main parts of neurons
(dendrite, soma and axon), including the presynaptic and postsynaptic neurons.}
\label{fig1}
\end{center}
\end{figure}
Neurons are cells responsible for receiving, processing and transmitting
information in the neuronal system \cite{alberts02}. They have differences
in sizes, length of axons and dendrites, in the number of dendrites and axons
terminals. Figure \ref{fig1} illustrates the three main parts of the neuron:
dendrite, cell body or soma, and axon \cite{arbib02}. The dendrites are
responsible for the signal reception, and the axons drive the impulse from the
cell body to another neuron. The neurons are connected through synapses, where
the neuron that sends the signal is called presynaptic and the postsynaptic is
the neuron that receives it. The most common form of neuron communication is by
means of the chemical synapses, where the signal is propagated from the
presynaptic to postsynaptic neurons by releasing neurotransmitters.
The signal propagates by means of the variation of internal neuron electric
potential. An action potential occurs when a neuron sends information from the
soma to the axon. The action potential is characterised by a rapid
change in the membrane potential, as shown in Fig. \ref{fig2}. In the absence
of stimulus, the membrane potential remains near a baseline level. A
depolarisation occurs when the action potential is greater than a threshold
value. After the depolarisation, the action potential goes through a certain
repolarisation stage, where the action potential rapidly reaches the
refractory period or hyperpolarisation. The refractory period is the time
interval in which the axon does not transmit the impulse \cite{arbib02}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm,width=8cm]{fig2.eps}
\caption{Plot of the action potential showing the various phases at a point on
the cell membrane.}
\label{fig2}
\end{center}
\end{figure}
Action potentials are generated and propagate due to different ions crossing
the neuron membrane. The ions can cross the membrane through ion channels and
ion pumps \cite{gouaux05}. Figure \ref{fig3}(a) shows the ion channels of
sodium (${\rm Na}^+$) and potassium (${\rm K}^+$). In the depolarisation stage,
a great amount of sodium ions move into the axon (I), while the repolarisation
occurs when the potassium ions move out of the axon (II). Figure \ref{fig3}(b)
shows the transport of sodium (I and II) and potassium ions (III and IV)
through the pumps. The sodium-potassium pumps transport sodium ions out and
potassium ions in, and it is responsible for maintaining the resting potential
\cite{gouaux05}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6cm,width=8cm]{fig3.eps}
\caption{Schematic diagram of the ions traffic across cell membranes, (a) ion
channels and (b) ion pumps.}
\label{fig3}
\end{center}
\end{figure}
\subsection{Hodgkin-Huxley Model}
Hodgkin and Huxley \cite{hodgkin52} performed experiments on the giant squid
axon using microelectrodes introduced into the intracellular medium. They
proposed a mathematical model that allowed the development of a quantitative
approximation to understand the biophysical mechanism of action potential
generation. In 1963, Hodgkin and Huxley were awarded with the Nobel Prize in
Physiology or Medicine for their work. The Hodgkin-Huxley model is given by
\begin{eqnarray}\label{Equacoes_HH}
C\dot{V} & = & I-g_{\rm K}n^{4}(V-E_{\rm K})-g_{\rm Na}m^{3}h(V-E_{\rm Na}) -g_{\rm L}(V-E_{\rm L}), \nonumber \\
\dot{n} & = & \alpha_{n}(V)(1-n)-\beta_{n}(V)n,\\
\dot{m} & = & \alpha_{m}(V)(1-m)-\beta_{m}(V)m,\\
\dot{h} & = & \alpha_{h}(V)(1-h)-\beta_{h}(V)h,
\end{eqnarray}
where $C$ is the membrane capacitance ($\mu$F/cm$^2$), $V$ is the membrane
potential (mV), $I$ is the constant current density, parameter $g$ is the
conductance, and $E$ the reversal potentials for each ion. The functions $m(V)$
and $n(V)$ represent the activation for sodium and potassium, respectively, and
$h(V)$ is the function for the inactivation of sodium. The functions
$\alpha_{n}$, $\beta_{n}$, $\alpha_{m}$, $\beta_{m}$, $\alpha_{h}$, $\beta_{h}$
are given by
\begin{eqnarray}
\alpha_{n}(v) & = & \frac{0.01 v + 0.55}{1 - \exp \left(-0.1 v-5.5 \right)},\\
\beta_{n}(v) & = & 0.125\exp\left(\frac{-v-65}{80}\right),\\
\alpha_{m}(v) & = & \frac{0.1 v + 4}{1 - \exp\left (-0.1 v - 4\right)},\\
\beta_{m}(v) & = & 4\exp\left(\frac{-v-65}{18}\right),\\
\alpha_{h}(v) & = & 0.07\exp\left(\frac{-v-65}{20}\right),\\
\beta_{h}(v) & = & \frac{1}{1 + \exp\left(-0.1 v - 3.5\right)},
\end{eqnarray}
where $v=V/$[mV]. We consider $C=1$ $\mu$F/cm$^{2}$, $g_{\rm K}=36$mS/cm$^{2}$,
$E_{\rm K}=-77$mV, $g_{\rm Na}=120$mS/cm$^{2}$, $E_{\rm Na}=50$mV,
$g_{\rm L}=0.3$mS/cm$^{2}$, $E_{\rm L}=-54.4$mV \cite{borges17}. Depending
on the value of the external current density $I$ ($\mu$A/cm$^2$) the neuron can
present periodic spikings or single spike activity. In the case of periodic
spikes, if the constant $I$ increases, the spiking frequency also increases.
Figure \ref{fig4} shows the temporal evolution of the membrane potential of a
Hodgkin-Huxley neuron for $I=0\mu$A/cm$^2$ (black line) and for
$I=9\mu$A/cm$^2$ (red line). For the case without current, the neuron shows
an initial firing and, after the spike, it remains in the resting potential. In
the second case the external current $I$ is greater than the required threshold
and the neuron exhibits firings.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=5cm,width=8cm]{fig4}
\caption{\small Membrane potential $V$ of a Hodgkin-Huxley neuron with
$I=0\mu$A/cm$^2$ (black line) and $I=9\mu$A/cm$^2$ (red line).}
\label{fig4}
\end{center}
\end{figure}
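For illustration, the model above can be integrated numerically; the following Python sketch uses \texttt{scipy} with the parameter values quoted in the text and $I=9\,\mu$A/cm$^2$, starting near the resting state.
\begin{verbatim}
# Sketch: integrating the Hodgkin-Huxley equations above.
import numpy as np
from scipy.integrate import solve_ivp

C, gK, gNa, gL = 1.0, 36.0, 120.0, 0.3
EK, ENa, EL, I = -77.0, 50.0, -54.4, 9.0

def an(v): return (0.01*v + 0.55)/(1 - np.exp(-0.1*v - 5.5))
def bn(v): return 0.125*np.exp((-v - 65)/80)
def am(v): return (0.1*v + 4)/(1 - np.exp(-0.1*v - 4))
def bm(v): return 4*np.exp((-v - 65)/18)
def ah(v): return 0.07*np.exp((-v - 65)/20)
def bh(v): return 1/(1 + np.exp(-0.1*v - 3.5))

def hh(t, y):
    v, n, m, h = y
    dv = (I - gK*n**4*(v - EK) - gNa*m**3*h*(v - ENa)
          - gL*(v - EL))/C
    return [dv,
            an(v)*(1 - n) - bn(v)*n,
            am(v)*(1 - m) - bm(v)*m,
            ah(v)*(1 - h) - bh(v)*h]

sol = solve_ivp(hh, (0, 100), [-65.0, 0.32, 0.05, 0.6],
                max_step=0.01)   # sol.y[0] is V(t) in mV
\end{verbatim}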
\subsection{Neuronal Synchronisation}
Synchronisation processes are related to natural phenomena ranging from
metabolic processes in our cells to the highest cognitive activities
\cite{arenas08}. Neuronal synchronisation has been found in the brain during
different tasks and at rest \cite{deco11}. We study in this text the neuronal
synchronisation process in a network of coupled Hodgkin-Huxley neurons. The
network dynamics is given by
\cite{popovych13}
\begin{eqnarray}
C\dot{V_{\rm i}} & = & I_{\rm i}-g_{\rm K}n^{4}(V_{\rm i}-E_{\rm K})-
g_{\rm Na}m^{3}h(V_{\rm i}-E_{\rm Na}) \nonumber \\
& & - g_{\rm L}(V_{\rm i}-E_{\rm L})+\frac{(V_{\rm r}^{\rm Exc}-
V_{\rm i})}{\omega_{\rm Exc}} \sum_{j=1}^{N_{\rm Exc}}\varepsilon_{\rm ij}s_{\rm j}
\nonumber \\
& & +\frac{(V_{r}^{\rm Inhib}-V_{\rm i})}{\omega_{\rm Inhib}}\sum_{j=1}^{N_{\rm Inhib}}
\sigma_{\rm ij}s_{\rm j}+\Gamma_i,
\end{eqnarray}
where the elements of the matrix $\varepsilon_{\rm ij}$ ($\sigma_{\rm ij}$) are
the intensity of the excitatory (inhibitory) synapse (coupling strength)
between the presynaptic neuron $j$ and the postsynaptic neuron $i$,
$\omega_{\rm Exc}$ ($\omega_{\rm Inhib}$) represents the me\-an number of excitatory
(inhibitory) synapses of each neuron, $\Gamma_i$ is an external perturbation so
that the neuron is randomly chosen and the chosen one receives an input with a
constant intensity $\gamma$, $N_{\rm Exc}$ is the number of excitatory neurons,
and $N_{\rm Inhib}$ is the number of inhibitory neurons. The excitatory
(inhibitory) neurons are connected with reversal potential
$V_{\rm r}^{\rm Exc}=20 \rm mV$ ($V_{\rm r}^{\rm Inhib}=-75\rm mV$), and the
postsynaptic potential $s_{i}$ is given by \cite{popovych13}
\begin{equation}
\frac{ds_{i}}{dt}=\frac{5(1-s_{i})}{1+\exp(-\frac{V_{i}+3}{8})}-s_{i}.
\end{equation}
One measure that we adopt to quantify synchronous behaviour is the Kuramoto
order parameter that reads as \cite{kuramoto84}
\begin{equation}
Z(t) = R(t){\rm e}^{i\psi(t)}= \frac{1}{N}\sum_{j=1}^{N}{\rm e}^{i\theta_{j}(t)},
\end{equation}
where $R(t)$ is the amplitude, $\psi(t)$ is the angle of a centroid phase
vector, and
\begin{equation}
\theta_{j}(t)=2\pi\frac{t-t_{j,m}}{t_{j,m+1}-t_{j,m}}
\end{equation}
is the phase of the neuron $j$, with $t_{j,m}< t < t_{j,m+1}$. The time $t_{j,m}$
denotes the $m$-th spike of the neuron $j$. In a complete synchronised state the
network exhibits $R=1$. For a strongly synchronised regime it has
$R\geq 0.9$, whereas a weakly synchronous behaviour occurs for $R<0.9$.
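The order parameter can be estimated directly from the spike times $t_{j,m}$; the Python sketch below implements the linear phase interpolation defined above for one instant $t$ (neurons without a bracketing pair of spikes are skipped).
\begin{verbatim}
# Sketch: Kuramoto order parameter R(t) from spike times.
import numpy as np

def order_parameter(spike_times, t):
    # spike_times: list of 1D arrays, one per neuron
    phases = []
    for tj in spike_times:
        m = np.searchsorted(tj, t) - 1   # t_{j,m} <= t
        if m < 0 or m + 1 >= len(tj):
            continue                     # t outside spike train
        phases.append(2*np.pi*(t - tj[m])/(tj[m+1] - tj[m]))
    return np.abs(np.mean(np.exp(1j*np.array(phases))))
\end{verbatim}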
Figures \ref{fig5}(a) and (b) exhibit the raster plots of spike onsets for a
random network with $100$ Hodgkin-Huxley neurons coupled by means of excitatory
synapses, mean degree $K=10$, $\gamma=0$, excitatory coupling intensity
$\varepsilon_{\rm ij}=0.1$ and $\varepsilon_{\rm ij}=0.5$, respectively. In Figure
\ref{fig5}(a), the neuronal network presents weakly synchronous behaviour,
while in Figure \ref{fig5}(b) the network shows stron\-gly synchronised spiking
(though not complete synchronisation). Figure \ref{fig5}(c) shows the order
parameter $R(t)$ for $\varepsilon_{\rm ij}=0.1$ (black line) and
$\varepsilon_{\rm ij}=0.5$ (red line). By increasing the coupling strength, from
$0.1$ to $0.5$, the neuronal network asymptotes to a synchronous behaviour.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=8cm]{fig5.eps}
\caption{(Colour online) Raster plots of spike onsets for a random network
with $100$ Hodgkin-Huxley neurons, $\gamma=0$, (a) $\varepsilon_{\rm ij}=0.1$ and
(b) $\varepsilon_{\rm ij}=0.5$. In (c) the time evolution of the Kuramoto order
parameter for $\varepsilon_{\rm ij}=0.1$ (black line) and
$\varepsilon_{\rm ij}=0.5$ (red line).}
\label{fig5}
\end{center}
\end{figure}
\section{Spike-Timing Dependent Plasticity}
Work carried out to unveil the role of synaptic plasticity in learning and
memory has the Hebb rule as its basis. The Hebb rule is a postulate proposed
in 1949 by Hebb in his book ``The organization of behavior'' \cite{hebb49}. He
conjectured that the synapse from a presynaptic to a postsynaptic neuron
should be maximally strengthened if the input from the presynaptic neuron
contributes to the firing of the postsynaptic one. In this way, long-term
potentiation is caused by coincident spiking of presynaptic and
postsynaptic neurons \cite{gerstner10}.
In the synaptic plasticity, synapse weakening and strengthening are implemented
by long-term depression (LTD) and potentiation (LTP), respectively
\cite{feldman12}. LTP refers to a long-lasting increase in excitatory
postsynaptic potential, while LTD decreases the efficacy of a synapse. Bliss et
al. \cite{bliss73} suggested that low-frequency firing drives LTD, whereas LTP
is driven by high-frequency presynaptic firing. Synaptic plasticity
alteration as a function of the relative timing of presynaptic and postsynaptic
firing was named as spike timing-dependent plasticity (STDP) by Song et al.
\cite{song00}. STDP has been observed in brain regions, and relevant studies on
it were carried out by Gerstner \cite{gerstner96} and Markram et al.
\cite{markram95,markram97}. Fr\'egnac et al. \cite{fregnac10} provided
evidence of the existence of STDP in cat visual cortex {\it in vivo}. Moreover, research on
STDP has focused in the hippocampus and cortex \cite{buchanan10}.
We have studied the changes in synchronous and desynchronous states caused in a
Hodgkin-Huxley network due to excitatory (eSTDP), as well as inhibitory (iSTDP)
spike timing-dependent plasticity. We have considered the plasticity as a
function of the difference of postsynaptic and presynaptic excitatory and
inhibitory firing according to Refs. \cite{poo98} and \cite{haas06},
respectively.
The excitatory eSTDP is given by
\begin{equation}\label{eqplast}
\Delta \varepsilon_{ij}= \left\{
\begin{array}{ll}
\displaystyle A_{1}\exp(-\Delta t_{ij}/\tau_{1})\;,\;\Delta t_{ij}\geq 0 \\
\displaystyle -A_{2}\exp(\Delta t_{ij}/\tau_{2})\;,\;\Delta t_{ij} < 0
\end{array}
\right. ,
\end{equation}
where
\begin{equation}
\Delta t_{ij}=t_{i}-t_{j}=t_{\rm pos}-t_{\rm pre},
\end{equation}
$t_{\rm pos}$ is the spike time of the postsynaptic neuron, and $t_{\rm pre}$ is
the spike time of the presynaptic one.
Figure \ref{fig6}(a) shows the result obtained from Eq. (\ref{eqplast}) for
$A_{1}=1.0$, $A_{2}=0.5$, $\tau_{1}=1.8$ms, and $\tau_{2}=6.0$ms. The initial
synaptic weights $\varepsilon_{ij}$ are normally distributed with mean and
standard deviation equal to $0.25$ and $0.02$, respectively
($0\leq \varepsilon_{ij}\leq 0.5$). They are updated according to Eq.
(\ref{eqplast}), where
\begin{equation}
\varepsilon_{ij}\rightarrow \varepsilon_{ij}+10^{-3}\Delta\varepsilon_{ij}.
\end{equation}
The green dashed line denotes the intersection between the absolute values of
the depression (black line) and potentiation (red line) curves, where the red
line shows the absolute value of the coupling change
($|\Delta \varepsilon_{ij}|$) for $\Delta t_{ij}<0$. For
$\Delta t_c^{\rm Exc}<1.8$ms the potentiation is larger than the depression.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=8cm]{fig6.eps}
\caption{Plasticity as a function of the difference of spike timing of
postsynaptic and presynaptic synapses for (a) excitatory (eSTDP) and (b)
inhibitory (iSTDP). The green dashed line indicates the intersection between
the potential and depression curves.}
\label{fig6}
\end{center}
\end{figure}
In the inhibitory iSTDP synapses, the coupling str\-ength $\sigma_{ij}$ is
adjusted according to the equation
\begin{equation}\label{eqplastI}
\Delta \sigma_{ij} = \frac{g_0}{g_{\rm norm}} {\alpha}^{\beta} |\Delta t_{ij}|
{\Delta t_{ij}}^{\beta -1} \exp(-\alpha |\Delta t_{ij}|),
\end{equation}
where $g_0$ is the scaling factor accounting for the amount of change in
inhibitory conductance induced by the synaptic plasticity rule, and
$g_{\rm norm} = {\beta}^{\beta} \exp(-\beta)$ is the normalising constant.
In Figure \ref{fig6}(b) we see the result obtained from Eq. (\ref{eqplastI})
for $g_0 = 0.02$, $\beta=10.0$, $\alpha =0.94$ if $\Delta t_{ij}>0$, and for
$\alpha=1.1$ if $\Delta t_{ij}<0$. As a result, $\Delta \sigma_{ij}>0$ for
$\Delta t_{ij}>0$, and $\Delta \sigma_{ij}<0$ for $\Delta t_{ij}<0$. The initial
inhibitory synaptic weights $\sigma_{ij}$ are normally distributed with mean and
standard deviation equal to $\sigma=c\varepsilon$ ($1\leq c\leq 3$) and 0.02,
respectively ($0\leq \sigma_{ij}\leq 2c\varepsilon$). The coupling strengths are
updated according to Eq. (\ref{eqplastI}), where
\begin{equation}
\sigma_{ij}\rightarrow \sigma_{ij}+10^{-3}\Delta\sigma_{ij}.
\end{equation}
The updates for $\varepsilon_{ij}$ and $\sigma_{ij}$ are applied to the last
postsynaptic spike. For $\Delta t_c^{\rm Inhib}<9.8$ms the depression is larger
than the potentiation.
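For reference, a minimal Python sketch of both update rules, Eqs. (\ref{eqplast}) and (\ref{eqplastI}), with the parameter values quoted above:
\begin{verbatim}
# Sketch of the eSTDP and iSTDP update functions.
import numpy as np

A1, A2, tau1, tau2 = 1.0, 0.5, 1.8, 6.0   # eSTDP (ms)
g0, beta = 0.02, 10.0
gnorm = beta**beta*np.exp(-beta)          # iSTDP normalisation

def delta_eps(dt):                        # excitatory rule
    if dt >= 0:
        return A1*np.exp(-dt/tau1)
    return -A2*np.exp(dt/tau2)

def delta_sigma(dt):                      # inhibitory rule
    alpha = 0.94 if dt > 0 else 1.1
    return (g0/gnorm)*alpha**beta*abs(dt) \
           * dt**int(beta - 1)*np.exp(-alpha*abs(dt))

# weights are then updated as w += 1e-3*delta(dt) and
# clipped to the allowed ranges given in the text
\end{verbatim}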
\section{Influence of the Synaptic Plasticity on the Network Topology}
\subsection{Without External Perturbation}
About $20\%$ of the synapses in the brain have inhibitory characteristics
\cite{noback05}. We consider that the intensities of both excitatory and
inhibitory synapses are modifiable over time by a plasticity rule. We use
a network of $200$ Hodgkin-Huxley neurons with constant currents $I_i$
normally distributed in the interval [$9.0$-$10.0$]. The neurons with
sub-index $i$ in the interval [$1$-$160$] are excitatory, and those with
sub-index $i$ in [$161$-$200$] are inhibitory. In all the simulations, we
consider a total time interval of $2000$s.
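As a minimal illustration of this setup, the initial couplings can be drawn as
follows. This is a sketch only: the random seed, the uniform reading of
``distributed in the interval [$9.0$-$10.0$]'', and the matrix shapes are our
assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, N_EXC = 200, 160                     # neurons 1-160 excitatory, rest inhibitory
I = rng.uniform(9.0, 10.0, size=N)      # constant currents I_i

EPS_MEAN, c = 0.25, 2.0                 # c chosen in [1, 3]
eps = np.clip(rng.normal(EPS_MEAN, 0.02, size=(N_EXC, N)),
              0.0, 2 * EPS_MEAN)
sig = np.clip(rng.normal(c * EPS_MEAN, 0.02, size=(N - N_EXC, N)),
              0.0, 2 * c * EPS_MEAN)
\end{verbatim}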
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=8cm]{fig7.eps}
\caption{Intensity of the final coupling for initial couplings with
$\frac{\sigma}{\varepsilon}=1$ and $\gamma=0$, (a) excitatory and (b)
inhibitory synapses. The coupling matrix has a triangular shape.}
\label{fig7}
\end{center}
\end{figure}
When the initial intensity of the inhibitory synapses is small
($\frac{\sigma}{\varepsilon} \approx 1$), we observe that potentiation
occurs in both kinds of synapses and the final coupling matrix exhibits a
triangular shape, as seen in Fig. \ref{fig7}. In the excitatory synapses a
reinforcement is observed from the neurons of greater to smaller firing
frequency (Fig. \ref{fig7}(a)), whereas in the inhibitory synapses the
potentiation occurs from the neurons of smaller to greater firing frequency
(Fig. \ref{fig7}(b)).
Figure \ref{fig7}(a) points out that presynaptic excitatory neurons that are
more likely to strongly connect to a large number of postsynaptic excitatory
neurons are also more likely to strongly connect to postsynaptic inhibitory
neurons. Similarly, Figure \ref{fig7}(b) points out that presynaptic
inhibitory neurons that are more likely to strongly connect to a large number
of postsynaptic inhibitory neurons are also more likely to stron\-gly connect
to postsynaptic excitatory neurons. This reveals a rich-club phenomenon in the
neural plasticity, where neurons with larger degree towards their own ``club''
(either the excitatory or the inhibitory community) tend also to be more
connected to the other ``club''. The rich-club phenomenon is known to exist in
the topological organisation of the brain \cite{towlson13} and was recently
hypothesised to be an effect of Hebbian learning mechanisms in Ref.
\cite{vertes14}.
Figure \ref{fig8} exhibits the values of the excitatory
$(\bar{\varepsilon})$ and the inhibitory $(\bar{\sigma})$ mean couplings as a
function of $\frac{\sigma}{\varepsilon}$. A small variability around the mean
values of the excitatory and inhibitory couplings is observed for small values
of $\frac{\sigma}{\varepsilon}$. However, increasing the inhibitory synaptic
intensities implies an increase in the variability around both mean values, as
indicated by the standard deviation bars. This fact becomes notable when the
initial intensity of the inhibitory synapses is greater than
$\frac{\sigma}{\varepsilon}=1.5$. As a result, the inhibitory synapses act more
intensely on the neuronal network dynamics, and a different asymptotic
behaviour can be observed. Figures \ref{fig9} and \ref{fig10}, at $t=2000$s,
show the coupling matrices with the values of the excitatory and inhibitory
couplings for an initial value given by $\frac{\sigma}{\varepsilon}=2.7$. In
some simulations, the synaptic connections tend to zero, namely, the network
becomes disconnected (Fig. \ref{fig9}). In other simulations, disconnected
blocks are observed, as shown in Fig. \ref{fig10}. Nevertheless, for the same
value of the $\frac{\sigma}{\varepsilon}$ parameter, the system can exhibit an
asymptotic behaviour similar to the case when the initial couplings have
$\frac{\sigma}{\varepsilon}=1.0$ (Fig. \ref{fig7}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6cm,width=7cm]{fig8.eps}
\caption{Mean excitatory (black circles) and inhibitory (red triangles)
couplings as a function of $\frac{\sigma}{\varepsilon}$, where we consider
simulations without external perturbations. The bars indicate the standard
deviation calculated for the mean value from 30 simulations.}
\label{fig8}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=8cm]{fig9.eps}
\caption{Intensity of the couplings for $\frac{\sigma}{\varepsilon}=2.7$,
$\gamma=0$, $t=2000$s, (a) excitatory and (b) inhibitory synapses. The network
has disconnected neurons.}
\label{fig9}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=8cm]{fig10.eps}
\caption{Intensity of the couplings for $\frac{\sigma}{\varepsilon}=2.7$,
$\gamma=0$, $t=2000$s, (a) excitatory and (b) inhibitory synapses. The neural
network contains disconnected blocks.}
\label{fig10}
\end{center}
\end{figure}
The behaviour observed in the synapse intensities can be explained in terms of
the average time between spikes. To do so, we define the mean times between
spikes across neurons, for excitatory and for inhibitory synapses, by the
equations
\begin{eqnarray}
\bar{\Delta t}_{ij}^{\rm Exc} & = & \frac{1}{\tau}\sum_{i\neq j}
| t_{\rm pre}^{\rm Exc}-t_{\rm pos} |, \\
\bar{\Delta t}_{ij}^{\rm Inhib} & = & \frac{1}{\tau}\sum_{i\neq j}
| t_{\rm pre}^{\rm Inhib}-t_{\rm pos} |.
\end{eqnarray}
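The definition above leaves the pairing of presynaptic and postsynaptic spikes
implicit; one plausible reading, sketched below in Python (the function name
and the nearest-spike pairing convention are our own assumptions), averages the
gap between each postsynaptic spike and its nearest presynaptic spike:
\begin{verbatim}
import numpy as np

def mean_spike_gap(pre_times, post_times):
    """Average |t_pre - t_pos| over postsynaptic spikes, pairing each
    postsynaptic spike with its nearest presynaptic spike."""
    pre = np.asarray(pre_times, dtype=float)
    gaps = [np.min(np.abs(pre - t)) for t in post_times]
    return float(np.mean(gaps))

# Compare against the critical values read off Fig. 6: the mean gap lies
# in the potentiation region if < 1.8 ms (eSTDP) or > 9.8 ms (iSTDP).
\end{verbatim}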
In Figure \ref{fig11}, the $\bar{\Delta t}_{ij}^{\rm Exc}$ and
$\bar{\Delta t}_{ij}^{\rm Inhib}$ values are shown for the extreme case of initial
couplings given by $\frac{\sigma}{\varepsilon}=2.7$ (black lines) and initial
couplings given by $\frac{\sigma}{\varepsilon}=1.0$ (red lines). For the case
where the neuronal network becomes disconnected (black lines), the average time
values are more frequently found in the depression regions of the eSTDP
and iSTDP models ($\bar{\Delta t}_{ij}^{\rm Exc}>\Delta t_{c}^{\rm Exc}=1.8\rm ms$
and $\bar{\Delta t}_{ij}^{\rm Inhib}<\Delta t_{c}^{\rm Inhib}=9.8\rm ms$). However, in
simulations where the neuronal network becomes strongly connected, a higher
concentration of the average time values in the potentiation regions of the
plasticity models is observed
($\bar{\Delta t}_{ij}^{\rm Exc}<\Delta t_{c}^{\rm Exc}=1.8\rm ms$ and
$\bar{\Delta t}_{ij}^{\rm Inhib}>\Delta t_{c}^{\rm Inhib}=9.8\rm ms$). Thus,
potentiation occurring for high-frequency excitatory synapses and low-frequency
inhibitory synapses promotes the strengthening of synaptic connectivity and the
rich-club phenomenon.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=7cm]{fig11.eps}
\caption{Probability distribution function of the average firing times for
$\frac{\sigma}{\varepsilon}=2.7$, $\gamma=0$, (a) excitatory and (b) inhibitory
synapses. For the triangular shape and unidirectionally connected coupling
matrix (Fig. \ref{fig7}), the $\bar{\Delta t_{ij}}$ values are more frequently
found in the potentiation regions (red curves in (a) and (b)). The black lines
in (a) and (b) illustrate the completely opposite case observed in Fig.
\ref{fig9}. The values of $\Delta t_c$ were obtained in Fig. \ref{fig6}.}
\label{fig11}
\end{center}
\end{figure}
\subsection{With External Perturbation}
An external perturbation combined with eSTDP and iSTDP can provide a positive
contribution to the excitatory and inhibitory mean couplings. In this case, we
observe that when the influence of the inhibitory synapses is smaller than that
of the excitatory ones ($\frac{\sigma}{\varepsilon}<2.3$), potentiation occurs
in approximately all the synapses, both excitatory and inhibitory (Fig.
\ref{fig12}). The network then remains strongly connected, with a topology
close to all-to-all. Almost all the connection intensities converge to
high values (${\bar\varepsilon}_{ij}\geq 0.4$ and
${\bar\sigma}_{ij}\approx 0.5$). Only a few connections, those whose
presynaptic neurons have lower firing frequency, tend to zero.
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=8cm]{fig12.eps}
\caption{Perturbed intensity of the final coupling for
$\frac{\sigma}{\varepsilon}=2.2$, $\gamma=10\mu\rm A/cm^2$, (a) excitatory and
(b) inhibitory synapses. Almost all connections in the neuronal networks are
reinforced.}
\label{fig12}
\end{center}
\end{figure}
For larger $\frac{\sigma}{\varepsilon}$ values, we also observe that the
inhibi\-tory connections become strengthened. The inhibitory mean coupling
converges to the largest value allowed in the interval when
$\frac{\sigma}{\varepsilon}>2.3$. However, for this same value of
$\frac{\sigma}{\varepsilon}$, there is a trend of decreasing intensity of
excitatory synapses ($\bar{\varepsilon_{ij}}\approx 0$). The neurons remain
connected through the inhibitory synapses (Fig. \ref{fig13}).
An abrupt transition in the mean excitatory coupling values can also be
seen for $\frac{\sigma}{\varepsilon} \approx 2.3$. For values slightly less
than $2.3$ ($\frac{\sigma}{\varepsilon}=2.2$), both excitatory and inhibitory
synapses undergo an increase in their intensities, whereas, for values
of $\frac{\sigma}{\varepsilon}$ larger than this threshold, the inhibitory
synapses undergo potentiation while the excitatory sy\-napses tend to zero
(Fig. \ref{fig14}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=8cm]{fig13.eps}
\caption{Perturbed intensity of the final coupling for initial couplings
given by $\frac{\sigma}{\varepsilon}=2.4$, $\gamma=10\mu\rm A/cm^2$, (a)
excitatory and (b) inhibitory synapses. All the excitatory connections in the
neuronal networks disappear, but the inhibitory synapses are enhanced.}
\label{fig13}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6cm,width=7cm]{fig14}
\caption{Perturbed mean excitatory (black circles) and inhibitory (red
triangles) couplings as a function of $\frac{\sigma}{\varepsilon}$, where we
consider $\gamma=10\mu\rm A/cm^2$.}
\label{fig14}
\end{center}
\end{figure}
The time evolution of both excitatory and inhibitory synapses depends on the
time interval between spikes of presynaptic and postsynaptic neurons. Figure
\ref{fig15} shows the distribution of the mean times between presynaptic and
postsynaptic spikes. This figure exhibits the two extreme cases, when the
neuronal network converges to a stron\-gly connected global topology or to a
network with only inhibitory synapses, for $\frac{\sigma}{\varepsilon}=2.3$.
When the increase of the weights occurs in almost all the synapses, the
$\bar{\Delta t_{ij}}$ values appear more frequently in the regions of
potentiation of both models of plasticity
($\bar{\Delta t}_{ij}^{\rm Exc}<\Delta t_{c}^{\rm Exc}=1.8\rm ms$ and
$\bar{\Delta t}_{ij}^{\rm Inhib}>\Delta t_{c}^{\rm Inhib}=9.8\rm ms$). However, when
only strong inhibitory synapses are observed in the final neuronal network, it
is verified that $\bar{\Delta t_{ij}}$ values in excitatory synapses are more
frequently found in the depression region of the eSTDP model
($\bar{\Delta t}_{ij}^{\rm Exc}>\Delta t_{c}^{\rm Exc}=1.8\rm ms$). In this case,
the inhibitory synapses are reinforced due to the fact that the
$\bar{\Delta t_{ij}}$ values are more frequently found in the region of
potentiation of the iSTDP model
($\bar{\Delta t}_{ij}^{\rm Inhib}>\Delta t_{c}^{\rm Inhib}=9.8\rm ms$).
Therefore, noise can always enhance inhibitory sy\-napses in the plastic brain.
Excitatory synapses can also be enhanced if the initial network has
sufficiently large excitatory synaptic strength (no less than about half the
value of the inhibitory synapses strength).
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=7cm,width=7cm]{fig15.eps}
\caption{Probability distribution function of the average firing times for
$\frac{\sigma}{\varepsilon}=2.3$, $\gamma=10\mu\rm A/cm^2$, (a) excitatory and
(b) inhibitory synapses. For the all-to-all topology (Fig. \ref{fig12}),
the $\bar{\Delta t_{ij}}$ values are more frequently found in the potentiation
regions (red curves in (a) and (b)). The black lines in (a) and (b)
illustrate the completely opposite case observed in Fig. \ref{fig13}. The
values of $\Delta t_c$ were obtained from Fig. \ref{fig6}.}
\label{fig15}
\end{center}
\end{figure}
\section{Influence of the Synaptic Plasticity on the Synchronous Behaviour}
\subsection{Without External Perturbation}
The change in the behaviour of the synapse intensity between presynaptic and
postsynaptic neurons due to plasticity is reflected in the spike
synchronisation. In Fig. \ref{fig16} we observe different behaviours in
relation to synchronisation, quantified by the order parameter. Figure
\ref{fig16} exhibits the behaviour of the order parameter as a function of time
for simulations without external perturbations, discarding a large transient
time. The neuronal network evolves to a strongly synchronised state with
$R(t)>0.9$ (black line) if the initial intensities of the inhibitory
synapses are weak ($\frac{\sigma}{\varepsilon}\approx 1.0$), that is, when
inhibition and excitation have similar initial strengths. However, with the
increase of the inhibitory synaptic intensities ($\frac{\sigma}{\varepsilon}>1.5$),
different final states are observed in relation to the synchronisation (red,
green and blue lines).
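For reference, a common way to evaluate the order parameter from spike trains
is sketched below. We assume here the standard Kuramoto order parameter, with
each neuron's phase interpolated linearly between consecutive spikes, as is
usual; the paper's exact definition appears in an earlier section.
\begin{verbatim}
import numpy as np

def order_parameter(spike_times, t):
    """Kuramoto order parameter R(t) from per-neuron spike time lists.

    Phase of neuron j: phi_j(t) = 2*pi*(t - t_k)/(t_{k+1} - t_k)
    for t between its k-th and (k+1)-th spikes.
    """
    phases = []
    for times in spike_times:
        times = np.asarray(times, dtype=float)
        k = np.searchsorted(times, t, side="right") - 1
        if k < 0 or k + 1 >= len(times):
            continue   # t lies outside this neuron's spiking window
        phases.append(2 * np.pi * (t - times[k]) / (times[k + 1] - times[k]))
    if not phases:
        return 0.0
    return float(np.abs(np.mean(np.exp(1j * np.array(phases)))))
\end{verbatim}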
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6cm,width=7cm]{fig16.eps}
\caption{Order parameter for $\frac{\sigma}{\varepsilon}=1.0$ (black line) and
$\frac{\sigma}{\varepsilon}=2.7$ (red, blue and green lines).}
\label{fig16}
\end{center}
\end{figure}
\subsection{With External Perturbation}
We consider an external perturbation ($\gamma=10\mu\rm A/cm^2$). When the
initial inhibitory synaptic intensity ratio is small
($\frac{\sigma}{\varepsilon}\approx 1.0$), the network has a
synchronous behaviour ($\bar{R}(t)>0.9$), as shown in Fig. \ref{fig17} (black
line). When the inhibitory synaptic intensities have a great influence on the
network dynamics ($\frac{\sigma}{\varepsilon}\approx 3.0$), the neurons tend to
exhibit desynchronised firing behaviour with $\bar{R}(t) \approx 0.1$ (red
line). However, when $\frac{\sigma}{\varepsilon}\approx 2.3$, we observe two
possible asymptotic values for the order parameter. In some simulations a
strongly synchronised behaviour appears, while in others a weakly synchronous
evolution of the spikes in the network is observed (green and blue lines).
\begin{figure}[htbp]
\begin{center}
\includegraphics[height=6cm,width=7cm]{fig17.eps}
\caption{Order parameter for $\gamma=10\mu\rm A/cm^2$,
$\frac{\sigma}{\varepsilon}=1.0$ (black line), $\frac{\sigma}{\varepsilon}=3.0$
(red line) and $\frac{\sigma}{\varepsilon}=2.3$ (blue and green lines).}
\label{fig17}
\end{center}
\end{figure}
\section{Conclusions}
Neuronal networks based on the Hodgkin-Huxley mo\-del have been used to simulate
coupled spiking neurons. The Hodgkin-Huxley neuron is described by a coupled set
of nonlinear ordinary differential equations modelling the ionic basis of the
membrane potential. In this review, we considered a Hodgkin-Huxley network with
spike timing-dependent synaptic plasticity (STDP). STDP is a process that
adjusts the strength of the synapses in the brain according to the time
interval between presynaptic and postsynaptic spikes.
We studied the effects of STDP on the topology and spike synchronisation.
Regarding the final topology and depending on the balance between inhibitory and
excitatory couplings, the network can evolve not only to different coupling
strength configurations, but also to different connectivities.
When the strength of the inhibitory connections is of the same order as that of
the excitatory connections, the final topology in the plastic brain exhibits
the rich-club phenomenon: neurons that are strongly connected towards many
postsynaptic neurons of one group (either excitatory or inhibitory) also
become strongly connected to postsynaptic neurons of the other group, i.e., a
presynaptic neuron that is strongly connected to postsynaptic excitatory
neurons (or inhibitory ones) also becomes strongly connected to postsynaptic
inhibitory (or excitatory) ones.
When the strength of the inhibitory connections becomes reasonably larger than
the strength of the excitatory connections, the final topology instead has the
features of a complex topology, where neurons only sparsely connect to other
neurons in a non-trivial arrangement.
When noise is introduced in the neural network, we observe that inhibitory
synapses are always enhanced in the plastic brain. Excitatory synapses can also
be enhanced if the initial network has sufficiently large excitatory synaptic
strength (no less than about half the value of the inhibitory synaptic
strength).
The changes in the synapse strength and the connectivities due to STDP
produce significant alterations in the synchronous states of the neuronal
network. We observe that the synchronous states depend on the balance between
the excitatory and inhibitory intensities. We also find coexistence of
strongly synchronous and weakly synchronous behaviours.
\section*{Acknowledgements}
This work was possible by partial financial support from the following
Brazilian government agencies: CNPq (154705/2016-0, 311467/2014-8), CAPES,
Funda\-\c c\~ao Arauc\'aria, and S\~ao Paulo Research Foundation (processes
FAPESP 2011/19296-1, 2015/07311-7, 2016/16148-5, 2016/23398-8, 2015/50122-0).
Research supported by grant 2015/50122-0 S\~ao Paulo Research Foundation
(FAPESP) and DFG-IRTG 1740/2.
|
1,116,691,497,906 | arxiv | \section{Introduction}\label{waffle1}
For all natural numbers $n$, we define the $n$-dimensional hypercube $Q_n = (V,E)$ where $V = \{0,1\}^n$ and $uv \in E$ if the two vertices differ in exactly one co-ordinate. For a vertex $u \in V$ inductively we let $\Gamma^0(u) = \{u\}$, $\Gamma^1(u) = \Gamma(u) = \{v \in V(Q_n): uv \in E(Q_n)\}$, and for $k \geq 2$ we have $\Gamma^{k}(u) = \bigcup_{v \in \Gamma^{k-1}(u)}\Gamma(v) \setminus \Gamma^{k-2}(u)$ (so $\Gamma^k(u)$ is the set of vertices which have shortest path length to $u$ equal to $k$). For a subset of the vertices $U \subseteq V$, we also write $\Gamma(U) = \bigcup_{u \in U} \Gamma(u)$, and we define the \emph{closed neighbourhood} of $U$ to be $U \cup \Gamma(U)$, the set of vertices in $U$ together with the neighbourhood of $U$. Note that according to our definition, $\Gamma(U)$ is not necessarily disjoint from $U$; namely, every $u \in U$ with at least one neighbour in $U$ will be contained in $\Gamma(U)$.
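These definitions are easy to experiment with computationally. The following
Python sketch (our own helper functions, with vertices encoded as $n$-bit masks
rather than $0$--$1$ vectors) implements $\Gamma$ and the inductive definition
of $\Gamma^k$:
\begin{verbatim}
def neighbours(u, n):
    """Gamma(u) for a vertex u of Q_n, encoded as an n-bit mask."""
    return {u ^ (1 << i) for i in range(n)}

def gamma_k(u, n, k):
    """Gamma^k(u): vertices at distance exactly k from u, following
    Gamma^k = (union of Gamma over Gamma^{k-1}) minus Gamma^{k-2}."""
    prev, curr = set(), {u}
    for _ in range(k):
        nxt = set().union(*(neighbours(v, n) for v in curr)) - prev
        prev, curr = curr, nxt
    return curr
\end{verbatim}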
Let $A, B \subseteq [n] = \{1,2,\ldots,n\}$ with $|A| = |B| = r$. We say that $A <_L B$, i.e., that $A$ precedes $B$ in the \emph{lexicographic} (or \emph{lex}) \emph{ordering} on the sets of size $r$, if and only if
\[
\min A \triangle B = \min ((A \cup B) \setminus (A \cap B)) \in A.
\]
Next, let $<_S$ be the ordering of subsets of $[n]$ such that $A <_S B$ if $|A| < |B|$ or if $|A| = |B|$ and $A <_L B$. This is known as the \emph{simplicial ordering}. Since with every vertex $v = (v_1, \ldots, v_n) \in V(Q_n)$ we can naturally associate a set $Z_v = \{i \in [n]: v_i = 1\}$, the orderings $<_L$ and $<_S$ induce orderings on $V(Q_n)$: for $u,w \in V(Q_n)$ we have $u <_L w$ if $Z_u <_L Z_w$, and $u <_S w$ if $Z_u <_S Z_w$. The following well known result of Harper \cite{HARP} (see also \cite[\S16]{bolcomb}) shows that initial segments of $<_S$ have minimal closed neighbourhoods.
\begin{thm}\label{harpermod}
For each $\ell \in \mathbb{N}$, let $S_\ell$ be the first $\ell$ elements of $V(Q_n)$ according to $<_S$. If $D \subset V(Q_n)$ with $|D| = \ell$, then
\[
|\Gamma(D) \cup D| \ge |\Gamma(S_\ell) \cup S_\ell|.
\]
\end{thm}
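The initial segments $S_\ell$ are straightforward to generate. In the sketch
below (a helper of our own), a set is ordered by the key $(|A|, \text{lex})$,
where the lex key places a set earlier when it contains the smallest element on
which two sets differ:
\begin{verbatim}
from itertools import chain, combinations

def simplicial_order(n):
    """All subsets of [n], listed in the simplicial order <_S
    (by size, then lexicographically within each layer)."""
    layers = (combinations(range(1, n + 1), r) for r in range(n + 1))
    def key(A):
        # a set comes earlier if it contains the smallest element
        # on which two sets differ, hence the negation
        return (len(A), tuple(-(i in A) for i in range(1, n + 1)))
    return sorted(chain.from_iterable(layers), key=key)

S_ell = simplicial_order(4)[:5]   # the first 5 vertices of Q_4 under <_S
# S_ell == [(), (1,), (2,), (3,), (4,)]
\end{verbatim}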
When $\ell = \binom{n}{k}$ and $\sum_{i=0}^{k-1}\binom{n}{i} = o\left(\binom{n}{k}\right),$ $S_{\ell}$ closely resembles a $k$-th neighbourhood (the set of vertices at distance $k$ from a vertex). In this instance, by the well known LYM-inequality (see Lemma \ref{locLYM} to come), the closed neighbourhood of $S_{\ell}$ has size at least
\begin{align*}
\left|\Gamma(S_{\ell})\cup S_{\ell}\right| \ge \sum_{i=0}^k \binom{n}{i} + \frac{\left(\ell-\sum_{i=0}^{k-1}\binom{n}{i}\right)}{\binom{n}{k}}\binom{n}{k+1} = \binom{n}{k+1} + O\left(\frac{1}{k} \binom{n}{k}\right).
\end{align*}
Two questions arise. Firstly, must all sets of order $\binom{n}{k}$ with minimal closed neighbourhood closely resemble a $k$-th neighbourhood of a vertex? Secondly, what happens when a set of size $\binom{n}{k}$ has close to the minimal closed neighbourhood? In this paper we answer the second question through a stability theorem when $k$ is not too large; consequently, our result also answers the first question in the positive. Note that in Theorem~\ref{thm:f-k-stability} we consider neighbourhoods of sets of vertices rather than closed neighbourhoods, but since these differ by at most $\binom{n}{k}$ vertices this does not change the nature of our result.
\begin{thm}\label{thm:f-k-stability}
Let $\rho$ and $\kappa$ be positive real numbers. Then there exists $n_0 = n_0(\rho,\kappa) \in \mathbb{N}$ and $\delta = \delta(\rho,\kappa)>0$ such that the following holds: Let $k : \mathbb{N} \to \mathbb{N}$ and $p : \mathbb{N} \to [\rho,\infty)$ be functions such that $k(n) \le \tfrac{\log n}{3\log \log n}$, $\tfrac{k(n)}{p(n)} \leq \kappa$, and $\tfrac{p(n) k(n)^3}{n} \le \delta$. Then for $n \ge n_0$, the following holds: If $A \subseteq V(Q_n)$ with $|A| = \binom{n}{k(n)}$ and $|\Gamma(A)| \le \binom{n}{k(n)+1} + \binom{n}{k(n)}p(n)$, then there exists some $w \in V(Q_n)$ for which we have
\begin{equation}
\label{eqn:errorBound}
|\Gamma^{k(n)}(w) \cap A| \ge \binom{n}{k(n)} - C\binom{n}{k(n)-1}p(n)k(n),
\end{equation}
where $C = 24 + 33/\rho + 32 \kappa$.
\end{thm}
Throughout the paper we use the notation $f(n) = O(g(n))$ to mean that there exists some constant $C > 0$ such that $|\tfrac{f(n)}{g(n)}| \leq C$ for all $n$, and $f(n) = o(g(n))$ to say that $\tfrac{f(n)}{g(n)} \to 0$ as $n \rightarrow \infty$. For the ease of notation, we shall often denote $k = k(n)$ and $p = p(n)$.
Let us briefly discuss the sharpness and some limitations of Theorem~\ref{thm:f-k-stability}. Let $A \subseteq V(Q_n)$ be a set of size $|A| = \binom{n}{k(n)}$ satisfying $|\Gamma(A)| \le \binom{n}{k(n)+1} + \binom{n}{k(n)}p(n)$. Then let $w \in V(Q_n)$ be a vertex of the hypercube maximising the value of $|\Gamma^{k(n)}(w) \cap A|$. By~\eqref{eqn:errorBound} we know that at most $C\binom{n}{k(n)-1}p(n)k(n)$ vertices of $A$ lie outside of $\Gamma^{k(n)}(w)$. Can we match this bound? For example, the desired size of $|\Gamma(A)|$ (up to some lower order terms) could be obtained by building $A$ as a disjoint union of $ \binom{n}{k(n)}-\binom{n}{k(n)-1}p(n)$ vertices in $\Gamma^{k(n)}(w)$, together with the $(k(n)-1)$th neighbourhoods of $p(n)$ other vertices in the cube. This example shows that our bound on $|A \setminus \Gamma^{k(n)}(w)|$ is sharp up to an $O(k(n))$ multiplicative term. We believe that at least for $k(n)$ not too large, the $O(k(n))$ is an artefact of our proof. However, it is possible that for $k(n)$ large (possibly larger than the assumptions of our theorem allow) this extra factor in~\eqref{eqn:errorBound} is necessary.
In Theorem~\ref{thm:f-k-stability} we assume that the set $A$ we consider satisfies $|A| = \binom{n}{k}$. However, by the fact that the size of $\Gamma(A)$ cannot decrease when we remove elements from $A$, we can obtain a similar result for sets of size slightly larger than $\binom{n}{k}$, for example, of size $|A| = \sum_{i=0}^k \binom{n}{i}$ when $k$ is not too large. We do this by taking a subset $B \subset A$ of size $\binom{n}{k}$, applying Theorem~\ref{thm:f-k-stability} to $B$, and then observing that $|\Gamma^{k(n)}(w) \cap A| \ge |\Gamma^{k(n)}(w) \cap B| \geq \binom{n}{k(n)} - C\binom{n}{k(n)-1}p(n)k(n)$. We believe that with very similar methods, results concerning sets of size $\alpha \binom{n}{k}$ might also be derived. However, we anticipate the technical details would be rather tedious.
The strongly related edge-boundary version of the isoperimetric problem (see, e.g., Harper \cite{HARPedge}, Bernstein \cite{bernstein}, and Hart \cite{hart}) has been considered in the stability context by Ellis \cite{ellis}, Ellis, Keller and Lifshitz \cite{EKLedge}, Friedgut \cite{friedgut}, and others.
There are many other fundamental stability-type results in graph theory: for example, the Erd{\H o}s-Simonovits Stability Theorem \cite{erd-sim} states that an $H$-free graph that is close to maximum in size must in fact be close to a Tur\'an graph. The famous Erd{\H o}s-Ko-Rado Theorem \cite{EKR} concerning the maximum size of intersecting set systems has been extended using stability results by, among others, Dinur and Friedgut \cite{DFstability}, Bollob\'as, Narayanan and Raigorodskii \cite{BNRstability}, and Devlin and Kahn \cite{DKstability}.
The stability versions of extremal results can often be applied even more widely than the statements they extend; indeed, the motivation for this work came from the authors' forthcoming paper with Alex Scott \cite{shotgun} on the shotgun reconstruction in the hypercube.
The paper is organised as follows. In Section \ref{prelim} we prove some preparatory lemmas including a tightening of the Local LYM Lemma, and in Section \ref{proofmain} we prove Theorem \ref{thm:f-k-stability}.
We also remark that Peter Keevash and Eoin Long have independently been working on a similar problem \cite{PKEL}. They use very different techniques and their results give weaker bounds for the set-sizes we consider, but work for sets of general size and also for much larger sets (i.e., for $k \gg \tfrac {\log n}{3\log \log n}$, although with $p = O(1/k)$).
\section{Preliminaries}\label{prelim}
Given $0 \leq r \leq n$, let $[n]^{(r)}$ be the family of all $r$-element subsets of $[n]$, also called a \emph{layer}. Along with the lex ordering $<_L$, another important ordering in finite set theory is the \emph{colexicographic}, or \emph{colex}, ordering $<_C$ of layers $[n]^{(r)}$. For $A,B \in [n]^{(r)}$ we have $A <_C B$ if $A \neq B$ and
\[
\max A \triangle B = \max ((A \cup B) \setminus (A \cap B)) \in B.
\]
An important fact connecting the orderings $<_L$ and $<_C$ on $[n]^{(r)}$ is that if $\mathcal{F}$ is the initial segment of $<_L$ on $[n]^{(r)}$ then $\mathcal{F}^c = \{[n] \setminus A : A \in \mathcal{F}\}$ is isomorphic to the initial segment of colex on $[n]^{(n-r)}$ (more precisely, it is the initial segment of colex on $[n]^{(n-r)}$ using the ``reversed alphabet'' where $n < n-1 < \ldots < 1$). Indeed, if $|A| = |B| = r$ and $A <_L B$ then by definition we have $\min ((A \cup B) \setminus (A \cap B)) \in A$, which implies that $\min ((A^c \cup B^c) \setminus (A^c \cap B^c)) \in B^c$. Treating the alphabet as ``reversed'' we see that indeed $A^c <_C B^c$.
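This correspondence is easily verified by brute force for small parameters. In
the following sketch (our own encoding of the two orders as sort keys), the
assertion checks that complementation maps the lex order on $[n]^{(r)}$ to the
reversed-alphabet colex order on $[n]^{(n-r)}$:
\begin{verbatim}
from itertools import combinations

def lex_sorted(n, r):
    # A <_L B iff A contains the smallest element where the two differ.
    key = lambda A: tuple(-(i in A) for i in range(1, n + 1))
    return sorted(combinations(range(1, n + 1), r), key=key)

def colex_reversed_sorted(n, r):
    # Colex with the reversed alphabet n < ... < 1: A <_C B iff the
    # numerically smallest element of the symmetric difference lies in B.
    key = lambda A: tuple((i in A) for i in range(1, n + 1))
    return sorted(combinations(range(1, n + 1), r), key=key)

n, r = 6, 2
comps = [tuple(sorted(set(range(1, n + 1)) - set(A)))
         for A in lex_sorted(n, r)]
assert comps == colex_reversed_sorted(n, n - r)
\end{verbatim}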
Let us now fix some more notation that will be used throughout this paper. For $\mathcal{F} \subseteq [n]^{(r)}$ we write
\[
\partial(\mathcal{F}) = \{A \in [n]^{(r-1)} : \exists B \in \mathcal{F}, A \subseteq B\}
\]
for the \emph{shadow} of $\mathcal{F}$, and similarly
\[
\partial^+(\mathcal{F})= \{A \in [n]^{(r+1)} : \exists B \in \mathcal{F}, B \subseteq A\}
\]
for the \emph{upper shadow} of $\mathcal{F}$.
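Computationally, both shadows are one-liners, as in the following Python
sketch (our own helpers); the assertions check the local LYM inequalities
stated in Lemma \ref{locLYM} below on a small example:
\begin{verbatim}
from itertools import combinations
from math import comb

def shadow(F):
    """Lower shadow of a family of r-sets."""
    return {A - {x} for A in map(frozenset, F) for x in A}

def upper_shadow(F, n):
    """Upper shadow of a family of r-sets inside the ground set [n]."""
    ground = set(range(1, n + 1))
    return {A | {x} for A in map(frozenset, F) for x in ground - A}

n, r = 7, 3
F = list(map(frozenset, combinations(range(1, n + 1), r)))[:10]
assert len(shadow(F)) / comb(n, r - 1) >= len(F) / comb(n, r)
assert len(upper_shadow(F, n)) / comb(n, r + 1) >= len(F) / comb(n, r)
\end{verbatim}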
It will be useful to be able to bound from below the size of the neighbourhood of a subset of $[n]^{(r)}$ by some function of the size of the subset itself. A good starting point for this is the local LYM-inequality \cite[Ex. 13.31(b)]{LYMstuff}.
\begin{lem}
\label{locLYM}
Let $\mathcal{A} \subseteq [n]^{(r)}$. Then
\begin{equation}
\label{eq:locLYMlower}
\frac{|\partial(\mathcal{A})|}{\binom{n}{r-1}} \ge \frac{|\mathcal{A}|}{\binom{n}{r}},
\end{equation}
and
\begin{equation}
\label{eq:locLYMupper}
\frac{|\partial^+(\mathcal{A})|}{\binom{n}{r+1}} \ge \frac{|\mathcal{A}|}{\binom{n}{r}}.
\end{equation}
\end{lem}
Theorem \ref{harpermod} and Lemma \ref{locLYM} give us the following corollary.
\begin{cor}
\label{cor:harperBound}
Let $k \in \mathbb{N}$ and let $B \subseteq V(Q_n)$ with $|B| \leq \binom{n}{k}$. Then
\[
|\Gamma(B)| \geq |B| \frac{n}{k+1} - 2 \binom{n}{k}.
\]
\end{cor}
\begin{proof}
We have
\[
|\Gamma(B)| \geq |B \cup \Gamma(B)| - |B| \geq |B \cup \Gamma(B)| - \binom{n}{k}.
\]
Let $\ell = |B|$. By Theorem \ref{harpermod} we can bound further to obtain
\[
|B \cup \Gamma(B)| \geq |\Gamma(S_\ell) \cup S_\ell| \geq |\Gamma(S_\ell)| \geq \sum_{i=1}^{k+1} |\Gamma(S_\ell) \cap [n]^{(i)}| \geq \sum_{i=0}^k |\partial^+(S_\ell \cap [n]^{(i)})|.
\]
Applying \eqref{eq:locLYMupper} we then have
\[
\sum_{i=0}^k |\partial^+(S_\ell \cap [n]^{(i)})| \geq \sum_{i=0}^k |S_\ell \cap [n]^{(i)}| \frac{n-i}{i+1} \geq |B| \frac{n-k}{k+1} \geq |B| \frac{n}{k+1} - \binom{n}{k},
\]
completing the proof.
\end{proof}
Unfortunately the well-known inequality \eqref{eq:locLYMupper} is not quite strong enough for our purpose, and so we will need the following result.
\begin{lem}\label{kruskalkatona}
Let $n,r,i \in \mathbb{N}$. If $\mathcal{F} \subseteq [n]^{(r)}$ has order
\begin{equation}
\label{eq:fOrder}
|\mathcal{F}| \in \left [ \binom{n}{r} - \binom{n-i+1}{r}+1,\binom{n}{r} - \binom{n-i}{r} \right ],
\end{equation}
then
\begin{equation}
\label{eq:f+NewBound}
|\partial^+(\mathcal{F})| \ge |\mathcal{F}| \frac{\binom{n}{r+1} - \binom{n-i}{r+1}}{\binom{n}{r} - \binom{n-i}{r}}.
\end{equation}
\end{lem}
We do not claim that Lemma \ref{kruskalkatona} is unknown, but we have been unable to find a reference and so we provide a proof here. The proof uses the following celebrated result of Kruskal and Katona \cite{Kat68,Krusk63}.
\begin{thm}\label{thm:kruskat}
Let $\mathcal{F} \subseteq [n]^{(r)}$ and let $\mathcal{A}$ be the first $|\mathcal{F}|$ elements of $[n]^{(r)}$ according to $<_C$. Then $|\partial(\mathcal{F})| \ge |\partial(\mathcal{A})|$.
\end{thm}
For the ease of reading, for $0 \leq m \leq n$ we shall use the standard notation $[m,n] = \{m, m+1, \ldots, n\}$.
\begin{proof}[Proof of Lemma \ref{kruskalkatona}]
Let $n,r,i \in \mathbb{N}$ and suppose $\mathcal{F} \subseteq [n]^{(r)}$ satisfies \eqref{eq:fOrder}. It is easy to see that $\partial^+(\mathcal{F}) = (\partial(\mathcal{F}^c))^c$, and so it suffices to estimate $|\partial(\mathcal{F}^c)|$. By Theorem \ref{thm:kruskat}, the size of the shadow of $\mathcal{F}^c$ is at least the size of the shadow of the initial segment of size $|\mathcal{F}|$ in the $<_C$ order on $[n]^{(n-r)}$.
So suppose that $\mathcal{H} \subset [n]^{(n-r)}$ is an initial segment of $<_C$ order of size as in \eqref{eq:fOrder}. We first want to claim that
\[
|\mathcal{H}| = \sum_{j=0}^{i-2} \binom{n-j-1}{r-1} + s,
\]
where $1 \leq s \le \binom{n-i}{r-1}$. Indeed, observe that the first $\binom{n}{r} - \binom{n-i}{r}$ elements in the $<_L$ order on $[n]^{(r)}$ are the sets that are not fully contained in $[i+1,n]$. These can be listed as the $\binom{n-1}{r-1}$ sets that contain $1$, followed by the $\binom{n-2}{r-1}$ sets that contain $2$ but do not contain $1$, etc., followed finally by the $\binom{n-i}{r-1}$ sets $A$ such that $A \cap [i] = \{i\}$. A similar argument holds for the lower bound in \eqref{eq:fOrder}, which proves our claim.
For $j=0,\ldots,i-2$, let
\[
\mathcal{H}_j = \left \{A \cup [n+1-j, n] : A \in [n-j-1]^{(n-r-j)} \right \},
\]
so that $|\mathcal{H}_j| = \binom{n-j-1}{n-r-j} = \binom{n-j-1}{r-1}$. Then $\mathcal{H}$, being the initial segment of the $<_C$ order on $[n]^{(n-r)}$, can be expressed as the disjoint union $\mathcal{H} = \bigcup_{j=0}^{i-2} \mathcal{H}_j \cup \mathcal{S}$, where
\[
\mathcal{S} \subset \left \{A \cup [n+2-i, n] : A \in [n-i]^{(n-r-(i-1))} \right \}
\]
has size $s$. We may then write the shadow of $\mathcal{H}$ as the disjoint union
\[
\partial \mathcal{H} = \bigcup_{j=0}^{i-2} \left (\partial \mathcal{H}_j \setminus (\partial \mathcal{H}_0 \cup \ldots \cup \partial \mathcal{H}_{j-1}) \right) \cup \left (\partial \mathcal{S} \setminus (\partial \mathcal{H}_0 \cup \ldots \cup \partial \mathcal{H}_{i-2}) \right).
\]
For each $j$, $\partial \mathcal{H}_j \setminus (\partial \mathcal{H}_0 \cup \ldots \cup \partial \mathcal{H}_{j-1})$ contains exactly the sets of the form $A \cup [n+1-j, n]$ where $A \in [n-j-1]^{(n-r-j-1)}$. Writing $\mathcal{S} = \{A \cup [n+2-i,n] : A \in \mathcal{A}\}$ (so $\mathcal{A} \subseteq [n-i]^{(n-r-(i-1))}$ has $|\mathcal{A}| = s$) we similarly see that
\[
\partial \mathcal{S} \setminus (\partial \mathcal{H}_0 \cup \ldots \cup \partial \mathcal{H}_{i-2}) = \{A \cup [n+2-i,n] : A \in \partial \mathcal{A}\}.
\]
Hence $\partial \mathcal{H}$ is the above disjoint union, and consequently
\begin{align*}
| \partial \mathcal{H} | & = \sum_{j=0}^{i-2} | \{A \cup [n+1-j, n] : A \in [n-j-1]^{(n-r-j-1)}\} | \\
& \qquad + | \{A \cup [n+2-i,n] : A \in \partial \mathcal{A}\} | \\
& = \sum_{j=0}^{i-2} \binom{n-j-1}{n-r-j-1} + |\partial \mathcal{A}|.
\end{align*}
Observing that $(n-j-1) - (n-r-j-1) = r$ and applying \eqref{eq:locLYMlower}, we see
\begin{align*}
|\partial \mathcal{H}| & \ge \sum_{j=0}^{i-2} \binom{n-j-1}{r} + \frac{n-r-(i-1)}{r}|\mathcal{A}| \\
& = \sum_{j=0}^{i-2} \frac{n-r-j}{r}\binom{n-j-1}{r-1} + \frac{n-r-(i-1)}{r}s.
\end{align*}
If we divide the above expression by $|\mathcal{H}|$, we can think of this lower bound as a ``weighted average'', with the weights of the elements of $\mathcal{H}_j$ equal to $\frac{n-r-j}{r}$, and the weights of the elements of $\mathcal{S}$ equal to $\frac{n-r-(i-1)}{r}$. This last weight is the smallest, hence increasing $s$ only decreases this average. Therefore we get
\begin{align}
\label{eq:f+Weights}
\frac{|\partial \mathcal{H}|}{|\mathcal{H}|} & \ge \frac{\sum_{j=0}^{i-1} \frac{n-r-j}{r}\binom{n-j-1}{r-1}}{\sum_{j=0}^{i-1}\binom{n-j-1}{r-1}} \nonumber \\
& = \frac{\sum_{j=0}^{i-1} \binom{n-j-1}{r}}{\sum_{j=0}^{i-1} \binom{n-j-1}{r-1}} \\
& = \frac{\binom{n}{r+1}-\binom{n-i}{r+1}}{\binom{n}{r} - \binom{n-i}{r}}, \nonumber
\end{align}
completing the proof of the lemma.
\end{proof}
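Although not needed for the sequel, the bound \eqref{eq:f+NewBound} is easily
sanity-checked by brute force for small parameters; since, as noted above, lex
initial segments minimise the upper shadow, it suffices to test them. A sketch
of our own:
\begin{verbatim}
from itertools import combinations
from math import comb

def upper_shadow(F, n):
    ground = set(range(1, n + 1))
    return {A | {x} for A in map(frozenset, F) for x in ground - A}

n, r, i = 7, 2, 2
layer = list(map(frozenset, combinations(range(1, n + 1), r)))  # lex order
lo = comb(n, r) - comb(n - i + 1, r) + 1
hi = comb(n, r) - comb(n - i, r)
ratio = (comb(n, r + 1) - comb(n - i, r + 1)) / hi
for size in range(lo, hi + 1):
    assert len(upper_shadow(layer[:size], n)) >= size * ratio - 1e-9
\end{verbatim}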
\begin{cor}
\label{cor:f+monotone}
The sequence $\frac{\binom{n}{r+1} - \binom{n-i}{r+1}}{\binom{n}{r} - \binom{n-i}{r}}$ in \eqref{eq:f+NewBound} is non-increasing in $i$.
\end{cor}
\begin{proof}
If $i \geq n-r+1$ then $\binom{n-i}{r+1} = \binom{n-i}{r} = 0$ and the sequence stabilises. For $i \leq n-r$, by \eqref{eq:f+Weights} we have
\[
\frac{\binom{n}{r+1}-\binom{n-i}{r+1}}{\binom{n}{r} - \binom{n-i}{r}} = \frac{\sum_{j=0}^{i-1} \frac{n-r-j}{r}\binom{n-j-1}{r-1}}{\sum_{j=0}^{i-1}\binom{n-j-1}{r-1}}.
\]
If we move from $i$ to $i+1$ on the left-hand side, in the weighted average on the right-hand side we obtain another term $\binom{n-i-1}{r-1}$ with weight $\tfrac{n-r-i}{r}$; this weight is smaller than all the preceding weights and so the average decreases.
\end{proof}
The next lemma somewhat cleans up the multiplicative factor in Lemma \ref{kruskalkatona}.
\begin{lem}\label{kruskbound}
Suppose $\alpha,c \in (0,1)$ are such that $\binom{n}{r} - \binom{\alpha n}{r} = c\binom{n}{r}$. Then
\[
\frac{\binom{n}{r+1} - \binom{\alpha n}{r+1}}{\binom{n}{r} - \binom{\alpha n}{r}} \ge \frac{n-r}{r+1} \left( 1 + \frac{1-c}{r} \right).
\]
\end{lem}
\begin{proof}
Suppose that $\binom{\alpha n}{r} = (1-c)\binom{n}{r}$. Then
\begin{align*}
(1-c) & = \prod_{i=0}^{r-1} \frac{\alpha n - i}{n-i} \\
& = \prod_{i=0}^{r-1} \left( \alpha - (1-\alpha)\frac{i}{n-i} \right) \\
& \ge \prod_{i=0}^{r-1} \left( \alpha - (1-\alpha)\frac{r}{n-r} \right) \\
& = \left( \frac{\alpha n - r}{n-r} \right)^r.
\end{align*}
Hence we have that $\tfrac{\alpha n - r}{n-r} \le (1-c)^{1/r}$. Thus
\begin{align*}
\binom{\alpha n}{r+1} & = \frac{\alpha n - r}{r+1}(1-c)\binom{n}{r} \\
& = (1-c)\frac{\alpha n -r}{n-r}\frac{n-r}{r+1}\binom{n}{r} \\
& \le (1-c)^{1+1/r}\binom{n}{r+1}.
\end{align*}
We therefore have
\begin{align*}
\frac{\binom{n}{r+1} - \binom{\alpha n}{r+1}}{\binom{n}{r} - \binom{\alpha n}{r}} & \geq \frac{\left( 1-(1-c)^{1+1/r} \right)\binom{n}{r+1}}{c\binom{n}{r}} \\
& = \frac{n-r}{r+1}\frac{c + (1-c) \left(1-(1-c)^{1/r} \right)}{c} \\
& = \frac{n-r}{r+1} \left(1 + \frac{1-c}{c} \left( 1-(1-c)^{1/r} \right) \right).
\end{align*}
A generalisation of Bernoulli's inequality says that if $x \ge -1$ and $t \in [0,1]$, then we have $(1+x)^t \le 1+tx$. Applying this to the above formula with $x=-c$ and $t=1/r$ we obtain
\begin{align*}
\frac{\binom{n}{r+1} - \binom{\alpha n}{r+1}}{\binom{n}{r} - \binom{\alpha n}{r}} & \ge \frac{n-r}{r+1} \left (1 + \frac{1-c}{c} \cdot \frac{c}{r} \right) = \frac{n-r}{r+1} \left (1+\frac{1-c}{r} \right).
\end{align*}
\end{proof}
In the proof of Theorem \ref{thm:f-k-stability} we first delete sets of vertices with too many unique neighbours. The next lemma will ensure that, after this deletion, the layers of our set around any of its vertices grow rapidly.
\begin{lem}\label{expand}
Let $k = o(\log n)$. For sufficiently large $n$ the following holds. Let $J$ be a subset of the hypercube such that for all $S \subseteq J$,
\begin{equation}
\label{eq:fewUniques}
|\Gamma(S) \setminus \Gamma(J \setminus S)| \le |S|\frac{n}{k+1} \left(1+\frac{1}{8k} \right).
\end{equation}
Then for any vertex $v$ and $j \le 2k$, if $|J \cap \Gamma^{j}(v)| \in [1,\frac{1}{2}\binom{n}{k}]$, then
\[
|J \cap \Gamma^{j+2}(v)| \ge \frac{n}{64k^3}|J \cap \Gamma^{j}(v)|.
\]
\end{lem}
\begin{proof}
Without loss of generality, throughout this proof we assume that $v = (0, \ldots, 0)$, so $Z_v = \emptyset$ and for all $j$ we have $\Gamma^j(v) = [n]^{(j)}$. Let $k = o(\log n)$ and let $J$ be a subset of the vertex set of the hypercube such that \eqref{eq:fewUniques} holds for all $S \subseteq J$. The first and most significant step in the proof will be to find a good lower bound on the ratio $|\partial^+(J \cap \Gamma^{j}(v))| / |J \cap \Gamma^{j}(v)|$, arguing according to three different cases. After this bound is obtained, the lemma will follow quite easily.
Assume that we have $j \le 2k$ with $|J \cap \Gamma^{j}(v)| \in [1,\frac{1}{2}\binom{n}{k}]$. If $j \leq k-1$, then we may appeal to \eqref{eq:locLYMupper} to see that for sufficiently large $n$,
\begin{align*}
\frac{|\partial^+(J \cap \Gamma^{j}(v))|}{|J \cap \Gamma^{j}(v)|} & \ge \frac{n-j}{j+1} \\
& \ge \frac{n}{k} - 1 \\
& = \frac{n}{k+1} \left( 1+\frac{1}{k}-\frac{k+1}{n} \right) \\
& \ge \frac{n}{k+1} \left( 1+\frac{1}{4k} \right).
\end{align*}
Now suppose that $j \ge k$. By Theorem \ref{thm:kruskat} and the relation between the orders $<_C$ and $<_L$, $|\partial^+(J \cap \Gamma^{j}(v))|$ is minimised when $J \cap \Gamma^{j}(v)$ is the initial segment of size $|J \cap \Gamma^{j}(v)|$ in the $<_L$ order on $[n]^{(j)}$.
First suppose that $|J \cap \Gamma^{j}(v)| \le \binom{n-(j+i)}{k-i}$ for some $i \geq 1$. Then all elements of the initial segment of length $|J \cap \Gamma^{j}(v)|$ in the $<_L$ order on $[n]^{(j)}$ contain the set $[j-k+i]$. So remove $[j-k+i]$ from all sets in $J \cap \Gamma^{j}(v)$ and instead work in $[j-k+i+1,n]$. We now have an initial segment of size $|J \cap \Gamma^{j}(v)|$ in the $<_L$ order in $[j-k+i+1,n]^{(k-i)}$ and so, for sufficiently large $n,$ \eqref{eq:locLYMupper}, together with the fact that $j \leq 2k$ and $i \geq 1$, give
\begin{align*}
\label{eq: lex1}
|\partial^+(J\cap \Gamma^{j}(v))| & \ge |J \cap \Gamma^{j}(v)|\frac{n-j}{k-i+1} \\
& \ge |J \cap \Gamma^{j}(v)|\frac{n}{k+1} \left( 1 + \frac{1}{4k} \right).
\end{align*}
Finally let us consider the case when $|J \cap \Gamma^{j}(v)| > \binom{n-(j+1)}{k-1}$. Since $k=o(\log n)$, we have $|J \cap \Gamma^{j}(v)| \le \frac{1}{2}\binom{n}{k} \le \frac{3}{5} \binom{n-j+k}{k}$ for sufficiently large $n$. Therefore we see that all elements of the initial segment of length $|J \cap \Gamma^{j}(v)|$ in the $<_L$ order on $[n]^{(j)}$ contain the set $[j-k]$. Hence remove $[j-k]$ from all sets and instead work in $[j-k+1,n]$. For convenience, we relabel our ground set so that we work with the initial segment of $<_L$ order in $[m]^{(k)}$ where $m=n-j+k$ instead. For $n$ (and so also $m$) large enough we have
\[
\binom{m}{k} - \binom{m(\frac{1}{3})^{1/k}}{k} \geq \binom{m}{k} - \frac{m^k}{3k!} \geq \frac{3}{5} \binom{m}{k} = \frac{3}{5} \binom{n-j+k}{k} \geq |J \cap \Gamma^{j}(v)|.
\]
By Corollary \ref{cor:f+monotone}, we can apply Lemma \ref{kruskalkatona} with $\mathcal{F} = J \cap \Gamma^{j}(v)$, $n=m$, $n-i = m(\frac{1}{3})^{1/k}$, and $r=k$, to get
\begin{equation}
\label{eqn:upperShadBound}
|\partial^+(J \cap \Gamma^{j}(v))| \ge |J \cap \Gamma^{j}(v)| \frac{\binom{m}{k+1} - \binom{m(\frac{1}{3})^{1/k}}{k+1}}{\binom{m}{k} - \binom{m(\frac{1}{3})^{1/k}}{k}}.
\end{equation}
(We note that $m(\frac{1}{3})^{1/k}$ should be an integer to apply Lemma \ref{kruskalkatona}. This can be fixed by considering the ceiling of $m(\frac{1}{3})^{1/k}$, but for ease of reading we refrain from doing this.)
Now since $k$ grows sufficiently slowly, for $n$ sufficiently large we have
\begin{align*}
\binom{m(\frac{1}{3})^{1/k}}{k} & = \frac{m(\frac{1}{3})^{1/k} (m(\frac{1}{3})^{1/k}-1) \ldots (m(\frac{1}{3})^{1/k}-k+1)}{k!} \\ & = \binom{m}{k}\prod_{i=0}^{k-1}\frac{m(\frac{1}{3})^{1/k}-i}{m-i} \\ &\geq \binom{m}{k} \left(\frac{\left(\frac{1}{3}\right)^{1/k}-\frac{k-1}{m}}{1-\frac{k-1}{m}}\right)^k \geq \frac{3}{10}\binom{m}{k},
\end{align*}
since the final product tends to $\frac{1}{3}$ as $n \to \infty$. So for $n$
large enough we have $\binom{m}{k} - \binom{m(\frac{1}{3})^{1/k}}{k} \leq \tfrac{7}{10} \binom{m}{k}$ and we can apply Lemma \ref{kruskbound} to \eqref{eqn:upperShadBound} to find
\begin{align*}
|\partial^+(J \cap \Gamma^{j}(v))| & \ge |J \cap \Gamma^{j}(v)|\frac{m-k}{k+1} \left( 1+\frac{1-\frac{7}{10}}{k} \right) \\
& = |J \cap \Gamma^{j}(v)|\frac{n-j}{k+1} \left( 1+\frac{3}{10k} \right) \\
& \ge |J \cap \Gamma^{j}(v)|\frac{n}{k+1} \left( 1+\frac{1}{4k} \right),
\end{align*}
where the final inequality holds for $n$ sufficiently large since $j \le 2k$
and $k = o(\log n)$.
In all cases, we see that
\begin{equation}
\label{eq:upperShadowSmall}
|\partial^+(J \cap \Gamma^{j}(v))| \ge |J \cap \Gamma^{j}(v)|\frac{n}{k+1} \left( 1+\frac{1}{4k} \right).
\end{equation}
Since $j \leq 2k$, each vertex in $\Gamma^{j+2}(v)$ is adjacent to at most $2(k+1)$ vertices in $\partial^+(J \cap \Gamma^{j}(v))$. Together with \eqref{eq:upperShadowSmall}, this gives
\begin{align*}
|\Gamma(J\cap \Gamma^{j}(v)) \setminus \Gamma(J \setminus \Gamma^{j}(v))| & \ge |\partial^+(J \cap \Gamma^{j}(v))| - (2k+2)|J \cap \Gamma^{j+2}(v)| \\
& \ge |J \cap \Gamma^{j}(v)|\frac{n}{k+1} \left( 1+\frac{1}{4k} \right) - (2k+2)|J \cap \Gamma^{j+2}(v)|.
\end{align*}
On the other hand, by \eqref{eq:fewUniques},
\[
|\Gamma(J\cap \Gamma^{j}(v)) \setminus \Gamma(J \setminus \Gamma^{j}(v))| \le |J \cap \Gamma^{j}(v)|\frac{n}{k+1} \left( 1+\frac{1}{8k} \right).
\]
Together these inequalities give
\[
(2k+2)|J \cap \Gamma^{j+2}(v)| \ge |J \cap \Gamma^{j}(v)|\frac{n}{(k+1)8k},
\]
and so $|J \cap \Gamma^{j+2}(v)| \ge \frac{n}{16k(k+1)^2}|J \cap \Gamma^{j}(v)| \ge \frac{n}{64k^3}|J \cap \Gamma^{j}(v)|$.
\end{proof}
\section{Proof of Theorem \ref{thm:f-k-stability}}
\label{proofmain}
In this section we prove Theorem \ref{thm:f-k-stability}. The nature of the proof is much like that of the Erd\H{o}s-Simonovits stability arguments \cite{erd-sim}. Starting with a set $A$ with close to minimal neighbourhood size, we first delete sets of vertices which contribute too many unique neighbours (neighbours unseen by the rest of $A$). We then build up, layer by layer, a rough structure around a vertex of $A$. If $A$ has many vertices in the $j$-th neighbourhood of a vertex $v$, then there must be many vertices of $A$ in $\Gamma^{j+2}(v)$ (else $A \cap \Gamma^j(v)$ has too many unique neighbours). This will mean that for each vertex $v \in A$, there is some $j(v)$ such that almost all of $A$ is contained in $\Gamma^{2j(v)}(v)$, and we then show that $j(v) = k$ for almost all $v \in A$. This means that we can find two vertices $u,v \in A$ at distance $2k$ from one another with $j(u) = j(v) = k$. A pigeonhole argument then reveals a vertex $w$ between $u$ and $v$ for which $A$ is almost entirely contained in $\Gamma^k(w)$.
\begin{proof}[Proof of Theorem \ref{thm:f-k-stability}]
Let $\kappa, \rho > 0$ and let $k: \mathbb{N} \to \mathbb{N}$ and $p: \mathbb{N} \to [\rho,\infty)$ be functions with $k \leq \tfrac{\log n}{3 \log \log n}$, $k \leq \kappa p$, and $pk^3 / n \leq \delta$ for some $\delta > 0$ small to be defined later. Suppose $A \subseteq V(Q_n)$ with $|A| = \binom{n}{k}$ and $|\Gamma(A)| \le \binom{n}{k+1} + \binom{n}{k}p$. For ease of reading, we now state the following two claims here which we will prove later.
\begin{claim}
\label{claim:discard}
There exists $B \subseteq A$ with $|B| \ge \binom{n}{k} - D \binom{n}{k-1}pk$, where $D = 16 + 32/\rho$, such that for all $S \subseteq B$ we have
\[
|\Gamma(S) \setminus \Gamma(B \setminus S)| \le |S|\frac{n}{(k+1)} \left( 1+\frac{1}{8k} \right).
\]
\end{claim}
\begin{claim}
\label{claim:cleaning}
Let $B \subseteq A$ be a set which satisfies Claim \ref{claim:discard}. Suppose that there is a vertex $u \in V(Q_n)$ and an integer $\ell \in [k,2k]$ such that $|B \cap \Gamma^\ell(u)| \ge \frac{65 k^3}{n}\binom{n}{k}$. Then
\[
|B \cap \Gamma^\ell(u)| \geq \binom{n}{k} - C\binom{n}{k-1}pk,
\]
where $C = 24 + 33/\rho + 32\kappa.$
\end{claim}
Using Claims \ref{claim:discard} and \ref{claim:cleaning}, we now start by proving the following claim.
\begin{claim}
\label{claim:vertexBigNbhds}
Let $B \subseteq A$ be a set which satisfies Claim \ref{claim:discard}. For all $v \in B$, there exists a $j(v) \le k$ such that $|\Gamma^{2j(v)}(v) \cap B| \ge |B| - C\binom{n}{k-1}pk$.
\end{claim}
\begin{proof}[Proof of Claim \ref{claim:vertexBigNbhds}]
Fix a vertex $v \in B$ and let $j$ be the least integer such that
\[
|B \cap \Gamma^{2(j+1)}(v)| < \frac{n}{64k^3}|B \cap \Gamma^{2j}(v)|.
\]
If $j \leq k$ then, by Lemma \ref{expand} and the minimality of $j$, $|B \cap \Gamma^{2j}(v)|$ must exceed $\frac{1}{2} \binom{n}{k}$; since $|B \cap \Gamma^{2j}(v)| \le \binom{n}{2j}$, this forces $2j \geq k$. Since for $n$ large enough we have $\frac{1}{2} \binom{n}{k} \geq \frac{65k^3}{n} \binom{n}{k}$, by Claim \ref{claim:cleaning} we obtain $|B \cap \Gamma^{2j}(v)| \ge \binom{n}{k} - C\binom{n}{k-1}pk$ as desired.
Suppose now that $j \ge k+1$. Since $v \in B$, we have $|B \cap \Gamma^{0}(v)| = |B \cap \{v\}| = 1$. Then, by the choice of $j$, we obtain
\[
|B \cap \Gamma^{2(k+1)}(v)| \ge \left ( \frac{n}{64k^3} \right )^{k+1}.
\]
On the other hand,
\[
|B \cap \Gamma^{2(k+1)}(v)| \le |B| \le \binom{n}{k}.
\]
Since $k \le \frac{\log n}{3 \log \log n}$, we have a contradiction for $n$ sufficiently large, and so $j \le k$. This completes the proof of Claim \ref{claim:vertexBigNbhds}.
\end{proof}
For $j \le k$, let $H(j) = \{v \in B : j(v) = j\}$. Fix $j < k$, and suppose that there are distinct vertices $u,w \in H(j)$ such that $d(u,w) = 2j$. Without loss of generality, we may assume that $Z_u = \emptyset$ and $Z_w = [2j]$. Observe that
\[
\Gamma^{2j}(u) \cap \Gamma^{2j}(w) = \{U \cup W : U \in [2j]^{(j)}, W \in [2j+1,n]^{(j)}\}.
\]
The size of this set is clearly $\binom{2j}{j}\binom{n-2j}{j}$. On the other hand
\begin{align*}
\Gamma^{2j}(u) \cap \Gamma^{2j}(w) & \supseteq \Gamma^{2j}(u) \cap \Gamma^{2j}(w) \cap B \\
& = B \setminus \left(\left(B \setminus \Gamma^{2j}(w)\right) \cup \left(B \setminus \Gamma^{2j}(u)\right)\right).
\end{align*}
Recall that by the definition of $j = j(u) = j(w)$ we have $|B \setminus \Gamma^{2j}(u)| \le C\binom{n}{k-1}pk$ and $|B \setminus \Gamma^{2j}(w)| \le C\binom{n}{k-1}pk$, therefore, using $|B| \geq \binom{n}{k} - D\binom{n}{k-1}pk$ and $D \leq C$,
\begin{align*}
|\Gamma^{2j}(u) \cap \Gamma^{2j}(w)| &\ge \binom{n}{k} - 3C\binom{n}{k-1}pk \\
&\ge \binom{n}{k}\left(1-4C\delta\right) \ge \frac{1}{2}\binom{n}{k},
\end{align*}
for $\delta$ sufficiently small. Putting these bounds together gives $\binom{2j}{j}\binom{n-2j}{j} \ge \frac{1}{2}\binom{n}{k}$. But $j < k$ and so
\begin{align*}
\binom{2j}{j}\binom{n-2j}{j} & \leq 4^j\binom{n}{j} \\
& \leq 4^k \binom{n}{k} \frac{k}{n-k} \\
& < \frac{1}{2}\binom{n}{k}
\end{align*}
for $n$ sufficiently large, since $k \le \tfrac{\log n}{3\log \log n}$. This gives a contradiction, and so no two vertices from $H(j)$ can be at distance $2j$ from each other.
Since for any $v \in H(j)$ by definition we have $|B \setminus \Gamma^{2j}(v)| \le C\binom{n}{k-1}pk$, and no two vertices from $H(j)$ can be at distance $2j$ from each other, we obtain $|H(j)| \le C\binom{n}{k-1}pk$. Summing over $j < k$ gives
\begin{align*}
|H(k)| & = |B| - \sum_{j=0}^{k-1} |H(j)| \\
& \geq |B| - C\binom{n}{k-1}pk^2.
\end{align*}
Therefore for a vertex $v \in H(k),$ since $pk^2 \le \delta\tfrac{n}{k},$
\begin{align*}
\left|\Gamma^{2k}(v) \cap H(k)\right| &\ge |B\cap \Gamma^{2k}(v)| - \left|B\setminus H(k)\right| \\
&\ge \binom{n}{k} - C\binom{n}{k-1}pk - C\binom{n}{k-1}pk^2 \\
&\ge \binom{n}{k}\left(1-3C\delta \right),
\end{align*}
for $n$ sufficiently large. This is positive for $\delta$ sufficiently small so that there must exist two vertices in $H(k)$ at distance $2k$ from each other. Let $u,v \in V$ be such vertices and without loss of generality, suppose that $Z_u = \emptyset$ and $Z_v = [2k]$.
Any vertex in $\Gamma^{2k}(u)\cap \Gamma^{2k}(v) \cap B$ must be of the form $X \cup Y$, where $X \in [2k]^{(k)}$ and $Y \in [2k+1,n]^{(k)}$, and so any such vertex must be at distance $k$ from some vertex in $[2k]^{(k)}$. For $w \in [2k]^{(k)}$, let $f(w) = |\{z \in \Gamma^{2k}(u)\cap \Gamma^{2k}(v) \cap B : d(w,z) = k\}|$. Then we have for sufficiently large $n$ and sufficiently small $\delta,$
\begin{align*}
\sum_{w \in [2k]^{(k)}} f(w) & = |\Gamma^{2k}(u)\cap \Gamma^{2k}(v)\cap B| \\
& \geq \binom{n}{k} - C\binom{n}{k-1}pk \\
&\ge \binom{n}{k}\left(1-2C\delta\right) \ge \frac{1}{2}\binom{n}{k}.
\end{align*}
Hence by the pigeonhole principle, there exists a vertex $w \in [2k]^{(k)}$ for which we have
\[
|\Gamma^k(w) \cap B| \geq \frac{1}{2}\frac{\binom{n}{k}}{\binom{2k}{k}}.
\]
Recall that $k \le \frac{\log n}{3 \log \log n}$, so that $\binom{2k}{k} \le \tfrac{n}{130 k^3}$ for sufficiently large $n$, and hence $|\Gamma^k(w) \cap B| \ge \tfrac{65k^3}{n}\binom{n}{k}$. By Claim \ref{claim:cleaning} we then have $|\Gamma^k(w) \cap B| \geq \binom{n}{k} - C \binom{n}{k-1}pk$, and since $B \subseteq A$ the same bound holds for $|\Gamma^k(w) \cap A|$, proving Theorem \ref{thm:f-k-stability}.
\end{proof}
We now complete our argument by proving Claims \ref{claim:discard} and \ref{claim:cleaning}.
\begin{proof}[Proof of Claim \ref{claim:discard}]
Let us run the following algorithm.
\begin{algorithm}[H]
\SetKw{KwFn}{Initialization}
\KwFn{Set $i=0$, $B_0=A$}\;
\While{$\exists S \subseteq B_i$ such that $|\Gamma(S) \setminus \Gamma(B_i \setminus S)| > |S|\frac{n}{(k+1)} \left( 1+\frac{1}{8k} \right)$}{
pick such an $S$\;
set $i = i+1$\;
set $L_i = S$\;
set $B_i = B_{i-1} \setminus S$\;
}
\label{discardalgorithm}
\end{algorithm}
Suppose that the algorithm terminates when $i=m$. Since the sets $L_1, \ldots, L_m, B_m$ partition $A$, for any $w \in \Gamma(A)$ we either have $w \in \Gamma(B_m)$, or $w \notin \Gamma(B_m)$ and there is some maximum $i$ such that $w \in \Gamma(L_i)$. This gives
\[
|\Gamma(A)| = \sum_{i=1}^m|\Gamma(L_i) \setminus \Gamma(B_{i-1}\setminus L_i)| + |\Gamma(B_m)|.
\]
Recall that for each $i \leq m$ we have $|\Gamma(L_i) \setminus \Gamma(B_{i-1} \setminus L_i)| > |L_i|\frac{n}{k+1}(1+\tfrac{1}{8k})$, and so
\[
|\Gamma(A)| \ge |A \setminus B_m|\frac{n}{k+1} \left( 1+\frac{1}{8k} \right) + |\Gamma(B_m)|.
\]
Corollary \ref{cor:harperBound} gives $|\Gamma(B_m)| \ge |B_m|\frac{n}{k+1} - 2 \binom{n}{k}$. Therefore
\begin{align*}
|\Gamma(A)| & \geq |A \setminus B_m| \frac{n}{k+1} + |A \setminus B_m|\frac{n}{8k(k+1)} + |B_m| \frac{n}{k+1} - 2 \binom{n}{k} \\
& = |A| \frac{n}{k+1} + |A \setminus B_m|\frac{n}{8k(k+1)} - 2 \binom{n}{k} \\
& = \frac{n!}{k!(n-k)!} \frac{n}{k+1} + |A \setminus B_m|\frac{n}{8k(k+1)} - 2 \binom{n}{k} \\
& \geq \binom{n}{k+1} + |A \setminus B_m|\frac{n}{8k(k+1)} - 2 \binom{n}{k}.
\end{align*}
Since by assumption $|\Gamma(A)| \le \binom{n}{k+1} + \binom{n}{k}p$ and $p \geq \rho$, we obtain
\begin{align*}
|A \setminus B_m| & \leq \left( \binom{n}{k}p + 2 \binom{n}{k} \right) \frac{8k(k+1)}{n} \\
& \leq \left (1+\frac{2}{\rho} \right ) \binom{n}{k} \frac{8pk(k+1)}{n} \\
& \leq \left (16+ \frac{32}{\rho} \right ) \binom{n}{k}\frac{pk^2}{n-k+1} \\
& = D \binom{n}{k-1}pk.
\end{align*}
Setting $B = B_m$ we obtain the desired result.
\end{proof}
\begin{proof}[Proof of Claim \ref{claim:cleaning}]
Let $B$ be the set given by Claim \ref{claim:discard} (so $|B| \geq \binom{n}{k} - D \binom{n}{k-1} p k$). Let $v \in V(Q_n)$ be such that for some $\ell \in [k,2k]$ we have $|B \cap \Gamma^{\ell}(v)| \geq \frac{65k^3}{n}\binom{n}{k}$. (Without loss of generality we again assume that $v=(0,\ldots,0)$, so that $Z_v = \emptyset$.) If we also have $|B \cap \Gamma^{\ell}(v)| \leq \frac{1}{2}\binom{n}{k}$ then by Lemma \ref{expand} we have
\[
|B \cap \Gamma^{\ell+2}(v)| \geq \frac{n}{64k^3} \frac{65k^3}{n}\binom{n}{k} > \binom{n}{k}
\]
which contradicts the fact that $|B| \le \binom{n}{k}$. Therefore we may assume that $|B \cap \Gamma^{\ell}(v)| \ge \frac{1}{2}\binom{n}{k}$ and so $|A \cap \Gamma^{\ell}(v)| \ge \frac{1}{2}\binom{n}{k}$ and $|A \setminus \Gamma^{\ell}(v)| \le \frac{1}{2}\binom{n}{k}$. Recall that $k \geq 1$ and $p \geq \rho$. Since
\[
pk+2 \leq pk(1+2/\rho) < Cpk,
\]
if $|A \cap \Gamma^\ell(v)| \geq \binom{n}{k} - \binom{n}{k-1}(pk+2)$ then we are done. Hence, throughout the proof, we assume $|A \setminus \Gamma^\ell(v)| \ge \binom{n}{k-1}(pk+2)$.
We can bound the size of the neighbourhood of $A$ from below as follows: We count the neighbours of $A \cap \Gamma^{\ell}(v)$ in $\Gamma^{\ell+1}(v)$ (ignoring the neighbours in $\Gamma^{\ell-1}(v)$), and then we add the neighbours of $A \setminus \Gamma^{\ell}(v)$ that are not in $\Gamma^{\ell+1}(v)$. The latter quantity can again be bounded from below by using the fact that any vertex $u$ in $A \setminus \Gamma^{\ell}(v)$ has either $\ell+2$ neighbours in $\Gamma^{\ell+1}(v)$ (if $u \in \Gamma^{\ell+2}(v)$), or otherwise no such neighbours at all. Therefore we have
\begin{equation}
|\Gamma(A)| \ge |\Gamma(A \cap \Gamma^{\ell}(v))\cap \Gamma^{\ell+1}(v)| + |\Gamma(A \setminus \Gamma^{\ell}(v))| - |A \setminus \Gamma^{\ell}(v)|(\ell+2). \label{count1}
\end{equation}
As we remarked at the beginning of the proof, we may assume $|A \setminus \Gamma^\ell(v)| \geq \binom{n}{k-1}(pk+2)$. By Theorem \ref{harpermod}, $|\Gamma(A\setminus \Gamma^\ell(v))|$ is at least as large as the upper shadow of the first $|A\setminus \Gamma^\ell(v)| - \sum_{i=0}^{k-1}\binom{n}{i}$ elements of $[n]^{(k)}$ according to the $<_L$ order. Write
\begin{equation}
\label{eq:cDefn}
c\binom{n}{k} = |A \setminus \Gamma^{\ell}(v)|-\sum_{i=0}^{k-1}\binom{n}{i},
\end{equation}
and observe that by the assumption that $|A \setminus \Gamma^{\ell}(v)| \le \frac{1}{2}\binom{n}{k}$ we have $c \leq 1/2$.
Let $\alpha \in (0,1)$ be such that
\[
c\binom{n}{k} = \binom{n}{k} - \binom{\alpha n}{k}.
\]
Denoting by $W$ the set of the first $c\binom{n}{k}$ elements of $[n]^{(k)}$ according to the $<_L$ order, by Lemma \ref{kruskalkatona} and Corollary \ref{cor:f+monotone} we have
\begin{align*}
|\Gamma(A \setminus \Gamma^{\ell}(v))| & \ge |\partial^+(W)| \geq |W| \frac{\binom{n}{k+1} - \binom{\alpha n}{k+1}}{\binom{n}{k} - \binom{\alpha n}{k}} = c\binom{n}{k}\frac{\binom{n}{k+1} - \binom{\alpha n}{k+1}}{\binom{n}{k} - \binom{\alpha n}{k}}.
\end{align*}
(As in Lemma \ref{expand} we refrain from ensuring things are integer valued for ease of reading.)
Recalling the relation between $\alpha$ and $c$, Lemma \ref{kruskbound} gives
\begin{equation}
\label{eq:finalEstimate1}
|\Gamma(A \setminus \Gamma^{\ell}(v))| \geq c\binom{n}{k}\frac{n-k}{k+1} \left( 1+\frac{1-c}{k} \right).
\end{equation}
We clearly have
\[
|\Gamma(A \cap \Gamma^{\ell}(v))\cap \Gamma^{\ell+1}(v)| = |\partial^+(A \cap \Gamma^{\ell}(v))|.
\]
As we mentioned earlier, for a family $\mathcal{A} \subseteq [n]^{(\ell)}$ we have $\partial^+\mathcal{A} = (\partial \mathcal{A}^c)^c$, thus by Theorem \ref{thm:kruskat} the size of the upper shadow of $\mathcal{A}$ is minimised when $\mathcal{A}^c$ is isomorphic to the initial segment of colex $<_C$ on $[n]^{(n-\ell)}$, i.e., when $\mathcal{A}$ is isomorphic to the initial segment of lex $<_L$ on $[n]^{(\ell)}$.
Since $p$ is bounded from below by $\rho$, we have
\[
k \leq pk/\rho < Cpk.
\]
Thus, if $|A \cap \Gamma^{\ell}(v)| \geq \binom{n}{k} - \binom{n}{k-1}k$ then again the claim holds and there is nothing to prove. Hence, we may assume that $\tfrac{1}{2} \binom{n}{k} \leq |A \cap \Gamma^{\ell}(v)| \leq \binom{n}{k} - \binom{n}{k-1}k$. Applying Pascal's rule $k$ times, we have
\begin{align*}
|A \cap \Gamma^{\ell}(v)| & \leq \binom{n}{k} - \binom{n}{k-1}k \\
& = \binom{n-1}{k} + \binom{n-1}{k-1} - \binom{n}{k-1}k \\
& \leq \binom{n-1}{k} - \binom{n}{k-1}(k-1) \leq \ldots \leq \binom{n-k}{k}.
\end{align*}
Recall also that we have $k \leq \ell \leq 2k$. This implies that $\binom{n-k}{k} \leq \binom{n-(\ell-k)}{k}$. Thus every set in the initial segment of size $|A \cap \Gamma^{\ell}(v)|$ of $<_L$ on $[n]^{(\ell)}$ consists of the set $[\ell - k]$ union one of the $\binom{n-(\ell-k)}{k}$ subsets of $[\ell-k+1,n]$ of size $k$. Hence we can again imagine removing $[\ell-k]$ from all sets in our segment and instead working in $[\ell-k+1,n]$. We now have an initial segment of size $|A \cap \Gamma^{\ell}(v)|$ in the $<_L$ order in $[\ell-k+1,n]^{(k)}$ which we denote by $\mathcal{H}$. Then \eqref{eq:locLYMupper}, together with the fact that $\ell \leq 2k$, gives
\begin{align}
|\partial^+(A \cap \Gamma^{\ell}(v))| & \ge |\partial^+(\mathcal{H})| \nonumber \\
& \ge |A \cap \Gamma^{\ell}(v)|\frac{n-(\ell-k)-k}{k+1} \nonumber \\
& = |A \cap \Gamma^{\ell}(v)| \left ( \frac{n-k}{k+1} - \frac{\ell-k}{k+1}\right ) \nonumber \\
& \geq |A \cap \Gamma^{\ell}(v)| \frac{n-k}{k+1} - |A \cap \Gamma^{\ell}(v)| \nonumber \\
& \geq |A \cap \Gamma^{\ell}(v)|\frac{n-k}{k+1} - \binom{n}{k}. \label{eq:finalEstimate2}
\end{align}
The facts that $|A \setminus \Gamma^{\ell}(v)| \leq \frac{1}{2}\binom{n}{k}$ and $\ell \leq 2k$ imply that
\begin{equation}
\label{eqn:sillyBound}
|A \setminus \Gamma^{\ell}(v)|(\ell+2) \leq \frac{1}{2}\binom{n}{k} (2k+2) \leq 2k\binom{n}{k}.
\end{equation}
Hence we can rewrite~\eqref{count1} using \eqref{eq:finalEstimate1}, \eqref{eq:finalEstimate2}, and \eqref{eqn:sillyBound}, to obtain
\begin{align*}
|\Gamma(A)| & \geq |A \cap \Gamma^{\ell}(v)|\frac{n-k}{k+1} - \binom{n}{k} + c\binom{n}{k}\frac{n-k}{k+1} \left( 1+\frac{1-c}{k} \right) -2\binom{n}{k}k \\
& = \left ( |A \cap \Gamma^{\ell}(v)| + c\binom{n}{k} \right ) \frac{n-k}{k+1} + \frac{c(1-c)}{k} \binom{n}{k} \frac{n-k}{k+1} -3\binom{n}{k}k.
\end{align*}
Since we defined $c\binom{n}{k} = |A \setminus \Gamma^{\ell}(v)|-\sum_{i=0}^{k-1}\binom{n}{i}$, and also we have $c \leq 1/2$, we obtain
\begin{align*}
|\Gamma(A)| & \geq \left(|A| - \sum_{i=0}^{k-1}\binom{n}{i} \right) \frac{n-k}{k+1} + \frac{c}{2k}\binom{n}{k+1} - 3\binom{n}{k}k \\
& \geq \binom{n}{k+1} - k \binom{n}{k-1} \frac{n-k}{k+1} + \frac{c}{2k}\binom{n}{k+1} -3\binom{n}{k}k \\
& \geq \binom{n}{k+1} + \frac{c}{2k}\binom{n}{k+1} - 4\binom{n}{k}k.
\end{align*}
Since we assume $|\Gamma(A)| \leq \binom{n}{k+1} + \binom{n}{k}p$, and $k \leq \kappa p$, we must have
\[
c \le \frac{2k}{\binom{n}{k+1}} \binom{n}{k}(p+4k) \leq \frac{2pk(k+1)(1+4\kappa)}{n-k}.
\]
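In more detail, combining the assumed upper bound on $|\Gamma(A)|$ with the lower bound just obtained yields
\[
\frac{c}{2k}\binom{n}{k+1} \leq \binom{n}{k}(p+4k)\,,
\]
and the stated estimate follows from $\binom{n}{k}\big/\binom{n}{k+1} = (k+1)/(n-k)$ together with $4k \leq 4\kappa p$.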
By the definition of $c$ in \eqref{eq:cDefn}, we then have
\begin{align*}
|A \setminus \Gamma^\ell(v)| &\leq \sum_{i=0}^{k-1}\binom{n}{i} + \frac{2pk(k+1)(1+4\kappa)}{n-k}\binom{n}{k} \\
& \leq k \binom{n}{k-1} + \frac{8pk^2(1+4\kappa)}{n-k+1}\binom{n}{k} \\
& = k\binom{n}{k-1} +8pk(1+4\kappa)\binom{n}{k-1} \leq \binom{n}{k-1}pk(8+32\kappa + 1/\rho),
\end{align*}
and so $|B \setminus \Gamma^{\ell}(v)| \leq (8+32\kappa + 1/\rho)\binom{n}{k-1} pk$. Since $|B| \geq \binom{n}{k} - D \binom{n}{k-1} p k$, we then have
\begin{align*}
|B \cap \Gamma^{\ell}(v)| \geq \binom{n}{k} - \left(D+(8+32\kappa + 1/\rho) \right) \binom{n}{k-1} p k = \binom{n}{k} - C \binom{n}{k-1}pk.
\end{align*}
\end{proof}
\noindent
{\bf Acknowledgement} \hspace{.02in}
The authors would like to thank Alex Scott for helpful initial discussions of the problem considered in this paper. During a large part of this project, the first author was affiliated with the Mathematical Institute of the University of Oxford.
\bibliographystyle{plain}
\section{Introduction}
There are multidimensional physical problems, modelled by partial differential equations, posed on networks of spatial domains that are essentially straight. In such cases the governing equations can be assumed to be one-dimensional (1D), potentially resulting in significant computing savings. Examples include gas flow in pipes
\cite{Banda:2006a, Brouwer:2011a, Bales:2009a,Reigstad:2015a,Bermudez:2017},
traffic flow \cite{Coclite:2005a, Borsche:2014c, Bretti:2007a},
water flows \cite{Kesserwani,AkanYen,AralZhangJin,Zhang, Borsche:2014b, Kesserwani:2008a}
and blood flow in the human circulatory system
\cite{Quarteroni:2000a,Formaggia:1999a,Olufsen:2000a, Sherwin2003,Miglio2005_1,Alastruey:2011a,Liang:2009a,Liang:2009b,FullanaZaleski,Liang:2014a,Mueller:2014a,Mueller:2014b,Toro:2015a}. The challenge, however, is how to connect these 1D domains in a way that accounts for the multidimensional character of the equations, even if only in an approximate manner.
Current methods are reported to perform well in most cases. However, a shortcoming of existing methods is their inability to deal with transcritical and supercritical flow through junctions. In some cases these methods fail even for subcritical flows at moderately high Froude numbers. Transcritical and supercritical flows are important flow regimes that may occur more often than is commonly assumed, for example locally at junctions. In physiological flows this is found to be the case in the venous system, under postural changes. In open channel flows the occurrence of supercritical flows is not rare and may take place, for instance, in inundating flows emerging from dam-break events and tsunami waves. Supercritical regimes may also appear in networks of tubes transporting compressible fluids under extreme accidental conditions.
In this paper we present methods for dealing with junctions connecting 1D domains and illustrate the ideas for junctions of 1D shallow water channels. We note that the full problem is governed by the two-dimensional (2D) shallow water equations. The methods presented here make use of the finite volume approach, whereby the true geometry is accounted for locally at junctions, whereas away from junctions, the usual 1D equations are solved. In addition to mass conservation, our methods enforce conservation of momentum at junctions, which constitutes an improvement over methods currently available. It is noted that the approach, as applied to complex networks of channels, can lead to very significant computing savings, as compared to solving the full multidimensional problem, without compromising the solution quality. Systematic assessment of the methods for a variety of flow configurations is carried out.
It is worth noting that a similar approach, combining a 1D model for the channels with a 2D model on unstructured grids for junctions, has been investigated both in hydrodynamics (Miglio et al. \cite{Miglio2005_1}) and in haemodynamics (Formaggia et al. \cite{Formaggia1999,FormaggiaQuarteroni2003}). However, Miglio et al. \cite{Miglio2005_1} use a finite element scheme and investigate only the case of subcritical flows.
The rest of this paper is structured as follows. In Section \ref{sec:mathmodnumerics} we briefly present the underlying mathematical models and the numerical framework upon which the proposed methods are constructed. Next, the novel methodology for coupling 1D domains at junctions is illustrated in Section \ref{sec:method}. Section \ref{sec:results} is devoted to the validation of the proposed methods, while concluding remarks are presented in Section \ref{sec:discussion}. \ref{section_1Dexisting} describes an existing method for junctions, and \ref{chapter_theoretical} provides details on the 2D unstructured-mesh method used to produce the reference solutions against which the methods presented in this paper are assessed.
\section{Mathematical models and numerical method} \label{sec:mathmodnumerics}
We are concerned with free-surface shallow water flow under gravity in a network system consisting of interconnected straight (or essentially straight) channels joined at junctions, as illustrated in Fig. \ref{introduzione_network}. The flow field has a significant 2D structure only in the vicinity of junctions, while it is essentially 1D along the straight channels, away from junctions. The purpose of this work is to develop a method that combines the use of a 1D model in channels and a 2D model only at junctions, coupling these two models with appropriate matching conditions. As we shall show later, the resulting methods show a huge computational efficiency gain with respect to solving the full 2D equations on the entire domain.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{figura2.png}
\caption{Example of a channel network. Regions with two-dimensional behaviour are encircled and zoomed in. }
\label{introduzione_network}
\end{figure}
The methods proposed in this paper adopt the finite volume framework and combine the 2D equations, together with the corresponding local geometry, in a neighbourhood of junctions, with the 1D equations in the straight channels. Next, we recall the governing equations and the finite volume method.
\subsection{The shallow water equations}
The time-dependent, non-linear, 2D shallow water equations written in conservation form read
\begin{equation} \label{swe1}
\partial_{t}{\bf Q} + \partial_{x}{\bf F}({\bf Q}) + \partial_{y}{\bf G}({\bf Q}) = {\bf S} ({\bf Q}) \;,
\end{equation}
with
\begin{equation} \label{swe2}
\left.
\begin{array}{ccc}
{\bf Q}
= \left[ \begin{array}{c}
h \\
h u \\
h v \\
\end{array} \right] \;,\hspace{1mm} &
{\bf F}({\bf Q})
= \left[ \begin{array}{c}
h u \\
h u^{2}+\frac{1}{2} g h^{2} \\
h u v
\end{array} \right] \;, \hspace{2mm} &
{\bf G}({\bf Q})
= \left[ \begin{array}{c}
h v \\
h v u \\
h v^{2}+\frac{1}{2} g h^{2}
\end{array} \right] \;, \\
\\
& {\bf S}({\bf Q})
= \left[ \begin{array}{c}
0 \\
gh(S_{ox}-S_{fx}) \\
gh(S_{oy}-S_{fy}) \\
\end{array} \right] \:. &
\end{array} \right\}
\end{equation}
Here ${\bf Q}$ is the vector of conserved variables; ${\bf F}({\bf Q})$ and ${\bf G}({\bf Q})$ are the fluxes in the $x$ and $y$ directions, respectively, and ${\bf S} ({\bf Q})$ is the vector of source terms. The physical variables are water depth $h(x,y,t)$ and velocity components $u(x,y,t)$ and $v(x,y,t)$, in the $x$ and $y$ directions respectively. In this paper the source term vector ${\bf S}({\bf Q})$ accounts for the variation of the bottom topography
\begin{equation} \label{swe3}
\begin{array}{ccc}
S_{ox}=-\partial_xb(x,y) & \quad\mbox{and}\quad\; & S_{oy}=-\partial_yb(x,y)
\end{array}
\end{equation}
and the bed friction
\begin{equation}\label{swe4}
\begin{array}{ccc}
S_{fx}=\displaystyle{\frac{n^2\,u\,\sqrt{u^2+v^2}}{h^{4/3}}} & \quad\mbox{and}\quad \;
& S_{fy}=\displaystyle{\frac{n^2\,v\,\sqrt{u^2+v^2}}{h^{4/3}}}\;.
\end{array}
\end{equation}
Here $b(x,y)$ represents the bottom elevation above a horizontal datum, $n$ is Manning's roughness coefficient and $g$ is the acceleration due to gravity. In this work we only consider channels with horizontal bottom, that is $S_{ox}=0$, $S_{oy}=0$. Equations (\ref{swe1}) form a system of partial differential equations of hyperbolic type. For background on the shallow water equations and associated numerical methods see, for instance, \cite{Toro2001} and the many references therein.
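For concreteness, a minimal Python sketch of the evaluation of the friction terms (\ref{swe4}) is given below; all names are illustrative and do not refer to our implementation.
\begin{verbatim}
def friction_source(h, u, v, n_manning):
    # Manning friction terms S_fx and S_fy of the friction law above;
    # the bed slope terms vanish for the horizontal bottom considered here
    speed = (u * u + v * v) ** 0.5
    s_fx = n_manning ** 2 * u * speed / h ** (4.0 / 3.0)
    s_fy = n_manning ** 2 * v * speed / h ** (4.0 / 3.0)
    return s_fx, s_fy
\end{verbatim}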
The one-dimensional version of (\ref{swe1}), in the generic $s$ direction, reads
\begin{equation}\label{swe5}
\partial_{t}{\bf Q}_{1D} + \partial_{s}{\bf F}_{1D}({\bf Q}_{1D}) = {\bf S}_{1D}({\bf Q}_{1D}) \;,
\end{equation}
with
\begin{equation}\label{swe6}
\begin{array}{lll}
{\bf Q}_{1D}
= \left[ \begin{array}{c}
h_{1D} \\
h_{1D} u_{1D} \\
\end{array} \right] \;,\hspace{3mm} &
{\bf F}_{1D}({\bf Q}_{1D})
= \left[ \begin{array}{c}
h_{1D} u_{1D} \\
h_{1D} u^{2}_{1D}+\frac{1}{2} g h^{2}_{1D} \\
\end{array} \right] \;,\hspace{3mm} &
{\bf S}_{1D}({\bf Q}_{1D})
= \left[ \begin{array}{c}
0 \\
gh_{1D}(S_{os}-S_{fs}) \\
\end{array} \right] \;,
\end{array}
\end{equation}
where $h_{1D} = h$ and $u_{1D}$ is the velocity along $s$.
\subsection{Rotational invariance and numerical method}
As previously stated, the methods proposed in this paper combine 1D equations (\ref{swe5}) in a generic direction $s$ and the 2D equations (\ref{swe1}), locally, at a single 2D element with an arbitrary number of edges. Thus, it is convenient to recall the rotational invariance property of equations (\ref{swe1}). First, let us define an arbitrary 2D spatial control volume $V$ with boundary $\Omega$, as depicted in Fig. \ref{RotInv}, top frame. Equations (\ref{swe1}), expressed in integral form, read
\begin{equation} \label{rotinv1}
\frac{\partial}{\partial t} \int_{V} {\bf Q}\, dV+
\int_{\Omega}\left[\cos\theta {\bf F}({\bf Q})+\sin\theta
{\bf G}({\bf Q})\right]\,d \Omega = {\bf 0} \;.
\end{equation}
Moreover, we define the outward unit vector normal to $\Omega$, $\bf n$, as
\begin{equation} \label{rotinv2}
{\bf n}\equiv \left[n_{1},n_{2}\right] \equiv \left[\cos\theta,\sin\theta\right]\;.
\end{equation}
The top frame of Fig. \ref{RotInv} depicts the control volume $V$ in the Cartesian plane, where $x$ denotes the chosen reference direction, while the bottom frame depicts a typical computational finite volume with five vertices and five edges.
\begin{figure}
\centerline{
\includegraphics[scale=0.40,angle=-90]{RotInv.pdf}
}
\vspace{5.0mm}
\centerline{
\includegraphics[scale=0.25,angle=-90]{FigFV-sw.pdf}
}
\caption{Control volumes, rotational invariance and finite volumes. Top frame: arbitrary control volume $V$ in Cartesian plane; the $x$-direction is the
reference direction, $\theta$ is angle between the outward unit normal vector ${\bf n}$ and the
reference $x$-direction. Bottom frame: typical finite
volume in the Cartesian plane.}
\label{RotInv}
\end{figure}
Equations (\ref{swe1}) satisfy the rotational invariance property \cite{Toro2001}
\begin{equation} \label{rotinv3}
{\bf H} \equiv {\bf n} \cdot \left[{\bf F}({\bf Q}), {\bf G}({\bf Q})\right] =
\cos\theta {\bf F}({\bf Q}) + \sin\theta {\bf G}({\bf Q}) =
{\bf T}^{-1}{\bf F}\left({\bf T}({\bf Q})\right) \;
\end{equation}
for all vectors ${\bf Q}$ and for all real angles $\theta$, or equivalently, normal directions of $\Omega$. Here ${\bf T} = {\bf T}(\theta)$ is a rotation matrix and ${\bf T}^{-1}(\theta)$ is its inverse, given respectively as
\begin{equation} \label{rotinv4}
{\bf T}=\left[\begin{array}[c]{ccc}
1 & 0 & 0 \\
0 & \cos\theta & \sin\theta \\
0 & -\sin\theta & \cos\theta \\
\end{array}\right] \;,\hspace{2mm}
{\bf T}^{-1}=\left[\begin{array}[c]{ccc}
1 & 0 & 0 \\
0 & \cos\theta & -\sin\theta \\
0 & \sin\theta & \cos\theta \\
\end{array}\right] \;.
\end{equation}
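As an illustration, the rotational invariance property (\ref{rotinv3}) may be verified numerically; the following minimal Python sketch, with an arbitrarily chosen state and angle, is included only as an aid to the reader.
\begin{verbatim}
import numpy as np

g = 9.81  # acceleration due to gravity

def F(Q):
    # x-flux of the 2D shallow water equations, Q = (h, hu, hv)
    h, hu, hv = Q
    return np.array([hu, hu**2 / h + 0.5 * g * h**2, hu * hv / h])

def G(Q):
    # y-flux
    h, hu, hv = Q
    return np.array([hv, hu * hv / h, hv**2 / h + 0.5 * g * h**2])

def T(theta):
    # rotation matrix T(theta) and, via inversion, its inverse
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

Q, theta = np.array([2.0, 3.0, -1.0]), 0.7   # arbitrary state and angle
lhs = np.cos(theta) * F(Q) + np.sin(theta) * G(Q)
rhs = np.linalg.inv(T(theta)) @ F(T(theta) @ Q)
assert np.allclose(lhs, rhs)                 # rotational invariance holds
\end{verbatim}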
Now, we choose a computational control volume $V_{k}$ in two-dimensional space, as shown for example in the bottom frame of Fig. \ref{RotInv}. Moreover, we define a space-time control volume $I_k = [t^{n},t^{n+1}] \times V_k$, over which we integrate (\ref{swe1}), yielding
\begin{equation} \label{rotinv5}
{\bf Q}_{k}^{n+1} = {\bf Q}_{k}^{n} -\frac{\Delta t}{ |V_{k}| }\sum_{e=1}^{N}{\cal F}_{e} \;,
\end{equation}
with cell averages defined as
\begin{equation} \label{rotinv6}
{\bf Q}^{n}_{k} = \frac{1}{ |V_{k}|} \int_{V_{k}} {\bf Q}(x,y,t ^{n})\, d V \;,
\end{equation}
where $|V_{k}|$ denotes the volume of $V_{k}$ (or area in the 2D case), while the intercell flux for edge $e$ is
\begin{equation} \label{rotinv7}
{\cal F}_{e} =
\int_{A_{e}}^{A_{e+1}} {\bf T_{e}}^{-1}{\bf F}\left({\bf T_{e}}({\bf Q})\right)\, d A \approx
{\cal{L}}_{e} {\bf T}_{e}^{-1} \hat{{\bf F}}_{e} \;.
\end{equation}
Here ${\cal{L}}_{e}$ is the length of edge $e$, the segment $A_{e}A_{e+1}$. $\hat{{\bf F}}_{e} \approx {\bf F}\left({\bf T_{e}}({\bf Q})\right)$ is an approximation to the flux ${\bf F}$ on the edge $e$ evaluated at the rotated state ${\bf T_{e}}({\bf Q})$, where the rotation is performed in the normal direction to side $e$ through the transformation matrix ${\bf T_{e}}$. The final expression of the finite volume scheme becomes
\begin{equation} \label{rotinv8}
{\bf Q}_{k}^{n+1} = {\bf Q}_{k}^{n} -\frac{\Delta t}{ |V_{k}| }\sum_{e=1}^{N} {\cal{L}}_{e} {\bf T}_{e}^{-1} \hat{{\bf F}}_{e} \;.
\end{equation}
For completeness we now illustrate the computation of the flux $\hat{{\bf F}}_{e}$ for an arbitrary edge $e$, see Fig. \ref{fig:genericcontrolvolumeside}. Conventionally, the left side $L$ of edge $e$ is always in the interior of the control volume of interest and the right side $R$ is outside. The computation of the numerical flux $\hat{{\bf F}}_{e}$, as for a first-order Godunov-type method for example, involves the augmented, local one-dimensional Riemann problem in the rotated frame normal to the edge, namely
\begin{equation} \label{rotinv9}
\left.
\begin{array}{ll}
\mbox{PDEs in normal direction:} & \partial_{t} {\bf Q}_{1D} + \partial_{s} {\bf F}_{1D}({\bf Q}_{1D}) = {\bf 0} \;, \\
\\
\mbox{Rotated initial conditions:} & {\bf Q}_{1D}(s,0) = \left\{ \begin{array}{lll}
{\bf Q}_{1D,L} = {\bf T}_{e}({\bf Q}_{2D,L}) & \mbox{if} & s < 0 \; , \\
\\
{\bf Q}_{1D,R} = {\bf T}_{e}({\bf Q}_{2D,R}) & \mbox{if} & s > 0 \; .
\end{array} \right.
\end{array}\right\}
\end{equation}
The steps to be followed in order to solve problem (\ref{rotinv9}) can be summarised as follows (a minimal code sketch of these steps is given below):
\begin{enumerate}
\item Calculate the angle $\theta_{e}$ between the outward unit normal to edge $e$ and the fixed reference direction $x$, taken positive in the counter-clockwise direction.
\item Calculate the corresponding rotation matrix ${\bf T}_{e}$ and its inverse from (\ref{rotinv4}).
\item Rotate left and right data as in (\ref{rotinv9}).
\item Solve the 1D Riemann problem (\ref{rotinv9}) on rotated data and compute the corresponding flux $\hat{\bf F}_{e}$.
\item Rotate back the flux as in (\ref{rotinv3}) and multiply it by edge length to get the final intercell numerical flux for edge $e$.
\end{enumerate}
Once numerical fluxes for all edges have been calculated, element $k$ can be updated through the finite volume formula (\ref{rotinv5}). This description applies to any two-dimensional finite volume method on a general mesh, assumed here to be unstructured. More details are given in \ref{chapter_theoretical}. However, for the junction method proposed in this paper, we only use the above description at a single 2D element placed right at the junction.
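A minimal Python sketch of steps 1--5 follows; \texttt{hllc\_flux} is a placeholder for any routine returning the flux of the rotated 1D Riemann problem, and all names are illustrative rather than taken from our implementation.
\begin{verbatim}
import numpy as np

def edge_flux(Q_L, Q_R, n_x, n_y, length, hllc_flux):
    # Steps 1-2: rotation matrix built from the outward unit
    # normal (n_x, n_y) = (cos theta_e, sin theta_e), and its inverse
    T    = np.array([[1.0, 0.0, 0.0], [0.0,  n_x,  n_y], [0.0, -n_y, n_x]])
    Tinv = np.array([[1.0, 0.0, 0.0], [0.0,  n_x, -n_y], [0.0,  n_y, n_x]])
    # Step 3: rotate left and right data into the edge-normal frame
    qL, qR = T @ Q_L, T @ Q_R
    # Step 4: solve the rotated 1D Riemann problem
    F_hat = hllc_flux(qL, qR)
    # Step 5: rotate the flux back and scale by the edge length
    return length * (Tinv @ F_hat)
\end{verbatim}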
\begin{figure}
\centerline{
\includegraphics[scale=0.35,angle=0]{VolumeSide.png}
}
\vspace{3mm}
\caption{Generic edge of a control volume $V$ in $x$-$y$ space, where by convention the left side L lies inside the control volume and the right side R lies outside. The outward unit normal vector, with respect to the fixed reference direction $x$, is depicted, as well as its corresponding angle.}
\label{fig:genericcontrolvolumeside}
\end{figure}
\section{A methodology for channel junctions/bifurcations} \label{sec:method}
The geometrical and numerical approaches of our proposed junction method are described below.
\subsection{The approach}
In short, our method for a configuration as shown in Fig. \ref{introduzione_network} uses 1D formulations for every straight channel and a single 2D element at each junction, as depicted in Fig. \ref{methodA}. The 2D subdomain is then linked to 1D channels through appropriate matching conditions, to be described. As previously noted, similar methods have been investigated in the past, both in hydrodynamics \cite{Miglio2005_1} and in haemodynamics \cite{Formaggia1999, FormaggiaQuarteroni2003}). Miglio et al. \cite{Miglio2005_1} used a finite element scheme and investigated only the case of subcritical flows. In the present work we are interested in general configurations and, principally, in all possible flow regimes: subcritical, transcritical and supercritical. Our approach is independent of the particular numerical method chosen for solving the shallow water equations, but here we implement first, second and third order accurate Godunov-type finite volume methods in the ADER framework \cite{Toro:2009a}.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{methodA.png}
\caption{Single 2D element at junction. The single element exchanges fluxes with all connected 1D domains. At solid walls of the rectangular cross-section channel, suitable reflective boundary conditions apply through the corresponding numerical fluxes.}
\label{methodA}
\end{figure}
We remark that the choice of the shape of the 2D junction element is important and there are many possible choices for fitting a single finite volume at the junction. After investigating several possibilities we concluded that the best choice is that of a {\it junction-shaped} 2D element, as displayed in Fig. \ref{methodA}. This 2D element protrudes into the 1D converging channels by 0.1 times the channel width, incorporating in this manner geometrical information on the direction of the 1D domains. Other choices for the shape of the 2D element were explored in \cite{TesiBellamoli}. The resulting method is called {\bf Method A} throughout this paper. A simple variation, called {\bf Method B}, results from the insertion of a local 2D unstructured grid composed of more than one element to represent the junction and its vicinity.
Regarding the numerical methodology for the 1D and the 2D shallow water equations we use
Godunov-type methods with the approximate Riemann solver HLLC \cite{HLLC}. First, second and third order accurate methods are implemented. The high-order methods follow the ADER approach \cite{ADER} with the Harten-type method to solve the generalised Riemann problem \cite{Harten}. For background on the ADER approach see chapters 19 and 20 of \cite{Toro:2009a}.
The time step $\Delta t$ is computed by imposing a CFL condition on both the 1D elements and the 2D junction elements in the usual manner. Then $\Delta t$ is taken as the minimum among all these time steps and applied to the full domain; a sketch of this computation is given below. Note that in the 2D case the maximum CFL number for stability is $CFL_{2D} = CFL_{1D}/2$. In what follows we address in more detail each of the issues arising from the coupling of 1D domains and 2D elements.
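A minimal Python sketch of this global time-step computation, assuming the usual wave speed estimate $|u|+\sqrt{gh}$ per cell (names and data layout are illustrative), is:
\begin{verbatim}
import math

def global_time_step(cells_1d, cells_2d, g=9.81, cfl_1d=0.9):
    # cells_* are lists of (mesh size, depth h, speed magnitude) per cell
    def dt(cells, cfl):
        return cfl * min(dx / (s + math.sqrt(g * h)) for dx, h, s in cells)
    dt_1d = dt(cells_1d, cfl_1d)
    dt_2d = dt(cells_2d, cfl_1d / 2.0)   # CFL_2D = CFL_1D / 2
    return min(dt_1d, dt_2d)             # one global step for the network
\end{verbatim}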
\subsection{Computing two-dimensional and one-dimensional fluxes}
We calculate the 2D fluxes by solving the rotated 1D Riemann problem (\ref{rotinv9}) in local coordinates. To this end we use the HLLC approximate Riemann solver \cite{HLLC}. As initial data we have $x$-velocity and $y$-velocity components in the 2D domain, where we use a global, predefined reference system, and axial velocity and transverse velocity (which is zero) in the 1D domain, where we use a local reference system; see Fig. \ref{2DFluxesHLLC_1_MOD}. However, for the computation of 2D fluxes we need $x$-velocity and $y$-velocity components both to the right and to the left of each edge of the element. Therefore, we need to rotate the vectors of conserved variables as follows:
\begin{equation}
{\bf Q}_{R}=\left[\begin{array}{c}
h_{2D}\\ h_{2D}u_{2D}\\ h_{2D}v_{2D}
\end{array}\right] \qquad
{\bf Q}_{L}=\left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos(\alpha) & -\sin(\alpha) \\
0 & \sin(\alpha) & \cos(\alpha) \\
\end{array}\right]
\left[\begin{array}{c}
h_{1D}\\ h_{1D}u_{1D}\\ h_{1D}v_{1D}
\end{array}\right]\;,
\end{equation}
where $u_{1D}$, $v_{1D}$ and $h_{1D}$, are the variables in the 1D domain, while $u_{2D}$, $v_{2D}$ and $h_{2D}$ denote the variables in the 2D domain, as depicted in Fig. \ref{2DFluxesHLLC_1_MOD}. For a second or higher order scheme, these data values result from reconstructed polynomials evaluated at the edges. Once the variables are available, we can apply the classical HLLC solver \cite{HLLC} as described in \ref{chapter_theoretical}. For each channel and for each side of the junction there is a reference angle which we call $\alpha$.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{2DFluxesHLLC_1_MOD3.png}
\caption{Reference frame for the 2D element and the 1D domain of the left channel.}
\label{2DFluxesHLLC_1_MOD}
\end{figure}
Reflective boundary conditions are set on the remaining edges of the junction-shaped element, giving rise to symmetric Riemann problems, see \cite{Toro:2009a} for details.
With regard to the 1D channel on the left side of Fig. \ref{2DFluxesHLLC_1_MOD}, the problem is inverted; we need axial velocity and transverse velocity both to the right and to the left of each edge of the 1D cell. The vectors of conserved variables become:
\begin{equation}
{\bf Q}_{L}=\left[\begin{array}{c}
h_{1D}\\ h_{1D}u_{1D}\\ h_{1D}v_{1D}
\end{array}\right] \qquad
{\bf Q}_{R}=\left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos(\alpha) & \sin(\alpha) \\
0 & -\sin(\alpha) & \cos(\alpha) \\
\end{array}\right]
\left[\begin{array}{c}
h_{2D}\\ h_{2D}u_{2D}\\ h_{2D}v_{2D}
\end{array}\right]\;.
\end{equation}
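A minimal Python sketch of these two rotations (names illustrative; the transverse 1D momentum is zero by construction) is:
\begin{verbatim}
import math

def state_1d_to_2d(h, hu, alpha):
    # 1D state expressed in global x-y momentum components
    return (h, hu * math.cos(alpha), hu * math.sin(alpha))

def state_2d_to_1d(h, hu, hv, alpha):
    # global x-y momentum expressed as axial/transverse components
    hun =  hu * math.cos(alpha) + hv * math.sin(alpha)  # axial
    hut = -hu * math.sin(alpha) + hv * math.cos(alpha)  # transverse
    return (h, hun, hut)
\end{verbatim}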
\subsection{Dealing with transverse velocity in 1D domains}
Obviously, in all 1D elements we assume 1D motion and thus the transverse velocity component is zero.
However, a problem arises in the element of a 1D channel adjacent to a 2D junction element, since we might end up with a non-zero transverse velocity there. In the 2D elements, at time $t^{n}$ we will generally have two non-zero velocity components, and consequently we could obtain a non-zero transverse velocity also in the 1D element at time $t^{n+1}$, due to the 2D flux at the edge of the 2D element. To deal with this difficulty we have considered two approaches. One possibility is to simply set the transverse velocity to zero and take the normal velocity component as the 1D axial velocity. The second option, which we prefer, is to calculate the axial 1D velocity as
\begin{equation}
u_{1D}^{n+1}=\text{sign}(u_{1D}^{n+1})\sqrt{(u_{2D}^{n+1})^2+(v_{2D}^{n+1})^2}\;.
\end{equation}
Here the sign is that of the axial velocity resulting from the 1D update. This means that the 2D velocity vector has been rotated in the direction of the 1D channel and consequently the transverse velocity component is zero. Inevitably, in both approaches, momentum balance at the 1D elements adjacent to the 2D element is effectively altered, even though in the 2D element momentum balance is strictly satisfied.
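A one-line Python sketch of the preferred option (the sign being taken from the updated axial velocity, as noted above) is:
\begin{verbatim}
import math

def axial_velocity(u2d, v2d, u_axial):
    # keep the full 2D speed, aligned with the channel axis;
    # u_axial is the axial velocity resulting from the 1D update
    return math.copysign(math.sqrt(u2d * u2d + v2d * v2d), u_axial)
\end{verbatim}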
\subsection{Spatial reconstruction for high-order accuracy}
For 1D cells adjacent to 2D cells we perform a modified version of the spatial reconstruction described in \ref{chapter_theoretical}, by projecting the distance between the centroid of the 2D element and the centre of the 1D cell along the normal to the boundary; see Fig. \ref{1DFluxesHARTEN_2_MOD}.
\begin{figure}[H]
\centering
\subfigure[Cells used for 2D reconstruction.]{\label{2DFluxesHARTEN_1_MOD}
\includegraphics[width=0.4\textwidth]{2DFluxesHARTEN_1_MOD.png}}\qquad
\subfigure[One-dimensional reconstruction.]{\label{1DFluxesHARTEN_2_MOD}
\includegraphics[width=0.4\textwidth]{1DFluxesHARTEN_2_MOD.png}}
\caption{Illustration and notation for the spatial reconstruction in 2D (a) and 1D (b).}
\end{figure}
Concerning 2D elements, particular attention must be paid to the reconstruction process. As in the 1D case, at any given time level $n$ one has a set of constant volume averages that are approximations to integral averages within each finite volume. For a second-order scheme, we need to approximate the solution in the 2D element with a first-order polynomial. To this end we need three equations, for which we consider the three neighbouring 1D cells, as shown in Fig. \ref{2DFluxesHARTEN_1_MOD}. We do not use fictitious elements near reflective boundaries for the reconstruction.
The 1D reconstruction delivers a slope in the axial direction, while the 2D reconstruction results in slopes in the $x$- and $y$-directions. When passing from 1D to 2D, or vice versa, we need to transform the former into the latter, so we have to rotate not only the vector of conserved variables but also the gradients. In fact, in the 1D domain we have $\partial_n u$ and $\partial_n v$, but to apply Harten's approach to solve the generalised Riemann problem we need $\partial_x U$, $\partial_y U$, $\partial_x V$ and $\partial_y V$ ($u$ and $v$ being the velocities in the axial and transverse directions, and $U$ and $V$ the velocity components in the $x$ and $y$ directions). These slopes can be calculated as
\begin{equation}
\left(\begin{array}{c}
\partial_x u\\
\partial_y u
\end{array}\right)=
\left(\begin{array}{c}
\cos\alpha\\
\sin\alpha
\end{array}\right)\partial_n u\;, \qquad\quad
\left(\begin{array}{c}
\partial_x v\\
\partial_y v
\end{array}\right)=
\left(\begin{array}{c}
\cos\alpha\\
\sin\alpha
\end{array}\right)\partial_n v \;
\end{equation}
and
\begin{equation}
\begin{array}{c}
\left(\begin{array}{c}
\partial_x U\\
\partial_x V
\end{array}\right)=
\left[\begin{array}{cc}
\cos\alpha & -\sin\alpha\\
\sin\alpha & \cos\alpha
\end{array}\right]
\left(\begin{array}{c}
\partial_x u\\
\partial_x v
\end{array}\right) \;, \\
\\
\left(\begin{array}{c}
\partial_y U\\
\partial_y V
\end{array}\right)=
\left[\begin{array}{cc}
\cos\alpha & -\sin\alpha\\
\sin\alpha & \cos\alpha
\end{array}\right]
\left(\begin{array}{c}
\partial_y u\\
\partial_y v
\end{array}\right) \;.
\end{array}
\end{equation}
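A minimal Python sketch of this two-stage rotation of the gradients (names illustrative) is:
\begin{verbatim}
import math

def rotate_gradients(dn_u, dn_v, alpha):
    # stage 1: axial derivatives -> Cartesian derivatives of (u, v)
    c, s = math.cos(alpha), math.sin(alpha)
    dx_u, dy_u = c * dn_u, s * dn_u
    dx_v, dy_v = c * dn_v, s * dn_v
    # stage 2: rotate velocity components (u, v) -> (U, V)
    dx_U, dx_V = c * dx_u - s * dx_v, s * dx_u + c * dx_v
    dy_U, dy_V = c * dy_u - s * dy_v, s * dy_u + c * dy_v
    return dx_U, dy_U, dx_V, dy_V
\end{verbatim}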
In the next section we assess the performance of the junction methods presented in this paper using a comprehensive suite of test problems, comparing results to 2D reference solutions obtained from the unstructured 2D second-order method described in \ref{chapter_theoretical}.
\section{Test problems and assessment of the methods} \label{sec:results}
In this paper we consider three methods to deal with junctions in the context of shallow-water channels, {\bf Method A} being our main contribution, in which a single 2D element is inserted at each junction. {\bf Method B} generalises {\bf Method A} by inserting a local 2D unstructured grid in the vicinity of each junction, see Fig. \ref{methodB}. The third method, considered for comparison, is the method proposed by Peir\'o, Sherwin, Formaggia and Parker \cite{SherwinFormaggia,Sherwin2003}, which in this paper will be called the {\bf PSFP method}. This method is summarised in \ref{section_1Dexisting}. Solutions from all three methods are compared to reference 2D solutions. All methods have been implemented to second-order accuracy in both space and time. Here we present results for six tests. For additional tests see \cite{TesiBellamoli}.
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{griglia008.png}
\caption{Example of a local 2D grid used in the vicinity of the junction region in {\bf Method B}.}
\label{methodB}
\end{figure}
\subsection{Single-junction test problems}
\noindent{\bf Test 1: Subcritical wave in a channel with a $90^\circ$ bifurcation.}\label{testsub90}\\
In this test we consider a channel configuration as shown on the left of Fig. \ref{junct90onda}. We impose a subcritical wave ($Fr_{max}\simeq 0.4$) that gradually steepens and becomes a shock wave just after a $90^\circ$ bifurcation. Results are shown in Fig. \ref{junct90onda}. Methods A and B give very satisfactory results, as compared to the 2D reference solution, for channel 1, while the PSFP method gives rather inaccurate results. For channel 2 all three methods give quite similar results, methods A and B being slightly more accurate than the PSFP method.
\begin{figure}[H]
\centering
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{10mm}{\includegraphics[width=0.15\textwidth]{junction90_small.png}}}
\hspace{0.05\textwidth}
\subfigure[Channel 1]{\label{junct90onda_t8s_ch1_ordine}
\includegraphics[width=0.65\textwidth]{junct90onda_t20_ch1_ordine.png}}
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{10mm}{\includegraphics[width=0.16\textwidth]{junction90_small2.png}}}
\hspace{0.05\textwidth}
\subfigure[Channel 2]{\label{junct90onda_t8s_ch2_ordine}
\includegraphics[width=0.65\textwidth]{junct90onda_t20_ch2_ordine.png}}
\caption{Test 1: Subcritical wave. Water height at time $t=8\,s$. } \label{junct90onda}
\end{figure}
\noindent{\bf Test 2: Subcritical wave in a channel with a $90^\circ$ asymmetrical bifurcation.}\\
In this test we consider an asymmetrical channel configuration as shown on the left of Fig. \ref{junct90ASonda}. As for the previous test, methods A and B give very satisfactory results as compared to the reference 2D solution, outperforming the PSFP method.
\begin{figure}[H]
\centering
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{10mm}{\includegraphics[width=0.3\textwidth]{junction90AS_small.png}}}
\hspace{0.03\textwidth}
\subfigure[Channel 1]{\label{junct90ASonda_t8s_ch1}
\includegraphics[width=0.65\textwidth]{junct90ASonda_t20_ch1.png}}
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{10mm}{\includegraphics[width=0.3\textwidth]{junction90AS_small2.png}}}
\hspace{0.03\textwidth}
\subfigure[Channel 2]{\label{junct90ASonda_t8s_ch2}
\includegraphics[width=0.65\textwidth]{junct90ASonda_t20_ch2.png}}
\caption{Test 2: Subcritical wave (asymmetrical case): Water height at time $t=8\,s$.} \label{junct90ASonda}
\end{figure}
\noindent{\bf Test 3: Shock wave in a channel with a $45^\circ$ bifurcation.}\\
In this test we consider a channel configuration as shown on the left of Fig. \ref{junct45shock}.
From channel 1 we send a shock with Froude number $Fr=0.75$.
Results are shown in Fig. \ref{junct45shock}. It is seen that the performance of methods A and B is very satisfactory, as far as the shock wave is concerned. The PSFP method did not run for this test.
\begin{figure}[H]
\centering
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{5mm}{\includegraphics[width=0.2\textwidth]{junction45_shock_small.png}}}
\hspace{0.05\textwidth}
\subfigure[Channel 2]{\label{junct45shock_t2s_ch2}
\includegraphics[width=0.65\textwidth]{junct45shock_t10_ch2.png}}
\caption{Test 3: Supercritical shock wave ($45^\circ$). Water height at time $t=2\,s$.} \label{junct45shock}
\end{figure}
\noindent{\bf Test 4: Supercritical shock wave in a channel with a $90^\circ$ bifurcation.}\\
Finally we test our methods with a severe problem: a supercritical shock of Froude number $Fr=1.135$. Results are shown in Fig. \ref{junct90shocksuper}. Results obtained with method B are again very satisfactory, thanks to the local 2D grid. On the other hand, results obtained with method A are less accurate than those obtained in the previous case, because of the severity of the test. Again, the PSFP method did not run for this test.
\begin{figure}[H]
\centering
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{5mm}{\includegraphics[width=0.15\textwidth]{junction90_shock_small.png}}}
\hspace{0.05\textwidth}
\subfigure[Channel 2]{\label{junct90shocksuper_t2s_ch2}
\includegraphics[width=0.65\textwidth]{junct90shocksuper_t5_ch2_G.png}}
\caption{Test 4: Supercritical shock wave ($90^\circ$ bifurcation). Water height at time $t=2\,s$.} \label{junct90shocksuper}
\end{figure}
\subsection{\bf Test 5: the CADAM test problem.}
In this section we apply the methods to CADAM test 1 (CADAM, Concerted Action on Dam-Break Modelling, 1996-1999), for which experimental measurements are available, as well as numerous numerical simulations. For a full description of the test see \cite{Morris}. The geometrical configuration is depicted in Fig. \ref{cadam}, in which a 2D reservoir is connected to a straight channel with a $45^\circ$ bend. Figs. \ref{cadam_grid} and \ref{cadam_element} show how the $45^\circ$ bend was treated for methods A and B. In both cases the reservoir is discretised with a 2D unstructured mesh, while for the $45^\circ$ bend method B inserts a local 2D grid in the vicinity of the bend, whereas method A considers a single 2D element.
\begin{figure}[H]
\centering
\subfigure[Method B]{\label{cadam_grid}
\includegraphics[width=0.48\textwidth]{cadam_grid.png}}
\subfigure[Method A]{\label{cadam_element}
\includegraphics[width=0.48\textwidth]{cadam.png}}
\caption{Test 5: the CADAM test problem. 2D and 1D domains used for numerical simulation of CADAM test 1.}
\label{cadam}
\end{figure}
In the CADAM experiment, measuring gauges 5 to 7 are placed around the bend, where the motion of the fluid is more complex, while the remaining gauges are placed along the straight channels. For full details see \cite{Morris}. Numerical results and experimental measurements are all displayed in Fig. \ref{cadam_coupled_source}. Results obtained with the methods proposed in this paper compare satisfactorily to measurements. The flow is supercritical, so the PSFP method did not run for this test.
\begin{figure}[H]
\centering
\subfigure[Gauge 2]{
\includegraphics[width=0.45\textwidth]{cadam2_source.png}}
\subfigure[Gauge 3]{
\includegraphics[width=0.45\textwidth]{cadam3_source.png}}
\subfigure[Gauge 4]{
\includegraphics[width=0.45\textwidth]{cadam4_source.png}}
\subfigure[Gauge 5]{
\includegraphics[width=0.45\textwidth]{cadam5_source.png}}
\subfigure[Gauge 6]{
\includegraphics[width=0.45\textwidth]{cadam6_source.png}}
\subfigure[Gauge 7]{
\includegraphics[width=0.45\textwidth]{cadam7_source.png}}
\subfigure[Gauge 8]{
\includegraphics[width=0.45\textwidth]{cadam8_source.png}}
\subfigure[Gauge 9]{
\includegraphics[width=0.45\textwidth]{cadam9_source.png}}
\caption{Test 5: the CADAM test problem. Computed free-surface elevation [meters] in time [seconds] and experimental measurements. Gauges 2 to 9 are the points of measurement used in the experimental test \cite{Morris}.}
\label{cadam_coupled_source}
\end{figure}
\subsection{Test 6: A multiple-channel network}
In this section we assess the performance of the various methods for the case of a multiple-channel network involving 16 junctions and 25 branches; see Figs. \ref{rete_grid}, \ref{rete_elementgrid} and \ref{rete3}. We consider two cases: an incident subcritical wave and an incident supercritical shock. For the sake of simplicity we set the bed slope and the friction to zero. Solutions are computed with all three approximate junction methods considered, except for the supercritical shock case, for which only methods A and B are used. For this test, due to the complexity of the situation, with many shock-wave reflections and wave interactions, for method A we use a coarse 2D grid inside the four junctions in the corners (see Fig. \ref{rete_elementgrid}), where the flow is very complex due to large variations in angles and the large space occupied by the junction. Results will be shown at the eight positions indicated in Fig. \ref{rete3}.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{rete_grid.png}
\caption{Test 6: A multiple-channel network. Configuration for method B. Two-dimensional grids in the vicinity of junctions.}
\label{rete_grid}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{rete_elementgrid.png}
\caption{Test 6: A multiple-channel network. Configuration for method A: single 2D elements at most junctions, replaced by local 2D grids at the four corner junctions shown.}
\label{rete_elementgrid}
\end{figure}
For the subcritical wave case, computed results are displayed in Figs. \ref{rete_onda_grafici1} to \ref{rete_onda_grafici8}. All three approximate junction methods run and are compared to the reference 2D solution. Methods A and B are seen to be very accurate; all three methods give very similar results for the arrival phase of the wave but differ at later times. For the supercritical shock wave case, computed results are displayed in Figs. \ref{rete_shock_grafici1} to \ref{rete_shock_grafici8}. For this case the PSFP method did not run. Not surprisingly, it is seen that method B hardly differs from the reference 2D solution, but the simpler method A is also seen to be very accurate. As expected, the largest discrepancies between method A and the reference solution are seen in wave arrival times. Results at point 8 were expected to show the largest errors, as waves must traverse the full complex network, with multiple shock waves and complex interactions, and yet the results at position 8 are satisfactory.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{rete3.png}
\caption{Test 6: A multiple-channel network. Points of the network where the free surface elevation is recorded and then reported in figures \ref{rete_onda_grafici1} to \ref{rete_shock_grafici8}.}
\label{rete3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{onda1.png}
\caption{Test 6 (subcritical wave): Computed free-surface elevation [m] in time [s] for Point 1.}
\label{rete_onda_grafici1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{onda3.png}
\caption{Test 6 (subcritical wave): Computed free-surface elevation [m] in time [s] for Point 3.}
\label{rete_onda_grafici3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{onda6.png}
\caption{Test 6 (subcritical wave): Computed free-surface elevation [m] in time [s] for Point 6.}
\label{rete_onda_grafici6}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{onda8.png}
\caption{Test 6 (subcritical wave): Computed free-surface elevation [m] in time [s] for Point 8.}
\label{rete_onda_grafici8}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{shock1.png}
\caption{Test 6 (supercritical shock): Computed free-surface elevation [m] in time [s] for Point 1.}
\label{rete_shock_grafici1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{shock3.png}
\caption{Test 6 (supercritical shock): Computed free-surface elevation [m] in time [s] for Point 3.}
\label{rete_shock_grafici3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{shock6.png}
\caption{Test 6 (supercritical shock): Computed free-surface elevation [m] in time [s] for Point 6.}
\label{rete_shock_grafici6}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{shock8.png}
\caption{Test 6 (supercritical shock): Computed free-surface elevation [m] in time [s] for Point 8.}
\label{rete_shock_grafici8}
\end{figure}
\subsection{Computational times}
Here we report the computational times required to solve each of the six problems previously presented. Table \ref{tab_cputime} lists the tests in the left column and the CPU times in seconds in the subsequent columns, for the various methods used. Missing values for the PSFP method correspond to tests for which this method did not work.
\begin{table}[H]
\renewcommand\arraystretch{1.2}
\centering {\small
\begin{tabular}{lcccc}
\hline
& & & & \\[-4.5mm]
{\bf Test} & {\bf 2D Reference} & {\bf PSFP method} & {\bf Method B} & {\bf Method A} \\
& & & & \\[-4.5mm]
\hline
& & & & \\[-4.5mm]
Test 1 & 392.1 & 3.31 & 34.8 & 1.08 \\
& & & & \\[-4.5mm]
Test 2 & 547.2 & 3.28 & 31.7 & 2.06 \\
& & & & \\[-4.5mm]
Test 3 & 1215 & - & 68.9 & 3.53 \\
& & & & \\[-4.5mm]
Test 4 & 1684 & - & 128.3 & 4.69 \\
& & & & \\[-4.5mm]
Test 6 (subcritical wave) & 5787 & 70.3 & 1413 & 19.2 \\
& & & & \\[-4.5mm]
Test 6 (supercritical shock) & 13775 & - & 3091 & 51.5 \\
& & & & \\[-4.5mm]
\hline
\end{tabular}}
\caption{Computational times [s] for all numerical methods reported in this paper, for six test problems.}
\label{tab_cputime}
\end{table}
As expected, the largest CPU times are those for the full 2D solver used to produce reference solutions, followed by method B. Next in CPU time comes the PSFP method, with method A being the fastest, faster even than the PSFP method, the simplest of all methods, which is based entirely on 1D assumptions. The method of choice thus appears to be method A, since it runs for all the very demanding test problems, while giving reasonably accurate solutions as compared to the full 2D solver, and at the lowest computational cost. Computational saving factors for method A, relative to the full 2D solver, are of the order of 300, making the method a realistic option for complex applications.
\section{Concluding remarks} \label{sec:discussion}
We have presented a novel method to treat junctions in networks of 1D shallow water channels. The method, called method A, inserts a single 2D, junction-shaped finite volume right at the junction, taking care that the element protrudes into the 1D channels. In this manner, the geometrical information, such as bifurcation angles and reflective boundaries, is accounted for locally. Method B results from generalising method A by inserting a local 2D unstructured grid in the vicinity of the junction. In addition, we briefly reviewed the existing junction method due to Peir\'o, Sherwin, Formaggia and Parker \cite{SherwinFormaggia,Sherwin2003}, which we termed the PSFP method. All three approximate junction methods were assessed through a carefully selected suite of demanding test problems. No exact solutions to these problems exist to test the accuracy of approximate junction methods; we therefore used a fully 2D unstructured-mesh, second-order method of the ADER type to compute accurate numerical reference solutions. Method A is the preferred one, since it is simple and sufficiently accurate for all test problems. Method B is the most accurate of the three approximate methods tested, but also the most expensive, as shown by our computational efficiency test. Method A is the fastest, about three times faster than the PSFP method and about 70 times faster than method B for the more realistic test problem involving a reasonably complex network. Methods A and B work well for all test problems, while the PSFP method only works for three of the six test problems. An attractive feature of method A, shared by method B, is that it can successfully cope with problems involving high subcritical, transcritical and supercritical flows at the junctions. We note that, due to the single element of method A, accuracy may deteriorate, depending on the mesh dimensions involved. This shortcoming is most evident in the first-order version of the methodology; higher-order versions can ameliorate this deficiency. In fact, second-order accuracy is found to be satisfactory, though we found a test problem, not shown here, for which only the third-order scheme produced fully satisfactory solutions. Potential users of the schemes may have to assess this aspect of the methods before embarking on practical applications. For practical applications, both methods A and B may benefit from local time-stepping, for example following the methodology proposed in \cite{Dumbser:2007c,Mueller:2016}. This may be required by the disparity of spatial mesh sizes between the junctions and the 1D domains, which potentially implies a disparity in time step sizes.
The methods presented in this paper can be applied to any problem involving networks of nearly straight 1D domains, provided the multidimensional version of the equations, 2D or 3D, are available.
\vspace{10mm}
\begin{center}
{\bf Acknowledgements}
\end{center}
The authors are indebted to Prof. Dr. M. Dumbser, University of Trento, for useful discussions on the subject.
\newpage
\section{Introduction}
There are multidimensional physical problems modelled by partial differential equations in networks of spatial domains than are essentially straight. In such cases the governing equations can be assumed to be one-dimensional (1D), potentially resulting in significant computing savings. Examples include gas flow in pipes
\cite{Banda:2006a, Brouwer:2011a, Bales:2009a,Reigstad:2015a,Bermudez:2017},
traffic flow \cite{Coclite:2005a, Borsche:2014c, Bretti:2007a}
water flows \cite{Kesserwani,AkanYen,AralZhangJin,Zhang, Borsche:2014b, Kesserwani:2008a}
and blood flow in the human circulatory system
\cite{Quarteroni:2000a,Formaggia:1999a,Olufsen:2000a, Sherwin2003,Miglio2005_1,Alastruey:2011a,Liang:2009a,Liang:2009b,FullanaZaleski,Liang:2014a,Mueller:2014a,Mueller:2014b,Toro:2015a}. The challenge, however, is how to connect these 1D domains in a way that accounts for the multidimensional character of the equations, even in an approximately manner.
Current methods are reported to perform well in most cases. However, a shortcoming of existing methods is their inability to deal with transcritical and supercritical flow through junctions. In some cases, these methods fail even for subcritical flows at moderately high Froude numbers. Transcritical and supercritical flows are important flow regimes that may occur more often than one is aware of, for example at junctions, locally. In physiological flows this is found to be the case in the venous system, under postural changes. In open channel flows the occurrence of supercritical flows is not rare and may potentially take place in inundating flows emerging from dam-break events and tsunami waves. Supercritical regimes may also appear in networks of tubes transporting compressible fluids under extreme accidental conditions.
In this paper we present methods for dealing with junctions connecting 1D domains and illustrate the ideas for junctions of 1D shallow water channels. We note that the full problem is governed by the two-dimensional (2D) shallow water equations. The methods presented here make use of the finite volume approach, whereby the true geometry is accounted for locally at junctions, whereas away from junctions, the usual 1D equations are solved. In addition to mass conservation, our methods enforce conservation of momentum at junctions, which constitutes an improvement over methods currently available. It is noted that the approach, as applied to complex networks of channels, can lead to very significant computing savings, as compared to solving the full multidimensional problem, without compromising the solution quality. Systematic assessment of the methods for a variety of flow configurations is carried out.
It is worth noting that a similar method, which combines a 1D model for the channels and a 2D model on unstructured grids for junctions, has been investigated both in hydrodynamics (Miglio et al. \cite{Miglio2005_1}) and in haemodynamics (Formaggia et al \cite{Formaggia1999,FormaggiaQuarteroni2003}). However, Miglio et al. (\cite{Miglio2005_1}) use a finite element scheme and investigate only the case of subcritical flows.
The rest of this paper is structured as follows. In Section \ref{sec:mathmodnumerics} we briefly present the underlying mathematical models and the numerical framework upon which the proposed methods are constructed. Next, the novel methodology for coupling 1D domains at junctions is illustrated in Section \ref{sec:method}. Section \ref{sec:results} is devoted to the validation of the proposed methods, while concluding remarks are presented in Section \ref{sec:discussion}. \ref{section_1Dexisting} describes an existing method for junctions and \ref{chapter_theoretical} provides details on the 2D unstructured mesh method used to produce reliable reference solutions used to assess the methods presented in this paper.
\section{Mathematical models and numerical method} \label{sec:mathmodnumerics}
We are concerned with free-surface shallow water flow under gravity in a network system consisting of interconnected straight (or essentially straight) channels joined at junctions, as illustrated in Fig. \ref{introduzione_network}. The flow field has a significant 2D structure only in the vicinity of junctions, while it is essentially 1D along the straight channels, away from junctions. The purpose of this work is to develop a method that combines the use of a 1D model in channels and a 2D model only at junctions, coupling these two models with appropriate matching conditions. As we shall show later, the resulting methods show a huge computational efficiency gain with respect to solving the full 2D equations on the entire domain.
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{figura2.png}
\caption{Example of a channel network. Regions with two-dimensional behaviour are encircled and zoomed in. }
\label{introduzione_network}
\end{figure}
The methods proposed in this paper adopt the finite volume framework and combine the 2D equations and its corresponding local geometry in a neighbourhood of junctions, along with the 1D equations in the straight channels. Next, we recall the governing equations and the finite volume method.
\subsection{The shallow water equations}
The time-dependent, non-linear, 2D shallow water equations written in conservation form read
\begin{equation} \label{swe1}
\partial_{t}{\bf Q} + \partial_{x}{\bf F}({\bf Q}) + \partial_{y}{\bf G}({\bf Q}) = {\bf S} ({\bf Q}) \;,
\end{equation}
with
\begin{equation} \label{swe2}
\left.
\begin{array}{ccc}
{\bf Q}
= \left[ \begin{array}{c}
h \\
h u \\
h v \\
\end{array} \right] \;,\hspace{1mm} &
{\bf F}({\bf Q})
= \left[ \begin{array}{c}
h u \\
h u^{2}+\frac{1}{2} g h^{2} \\
h u v
\end{array} \right] \;, \hspace{2mm} &
{\bf G}({\bf Q})
= \left[ \begin{array}{c}
h v \\
h v u \\
h v^{2}+\frac{1}{2} g h^{2}
\end{array} \right] \;, \\
\\
& {\bf S}({\bf Q})
= \left[ \begin{array}{c}
0 \\
gh(S_{ox}-S_{fx}) \\
gh(S_{oy}-S_{fy}) \\
\end{array} \right] \:. &
\end{array} \right\}
\end{equation}
Here ${\bf Q}$ is the vector of conserved variables; ${\bf F}({\bf Q})$ and ${\bf G}({\bf Q})$ are the fluxes in the $x$ and $y$ directions, respectively, and ${\bf S} ({\bf Q})$ is the vector of source terms. The physical variables are water depth $h(x,y,t)$ and velocity components $u(x,y,t)$ and $v(x,y,t)$, in the $x$ and $y$ directions respectively. In this paper the source term vector ${\bf S}({\bf Q})$ accounts for the variation of the bottom topography
\begin{equation} \label{swe3}
\begin{array}{ccc}
S_{ox}=-\partial_xb(x,y) & \quad\mbox{and}\quad\; & S_{oy}=-\partial_yb(x,y)
\end{array}
\end{equation}
and the bed friction
\begin{equation}\label{swe4}
\begin{array}{ccc}
S_{fx}=\displaystyle{\frac{n^2\,u\,\sqrt{u^2+v^2}}{h^{4/3}}} & \quad\mbox{and}\quad \;
& S_{fy}=\displaystyle{\frac{n^2\,v\,\sqrt{u^2+v^2}}{h^{4/3}}}\;.
\end{array}
\end{equation}
Here $b(x,y)$ represents bottom elevation above a horizontal datum, $n$ is the Manning's coefficient and $g$ is the acceleration due to gravity. In this work we only consider channels with horizontal bottom, that is $S_{ox}=0$, $S_{oy}=0$. Equations (\ref{swe1}) form a system of partial differential equations of hyperbolic type. For background on the shallow water equations and associated numerical methods see, for instance, \cite{Toro2001} and the many references therein.
The one-dimensional version of (\ref{swe1}), in the generic $s$ direction, reads
\begin{equation}\label{swe5}
\partial_{t}{\bf Q}_{1D} + \partial_{s}{\bf F}_{1D}({\bf Q}_{1D}) = {\bf S}_{1D}({\bf Q}_{1D}) \;,
\end{equation}
with
\begin{equation}\label{swe6}
\begin{array}{lll}
{\bf Q}_{1D}
= \left[ \begin{array}{c}
h_{1D} \\
h_{1D} u_{1D} \\
\end{array} \right] \;,\hspace{3mm} &
{\bf F}_{1D}({\bf Q}_{1D})
= \left[ \begin{array}{c}
h_{1D} u_{1D} \\
h_{1D} u^{2}_{1D}+\frac{1}{2} g h^{2}_{1D} \\
\end{array} \right] \;,\hspace{3mm} &
{\bf S}_{1D}({\bf Q}_{1D})
= \left[ \begin{array}{c}
0 \\
gh_{1D}(S_{os}-S_{fs}) \\
\end{array} \right] \;,
\end{array}
\end{equation}
where $h_{1D} = h$ and $u_{1D}$ is the velocity along $s$.
\subsection{Rotational invariance and numerical method}
As previously stated, the methods proposed in this paper combine 1D equations (\ref{swe5}) in a generic direction $s$ and the 2D equations (\ref{swe1}), locally, at a single 2D element with an arbitrary number of edges. Thus, it is convenient to recall the rotational invariance property of equations (\ref{swe1}). First, let us define an arbitrary 2D spatial control volume $V$ with boundary $\Omega$, as depicted in Fig. \ref{RotInv}, top frame. Equations (\ref{swe1}), expressed in integral form, read
\begin{equation} \label{rotinv1}
\frac{\partial}{\partial t} \int_{V} {\bf Q}\, dV+
\int_{\Omega}\left[\cos\theta {\bf F}({\bf Q})+\sin\theta
{\bf G}({\bf Q})\right]\,d \Omega = {\bf 0} \;.
\end{equation}
Moreover, we define the outward unit vector normal to $\Omega$, $\bf n$, as
\begin{equation} \label{rotinv2}
{\bf n}\equiv \left[n_{1},n_{2}\right] \equiv \left[\cos\theta,\sin\theta\right]\;.
\end{equation}
The top frame of Fig. \ref{RotInv} depicts the control volume $V$ in the Cartesian plane, where $x$ denotes the chosen reference direction, while the bottom frame depicts a typical computational finite volume with five vertices and five edges.
\begin{figure}
\centerline{
\includegraphics[scale=0.40,angle=-90]{figures/RotInv.pdf}
}
\vspace{5.0mm}
\centerline{
\includegraphics[scale=0.25,angle=-90]{figures/FigFV-sw.pdf}
}
\caption{Control volumes, rotational invariance and finite volumes. Top frame: arbitrary control volume $V$ in Cartesian plane; the $x$-direction is the
reference direction, $\theta$ is angle between the outward unit normal vector ${\bf n}$ and the
reference $x$-direction. Bottom frame: typical finite
volume in the Cartesian plane.}
\label{RotInv}
\end{figure}
Equations (\ref{swe1}) satisfy the rotational invariance property \cite{Toro2001}
\begin{equation} \label{rotinv3}
{\bf H} \equiv {\bf n} \cdot \left[{\bf F}({\bf Q}), {\bf G}({\bf Q}\right] =
\cos\theta {\bf F}({\bf Q}) + \sin\theta {\bf G}({\bf Q}) =
{\bf T}^{-1}{\bf F}\left({\bf T}({\bf Q})\right) \;
\end{equation}
for all vectors ${\bf Q}$ and for all real angles $\theta$, or equivalently, normal directions of $\Omega$. Here ${\bf T} = {\bf T}(\theta)$ is a rotation matrix and ${\bf T}^{-1}(\theta)$ is its inverse, given respectively as
\begin{equation} \label{rotinv4}
{\bf T}=\left[\begin{array}[c]{ccc}
1 & 0 & 0 \\
0 & \cos\theta & \sin\theta \\
0 & -\sin\theta & \cos\theta \\
\end{array}\right] \;,\hspace{2mm}
{\bf T}^{-1}=\left[\begin{array}[c]{ccc}
1 & 0 & 0 \\
0 & \cos\theta & -\sin\theta \\
0 & \sin\theta & \cos\theta \\
\end{array}\right] \;.
\end{equation}
Now, we choose a computational control volume $V_{k}$ in two-dimensional space, as shown for example in the bottom frame of Fig. \ref{RotInv}. Moreover, we define a space-time control volume $I_k = [t^{n+1},t^n] \times V_k$, over which we integrate (\ref{swe1}), yielding
\begin{equation} \label{rotinv5}
{\bf Q}_{k}^{n+1} = {\bf Q}_{k}^{n} -\frac{\Delta t}{ |V_{k}| }\sum_{e=1}^{N}{\cal F}_{e} \;,
\end{equation}
with cell averages defined as
\begin{equation} \label{rotinv6}
{\bf Q}^{n}_{k} = \frac{1}{ |V_{k}|} \int_{V_{k}} {\bf Q}(x,y,t ^{n})\, d V \;,
\end{equation}
where $|V_{k}|$ denotes the volume of $V_{k}$ (or area in the 2D case), while the intercell flux for edge $e$ is
\begin{equation} \label{rotinv7}
{\cal F}_{e} =
\int_{A_{e}}^{A_{e+1}} {\bf T_{e}}^{-1}{\bf F}\left({\bf T_{e}}({\bf Q})\right)\, d A \approx
{\cal{L}}_{e} {\bf T}_{e}^{-1} \hat{{\bf F}}_{e} \;.
\end{equation}
Here ${\cal{L}}_{e}$ is the length of edge $e$, the segment $A_{e}A_{e+1}$. $\hat{{\bf F}}_{e} \approx {\bf F}\left({\bf T_{e}}({\bf Q})\right)$ is an approximation to the flux ${\bf F}$ on the edge $e$ evaluated at the rotated state ${\bf T_{e}}({\bf Q})$, where the rotation is performed in the normal direction to side $e$ through the transformation matrix ${\bf T_{e}}$. The final expression of the finite volume scheme becomes
\begin{equation} \label{rotinv8}
{\bf Q}_{k}^{n+1} = {\bf Q}_{k}^{n} -\frac{\Delta t}{ |V_{k}| }\sum_{e=1}^{N} {\cal{L}}_{e} {\bf T}_{e}^{-1} \hat{{\bf F}}_{e} \;.
\end{equation}
For completeness we now illustrate the computation of the flux $\hat{{\bf F}}_{e}$ for an arbitrary edge $e$, see Fig. \ref{fig:genericcontrolvolumeside}. Conventionally, the left side $L$ of edge $e$ is always in the interior of the control volume of interest and the right side $R$ is outside. The computation of the numerical flux $\hat{{\bf F}}_{e}$, as for a first-order Godunov-type method for example, involves the augmented, local one-dimensional Riemann problem in the rotated frame normal to the edge, namely
\begin{equation} \label{rotinv9}
\left.
\begin{array}{ll}
\mbox{PDEs in normal direction:} & \partial_{t} {\bf Q}_{1D} + \partial_{s} {\bf F}_{1D}({\bf Q}_{1D}) = {\bf 0} \;, \\
\\
\mbox{Rotated initial conditions:} & {\bf Q}_{1D}(s,0) = \left\{ \begin{array}{lll}
{\bf Q}_{1D,L} = {\bf T}_{e}({\bf Q}_{2D,L}) & \mbox{if} & s < 0 \; , \\
\\
{\bf Q}_{1D,R} = {\bf T}_{e}({\bf Q}_{2D,R}) & \mbox{if} & s > 0 \; .
\end{array} \right.
\end{array}\right\}
\end{equation}
The steps to be followed in order to solve problem (\ref{rotinv9}) can be summarised as follows; a Python sketch of the procedure is given after the list.
\begin{enumerate}
\item Calculate the angle $\theta_{e}$ between the outward unit normal to edge $e$ and the fixed reference direction $x$, measured positive in the counter-clockwise direction.
\item Calculate the corresponding rotation matrix ${\bf T}_{e}$ and its inverse from (\ref{rotinv4}).
\item Rotate left and right data as in (\ref{rotinv9}).
\item Solve the 1D Riemann problem (\ref{rotinv9}) on rotated data and compute the corresponding flux $\hat{\bf F}_{e}$.
\item Rotate back the flux as in (\ref{rotinv3}) and multiply it by edge length to get the final intercell numerical flux for edge $e$.
\end{enumerate}
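The five steps can be sketched in Python as follows. This is a schematic only: it assumes that a routine \texttt{hllc\_flux}, returning the Godunov flux of the HLLC solver \cite{HLLC} for rotated left/right states, is supplied by the user, and all names are illustrative.
\begin{verbatim}
import numpy as np

def edge_flux(Q2D_L, Q2D_R, theta_e, hllc_flux):
    """Steps 1-5 for one edge: rotate data, solve the 1D Riemann
    problem, rotate the resulting flux back."""
    c, s = np.cos(theta_e), np.sin(theta_e)
    T    = np.array([[1, 0, 0], [0,  c, s], [0, -s, c]])   # (rotinv4)
    Tinv = np.array([[1, 0, 0], [0, c, -s], [0,  s, c]])
    Q1D_L, Q1D_R = T @ Q2D_L, T @ Q2D_R                    # step 3
    F_hat = hllc_flux(Q1D_L, Q1D_R)                        # step 4
    return Tinv @ F_hat                                    # step 5

def update_element(Qk, dt, area, edges, hllc_flux):
    """Finite volume update (rotinv8); `edges` holds tuples
    (Q_left, Q_right, theta_e, length_e) for each edge."""
    total = np.zeros_like(Qk)
    for QL, QR, theta_e, length_e in edges:
        total += length_e * edge_flux(QL, QR, theta_e, hllc_flux)
    return Qk - dt / area * total
\end{verbatim}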
Once numerical fluxes for all edges have been calculated, element $k$ can be updated through the finite volume formula (\ref{rotinv5}). This description applies to any two-dimensional finite volume method on a general mesh, assumed here to be unstructured. More details are given in \ref{chapter_theoretical}. However, for the junction method proposed in this paper, we will only use the above description at a single 2D element placed right at the junction.
\begin{figure}
\centerline{
\includegraphics[scale=0.35,angle=0]{figures/VolumeSide.png}
}
\vspace{3mm}
\caption{Generic edge of a control volume $V$ in $x$-$y$ space, where by convention the left side L lies inside the control volume and the right side R lies outside. The outward unit normal vector, with respect to the fixed reference direction $x$, is depicted, as well as its corresponding angle.}
\label{fig:genericcontrolvolumeside}
\end{figure}
\section{A methodology for channel junctions/bifurcations} \label{sec:method}
The geometrical and numerical approaches of our proposed junction method are described below.
\subsection{The approach}
In short, our method for a configuration as shown in Fig. \ref{introduzione_network} uses 1D formulations for every straight channel and a single 2D element at each junction, as depicted in Fig. \ref{methodA}. The 2D subdomain is then linked to the 1D channels through appropriate matching conditions, to be described. As previously noted, similar methods have been investigated in the past, both in hydrodynamics \cite{Miglio2005_1} and in haemodynamics \cite{Formaggia1999, FormaggiaQuarteroni2003}. Miglio et al. \cite{Miglio2005_1} used a finite element scheme and investigated only the case of subcritical flows. In the present work we are interested in general configurations and, principally, in all possible flow regimes: subcritical, transcritical and supercritical. Our approach is independent of the particular numerical method chosen for solving the shallow water equations, but here we implement first, second and third order accurate Godunov-type finite volume methods in the ADER framework \cite{Toro:2009a}.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{methodA.png}
\caption{Single 2D element at junction. The single element exchanges fluxes with all connected 1D domains. At solid walls of the rectangular cross-section channel, suitable reflective boundary conditions apply through the corresponding numerical fluxes.}
\label{methodA}
\end{figure}
We remark that the choice of the shape of the 2D junction element is important and there are many possible choices for fitting a single finite volume at the junction. After investigating several possibilities we concluded that the best choice is that of a {\it junction-shaped} 2D element, as displayed in Fig. \ref{methodA}. This 2D element protrudes into the 1D converging channels by 0.1 times the channel width, incorporating, in this manner, geometrical information on the direction of the 1D domains. Other choices for the shape of the 2D element were explored in \cite{TesiBellamoli}. The resulting method is called {\bf Method A} throughout this paper. A simple variation, called {\bf Method B}, results from the insertion of a local 2D unstructured grid composed of more than one element to represent the junction and its vicinity.
Regarding the numerical methodology for the 1D and the 2D shallow water equations we use
Godunov-type methods with the approximate Riemann solver HLLC \cite{HLLC}. First, second and third order accurate methods are implemented. The high-order methods follow the ADER approach \cite{ADER} with the Harten-type method to solve the generalised Riemann problem \cite{Harten}. For background on the ADER approach see chapters 19 and 20 of \cite{Toro:2009a}.
The time step $\Delta t$ is computed by imposing a CFL condition on both the 1D cells and the 2D junction elements in the usual manner. Then $\Delta t$ is taken as the minimum among all local time steps and applied to the full domain. Note that in the 2D case the maximum CFL number for stability is $CFL_{2D} = CFL_{1D}/2$. In what follows we address in more detail each of the issues arising from the coupling of 1D domains and 2D elements.
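For illustration, this time-step selection can be sketched in Python as follows; the fragment assumes that every cell record carries a characteristic length \texttt{dx}, a depth \texttt{h} and a flow speed magnitude \texttt{speed}, and these names are not part of the method.
\begin{verbatim}
import numpy as np

g = 9.81

def global_time_step(cells_1d, cells_2d, cfl_1d=0.9):
    """CFL condition on all 1D cells and all 2D junction elements,
    with CFL_2D = CFL_1D / 2; the minimum is applied everywhere."""
    cfl_2d = cfl_1d / 2.0
    dts  = [cfl_1d * c["dx"] / (c["speed"] + np.sqrt(g * c["h"]))
            for c in cells_1d]
    dts += [cfl_2d * c["dx"] / (c["speed"] + np.sqrt(g * c["h"]))
            for c in cells_2d]
    return min(dts)
\end{verbatim}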
\subsection{Computing two-dimensional and one-dimensional fluxes}
We calculate the 2D fluxes by solving the rotated 1D Riemann problem (\ref{rotinv9}) in local coordinates. To this end we use the HLLC approximate Riemann solver \cite{HLLC}. As initial data we have $x$-velocity and $y$-velocity components in the 2D domain, where we use a global, predefined reference system. We also have an axial velocity and a transverse velocity (which is zero) in the 1D domain, where we use a local reference system, see Fig. \ref{2DFluxesHLLC_1_MOD}. However, for the computation of 2D fluxes we need $x$-velocity and $y$-velocity components both to the right and to the left of each edge of the element. Therefore, we need to rotate the vectors of conserved variables as follows:
\begin{equation}
{\bf Q}_{R}=\left[\begin{array}{c}
h_{2D}\\ h_{2D}u_{2D}\\ h_{2D}v_{2D}
\end{array}\right] \qquad
{\bf Q}_{L}=\left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos(\alpha) & -\sin(\alpha) \\
0 & \sin(\alpha) & \cos(\alpha) \\
\end{array}\right]
\left[\begin{array}{c}
h_{1D}\\ h_{1D}u_{1D}\\ h_{1D}v_{1D}
\end{array}\right]\;,
\end{equation}
where $u_{1D}$, $v_{1D}$ and $h_{1D}$ are the variables in the 1D domain, while $u_{2D}$, $v_{2D}$ and $h_{2D}$ denote the variables in the 2D domain, as depicted in Fig. \ref{2DFluxesHLLC_1_MOD}. For a second- or higher-order scheme, these data values result from reconstructed polynomials evaluated at the edges. Once the variables are available, we can apply the classical HLLC solver \cite{HLLC} as described in \ref{chapter_theoretical}. For each channel and for each side of the junction there is a reference angle, which we call $\alpha$.
\begin{figure}[H]
\centering
\includegraphics[width=0.4\textwidth]{coupling/2DFluxesHLLC_1_MOD3.png}
\caption{Reference frame for the 2D element and the 1D domain of the left channel.}
\label{2DFluxesHLLC_1_MOD}
\end{figure}
Reflective boundary conditions are set on the remaining edges of the junction-shaped element, giving rise to symmetric Riemann problems, see \cite{Toro:2009a} for details.
With regard to the 1D channel on the left side of Fig. \ref{2DFluxesHLLC_1_MOD}, the problem is inverted; we need the axial velocity and the transverse velocity both to the right and to the left of each edge of the 1D cell. The vectors of conserved variables become:
\begin{equation}
{\bf Q}_{L}=\left[\begin{array}{c}
h_{1D}\\ h_{1D}u_{1D}\\ h_{1D}v_{1D}
\end{array}\right] \qquad
{\bf Q}_{R}=\left[\begin{array}{ccc}
1 & 0 & 0 \\
0 & \cos(\alpha) & \sin(\alpha) \\
0 & -\sin(\alpha) & \cos(\alpha) \\
\end{array}\right]
\left[\begin{array}{c}
h_{2D}\\ h_{2D}u_{2D}\\ h_{2D}v_{2D}
\end{array}\right]\;.
\end{equation}
\subsection{Dealing with transverse velocity in 1D domains}
Obviously, in all 1D elements we assume 1D motion and thus the transverse velocity component is zero.
However, a problem arises at the element of a 1D channel adjacent to a 2D junction element, since we might end up with a non-zero transverse velocity. In the 2D elements, at time $t^{n}$ we will generally have two non-zero velocity components, and consequently we could obtain a non-zero transverse velocity also in the 1D element at time $t^{n+1}$, due to the 2D flux at the edge of the 2D element. To deal with this difficulty we have considered two approaches. One possibility is simply to set the transverse velocity to zero and take the normal velocity component as the 1D axial velocity. The second option, which we prefer, is to calculate the axial 1D velocity as
\begin{equation}
u_{1D}^{n+1}=\text{sign}(u_{1D}^{n+1})\sqrt{(u_{2D}^{n+1})^2+(v_{2D}^{n+1})^2}\;.
\end{equation}
This means that the 2D velocity vector has been rotated in the direction of the 1D channel and consequently the transverse velocity component is zero. Inevitably, in both approaches, momentum balance at the 1D elements adjacent to the 2D element is effectively altered, even though in the 2D element momentum balance is strictly satisfied.
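For concreteness, the two options can be written as follows; the names are illustrative, and \texttt{u\_axial} denotes the provisionally updated axial velocity whose sign is used in the preferred option.
\begin{verbatim}
import numpy as np

def option_one(u2d, v2d, alpha):
    """Keep only the normal velocity component; the transverse
    component is discarded."""
    return u2d * np.cos(alpha) + v2d * np.sin(alpha)

def option_two(u2d, v2d, u_axial):
    """Preferred option: rotate the 2D velocity vector onto the
    channel axis, preserving the speed magnitude."""
    return np.sign(u_axial) * np.sqrt(u2d**2 + v2d**2)
\end{verbatim}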
\subsection{Spatial reconstruction for high-order accuracy}
For 1D cells adjacent to 2D cells we perform a modified version of the spatial reconstruction described in \ref{chapter_theoretical}, by projecting the distance between the centroid of the 2D element and the centre of the 1D cell along the normal to the boundary. See Fig. \ref{1DFluxesHARTEN_2_MOD}.
\begin{figure}[H]
\centering
\subfigure[Cells used for 2D reconstruction.]{\label{2DFluxesHARTEN_1_MOD}
\includegraphics[width=0.4\textwidth]{coupling/2DFluxesHARTEN_1_MOD.png}}\qquad
\subfigure[One-dimensional reconstruction.]{\label{1DFluxesHARTEN_2_MOD}
\includegraphics[width=0.4\textwidth]{coupling/1DFluxesHARTEN_2_MOD.png}}
\caption{Illustration and notation for the spatial reconstruction in 2D (a) and 1D (b).}
\end{figure}
Concerning 2D elements, particular attention must be paid to the reconstruction process. As in the 1D case, at any given time level $n$ one has a set of constant volume averages that are approximations to integral averages within each finite volume. For a second-order scheme, we need to approximate the solution in the 2D element with a first-order polynomial. To this end we need three equations, for which we consider the three neighbouring 1D cells, as shown in Fig. \ref{2DFluxesHARTEN_1_MOD}. We do not use fictitious elements near reflective boundaries for the reconstruction.
The 1D reconstruction delivers a slope in the axial direction, while the 2D reconstruction results in slopes in the $x$- and $y$-directions. When passing from 1D to 2D, or vice versa, we need to transform the first into the second, so we have to rotate not only the vector of conserved variables but also the gradients. In fact, in the 1D domain we have $\partial_n u$ and $\partial_n v$, but to apply Harten's approach to solve the generalized Riemann problem we need $\partial_x U$, $\partial_y U$, $\partial_x V$ and $\partial_y V$ (where $u$ and $v$ are the velocities in the axial and transverse directions, and $U$ and $V$ the velocity components in the $x$- and $y$-directions). These slopes can be calculated as
\begin{equation}
\left(\begin{array}{c}
\partial_x u\\
\partial_y u
\end{array}\right)=
\left(\begin{array}{c}
\cos\alpha\\
\sin\alpha
\end{array}\right)\partial_n u\;, \qquad\quad
\left(\begin{array}{c}
\partial_x v\\
\partial_y v
\end{array}\right)=
\left(\begin{array}{c}
\cos\alpha\\
\sin\alpha
\end{array}\right)\partial_n v \;
\end{equation}
and
\begin{equation}
\begin{array}{c}
\left(\begin{array}{c}
\partial_x U\\
\partial_x V
\end{array}\right)=
\left[\begin{array}{cc}
\cos\alpha & -\sin\alpha\\
\sin\alpha & \cos\alpha
\end{array}\right]
\left(\begin{array}{c}
\partial_x u\\
\partial_x v
\end{array}\right) \;, \\
\\
\left(\begin{array}{c}
\partial_y U\\
\partial_y V
\end{array}\right)=
\left[\begin{array}{cc}
\cos\alpha & -\sin\alpha\\
\sin\alpha & \cos\alpha
\end{array}\right]
\left(\begin{array}{c}
\partial_y u\\
\partial_y v
\end{array}\right) \;.
\end{array}
\end{equation}
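For concreteness, the two rotations above can be combined in a single routine; the following Python sketch uses illustrative names only.
\begin{verbatim}
import numpy as np

def rotate_slopes(dn_u, dn_v, alpha):
    """Map axial slopes (d_n u, d_n v) to Cartesian slopes
    (d_x U, d_y U, d_x V, d_y V) via the two rotations above."""
    c, s = np.cos(alpha), np.sin(alpha)
    dx_u, dy_u = c * dn_u, s * dn_u      # first pair of rotations
    dx_v, dy_v = c * dn_v, s * dn_v
    dx_U, dx_V = c * dx_u - s * dx_v, s * dx_u + c * dx_v
    dy_U, dy_V = c * dy_u - s * dy_v, s * dy_u + c * dy_v
    return dx_U, dy_U, dx_V, dy_V
\end{verbatim}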
In the next section we assess the performance of the junction methods presented in this paper using a comprehensive suite of test problems, comparing results to 2D reference solutions obtained from an unstructured 2D second-order method described in \ref{chapter_theoretical}.
\section{Test problems and assessment of the methods} \label{sec:results}
In this paper we consider three methods to deal with junctions in the context of shallow-water channels, {\bf Method A} being our main contribution, in which a single 2D element is inserted at each junction. {\bf Method B} generalises {\bf Method A} by inserting a local 2D unstructured grid in the vicinity of each junction, see Fig. \ref{methodB}. The third method considered for comparison is the method proposed by Peir\'o, Sherwin, Formaggia and Parker \cite{SherwinFormaggia,Sherwin2003}, which in this paper will be called the {\bf PSFP method}. This method is summarised in \ref{section_1Dexisting}. Solutions from all three methods are compared to reference 2D solutions. All methods have been implemented to second-order accuracy in both space and time. Here we present results for six tests. For additional tests see \cite{TesiBellamoli}.
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{coupling/griglia008.png}
\caption{Example of a local 2D grid used in the vicinity of the junction region in {\bf Method B}.}
\label{methodB}
\end{figure}
\subsection{Single-junction test problems}
\noindent{\bf Test 1: Subcritical wave in a channel with a $90^\circ$ bifurcation.}\label{testsub90}\\
In this test we consider a channel configuration as shown on the left of Fig. \ref{junct90onda}. We impose a subcritical wave ($Fr_{max}\simeq 0.4$) that gradually steepens and becomes a shock wave just after a $90^\circ$ bifurcation. Results are shown in Fig. \ref{junct90onda}. Methods A and B give very satisfactory results, as compared to the 2D reference solution, for channel 1, while the PSFP method gives rather inaccurate results. For channel 2 all three methods give quite similar results, methods A and B being slightly more accurate than the PSFP method.
\begin{figure}[H]
\centering
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{10mm}{\includegraphics[width=0.15\textwidth]{confronti/junction90_small.png}}}
\hspace{0.05\textwidth}
\subfigure[Channel 1]{\label{junct90onda_t8s_ch1_ordine}
\includegraphics[width=0.65\textwidth]{confronti/junct90onda_t20_ch1_ordine.png}}
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{10mm}{\includegraphics[width=0.16\textwidth]{confronti/junction90_small2.png}}}
\hspace{0.05\textwidth}
\subfigure[Channel 2]{\label{junct90onda_t8s_ch2_ordine}
\includegraphics[width=0.65\textwidth]{confronti/junct90onda_t20_ch2_ordine.png}}
\caption{Test 1: Subcritical wave. Water height at time $t=8\,s$. } \label{junct90onda}
\end{figure}
\noindent{\bf Test 2: Subcritical wave in a channel with a $90^\circ$ asymmetrical bifurcation.}\\
In this test we consider an asymmetrical channel configuration as shown on the left of Fig. \ref{junct90ASonda}. As for the previous test, methods A and B give very satisfactory results as compared to the reference 2D solution, outperforming the PSFP method.
\begin{figure}[H]
\centering
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{10mm}{\includegraphics[width=0.3\textwidth]{confronti/junction90AS_small.png}}}
\hspace{0.03\textwidth}
\subfigure[Channel 1]{\label{junct90ASonda_t8s_ch1}
\includegraphics[width=0.65\textwidth]{confronti/junct90ASonda_t20_ch1.png}}
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{10mm}{\includegraphics[width=0.3\textwidth]{confronti/junction90AS_small2.png}}}
\hspace{0.03\textwidth}
\subfigure[Channel 2]{\label{junct90ASonda_t8s_ch2}
\includegraphics[width=0.65\textwidth]{confronti/junct90ASonda_t20_ch2.png}}
\caption{Test 2: Subcritical wave (asymmetrical case): Water height at time $t=8\,s$.} \label{junct90ASonda}
\end{figure}
\noindent{\bf Test 3: Shock wave in a channel with a $45^\circ$ bifurcation.}\\
In this test we consider a channel configuration as shown on the left of Fig. \ref{junct45shock}.
From channel 1 we send a shock wave with Froude number $Fr=0.75$.
Results are shown in Fig. \ref{junct45shock}. It is seen that the performance of methods A and B is very satisfactory, as far as the shock wave is concerned. The PSFP method did not run for this test.
\begin{figure}[H]
\centering
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{5mm}{\includegraphics[width=0.2\textwidth]{confronti/junction45_shock_small.png}}}
\hspace{0.05\textwidth}
\subfigure[Channel 2]{\label{junct45shock_t2s_ch2}
\includegraphics[width=0.65\textwidth]{confronti/junct45shock_t10_ch2.png}}
\caption{Test 3: Shock wave ($45^\circ$ bifurcation). Water height at time $t=2\,s$.} \label{junct45shock}
\end{figure}
\noindent{\bf Test 4: Supercritical shock wave in a channel with a $90^\circ$ bifurcation.}\\
Finally, we test our methods with a severe problem: a supercritical shock of Froude number $Fr=1.135$. Results are shown in Fig. \ref{junct90shocksuper}. Results obtained with method B are again very satisfactory, thanks to the local 2D grid. On the other hand, results obtained with method A are less accurate than in the previous case, because of the severity of the test. Again, the PSFP method did not run for this test.
\begin{figure}[H]
\centering
\addtocounter{subfigure}{-1}
\subfigure{\raisebox{5mm}{\includegraphics[width=0.15\textwidth]{confronti/junction90_shock_small.png}}}
\hspace{0.05\textwidth}
\subfigure[Channel 2]{\label{junct90shocksuper_t2s_ch2}
\includegraphics[width=0.65\textwidth]{confronti/junct90shocksuper_t5_ch2_G.png}}
\caption{Test 4: Supercritical shock wave ($90^\circ$ bifurcation). Water height at time $t=2\,s$.} \label{junct90shocksuper}
\end{figure}
\subsection{\bf Test 5: the CADAM test problem.}
In this section we apply the methods to the CADAM test 1 (CADAM, Concerted Action on Dam-Break Modelling, 1996-1999), for which experimental measurements are available, as well as numerous numerical simulations. For a full description of the test see \cite{Morris}. The geometrical configuration is depicted in Fig. \ref{cadam}, in which a 2D reservoir is connected to a straight channel with a $45^\circ$ bend. Figs. \ref{cadam_grid} and \ref{cadam_element} show how the $45^\circ$ bend was treated for methods A and B. In both cases the reservoir is discretised with a 2D unstructured mesh; for the $45^\circ$ bend, method B inserts a local 2D grid in the vicinity of the bend, while method A considers a single 2D element.
\begin{figure}[H]
\centering
\subfigure[Method B]{\label{cadam_grid}
\includegraphics[width=0.48\textwidth]{coupling/cadam_grid.png}}
\subfigure[Method A]{\label{cadam_element}
\includegraphics[width=0.48\textwidth]{coupling2/cadam.png}}
\caption{Test 5: the CADAM test problem. 2D and 1D domains used for numerical simulation of CADAM test 1.}
\label{cadam}
\end{figure}
In the CADAM experiment, measuring gauges 5 to 7 are placed around the bend, where the motion of the fluid is more complex. Gauges 2, 3, 4 and 9 are placed along the straight channels. For full details see \cite{Morris}. Numerical results and experimental measurements are all displayed in Fig. \ref{cadam_coupled_source}. Results obtained with the methods proposed in this paper compare satisfactorily to measurements. The flow is supercritical, so the PSFP method did not run for this test.
\begin{figure}[H]
\centering
\subfigure[Gauge 2]{
\includegraphics[width=0.45\textwidth]{coupling/cadam2_source.png}}
\subfigure[Gauge 3]{
\includegraphics[width=0.45\textwidth]{coupling/cadam3_source.png}}
\subfigure[Gauge 4]{
\includegraphics[width=0.45\textwidth]{coupling/cadam4_source.png}}
\subfigure[Gauge 5]{
\includegraphics[width=0.45\textwidth]{coupling/cadam5_source.png}}
\subfigure[Gauge 6]{
\includegraphics[width=0.45\textwidth]{coupling/cadam6_source.png}}
\subfigure[Gauge 7]{
\includegraphics[width=0.45\textwidth]{coupling/cadam7_source.png}}
\subfigure[Gauge 8]{
\includegraphics[width=0.45\textwidth]{coupling/cadam8_source.png}}
\subfigure[Gauge 9]{
\includegraphics[width=0.45\textwidth]{coupling/cadam9_source.png}}
\caption{Test 5: the CADAM test problem. Computed free-surface elevation [meters] in time [seconds] and experimental measurements. Gauges 2 to 9 are the points of measurement used in the experimental test \cite{Morris}.}
\label{cadam_coupled_source}
\end{figure}
\subsection{Test 6: A multiple-channel network}
In this section we assess the performance of the various methods for the case of a multiple-channel network involving 16 junctions and 25 branches; see Figs. \ref{rete_grid}, \ref{rete_elementgrid} and \ref{rete3}. We considered two cases, an incident subcritical wave and an incident supercritical shock. For the sake of simplicity we set the bed slope and the friction to zero. Solutions are computed with all three approximate junction methods considered, except for the supercritical shock case, for which only methods A and B are used. For this test, due to the complexity of the situation, with many shock wave reflections and wave interactions, for method A we use a coarse 2D grid inside the four junctions in the corners (see Fig. \ref{rete_elementgrid}), where the flow is very complex due to large variations in angles and the large space occupied by the junction. Results will be shown at the eight positions indicated in Fig. \ref{rete3}.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{rete/rete_grid.png}
\caption{Test 6: A multiple-channel network. Configuration for method B. Two-dimensional grids in the vicinity of junctions.}
\label{rete_grid}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{rete/rete_elementgrid.png}
\caption{Test 6: A multiple-channel network. Configuration for method A: single 2D elements at most junctions, with local 2D grids at the four corner junctions shown.}
\label{rete_elementgrid}
\end{figure}
For the subcritical wave case, computed results are displayed in Figs. \ref{rete_onda_grafici1} to \ref{rete_onda_grafici8}. All three approximate junction methods run and are compared to the reference 2D solution. Methods A and B are seen to be very accurate; all three methods give very similar results for the arrival phase of the wave but differ at later times. For the supercritical shock wave case, computed results are displayed in Figs. \ref{rete_shock_grafici1} to \ref{rete_shock_grafici8}. For this case the PSFP method did not run. Not surprisingly, it is seen that method B hardly differs from the reference 2D solution, but the simpler method A is also seen to be very accurate. As expected, the largest discrepancies between method A and the reference solution are seen in wave arrival times. Results at point 8 were expected to show the largest errors, as waves must traverse the full complex network, with multiple shock waves and complex interactions, and yet the end results at position 8 are satisfactory.
\begin{figure}[H]
\centering
\includegraphics[width=0.85\textwidth]{rete/rete3.png}
\caption{Test 6: A multiple-channel network. Points of the network where the free surface elevation is recorded and then reported in figures \ref{rete_onda_grafici1} to \ref{rete_shock_grafici8}.}
\label{rete3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{rete/onda1.png}
\caption{Test 6 (subcritical wave): A multiple-channel network. Computed free-surface elevation [m] in time [s] for Point 1.}
\label{rete_onda_grafici1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{rete/onda3.png}
\caption{Test 6 (subcritical wave): Computed free-surface elevation [m] in time [s] for Point 3.}
\label{rete_onda_grafici3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{rete/onda6.png}
\caption{Test 6 (subcritical wave): Computed free-surface elevation [m] in time [s] for Point 6.}
\label{rete_onda_grafici6}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{rete/onda8.png}
\caption{Test 6 (subcritical wave): Computed free-surface elevation [m] in time [s] for Point 8.}
\label{rete_onda_grafici8}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{rete/shock1.png}
\caption{Test 6 (supercritical shock): Computed free-surface elevation [m] in time [s] for Point 1.}
\label{rete_shock_grafici1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{rete/shock3.png}
\caption{Test 6 (supercritical shock): Computed free-surface elevation [m] in time [s] for Point 3.}
\label{rete_shock_grafici3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{rete/shock6.png}
\caption{Test 6 (supercritical shock): Computed free-surface elevation [m] in time [s] for Point 6.}
\label{rete_shock_grafici6}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.7\textwidth]{rete/shock8.png}
\caption{Test 6 (supercritical shock): Computed free-surface elevation [m] in time [s] for Point 8.}
\label{rete_shock_grafici8}
\end{figure}
\subsection{Computational times}
Here we show the computational times needed to solve each of the six problems previously presented. Table \ref{tab_cputime} lists the tests in the left column and the CPU times in seconds in the subsequent columns, for the various methods used. Missing values for the PSFP method correspond to tests for which this method did not work.
\begin{table}[H]
\renewcommand\arraystretch{1.2}
\centering {\small
\begin{tabular}{lcccc}
\hline
& & & & \\[-4.5mm]
{\bf Test} & {\bf 2D Reference} & {\bf PSFP method} & {\bf Method B} & {\bf Method A} \\
& & & & \\[-4.5mm]
\hline
& & & & \\[-4.5mm]
Test 1 & 392.1 & 3.31 & 34.8 & 1.08 \\
& & & & \\[-4.5mm]
Test 2 & 547.2 & 3.28 & 31.7 & 2.06 \\
& & & & \\[-4.5mm]
Test 3 & 1215 & - & 68.9 & 3.53 \\
& & & & \\[-4.5mm]
Test 4 & 1684 & - & 128.3 & 4.69 \\
& & & & \\[-4.5mm]
Test 6 (subcritical wave) & 5787 & 70.3 & 1413 & 19.2 \\
& & & & \\[-4.5mm]
Test 6 (supercritical shock) & 13775 & - & 3091 & 51.5 \\
& & & & \\[-4.5mm]
\hline
\end{tabular}}
\caption{Computational times [s] for all numerical methods reported in this paper, for six test problems.}
\label{tab_cputime}
\end{table}
As expected, the largest CPU times are those for the full 2D solver used to produce reference solutions, followed in cost by method B. Next in CPU time comes the PSFP method, with Method A being the fastest, even faster than the PSFP method, the simplest of all the methods, which is based entirely on 1D assumptions. Method A therefore appears to be the method of choice, since it runs for all the very demanding test problems, while giving reasonably accurate solutions as compared to the full 2D solver, and at the lowest computational cost. Computational saving factors for method A, relative to the full 2D solver, are of the order of 300, making the method a realistic option for complex applications.
\section{Concluding remarks} \label{sec:discussion}
We have presented a novel method to treat junctions in networks of 1D shallow water channels. The method, called method A, inserts a single 2D, junction-shaped finite volume right at the junction, taking care that the element protrudes into the 1D channels. In this manner, geometrical information, such as bifurcation angles and reflective boundaries, is accounted for locally. Method B results from generalising method A by inserting a local 2D unstructured grid in the vicinity of the junction. In addition, we briefly reviewed the existing junction method due to Peir\'o, Sherwin, Formaggia and Parker \cite{SherwinFormaggia,Sherwin2003}, which we termed the PSFP method. All three approximate junction methods are assessed through a carefully selected suite of demanding test problems. No exact solutions to these problems exist with which to test the accuracy of approximate junction methods. We therefore use a fully 2D, unstructured-mesh, second-order method of the ADER type to compute accurate numerical reference solutions. Method A is the preferred one, since it is simple and sufficiently accurate for all test problems. Method B is the most accurate of the three approximate methods tested, but also the most expensive, as shown by our computational efficiency test. Method A is the fastest, about three times faster than the PSFP method and about 70 times faster than method B for the most realistic test problem, involving a reasonably complex network. Methods A and B work well for all test problems, while the PSFP method only works for three of the six test problems. An attractive feature of method A, shared by method B, is that it can successfully cope with problems involving high subcritical, transcritical and supercritical flows at the junctions. We note that, due to the single-element nature of method A, accuracy may deteriorate, depending on the mesh dimensions involved. This shortcoming is most evident in the first-order version of the methodology; higher-order versions can ameliorate this deficiency. In fact, second-order accuracy is found to be satisfactory, though we found a test problem, not shown here, for which only the third-order scheme produced fully satisfactory solutions. Potential users of the schemes may have to assess this aspect of the methods before embarking on practical applications. For practical applications, both methods A and B may benefit from using local time-stepping, for example following the methodology proposed in \cite{Dumbser:2007c,Mueller:2016}. This may be required by the disparity of spatial mesh sizes at the junctions and the 1D domains, which potentially implies disparity in time step sizes.
The methods presented in this paper can be applied to any problem involving networks of nearly straight 1D domains, provided the multidimensional version of the equations, 2D or 3D, is available.
\vspace{10mm}
\begin{center}
{\bf Acknowledgements}
\end{center}
The authors are indebted to Prof. Dr. M. Dumbser, University of Trento, for useful discussions on the subject.
\newpage
\section{Introduction}
\label{sec:intro}
\textit{Respondent-Driven Sampling} (RDS) is a network-based sampling technique that leverages social relationships to recruit individuals of hard-to-reach populations into research studies \citep{Hec97}. The RDS process, which proceeds through recruitment \textit{waves}, starts with the selection of initial \textit{seed} participants who, after being interviewed, receive a fixed number of \textit{coupons} to distribute among their peers. RDS offers many advantages over existing network-based sampling methods. Through many waves of recruitment, the process samples farther from the initial recruits, which should ensure greater representativeness and hence generalizability of the sample. This is because seeds typically represent a convenience sample, even if thoughtfully chosen with the view to optimizing representation of their social spheres. Moreover, RDS reduces the privacy concerns that are associated with the identification of participants' social networks or the community population that could occur in a more traditional study that would aim to enumerate the members of the target population by relying on members to recruit their peers into the study.
An RDS sample has a graphical structure, which is typically a partially observed social network of recruited individuals with an unknown underlying dependence structure in which it is common to observe a tendency for individuals with similar traits to share social ties, a feature termed \textit{homophily}. Moreover, the RDS process is not one that is purely random, but rather some individuals are more likely to be selected into the sample than others. An assumed underlying principle in RDS is that the probability of an individual being recruited depends on the size of their personal network of social contacts (\citealt{Gile11,Hec97}). However, the true RDS sampling design is unknown, warranting inferential methods that rely on approximations to the true RDS process to estimate sampling weights.
As highlighted in \cite{Gile18}, the current literature on RDS data lacks principled approaches to multivariable modeling. This is reflected in the variety of analytic approaches taken in the applied literature. Some studies have treated RDS data as though collected by random sampling and applied ANOVA, linear and logistic regressions without any adjustment for RDS sampling weights \citep{Ram13}. Others have included RDS weights in regression models, relying on the typical RDS assumption that some individuals are more likely to be recruited into the sample than others, while ignoring the dependence between observations within the RDS network \citep{Johnston2010TheAO}. In yet another approach, \cite{Rho15} included seeds as random effects to adjust for the dependence within recruitment chains but ignored RDS weights. A mixed effects model including random effects on features such as seeds and recruiters to account for the dependence, and using weights at different levels of clustering when appropriate, has been proposed by \cite{Spi09}. The author further proposed to model social effects driven by homophily by including a parameter to account for possible interactions between recruiters' and recruits' values of homophilic covariates. This approach was presented as general guidance for RDS regression; however, no theoretical details or practical (simulation) demonstrations of the performance of the proposed methodology were provided.
Thus, while there are well-developed strategies for estimating means and prevalences from RDS studies, best practices for regression modeling remain poorly characterized. And yet, understanding dependence between variables is often a primary goal in epidemiologic research. Take for example the question of whether socio-demographic characteristics can predict optimism about the value of antiretroviral therapy, either as a pre-exposure prophylaxis (PrEP) or post-infection treatment, in a population of gay, bisexual and other men who have sex with men (GBM). There have been suggestions that younger people (aged less than 35) were less likely to have optimism, while people with lower annual income (less than \$20,000) were more likely to have optimism \citep{levy2017longitudinal,craib2002hiv}, which could potentially mitigate the effectiveness of HIV preventive measures.
The Engage study, which is an RDS study conducted in Montreal, Toronto and Vancouver, provides a unique opportunity to study this question in a large sample of the GBM community -- but doing so requires appropriate modeling strategies.
One of the most challenging issues of multivariate modeling for RDS is one of missing data. In fact, the observed data reveal partial information about the full RDS network in which all connections between recruited individuals are reported (see \citealt{Weeks02, Mosher2015AQA} for a rare example of an RDS study in which those traditionally missing connections are reported). This problem is fundamentally design-based \citep{Crawford17}. In this case, when conducting inference about homophily-driven effects and/or network-induced correlation structures, different full data distributions give rise to the same distribution for the observed data. This lack of identification has been thoroughly discussed in Yauck et al.~(2020b). A crucial implication for the validity of inferential procedures is that an infinite number of observations will not yield a perfect knowledge of the parameters for homophily-driven effects or/and network dependence unless the full RDS network is observed.
The paper is organized as follows. In Section \ref{sec:graph}, we provide a brief background to respondent-driven sampling, and define the resulting network structure of an RDS sample where social connections can be viewed as exhibiting a correlation structure that is analogous to a spatial pattern (where the ``distance'' metric is the number of social separations between individuals). In Section \ref{sec:methodo}, we propose a generalized mixed effects model, with homophily-driven effects to deal with homophilic covariates, and with spatial random effects to model the dependence between outcomes within the network. We briefly discuss the issue of identification when the full network of recruited individuals is only partially observed by design, and the inclusion of RDS weights to account for the non-random sampling of the target population when recruited individuals (accurately) report on their personal network sizes. The validity of the proposed methodology is investigated in simulations presented in Section \ref{sec:simulations}. In Section \ref{sec: casestudy}, we analyze the Engage data collected in Montreal to investigate the relationship between HIV treatment optimism and socio-demographic characteristics, providing reliable parameter estimates and appropriate standard errors via our proposed approach. We conclude in Section \ref{sec:discussion} with a discussion of the approach and future considerations.
\section{A brief review of RDS}\label{sec:graph}
In this section, we briefly review the assumptions needed for an RDS design, and graphically display an example of the resulting observed network structure -- which is a partial view of the underlying network structure.
Suppose an infinite population in which individuals are connected by social ties. We define this as the population network and state the following:
\begin{assum}{\textbf{(The population network).}}\label{assum:subgraph}
The population network represents an infinite number of non-overlapping clusters of finite sizes.
\end{assum}
In other words, the population is clustered, with individuals partitioned into well-defined clusters. Note that in much of the RDS literature, the population is assumed to form a connected network, with no disjoint clusters. We believe that to be an overly restrictive and unrealistic assumption. For example, the Colorado Springs Project 90 study \citep{klovdahl1994social} revealed a real-world social network of 125 connected, disjoint clusters.
Now, consider an RDS process operating across social connections of the population network.
\begin{assum}{\textbf{(The RDS recruitment).}}\label{assum:recruit}
The recruitment process takes place within a subset of clusters of the network and progresses across individuals' social connections.
\end{assum}
This assumption implies that the RDS sampling process can be characterized as a two-stage sampling design in which seeds and then, subsequently, additional individuals are selected from non-overlapping clusters.
\begin{assum}{\textbf{(No multiple recruitments).}}\label{assum:once}
No individual can be recruited more than once into the study.
\end{assum}
Once again, this is not a typical RDS assumption. Previous work on the theory of RDS estimators of means \citep{Vol08} has assumed that sampling takes place with replacement, and yet in practice this does not occur. We therefore dispense with that unrealistic assumption.
The above three assumptions imply that the observed RDS network can be represented as a finite set of non-overlapping trees. For practical purposes, consider the Engage study in Montreal. The RDS recruitment consisted of three main steps.
\begin{itemize}
\item[Step 1.] Sampling started off with the purposeful selection of a first group of 27 GBM, the seed participants. Seeds were selected to be representative with respect to the diversity of the GBM community based on a community mapping exercise. The seeds were invited to a community-based survey site to complete a questionnaire and to undergo testing for sexually transmitted and bloodborne infections. Seeds who successfully completed the study received a (monetary) remuneration known as primary incentive. This is wave zero of recruitment.
\item[Step 2.] Successful seed participants were each given six uniquely identified coupons, and asked to recruit their GBM peers into the study; the social ties between a recruiter and any new participants recruited were then known to the study through the coupon, and recorded in the study database. Successful recruiters received a secondary (monetary) incentive for each peer that they recruited.
\item[Step 3.] The process continued through successive waves until the desired sample size was reached.
\end{itemize}
Figure \ref{fig:engagenet} illustrates the largest single cluster of the RDS recruitment tree from the Engage study of GBM in Montreal.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=.65]{engagenet3.pdf}
\end{center}
\vspace*{-3cm}
\caption{Representation of the largest single tree of 412 recruits from the Engage recruitment tree of $n=1179$ gay, bisexual and men who have sex with men in Montreal, 2018. Individuals are aligned by wave of recruitment.}
\label{fig:engagenet}
\end{figure}
\section{Methodology}\label{sec:methodo}
In this section, we jointly model homophily-driven effects and the dependence between outcomes from the clusters of the unobserved population network. This allows us to view the fitting of the assumed model to the observed RDS data as a missing data problem. The resulting identification issue is discussed in Section \ref{sec:designid}. Common strategies to account for the non-random sampling of the population and the question of whether to weight the model are discussed in Section \ref{sec:RDSweights}.
\subsection{Underlying, data-generating model and assumptions}\label{sec:modelbased}
Let $y_{ij}$ be the outcome on the $j$th individual of the $i$th cluster, $j=1,\dots,N_i$, where $N_i$ is the size of the $i$th cluster, and $i=1,\dots, m$. Let $x_{ij}$ be the value of the covariate for the $j$th individual of the $i$th cluster, and $\mathbf{x}_{i}$ the vector of covariates for all individuals in the $i$th cluster. We assume that $\{y_{ij}, x_{ij}; i=1,\dots, m; j=1, \dots, N_i\}$ is the realization of a random sample whose distribution is identical to that of the superpopulation of clusters defined in Section \ref{sec:graph}, so that any inference based on the sample pertains to the parameters of the infinite population from which the sample is drawn. Inspired by \cite{Manski03}, we assume the underlying relationship between the outcome and covariates in the population is characterized by a generalized linear mixed model in which $\bm{\delta}_i=\left(\delta_{i1},\dots,\delta_{iN_i}\right)$ is a vector of random effects for the $i$th cluster, $\mu_{ij}=\mbox{E}\left(y_{ij}|\mathbf{x}_{i},\delta_{ij}\right)$, and
\begin{equation}\label{GLMmodel}
g\left(\mu_{ij}\right)=\beta_0+\beta_1 x_{ij}+\gamma \frac{1}{n_{ij}}\sum_{k\sim j}x_{ik}+\delta_{ij},
\end{equation}
where $g(.)$ is a (monotonic) function of the mean, $k \sim j$ represents the set of individuals who share ties with the $j$th individual, $n_{ij}$ is the number of social connections that the $j$th individual of the $i$th cluster shares with other individuals within the same cluster, or \textit{degree}. We further assume that $\bm{\delta}_i \sim N(\bm{0},\bm{\Sigma}_i)$, with $\mbox{cov}\left(\bm{\delta}_i,\bm{\delta}_{j}\right)=\bm{0}$ for $i\neq j$.
The parameter $\gamma$ measures homophily-driven effects, or the influence of peers' characteristics on the outcome of an individual. In this model, the parameters $\beta_0$ and (the potentially vector-valued parameter) $\beta_1$ are of primary interest.
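For concreteness, the peer-average covariate term $(1/n_{ij})\sum_{k\sim j}x_{ik}$ entering model (\ref{GLMmodel}) can be computed for a whole cluster from its neighborhood matrix (formally defined in the next paragraph); the following Python sketch uses illustrative names only.
\begin{verbatim}
import numpy as np

def peer_mean(S, x):
    """Average covariate value of an individual's contacts, for all
    individuals in one cluster; S is the binary symmetric matrix of
    ties (s_jk = 1 when j and k share a tie) and x the covariates."""
    degree = S.sum(axis=1)                   # n_ij
    safe = np.where(degree > 0, degree, 1)   # guard isolated nodes
    return (S @ x) / safe
\end{verbatim}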
Now let $\mathbf{S}^{(i)}=(s^{(i)}_{jk})$ be a (\textit{neighborhood}) matrix representing social ties in the $i$th cluster such that $s^{(i)}_{jk}=1$ if individual $j$ and individual $k$ share a tie and $s^{(i)}_{jk}=0$ otherwise, with $s^{(i)}_{jj}=0$, and $\mathbf{S}=\mbox{diag}(\mathbf{S}^{(i)})$.
We assume a Simultaneous Autoregressive (SAR) model (\citealt{Whittle54, cressie1993statistics}) for the vector of random effects $\bm{\delta}_i$:
\begin{equation}\label{SAR}
\bm{\delta}_i=\rho\mathbf{S}^{(i)}\bm{\delta}_i+\bm{u}_i,
\end{equation}
where $\rho$ represents the strength of the dependence within the network, and $\bm{u}_i \sim N(\bm{0},\sigma^2\bm{I}_{N_i})$. Given $\mathbf{W}_i=(\bm{I}_{N_i}-\rho\mathbf{S}^{(i)})^{-1}$ exists, the covariance of $\bm{\delta}_i$, $\bm{\Sigma}_i$, can be written as
\begin{equation}\label{Sigmaneighbour}
\bm{\Sigma}_i(\sigma^2,\rho)=\sigma^2\mathbf{W}_i\mathbf{W}_i^{\top}.
\end{equation}
The SAR correlation matrix is such that outcomes from \textit{neighboring} (i.e.~socially connected) individuals are more correlated than outcomes from non-neighbors. Other correlation models for $\bm{\delta}_i$ with such properties include Conditional Autoregressive (CAR) models, which belong in the same class of areal models as SAR models \citep{banerjee2003hierarchical}, and models which assume a correlation function that depends on the ``distance'' between observations \citep{f2007methods}.
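For illustration, the SAR covariance (\ref{Sigmaneighbour}) can be assembled, and random effects drawn from it, as in the following Python sketch; the names are illustrative and the inverse defining $\mathbf{W}_i$ is assumed to exist.
\begin{verbatim}
import numpy as np

def sar_covariance(S, rho, sigma2):
    """Sigma = sigma^2 W W', with W = (I - rho S)^{-1} and S the
    neighborhood matrix of the cluster."""
    n = S.shape[0]
    W = np.linalg.inv(np.eye(n) - rho * S)
    return sigma2 * W @ W.T

def draw_delta(S, rho, sigma2, rng=np.random.default_rng()):
    """One draw of the cluster random effects delta_i ~ N(0, Sigma_i)."""
    Sigma = sar_covariance(S, rho, sigma2)
    return rng.multivariate_normal(np.zeros(S.shape[0]), Sigma)
\end{verbatim}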
\subsection{Identification and the validity of inference}\label{sec:designid}
Consider the observed data from RDS, $\bm{\mathcal{D}}_T=(y_{ij}, x_{ij}; i=1,\dots,m, j=1,\dots, n_i)$, where $n_i$ is the number of recruits belonging to the $i$th cluster. Let $\bm{S}_T$ represent the observed neighborhood matrix for the RDS recruitment tree. When data are collected under traditional RDS designs, the complete information on recruited individuals is only partially observed through $\{\mathbf{S}_T, \bm{\mathcal{D}}_T\}$. Yauck et al. (2020b) showed that, in the presence of homophily-driven effects and/or when the dependence within the network is modeled using network-induced correlation structures, traditional RDS studies suffer from a lack of identification, which arises when different full data distributions give rise to the same distribution for the observed data. This has two major implications regarding the validity of inferential procedures for model (\ref{GLMmodel}). First, an infinite number of observations will not provide perfect knowledge of the homophily-driven effects and network-induced structure parameters unless the full RDS network is observed. Further, valid inference about those parameters can be drawn only when the recruitment tree is identical to the unobserved RDS network. Thus, fitting model (\ref{GLMmodel}) to the observed RDS data $\{\mathbf{S}_T, \bm{\mathcal{D}}_T\}$ might be an ineffective strategy.
Now, consider the modeling of homophily-driven effects in (\ref{GLMmodel}). \cite{Spi09} recommended the inclusion of a regression parameter (the equivalent of $\gamma$) to account for a possible effect of the recruiter's value of the homophilic covariate on the outcome of the recruit. \cite{Ave19} showed empirically that ignoring that effect induces a minimal loss of precision but does not add any bias to the estimator for $\beta_1$ when fitting the model to the observed RDS data. This is encouraging since the homophily-driven effects ($\gamma$), in model (\ref{GLMmodel}), cannot be consistently estimated given $\{\mathbf{S}_T, \bm{\mathcal{D}}_T\}$. Following these results, Yauck et al.~(2020b) proposed fitting a regression model \textit{without} homophily-driven effects to the observed data as a way of minimizing the risk of performing misleading inference when the observed RDS network is incomplete. The accuracy and precision of $\hat \beta_1$, and the coverage of 95\% confidence interval for $\beta_1$ when $\gamma$ is omitted from the analytic model will be investigated via simulations in Section \ref{sec:simulations}.
Further, consider the SAR model for the random effects $\bm{\delta}_i$. Due to the lack of identification, the parameters of the induced correlation structure, which is a function of the neighborhood matrix of social connections, are inestimable given the observed data (Yauck et al., 2020b): it is not possible to adequately model $\bm{\Sigma}_i$ with incomplete information on the social ties within the observed recruitment tree. Other network-induced correlation structures such as the autoregressive, the `RDS-tree' \citep{Beckette018272} and the Toeplitz, although suitable for the branching structure of the recruitment tree, also fail to adequately capture the correlation structure for the aforementioned reason. In Section \ref{sec:simulations}, we consider an alternative class of correlation models for which the dependence within the $i$th tree is induced by a cluster-specific random effect $\bm{\delta}_i=\delta_i,\,i=1,\dots,m$; clustering is assumed at the seed level and at the recruiter level \citep{Spi09}. The accuracy and precision of $\hat \beta_1$, and the coverage of the 95\% confidence interval for $\beta_1$ in this case of model misspecification for the random effects, will also be investigated in Section \ref{sec:simulations}.
\subsection{RDS weights}\label{sec:RDSweights}
When conventional sampling methods are used to gather information on a target population, sampling probabilities are known throughout the sampling process. This allows the researcher to compute and take into account design weights when estimating finite population parameters. These approaches are infeasible in an RDS setting since sampling probabilities are unknown. The sampling process is only (partially) controlled by the researcher through the selection of an initial set of seeds -- who, while carefully chosen, still represent a convenience sample -- with the remainder of the recruitment working through a sampling mechanism that relies on individuals' social networks and personal decisions. Let $R_{ij}=1$ if the $j$th individual in the $i$th cluster is sampled. If the true sampling design $\mathcal{S}$ were known, the inclusion probability of the $j$th individual in the $i$th cluster would be computed as
$$
\pi_{ij}=\mbox{E}(R_{ij}|\mathcal{S}).
$$
\cite{Vol08} approximated the RDS process as a random walk on the nodes of an undirected graph and treated RDS samples as independent draws from its stationary distribution. The resulting inclusion probability for the $j$th individual is estimated by
$$
\hat{\pi}^{RDS-II}_{ij}=\frac{1}{n_{ij}}\frac{\sum_{i=1}^{m}\sum_{j=1}^{n_i}n_{ij}}{n}.
$$
Recalling that $n_{ij}$ is the number of social connections that the $j$th individual of the $i$th cluster shares with others in the same cluster, these weights have the appealing intuition of adjusting for the `popularity' of an individual, and hence their likelihood of being recruited.
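As an illustration, the estimated inclusion probabilities and the corresponding inverse-probability weights can be computed directly from the vector of reported degrees; the following Python sketch implements the displayed estimator and is not tied to any particular software package.
\begin{verbatim}
import numpy as np

def rds2_inclusion(degrees):
    """RDS-II estimated inclusion probabilities, following the
    displayed formula: (1 / n_ij) * (sum of degrees) / n."""
    d = np.asarray(degrees, dtype=float)
    return (1.0 / d) * d.sum() / d.size

def rds2_weights(degrees):
    """Sampling weights as inverse estimated probabilities."""
    return 1.0 / rds2_inclusion(degrees)
\end{verbatim}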
\cite{Gile11} showed that the resulting estimators for means and proportions can be severely biased when sample fractions are large, among other factors. They proposed successive sampling (SS) weights based on a SS approximation of the RDS sampling design, which is viewed as a probability proportional to size without replacement design, and showed that resulting estimators consistently outperform estimators based on RDS-II weights.
Details of the algorithm for computing the weights are given in \cite{Gile11}. An important drawback of this approach is that the computation of inclusion probabilities requires knowledge of the population size. Another is that the weights vary depending on the chosen outcome, and so must be computed anew for each outcome or analysis; this can be impractical in large, collaborative or multi-site studies.
Until recently, the majority of inferential methods in the RDS literature dealt with the estimation of population means or proportions. The use of RDS weights in these settings is principled and straightforward.
The use of sampling weights in a regression setting is more challenging (see, for example, \citealt{lohr2009sampling} for a more thorough and rigorous discussion of these issues in general, and \cite{Spi09} for a discussion regarding RDS regression in particular).
In light of these discussions,
we consider the use of unit-level weights -- specifically RDS-II and SS weights -- when fitting the model as a way of taking the RDS design information into account, as these are widely used in the RDS literature.
\subsection{Bootstrap variance estimators}
We consider two bootstrap methods for estimating uncertainty in RDS: $(i)$ the \textit{tree} bootstrap \citep{baraff2016estimating} and $(ii)$ the \textit{neighborhood} bootstrap \citep{2020arXiv201000165Y}.
The tree bootstrap method is based on resampling the RDS tree. Bootstrap samples are typically drawn from the observed recruitment tree by mimicking its hierarchical structure. The first level of the tree generation consists of resampling with (or without) replacement from the sets of seeds of the observed recruitment tree. In the second level of the bootstrap procedure, we resample with (or without) replacement from each of the sampled seeds' recruits. The third level is created by resampling from the wave 1 participants' recruits. The process continues until there are no more recruits from which to sample. The tree bootstrap method mimics the recruitment tree and corresponding features such as the recruitment chain, the number of seeds and waves, thus taking into account the underlying network structure of RDS. Recent findings suggest that this method consistently outperforms existing bootstrap methods, but overestimates uncertainty (\citealt{baraff2016estimating,Gile18,2020arXiv201000165Y}).
The neighborhood bootstrap method is based on sequentially resampling individuals and their neighbors within the RDS tree. The first stage of resampling consists of uniformly selecting $n/c_r$ recruits, where $c_r$ is the average number of connections within the resampled RDS tree. We then include, in the second stage of resampling, the neighbors of all selected recruits in the bootstrap sample. The network component of the resampled RDS data is the subgraph induced by the selected recruits and their neighbors. This method captures the `local' neighborhood structure of the network by reporting all connections that a resampled unit has within the tree, without much reliance on its branching structure. \cite{2020arXiv201000165Y} empirically showed that the neighborhood bootstrap outperforms the tree bootstrap in terms of coverage, bias and mean interval width under realistic RDS assumptions.
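A schematic implementation of one neighborhood-bootstrap resample is given below. It assumes a binary adjacency matrix for the observed tree and, as one possible reading of the first stage, draws the initial recruits with replacement; the names are illustrative.
\begin{verbatim}
import numpy as np

def neighborhood_resample(S, rng=np.random.default_rng()):
    """One resample: uniformly draw n / c_r recruits (c_r = average
    number of connections in the tree), then add all of their
    neighbors; returns the indices of selected individuals."""
    n = S.shape[0]
    c_r = S.sum() / n                    # average degree in the tree
    m = max(1, int(round(n / c_r)))
    first = rng.choice(n, size=m, replace=True)
    neighbors = np.unique(np.nonzero(S[first, :])[1])
    return np.union1d(first, neighbors)
\end{verbatim}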
\section{Simulations}\label{sec:simulations}
We conducted two separate simulation studies to assess the accuracy of regression parameter estimators under two distinct modeling scenarios. Under the assumption that Equation (\ref{GLMmodel}) is the data-generating model, and that the variable $x$ is uncorrelated with degree, the goal of the first simulation study is to assess the accuracy, precision and coverage of the 95\% (model-based and bootstrap) confidence intervals for estimators of $\beta_1$ if $(i)$ homophily-driven effects $\gamma$ are ignored when present and $(ii)$ the correlation model (\ref{SAR}) for the random effects $\bm{\delta}_i$ is misspecified. We consider fitting the model without RDS weights, with RDS-II weights, and with SS weights under three potential population sizes (one of which is correct).
In the second simulation study, we assume a simpler version of the data-generating model (\ref{GLMmodel}) with no homophily-driven effects (implying that there are no missing covariates in the subsequent fitted model) and assess the accuracy, precision and coverage of the 95\% confidence intervals for estimators of $\beta_1$ when the variable $x$ is correlated with degree.
\subsection{RDS sampling}\label{sec:paramsamp}
We simulated networks using Exponential Random Graph Models (ERGM) \citep{harris2014introduction}, a class of generative models for network dependence. Let $\mathbf{S}$ be the random adjacency matrix of the network, and $\mathbf{x}$ a vector of nodal attributes. The joint distribution of the elements of $\mathbf{S}$ is:
\begin{equation}\label{ergm}
\mbox{P}\left(\mathbf{S}=\mathbf{s}\,|\,\mathbf{x},\bm{\eta}\right)=\frac{\exp\left\lbrace \bm{\eta}^{\top} g\left(\mathbf{s},\mathbf{x}\right) \right\rbrace }{\kappa\left(\bm{\eta}\right)},
\end{equation}
where $\bm{\eta}$ is a vector of parameters, $g\left(\mathbf{s},\mathbf{x}\right)$ its corresponding vector of network statistics, and $\kappa\left(\bm{\eta}\right)=\sum_{\mathbf{s}} \exp\left\lbrace \bm{\eta}^{\top} g\left(\mathbf{s},\mathbf{x}\right) \right\rbrace$ is a normalizing constant. The features of the network are captured in (\ref{ergm}) by choosing network statistics to represent density ($\mathcal{D}_n$), i.e. the ratio of ties in the observed network to the total number of possible ties, the degree distribution and homophily. The degree distribution is mainly controlled by setting different values for the Geometrically-Weighted Degree parameter $\eta_{G}$, along with a `decay' parameter $\eta_{d}$ that controls the level of geometric weighting. When $\eta_{G}<0$ there are more high- and low-degree individuals than expected by chance, while when $\eta_{G}>0$ the network is more centralized \citep{Lev16}.
We simulated 10 clusters of equal size from which RDS samples were drawn, using the following set of network and sample characteristics.
The population size is $N=1000$, with density $\mathcal{D}_n=1\%$ and $\eta_{G}(\eta_d)=-6\,(3)$. We consider $s=10$ seeds, $c=3$ coupons, and sampling fractions $f$ of either $20\%$, $50\%$ or $80\%$. We also consider RDS-II weights ($\pi_{RDS}$), SS weights with $N$ known ($\pi_{SS}$), SS weights with $\hat N_u=N-(N-n)/2$ ($\pi_{SS}^u$) and SS weights with $\hat N_o=N+(N-n)/2$ ($\pi_{SS}^o$).
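For reference, networks with these characteristics can be simulated along the following lines with the \textsf{ergm} package in \textsf{R}; the edges coefficient shown is only a rough starting value and would need to be calibrated to the $1\%$ density target.
\begin{verbatim}
library(ergm)  # also loads the network package
base <- network.initialize(1000, directed = FALSE)
# gwdegree term with decay eta_d = 3 held fixed and eta_G = -6;
# the edges coefficient -4.6 (about logit(0.01)) is an assumption.
net <- simulate(base ~ edges + gwdegree(3, fixed = TRUE),
                coef = c(-4.6, -6))
\end{verbatim}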
\begin{comment}
\begin{table}[H]
\caption{Parameters of the first simulation study. Networks of size $N$ and density $\mathcal{D}_n$ were simulated; from these, RDS samples with $s$ seeds were drawn using an RDS design with $c$ coupons and sample fractions $f$.}
\begin{center}
\setlength\extrarowheight{-3pt}
\small
\begin{tabular}{lcc} \hline
\multicolumn{3}{l}{\textbf{Population network}} \\
\qquad \qquad & {\underline{Parameter}} & {\underline{Values}} \\
& $N$ & $10^3$ \\
& $\mathcal{D}_n$& $1\%$ \\
& $\eta_{GWD}(\eta_d)$& $-6 (3)$ \\ \\
\multicolumn{3}{l}{\textbf{RDS sample}} \\
\qquad \qquad & {\underline{Parameter}} & {\underline{Values}} \\
& $s$ & $10$ \\
&$c$& $3$ \\
& $f$& $20\%,\,50\%,\,80\%$ \\ \\
\multicolumn{3}{l}{\textbf{RDS weights}} \\
\qquad \qquad & {\underline{Weights}} & {\underline{Label}} \\
& RDS-II & $\pi_{RDS}$\\
& SS ($N$ known) & $\pi_{SS}$ \\
& SS ($\hat N_u=N-(N-n)/2$) & $\pi_{SS}^u$ \\
& SS ($\hat N_o=N+(N-n)/2$) & $\pi_{SS}^o$ \\
\hline
\end{tabular}
\end{center}
\label{table:SimDetails}
\end{table}
\end{comment}
\subsection{Regression models}\label{sec:regsimresults}
In the first simulation study, we generated a continuous covariate $X$ from a normal distribution with mean $3$ and standard deviation $1.5$. We define the following model:
$$
g(\mu_{ij})=\beta_0+\beta_1 X_{ij}+\gamma \frac{1}{n_{ij}}\sum_{k\sim j}X_{ik}+\delta_{ij},
$$
where $\delta_{ij}$ follows the SAR model (\ref{Sigmaneighbour}). We set the parameter vector to $(\beta_0,\beta_1, \gamma, \sigma^2)=(0,\,2,\,1.5,\,1)$ for each value of the autocorrelation parameter $\rho=0.05,\,0.1$.
We considered three link functions: $g(\mu_{ij})=\mu_{ij}$, $g(\mu_{ij})=\log(\mu_{ij})$ and $g(\mu_{ij})=\mbox{logit}(\mu_{ij})$; for the logistic model, we set the prevalence of the outcome variable to $30\%$ by calibrating the intercept parameter to $\beta_0=-12$ using the cumulative distribution function of the logistic distribution. For each combination of network and sample characteristics, we fitted models in which the homophily term (with parameter $\gamma$) is omitted.
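To make the generation of the random effects concrete, the following sketch draws them under the standard simultaneous autoregressive form $\bm{\delta}=\rho A \bm{\delta}+\bm{\varepsilon}$, which we assume here matches model (\ref{Sigmaneighbour}).
\begin{verbatim}
# Draw SAR random effects delta = solve(I - rho*A, eps), where A is
# an adjacency (or row-standardized weight) matrix -- assumed form.
draw_sar <- function(A, rho, sigma) {
  n   <- nrow(A)
  eps <- rnorm(n, sd = sigma)
  as.vector(solve(diag(n) - rho * A, eps))
}
\end{verbatim}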
In the second simulation study, we assume the following data-generating model:
$$
g(\mu_{ij})=\beta_0+\beta_1 X_{ij}+\delta_{ij},
$$
where $\delta_{ij}$ follows the SAR model (\ref{Sigmaneighbour}). The parameter vector is set to $(\beta_0,\beta_1, \sigma^2, \rho)=(0,\,2,\,1,\,0.05)$. We generated the continuous covariate $X$ in such a way that its Pearson correlation with degree is $\rho_d=0.4$ or $0.6$. The settings for the link functions, the population network and the RDS process are identical to those of the first simulation study; the sample fraction is fixed at $20\%$ across all combinations of simulation parameters.
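One simple way to induce such a correlation, sketched below, is a Gaussian-copula-style construction; the paper's exact mechanism is not reproduced here, so this is an assumption for illustration.
\begin{verbatim}
# Generate X with target correlation rho_d with the degree vector deg.
rho_d <- 0.4
deg   <- sample(2:30, 200, replace = TRUE)   # placeholder degrees
z <- as.vector(scale(deg))                   # standardized degrees
x <- rho_d * z + sqrt(1 - rho_d^2) * rnorm(length(deg))
\end{verbatim}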
To account for the dependence between observations, in both simulation studies we fitted models assuming clustering either at the seed level or at the recruiter level, with seed-specific or recruiter-specific random effects, respectively.
We weighted the models using the set of RDS weights described in Section \ref{sec:paramsamp}; we assumed that each individual's network size is accurately reported. RDS-II and SS weights were computed via \textsf{vh.weights} and \textsf{gile.ss.weights} respectively, both functions of the \textsf{R} package \textsf{RDS}. We computed the relative bias and the root mean squared error of $\hat{\beta}_1$, and the coverage of the $95\%$ (model-based and bootstrap) confidence intervals for ${\beta_1}$.
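In code, the weight computations take the following form, assuming the documented signatures of these functions; \textsf{deg} denotes the vector of reported network sizes.
\begin{verbatim}
library(RDS)
N   <- 1000
deg <- sample(2:30, 200, replace = TRUE); n <- length(deg)  # placeholders
w_rds <- vh.weights(deg)                          # RDS-II
w_ss  <- gile.ss.weights(deg, N = N)              # SS, N known
w_ssu <- gile.ss.weights(deg, N = N - (N - n)/2)  # SS, N under-estimated
w_sso <- gile.ss.weights(deg, N = N + (N - n)/2)  # SS, N over-estimated
\end{verbatim}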
\begin{comment}
\begin{table}[H]
\caption{RDS weights. }
\begin{center}
\setlength\extrarowheight{-1pt}
\scriptsize
\begin{tabular}{lcc} \hline
{\textbf{Weights}} & {} & {\textbf{Label}} \\ \hline
Unweighted & & $\pi_0$ \\
RDS-II & & $\pi_{RDS}$\\
SS ($N$ known) & & $\pi_{SS}$ \\
SS ($\hat N_u=N-(N-n)/2$) & & $\pi_{SS}^u$ \\
SS ($\hat N_o=N+(N-n)/2$) & & $\pi_{SS}^o$ \\
\hline
\end{tabular}
\end{center}
\label{table:models}
\end{table}
\end{comment}
\subsection{Results from the first simulation study: ignoring homophily-driven effects and/or misspecifying the correlation model}
Tables \ref{table:simulation_Lin}, \ref{table:simulation_P} and \ref{table:simulation_Lk} report the relative bias and the root mean squared error of $\hat \beta_1$, and the coverage of the $95\%$ confidence interval for $\beta_1$ in the linear, Poisson and logistic regression cases, respectively. Additional results for a smaller sample fraction ($f=10\%$) are reported in tables S1-S3 of the Web supplement.
For the linear regression, estimators are unbiased across all sampling fractions and network dependence parameters considered. The precision minimally increases with increasing sample fractions, but decreases with increasing network dependence. The coverage of the $95\%$ confidence interval is consistently close to the nominal value; the unweighted estimator offers better coverage than weighted estimators.
For the Poisson regression, estimators exhibit small biases across all sample fractions and levels of network dependence; the unweighted estimator is slightly less biased than the weighted estimators. The bias slightly increases with increasing network dependence but does not consistently decrease with an increasing sample fraction. As in the linear case, the precision minimally increases with an increasing sample size, but does not consistently decrease with increasing network dependence. The coverage of the $95\%$ model-based confidence intervals is far below the nominal value; the coverage of the tree bootstrap confidence interval exceeds or is at the nominal value while, for the neighborhood bootstrap confidence interval, the coverage is slightly below or at the nominal value. Assuming clustering at the recruiter level offers better coverage than assuming clustering at the seed level.
The logistic regression analysis yields estimators that are heavily biased across all sampling fractions, levels of network dependence and sampling weights when clustering is assumed at the seed level. Models that assume clustering at the recruiter level yield estimators that exhibit small to negligible biases. The coverage of the model-based confidence intervals is below the nominal value; the coverage of the tree bootstrap confidence interval is above the nominal value, and the coverage of the neighborhood bootstrap is slightly below or at the nominal value in most cases, when the bias is small to negligible. Again, clustering at the recruiter level yields better coverage.
These results are consistent with previous findings that omitting a non-confounding covariate (assuming the random effects model is correctly specified) does not induce bias for the linear and the Poisson regressions. In the logistic regression case, the omission of the covariate for the homophily-driven effects induces attenuation bias because of the inappropriate collapsing of the contingency tables (\citealt{Cologne2019EffectsOO,Gail1984BiasedEO}).
To better understand the observed coverage for the Poisson and logistic regressions, we reported the relative biases for the model-based and the bootstrap variance estimators in Web Supplement tables S4-S6.
The model-based variance estimator underestimates uncertainty across all sampling fractions, levels of clustering and network dependence. The tree bootstrap variance estimator severely overestimates uncertainty in most cases while the neighborhood bootstrap variance estimator is, in absolute value, less biased than both estimators in most cases, especially for the linear model. This aligns with previous findings in the RDS literature that, for the tree bootstrap method, covering at or above the nominal level generally comes at a significant cost in terms of power (\citealt{Gile18,2020arXiv201000165Y}).
\begin{table}[H]
\caption{\small Linear - Relative bias and root mean squared error of $\hat \beta_1$, model-based coverage (CI), the tree bootstrap coverage (TCI) and the neighborhood bootstrap coverage (NCI) of the $95\%$ confidence interval of $\beta_1$ for increasing levels of sample fraction ($f$), network dependence ($\rho$) and various RDS weights ($\pi$). Clustering (Clstr.) is assumed at the seed level (S) and at the recruiter level (R).}
\begin{center}
\setlength\extrarowheight{-3pt}
\footnotesize
\begin{tabular}{llc ccccc c ccccc} \hline
\multicolumn{6}{r}{$f=20\%$}&&&&&&{$f=80\%$}\\
\cline{4-8} \cline{10-14}
{$\rho$}& {Clstr.} & {$\pi$}& {RB}&{RMSE}&{CI}&{TCI}&{NCI}& & {RB}&{RMSE}&{CI}&{TCI}&{NCI}\\ \hline
\multirow{5}{1em}{$.05$}& \multirow{1}{1em}{S} & 1 & 0 &0.06 &0.96 & 0.99&0.94&&0 &0.03&0.94 &1.00&0.92 \\
& & $\pi_{RDS}$ &0 & 0.07 & 0.90 & 0.98&0.93& &0&0.05&0.82&1.00&0.92 \\
& &$\pi_{SS}$ &0 & 0.07 & 0.92 &0.98 &0.93 &&0 &0.04&0.90& 1.00&0.91\\
& &$\pi_{SS}^u$ &0 & 0.07 & 0.93 &0.98 & 0.94 &&0&0.04&0.91& 1.00&0.91\\
& &$\pi_{SS}^o$ & 0 & 0.07 &0.92 & 0.98 &0.93 &&0 &0.04&0.89&1.00 &0.91 \\ \\
& \multirow{1}{1em}{R} & 1 & 0 &0.07 &0.95 &0.99& 0.94& &0 &0.03&0.94&1.00& 0.95 \\
& & $\pi_{RDS}$ &0 & 0.08 & 0.88 & 0.99&0.93 &&0&0.05&0.81&1.00& 0.95\\
& &$\pi_{SS}$ &0 & 0.07 & 0.89 &0.99&0.93& &0 &0.04&0.91& 1.00&0.94\\
& &$\pi_{SS}^u$ &0 & 0.07 & 0.90 & 0.99 &0.94 &&0&0.04&0.92&0.99&0.94 \\
& &$\pi_{SS}^o$ & 0 & 0.08 &0.88 & 0.99& 0.93&&0 &0.04&0.90& 0.99 &0.93\\
\hline
\multirow{5}{1em}{$.1$} & \multirow{1}{1em}{S} &1 &0 &0.07 &0.96 &1.00&0.94& &0&0.04& 0.94&1.00& 0.96\\
& &$\pi_{RDS}$ & 0 & 0.09& 0.91 &0.98&0.94& &0&0.06&0.84&0.99 &0.95\\
& &$\pi_{SS}$ & 0 & 0.08& 0.94 &0.99 & 0.95&&0&0.04&0.93&0.99&0.95 \\
& &$\pi_{SS}^u$ & 0&0.08 & 0.94 &0.99&0.95 &&0&0.04&0.93&1.00&0.95 \\
& & $\pi_{SS}^o$ & 0&0.08& 0.93 & 0.99&0.94 & &0&0.04&0.92& 0.99&0.96\\ \\
& \multirow{1}{1em}{R} & 1 &0 &0.08 &0.93 &1.00&0.94& &0&0.04& 0.93&1.00 &0.96\\
& &$\pi_{RDS}$ & 0 & 0.09& 0.89 &1.00 &0.93 &&0&0.06&0.82& 1.00&0.96\\
& &$\pi_{SS}$ & 0 & 0.09& 0.89 & 1.00& 0.92& &0&0.04&0.91& 1.00&0.95\\
& &$\pi_{SS}^u$ & 0&0.09 & 0.91 & 1.00& 0.92 &&0&0.04&0.91&1.00 &0.95\\
& & $\pi_{SS}^o$ & 0&0.09& 0.89 & 1.00&0.93 &&0&0.04&0.91&1.00& 0.96\\
\hline
\end{tabular}
\end{center}
\label{table:simulation_Lin}
\end{table}
\begin{table}[H]
\caption{\small Poisson - Relative bias and root mean squared error of $\hat \beta_1$, model-based coverage (CI), the tree bootstrap coverage (TCI) and the neighborhood bootstrap coverage (NCI) of the $95\%$ confidence interval of $\beta_1$ for increasing levels of sample fraction ($f$), network dependence ($\rho$) and various RDS weights ($\pi$). Clustering (Clstr.) is assumed at the seed level (S) and at the recruiter level (R).}
\begin{center}
\setlength\extrarowheight{-3pt}
\footnotesize
\begin{tabular}{llc ccccc c ccccc} \hline
\multicolumn{6}{r}{$f=20\%$}&&&&&&{$f=80\%$}\\
\cline{4-8} \cline{10-14}
{$\rho$}& {Clstr.} & {$\pi$}& {RB}&{RMSE}&{CI}&{TCI}&{NCI}& & {RB}&{RMSE}&{CI}&{TCI}&{NCI}\\ \hline
\multirow{5}{1em}{$.05$} & \multirow{1}{1em}{S} & 1 & -0.05 & 0.41 & 0.41 & 0.98&0.90 & &-0.08&0.30&0.29 &0.87&0.80\\
& & $\pi_{RDS}$ & -0.08 & 0.49& 0.38 &0.97&0.89&&-0.12&0.38&0.23&0.82&0.79\\
&& $\pi_{SS}$ & -0.08 & 0.48 & 0.39 &0.97&0.89& &-0.10&0.34&0.24&0.86&0.80\\
&& $\pi_{SS}^u$ & -0.08 & 0.46& 0.38 &0.98&0.89 &&-0.10&0.33&0.25&0.85&0.80\\
&& $\pi_{SS}^o$ & -0.08 & 0.48 & 0.40 &0.97 &0.89&&-0.10&0.34&0.24&0.85 &0.79\\ \\
& \multirow{1}{1em}{R} & 1 & -0.05 & 0.36 & 0.51 &0.96&0.93 &&-0.06&0.24&0.36& 0.97&0.92\\
& & $\pi_{RDS}$ & -0.06 & 0.46& 0.46 & 0.93&0.93&&-0.09&0.30&0.31&0.96&0.94\\
&& $\pi_{SS}$ & -0.08 & 0.47 & 0.53 &0.94 &0.93&&-0.08&0.27&0.36&0.97&0.94\\
&& $\pi_{SS}^u$ & -0.08 & 0.45& 0.52 & 0.95 &0.93 &&-0.08&0.27&0.31&0.97&0.94\\
&& $\pi_{SS}^o$ & -0.08 & 0.47 & 0.55 & 0.93&0.92&&-0.08&0.28&0.36&0.97 &0.92\\
\hline
\multirow{5}{1em}{$.1$}& \multirow{1}{1em}{S} & 1 &-0.08 & 0.43 &0.42 &0.92&0.90& &-0.12&0.35&0.27&0.96&0.79 \\
& & $\pi_{RDS}$ & -0.10 &0.44 &0.37 & 0.90&0.88&&-0.15&0.40&0.23&0.95&0.80\\
& & $\pi_{SS}$ &-0.10 & 0.44& 0.39 & 0.91&0.90&&-0.14&0.37&0.24&0.95&0.79\\
& & $\pi_{SS}^u$ & -0.10 &0.44 & 0.40 &0.91 &0.90& &-0.13&0.37&0.24&0.95&0.78\\
& & $\pi_{SS}^o$ &-0.10 & 0.44& 0.39 & 0.91 &0.90&&-0.14&0.38&0.24&0.95&0.80\\\\
& \multirow{1}{1em}{R} & 1 &-0.10 & 0.52 &0.47 &0.97&0.96& &-0.09&0.31&0.32& 0.98&0.94\\
& & $\pi_{RDS}$ & -0.11 &0.51 &0.48 & 0.95&0.96&&-0.11&0.34&0.29&0.98&0.94\\
& & $\pi_{SS}$ &-0.13 & 0.63& 0.56 &0.95&0.94& &-0.10&0.33&0.32&0.99&0.94\\
& & $\pi_{SS}^u$ & -0.12 &0.60 & 0.54 &0.95 &0.94 & &-0.10&0.33&0.33&0.99&0.96\\
& & $\pi_{SS}^o$ &-0.13 & 0.63& 0.60 & 0.94& 0.92&&-0.10&0.34&0.33&1.00&0.96\\
\hline
\end{tabular}
\end{center}
\label{table:simulation_P}
\end{table}
\begin{table}[H]
\caption{\small Logistic - Relative bias and root mean squared error of $\hat \beta_1$, model-based coverage (CI), the tree bootstrap coverage (TCI) and the neighborhood bootstrap coverage (NCI) of the $95\%$ confidence interval of $\beta_1$ for increasing levels of sample fraction ($f$), network dependence ($\rho$) and various RDS weights ($\pi$). Clustering (Clstr.) is assumed at the seed level (S) and at the recruiter level (R).}
\begin{center}
\setlength\extrarowheight{-3pt}
\footnotesize
\begin{tabular}{llc ccccc c ccccc} \hline
\multicolumn{6}{r}{$f=20\%$}&&&&&&{$f=80\%$}\\
\cline{4-8} \cline{10-14}
{$\rho$}& {Clstr.} & {$\pi$}& {RB}&{RMSE}&{CI}&{TCI}&{NCI}& & {RB}&{RMSE}&{CI}&{TCI}&{NCI}\\ \hline
\multirow{5}{1em}{$.05$}& \multirow{1}{1em}{S} & 1 & -0.17 & 0.43 & 0.71& 0.92 &0.76& &-0.21&0.43&0.43&0.59&0.45 \\
&& $\pi_{RDS}$ & -0.15 & 0.47& 0.62& 0.92 &0.84&&-0.23&0.49&0.19&0.73&0.42\\
&& $\pi_{SS}$ & -0.16 & 0.46 & 0.63&0.93 &0.82& &-0.22&0.46&0.34&0.57&0.38\\
& & $\pi_{SS}^u$ & -0.16 & 0.46& 0.64& 0.92 &0.80& &-0.22&0.45&0.37&0.55&0.39\\
&& $\pi_{SS}^o$ & -0.16 & 0.47 & 0.63&0.93 &0.83& &-0.22&0.47&0.31&0.59& 0.38\\ \\
& \multirow{1}{1em}{R} & 1 & 0.12 & 1.35 & 0.63 & 1.00&0.98& &0.01&0.43&0.54&1.00& 0.98\\
&& $\pi_{RDS}$ & 0.15 & 1.19& 0.63& 1.00&0.99&&0.03&0.36&0.61&1.00&0.97\\
&& $\pi_{SS}$ & 0.14 & 1.20 & 0.62 & 1.00&0.99 &&0.03&0.39&0.54&1.00&0.98\\
& & $\pi_{SS}^u$ & 0.14 & 1.27& 0.62& 1.00&0.99&&0.03&0.40&0.53&1.00&0.98\\
&& $\pi_{SS}^o$ & 0.14 & 1.19 & 0.63& 1.00&0.99& &0.03&0.38&0.57 &1.00&0.98\\
\hline
\multirow{5}{1em}{$.1$} & \multirow{1}{1em}{S} & 1 &-0.24 & 0.53 &0.61& 0.85 & 0.74& &-0.25&0.51&0.35&0.30&0.20 \\
& & $\pi_{RDS}$ & -0.21 &0.52 &0.53& 0.95 &0.82& &-0.25&0.53&0.18&0.59&0.23\\
& & $\pi_{SS}$ &-0.22 & 0.52& 0.54& 0.95&0.82& &-0.25&0.52&0.30&0.32&0.25\\
& & $\pi_{SS}^u$ & -0.22 &0.52 & 0.52& 0.94&0.80& &-0.25&0.52&0.34&0.30&0.25\\
& & $\pi_{SS}^o$ &-0.21 & 0.52& 0.53 & 0.95&0.82& &-0.25&0.52&0.28&0.37&0.25\\ \\
& \multirow{1}{1em}{R} & 1 &-0.01 & 0.83 &0.58& 1.00 &0.98& &-0.08&0.36&0.50&1.00 &0.98\\
& & $\pi_{RDS}$ & 0.05 &0.80 &0.65& 1.00 & 0.99&&-0.04&0.31&0.59&1.00&0.99\\
& & $\pi_{SS}$ &0.06 & 1.15& 0.62&1.00&0.99& &-0.06&0.32&0.49&1.00&0.99\\
& & $\pi_{SS}^u$ & 0.05 &1.11 & 0.60 &1.00&0.99& &-0.07&0.33&0.49&1.00&0.98\\
& & $\pi_{SS}^o$ &0.07& 1.17& 0.63 &1.00&0.99 &&-0.06&0.31&0.51&1.00&0.99\\
\hline
\end{tabular}
\end{center}
\label{table:simulation_Lk}
\end{table}
\subsection{Results for the second simulation study: correlated predictor and degree}
Tables \ref{table:simulation_Lin_corr}, \ref{table:simulation_P_corr} and \ref{table:simulation_Lk_corr} report the results for the linear, Poisson and logistic regression respectively. Weighted estimators are (slightly) less biased than unweighted estimators across all models, clustering levels and levels of correlation between the predictor and the degree. Further, RDS-II weights perform as well as the SS weights across all models.
\begin{table}[H]
\caption{ \small Linear, with degree/covariate correlation - Relative bias and root mean squared error of $\hat \beta_1$, model-based coverage (CI), tree bootstrap coverage (TCI) and neighborhood bootstrap coverage (NCI) of the $95\%$ confidence interval of $\beta_1$ with increasing association between predictor and degree ($\rho_d$) and for various RDS weights ($\pi$). Clustering (Clstr.) is assumed at the seed level (S) and at the recruiter level (R).}
\begin{center}
\setlength\extrarowheight{-3pt}
\footnotesize
\begin{tabular}{lc ccccc c ccccc} \hline
\multicolumn{5}{r}{$\rho_d=0.4$}&&&&&&{$\rho_d=0.6$}\\
\cline{3-7} \cline{9-13}
{Clstr.} & {$\pi$}& {RB}&{RMSE}&{CI}&{TCI}&{NCI}& & {RB}&{RMSE}&{CI}&{TCI}&{NCI}\\ \hline
\multirow{1}{1em}{S} & 1 & 0 &0.02 &0.98 & 0.99&0.95&& 0 &0.02 &0.95&0.99&0.93 \\
& $\pi_{RDS}$ &0 & 0.02 & 0.84 & 0.99&0.91 & &0&0.02&0.89&0.97 &0.90\\
&$\pi_{SS}$ &0 & 0.02 & 0.85 &0.99 &0.91& &0&0.02&0.90&0.97&0.91\\
&$\pi_{SS}^u$ &0 & 0.02 & 0.85 &0.99 & 0.91 & &0 &0.02&0.90&0.98&0.91\\
&$\pi_{SS}^o$ & 0 & 0.02 &0.85 & 0.99&0.91 & &0 &0.02&0.90&0.97&0.91 \\ \\
\multirow{1}{1em}{R} & 1 & 0 &0.01 &0.96 &0.99&0.95& & 0 &0.02 &0.95&0.99 & 0.92\\
& $\pi_{RDS}$ &0 & 0.02 & 0.88 & 0.99& 0.93 & &0&0.02&0.86&0.99 &0.91\\
&$\pi_{SS}$ &0 & 0.02 & 0.90 &0.99& 0.93& &0&0.02&0.87&0.99&0.92\\
&$\pi_{SS}^u$ &0 & 0.02 & 0.92 & 0.99 &0.93 & &0 &0.02&0.88&0.99& 0.91\\
&$\pi_{SS}^o$ & 0 & 0.02 &0.89 & 0.99& 0.93& &0 &0.02&0.87&0.99&0.91\\
\hline
\end{tabular}
\end{center}
\label{table:simulation_Lin_corr}
\end{table}
\begin{table}[H]
\caption{\small Poisson, with degree/covariate correlation - Relative bias and root mean squared error of $\hat \beta_1$, model-based coverage (CI), tree bootstrap coverage (TCI) and neighborhood bootstrap coverage (NCI) of the $95\%$ confidence interval of $\beta_1$ with increasing association between predictor and degree ($\rho_d$) and for various RDS weights ($\pi$). Clustering (Clstr.) is assumed at the seed level (S) and at the recruiter level (R).}
\begin{center}
\setlength\extrarowheight{-3pt}
\footnotesize
\begin{tabular}{lc ccccc c ccccc} \hline
\multicolumn{5}{r}{$\rho_d=0.4$}&&&&&&{$\rho_d=0.6$}\\
\cline{3-7} \cline{9-13}
{Clstr.} & {$\pi$}& {RB}&{RMSE}&{CI}&{TCI}&{NCI}& & {RB}&{RMSE}&{CI}&{TCI}&{NCI}\\ \hline
\multirow{1}{1em}{S} & 1 & -0.02 & 1.01& 0.32 & 0.94 &0.92&&-0.01&1.26&0.29&0.97&0.93\\
& $\pi_{RDS}$ & -0.01 & 0.82& 0.33 &0.96& 0.92& & 0&1.02&0.32&0.96&0.92\\
& $\pi_{SS}$ & -0.01 & 0.83 & 0.32 &0.96&0.92& & 0&1.06&0.31&0.96&0.92\\
& $\pi_{SS}^u$ & -0.01 & 0.87& 0.32 &0.95& 0.92 & & -0.01&1.11&0.31&0.96&0.92\\
& $\pi_{SS}^o$ & -0.01 & 0.85 & 0.32&0.96 &0.92 & & 0 &1.05&0.32&0.96&0.92\\ \\
\multirow{1}{1em}{R} & 1 & -0.05 & 2.17& 0.35 &0.98&0.96&&-0.03&0.30 &0.55&0.97&0.95\\
& $\pi_{RDS}$ & -0.02 & 1.04& 0.38 & 0.98&0.95& & -0.02&0.24&0.60&0.98&0.96\\
& $\pi_{SS}$ & -0.03 & 1.26 & 0.37 &0.98 &0.95&& -0.02&0.25&0.60& 0.98&0.96\\
& $\pi_{SS}^u$ & -0.03 & 1.09& 0.35 & 0.98 &0.95 & & -0.02&0.26&0.60&0.98&0.96\\
& $\pi_{SS}^o$ & -0.04 & 1.59& 0.37& 0.98&0.95 && -0.02 &0.25&0.60& 0.98&0.96\\
\hline
\end{tabular}
\end{center}
\label{table:simulation_P_corr}
\end{table}
\begin{table}[H]
\caption{\small Logistic, with degree/covariate correlation - Relative bias and root mean squared error of $\hat \beta_1$, model-based coverage (CI), tree bootstrap coverage (TCI) and neighborhood bootstrap coverage (NCI) of the $95\%$ confidence interval of $\beta_1$ with increasing association between predictor and degree ($\rho_d$) and for various RDS weights ($\pi$). Clustering (Clstr.) is assumed at the seed level (S) and at the recruiter level (R).}
\begin{center}
\setlength\extrarowheight{-3pt}
\footnotesize
\begin{tabular}{lc ccccc c ccccc} \hline
\multicolumn{5}{r}{$\rho_d=0.4$}&&&&&&{$\rho_d=0.6$}\\
\cline{3-7} \cline{9-13}
{Clstr.} & {$\pi$}& {RB}&{RMSE}&{CI}&{TCI}&{NCI}& & {RB}&{RMSE}&{CI}&{TCI}&{NCI}\\ \hline
\multirow{1}{1em}{S} & 1 & -0.14 & 0.39 & 0.82& 0.95 &0.80&&-0.14&0.37 &0.79&0.94 &0.82\\
& $\pi_{RDS}$ & -0.11 & 0.41& 0.84& 0.99 &0.88& & -0.10&0.37&0.79&0.97&0.91\\
& $\pi_{SS}$ & -0.11 & 0.40 & 0.83&0.99 & 0.87&& -0.11&0.37&0.76&0.97&0.90\\
& $\pi_{SS}^u$ & -0.12 & 0.39& 0.84& 0.99 & 0.86 & & -0.11&0.37&0.76&0.97&0.89\\
& $\pi_{SS}^o$ & -0.11 & 0.40 & 0.85&0.99 & 0.87 & & -0.11 &0.37&0.78&0.97& 0.91\\ \\
\multirow{1}{1em}{R} & 1 & -0.05 & 0.52 & 0.46 & 1.00&0.98&&-0.05&0.39 &0.88&1.00 &0.98\\
& $\pi_{RDS}$ & 0.04 & 0.54& 0.62& 1.00&0.99 & & 0.04&0.44&0.87&1.00&0.98\\
& $\pi_{SS}$ & 0.03 & 0.52 & 0.60& 1.00&0.98& & 0.03&0.43&0.88&1.00&0.98\\
& $\pi_{SS}^u$ & 0.01 & 0.50& 0.54&1.00& 0.98 & & 0.02&0.43&0.86&1.00&0.98\\
& $\pi_{SS}^o$ & 0.03 & 0.52 & 0.61& 1.00& 0.98 && 0.03 &0.43&0.87&1.00&0.98\\
\hline
\end{tabular}
\end{center}
\label{table:simulation_Lk_corr}
\end{table}
\subsection{Summary and guidelines}
Our results show that ignoring homophily-driven effects, if present, induces a negligible to small bias for linear and Poisson models while, for the logistic regression, this strategy induces a substantial bias in the estimates when clustering is assumed at the seed level, and less bias but increased variability when clustering is assumed at the recruiter level. Moreover, misspecifying the SAR correlation model for the random effects induces an increasing bias as the dependence within the network increases, as well as a poor coverage of the model-based confidence interval for the Poisson and logistic regressions. Bootstrap-based confidence intervals yield better coverage than model-based confidence intervals, particularly for Poisson and logistic regressions. Also, fitting mixed models in which clustering is assumed at the recruiter level yields estimators with less bias and better coverage than models in which clustering is assumed at the seed level.
As for RDS weights, unweighted regression methods consistently outperform weighted methods in terms of precision and coverage when the predictor is uncorrelated with degree at the population level. The difference in precision can be attributed to the dispersion of the degree distribution of the network, which results in more very small and very large weights than expected by chance, and hence increased variability in the estimates \citep{Ave19}. On the other hand, weighted regression methods consistently outperform unweighted methods in terms of bias when the predictor is correlated with degree.
We can therefore provide some general guidance for regression in RDS studies: $(i)$ analyses that omit homophily-driven effect terms, while including a random effect for recruiter, outperform other modeling strategies in terms of bias and coverage of the confidence interval, and $(ii)$ weighted regression methods outperform unweighted regression methods in terms of bias when the predictor is correlated with degree; when the predictor is uncorrelated with degree, weighting the model only increases variability in the estimates. As observed previously \citep{2020arXiv201000165Y}, the neighborhood bootstrap provides better estimates of standard errors than any existing alternatives.
\section{Case Study}\label{sec: casestudy}
We now turn to an analysis of the Engage study, a Canadian study conducted in three cities: Montreal, Toronto and Vancouver. The study aims to determine the individual, social and community-level risk factors for the transmission of HIV and sexually transmitted infections, and related behaviours, within the GBM community. In this example, we focus on the study conducted in Montreal. The Engage data-analysis team designed two databases and a tracker to monitor the RDS recruitment process. The study led to the recruitment of $n=1179$ GBM from Montreal between February 2017 and June 2018. Approximately 45\% of recruited individuals were successful at recruiting, and 82\% of these effective recruiters brought 1 to 3 peers into the study; 6 of the 27 seed participants were unsuccessful at starting recruitment chains.
\subsection{Descriptive statistics}
Treatment optimism was measured on a scale of 12 items, developed by \cite{ven2000scale} to measure attitudes towards HIV treatment within the Australian GBM community. All items were measured on a 4-point Likert scale (strongly disagree, disagree, agree, strongly agree). The optimism score (TMTOPT) was obtained by summing 10 items and subtracting the remaining 2 items. This gives a range of possible values between 0 (highly skeptical) and 36 (highly optimistic).
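In code, the score construction amounts to the following; the item coding ($0$--$3$) and the offset of $6$ (which maps the raw score to the stated $0$--$36$ range under that coding) are assumptions for illustration.
\begin{verbatim}
# items: n x 12 matrix of Likert responses coded 0-3 (assumed coding);
# pos and neg index the 10 summed and the 2 subtracted items.
items <- matrix(sample(0:3, 1179 * 12, TRUE), ncol = 12)  # placeholder
pos <- 1:10; neg <- 11:12
tmtopt <- rowSums(items[, pos]) - rowSums(items[, neg]) + 6
\end{verbatim}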
Following \cite{levy2017longitudinal} who found age, education and income as correlates of optimism through a range of bivariate analyses, we chose these same socio-demographic characteristics, among others, as possible predictors for treatment optimism. Descriptive (unweighted) statistics for these variables in the sample are presented in Table S7 of the Web Supplement.
Around $33\%$ of respondents were aged 30 or younger, about $70\%$ were born in Canada, less than a third had a high school diploma or lower, and about $58\%$ earned \$30,000 or less.
Younger and more educated participants are less optimistic with regard to HIV treatment than other socio-demographic groups; the absolute difference is more pronounced for age. Further, participants who were born in Canada and those who earn less than \$30,000 in annual income have higher optimism scores than other participants.
\begin{comment}
\begin{table}[H]
\caption{ Descriptive (unweighted) statistics of the RDS sample of $n$=1179 gay, bisexual and other men who have sex with men (GBM) recruits in Montréal: number ($n$), percent ($\%$) of socio-demographic characteristics and risk behaviors, mean ($m$) and standard deviation ($SD$) of the treatment optimism score (TMTOPT) broken drown by socio-demographic groups (reference group and others).}
\vspace*{-0.4cm}
\begin{center}
\setlength\extrarowheight{1pt}
\small
\begin{tabular}{lllllccccc} \hline
\multicolumn{9}{r}{\bf TMTOPT score}\\
\cline{8-10}
{\bf Reference groups} &&&&&&& {Ref.}& &{Other} \\
\cline{1-2} \cline{8-10}
\multicolumn{6}{r}{\bf $n\, (\%)$}&&&{ m\,(SD)}&\\
\textbf{Socio-demographic characteristics}& & &&&&& &&\\
\,\,\,\,\,\,Age $\leq$ 30 & &&& & $384\, (32.6)$ & & $16.2(5.2)$& &$17.3(5.9)$ \\
\,\,\,\,\,\,Born in Canada& &&& & $821\, (69.6)$ & & $17.0(5.9)$& &16.7(5.4)\\
\,\,\,\,\,\,Highest diploma $<$ college& &&& & $352\, (29.9)$ & & $16.7(6.0)$ &&17.0(5.6)\\
\,\,\,\,\,\, $\leq$ $30\,000\$$ in annual income& &&& & $678\, (57.5)$ & & $17.0(5.7)$& &16.8(5.8)\\
\hline
\end{tabular}
\end{center}
\label{table:descstat}
\end{table}
\end{comment}
\subsection{Model fitting}
We chose the socio-demographic characteristics identified above as candidate predictors of HIV treatment optimism. We fit various linear mixed-effects models with seed-specific (for comparison purposes) and recruiter-specific random intercepts, in a weighted and unweighted fashion. Parameter estimates, standard errors and $95\%$ (model-based and bootstrap) confidence intervals are reported in Table \ref{table:estimates}.
We performed non-parametric Mann-Whitney U-tests to compare the distribution of degree between groups defined by the socio-demographic characteristics. The null hypothesis of the test is that, for randomly selected degrees $d_i$ and $d_j$ from the two groups, the probability of $d_i$ being greater than $d_j$ equals the probability of $d_j$ being greater than $d_i$. In the Engage sample, the p-values of the test for age, education, being born in Canada and annual income are 0.0, 0.13, 0.0 and 0.0, respectively. This indicates differences in the median number of social connections between groups defined by age, being born in Canada and annual income, thus suggesting the use of weighted regression.
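In \textsf{R}, each of these comparisons is a single call to the base function \textsf{wilcox.test}; the variable and data frame names below are hypothetical.
\begin{verbatim}
# deg: reported network size; age30: indicator of age <= 30
wilcox.test(deg ~ age30, data = engage)  # Mann-Whitney U-test
\end{verbatim}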
Guided by the simulations presented in Section \ref{sec:regsimresults} and by the discussion in the preceding paragraph, we focus on the weighted regression estimates with clustering at the recruiter level. We computed standard error estimates and 95\% confidence intervals using the neighborhood bootstrap method. The results show that annual income is significantly (and positively) associated with optimism about the efficacy of the treatment: earning \$30,000 or less in annual income is associated with an increase of about 1.5 points in the expected optimism score.
\begin{table}[H]
\caption{\small Point estimates, standard errors and asymptotic $95\%$ confidence intervals for a linear mixed model applied to the Engage Montreal data, where clustering is assumed at the seed level (S) and at the recruiter level (R), estimated without weights (1), with RDS-II ($\pi_{RDS}$) weights, and SS weights ($\pi_{SS}$). The standard deviation of the random intercept is $\sigma_0$ and the intraclass correlation is $\rho$.}
\begin{center}
\setlength\extrarowheight{-3pt}
\small
\begin{tabular}{l ccc ccccc} \hline
\multicolumn{4}{r}{S}&&&&{R}\\
\cline{3-5} \cline{7-9}
{$\pi$} & {}& {Est.}&{SE}&{CI}& & {Est.}&{SE}&{CI}\\ \hline
\multirow{1}{1em}{1} & Constant & 16.21 &0.48 &[15.3,17.1]&& 16.21 &0.45 &[15.3,17.1]\\
& Age ($\leq$ 30) &-0.72 & 0.40 & [-1.5, 0.0] & &-0.72&0.44&[-1.6, 0.1]\\
&Education ($<$ college) &0.92 & 0.90 & [-0.8, 2.7] & &0.85&1.00&[-1.1, 2.8]\\
&Born in Canada &0.58 & 0.43 & [-0.3, 1.4] & &0.55 &0.49&[-0.4, 1.5]\\
& Annual income ($\leq$ \$30,000) &0.38 & 0.41& [-0.4, 1.2] & &0.43 &0.43&[-0.4, 1.3]\\
&Born in Canada $\times$ Education & -1.96 & 1.04 &[-4.0, 0.0] & &-1.93 &1.15&[-4.2, 0.3]\\
&$\sigma_0$($\rho$) & 0.0(0.0) & - &- & &1.33(0.06)&-&- \\\\
\multirow{1}{1em}{$\pi_{RDS}$} & Constant & 15.09 &0.60&[13.9,16.3] && 15.50 &0.66 &[14.2,16.8]\\
& Age ($\leq$ 30) &-0.84 & 0.59 & [-2.0, 0.3] & &-0.86&0.61&[-2.1, 0.3]\\
&Education ($<$ college) &2.62 & 1.42 & [-0.2, 5.4]& &0.59&1.17&[-1.7, 2.9]\\
&Born in Canada &0.93 & 0.63 &[-0.3, 2.2] & &0.56 &0.82&[-1.0, 2.2]\\
&Annual income ($\leq$ \$30,000) &1.30& 0.69 &[-0.1, 2.6] & &1.54 &0.62&[0.3, 2.8]\\
&Born in Canada $\times$ Education &-4.34 & 1.75 &[-7.8, -0.9]& &-2.53&1.60&[-5.7, 0.6] \\
&$\sigma_0$($\rho$) & 0.96(0.01) & - &- & &3.12(0.15) &-&-\\\\
\multirow{1}{1em}{$\pi_{SS}$} & Constant & 15.09 &0.60 &[13.9,16.3] && 15.51 &0.65 &[14.2,16.8] \\
& Age ($\leq$ 30) &-0.84 & 0.59 & [-2.0, 0.3] & &-0.86&0.60&[-2.0, 0.3]\\
&Education ($<$ college) &2.62 & 1.41 &[-0.2, 5.4]& &0.63&1.16&[-1.6, 2.9]\\
&Born in Canada &0.93 & 0.62 & [-0.3, 2.1] & &0.57 &0.80&[-1.0, 2.1]\\
&Annual income ($\leq$ \$30,000) &1.29 & 0.68 & [0, 2.6] & &1.52 &0.61&[0.3, 2.7]\\
&Born in Canada $\times$ Education & -4.32 & 1.74 &[-7.7, -0.9] & &-2.55&1.58&[-5.6, 0.5] \\
&$\sigma_0$($\rho$) & 0.95(0.0) & - &- & &3.10(0.02) &-&- \\
\hline
\end{tabular}
\end{center}
\label{table:estimates}
\end{table}
It is also worth noting that the directions of the associations between each covariate and the optimism score are consistent across all levels of clustering, regardless of the chosen RDS weights. However, the conclusions in terms of significance of the parameter effects differ depending on whether we fit models with seed-specific or recruiter-specific random effects.
We performed non-parametric hypothesis tests to decide whether or not to weight the model. It is important to highlight that we have not formally evaluated this approach; rather, we use it as an informal tool to guide our analyses. Indeed, a non-significant test does not exclude the possibility of differences in the degree distribution across levels defined by the predictor, suggesting the use of weighted regression at least as a sensitivity check.
In our analyses, we chose socio-demographic factors as potential predictors of treatment optimism based on available evidence in the literature, but we have not tried to fully understand all predictors of the treatment score construct. Thus, this is a `limited' consideration of all potential predictors of treatment optimism, which can be further extended as more associational studies are conducted on the subject.
\section{Conclusion}\label{sec:discussion}
The development of regression methods for RDS is limited by a missing data problem as the observed RDS data reveal only partial information about the full, unobserved RDS network. In this case, valid inference about homophily-driven effects and/or network-induced correlation structures cannot be conducted without additional network data or strict topological constraints on the RDS network (Yauck et al., 2020b). We proposed alternative modeling strategies for RDS when the network is partially missing. Our results showed that ignoring homophily-driven effects, if present, induces a small to negligible bias in the parameter estimator (of the homophilic covariate) for linear and Poisson models while inducing a substantial bias for the logistic regression when clustering is assumed at the seed level. Furthermore, misspecifying the correlation model induces an increasing bias as the dependence within the RDS network increases, and poor coverage for the model-based confidence intervals. In this case, the neighborhood bootstrap method yields a variance estimator that is less biased than the model-based and the tree bootstrap variance estimators while offering confidence intervals with coverages that are slightly below
or at the nominal level for the linear and the Poisson regressions. We also showed that weighted regression methods outperform unweighted regression methods in terms of bias when the predictor is correlated with degree, assuming that there are no missing covariates in the model. Weighting the model only adds variability to the estimates when the predictor and degree are uncorrelated.
In the case study, we restricted our analyses to the Engage Montreal dataset. This could be extended to the analysis of the data collected in Toronto and Vancouver by pooling across cities. This problem of conducting regression analyses using multi-city/state RDS data can be easily embedded within our inferential framework, where we could assume that city-specific networks are drawn from the same population network. This will be the subject of future work.
\bibliographystyle{jasaauthyear}
\section{Introduction}
Extracting and assessing common features amongst multiple variables is a natural task occurring in many different problem settings. Wyner's common information~\cite{Wyner} provides one answer to this, which was originally defined for finite alphabets as follows
\begin{align} \label{eqn:Wynerdef}
C(X;Y)= \inf_{W: X - W - Y } I(X,Y;W).
\end{align}
For a pair of random variables, it seeks to find the most compact third variable that makes the pair conditionally independent. Compactness is measured in terms of the mutual information between the pair and the third variable.
In~\cite{Wyner}, Wyner also identifies two operational interpretations.
The first concerns a source coding network often referred to as the Gray-Wyner network.
For this scenario, Wyner's common information characterizes the smallest common rate required to enable two decoders to recover $X$ and $Y,$ respectively, in a lossless fashion.
The second operational interpretation pertains to the distributed simulation of common randomness. Here, Wyner's common information characterizes the smallest number of random bits that need to be shared between the processors.
In subsequent work, Wyner's common information was extended to continuous random variables and was computed for a pair of Gaussian random variables \cite{Xu--Liu--Chen,Xu--Liu--Chen-2} and for a pair of additive ``Gaussian channel'' distributions \cite{Yang-Chen14}. Other related works include \cite{Veld--Gastpar, Lapidoth--Wigger}. Wyner's common information has many applications, including to communication networks~\cite{Wyner}, to caching~\cite[Section III.C]{Wang--Lim--Gastpar}, to source coding \cite{Satpathy--Cuff}, and to feature extraction~\cite{SulaG:21entropy}.
In this paper, we derive a new lower bound on Wyner's common information for continuous random variables. The proof is based on a method known as factorization of convex envelopes, which was originally introduced in \cite{Geng--Nair}.
The proof strategy is fundamentally different from the techniques that were used to solve Wyner's original common information problem. Specifically, for the latter, the generic approach is to first characterize the class of variables that enable conditional independence, and then inside this class to find the optimal variable.
By contrast, we lower bound Wyner's common information by a convex problem, which we can then solve explicitly.
We illustrate the promise of the new lower bound by considering Gaussian mixture distributions and Laplace distributions.
We also establish that the new lower bound is tight for a simple case of the so-called ``Gaussian channels'' distribution. Here, $X$ and $Y$ can be written as the sum of a single arbitrary random variable and jointly Gaussian noises.
We note that for this special case, Wyner's common information was previously found, using different methods, in~\cite{Yang-Chen14}.
We use the following notation. Random variables are denoted by uppercase letters $X,Y,Z$ and their realizations by lowercase letters $x,y,z$.
For the cross-covariance matrix of $X$ and $Y$, we use the shorthand notation $K_{XY}$, and for the covariance matrix of a random vector $X$ we use the shorthand notation $K_{X}:=K_{XX}$. Let $p_X(x)$ denote the probability density function of random variable $X$ at realisation $x$. Let $\mathcal{N}(m,\sigma^2)$ be the Gaussian probability density function with mean $m$ and variance $\sigma^2$.
\section{Main Result}
Here we present our lower bound on Wyner's common information. The bound is given in terms of the differential entropy of the pair and the entropy and Wyner's common information of a Gaussian pair with the same covariance. The theorem reads:
\begin{theorem} \label{thm:lowerWyner}
Let $(X,Y)$ have a probability density function $p_{(X,Y)}$ that satisfies the covariance constraint $K_{(X,Y)}$, and let $(X_g,Y_g) \sim \mathcal{N}(0,K_{(X,Y)})$. Then
\begin{align} \label{eqn:mainineq}
C(X;Y) \geq \max \left\{ C(X_g;Y_g) + h(X,Y) - h(X_g,Y_g),\, 0 \right\},
\end{align}
where
\begin{align}
C(X_g;Y_g) =\frac{1}{2} \log{\frac{1+|\rho|}{1-|\rho|}},
\end{align}
and $\rho$ is the correlation coefficient between $X$ and $Y$.
\end{theorem}
The proof is given in Section \ref{sec:proofmainthm}. A similar argument is used for the max-entropy bound, where the probability density functions are subject to covariance constraints. Interestingly, once we plug in Gaussian random variables or additive ``Gaussian channel'' distributions, the bound is attained with equality.
\begin{remark}
In \cite{Wyner}, it is shown that $C(X;Y) \geq I(X;Y)$. In Sections \ref{Sec-mixture-exact}--\ref{sec:Laplace} we show that our lower bound from Theorem \ref{thm:lowerWyner} can be tighter.
\end{remark}
\begin{remark}
The bound of Theorem~\ref{thm:lowerWyner} can be expressed equivalently as
\begin{align} \label{eqn:derivedineq}
C(X;Y)\geq C(X_g;Y_g) - D\left(p_{(X,Y)}\,\middle\|\,p_{(X_g,Y_g)}\right).
\end{align}
\end{remark}
\begin{remark}
The expression inside the maximum in Theorem~\ref{thm:lowerWyner} can be negative (hence the clipping at zero). If we choose $X$ and $Y$ to be independent, then $X_g$ and $Y_g$ are independent as well. Thus, the bound in (\ref{eqn:derivedineq}) becomes
\begin{align}
C(X;Y)\geq -D\left(p_{X}\,\middle\|\,p_{X_g}\right) - D\left(p_{Y}\,\middle\|\,p_{Y_g}\right),
\end{align}
which is non-positive by the non-negativity of the Kullback-Leibler divergence.
\end{remark}
In the following sections, we consider specific pairs of random variables and evaluate the derived lower bound on Wyner's common information to verify its usefulness.
\section{Additive ``Gaussian Channel'' Distributions}\label{Sec-mixture-exact}
In this section, we consider the distributions that are described as follows.
Let $(\hat{X}, \hat{Y})$ be jointly Gaussian with mean zero and covariance matrix
\begin{align}
K_{(\hat{X}, \hat{Y})}=
\begin{pmatrix}
1 & \hat{\rho}\\
\hat{\rho} & 1
\end{pmatrix}.
\end{align}
Then, we consider the two-dimensional source given by
\begin{align} \label{eqn:mixturerv}
\begin{pmatrix}
X\\
Y
\end{pmatrix} &=
\begin{pmatrix}
\hat{X}\\
\hat{Y}
\end{pmatrix} +
\begin{pmatrix}
A\\
B
\end{pmatrix}.
\end{align}
Let $(A,B)$ be arbitrary random variables with mean zero and covariance
\begin{align} \label{eqn:covAB}
K_{(A,B)}=
\begin{pmatrix}
\sigma_A^2 & r \sigma_A \sigma_B\\
r \sigma_A \sigma_B & \sigma_B^2
\end{pmatrix},
\end{align}
where $\sigma_A=\sigma_B$ and $(A,B)$ is independent of the pair $(\hat{X},\hat{Y})$. For this particular distribution, we evaluate the lower bound of Theorem \ref{thm:lowerWyner} and also provide an upper bound.
\subsection{Lower Bound}
We have that ${\mathbb E}[X]={\mathbb E}[Y]=0$ and
\begin{align} \label{eqn:K_XY_comp}
{\mathbb E}[X^2] &= {\mathbb E}[\hat{X}^2]+ {\mathbb E}[A^2] =1+\sigma_A^2, \\
{\mathbb E}[XY] &= {\mathbb E}[\hat{X} \hat{Y}]+{\mathbb E}[AB] =\hat{\rho}+r \sigma_A^2.
\end{align}
By symmetry ${\mathbb E}[Y^2]={\mathbb E}[X^2]$ and
\begin{align} \label{eqn:rho_XY_comp}
\rho=\frac{{\mathbb E}[XY] }{\sqrt{{\mathbb E}[X^2]{\mathbb E}[Y^2]}} =\frac{\hat{\rho}+r \sigma_A^2}{1+\sigma_A^2}.
\end{align}
Therefore, the formula given in Theorem~\ref{thm:lowerWyner} evaluates to
\begin{align}
C(X;Y) &\geq C(X_g;Y_g) +h(X,Y)-h(X_g,Y_g) \\
&= \frac{1}{2}\log{\frac{1+\rho}{1-\rho}} + h(X,Y) \nonumber \\
& \hspace{1.2em} -\frac{1}{2}\log{(2 \pi e)^2 \left( (1+\sigma_A^2)^2 -(\hat{\rho}+r \sigma_A^2)^2 \right)} \label{eqn:expl_lowercomp}\\
&= h(X,Y) -\log{ \left( 2 \pi e \left( 1-\hat{\rho}+(1-r)\sigma_A^2 \right) \right)}. \label{eqn:LBAG}
\end{align}
where (\ref{eqn:expl_lowercomp}) follows from substituting for $K_{(X,Y)}$ and (\ref{eqn:LBAG}) follows from substituting for $\rho$ computed in (\ref{eqn:rho_XY_comp}).
\subsection{Upper Bound} \label{sec:UpperBound}
Next we give an upper bound on Wyner's common information for the example of this section.
To accomplish this, rewrite the pair $(\hat{X},\hat{Y})$ as
\begin{align} \label{eqn:gausssplit}
\hat{X} &= \sqrt{\hat{\rho}} V + Z_x, \nonumber \\
\hat{Y} &= \sqrt{\hat{\rho}} V + Z_y,
\end{align}
where $V,Z_x,Z_y$ are mutually independent, $V \sim \mathcal{N}(0,1)$ and $Z_x,Z_y \sim \mathcal{N}(0,1-\hat{\rho})$.
Then, a valid choice to make $X$ and $Y$ conditionally independent given $W$ is $W =(\sqrt{\hat{\rho}} V + A,\sqrt{\hat{\rho}} V + B)$.
By combining (\ref{eqn:mixturerv}) and (\ref{eqn:gausssplit}) we can rewrite the pair $(X,Y)$ as
\begin{align}
X=&\sqrt{\hat{\rho}} V + A+Z_x, \nonumber \\
Y=&\sqrt{\hat{\rho}} V + B+Z_y,
\end{align}
where $W$ is independent of $Z_x$ and $Z_y$.
So we have
\begin{align}
&I(X;Y|W) \nonumber \\
&=I(\sqrt{\hat{\rho}} V+A + Z_x;\sqrt{\hat{\rho}} V+B + Z_y| W) \\
&=I(Z_x;Z_y|W) \label{eqn:exp1lb}\\
&=I(Z_x;Z_y) \label{eqn:exp2lb}\\
&=0,\label{eqn:exp3lb}
\end{align}
where (\ref{eqn:exp1lb}) follows by subtracting the parts that are in the conditioning by recalling that $W=(\sqrt{\hat{\rho}} V+A,\sqrt{\hat{\rho}} V+B)$, (\ref{eqn:exp2lb}) follows from independence of $W$ and $(Z_x,Z_y)$ and (\ref{eqn:exp3lb}) follows from the independence of $Z_x$ and $Z_y$.
Thus, the upper bound is
\begin{align}
&C(X;Y) \nonumber \\
&\le I(X,Y; W) \label{eqn:upexp1}\\
&= h(X,Y) - h(\sqrt{\hat{\rho}} V+A+Z_x,\sqrt{\hat{\rho}} V+B+Z_y|W) \label{eqn:upexp2}\\
&= h(X,Y) - h(Z_x,Z_y|W) \label{eqn:upexp3}\\
&= h(X,Y) - h(Z_x,Z_y) \label{eqn:upexp4}\\
&= h(X,Y) -\log{ \left( 2 \pi e (1-\hat{\rho}) \right) }. \label{eqn:UBAG}
\end{align}
where (\ref{eqn:upexp1}) follows from the definition of $C(X;Y)$ where $W$ satisfies $X-W-Y$, (\ref{eqn:upexp2}) follows by rewriting the mutual information, (\ref{eqn:upexp3}) follows from subtracting the parts that are in the conditioning and (\ref{eqn:upexp4}) follows from independence of $W$ and $(Z_x,Z_y)$.
\subsection{Example 1}
\begin{lemma} \label{lem:Gaussaddchannel}
For the additive ``Gaussian channel'' distributions described in (\ref{eqn:mixturerv}) with $A=B$, we have
\begin{align}
C(X;Y) &= h(X,Y) -\log{ \left( 2 \pi e (1-\hat{\rho}) \right)}.
\end{align}
\end{lemma}
The proof follows from the fact that the lower bound (\ref{eqn:LBAG}) and upper bound (\ref{eqn:UBAG}) coincide when $A=B$, which means $r=1$. The same result is derived by a different approach in \cite{Yang-Chen14}.
To illustrate Lemma \ref{lem:Gaussaddchannel}, let $A$ take the values $\pm \sigma_A$ with equal probability. The result is shown in Figure \ref{fig:Gaussaddchannel1}.
\begin{figure}[h!]
\centering
\scalebox{0.85}{\input{Exact_WCI_GM.tex}}
\vspace{-1em}
\caption{The o-line is the exact Wyner's common information $C(X;Y)$ for the specified Gaussian mixture distribution. The dashed line is the mutual information $I(X;Y)$. In this setup we plot $C(X;Y)$ and $I(X;Y)$ in nats versus $\sigma_A$ for $\hat{\rho}=0.5$.} \label{fig:Gaussaddchannel1}
\end{figure}
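As a rough numerical cross-check of Lemma \ref{lem:Gaussaddchannel} (our own sketch, not the computation behind the figure), one can evaluate $h(X,Y)$ of the resulting two-component Gaussian mixture on a truncated grid and plug it into the closed form:
\begin{verbatim}
rho_h <- 0.5; sA <- 1; det_K <- 1 - rho_h^2
dens1 <- function(x, y)   # N((sA,sA), K_hat) density
  exp(-((x-sA)^2 - 2*rho_h*(x-sA)*(y-sA) + (y-sA)^2) / (2*det_K)) /
    (2*pi*sqrt(det_K))
p <- function(x, y) 0.5*dens1(x, y) + 0.5*dens1(-x, -y)  # mixture
g <- seq(-8, 8, length.out = 601); d <- g[2] - g[1]
P <- outer(g, g, p)
h <- -sum(P * log(P)) * d^2                # h(X,Y) in nats
C <- h - log(2*pi*exp(1)*(1 - rho_h))      # Wyner's common information
\end{verbatim}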
\subsection{Example 2}
Another example is to choose $(A,B)$ to follow the doubly symmetric binary distribution with $p_{(A,B)}(A=B=\sigma_A)=p_{(A,B)}(A=B=-\sigma_A)=\frac{1+r}{4}$ and $p_{(A,B)}(A=-B=\sigma_A)=p_{(A,B)}(A=-B=-\sigma_A)=\frac{1-r}{4}$.
Note that for these choices, the covariance matrix of $A$ and $B$ is given by Equation (\ref{eqn:covAB}). If we select $A=B$, i.e., $r=1$, this model is precisely the model studied in Example 1. A numerical evaluation is shown in Figure \ref{fig:Gaussaddchannel2}.
\begin{figure}[h!]
\centering
\scalebox{0.85}{\input{WCI_GM_LB.tex}}
\vspace{-1em}
\caption{The $*$-line is the lower bound on $C(X;Y)$ from Theorem \ref{thm:lowerWyner} and the $\diamond$-line is the upper bound on $C(X;Y)$ from Section \ref{sec:UpperBound}. The dashed line is the mutual information $I(X;Y)$. In this setup we plot the bounds on $C(X;Y)$ in nats versus $\sigma_A$ for $\hat{\rho}=0.5$ and $r=0.9$.} \label{fig:Gaussaddchannel2}
\end{figure}
\section{Laplace Distributions} \label{sec:Laplace}
In this section, we consider the case when $(X,Y)$ is distributed according to the bivariate Laplace distribution described in \cite[Section~5.1.3]{Kotz01} by
\begin{align}
p_{(X,Y)}(x,y)=\frac{1}{\pi \sqrt{1-\rho_{\ell}^2}} K_0 \left( \sqrt{\frac{2(x^2-2 \rho_{\ell}xy+y^2)}{1-\rho_{\ell}^2}} \right),
\end{align}
where $K_0$ is the modified Bessel function of the second kind described by
\begin{align}
K_0(z)=\frac{1}{2}\int_{-\infty}^{\infty} \frac{e^{i z t}}{\sqrt{t^2+1}} dt.
\end{align}
The variances of $X$ and $Y$ are unity and the correlation coefficient is $\rho_{\ell}$.
Define the entropy power of $(X,Y)$ as
\begin{eqnarray}
N(X,Y) &= \frac{1}{2\pi e} \exp(h(X,Y)).
\end{eqnarray}
Then, the bound of Theorem~\ref{thm:lowerWyner} can be expressed as
\begin{align}
C(X;Y) &\ge \log \frac{N(X,Y)}{1-\rho_{\ell}}.
\end{align}
Computation of the joint entropy $h(X,Y)$ as well as the mutual information $I(X;Y)$
leads to the curves in Figure~\ref{fig:Laplace}, further illustrating the potential of the new bound.
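A rough numerical sketch of this computation (truncated-grid quadrature; our own illustration, not the procedure behind the figure) is:
\begin{verbatim}
rho <- 0.5                # plays the role of rho_ell
f <- function(x, y) {     # bivariate Laplace density via besselK
  arg <- sqrt(2*(x^2 - 2*rho*x*y + y^2) / (1 - rho^2))
  besselK(arg, nu = 0) / (pi * sqrt(1 - rho^2))
}
g <- seq(-10, 10, length.out = 801); d <- g[2] - g[1]
P <- outer(g, g, f)
P[!is.finite(P)] <- 0                          # K_0 diverges (integrably) at 0
h  <- -sum(ifelse(P > 0, P*log(P), 0)) * d^2   # h(X,Y) in nats
lb <- h - log(2*pi*exp(1)) - log(1 - rho)      # the bound of Theorem 1
\end{verbatim}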
\begin{figure}[h!]
\centering
\scalebox{0.85}{\input{WCI_Lap_LB.tex}}
\vspace{-1em}
\caption{The $*$-line is the lower bound on $C(X;Y)$ from Theorem \ref{thm:lowerWyner} and the dashed line is the mutual information $I(X;Y)$ for the described Laplace distribution. In this setup we plot the bounds on $C(X;Y)$ in nats versus $\rho_{\ell}$.} \label{fig:Laplace}
\end{figure}
\section{Proof of Theorem \ref{thm:lowerWyner}} \label{sec:proofmainthm}
\subsection{Preliminary}
\begin{theorem}[Theorem~2 in \cite{Hyper_Gauss}] \label{Thm:Hypercontract}
For $K \succeq 0$ and $0 < \lambda < 1$, there exist $0\preceq K^{\prime} \preceq K$ and $(X^{\prime},Y^{\prime})\sim \mathcal{N}(0,K^{\prime})$ such that, for all $(X, Y)$ with distribution $p_{(X,Y)}$ satisfying the covariance constraint $K$, the following inequality holds
\begin{align}
\inf_W h(Y|W)+ &h(X|W) - (1+\lambda) h(X, Y|W) \nonumber \\
&\geq h(Y^{\prime})+ h(X^{\prime}) -(1+\lambda)h(X^{\prime}, Y^{\prime}) . \label{Eq-Thm:Hypercontract}
\end{align}
\end{theorem}
\begin{proof}
The theorem is a consequence of \cite[Theorem~2]{Hyper_Gauss}, for a specific choice of $p=\frac{1}{\lambda}+1$. The proof regarding the existence of the infimum that is missing in \cite{Hyper_Gauss} is given in \cite{SulaG:19it}.
\end{proof}
Before we go into the details, it is important to realise that $\inf_W h(Y|W)+ h(X|W) - (1+\lambda) h(X, Y|W)$ is indeed the lower convex envelope of $h(Y)+ h(X) - (1+\lambda) h(X, Y)$, with $W$ playing the role of a time-sharing random variable. In other words, for a covariance constraint on the pair $(X,Y)$, the envelope satisfies the following
\begin{align}
&\inf_{(X,Y)} \inf_W h(Y|W)+ h(X|W) - (1+\lambda) h(X, Y|W) \nonumber \\
&\quad \quad = \inf_{(X,Y)} h(Y)+ h(X) - (1+\lambda) h(X, Y).
\end{align}
The next lemma, which solves an optimization problem over constrained covariance matrices of Gaussian random variables, is needed for the proof of the theorems.
\begin{lemma} \label{lem:lemmacontractivity}
For $(X^{\prime},Y^{\prime})\sim \mathcal{N}(0,K^{\prime})$, the following inequality holds
\begin{align}
&\min_{K^{\prime}: 0 \preceq K^{\prime} \preceq \begin{pmatrix} 1 & \rho \\ \rho &1 \end{pmatrix}} h(X^{\prime})+h(Y^{\prime})-(1+\lambda)h(X^{\prime},Y^{\prime}) \nonumber \\
& \quad \quad \geq \frac{1}{2} \log{\frac{1}{1-\lambda^2}}-\frac{\lambda}{2} \log{(2\pi e)^2\frac{(1-\rho)^2(1+\lambda)}{1-\lambda}},
\end{align}
where $\lambda \leq \rho$.
\end{lemma}
\begin{proof}
The proof outline is given in Appendix \ref{App:lowerboundWCI}. For the full proof, refer to \cite{SulaG:19it}.
\end{proof}
\subsection{Lower bound on (relaxed) Wyner's common information}
Here, we consider a slightly more general problem; that is, we give a lower bound on the relaxed Wyner's common information in Theorem \ref{thm:lowerWynerelaxed}. Theorem \ref{thm:lowerWyner} then follows as a special case.
Let us define the relaxed Wyner's common information as in \cite{GastparS:19itw,SulaG:19it}.
For jointly continuous random variables $X$ and $Y$ with joint distribution $p(x,y),$ we define
\begin{align}
C_{\gamma} (X; Y) &= \inf_{W:I(X;Y|W) \le \gamma} I(X,Y ; W), \label{Eq-def-Wyner-relaxed}
\end{align}
where the constraint of conditional independence is relaxed into an upper bound on the conditional mutual information. For $\gamma=0,$ we have $C_0(X;Y) = C(X;Y),$ the standard Wyner's common information.
A lower bound on relaxed Wyner's common information is given in the following theorem.
\begin{theorem} \label{thm:lowerWynerelaxed}
Let $(X,Y)$ have a probability density function $p_{(X,Y)}$ that satisfies the covariance constraint $K_{(X,Y)}$, and let $(X_g,Y_g) \sim \mathcal{N}(0,K_{(X,Y)})$. Then
\begin{align}
C_{\gamma}(X;Y)\geq \max \{ C_{\gamma}(X_g;Y_g) +h(X,Y)-h(X_g,Y_g),0 \},
\end{align}
where
\begin{align}
C_{\gamma}(X_g;Y_g) &= \frac{1}{2} \log^+ \left( \frac{1 + |\rho|}{1-|\rho|} \cdot \frac{1 - \sqrt{1-e^{-2\gamma}}}{1 + \sqrt{1-e^{-2\gamma}}} \right),
\end{align}
and $\rho$ is the correlation coefficient between $X$ and $Y$.
\end{theorem}
\begin{proof}
Note that the means of the random variables affect neither Wyner's common information nor its relaxed variant; thus, we assume mean zero for both $X$ and $Y$. Moreover, the relaxed Wyner's common information is invariant to scaling of $X$ and $Y$.
Thus, without loss of generality we assume $X$ and $Y$ to have mean zero, unit variance and correlation coefficient $\rho$, and we proceed as follows
\begin{align}
&C_{\gamma}(X;Y) \nonumber \\
&=\inf_{W:I(X;Y|W) \leq \gamma} I(X,Y;W) \label{eqn:defWynerCI} \\
& \geq \inf_{W} (1+\mu)I(X,Y;W)-\mu I(X;W) -\mu I(Y;W) \nonumber \\
& \quad \quad +\mu I(X;Y) - \mu \gamma \label{eqn:alllambda} \\
&= \mu \inf_{W}h(X|W)+h(Y|W) -(1+\frac{1}{\mu})h(X,Y|W) \nonumber \\
& \quad \quad +h(X,Y) -\mu \gamma \label{eqn:rewritealllambda} \\
&\geq \mu \hspace{-2em} \min_{K^{\prime}: 0 \preceq K^{\prime} \preceq \begin{pmatrix} 1 & \rho \\ \rho &1 \end{pmatrix}} h(X^{\prime})+h(Y^{\prime})-(1+\frac{1}{\mu})h(X^{\prime},Y^{\prime}) \nonumber \\
& \quad \quad +h(X,Y) -\mu \gamma \label{eqn:thm2sim} \\
& \geq h(X,Y) + \frac{\mu}{2} \log{\frac{\mu^2}{\mu^2-1}} \nonumber \\
& \quad \quad -\frac{1}{2} \log{(2\pi e)^2\frac{(1-\rho)^2(\mu+1)}{\mu-1}} -\mu \gamma \label{eqn:lastexp} \\
& \geq h(X,Y) -h(X_g,Y_g) +C(X_g;Y_g) \label{eqn:lastexp2}
\end{align}
where (\ref{eqn:alllambda}) follows from weak duality and the bound is valid for all $\mu \geq 0$; (\ref{eqn:rewritealllambda}) follows from simplification; (\ref{eqn:thm2sim}) follows from Theorem \ref{Thm:Hypercontract} under the assumption that $\mu > 1$ where $(X^{\prime},Y^{\prime}) \sim \mathcal{N}(0,K^{\prime})$;
(\ref{eqn:lastexp}) follows from Lemma \ref{lem:lemmacontractivity} under the assumption $\mu \geq \frac{1}{\rho}$ and (\ref{eqn:lastexp2}) follows by maximizing the function
\begin{align}
g(\mu)&=h(X,Y) -\mu \gamma + \frac{\mu}{2} \log{\frac{\mu^2}{\mu^2-1}} \nonumber \\
& \quad \quad -\frac{1}{2} \log{(2\pi e)^2\frac{(1-\rho)^2(\mu+1)}{\mu-1}},
\end{align}
for $\mu \geq \frac{1}{\rho}$. Now we need to solve $\max_{\mu \geq \frac{1}{\rho}} g(\mu).$
The function $g$ is concave in $\mu$,
\begin{align}
\frac{\partial^2 g}{\partial \mu^2}&=-\frac{1}{\mu(\mu^2-1)} < 0,
\end{align}
and by studying the monotonicity we obtain
\begin{align}
\frac{\partial g}{\partial \mu}&=-\frac{1}{2}\log{\frac{\mu^2-1}{\mu^2}} -\gamma.
\end{align}
Since the function is concave, the maximum is attained when the first derivative vanishes. That leads to the optimal solution $\mu_*=\frac{1}{\sqrt{1-e^{-2\gamma}}}$, where $\mu_*$ has to satisfy $\mu_* \geq \frac{1}{\rho}$. Substituting for the optimal solution we get
\begin{align}
C_{\gamma}(X;Y) &\geq g \left( \frac{1}{\sqrt{1-e^{-2\gamma}}} \right) \\
&=h(X,Y) -h(X_g,Y_g)+C_{\gamma}(X_g;Y_g).
\end{align}
\end{proof}
\section{Vector Wyner's common information}
It is well-known that for $n$ independent pairs of random variables, we have
\begin{align}
C (X^n; Y^n) = \sum_{i=1}^n C(X_i; Y_i). \label{Eqn-thm:gensplit}
\end{align}
For the proof see \cite[Lemma~2]{SulaG:19it} by letting $\gamma=0$.
By making use of Theorem \ref{thm:lowerWyner} and (\ref{Eqn-thm:gensplit}), we can lower bound Wyner's common information for $n$ independent pairs of random variables as
\begin{align}
C (X^n; Y^n) \geq \sum_{i=1}^n \left[ C(X_{g_i};Y_{g_i}) + h(X_i,Y_i) - h(X_{g_i},Y_{g_i}) \right].
\end{align}
An interesting open problem is to find a bound for arbitrary $(X^n,Y^n)$, allowing any dependence between $X^n$ and $Y^n$.
This is not studied here and is left for future investigation.
\appendices
\section{Proof Outline of Lemma \ref{lem:lemmacontractivity}} \label{App:lowerboundWCI}
Let us parametrize $K^{\prime}$ as $K^{\prime} = \begin{pmatrix} \sigma^2_X & q\sigma_X \sigma_Y \\ q\sigma_X \sigma_Y & \sigma^2_Y \end{pmatrix} \succeq 0$. By substituting we obtain
\begin{align}
&\min_{K^{\prime}: 0 \preceq K^{\prime} \preceq \begin{pmatrix} 1 & \rho \\ \rho &1 \end{pmatrix}} h(X^{\prime})+h(Y^{\prime})-(1+\lambda)h(X^{\prime},Y^{\prime}) \nonumber \\
& \quad \quad \quad = \min_{(\sigma_X,\sigma_Y,q) \in \mathcal{A}_{\rho}} \frac{1}{2}\log{(2 \pi e)^2 \sigma_X^2\sigma_Y^2} \\
& \quad \quad \quad -\frac{1+\lambda}{2}\log{(2 \pi e)^2 \sigma_X^2\sigma_Y^2(1-q^2)} \label{eqn:mlproof1}
\end{align}
where the set $\mathcal{A}_{\rho}$ is
\begin{align}
\mathcal{A}_{\rho}=\left\{( \sigma_X,\sigma_Y,q): \begin{pmatrix} \sigma^2_X -1& q\sigma_X \sigma_Y -\rho\\ q\sigma_X \sigma_Y-\rho & \sigma^2_Y -1 \end{pmatrix} \preceq 0 \right\}.
\end{align}
Another way of rewriting $\mathcal{A}_{\rho}$ is
\begin{align}
\mathcal{A}_{\rho}=\left\{(\sigma_X,\sigma_Y,q): \hspace{-1.5em} \substack{\sigma^2_X+\sigma^2_Y \leq 2, \\ \quad (1-q^2)\sigma^2_X\sigma^2_Y +2\rho q \sigma_X\sigma_Y +1-\rho^2-(\sigma^2_X+\sigma^2_Y) \geq 0} \right\}.
\end{align}
Let us define
\begin{align}
\mathcal{B}_{\rho}=\left\{ (\sigma_X,\sigma_Y,q): \hspace{-1.5em} \substack{\sigma_X\sigma_Y \leq 1, \\ \quad (1-q^2)\sigma^2_X\sigma^2_Y +2\rho q \sigma_X\sigma_Y +1-\rho^2-2\sigma_X\sigma_Y \geq 0} \right\},
\end{align}
and the inequality $\sigma^2_X+\sigma^2_Y \geq 2\sigma_X\sigma_Y$ implies that $\mathcal{A}_{\rho} \subseteq \mathcal{B}_{\rho}$.
By reparametrizing $\sigma^2=\sigma_X\sigma_Y$, the set $\mathcal{B}_{\rho}$ becomes
\begin{align}
\mathcal{D}_{\rho}=\left\{( \sigma^2,q): \substack{\sigma^2 \leq 1, \\ (\sigma^2(1-q)-1+\rho)(\sigma^2(1+q)-1-\rho) \geq 0} \right\}.
\end{align}
The set $\mathcal{D}_{\rho}$ is rewritten as
\begin{align}
\mathcal{D}_{\rho}=\left\{( \sigma^2,q): \substack{\text{for } \rho \geq q, \quad \sigma^2(1-q) \leq 1-\rho \\ \text{for } \rho < q, \quad \sigma^2(1+q) \leq 1+\rho } \label{eqn:D} \right\}.
\end{align}
Thus, we have
\begin{align}
&\min_{(\sigma_X,\sigma_Y,q) \in \mathcal{A}_{\rho}} \frac{1}{2}\log{(2 \pi e)^2 \sigma_X^2\sigma_Y^2} \\
& -\frac{1+\lambda}{2}\log{(2 \pi e)^2 \sigma_X^2\sigma_Y^2(1-q^2)} \geq \min_{(\sigma^2,q) \in \mathcal{D}_{\rho}} f(\lambda,\sigma^2,q)
\label{eqn:mlproof2}
\end{align}
where,
\begin{align}f(\lambda,\sigma^2,q)&=\frac{1}{2}\log{(2 \pi e)^2 \sigma^4}-\frac{1+\lambda}{2}\log{(2 \pi e)^2 \sigma^4(1-q^2)} \label{eqn:f}.
\end{align}
\begin{itemize}
\item Let us first consider the case $\rho \geq q$, assuming that $\rho$ is positive. Then, by weak duality we have
\begin{align}
&\min\limits_{(\sigma^2,q) \in \mathcal{D}_{\rho}} f(\lambda,\sigma^2,q) \nonumber \\
& \quad \quad \geq \min\limits_{\sigma^2,q} f(\lambda,\sigma^2,q) + \mu(\sigma^2(1-q)-1+\rho), \label{eqn:mlproof3}
\end{align}
for any $\mu \geq 0$.
By applying Karush-Kuhn-Tucker (KKT) conditions on (\ref{eqn:mlproof3}) we get
\begin{align}
\frac{\partial }{\partial \sigma^2}=-\frac{\lambda}{\sigma^2} + \mu (1-q)&=0, \label{eqn:KKT1} \\
\frac{\partial }{\partial q}=\frac{(1+\lambda)q}{1-q^2} - \mu \sigma^2&=0, \label{eqn:KKT2} \\
\mu(\sigma^2(1-q)-1+\rho)&=0. \label{eqn:KKT3}
\end{align}
The optimal solutions to satisfy the KKT conditions are
\begin{align}
q_*=\lambda, \quad \mu_*=\frac{\lambda}{1-\rho}, \quad \sigma^2_*=\frac{1-\rho}{1-\lambda}.
\end{align}
Since the KKT conditions are satisfied by $q_*$, $\sigma^2_*$, and $\mu_*$, strong duality holds, and thus
\begin{align}
&\min\limits_{(\sigma^2,q) \in \mathcal{D}_{\rho}} f(\lambda,\sigma^2,q) \label{eqn:mlproof3star}\\
&=\max_{\mu} \min\limits_{\sigma^2,q} f(\lambda,\sigma^2,q) + \mu(\sigma^2(1-q)-1+\rho) \\
&= f(\lambda,\frac{1-\rho}{1-\lambda},\lambda) \\
&= \frac{1}{2} \log{\frac{1}{1-\lambda^2}}-\frac{\lambda}{2} \log{(2\pi e)^2\frac{(1-\rho)^2(1+\lambda)}{1-\lambda}}. \label{eqn:mlprooffinal}
\end{align}
By combining (\ref{eqn:mlproof1}), (\ref{eqn:mlproof2}), (\ref{eqn:mlproof3star}) and (\ref{eqn:mlprooffinal}) we get the desired lower bound; a small numerical check of this minimizer is sketched after this outline.
\item For the case $\rho < q$ we omit the details due to lack of space. The optimal solutions are
\begin{align}
q_*=\rho, \quad \sigma^2_*=\frac{1+\rho}{1+q_*}=1.
\end{align}
To conclude, we show that $f(\lambda,\frac{1-\rho}{1-\lambda},\lambda) \leq f(\lambda,1,\rho)$ for $\lambda \leq \rho$. The argument also goes through for the case when $\rho$ is negative, which completes the proof.
\end{itemize}
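The following minimal Python sketch (an illustration, not part of the argument) checks the first case numerically: for the sample values $\lambda = 0.3$ and $\rho = 0.6$ (assumptions with $\lambda \leq \rho$), the grid minimum of $f$ over $\mathcal{D}_{\rho}$ matches the value of $f$ at the claimed optimum $(\sigma^2_*, q_*) = \big(\frac{1-\rho}{1-\lambda}, \lambda\big)$.
\begin{verbatim}
import numpy as np

# Grid check of the claimed minimizer of f over D_rho (case rho >= q).
# lam and rho are sample values (assumptions), with 0 < lam <= rho < 1.
lam, rho = 0.3, 0.6
c = np.log((2*np.pi*np.e)**2)   # the (2*pi*e)^2 constant appearing in f

def f(s2, q):  # f(lambda, sigma^2, q) from (eqn:f), with s2 denoting sigma^2
    return (0.5*(c + 2*np.log(s2))
            - 0.5*(1 + lam)*(c + 2*np.log(s2) + np.log(1 - q**2)))

s2, q = np.meshgrid(np.linspace(1e-3, 1.0, 1500),
                    np.linspace(-0.999, 0.999, 1500))
feas = np.where(q <= rho, s2*(1 - q) <= 1 - rho,
                s2*(1 + q) <= 1 + rho)          # membership in D_rho
grid_min = f(s2, q)[feas].min()
kkt_val = f((1 - rho)/(1 - lam), lam)           # value at (sigma^2_*, q_*)
print(grid_min, kkt_val)  # grid minimum matches kkt_val to grid accuracy
\end{verbatim}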
\section*{Acknowledgment}
This work was supported in part by the Swiss National Science Foundation under Grant 169294.
\bibliographystyle{IEEEtran}
\section{Introduction}
The fundamental combinatorial problem of graph coloring is as
ancient as the cartographer's task of coloring a map without using the
same color on neighboring regions. In the context of general graphs,
we say that an assignment of a color to every vertex is a \emph{proper
coloring}\/ if no two adjacent vertices receive the same color, and
we say that a graph is \emph{$q$-colorable}\/ if it has a proper coloring
using at most $q$ different colors.
The problem of counting the number $P_G(q)$ of $q$-colorings of a
given graph $G$ has been the focus of much research over the past
century. Although it is already NP-hard even to determine whether
this number is nonzero, the function $P_G(q)$ itself has very
interesting properties. $P_G(q)$ was first introduced by Birkhoff
\cite{Birkhoff-1912}, who proved that it is always a polynomial in
$q$. It is now called the \emph{chromatic polynomial}\/ of $G$.
Although $P_G(q)$ has been studied for its own sake (e.g., Whitney
\cite{Whitney} expressed its coefficients in terms of graph theoretic
parameters), perhaps more interestingly there is a long history of
diverse applications which has led researchers to minimize or maximize
$P_G(q)$ over various families of graphs. In fact, Birkhoff's
original motivation for investigating the chromatic polynomial was to
use it to attack the famous four-color theorem. Indeed, one way to
show that every planar graph is 4-colorable is to minimize $P_G(4)$
over all planar $G$, and show that the minimum is nonzero. In this
direction Birkhoff \cite{Birkhoff-1930} proved the tight lower bound
$P_G(q) \geq q(q-1)(q-2)(q-3)^{n-3}$ for all $n$-vertex planar graphs
$G$ when $q \geq 5$, later conjecturing with Lewis in
\cite{Birkhoff-Lewis-1946} that it extended to $q=4$ as well.
Linial \cite{Linial} arrived at the problem of minimizing the
chromatic polynomial from a completely different motivation. The
worst-case computational complexity of determining whether a
particular function $f : V(G) \rightarrow \mathbb{R}$ is a proper
coloring (i.e., satisfies $f(x) \neq f(y)$ for every pair of adjacent
vertices $x$ and $y$) is closely related to the number of \emph{acyclic
orientations}\/ of a graph, which equals $|P_G(-1)|$, obtained by
substituting $q = -1$ into the formal polynomial expression of
$P_G(q)$.
Lower bounding the worst-case complexity therefore corresponds to
minimizing $|P_G(-1)|$ over the family $\mathcal{F}_{n,m}$ of graphs
with $n$ vertices and $m$ edges. Linial showed that that
surprisingly, for any $n,m$ there is a graph which
\emph{simultaneously}\/ minimizes each $|P_G(q)|$ over
$\mathcal{F}_{n,m}$, for \emph{every}\/ integer $q$. This graph is
simply a clique $K_k$ with an additional vertex adjacent to $l$
vertices of the $K_k$, plus $n-k-1$ isolated vertices, where $k,l$ are
the unique integers satisfying $m = {k \choose 2} + l$ with $k > l
\geq 0$. At the end of his paper, Linial posed the problem of
maximizing $P_G(q)$ over all graphs in $\mathcal{F}_{n,m}$.
Around the same time, Wilf arrived at exactly that maximization
problem while analyzing the \emph{backtrack}\/ algorithm for finding a
proper $q$-coloring of a graph (see \cite{Bender-Wilf, Wilf}).
Although this generated much interest in the problem, it was only
solved in sporadic cases. The special case $q=2$ was completely
solved for all $m,n$, by Lazebnik in \cite{Lazebnik-q23}. For $q \geq
3$, the only nontrivial pairs $m,n$ for which extremal graphs were known
corresponded to the number of vertices and edges in the Tur\'an graph
$T_r(n)$, which is the complete $r$-partite graph on $n$ vertices with
all parts of size either $\lfloor n/r \rfloor$ or $\lceil n/r \rceil$.
In this vein, Lazebnik \cite{Lazebnik-largeq} proved that $T_r(n)$ is
optimal for very large $q = \Omega(n^6)$, and proved with Pikhurko and Woldar
\cite{LPW} that $T_2(2k)$ is optimal when $q=3$ and asymptotically
optimal when $q=4$.
Outside these isolated cases, very little was known for general $m,n$.
Although many upper and lower bounds for $P_G(q)$ were proved by
various researchers \cite{Byer, Lazebnik-q23, Lazebnik-bounds, Liu},
these bounds were widely separated. Even the $q=3$ case resisted
solution: twenty years ago, Lazebnik \cite{Lazebnik-q23} conjectured
that when $m \leq n^2/4$, the $n$-vertex graphs with $m$ edges which
maximized the number of 3-colorings were complete bipartite graphs
minus the edges of a star, plus isolated vertices. Only very
recently, Simonelli \cite{Simonelli} managed to make some progress on
this conjecture, verifying it under the additional very strong
assumption that all optimal graphs are already bipartite.
Perhaps part of the difficulty for general $m,n,q$ stems from the fact
that the maximal graphs are substantially more complicated than the
minimal graphs that Linial found. For number-theoretic reasons, it is
essentially impossible to explicitly construct maximal graphs for
general $m,n$. Furthermore, even their coarse structure depends on
the density $\frac{m}{n^2}$. For example, when $\frac{m}{n^2}$ is
small, the maximal graphs are roughly complete bipartite graphs, but
after $\frac{m}{n^2} > \frac{1}{4}$, the maximal graphs become
tripartite. At the most extreme density, when $m,n$ correspond to the
Tur\'an graph $T_q(n)$, the unique maximal graph is obviously the
complete $q$-partite graph. Therefore, in order to tackle the general
case of this problem, one must devise a unified approach that can
handle all of the outcomes.
In this paper, we propose such an approach, developing the machinery
that one might be able to use to determine the maximal graphs in many
nontrivial ranges of $m,n$. Our methodology can be roughly outlined
as follows. We show, via Szemer\'edi's Regularity Lemma, that the
asymptotic solution to the problem reduces to a certain
quadratically-constrained linear program in $2^q - 1$ variables. For any
given $q$, this task can in principle be automated by a computer code
that symbolically solves the optimization problem, although a more
sophisticated approach was required to solve this for all $q$. Our
solutions to the optimization problem then give us the approximate
structure of the maximal graphs. Finally, we use various local
arguments, such as the so-called ``stability'' approach introduced by
Simonovits \cite{Simonovits}, to refine their structure into precise
results.
We successfully applied our machinery to solve the Linial-Wilf problem
for many nontrivial ranges of $m,n$, and $q \geq 3$. In particular,
for $q=3$, our results confirm a stronger form of Lazebnik's
conjecture when $m$ is large. In addition, for each $q \geq 4$ we
show that for all densities $\frac{m}{n^2}$ up to approximately
$\frac{1}{q \log q}$, the extremal graphs are also complete bipartite
graphs minus a star. In order to state our results precisely, we need
the following definition.
\begin{definition}
\label{def:semi-complete}
Let $a \leq b$ be positive integers. We say that $G$ is a
\textbf{semi-complete subgraph of $\boldsymbol{K_{a,b}}$} if the
number of missing edges $E(K_{a,b}) \setminus E(G)$ is less than
$a$, and they form a star (i.e., they share a common endpoint $v$
which we call the \textbf{center}). If $v$ belongs to the larger
side of $K_{a,b}$, then we also say that $G$ is \textbf{correctly
oriented}.
\end{definition}
Define the constant $\kappa_q = \left( \sqrt{\frac{\log (q/(q-1))}{\log
q}} + \sqrt{\frac{\log q}{\log (q/(q-1))}} \right)^{-2} \approx
\frac{1}{q \log q}$. All logarithms here and in the rest of the paper
are in base $e \approx 2.718$. In the following theorems, we write
$o(1)$ to represent a quantity that tends to zero as $m,n \rightarrow
\infty$.
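Numerically, for example, $\kappa_3 \approx 0.197$ and $\kappa_4 \approx 0.142$.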
\begin{theorem}
\label{thm:main:sparse}
For every fixed integer $q \geq 3$, and any $\kappa < \kappa_q$, the
following holds for all sufficiently large $m$ with $m \leq \kappa n^2$. Every
$n$-vertex graph with $m$ edges which maximizes the number of
$q$-colorings is a semi-complete subgraph (correctly oriented if $q
\geq 4$) of some $K_{a,b}$, plus isolated vertices, where $a =
(1+o(1))\sqrt{m \cdot \log \frac{q}{q-1} / \log q}$ and $b =
(1+o(1)) \sqrt{m \cdot \log q / \log \frac{q}{q-1}}$. The
corresponding number of $q$-colorings is $q^n
e^{(-c+o(1))\sqrt{m}}$, where $c = 2\sqrt{\log \frac{q}{q-1} \log
q}$.
\end{theorem}
\noindent \textbf{Remark.}\, The part sizes of the maximal graphs
above all have the ratio roughly $\log q / \log \frac{q}{q-1}$. The
constant $\kappa_q$ corresponds to the density $m/n^2$ at which the
number of isolated vertices becomes $o(n)$ in the optimal
construction.
\vspace{3mm}
For 3 colors, we can push our argument further, beyond the density
$\kappa_3$. Now, due to the absence of isolated vertices, a rare
exception occurs, which requires us to include an additional
possibility. Here, a ``pendant edge'' means that a new vertex is
added, along with a single edge between it and any other vertex in the
graph. Proposition \ref{prop:pendant-necessary} shows that this
outcome is in fact necessary.
\begin{theorem}
\label{thm:main:q=3}
The following holds for all sufficiently large $m \leq n^2/4$.
Every $n$-vertex graph with $m$ edges and the maximum number of
3-colorings is either \textbf{(i)} a semi-complete subgraph of some
$K_{a,b}$, plus isolated vertices if necessary, or \textbf{(ii)} a
complete bipartite graph $K_{a,b}$ plus a pendant edge.
Furthermore:
\begin{itemize}
\item If $m \leq \kappa_3 n^2$, then $a = (1+o(1))\sqrt{m \cdot
\frac{\log 3/2}{\log 3}}$ and $b = (1+o(1))\sqrt{m \cdot
\frac{\log 3}{\log 3/2}}$. The corresponding number of
colorings is $3^n e^{-(c+o(1))\sqrt{m}}$, where $c = 2\sqrt{ \log
\frac{3}{2} \cdot \log 3 }$.
\item If $\kappa_3 n^2 \leq m \leq \frac{1}{4} n^2$, then $a =
(1+o(1))\frac{n-\sqrt{n^2-4m}}{2}$ and $b =
(1+o(1))\frac{n+\sqrt{n^2-4m}}{2}$. The corresponding number of
colorings is $2^{b + o(n)}$.
\end{itemize}
\end{theorem}
We also considered another conjecture of Lazebnik (see, e.g.,
\cite{LPW}), that the Tur\'an graphs $T_r(n)$ are always extremal
when $r \leq q$. Building upon the techniques in \cite{LPW} that
answered the $r=2, q=3$ case, we confirmed this conjecture for large
$n$ and $r = q-1$.
\begin{theorem}
\label{thm:exact:turan}
Fix an integer $q \geq 4$. For all sufficiently large $n$, the
Tur\'an graph $T_{q-1}(n)$ has more $q$-colorings than any other
graph with the same number of vertices and edges.
\end{theorem}
We close by mentioning some related work. Tomescu \cite{Tomescu-max,
Tomescu-q3-conn, Tomescu-min, Tomescu-1975, Tomescu-hamiltonian,
Tomescu-conn-planar, Tomescu-2conn, Tomescu-blocks} and Dohmen
\cite{Dohmen-1, Dohmen-2} considered the problem of maximizing or
minimizing the number of $q$-colorings of $G$ given some other
parameters, such as chromatic number, connectedness, planarity, and
girth. Wright \cite{Wright} asymptotically determined the total
number of $q$-colored labeled $n$-vertex graphs with $m$ edges, for
the entire range of $m$; this immediately gives an asymptotic
approximation for the \emph{average}\/ value of $P_G(q)$ over all
labeled $n$-vertex graphs with $m$ edges.
Graph coloring is also a special case of a homomorphism problem, and
as we will discuss in our concluding remarks, our approach easily
extends to that more general setting. Recall that a graph
homomorphism $\phi : G \rightarrow H$ is a map from the vertices of
$G$ to those of $H$, such that adjacent vertices in $G$ are mapped to
adjacent vertices in $H$. Thus, the number of $q$-colorings of $G$ is
precisely the number of homomorphisms from $G$ to $K_q$. Another
interesting target graph $H$ is the two-vertex graph consisting of a
single edge, plus a loop at one vertex. Then, the number of
homomorphisms is precisely the number of independent sets in $G$, and
the problem of estimating that number given some partial information
about $G$ is motivated by various questions in statistical physics and
the theory of partially ordered sets. Alon \cite{Alon-indep-sets}
studied the maximum number of independent sets that a $k$-regular
graph of order $n$ can have, and Kahn \cite{Kahn-hard-core,
Kahn-dedekind} considered this problem under the additional
assumption that the $k$-regular graph is bipartite. Galvin and Tetali
\cite{Galvin-Tetali} generalized the main result from
\cite{Kahn-hard-core} to arbitrary target graphs $H$.
Another direction of related research was initiated by the question of
Erd\H{o}s and Rothschild (see Erd\H{o}s \cite{Erdos-1974, Erdos-1992},
Yuster \cite{Yuster}, Alon, Balogh, Keevash, and Sudakov \cite{ABKS},
Balogh \cite{Balogh}, and others), about the maximum over all
$n$-vertex graphs of the number of $q$-edge-colorings (not necessarily
proper) that do not contain a monochromatic $K_r$-subgraph. Our
method is somewhat similar to that in \cite{ABKS}, and these two
problems may be more deeply related than just a similarity in their
formulations.
\vspace{3mm}
The rest of this paper is organized as follows. The next section
contains some definitions, and a formulation of the Szemer\'edi
Regularity Lemma. In Section \ref{sec:reduction-to-opt}, we prove
Theorems \ref{thm:asymp-number} and \ref{thm:asymp-stability}, which
(asymptotically) reduce the general case of the problem to a
quadratically constrained linear program. Then, in the next section
we solve the relevant instances of the optimization problem to give
approximate versions of our main theorems. Sections
\ref{sec:exact:sparse} and \ref{sec:exact:q=3} refine these into the
precise forms of Theorems \ref{thm:main:sparse} and
\ref{thm:main:q=3}. We prove Theorem \ref{thm:exact:turan} in Section
\ref{sec:exact:turan}. The final section contains some concluding
remarks and open problems.
\section{Preliminaries}
\label{sec:preliminaries}
The following (standard) asymptotic notation will be utilized
extensively. For two functions $f(n)$ and $g(n)$, we write $f(n) =
o(g(n))$ if $\lim_{n \rightarrow \infty} f(n)/g(n) = 0$, and $f(n) =
O(g(n))$ or $g(n) = \Omega(f(n))$ if there exists a constant $M$ such
that $|f(n)| \leq M|g(n)|$ for all sufficiently large $n$. We also
write $f(n) = \Theta(g(n))$ if both $f(n) = O(g(n))$ and $f(n) =
\Omega(g(n))$ are satisfied.
We will use $[q]$ to denote the set $\{1, 2, \ldots, q\}$, and
$2^{[q]}$ to denote the collection of all of its subsets. As
mentioned in the introduction, the \emph{Tur\'an graph}\/ $T_r(n)$ is
the complete $r$-partite graph on $n$ vertices with all parts of size
either $\lfloor n/r \rfloor$ or $\lceil n/r \rceil$.
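For example, $T_2(5) = K_{2,3}$, and $T_3(8)$ is the complete 3-partite graph with parts of sizes $3, 3, 2$.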
Given two graphs with the same number of vertices, their \emph{edit
distance}\/ is the minimum number of edges that need to be added or
deleted from one graph to make it isomorphic to the other. We say
that two graphs are \emph{$d$-close}\/ if their edit distance is at
most $d$.
The rest of this section is devoted to formulating
the celebrated Szemer\'edi Regularity Lemma. This theorem roughly
states that every graph, no matter how large, can be approximated by
an object of bounded complexity, which corresponds to a union of a
bounded number of random-looking graphs. To measure the randomness of
edge distribution, we use the following definition. Let the edge
density $d(A, B)$ be the fraction $\frac{e(A, B)}{|A| |B|}$, where
$e(A, B)$ is the number of edges between $A$ and $B$.
\begin{definition}
A pair $(X, Y)$ of disjoint subsets of a graph is
\textbf{$\epsilon$-regular} if every pair of subsets $X' \subset X$
and $Y' \subset Y$ with $|X'| \geq \epsilon |X|$ and $|Y'| \geq
\epsilon |Y|$ has $|d(X', Y') - d(X, Y)| < \epsilon$.
\end{definition}
In this paper, we use the following convenient form of the Regularity
Lemma, which is essentially Theorem IV.5.$29'$ in the textbook
\cite{B-modern-graph-theory}.
\begin{theorem}
\label{thm:regularity-lemma} For every $\epsilon > 0$, there is a
natural number $M' = M'(\epsilon)$ such that \textbf{every} graph $G
= (V, E)$ has a partition $V = \bigcup_{i=1}^M V_i$ with the
following properties. The sizes of the vertex clusters $V_i$ are as
equal as possible (differing by at most 1), their number $M$ satisfies
$1/\epsilon \leq M \leq M'$, and all but at most $\epsilon M^2$ of
the pairs $(V_i, V_j)$ are $\epsilon$-regular.
\end{theorem}
\section{Reduction to an optimization problem}
\label{sec:reduction-to-opt}
In this section, we show that the solution of the following
quadratically constrained linear\footnote{Observe that the logarithms
are merely constant multipliers for the variables $\alpha_A$.}
program answers our main problem asymptotically.
\vspace{3mm}
\noindent \textbf{Optimization Problem 1.}\, Fix an integer $q \geq 2$
and a real parameter $\gamma$. Consider the following objective and
constraint functions:
\begin{displaymath}
\text{\sc obj}({\boldsymbol\alpha}) := \sum_{A \neq \emptyset} \alpha_A \log |A|\,;
\quad\quad\quad
\text{\sc v}({\boldsymbol\alpha}) := \sum_{A \neq \emptyset} \alpha_A,
\quad
\text{\sc e}({\boldsymbol\alpha}) := \sum_{A \cap B = \emptyset} \alpha_A \alpha_B.
\end{displaymath}
The vector ${\boldsymbol\alpha}$ has $2^q - 1$ coordinates $\alpha_A \in
\mathbb{R}$ indexed by the nonempty subsets $A \subset [q]$, and the
sum in $\text{\sc e}({\boldsymbol\alpha})$ runs over \emph{unordered}\/ pairs of disjoint
nonempty sets $\{A,B\}$. Let $\text{\sc Feas}(\gamma)$ be the \emph{feasible set}\/ of
vectors defined by the constraints ${\boldsymbol\alpha} \geq 0$, $\text{\sc v}({\boldsymbol\alpha}) = 1$,
and $\text{\sc e}({\boldsymbol\alpha}) \geq \gamma$. We seek to maximize $\text{\sc obj}({\boldsymbol\alpha})$
over the set $\text{\sc Feas}(\gamma)$, and we define $\text{\sc opt}(\gamma)$ to be this
maximum value, which exists by compactness. We will write that the
vector ${\boldsymbol\alpha}$ \emph{solves}\/ $\text{\sc opt}(\gamma)$ when both ${\boldsymbol\alpha} \in
\text{\sc Feas}(\gamma)$ and $\text{\sc obj}({\boldsymbol\alpha}) = \text{\sc opt}(\gamma)$.
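Although we will solve the relevant instances of this program analytically, it can also be explored numerically. The following minimal Python sketch (illustrative only; it is not the symbolic approach used in this paper) estimates $\text{\sc opt}(\gamma)$ by random sampling over the simplex, for the sample values $q=3$ and $\gamma = 0.05$ (both assumptions):
\begin{verbatim}
import numpy as np
from itertools import combinations

# Illustrative random search for opt(gamma); q and gamma are sample values.
q, gamma = 3, 0.05
sets = [frozenset(s) for r in range(1, q + 1)
        for s in combinations(range(q), r)]       # nonempty subsets of [q]
log_sizes = np.array([np.log(len(A)) for A in sets])
disjoint = [(i, j) for i in range(len(sets)) for j in range(i + 1, len(sets))
            if not sets[i] & sets[j]]              # unordered disjoint pairs

best = -np.inf
rng = np.random.default_rng(0)
for _ in range(200_000):
    a = rng.dirichlet(0.3*np.ones(len(sets)))          # v(a) = 1 automatically
    if sum(a[i]*a[j] for i, j in disjoint) >= gamma:   # constraint e(a) >= gamma
        best = max(best, float(a @ log_sizes))         # objective obj(a)
print(best)  # approaches log(q) - 2*sqrt(gamma*log(q/(q-1))*log(q)) from below
\end{verbatim}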
\vspace{3mm}
\noindent \textbf{Note.}\, In the remainder of this paper, we will
write $\sum_A$ instead of $\sum_{A \neq \emptyset}$ because it is
clear from the definition of ${\boldsymbol\alpha}$ that the empty set is excluded.
\vspace{3mm}
\noindent \textbf{Construction 1: $\boldsymbol{G_\alpha(n)}$.}\, Let
$n$ and $m$ be the desired numbers of vertices and edges, and let
${\boldsymbol\alpha} \in \text{\sc Feas}(m/n^2)$ be a feasible vector. Consider the
following $n$-vertex graph, which we call $G_{\boldsymbol\alpha}(n)$. Partition
the vertices into (possibly empty) clusters $V_A$ such that each
$|V_A|$ differs from $n \alpha_A$ by less than 1. For every pair of
clusters $(V_A, V_B)$ which is indexed by disjoint subsets, place a
complete bipartite graph between the clusters.
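For concreteness, the following minimal Python sketch (the part sizes are toy values, not optimal ones) builds a small $G_{\boldsymbol\alpha}(n)$ for $q=3$ and counts its proper $q$-colorings by brute force; consistently with the observation below, the count is at least $\prod_A |A|^{|V_A|}$.
\begin{verbatim}
import itertools, math

# Toy instance of Construction 1 for q = 3; the part sizes are assumptions.
q = 3
parts = {frozenset({0}): 2, frozenset({1, 2}): 3, frozenset({0, 1, 2}): 4}
n = sum(parts.values())
labels = [A for A, size in parts.items() for _ in range(size)]  # cluster per vertex
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if not labels[u] & labels[v]]   # complete bipartite between disjoint clusters

count = sum(all(col[u] != col[v] for u, v in edges)
            for col in itertools.product(range(q), repeat=n))
lower = math.prod(len(A)**size for A, size in parts.items())
print(count, lower)  # count >= lower: list-respecting colorings are proper
\end{verbatim}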
\vspace{3mm}
Observe that any coloring that for each cluster $V_A$ uses only colors
from $A$ is a proper coloring. Therefore, if all $n \alpha_A$
happened to be integers, then $G_{\boldsymbol\alpha}(n)$ would have at least
$\prod_A |A|^{n\alpha_A} = e^{\text{\sc obj}({\boldsymbol\alpha})n}$ colorings, and also
precisely $\text{\sc e}({\boldsymbol\alpha}) n^2$ edges. But we cannot simply apply
Construction 1 to the ${\boldsymbol\alpha}$ that solves $\text{\sc opt}(m/n^2)$, because it
may happen that $G_{\boldsymbol\alpha}(n)$ has fewer than $m$ edges if the entries
of ${\boldsymbol\alpha}$ are not integer multiples of $1/n$. Fortunately, the
shortfall cannot be substantial:
\begin{proposition}
\label{prop:construction-asymp-edges}
The number of edges in any $G_{\boldsymbol\alpha}(n)$ differs from $\text{\sc e}({\boldsymbol\alpha})
n^2$ by less than $2^q n$. Also, for any other vector ${\boldsymbol\nu}$, the
edit-distance between $G_{\boldsymbol\alpha}(n)$ and $G_{\boldsymbol\nu}(n)$ is at most $\|
{\boldsymbol\alpha} - {\boldsymbol\nu} \|_1 n^2 + 2^{q+1} n$, where $\| \cdot \|_1$ is the
$L^1$-norm.
\end{proposition}
The proof is elementary and routine, so we will defer it to Section
\ref{sec:pf-asymp-number-ii} so as not to interrupt this exposition.
To recover from the $O(n)$ edge deficit, we extend the construction
in the following way.
\vspace{3mm}
\noindent \textbf{Construction 2: $\boldsymbol{G_{\boldsymbol\alpha}'(n)}$.}\, Let
$n$ and $m$ be the desired numbers of vertices and edges, and let
${\boldsymbol\alpha} \in \text{\sc Feas}(m/n^2)$ be a feasible vector. If $G_{\boldsymbol\alpha}(n)$ from
Construction 1 already has at least $m$ edges, then set $G_{\boldsymbol\alpha}'(n)
= G_{\boldsymbol\alpha}(n)$.
Otherwise, $G_{\boldsymbol\alpha}(n)$ is short by, say, $k$ edges, and $k = O(n)$
by Proposition \ref{prop:construction-asymp-edges}. Let $V_A$ be its
largest cluster whose index $A$ is not a singleton. Suppose first
that $|V_A| \geq 2\lceil \sqrt{k} \rceil$. So far $V_A$ does not span
any edges, so we can add $k$ edges to $G_{\boldsymbol\alpha}(n)$ by selecting two
disjoint subsets $U_1, U_2 \subset V_A$ of size $\lceil \sqrt{k}
\rceil$, and putting a $k$-edge bipartite graph between them. Call
the result $G_{\boldsymbol\alpha}'(n)$.
The last case is $|V_A| < 2\lceil \sqrt{k} \rceil$. We will later
show that this case arises only when the maximum number of colorings
is $2^{o(n)}$, and this is already achieved by the Tur\'an graph
$T_q(n)$. So, to clean up the statements of our theorems, we just
define $G_{\boldsymbol\alpha}'(n) = T_q(n)$ here.
\subsection{Structure of asymptotic argument}
\label{sec:dense-structure-argument}
We are now ready to state our theorem, which shows that solutions to
Optimization Problem 1 produce graphs which asymptotically maximize
the number of $q$-colorings.
\begin{theorem}
\label{thm:asymp-number}
For any $\epsilon > 0$, the following holds for any sufficiently
large $n$, and any $m$ less than or equal to the number of edges in
the Tur\'an graph $T_q(n)$.
\begin{description}
\item[(i)] Every $n$-vertex graph with $m$ edges has fewer than
$e^{(\text{\sc opt}(m/n^2) + \epsilon) n}$ proper $q$-colorings.
\item[(ii)] Any ${\boldsymbol\alpha}$ which solves $\text{\sc opt}(m/n^2)$ yields a graph
$G_{\boldsymbol\alpha}'(n)$ via Construction 2 which has at least $m$ edges and
more than $e^{(\text{\sc opt}(m/n^2) - \epsilon) n}$ proper $q$-colorings.
\end{description}
\end{theorem}
\noindent \textbf{Remark.}\, The number of colorings can only increase
when edges are deleted, so one may take an arbitrary $m$-edge subgraph
of $G_{\boldsymbol\alpha}'(n)$ if one requires a graph with exactly $m$ edges.
\vspace{3mm}
The key ingredient in the proof of Theorem \ref{thm:asymp-number} is
Szemer\'edi's Regularity Lemma. Part (ii) is routine, and full
details are given in Section \ref{sec:pf-asymp-number-ii}. On the
other hand, the argument for part (i) is more involved, so we
highlight its structure here so that the reader does not get lost in
the details. The proof breaks into the following claims.
\begin{description}
\item[Claim 1.] For any $\delta > 0$, there exists $n_0$ such that
the following holds for any graph $G = (V, E)$ with $n > n_0$
vertices and $m$ edges. The Regularity Lemma gives a special
partition of the vertex set into sets $V_1$, \ldots, $V_M$ of almost
equal size, where $M$ is upper bounded by a constant depending only
on $\delta$. Then, we may delete at most $\delta n^2$ edges of $G$
in such a way that the resulting graph $G'$ has the following
properties.
\begin{description}
\item[(i)] Each $G'[V_i]$ spans no edges.
\item[(ii)] If $G'$ has any edges at all between two parts $V_i$
and $V_j$, then in fact it has an edge between every pair of
subsets $U \subset V_i$, $W \subset V_j$ with $|U| \geq \delta
|V_i|$ and $|W| \geq \delta |V_j|$.
\end{description}
Note that since $G'$ is a subgraph of $G$, the number of
$q$-colorings can only increase.
\item[Claim 2.] Let $\mathcal{C}_1$ be the set of colorings of $G'$.
Then, if we keep only those colorings $\mathcal{C}_2 \subset
\mathcal{C}_1$ with the property that in each $V_i$, any color is
used either zero times or at least $\delta |V_i|$ times, we will
still have $|\mathcal{C}_2| \geq e^{-c_\delta n} |\mathcal{C}_1|$.
Here, $c_\delta$ is a constant which tends to zero with $\delta$.
Now each coloring in $\mathcal{C}_2$ has the special property that
whenever the same color appears on two parts $V_i$ and $V_j$, then
there cannot be any edges between those entire parts.
\item[Claim 3.] By looking at which colors appear on each part $V_i$,
we may associate each coloring with a map $[M] \rightarrow
2^{[q]}$. Let $\phi : [M] \rightarrow 2^{[q]}$ be a map which is
associated with the maximum number of colorings in $\mathcal{C}_2$.
Then, if we keep only those colorings $\mathcal{C}_3 \subset
\mathcal{C}_2$ which give $\phi$, we still have $|\mathcal{C}_3|
\geq 2^{-qM} |\mathcal{C}_2|$.
\item[Claim 4.] For every nonempty $A \subset [q]$, let $V_A$ be the
union of those parts $V_i$ for which $\phi(i) = A$. (These are the
parts that in all colorings in $\mathcal{C}_3$ are colored using
exactly colors from $A$.) Define the vector ${\boldsymbol\alpha}$ by setting
each $\alpha_A = |V_A|/n$. Then $G' \subset G_{\boldsymbol\alpha}(n)$, and since
$G'$ only differs from our original $G$ by at most $\delta n^2$
edges, we also have ${\boldsymbol\alpha} \in \text{\sc Feas}(m/n^2 - \delta)$. Thus:
\begin{displaymath}
|\mathcal{C}_3|
\ \leq \ \prod_A |A|^{|V_A|}
\ = \ e^{\text{\sc obj}({\boldsymbol\alpha}) n}
\ \leq \ e^{\text{\sc opt}(m/n^2 - \delta) n} \, .
\end{displaymath}
\item[Claim 5.] The function $\text{\sc opt}$ is uniformly continuous. Thus,
for an appropriate (sufficiently small) choice of $\delta > 0$, we
have for all sufficiently large $n$ that
\begin{displaymath}
P_G(q)
\ \leq \ P_{G'}(q)
\ \leq \ e^{c_\delta n} \cdot 2^{qM} \cdot e^{\text{\sc opt}(m/n^2 - \delta) n}
\ < \ e^{(\text{\sc opt}(m/n^2) + \epsilon) n} \, ,
\end{displaymath}
as desired. (Recall that $P_G(q)$ is the number of $q$-colorings of $G$.)
\end{description}
By combining these five claims with an elementary analysis argument,
we also obtain a stability result, which roughly states that if a
graph has ``close'' to the optimal number of colorings, then it must
resemble a graph from Construction 1. A stability result is very
useful, because the approximate structure later allows us to apply
combinatorial arguments to refine our asymptotic results into exact
results. We quantify this in terms of the edit-distance, which we
defined in Section \ref{sec:preliminaries}. Recall that we say that two
graphs are $d$-close when their edit distance is at most $d$. We
prove the following theorem in Section \ref{sec:asymp-stability}.
\begin{theorem}
\label{thm:asymp-stability}
For any $\epsilon, \kappa > 0$, the following holds for all
sufficiently large $n$. Let $G$ be an $n$-vertex graph with $m
\leq \kappa n^2$ edges, which maximizes the number of $q$-colorings.
Then $G$ is $\epsilon n^2$-close to some $G_{\boldsymbol\alpha}(n)$ from
Construction 1, for an ${\boldsymbol\alpha}$ which solves $\text{\sc opt}(\gamma)$ for some
$|\gamma - m/n^2| \leq \epsilon$ with $\gamma \leq \kappa$.
\end{theorem}
\noindent \textbf{Remark.}\, This theorem is only useful if the
resulting $\gamma$ falls within the range of densities for which the
solution of $\text{\sc opt}$ is known. The technical parameter $\kappa$ is used
to keep $\gamma$ within this range.
\subsection{Finer resolution in the sparse case}
\label{sec:sparse-summary}
The Regularity Lemma is nontrivial only for graphs with positive edge
density (i.e., quadratic number of edges). This typically presents a
serious and often insurmountable obstacle when trying to extend
Regularity-based results to situations involving sparse graphs.
Although much work has been done to develop sparse variants of the
Regularity Lemma, the resulting analogues are weaker and much more
difficult to apply.
Let us illustrate the issue by attempting to apply Theorem
\ref{thm:asymp-number} when $m = o(n^2)$. Then, we find that the
maximum number of $q$-colorings of any $n$-vertex graph with $m$ edges
is $e^{cn + o(n)}$, where $c = \text{\sc opt}(0) = \log q$ is a constant entirely
determined by $q$. Note that the final asymptotic is independent of
$m$, even if $m$ grows extremely slowly compared to $n^2$. This is
because the key parameter was the density $m/n^2$, which already
vanished once $m = o(n^2)$. Thus, the interesting question in the
sparse case is to distinguish between sparse graphs and very sparse
graphs, by looking inside the $o(n)$ error term in the exponent.
We are able to circumvent these difficulties by making the following
key observation which allows us to pass to a dense subgraph. As it
turns out, every sparse graph which maximizes the number of
$q$-colorings has a nice structure: most of the vertices are isolated,
and \emph{all}\/ of the edges are contained in a subgraph which is
dense, but not too dense. Section \ref{sec:sparse-proofs} contains
the following lemma's short proof, which basically boils down to a
comparison against the smallest Tur\'an graph with at least $m$ edges.
\begin{lemma}
\label{lem:sparse-has-core}
Fix an integer $q \geq 2$ and a threshold $\kappa > 0$. Given any
positive integer $m$, there exists an $n_0 = \Theta(\sqrt{m})$ with
$m/n_0^2 \leq \kappa$ such that the following holds for any $n \geq
n_0$. In every $n$-vertex graph $G$ with $m$ edges, which maximizes
the number of $q$-colorings, there is a set of $n_0$ vertices which
spans all of the edges.
\end{lemma}
The fact that our graph is sparse becomes a benefit rather than a
drawback, because it allows us to limit the edge density from above by
any fixed threshold. This is useful, because we can
completely solve the optimization problem for all densities below
$\kappa_q = \left( \sqrt{\frac{\log \frac{q}{q-1}}{\log q}} +
\sqrt{\frac{\log q}{\log \frac{q}{q-1}}} \right)^{-2}$. We will prove the
following proposition in Section \ref{sec:solve-opt-sparse}.
\begin{proposition}
\label{prop:solve-opt-sparse}
Fix an integer $q \geq 3$. For any $0 \leq \gamma \leq
\kappa_q$, the \textbf{unique} solution (up to a permutation of the
ground set $[q]$) to $\text{\sc opt}(\gamma)$ has the following form.
\begin{equation}
\label{eq:sparse-opt-soln}
\alpha_{\{1\}} = \sqrt{\gamma \cdot \log \frac{q}{q-1} \, / \, \log q},
\quad\quad
\alpha_{\{2, \ldots, q\}} = \frac{\gamma}{\alpha_{\{1\}}},
\quad\quad
\alpha_{[q]} = 1 - \alpha_{\{1\}} - \alpha_{\{2, \ldots, q\}},
\end{equation}
with all other $\alpha_A = 0$. This gives $\text{\sc opt}(\gamma) = \log q -
2\sqrt{\gamma \cdot \log \frac{q}{q-1} \cdot \log q}$.
\end{proposition}
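For a concrete illustration, taking $q = 3$ and $\gamma = 0.05$ (which is below $\kappa_3 \approx 0.197$) in \eqref{eq:sparse-opt-soln} gives $\alpha_{\{1\}} \approx 0.136$, $\alpha_{\{2,3\}} \approx 0.368$, and $\alpha_{[3]} \approx 0.496$, with $\text{\sc opt}(0.05) \approx 0.800$.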
Since we have the complete solution of the relevant instance of the
optimization problem, we can give explicit bounds when we transfer our
asymptotic results from the previous section to the sparse case. We
can also explicitly describe the graph that approximates any optimal
graph, as follows. Let $t_1$ and $t_2$ be real numbers that satisfy
$t_1/t_2 = \log \frac{q}{q-1} / \log q$ and $t_1 t_2 = m$. Take a
complete bipartite graph between two vertex clusters $V_1$ and $V_2$
with sizes $|V_i| = \lceil t_i \rceil$, and add enough isolated
vertices to make the total number of vertices exactly $n$. Call the
result $G_{n,m}$.
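Explicitly, $t_1 = \sqrt{m \cdot \log \frac{q}{q-1} / \log q}$ and $t_2 = \sqrt{m \cdot \log q / \log \frac{q}{q-1}}$, which match the part sizes $a$ and $b$ in Theorem \ref{thm:main:sparse}.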
\begin{proposition}
\label{prop:asymp-sparse}
Fix an integer $q \geq 3$. The following hold for all sufficiently
large $m \leq \kappa_q n^2$.
\begin{description}
\item[(i)] The maximum number of $q$-colorings of an $n$-vertex
graph with $m$ edges is $q^n e^{(-c + o(1)) \sqrt{m}}$, where $c =
2\sqrt{\log \frac{q}{q-1} \log q}$. Here, the $o(1)$ term tends
to zero as $m \rightarrow \infty$.
\item[(ii)] For any $\epsilon > 0$, as long as $m$ is sufficiently
large, every $n$-vertex graph $G$ with $m$ edges, which maximizes
the number of $q$-colorings, is $\epsilon m$-close to the graph
$G_{n, m}$ which we described above.
\end{description}
\end{proposition}
We prove this proposition in Section \ref{sec:sparse-proofs}. Note
that part (i) is precisely the final claim of Theorem
\ref{thm:main:sparse}.
\subsection{Proof of Theorem \ref{thm:asymp-number}, part (i)}
This section contains the proofs of the claims in Section
\ref{sec:dense-structure-argument}, except for Claim 3, which is
obvious. Together, these establish part (i) of Theorem
\ref{thm:asymp-number}, which gives the asymptotic upper bound for the
number of $q$-colorings of an $n$-vertex graph with $m$ edges.
\vspace{3mm}
\noindent \textbf{Proof of Claim 1.}\, Apply Szemer\'edi's Regularity
Lemma (Theorem \ref{thm:regularity-lemma}) with parameter $\epsilon =
\delta/3$ to partition of $V$ into nearly-equal parts $V_1$, \ldots,
$V_M$. Then, all but $\epsilon M^2$ of the pairs $(V_i, V_j)$ are
$\epsilon$-regular, and $M \geq 1/\epsilon$. Importantly, $M$ is also
upper bounded by a constant independent of $n$. We clean up the graph
in a way typical of many applications of the Regularity Lemma. Delete
all edges in each induced subgraph $G[V_i]$, all edges between pairs
$(V_i, V_j)$ which are not $\epsilon$-regular, and all edges between
pairs $(V_i, V_j)$ whose edge density is at most $\epsilon$. Since all
$|V_i| = (1+o(1))n/M$, the number of deleted edges is at most
\begin{displaymath}
(1+o(1)) \left[
M {n/M \choose 2}
+
\epsilon M^2 (n/M)^2
+
\epsilon {n \choose 2}
\right]
\ \leq \
(1+o(1)) [
\epsilon n^2 / 2
+
\epsilon n^2
+
\epsilon n^2 / 2
],
\end{displaymath}
which is indeed less than $\delta n^2$ when $n$ is sufficiently large.
It remains to show property (ii). The only edges remaining in $G'$
are those between $\epsilon$-regular pairs $(V_i, V_j)$ with
edge-density greater than $\epsilon$. By definition of
$\epsilon$-regularity (and since $\delta > \epsilon$),
the edge density between
every pair of
sets $|U| \geq \delta |V_i|$, $|W| \geq \delta |V_j|$ must be
positive. In particular, there must be at least one
edge, which establishes property (ii). \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 2.}\, We aim to establish
$|\mathcal{C}_2| \geq e^{-c_\delta n} |\mathcal{C}_1|$, with $c_\delta
= q \delta \log \frac{e^2}{\delta}$. It is a simple calculus exercise
to verify that $c_\delta \rightarrow 0$ as $\delta \rightarrow 0$.
Let us show that we can obtain any coloring $\psi \in \mathcal{C}_1$
by starting with an appropriate coloring $\psi' \in \mathcal{C}_2$,
and changing only a few color choices. Since we may assume $\delta <
\frac{1}{q}$, every part $V_i$ has some color $c_i^*$ which appears on
at least $\delta$-fraction of its vertices. Now consider each $V_i$.
For every color $c$ which appears less than $\delta |V_i|$ times in
$V_i$, use color $c_i^*$ to re-color all vertices of $V_i$ that had
color $c$ under $\psi$. Now all colors appear either 0 or at least
$\delta |V_i|$ times, so once we verify that the coloring is still
proper, we will have our desired $\psi' \in \mathcal{C}_2$. But the
only way to make a monochromatic edge is to have two distinct parts
$V_i$, $V_j$, with $c_i^* = c_j^*$, joined by at least one edge. Then
part (ii) of Claim 1 implies that there is also some edge between the
$\delta |V_i|$ vertices in $V_i$ originally colored $c_i^*$ under
$\psi$, and the $\delta |V_j|$ vertices in $V_j$ originally colored
$c_j^*$. This contradicts the fact that $\psi$ was a proper coloring.
Reversing the process, it is clear that $\psi$ can be recovered by
taking $\psi' \in \mathcal{C}_2$ and changing the colors of at most
$\delta |V_i|$ vertices for every color $c \in [q]$ and every $1 \leq i \leq M$.
Note that for each $c \in [q]$, we recolor a subset of $G$ of total
size at most $\sum_i \delta |V_i| = \delta n$. Using the bounds ${n
\choose r} \leq (en/r)^r$ and $(1+x) \leq e^x$, we see that the
total number of distinct ways in which we can modify any given $\psi' \in \mathcal{C}_2$ is at most
\begin{displaymath}
\left[ \sum_{r = 0}^{\delta n} {n \choose r} \right]^q
\ \leq \ \left[ (1 + \delta n) {n \choose \delta n} \right]^q
\ \leq \ \left[ e^{\delta n} \left(\frac{en}{\delta n}\right)^{\delta n} \right]^q
\ = \ e^{c_\delta n},
\end{displaymath}
which provides the desired upper bound on $|\mathcal{C}_1| /
|\mathcal{C}_2|$.
The final part of this claim is a simple consequence of property (ii)
of Claim 1. Indeed, suppose that some coloring in $\mathcal{C}_2$
assigns the same color $c$ to some vertices $U_i \subset V_i$ and $U_j
\subset V_j$. Since this is a proper coloring, there cannot be any
edges between $U_i$ and $U_j$. Yet $|U_i| \geq \delta |V_i|$ and
$|U_j| \geq \delta |V_j|$ by definition of $\mathcal{C}_2$. Therefore, by
property (ii) of Claim 1, there are no edges at all between
$V_i$ and $V_j$, as claimed. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 4.}\, Recall that $G_{\boldsymbol\alpha}(n)$ was
obtained in Construction 1 by putting a complete bipartite graph
between every pair ($V_A, V_B$) indexed by disjoint subsets. The last
part of Claim 2 implies that $G'$ has no edges at all between parts
$V_i$ and $V_j$ which receive overlapping color sets under
$\mathcal{C}_3$. Furthermore, each $G'[V_i]$ is empty by part (i) of
Claim 1. So, $G'$ has no edges in each $V_A$, and also has no edges
between any $V_A$ and $V_B$ that are indexed by overlapping sets.
Hence $G'$ is indeed a subgraph of $G_{\boldsymbol\alpha}(n)$.
Furthermore, $G_{\boldsymbol\alpha}(n)$ has at least $m - \delta n^2$ edges,
because $G'$ differs from $G$ by at most $\delta n^2$ edges. Yet all
$n \alpha_A$ are integers by construction, so $G_{\boldsymbol\alpha}(n)$ has
precisely $\text{\sc e}({\boldsymbol\alpha}) n^2$ edges. Therefore, ${\boldsymbol\alpha} \in \text{\sc Feas}(m/n^2
- \delta)$, as claimed. The final inequality in Claim 4 follows from
the fact that $\mathcal{C}_3$ only uses colors from $A$ to color each
$V_A$, and the definitions of $\alpha_A = |V_A|/n$ and $\text{\sc obj}({\boldsymbol\alpha}) =
\sum_A \alpha_A \log |A|$. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 5.}\, The only nontrivial part of
this claim is the continuity of $\text{\sc opt}$ on its domain, which is the set
of $\gamma$ for which $\text{\sc Feas}(\gamma) \neq \emptyset$. This is easily
recognized as the interval $\big(-\infty, \frac{q-1}{2q}\big]$, where
the upper endpoint, which corresponds to the $q$-partite Tur\'an
graph, equals $\text{\sc e}({\boldsymbol\alpha})$ for the vector ${\boldsymbol\alpha}$ with $\alpha_A =
1/q$ for all singletons $A$. Note that the constraint ${\boldsymbol\alpha} \geq 0$
already guarantees that $\text{\sc e}({\boldsymbol\alpha}) \geq 0$, so $\text{\sc opt}$ is constant on
$(-\infty, 0]$.
Fix an $\epsilon > 0$. Since $\text{\sc opt}$ is monotonically decreasing by
definition, and constant on $(-\infty, 0]$, it suffices to show that
any $0 \leq \gamma < \gamma' \leq \frac{q-1}{2q}$ with $|\gamma' -
\gamma| < \epsilon^2$ has $\text{\sc opt}(\gamma') > \text{\sc opt}(\gamma) - 2^{q+1}
\epsilon \log q$. Select any ${\boldsymbol\alpha}$ which solves $\text{\sc opt}(\gamma)$.
We will adjust ${\boldsymbol\alpha}$ to find an ${\boldsymbol\alpha}' \in \text{\sc Feas}(\gamma')$ with
$\text{\sc obj}({\boldsymbol\alpha}') > \text{\sc obj}({\boldsymbol\alpha}) - 2^{q+1} \epsilon \log q$, using
essentially the same perturbation as in Construction 2.
If there is an $\alpha_A \geq 2\epsilon$ with $|A| \geq 2$, shift
$\epsilon$ of $\alpha_A$'s value\footnote{Formally, $\alpha_A$ falls
by $2\epsilon$, and each of $\alpha_{\{i\}}$ and $\alpha_{\{j\}}$
increase by $\epsilon$.} to each of $\alpha_{\{i\}}$ and
$\alpha_{\{j\}}$ for distinct $i, j \in A$. This clearly keeps
$\text{\sc v}({\boldsymbol\alpha})$ invariant, and it increases $\text{\sc e}({\boldsymbol\alpha})$ by at least
$\epsilon^2$ because $\alpha_{\{i\}} \alpha_{\{j\}}$ is a summand of
$\text{\sc e}({\boldsymbol\alpha})$. Yet it only reduces $\text{\sc obj}({\boldsymbol\alpha})$ by at most
$2\epsilon \log |A| \leq 2 \epsilon \log q$, so $\text{\sc obj}({\boldsymbol\alpha}') \geq
\text{\sc obj}({\boldsymbol\alpha}) - 2 \epsilon \log q$, finishing this case.
On the other hand, if all non-singletons $A$ have $\alpha_A <
2\epsilon$, then $\text{\sc obj}({\boldsymbol\alpha})$ is already less than $2^q \cdot 2
\epsilon \log q$. Since $\text{\sc opt}$ is always nonnegative, we trivially
have $\text{\sc opt}(\gamma') \geq 0 > \text{\sc opt}(\gamma) - 2^{q+1} \epsilon \log q$,
as desired. \hfill $\Box$
\subsection{Proof of Theorem \ref{thm:asymp-number}, part (ii)}
\label{sec:pf-asymp-number-ii}
In this section, we establish the asymptotic tightness of our upper
bound, by showing that Construction 2 produces graphs that
asymptotically maximize the number of $q$-colorings. We will need
Proposition \ref{prop:construction-asymp-edges}, so we prove it first.
\vspace{3mm}
\noindent \textbf{Proof of Proposition
\ref{prop:construction-asymp-edges}.}\, Define the variables $n_A =
n \alpha_A$ (not necessarily integers), and call the expressions
$\sum_A n_A$ and $\sum_{A \cap B = \emptyset} n_A n_B$ the numbers of
\emph{fractional vertices}\/ and \emph{fractional edges},
respectively. Initially, there are exactly $n$ fractional vertices
and $\text{\sc e}({\boldsymbol\alpha}) n^2$ fractional edges.
Recall that the construction rounds each $n_A$ either up or down to
the next integer. Let us perform these individual roundings
sequentially, finishing all of the downward roundings before the upward
roundings. This ensures that the number of fractional vertices is
kept $\leq n$ throughout the process. But each iteration changes the
number of fractional edges by at most $\sum_A n_A \leq n$, and there
are at most $2^q$ iterations, so our final number of edges is indeed
within $2^q n$ of $m$.
The second part of the proposition is proved similarly. We can apply
the same iterative process to change each part size from $\alpha_A n$
to $\nu_A n$, in such a way that all downward adjustments are
performed first. When updating the coordinate indexed by $A \subset
[q]$, we affect at most $(|\alpha_A n - \nu_A n| + 2)n$ edges,
where the extra 2 comes from the fact that the part sizes were rounded
off. Therefore, after the $\leq 2^q$ total iterations, the total
number of edges we edit is indeed at most $\| {\boldsymbol\alpha} - {\boldsymbol\nu} \|_1 n^2 +
2^{q+1} n$. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Theorem \ref{thm:asymp-number}(ii).}\, Let
$n$ and $m$ be given, with $m$ less than the number of edges in the
Tur\'an graph $T_q(n)$. Suppose we have a vector ${\boldsymbol\alpha} \in
\text{\sc Feas}(m/n^2)$ which achieves the maximum $\text{\sc obj}({\boldsymbol\alpha}) = \text{\sc opt}(m/n^2)$.
Construction 2 produces a graph $G_{\boldsymbol\alpha}'(n)$ with $n$ vertices and
at least $m$ edges, which we will show has more than
$e^{(\text{\sc opt}(m/n^2)-\epsilon)n}$ proper $q$-colorings, as long as $n$ is
sufficiently large.
If $G_{\boldsymbol\alpha}(n)$ already has at least $m$ edges, then we defined
$G_{\boldsymbol\alpha}'(n) = G_{\boldsymbol\alpha}(n)$, which has at least $\prod_A |A|^{\lfloor
n\alpha_A \rfloor} \geq \prod_A |A|^{n \alpha_A - 1} =
e^{\text{\sc obj}({\boldsymbol\alpha})n} / \prod_A |A| = e^{\text{\sc obj}({\boldsymbol\alpha})n - O(1)}$ colorings,
because all colorings that use only colors from $A$ for each $V_A$ are
proper.
Otherwise, $G_{\boldsymbol\alpha}(n)$ is short by, say, $k$ edges, which is $\leq
2^q n$ by Proposition \ref{prop:construction-asymp-edges}. If the
largest $|V_A|$ indexed by a non-singleton is at least $2 \lceil
\sqrt{k} \rceil$, our construction places a $k$-edge bipartite graph
between $U_1, U_2 \subset V_A$. Let $c_1$ and $c_2$ be two distinct
colors in $A$. Even if we force every vertex in each $U_i$ to take
the color $c_i$, we only lose at most a factor of $q^{2 \lceil
\sqrt{k} \rceil} = e^{o(n)}$ compared to the bound in the previous
paragraph. This is because each of the $2 \lceil \sqrt{k} \rceil$
vertices in $U_1 \cup U_2$ had its number of color choices reduced
from $|A| \leq q$ to 1. So, $G_{\boldsymbol\alpha}'(n)$ still has at least
$e^{\text{\sc obj}({\boldsymbol\alpha})n - o(n)}$ colorings.
The final case is when all parts $V_A$ indexed by non-singletons are
smaller than $2 \lceil \sqrt{k} \rceil$. Here, the construction
simply defines $G_{\boldsymbol\alpha}'(n)$ to be the Tur\'an graph $T_q(n)$. Since
$\log |A| = 0$ for singletons $A$, the upper bound on $|V_A|$ implies
that $\text{\sc obj}({\boldsymbol\alpha}) \leq 2^q \cdot \frac{2 \lceil \sqrt{k} \rceil}{n}
\cdot \log q$. This is less than $\epsilon$ for sufficiently large
$n$, because we had $k \leq 2^q n$. Then, $e^{(\text{\sc opt}(m/n^2) -
\epsilon)n} < 1$, which is of course less than the number of
$q$-colorings of the Tur\'an graph $T_q(n)$. This completes our proof.
\hfill $\Box$
\subsection{Proof of Theorem \ref{thm:asymp-stability}}
\label{sec:asymp-stability}
In this section, we prove that any $n$-vertex graph with $m$ edges,
which maximizes the number of $q$-colorings, is in fact close (in
edit-distance) to a graph $G_{\boldsymbol\alpha}(n)$ from Construction 1. In fact,
we prove something slightly stronger: if a graph has ``close'' to the
maximum number of $q$-colorings, then it must be ``close'' (in
edit-distance) to an asymptotically optimal graph from Construction 1.
\begin{lemma}
\label{lem:asymp-stability}
For any $\epsilon, \kappa > 0$, there exists $\delta > 0$ such that
the following holds for all sufficiently large $n$. Let $G$ be an
$n$-vertex graph with $m \leq \kappa n^2$ edges and at least
$e^{(\text{\sc opt}(m/n^2) - \delta) n}$ proper $q$-colorings. Then $G$ is
$\epsilon n^2$-close to some $G_{\boldsymbol\alpha}(n)$ from Construction 1, for
an ${\boldsymbol\alpha}$ which solves $\text{\sc opt}(\gamma)$ for some $|\gamma - m/n^2|
\leq \epsilon$ with $\gamma \leq \kappa$.
\end{lemma}
Note that this lemma immediately implies Theorem
\ref{thm:asymp-stability}, because Theorem \ref{thm:asymp-number}
established that the maximum number of colorings of an $n$-vertex
graph with $m$ edges was $e^{(\text{\sc opt}(m/n^2) +o(1)) n}$. Its proof is an
elementary analysis exercise in compactness, which only requires the
continuity of $\text{\sc obj}$, $\text{\sc opt}$, $\text{\sc v}$, and $\text{\sc e}$, the fact that ${\boldsymbol\alpha}$
and the edge densities $m/n^2$ reside in compact spaces, and the
following consequence of Claims 1--4 of Section
\ref{sec:dense-structure-argument} (whose simple proof we omit):
\begin{corollary}
\label{cor:stability-to-suboptimal}
For every $\delta > 0$, the following holds for all sufficiently
large $n$. Every $q$-colorable, $n$-vertex graph $G$ with $m$ edges
is $\delta n^2$-close to a subgraph of some $G_{\boldsymbol\alpha}(n)$ with
${\boldsymbol\alpha} \in \text{\sc Feas}(m/n^2 - \delta)$. Also, $G$ has at most
$e^{(\text{\sc obj}({\boldsymbol\alpha}) + \delta) n}$ proper $q$-colorings.
\end{corollary}
\vspace{3mm}
\noindent \textbf{Proof of Lemma \ref{lem:asymp-stability}.}\, We
proceed by contradiction. Then, there is some fixed $\epsilon > 0$, a
sequence $\delta_i \rightarrow 0$, and a sequence of graphs $G_i$ with
the following properties.
\begin{description}
\item[(i)] $G_i$ has at least as many vertices as required to apply
Corollary \ref{cor:stability-to-suboptimal} with parameter
$\delta_i$.
\item[(ii)] $G_i$ has at least $e^{(\text{\sc opt}(m_i/n_i^2) - \delta_i) n_i}$
colorings, where $n_i$ and $m_i$ are its numbers of vertices and
edges, and $m_i \leq \kappa n_i^2$.
\item[(iii)] $G_i$ is at least $\epsilon n_i^2$-far from
$G_{\boldsymbol\alpha}(n_i)$ for every ${\boldsymbol\alpha}$ that solves $\text{\sc opt}(\gamma)$ with
$|\gamma - m_i/n_i^2| \leq \epsilon$.
\end{description}
Applying Corollary \ref{cor:stability-to-suboptimal} to each $G_i$
with parameter $\delta_i$, we find vectors ${\boldsymbol\alpha}_i
\in \text{\sc Feas}(m_i/n_i^2 - \delta_i)$ such that $G_i$ is $\delta_i
n_i^2$-close to some subgraph $G_i'$ of $G_{{\boldsymbol\alpha}_i}(n_i)$, and each
$G_i$ has at most $e^{(\text{\sc obj}({\boldsymbol\alpha}_i)+\delta_i)n_i}$ proper
$q$-colorings. Combining this with property (ii) above, we find that
each $\text{\sc obj}({\boldsymbol\alpha}_i) \geq \text{\sc opt}(m_i/n_i^2) - 2\delta_i$. The densities
$m_i/n_i^2$ and the vectors ${\boldsymbol\alpha}_i$ live in bounded (hence compact)
spaces. So, by passing to a subsequence, we may assume that
$m_i/n_i^2 \rightarrow \gamma \leq \kappa$ and ${\boldsymbol\alpha}_i \rightarrow
{\boldsymbol\alpha}$ for some limit points $\gamma$ and ${\boldsymbol\alpha}$.
Observe that by continuity, both ${\boldsymbol\alpha} \in \text{\sc Feas}(\gamma)$ and $\text{\sc obj}({\boldsymbol\alpha}) \geq \text{\sc opt}(\gamma)$.
Therefore ${\boldsymbol\alpha}$ solves $\text{\sc opt}(\gamma)$, i.e., $\text{\sc obj}({\boldsymbol\alpha})=\text{\sc opt}(\gamma)$.
Furthermore, although \emph{a priori}\/ we only knew
that $\text{\sc e}({\boldsymbol\alpha}) \geq \gamma$, maximality implies that in fact
$\text{\sc e}({\boldsymbol\alpha}) = \gamma$. Indeed, if not then one could shift more mass to
$\alpha_{[q]}$ to increase $\text{\sc obj}({\boldsymbol\alpha})$ while staying within the
feasible set. This would contradict that $\text{\sc obj}({\boldsymbol\alpha})=\text{\sc opt}(\gamma)$.
We finish by showing that eventually $G_i$ is $\epsilon n_i^2$-close
to $G_{\boldsymbol\alpha}(n_i)$, contradicting (iii). To do this, we show that all
three of the edit-distances between $G_i \leftrightarrow G_i'
\leftrightarrow G_{{\boldsymbol\alpha}_i}(n_i) \leftrightarrow G_{\boldsymbol\alpha}(n_i)$ are
$o(n_i^2)$. The closeness of the first pair follows by construction
since $\delta_i \rightarrow 0$, and the closeness of the last pair
follows from Proposition \ref{prop:construction-asymp-edges}
because ${\boldsymbol\alpha}_i \rightarrow {\boldsymbol\alpha}$.
For the central pair, recall that $G_i'$ is actually contained in
$G_{{\boldsymbol\alpha}_i}(n_i)$, so we only need to compare their numbers of
edges. In fact, since we already established $o(n_i^2)$-closeness of
the first and last pairs, it suffices to show that the difference
between the number of edges in $G_i$ and $G_{\boldsymbol\alpha}(n_i)$ is
$o(n_i^2)$. Recall from above that $\text{\sc e}({\boldsymbol\alpha}) = \gamma$, and
therefore by Proposition \ref{prop:construction-asymp-edges},
$G_{\boldsymbol\alpha}(n_i)$ has $\text{\sc e}({\boldsymbol\alpha})n_i^2 + o(n_i^2) = (\gamma + o(1))
n_i^2$ edges. Yet $G_i$ also has $(\gamma+o(1)) n_i^2$ edges, because
$m_i/n_i^2 \rightarrow \gamma$. This completes the proof. \hfill
$\Box$
\subsection{Proofs for the sparse case}
\label{sec:sparse-proofs}
In this section, we prove the statements which refine our results in
the case when the graph is sparse, i.e., $m = o(n^2)$. We begin with
the lemma which shows that every sparse graph with the maximum number
of colorings has a dense core which spans all of the edges.
\vspace{3mm}
\noindent \textbf{Proof of Lemma \ref{lem:sparse-has-core}.}\, Let
$n_1$ be the number of non-isolated vertices in $G$, and let $r$ be
the number of connected components in the subgraph induced by the
non-isolated vertices. Every such vertex keeps a neighbor in this
subgraph, so each component has at least two vertices, and hence $r \leq n_1/2$.
Any connected graph on $t$ vertices has at most $q (q-1)^{t-1}$ proper
$q$-colorings, because we may iteratively color the vertices along a
depth-first-search tree rooted at an arbitrary vertex; when we visit
any vertex other than the root, there will only be at most $q-1$
colors left to choose from. So, $G$ has at most $q^{n-n_1} \cdot q^r
\cdot (q-1)^{n_1-r}$ colorings, where the first factor comes from the
fact that isolated vertices have a free choice over all $q$ colors.
Using $r \leq n_1/2$, this bound is at most $q^{n-n_1/2}
(q-1)^{n_1/2}$.
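The $q(q-1)^{t-1}$ bound above holds with equality for trees, and is easy to check by brute force on small examples; the following minimal Python sketch (illustrative only) does so for a path on $t=6$ vertices.
\begin{verbatim}
import itertools

# Check: a connected graph on t vertices has at most q*(q-1)^(t-1) proper
# q-colorings, with equality for trees; here a path on t = 6 vertices, q = 3.
q, t = 3, 6
path_edges = [(i, i + 1) for i in range(t - 1)]
count = sum(all(c[u] != c[v] for u, v in path_edges)
            for c in itertools.product(range(q), repeat=t))
print(count, q*(q - 1)**(t - 1))  # prints 96 96
\end{verbatim}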
But since $G$ is optimal, it must have at least as many colorings as
the Tur\'an graph $T_q(n_2)$ plus $n-n_2$ isolated vertices, where
$n_2 = \Theta(\sqrt{m})$ is the minimum number of vertices in a
$q$-partite Tur\'an graph with at least $m$ edges. The isolated
vertices already give the latter graph at least $q^{n-n_2}$ colorings,
so we must have $q^{n-n_2} \leq q^{n-n_1/2} (q-1)^{n_1/2}$, which
implies that
\begin{equation}
\label{eq:n_1}
n_1 \leq n_2 \cdot (2 \log q) / \left(\log \frac{q}{q-1}\right) .
\end{equation}
The expression on the right hand side is $\Theta(n_2) =
\Theta(\sqrt{m})$, so if we define the integer $n_0$ to be the maximum
of the right-hand side of \eqref{eq:n_1} and $\sqrt{m/\kappa}$ (rounded up
to the next integer if necessary), then we indeed have $n_1 \leq n_0 =
\Theta(n_2) = \Theta(\sqrt{m})$. \hfill $\Box$
\vspace{3mm}
Next, we prove the first part of Proposition \ref{prop:asymp-sparse},
which claims that the maximum number of $q$-colorings of an $n$-vertex
graph with $m \leq \kappa_q n^2$ edges is asymptotically $q^n e^{(-c +
o(1)) \sqrt{m}}$, where $\kappa_q = \left( \sqrt{\frac{\log
\frac{q}{q-1}}{\log q}} + \sqrt{\frac{\log q}{\log \frac{q}{q-1}}}
\right)^{-2}$ and $c = 2\sqrt{\log \frac{q}{q-1} \log q}$.
\vspace{3mm}
\noindent \textbf{Proof of Proposition \ref{prop:asymp-sparse}(i).}\,
Let $G$ be an $n$-vertex graph with $m$ edges, which maximizes the
number of $q$-colorings. Let $n_0$ be the integer obtained by
applying Lemma \ref{lem:sparse-has-core} with threshold $\kappa_q$.
If $n \geq n_0$, the lemma gives a dense $n_0$-vertex subgraph $G'
\subset G$ which contains all of the edges. Otherwise, set $G' = G$.
In either case, we obtain a graph $G'$ whose number of vertices $n'$
is $\Theta(\sqrt{m})$, and $m/(n')^2 \leq \kappa_q$.
Since the vertices in $G \setminus G'$ (if any) are isolated, the
number of $q$-colorings of $G$ is precisely $q^{n-n'}$ times the
number of $q$-colorings of $G'$. Therefore, $G'$ must also have the
maximum number of $q$-colorings over all $n'$-vertex graphs with $m$
edges. Applying Theorem \ref{thm:asymp-number} to $G'$, we find that
$G'$ has $e^{(\text{\sc opt}(m/(n')^2) + o(1))n'}$ colorings. Proposition
\ref{prop:solve-opt-sparse} gives us the precise answer
$\text{\sc opt}(m/(n')^2) = \log q - 2\sqrt{\frac{m}{(n')^2} \cdot \log
\frac{q}{q-1} \cdot \log q}$, so substituting that in gives us that
the number of $q$-colorings of $G$ is:
\begin{displaymath}
q^{n-n'} \cdot e^{(\text{\sc opt}(m/(n')^2) + o(1))n'}
\ = \
q^{n-n'} \cdot q^{n'} e^{(-c+o(1))\sqrt{m}}
\ = \
q^n e^{(-c+o(1))\sqrt{m}},
\end{displaymath}
where $c$ is indeed the same constant as claimed in the statement of
this proposition. \hfill $\Box$
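
Explicitly, the exponent arithmetic behind the middle equality is
\begin{displaymath}
\left(\log q - 2\sqrt{\frac{m}{(n')^2} \cdot \log \frac{q}{q-1} \cdot \log q}\,\right) n'
\ = \ n' \log q - 2\sqrt{m \cdot \log \frac{q}{q-1} \cdot \log q}
\ = \ n' \log q - c \sqrt{m},
\end{displaymath}
so the particular value of $n' = \Theta(\sqrt{m})$ affects only the
$q^{n'}$ factor, which is absorbed into $q^n$.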
\vspace{3mm}
We finish this section by proving the stability result which shows
that any optimal sparse graph is $\epsilon m$-close (in edit-distance)
to the graph $G_{n, m}$ defined in Section \ref{sec:sparse-summary}.
\vspace{3mm}
\noindent \textbf{Proof of Proposition \ref{prop:asymp-sparse}(ii).}\,
Let $G$ be an $n$-vertex graph with $m$ edges, which maximizes the
number of $q$-colorings. We will actually show the equivalent
statement that $G$ is $O((\epsilon + \sqrt{\epsilon})m)$-close to
$G_{n, m}$.
As in the proof of part (i) above, we find a dense $n'$-vertex
subgraph $G' \subset G$ that spans all of the edges, which itself must
maximize the number of $q$-colorings. Using the same parameters as
above, we have $n' = \Theta(\sqrt{m})$ and $m \leq \kappa_q (n')^2$.
By Theorem \ref{thm:asymp-stability}, $G'$ must be $\epsilon
(n')^2$-close to a graph $G_{\boldsymbol\alpha}(n')$ from Construction 1, for some
${\boldsymbol\alpha}$ that solves $\text{\sc opt}(\gamma)$ with $\gamma \leq \kappa_q$.
Since $n' = \Theta(\sqrt{m})$, the graphs are $O(\epsilon m)$-close.
This $\gamma$ lies within the range in which Proposition
\ref{prop:solve-opt-sparse} solved Optimization Problem 1, so
$G_{\boldsymbol\alpha}(n')$ is a complete bipartite graph plus isolated vertices,
which indeed resembles $G_{n, m}$.
Moreover, the ratio between the sizes of the sides of the complete
bipartite graph in $G_{\boldsymbol\alpha}(n')$ is correct, because it tends to the
constant $\log \frac{q}{q-1} / \log q$ regardless of the value of
$\gamma$. Also, their product, which equals the number of edges in
$G_{\boldsymbol\alpha}(n')$, is within $O(\epsilon m)$ of $m$ because
$G_{\boldsymbol\alpha}(n')$ is $O(\epsilon m)$-close to the $m$-edge graph $G'$.
Therefore, each of the sides of the complete bipartite graph in
$G_{\boldsymbol\alpha}(n')$ differs in size from its corresponding side in $G_{n,
m}$ by at most $O(\sqrt{\epsilon m})$. Since each side of the
bipartite graph in $G_{n, m}$ has size $\Theta(\sqrt{m})$, we can
transform $G_{\boldsymbol\alpha}(n')$ into $G_{n, m}$ by adding isolated vertices
and editing at most $O(\sqrt{\epsilon} \cdot m)$ edges. Yet by
construction of ${\boldsymbol\alpha}$, the graphs $G'$ and $G_{\boldsymbol\alpha}(n')$ were
$O(\epsilon m)$-close, modulo isolated vertices. Therefore, $G$ and
$G_{n, m}$ are indeed $O((\epsilon + \sqrt{\epsilon})m)$-close, as
claimed. \hfill $\Box$
\section{Solving the optimization problem}
\label{sec:solve-opt}
In this section, we solve the optimization problem for low densities,
for all values of $q$. We also solve it for all densities in the case
when $q=3$.
\subsection{Sparse case}
\label{sec:solve-opt-sparse}
The key observation is that when the edge density is low, we can
reduce the optimization problem to one with no edge density parameter
and no vertex constraint. This turns out to be substantially easier
to solve.
\vspace{3mm}
\noindent \textbf{Optimization Problem 2.}\, Fix an integer $q$, and
consider the following objective and constraint functions:
\begin{displaymath}
\text{\sc obj}^*({\boldsymbol\alpha}) := \sum_A \alpha_A \log \frac{|A|}{q}\,;
\quad\quad\quad
\text{\sc e}({\boldsymbol\alpha}) := \sum_{A \cap B = \emptyset} \alpha_A \alpha_B.
\end{displaymath}
The vector ${\boldsymbol\alpha}$ has $2^q - 2$ coordinates $\alpha_A \in
\mathbb{R}$ indexed by the nonempty \textbf{proper} subsets $A \subset
[q]$, and the sum in $\text{\sc e}({\boldsymbol\alpha})$ runs over unordered pairs of
disjoint sets $\{A,B\}$. Let $\text{\sc Feas}^*$ be the feasible set of vectors
defined by the constraints ${\boldsymbol\alpha} \geq 0$ and $\text{\sc e}({\boldsymbol\alpha}) \geq 1$.
We seek to maximize $\text{\sc obj}^*({\boldsymbol\alpha})$ over the set $\text{\sc Feas}^*$, and we
define $\text{\sc opt}^*$ to be this maximum value, which we will show to exist
in Section \ref{sec:sparse:observ}. We write that the vector ${\boldsymbol\alpha}$
\emph{solves}\/ $\text{\sc opt}^*$ when both ${\boldsymbol\alpha} \in \text{\sc Feas}^*$ and
$\text{\sc obj}^*({\boldsymbol\alpha}) = \text{\sc opt}^*$.
\begin{proposition}
\label{prop:solve-opt-2}
For any given $q \geq 3$, the \textbf{unique} solution (up to a permutation
of the base set $[q]$) to Optimization Problem 2 is the vector
${\boldsymbol\alpha}^*$ with
\begin{displaymath}
\alpha_{\{1\}}^* = \sqrt{\log \frac{q}{q-1} \, / \, \log q},
\quad\quad\quad
\alpha_{\{2, \ldots, q\}}^* = \frac{1}{\alpha_{\{1\}}^*},
\quad\quad\quad
\text{and all other $\alpha_A^* = 0$.}
\end{displaymath}
This gives $\text{\sc obj}^*({\boldsymbol\alpha}^*) = -2\sqrt{\log \frac{q}{q-1} \log q}$.
\end{proposition}
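Before turning to the proofs, here is a quick numerical sanity check
(ours, and no substitute for the arguments below). For $q = 3$, the
vector ${\boldsymbol\alpha}$ has six coordinates, which we feed to
\emph{Mathematica}'s \texttt{NMaximize}; the names \texttt{a1},
\texttt{a12}, etc.\ are ad hoc stand-ins for $\alpha_{\{1\}}$,
$\alpha_{\{1,2\}}$, and so on.
\begin{verbatim}
vars = {a1, a2, a3, a12, a13, a23};
obj  = (a1 + a2 + a3) Log[1/3] + (a12 + a13 + a23) Log[2/3];
edge = a1 a2 + a1 a3 + a2 a3 + a1 a23 + a2 a13 + a3 a12;
NMaximize[{obj, edge >= 1 && And @@ Thread[vars >= 0]}, vars]
N[-2 Sqrt[Log[3/2] Log[3]]]   (* claimed optimum: -1.3348... *)
\end{verbatim}
The numerical search should return approximately $-1.3348$, attained
(up to a permutation of $\{1,2,3\}$) at a singleton coordinate
$\approx 0.6075$ and a complementary pair coordinate $\approx 1.6461$,
with all other coordinates zero, in agreement with ${\boldsymbol\alpha}^*$.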
Let us show how Proposition \ref{prop:solve-opt-2} implies Proposition
\ref{prop:solve-opt-sparse}, which gave the solution to Optimization
Problem 1 for sufficiently low edge densities $\gamma$.
\vspace{3mm}
\noindent \textbf{Proof of Proposition \ref{prop:solve-opt-sparse}.}\,
Let ${\boldsymbol\alpha}^*$ be the unique maximizer for Optimization Problem 2, and
consider any number $t \geq \text{\sc v}({\boldsymbol\alpha}^*)$. Then ${\boldsymbol\alpha}^*$ is still
the unique maximizer of $\text{\sc obj}^*({\boldsymbol\alpha})$ when ${\boldsymbol\alpha}$ is required to
satisfy the vacuous condition $\text{\sc v}({\boldsymbol\alpha}) \leq t$ as well. Let
$\overline{{\boldsymbol\alpha}}$ be the vector obtained by dividing every entry of
${\boldsymbol\alpha}^*$ by $t$, and adding a new entry $\overline{\alpha}_{[q]}$ so
that $\text{\sc v}(\overline{{\boldsymbol\alpha}}) = 1$.
Then, $\overline{{\boldsymbol\alpha}}$ is the unique maximizer of $\text{\sc obj}^*({\boldsymbol\alpha})$
when ${\boldsymbol\alpha}$ is constrained by $\text{\sc v}({\boldsymbol\alpha}) = 1$ and $\text{\sc e}({\boldsymbol\alpha})
\geq t^{-2}$. But when $\text{\sc v}({\boldsymbol\alpha}) = 1$ is one of the constraints,
then $\text{\sc obj}^*({\boldsymbol\alpha}) = \text{\sc obj}({\boldsymbol\alpha}) - \log q$, so this implies that
$\overline{{\boldsymbol\alpha}}$ is the unique solution to $\text{\sc opt}(t^{-2})$. Using
the substitution $\gamma = t^{-2}$, we see that $\overline{{\boldsymbol\alpha}}$ is
precisely the vector described in \eqref{eq:sparse-opt-soln}. Since
$t \geq \text{\sc v}({\boldsymbol\alpha}^*)$ was arbitrary, we conclude that this holds for
all $\gamma$ below $\text{\sc v}({\boldsymbol\alpha}^*)^{-2} = \left( \sqrt{\frac{\log
q/(q-1)}{\log q}} + \sqrt{\frac{\log q}{\log q/(q-1)}}
\right)^{-2} = \kappa_q$. \hfill $\Box$
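
For instance, at $q = 3$ this threshold is $\kappa_3 = \left(
\sqrt{\frac{\log 3/2}{\log 3}} + \sqrt{\frac{\log 3}{\log 3/2}}
\right)^{-2} \approx 0.1969$, which is precisely the constant $c$
appearing in Proposition \ref{prop:solve-opt} below.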
\subsubsection{Observations for Optimization Problem 2}
\label{sec:sparse:observ}
We begin by showing that $\text{\sc obj}^*$ attains its maximum on the feasible
set $\text{\sc Feas}^*$. Since $\text{\sc Feas}^*$ is clearly nonempty, and $\text{\sc obj}^*$ is
strictly negative everywhere on it, there is some finite $c < 0$ for
which $\text{\sc opt}^* \geq c$. In the formula for $\text{\sc obj}^*$, all
coefficients $\log \frac{|A|}{q}$ of the $\alpha_A$ are negative, so
any vector with some $\alpha_A > c/\log \frac{|A|}{q}$ already has
$\text{\sc obj}^*({\boldsymbol\alpha}) < c$; hence we only need to consider the compact
region bounded by $0 \leq \alpha_A \leq c/\log \frac{|A|}{q}$ for each $A$.
Therefore, by compactness, $\text{\sc obj}^*$ indeed attains its maximum on
$\text{\sc Feas}^*$.
Now that we know the maximum is attained, we can use perturbation
arguments to determine its location. The following definition will be
convenient for our analysis.
\begin{definition}
Let the \textbf{support} of a vector ${\boldsymbol\alpha}$ be the collection of
$A$ for which $\alpha_A \neq 0$.
\end{definition}
The following lemma will allow us to reduce to the case of considering
optimal vectors whose supports are a partition of $[q]$.
\begin{lemma}
\label{lem:sparse:support=partition}
One of the vectors ${\boldsymbol\alpha}$ which solves $\text{\sc opt}^*$ has support that is
a partition\footnote{A collection of disjoint sets whose union
is $[q]$.} of $[q]$. Furthermore, if the only partitions that
support optimal vectors consist of a singleton plus a $(q-1)$-set,
then in fact every vector which solves $\text{\sc opt}^*$ is supported by such
a partition.
\end{lemma}
\noindent \textbf{Proof.}\, We begin with the first statement. Let
${\boldsymbol\alpha}$ be a vector which solves $\text{\sc opt}^*$, and suppose that its
support contains two intersecting sets $A$ and $B$. We will perturb
$\alpha_A$ and $\alpha_B$ while keeping all other $\alpha$'s fixed.
Since $A$ and $B$ intersect, the polynomial $\text{\sc e}({\boldsymbol\alpha})$ has no
products $\alpha_A \alpha_B$, i.e., it is of the form $x \alpha_A + y
\alpha_B + z$, for some constants $x, y, z \geq 0$.
Furthermore, $x \neq 0$, or else we could reduce $\alpha_A$ to zero
without affecting $\text{\sc e}({\boldsymbol\alpha})$, but this would strictly increase
$\text{\sc obj}^*({\boldsymbol\alpha})$ because all coefficients $\log \frac{|A|}{q}$ in
$\text{\sc obj}^*$ are negative. Similarly, $y \neq 0$. Therefore, we may
perturb $\alpha_A$ by $+ty$ and $\alpha_B$ by $-tx$, while keeping
$\text{\sc e}({\boldsymbol\alpha})$ fixed. Since we may use both positive and negative $t$
and $\text{\sc obj}^*$ itself is linear in $\alpha_A$ and $\alpha_B$, optimality
implies that $\text{\sc obj}^*$ does not depend on $t$. Hence we may choose a
$t$ which drives one of $\alpha_A$ or $\alpha_B$ to zero (we are free to pick which one), and $\text{\sc obj}^*$
will remain unchanged.
Repeating this process, we eventually obtain a vector ${\boldsymbol\alpha}$ which
is supported by disjoint sets. Their union must be the entire $[q]$,
because otherwise we could simply grow one of the sets in the support
by adding the unused elements of $[q]$. This would not affect
$\text{\sc e}({\boldsymbol\alpha})$, but it would strictly increase $\text{\sc obj}^*$.
It remains to prove the second part of our lemma. Let ${\boldsymbol\alpha}$ be an
optimal vector, and apply the above reduction process to simplify its
support. At the end, we will have a vector supported by two sets $A$
and $B$ with $|A| = 1$ and $|B| = q-1$, by assumption. Each iteration
of the reduction removes exactly one set from the support, so the
second to last stage will have some ${\boldsymbol\alpha}'$ supported by three
distinct sets, two of which are the final $A$ and $B$, and a third,
which we call $C$.
In the reduction, when we consider two overlapping sets, we are free
to select which one is removed. Therefore, we could choose to keep
the third set $C$ and remove one of $A$ and $B$, and then continue
reducing until the support is disjoint, while keeping $\text{\sc obj}^*$
unchanged. Yet no matter what $C$ was, this alternative route cannot
terminate in a partition of $[q]$: its terminal support is contained
in $\{C\}$ together with at most one of $A$ and $B$, and since $C
\not\in \{A, B\}$ and $C \neq [q]$, no such collection can be a
partition of $[q]$ into a singleton and a $(q-1)$-set. This
contradicts the observation above that any reduction of an optimal
vector must terminate in a partition, which by assumption must be of
exactly that form. \hfill $\Box$
\begin{definition}
\label{def:sparse:IJ}
Let ${\boldsymbol\alpha}$ be a fixed vector whose support is a partition of
$[q]$. For each $A \subset [q]$, define the expressions:
\begin{displaymath}
I_A \ = \ \alpha_A \sum_{B \neq A} \alpha_B
\quad\quad\quad
J_A \ = \ \frac{1}{\text{\sc obj}^*({\boldsymbol\alpha})} \cdot \alpha_A \log \frac{|A|}{q}.
\end{displaymath}
\end{definition}
\begin{lemma}
\label{lem:sparse:IJ}
Let ${\boldsymbol\alpha}$ be a vector which solves $\text{\sc opt}^*$, whose support is a
partition of $[q]$. Then:
\begin{description}
\item[(i)] For every $A \subset [q]$, we have $I_A = 2J_A$. In
particular, for each $A$ in the support, $I_A / \alpha_A = 2J_A /
\alpha_A$.
\item[(ii)] Suppose $A$ and $B$ are both in the support, and $|A| =
|B|$. Then $\alpha_A = \alpha_B$ as well.
\end{description}
\end{lemma}
\noindent \textbf{Proof.}\, We begin with part (i). Fix any $A
\subset [q]$. Consider the following operation for small $\epsilon >
0$. First, replace $\alpha_A$ by $(1+\epsilon)\alpha_A$.
Observe that $I_A = \alpha_A \sum_{B: B \cap A
= \emptyset} \alpha_B$ because the support of ${\boldsymbol\alpha}$ is a
partition of $[q]$. Therefore we increase $\text{\sc e}({\boldsymbol\alpha}) = \sum_{A \cap B=\emptyset} \alpha_A \alpha_B$ by
$\epsilon I_A$. Next,
multiply all $\alpha$'s (including the one we just increased) by $(1 +
\epsilon I_A)^{-1/2}$. Then $\text{\sc e}({\boldsymbol\alpha})$ is still at least 1 and
our perturbed vector is in $\text{\sc Feas}^*$. Its new objective equals $\text{\sc obj}^*({\boldsymbol\alpha}) \cdot \frac{1 + \epsilon
J_A}{\sqrt{1 + \epsilon I_A}}$. Since ${\boldsymbol\alpha}$ maximized the
objective (which is always negative), we must have $\frac{1 + \epsilon
J_A}{\sqrt{1 + \epsilon I_A}} \geq 1$. Rearranging, this implies
that $I_A \leq 2J_A + \epsilon J_A^2$. Sending $\epsilon \rightarrow
0$, we see that $I_A \leq 2J_A$. The opposite inequality follows from
considering the replacement of $\alpha_A$ by $(1-\epsilon)\alpha_A$,
and then multiplying $\alpha$'s by $(1 - \epsilon I_A)^{-1/2}$. This
establishes part (i).
For part (ii), let $S = \sum_C \alpha_C$. Since the support of
${\boldsymbol\alpha}$ is a partition of $[q]$, $S - \alpha_A = I_A/\alpha_A$. By
part (i), this equals $2J_A/\alpha_A = \log \frac{|A|}{q} /
\text{\sc obj}^*({\boldsymbol\alpha})$, which is determined by the cardinality of $A$.
Therefore, $S - \alpha_A = S - \alpha_B$, which implies (ii). \hfill
$\Box$
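
As a consistency check (not needed in the sequel), one can verify part
(i) directly on the vector ${\boldsymbol\alpha}^*$ of Proposition
\ref{prop:solve-opt-2}. Writing $s = \alpha_{\{1\}}^* =
\sqrt{\log \frac{q}{q-1} / \log q}$, so that $\text{\sc obj}^*({\boldsymbol\alpha}^*) =
-2\sqrt{\log \frac{q}{q-1} \log q}$ and $s \log q = \sqrt{\log
\frac{q}{q-1} \log q}$, we find
\begin{displaymath}
I_{\{1\}} \ = \ s \cdot \frac{1}{s} \ = \ 1,
\quad\quad\quad
J_{\{1\}} \ = \ \frac{s \log \frac{1}{q}}{\text{\sc obj}^*({\boldsymbol\alpha}^*)}
\ = \ \frac{-\sqrt{\log \frac{q}{q-1} \log q}}{-2\sqrt{\log \frac{q}{q-1} \log q}}
\ = \ \frac{1}{2},
\end{displaymath}
so indeed $I_{\{1\}} = 2 J_{\{1\}}$. The computation for $\{2, \ldots,
q\}$ is symmetric and also gives $J = \frac{1}{2}$, consistent with
the identity $\sum_B J_B = 1$ used in Section
\ref{sec:solve-opt-sparse-q>=9}.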
\subsubsection{Solution to Optimization Problem 2 for $\boldsymbol{q < 9}$}
In its original form, Optimization Problem 2 involves exponentially
many variables, but Lemma \ref{lem:sparse:support=partition}
dramatically reduces their number by allowing us to consider only
supports that are partitions of $[q]$. Therefore, we need to make one
computation per partition of $[q]$, which can actually be done
\emph{symbolically}\/ (hence exactly) by \emph{Mathematica}. The
running time of \emph{Mathematica}'s symbolic maximization is
double-exponential in the number of variables, so it was particularly
helpful to reduce the number of variables. The entire computation for
$q \in \{3, \ldots, 8\}$ took less than an hour, and the complete
\emph{Mathematica}\/ program and output appear in Appendix
\ref{sec:mathematica}.
Let us illustrate this process by showing what needs to be done for
the partition $7 = 2+2+3$. This corresponds to maximizing $\alpha_A
\log \frac{2}{7} + \alpha_B \log \frac{2}{7} + \alpha_C \log
\frac{3}{7}$ subject to the constraints $\alpha_A \alpha_B + \alpha_B
\alpha_C + \alpha_C \alpha_A \geq 1$ and ${\boldsymbol\alpha} \geq 0$. By Lemma
\ref{lem:sparse:IJ}(ii), we may assume $\alpha_A = \alpha_B$, so it
suffices to maximize $2 x \log \frac{2}{7} + y \log \frac{3}{7}$
subject to $x^2 + 2xy \geq 1$ and $x, y \geq 0$. This is achieved by
\emph{Mathematica}'s \texttt{Maximize} function:
\begin{verbatim}
Maximize[{2 x Log[2/7] + y Log[3/7], x^2 + 2 x y >= 1 && x >= 0 && y >= 0}, {x, y}]
\end{verbatim}
\emph{Mathematica}\/ answers that the maximum value is
$-\sqrt{-\big(\log\frac{7}{3}\big)^2 + 4 \log \frac{7}{3} \log
\frac{7}{2}} \approx -1.9$, which is indeed less than the claimed
value $-2 \sqrt{\log \frac{7}{7-1} \log 7} \approx -1.1$.
We performed one such computation per partition of each $q \in \{3,
\ldots, 8\}$. In every case except for the partition $q = 1 + (q-1)$,
the maximum indeed fell short of the claimed value. That final
partition is completely solved analytically (i.e., including the
uniqueness result) by Lemma \ref{lem:sparse:2sets=>done} in the next
section. This completes the analysis for all $q < 9$.
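
For readers who prefer a quick numerical reproduction over the exact
symbolic computation of Appendix \ref{sec:mathematica}, the following
sketch (ours; \texttt{NMaximize} returns floating-point
approximations, so it corroborates rather than certifies the result)
runs one optimization per partition, with one variable per part as
justified by Lemma \ref{lem:sparse:support=partition}. The
single-part partition $\{q\}$ is excluded, since it makes
$\text{\sc e}({\boldsymbol\alpha}) = 0$ and is therefore infeasible.
\begin{verbatim}
q = 7;
check[part_] := Module[{vars = Array[x, Length[part]], obj, cons},
  obj  = Total[MapThread[#1 Log[#2/q] &, {vars, part}]];
  cons = Total[Times @@@ Subsets[vars, {2}]] >= 1 &&
         And @@ Thread[vars >= 0];
  First[NMaximize[{obj, cons}, vars]]];
parts = Select[IntegerPartitions[q], Length[#] >= 2 &];
Max[check /@ parts]                (* best value over all partitions *)
N[-2 Sqrt[Log[q/(q - 1)] Log[q]]]  (* claimed optimum, from 1 + (q-1) *)
\end{verbatim}
The two printed values should agree (approximately $-1.0954$ for $q =
7$), with the maximum attained by the partition $7 = 1 + 6$.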
\subsubsection{Solution to Optimization Problem 2 for $\boldsymbol{q \geq 9}$}
\label{sec:solve-opt-sparse-q>=9}
We begin by ruling out several extreme partitions that our general
argument below will not handle. As one may expect, each of these
special cases has a fairly pedestrian proof, so we postpone the
proofs of the following two lemmas to the appendix.
\begin{lemma}
\label{lem:sparse:2sets=>done}
Fix any integer $q \geq 3$, and let ${\boldsymbol\alpha}$ be a vector which
solves $\text{\sc opt}^*$. If the support of ${\boldsymbol\alpha}$ is a partition of $[q]$
into exactly two sets, then (up to permutation of the ground set
$[q]$) ${\boldsymbol\alpha}$ must be equal to the claimed unique optimal vector
${\boldsymbol\alpha}^*$ in Proposition \ref{prop:solve-opt-2}.
\end{lemma}
\begin{lemma}
\label{lem:sparse:extreme}
Fix any integer $q \geq 4$, and let ${\boldsymbol\alpha}$ be a vector which
solves $\text{\sc opt}^*$, whose support is a partition of $[q]$. Then that
partition cannot have any of the following forms:
\begin{description}
\item[(i)] all singletons;
\item[(ii)] all singletons, except for one 2-set;
\item[(iii)] have a $(q-2)$-set as one of the parts.
\end{description}
\end{lemma}
The heart of the solution to the optimization problem is the following
general case, which we will prove momentarily.
\begin{lemma}
\label{lem:sparse:<q-2=>done}
Fix any integer $q \geq 9$, and let ${\boldsymbol\alpha}$ be a vector which
solves $\text{\sc opt}^*$, whose support is a partition of $[q]$. Then that
partition must have a set of size at least $q-2$.
\end{lemma}
These collected results show that $\text{\sc opt}^*$ has the unique solution that
we claimed at the beginning of this section.
\vspace{3mm}
\noindent \textbf{Proof of Proposition \ref{prop:solve-opt-2} for
$\boldsymbol{q \geq 9}$.}\, Let ${\boldsymbol\alpha}$ be a vector which solves
$\text{\sc opt}^*$. By Lemma \ref{lem:sparse:support=partition}, we may assume
that its support is a partition of $[q]$. It cannot be a single set
(of cardinality $q$), because then $\text{\sc e}({\boldsymbol\alpha}) = 0$, and by Lemmas
\ref{lem:sparse:extreme}(iii) and \ref{lem:sparse:<q-2=>done}, the
support cannot contain a set of size $\leq q-2$.
Thus, the support must contain a set of size $q-1$, and since it is a
partition, the only other set is a singleton. Then Lemma
\ref{lem:sparse:2sets=>done} gives us that ${\boldsymbol\alpha}$ equals the claimed
unique optimal vector ${\boldsymbol\alpha}^*$, up to a permutation of the ground
set $[q]$. This completes the proof. \hfill $\Box$
\vspace{3mm}
In the remainder of this section, we prove the general case (Lemma
\ref{lem:sparse:<q-2=>done}). The following definition and fact are
convenient, but the proof is a routine calculus exercise, so we
postpone it to the appendix.
\begin{lemma}
\label{lem:sparse:Fq}
Define the function $F_q(x) = \log \frac{q}{q-x} \cdot \log
\frac{q}{x}$.
\begin{description}
\item[(i)] For $q > 0$, $F_q(x)$ strictly increases on $0 < x <
q/2$ and strictly decreases on $q/2 < x < q$.
\item[(ii)] For $q \geq 9$, we have the inequality $F_q(3) > 2F_q(1)
\cdot \frac{q-3}{q-2}$.
\end{description}
\end{lemma}
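Numerically (our check, rounded to four decimals), part (ii) holds
with little room to spare at $q = 9$: $F_9(3) = \log \frac{3}{2} \log
3 \approx 0.4454$, while $2F_9(1) \cdot \frac{6}{7} \approx 0.4437$.
At $q = 8$ the inequality actually fails ($F_8(3) \approx 0.4610$
versus $2F_8(1) \cdot \frac{5}{6} \approx 0.4628$), which is why the
cases $q < 9$ were handled separately in the previous section.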
\vspace{3mm}
\noindent \textbf{Proof of Lemma \ref{lem:sparse:<q-2=>done}.}\,
Assume for the sake of contradiction that all sets in the support of
the optimal ${\boldsymbol\alpha}$ have size at most $q-3$. In terms of the
expressions $I$ and $J$ from Definition \ref{def:sparse:IJ}, we have
the following equality, where the sums should be interpreted as only
over sets in the support of ${\boldsymbol\alpha}$:
\begin{displaymath}
\frac{2 \log \frac{|A|}{q} }{\text{\sc obj}^*({\boldsymbol\alpha})}
\ \ = \ \ \frac{2 J_A}{\alpha_A}
\ \ = \ \ \frac{I_A}{\alpha_A}
\ \ = \ \ \sum_{B \neq A} \alpha_B
\ \ = \ \ \sum_{B \neq A} \frac{J_B \cdot \text{\sc obj}^*({\boldsymbol\alpha})}{\log \frac{|B|}{q}}.
\end{displaymath}
(The second equality is Lemma \ref{lem:sparse:IJ}(i), and the other
three equalities come from the definitions of $I$ and $J$.) Note
that the above logarithms are always negative. It is cleaner to work
with positive quantities, so we rewrite the above equality in the
equivalent form:
\begin{displaymath}
\frac{2 \log \frac{q}{|A|} }{\text{\sc obj}^*({\boldsymbol\alpha})} = \sum_{B \neq A} \frac{J_B \cdot \text{\sc obj}^*({\boldsymbol\alpha})}{\log \frac{q}{|B|}}.
\end{displaymath}
Since every $B$ in the above sum is disjoint from $A$ and we assumed
all sets in the support have size at most $q-3$, we have that every
$B$ above has size $|B|\leq q - \max\{|A|, 3\}$. Both sides of the
last equality are negative, so replacing each denominator $\log
\frac{q}{|B|}$ by the smaller quantity $\log \frac{q}{q-\max\{|A|,
3\}}$ decreases the right hand side, and then multiplying through by
the negative quantity $\log \frac{q}{q-\max\{|A|, 3\}} /
\text{\sc obj}^*({\boldsymbol\alpha})$ reverses the inequality:
\begin{eqnarray*}
\frac{2 \log \frac{q}{|A|} }{\text{\sc obj}^*({\boldsymbol\alpha})}
&\geq& \sum_{B \neq A} \frac{J_B \cdot \text{\sc obj}^*({\boldsymbol\alpha})}{\log \frac{q}{q-\max\{|A|, 3\}}} \\
\frac{ 2 \cdot \log \frac{q}{|A|} \cdot \log \frac{q}{q-\max\{|A|, 3\}} }{\text{\sc obj}^*({\boldsymbol\alpha})^2} &\leq& \sum_{B \neq A} J_B.
\end{eqnarray*}
Since $|A| \leq \max\{|A|, 3\}$, the left hand side is at least
$2F_q(\max\{|A|, 3\}) / \text{\sc obj}^*({\boldsymbol\alpha})^2$. Also, $F_q(x)$ is symmetric
about $x=q/2$ and we assumed that $3 \leq q/2$ and $|A| \leq q-3$, so
Lemma \ref{lem:sparse:Fq}(i) implies that this is in turn $\geq
2F_q(3) / \text{\sc obj}^*({\boldsymbol\alpha})^2$. Lemma \ref{lem:sparse:Fq}(ii) bounds this
in terms of $F_q(1)$, which ultimately gives us the following
bound for $\sum_{B \neq A} J_B$:
\begin{equation}
\label{ineq:sparse:Jsum}
\frac{q-3}{q-2}
\ \ \leq \ \ \frac{ \text{\sc obj}^*({\boldsymbol\alpha}^*)^2 }{ \text{\sc obj}^*({\boldsymbol\alpha})^2 } \cdot \frac{q-3}{q-2}
\ \ = \ \ \frac{ 4 F_q(1) }{ \text{\sc obj}^*({\boldsymbol\alpha})^2 } \cdot \frac{q-3}{q-2}
\ \ < \ \ \frac{2F_q(3)}{\text{\sc obj}^*({\boldsymbol\alpha})^2}
\ \ \leq \ \ \sum_{B \neq A} J_B.
\end{equation}
Here, ${\boldsymbol\alpha}^*$ is the claimed optimal vector in Proposition
\ref{prop:solve-opt-2}, and we recognize $4F_q(1) =
\text{\sc obj}^*({\boldsymbol\alpha}^*)^2$. The first inequality follows from the maximality
of ${\boldsymbol\alpha}$, and its direction is reversed because $\text{\sc obj}^*$ is always
negative.
Let $t$ be the number of sets in the support of ${\boldsymbol\alpha}$. Summing
\eqref{ineq:sparse:Jsum} over all sets $A$ in the support:
\begin{displaymath}
t \cdot \frac{q-3}{q-2} \ \ < \ \ \sum_A \sum_{B \neq A} J_B \ \ = \ \ \sum_B J_B (t-1).
\end{displaymath}
Yet $\sum_B J_B = 1$ by definition, so this implies $\frac{t}{t-1} <
\frac{q-2}{q-3}$, which forces $t > q-2$. Then, the support must be
all singletons, except possibly for a single 2-set. This contradicts
Lemma \ref{lem:sparse:extreme}, and completes our proof. \hfill
$\Box$
\subsection{Solving the optimization problem for 3 colors}
In this section, we provide the complete analytic solution to
Optimization Problem 1, for the entire range of the edge density
parameter $\gamma$ when the number of colors $q$ is exactly 3. To
simplify notation, we will write $\alpha_{12}$ instead of
$\alpha_{\{1, 2\}}$, etc.
\begin{proposition}
\label{prop:solve-opt}
Define the constant $c = \left( \sqrt{\frac{\log 3/2}{\log 3}} +
\sqrt{\frac{\log 3}{\log 3/2}} \right)^{-2} \approx 0.1969$.
Then, the \textbf{unique} solution (up to a permutation of the index
set $\{1,2,3\}$) of Optimization Problem 1 with edge density
parameter $\gamma$ is the vector ${\boldsymbol\alpha}$ defined as follows.
(All unspecified $\alpha_A$ below are zero.)
\begin{description}
\item[(i)] If $0 \leq \gamma \leq c$, then $\alpha_3 = \sqrt{\gamma
\cdot \frac{\log 3/2}{\log 3}}$, $\alpha_{12} =
\frac{\gamma}{\alpha_3}$, and $\alpha_{123} = 1-\alpha_{12} -
\alpha_3$. This gives $\text{\sc opt}(\gamma) = \log 3 - 2\sqrt{\gamma
\cdot \log 3 \cdot \log \frac{3}{2}}$.
\item[(ii)] If $c \leq \gamma \leq \frac{1}{4}$, then $\alpha_{12} =
\frac{1+\sqrt{1-4\gamma}}{2}$ and $\alpha_3 = 1-\alpha_{12}$,
which gives $\text{\sc opt}(\gamma) = \frac{1+\sqrt{1-4\gamma}}{2} \cdot
\log 2$.
\item[(iii)] If $\frac{1}{4} \leq \gamma \leq \frac{1}{3}$, then
$\alpha_{12} = \frac{1 - \sqrt{12\gamma - 3}}{2}$, $\alpha_1 =
\alpha_2 = \frac{1-2\alpha_{12}}{3}$, and $\alpha_3 =
\frac{1+\alpha_{12}}{3}$, which gives $\text{\sc opt}(\gamma) = \frac{1 -
\sqrt{12\gamma - 3}}{2} \cdot \log 2$.
\end{description}
\end{proposition}
This covers the entire range of admissible $\gamma$, because $\gamma =
1/3$ corresponds to the density of the Tur\'an graph $T_3(n)$, which
is the densest 3-colorable graph.
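
Note also that the three regimes match up continuously at the
boundaries: at $\gamma = c$ the vector in (i) has $\alpha_{123} = 0$
and coincides with the vector in (ii), so the two formulas for
$\text{\sc opt}$ agree there, while at $\gamma = \frac{1}{4}$ both (ii) and
(iii) give $\alpha_{12} = \alpha_3 = \frac{1}{2}$ and
$\text{\sc opt}(\frac{1}{4}) = \frac{\log 2}{2}$.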
\subsubsection{Outline of solution}
The strategy of the solution is as follows. Suppose we have some
${\boldsymbol\alpha}$ that solves $\text{\sc opt}(\gamma)$. Since we may permute the index
set, we may assume without loss of generality that $\alpha_1 \leq
\alpha_2 \leq \alpha_3$. We then use perturbation arguments to
pinpoint the location of ${\boldsymbol\alpha}$. Although the problem initially
looks cumbersome (there are 7 nontrivially-related variables), the
solution cleanly follows from 6 short steps.
\begin{description}
\item[Step 1.] By \emph{shifting mass}\footnote{Adjusting the values
of the $\alpha_A$ while conserving their sum $\sum_A \alpha_A =
\text{\sc v}({\boldsymbol\alpha})$.} between the $\alpha_A$ with $|A| = 2$, we deduce
that $\alpha_{23}$ and $\alpha_{13}$ are both zero.
\item[Step 2.] By smoothing together $\alpha_1$ and $\alpha_2$, we
deduce that $\alpha_1 = \alpha_2$.
\item[Step 3.] By shifting mass between the variables $\alpha_A$ with
$|A| = 1$, we reduce to one of the following two situations. Either
$\alpha_1 = \alpha_2 = 0$, or $0 < \alpha_1 = \alpha_2 = \alpha_3 -
\alpha_{12}$.
\item[Step 4.] We solve the first case resulting from Step 3, which is
vastly simpler than the original problem. We find that the solution
corresponds to outcomes (i) and (ii) of Proposition
\ref{prop:solve-opt}.
\item[Step 5.] It remains to consider the second case resulting from
Step 3. By taking mass away from both $\alpha_{123}$ and
$\alpha_1$, and giving it to $\alpha_{12}$, we conclude that
$\alpha_{123} = 0$.
\item[Step 6.] We are left with the situation where the only nonzero
variables are $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_{12}$,
and they are related by the equation $\alpha_1 = \alpha_2 = \alpha_3
- \alpha_{12}$. Again, this is vastly simpler than the original
problem, and we find that its solution corresponds to outcome (iii)
of Proposition \ref{prop:solve-opt}.
\end{description}
\subsubsection{Details of solution}
We begin by recording a simple result that we will use repeatedly in
the solution.
\begin{lemma}
\label{lem:shift-equal-index}
Let ${\boldsymbol\alpha}$ be a vector that solves $\text{\sc opt}(\gamma)$. Then
$\text{\sc e}({\boldsymbol\alpha}) = \gamma$. Furthermore, if ${\boldsymbol\alpha}'$ is obtained from
${\boldsymbol\alpha}$ by shifting mass from some $\alpha_A$ to another $\alpha_B$
with $|A| = |B|$, then $\text{\sc e}({\boldsymbol\alpha}') \leq \text{\sc e}({\boldsymbol\alpha})$.
\end{lemma}
\noindent \textbf{Proof.}\, Suppose for contradiction that
$\text{\sc e}({\boldsymbol\alpha}) > \gamma$. The slack in the edge constraint lets us
shift some more mass to $\alpha_{123}$ while keeping $\text{\sc e}({\boldsymbol\alpha}) \geq
\gamma$. But in the definition of $\text{\sc obj}$, the coefficient ($\log 3$)
of $\alpha_{123}$ is the largest, so this shift strictly increases
$\text{\sc obj}$, contradicting maximality of ${\boldsymbol\alpha}$.
For the second claim, observe that $\text{\sc obj}$ is invariant under the shift
since $|A| = |B|$. Now suppose for contradiction that $\text{\sc e}({\boldsymbol\alpha}') >
\text{\sc e}({\boldsymbol\alpha})$. Then, as above, we could shift more mass to
$\alpha_{123}$, which would strictly increase $\text{\sc obj}$, again
contradicting the maximality of ${\boldsymbol\alpha}$. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Step 1.}\, Consider shifting mass among
$\{\alpha_{12}, \alpha_{23}, \alpha_{13}\}$. If we hold all other
$\alpha_A$ constant, then $\text{\sc e}({\boldsymbol\alpha}) = \alpha_1 \alpha_{23} +
\alpha_2 \alpha_{13} + \alpha_3 \alpha_{12} + \text{constant}$, which
is linear in the three variables of interest.
Let us postpone the uniqueness claim for a moment. Since we ordered
$\alpha_1 \leq \alpha_2 \leq \alpha_3$, shifting all of the mass from
$\{\alpha_{13}, \alpha_{23}\}$ to $\alpha_{12}$ will either strictly
grow $\text{\sc e}({\boldsymbol\alpha})$ if $\alpha_2 < \alpha_3$, or keep $\text{\sc e}({\boldsymbol\alpha})$
unchanged. Also, $\text{\sc obj}({\boldsymbol\alpha})$ will be invariant. Therefore, if we
are only looking for an upper bound for $\text{\sc opt}(\gamma)$, we may perform
this shift, and reduce to the case when $\alpha_{13} = 0 =
\alpha_{23}$ without loss of generality.
We return to the topic of uniqueness. The next five steps of this
solution will deduce that, conditioned on $\alpha_{13} = 0 =
\alpha_{23}$, the unique optimal ${\boldsymbol\alpha}$ always has either $\alpha_2
< \alpha_3$ or $\alpha_{12} = \alpha_{13} = \alpha_{23} = 0$. We
claim that this implies that our initial shift of mass to
$\alpha_{12}$ \emph{never happened}. Indeed, in the case with
$\alpha_2 < \alpha_3$, the previous paragraph shows that an initial
shift would have strictly increased $\text{\sc e}({\boldsymbol\alpha})$, violating Lemma
\ref{lem:shift-equal-index}. And in the case with $\alpha_{12} =
\alpha_{13} = \alpha_{23} = 0$, there was not even any mass at all to
shift. Therefore, this will imply the full uniqueness result.
\vspace{3mm}
\noindent \textbf{Step 2.}\, Consider shifting mass between $\alpha_1$
and $\alpha_2$ until they become equal. If we hold all other
$\alpha_A$ constant, then $\text{\sc e}({\boldsymbol\alpha}) = \alpha_1 \alpha_2 + (\alpha_1
+ \alpha_2) \alpha_3 + \text{constant}$. This ``smoothing'' operation
strictly increases the first term whenever $\alpha_1 \neq \alpha_2$,
while keeping the other terms invariant. But Lemma
\ref{lem:shift-equal-index} prohibits $\text{\sc e}({\boldsymbol\alpha})$ from increasing,
so we conclude that we must have had $\alpha_1 = \alpha_2$.
\vspace{3mm}
\noindent \textbf{Step 3.}\, Consider shifting mass among $\{\alpha_1,
\alpha_2, \alpha_3\}$. That is, fix $S = \alpha_1 + \alpha_2 +
\alpha_3$, and vary $t = \alpha_3$ in the range $0 \leq t \leq S$. By
Step 2, $\alpha_1 = \alpha_2 = \frac{S-t}{2}$. Step 1 gave $\alpha_{13} =
\alpha_{23} = 0$, so we have:
\begin{eqnarray*}
\text{\sc e}({\boldsymbol\alpha})
\ \ = \ \ \alpha_1 \alpha_2 + \alpha_1 \alpha_3 + \alpha_2 \alpha_3 + \alpha_{12} \alpha_3
&=& \frac{(S-t)^2}{4} + 2 \cdot \frac{S-t}{2} \cdot t + \alpha_{12} t \\
&=& -\frac{3}{4} t^2 + \left(\frac{S}{2} + \alpha_{12}\right) t + \frac{S^2}{4}.
\end{eqnarray*}
By Lemma \ref{lem:shift-equal-index}, $\alpha_3 = t$ must maximize
this downward-opening parabola in the range $0 \leq t \leq S$. Recall
that quadratics $f(x) = ax^2 + bx + c$ reach their extreme value at $x
= -\frac{b}{2a}$, which corresponds to $t = -\big(\frac{S}{2} +
\alpha_{12}\big)/\big(2 \cdot \big(-\frac{3}{4}\big)\big) = \frac{S +
2\alpha_{12}}{3}$ above. Thus, if $\frac{S + 2\alpha_{12}}{3} < S$,
then we must have $\alpha_3 = \frac{S + 2\alpha_{12}}{3} =
\frac{\alpha_1 + \alpha_2 + \alpha_3 + 2\alpha_{12}}{3}$. Step 2 gave
us $\alpha_1 = \alpha_2$, which forces $0 < \alpha_1 = \alpha_2 =
\alpha_3 - \alpha_{12}$. This is the second claimed outcome of this
step.
On the other hand, if $\frac{S + 2\alpha_{12}}{3} \geq S$, then the
quadratic is strictly increasing on the interval $0 \leq t \leq S$.
Therefore, we must have $\alpha_3 = S$, forcing $\alpha_1 = \alpha_2 =
0$. This is the first claimed outcome of this step.
\vspace{3mm}
\noindent \textbf{Step 4.}\, In this case, only $\alpha_3$,
$\alpha_{12}$, and $\alpha_{123}$ are nonzero. Then the edge
constraint is simply $\text{\sc e}({\boldsymbol\alpha}) = \alpha_3 \alpha_{12} = \gamma$
(Lemma \ref{lem:shift-equal-index} forces equality). Note that since
$\alpha_3 + \alpha_{12} \leq \text{\sc v}({\boldsymbol\alpha}) = 1$, their product $\alpha_3
\alpha_{12}$ is always at most $1/4$, \textbf{so we can only be in
this case when} $\boldsymbol{\gamma \leq 1/4}$.
Now let $x = \alpha_3$ and $y = \alpha_{12}$. The vertex constraint
forces $\alpha_{123} = 1-x-y$, so we are left with the routine problem
of maximizing $\text{\sc obj} = y \log 2 + (1-x-y) \log 3 = \log 3 - x \log 3 -
y \log \frac{3}{2}$ subject to the constraints
\begin{displaymath}
x,y \geq 0,
\quad \quad
x+y \leq 1,
\quad \quad
xy = \gamma.
\end{displaymath}
These constraints specify a segment of a hyperbola (a convex function)
in the first quadrant of the $xy$-plane, and the objective is linear
in $x$ and $y$. Therefore, by convexity, the maximum would be at the
global maximum of $\text{\sc obj}$ on the entire first quadrant branch of the
hyperbola, unless that fell outside the segment, in which case it must
be at an endpoint, forcing $x+y=1$.
The maximum over the entire branch of $xy = \gamma$ follows easily
from the inequality of arithmetic and geometric means: $\text{\sc obj} \leq \log
3 - 2\sqrt{x\log 3 \cdot y \log \frac{3}{2}} = \log 3 - 2\sqrt{\gamma
\cdot \log 3 \cdot \log \frac{3}{2}}$, with equality when $x \log 3
= y \log \frac{3}{2}$. Using $xy = \gamma$ to solve for $x$ and $y$,
we see that the unique global maximum is at $x = \sqrt{\gamma \cdot
\frac{\log 3/2}{\log 3}}$ and $y = \sqrt{\gamma \cdot \frac{\log
3}{\log 3/2}}$. This lies on our segment (satisfies $x+y \leq
1$) precisely when $\gamma$ is below the constant $c \approx 0.1969$
in Proposition \ref{prop:solve-opt}, and these values of $\alpha_3 =
x$ and $\alpha_{12} = y$ indeed match those claimed in that regime.
On the other hand, when $\gamma > c$, we are outside the segment, so
by the above we must have $x+y = 1$, and we may substitute $x = 1-y$.
We are left with the single-variable maximization of $\text{\sc obj} = y \log 2$
subject to $0 \leq y \leq 1$ and $(1-y)y = \gamma$. By the quadratic
formula, this is at $\alpha_{12} = y = \frac{1+\sqrt{1-4\gamma}}{2}
\leq 1$, which produces $\alpha_3 = x = 1-y = 1-\alpha_{12}$. This
indeed matches outcome (ii) of our proposition.
\vspace{3mm}
\noindent \textbf{Step 5.}\, The remaining case is $0 < \alpha_1 =
\alpha_2 = \alpha_3 - \alpha_{12}$, and we will show that this forces
$\alpha_{123} = 0$. Indeed, suppose for the sake of contradiction
that $\alpha_{123} > 0$. Shift mass to $\alpha_{12}$ by taking
$\epsilon$ from $\alpha_{123}$ and $\epsilon' = \epsilon \alpha_3 /
\alpha_2$ from $\alpha_1$. Since many $\alpha_A$ are zero,
$\text{\sc e}({\boldsymbol\alpha}) = \alpha_1(\alpha_2 + \alpha_3) + \alpha_2 \alpha_3 +
\alpha_{12} \alpha_3 $. Our perturbation decreases the first term by
$\epsilon' (\alpha_2 + \alpha_3)$, increases the third term by
$(\epsilon + \epsilon')\alpha_3$, and does not change the second term,
so our choice of $\epsilon'$ keeps $\text{\sc e}({\boldsymbol\alpha})$ invariant.
On the other hand, $\text{\sc obj}$ increases by $(\epsilon + \epsilon') \log 2
-\epsilon \log 3$. Since we know $\alpha_2 = \alpha_3 - \alpha_{12}$,
in particular we always have $\alpha_3 \geq \alpha_2$, which implies
that $\epsilon' \geq \epsilon$ because we assume $\alpha_2,\alpha_3 > 0$. Hence the increase in $\text{\sc obj}$ is
$(\epsilon + \epsilon') \log 2 - \epsilon \log 3
\geq
(\epsilon + \epsilon) \log 2 - \epsilon \log 3
> 0$,
contradicting the maximality of ${\boldsymbol\alpha}$. Therefore, we must have had
$\alpha_{123} = 0$.
\vspace{3mm}
\noindent \textbf{Step 6.}\, Now only $\alpha_1$, $\alpha_2$,
$\alpha_3$, and $\alpha_{12}$ remain. Let $t = \alpha_3$ and $r =
\alpha_{12}$. Step 3 gives $\alpha_1 = \alpha_2 = \alpha_3 -
\alpha_{12} = t-r$. We use the vertex constraint to eliminate $t$: $1
= \text{\sc v}({\boldsymbol\alpha}) = 2(t-r) + t + r$, so $t = \frac{1+r}{3}$. Substituting
this for $t$, we are left with $\alpha_1 = \alpha_2 = \frac{1-2r}{3}$
and $\alpha_3 = \frac{1+r}{3}$. Since we need all $\alpha_A \geq 0$,
the range for $r$ is $0 \leq r \leq 1/2$.
The above expressions give $\text{\sc e}({\boldsymbol\alpha}) = \left(\frac{1 -
2r}{3}\right)^2 + 2\left(\frac{1-2r}{3}\right)
\left(\frac{1+r}{3}\right) + \left(\frac{1+r}{3}\right) r = \frac{r^2
- r + 1}{3}$, and Lemma \ref{lem:shift-equal-index} forces
$\text{\sc e}({\boldsymbol\alpha}) = \gamma$. The quadratic formula gives the roots $r =
\frac{1 \pm \sqrt{12\gamma - 3}}{2}$. These are only real when
$12\gamma - 3 \geq 0$, so \textbf{this case only occurs when}
$\boldsymbol{\gamma \geq 1/4}$. Furthermore, the only root within the
interval $0 \leq r \leq 1/2$ is $r = \frac{1 - \sqrt{12\gamma -
3}}{2}$. Plugging this value of $r$ into the expressions for the
$\alpha_A$, we indeed obtain outcome (iii) of Proposition
\ref{prop:solve-opt}.
\vspace{3mm}
\noindent \textbf{Conclusion.}\, The only steps which proposed
possible maxima were Steps 4 and 6. Conveniently, Step 4 also
required that $\gamma \leq 1/4$, while Step 6 required $\gamma \geq
1/4$ (both deductions are bolded above), so we do not need to compare
them except at $\gamma = 1/4$, which is trivial. Finally, note that
all extremal outcomes indeed have $\alpha_2 < \alpha_3$, except at
$\gamma = 1/3$, in which case $\alpha_{12} = \alpha_{13} = \alpha_{23}
= 0$. This justifies the uniqueness argument that we used at the end
of Step 1, and completes our proof of Proposition
\ref{prop:solve-opt}. \hfill $\Box$
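
As with Proposition \ref{prop:solve-opt-2}, the solution can be
sanity-checked numerically (our sketch, with the same ad hoc variable
names as before; \texttt{NMaximize} may return any permutation of the
index set). Here $\text{\sc obj}({\boldsymbol\alpha}) = \sum_A \alpha_A \log |A|$, in
accordance with the relation $\text{\sc obj}^* = \text{\sc obj} - \log q$ used in
Section \ref{sec:solve-opt-sparse}. For example, at $\gamma = 3/10$,
which falls in regime (iii):
\begin{verbatim}
gamma = 3/10;
vars = {a1, a2, a3, a12, a13, a23, a123};
obj  = (a12 + a13 + a23) Log[2] + a123 Log[3];
edge = a1 a2 + a1 a3 + a2 a3 + a1 a23 + a2 a13 + a3 a12;
NMaximize[{obj, Total[vars] == 1 && edge >= gamma &&
    And @@ Thread[vars >= 0]}, vars]
N[(1 - Sqrt[12 gamma - 3])/2 Log[2]]  (* outcome (iii): 0.0781... *)
\end{verbatim}
Both values should be approximately $0.0781$, with the maximizer near
$\alpha_1 = \alpha_2 \approx 0.2582$, $\alpha_3 \approx 0.3709$, and
$\alpha_{12} \approx 0.1127$.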
\section{Exact result for sparse graphs}
\label{sec:exact:sparse}
In this section, we determine the precise structure of the sparse
graphs that maximize the number of colorings, completing the proof of
Theorem \ref{thm:main:sparse}. Proposition
\ref{prop:asymp-sparse}(ii) showed that in this regime, the optimal
graphs were close, in edit distance, to complete bipartite graphs. As
a warm-up for the arguments that will follow in this section, let us
begin by showing that the semi-complete subgraphs of Definition
\ref{def:semi-complete} are optimal among bipartite graphs. We will
use this in the final stage of our proof of the exact result.
\begin{lemma}
\label{lem:semi-complete}
Let $q \geq 3$ and $r < a \leq b$ be positive integers. Among all
subgraphs of $K_{a,b}$ with $r$ missing edges, the ones which
maximize the number of $q$-colorings are precisely:
\begin{description}
\item[(i)] both the correctly and incorrectly oriented semi-complete
subgraphs, when $q=3$, and
\item[(ii)] the correctly oriented semi-complete subgraph, when $q
\geq 4$ and $\frac{b}{a} \geq \log q / \log \frac{q-2}{q-3}$ and
$a$ is sufficiently large (i.e., $a > N_q$, where $N_q$ depends
only on $q$).
\end{description}
\end{lemma}
\noindent \textbf{Remark.}\, The above result is not as clean when
more than 3 colors are used, but is sufficient for our purposes. In
the sparse case, we encounter only highly unbalanced bipartite graphs,
all of which have part size ratio approximately $\log q / \log
\frac{q}{q-1}$. Apparently out of sheer coincidence (and good
fortune), this is just barely enough to satisfy the additional
condition of the lemma. Nevertheless, it would be nice to remove that
condition.
\vspace{3mm}
\noindent \textbf{Proof of Lemma \ref{lem:semi-complete}(ii).}\, Let
$A \cup B$ be the vertex partition of $K_{a,b}$, with $|A| = a$ and
$|B| = b$. Let $F^*$ be the correctly oriented semi-complete subgraph
of $K_{a,b}$ with exactly $r$ missing edges. Let $F$ be another
non-isomorphic subgraph of $K_{a,b}$ with the same number of edges.
We will show that $F$ has fewer colorings. Since $F$ and $F^*$ are
both bipartite, they share every coloring that uses disjoint sets of
colors on the sides of the bipartition. Discrepancies arise when the
same color appears on both sides. Note, however, that whenever this
occurs, every edge between same-colored vertices must be missing from
the graph. This set of forced missing edges,\footnote{In this lemma,
\emph{missing edges}\/ refer only to those missing from the
bipartite $K_{a,b}$, not the entire $K_{a+b}$.} which we call the
coloring's \emph{footprint}, is always a union of vertex-disjoint
complete bipartite graphs, one per color that appears on both sides.
For each subset $H$ of the missing edges of $F$, let $n_H$ be the
number of colorings of $F$ with footprint $H$. Then, $\sum n_H$ is
exactly the number of colorings of $F$. To give each $n_H$ a
counterpart from $F^*$, fix an arbitrary bijection $\phi$ between the
missing edges of $F$ and $F^*$, and let $n_H^*$ be the number of
colorings of $F^*$ with footprint $\phi(H)$. Since $F^*$ has $\sum
n_H^*$ colorings, it suffices to show that $n_H \leq n_H^*$ for all
$H$, with strict inequality for at least one $H$.
Clearly, when $H$ is empty, or a star centered in $B$, then $n_H =
n_H^*$. We observed that all footprints are unions $\Gamma_1 \cup
\cdots \cup \Gamma_k$ of vertex-disjoint complete bipartite graphs, so
all $H$ not of that form automatically have $n_H = 0 \leq n_H^*$. It
remains to consider $H$ that have this form, but are not stars
centered in $B$. Colorings with this footprint are monochromatic on
each $\Gamma_i$, and there are ${q \choose k} k!$ ways to choose a
distinct color for each $\Gamma_i$. The remaining $q-k$ colors are
partitioned into two sets, one for $A \setminus V(H)$ and one for $B
\setminus V(H)$. Crucially, $|B \setminus V(H)| \leq b-2$, because $H$ is
not a star centered in $B$ and therefore meets $B$ in at least two
vertices. Thus,
\begin{eqnarray*}
n_H &\leq& \left[ {q \choose k} k! \right] \cdot
\sum_{i=1}^{q-k-1} {q-k \choose i} i^{|A \setminus V(H)|}
(q-k-i)^{|B \setminus V(H)|} \\
&\leq&
q^k \cdot \sum_{i=1}^{q-k-1} {q-k \choose i} i^a (q-k-i)^{b-2}.
\end{eqnarray*}
To see that the sum is dominated by the $i=1$ term, note that since we
assumed that $\frac{b}{a} \geq \log q / \log \frac{q-2}{q-3}$, for
sufficiently large $a$ we have
\begin{displaymath}
\frac{b-2}{a} \geq \log (q-1) / \log
\frac{q-2}{q-3} \geq \log (q-k) / \log \frac{q-k-1}{q-k-2},
\end{displaymath}
so we may apply Inequality \ref{ineq:partition-colors}(ii) from the Appendix. This
gives $n_H \leq q^k \cdot 1.1 (q-k) (q-k-1)^{b-2}$. Next, we claim
that this bound is greatest when $k$ is smallest. Indeed, when $k$
increases by one, $q^k$ increases by the factor $q$, but
$(q-k-1)^{b-2}$ decreases by a factor of at least $\big(
\frac{q-2}{q-3} \big)^{b-2} \gg q$ for large $b$. Hence we have $n_H
\leq 1.1 q (q-1) (q-2)^{b-2}$.
On the other hand, $\phi(H)$ is always a star centered in $B$, so we
can easily construct $q(q-1)(q-2)^{b-1}$ colorings of $F^*$. Indeed,
choose one color for the vertices of the graph $\phi(H)$, a different
color for the remainder of $A \setminus \phi(H)$, and allow each
vertex left in $B \setminus \phi(H)$ to take any of the other $q-2$
colors. Since $\phi(H)$ intersects $B$ in exactly one vertex, $n_H^*
\geq q(q-1)(q-2)^{b-1}$, as claimed. But $q-2 \geq 2$, so we have the
desired strict inequality $n_H^* \geq 2 q(q-1) (q-2)^{b-2} > n_H$ for
all remaining $H$. \hfill $\Box$
\vspace{3mm}
Part (i) is a consequence of the following more precise result, which
we will also need later.
\begin{lemma}
\label{lem:subgraph-bipartite:q=3}
Let $F$ be a subgraph of the complete bipartite graph $K_{a,b}$ with
vertex partition $A \cup B$, and $r < \max\{a, b\}$ missing
edges. Suppose $F$ has $x \in A$ and $y \in B$ with $x$ complete to
$B$ and $y$ complete to $A$. Then its number of 3-colorings is
precisely $3 \cdot 2^a + 3 \cdot 2^b - 6 + 6s$, where $s$ is
the number of nonempty subsets of missing edges which form complete
bipartite graphs. This is at most $3 \cdot 2^a + 3 \cdot 2^b +
6 \cdot (2^r-2)$, with equality exactly when the missing edges form a star.
\end{lemma}
\noindent \textbf{Proof.}\, As in the proof of Lemma
\ref{lem:semi-complete}(ii), let $n_H$ be the number of 3-colorings of
$F$ with footprint $H$. The key observation is that for every
nonempty $H$, $n_H = 6$ when $H$ is a complete bipartite graph, and
$n_H = 0$ otherwise. Indeed, if $H$ is not a complete bipartite
graph, then it cannot be a footprint of a 3-coloring, so $n_H = 0$.
Otherwise, there are 3 ways to choose a color for the vertices of $H$,
and then by definition of footprint, the remaining two colors must be
split between $A \setminus H$ and $B \setminus H$. Both of these sets
are nonempty, because $A \setminus H$ must contain the given vertex
$x$ and $B \setminus H$ must contain $y$, so the only way to split the
two colors is to use one on all of $A \setminus H$ and the other on
all of $B \setminus H$. There are 2 ways to decide how to do this.
So, $n_H = 3 \cdot 2 = 6$, as claimed, and this produces the $6s$ in
the formula.
The rest of the formula follows from $n_\emptyset = 3 \cdot 2^a +
3 \cdot 2^b - 6$. Indeed, the terms correspond to the colorings
that use a single color (for which there are three choices) on $B$ and
allow the other two on $A$, those that use one on $A$ and allow the
others on $B$, and those that use only one on each of $A$ and $B$
(hence were double-counted). The final claim in the statement comes
from the fact that stars are the only $r$-edge graphs which have all
$2^r-1$ of their nonempty subgraphs complete bipartite. \hfill $\Box$
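
The formula can also be confirmed by brute force on small instances
(our sketch; the vertex names \texttt{u[i]}, \texttt{v[j]} are ad
hoc). With $a = 3$ and $b = 4$, removing a 2-edge star centered at
\texttt{v[1]} leaves all $s = 3$ nonempty subsets of missing edges
complete bipartite, while removing a 2-edge matching gives only $s =
2$, since the union of the two matching edges is not complete
bipartite:
\begin{verbatim}
a = 3; b = 4; q = 3;
verts = Join[Array[u, a], Array[v, b]];
all = Flatten[Table[UndirectedEdge[u[i], v[j]], {i, a}, {j, b}]];
colorings[missing_] :=
  ChromaticPolynomial[Graph[verts, Complement[all, missing]], q];
colorings[{UndirectedEdge[u[1], v[1]], UndirectedEdge[u[2], v[1]]}]
colorings[{UndirectedEdge[u[1], v[1]], UndirectedEdge[u[2], v[2]]}]
\end{verbatim}
The outputs should be $3 \cdot 2^3 + 3 \cdot 2^4 - 6 + 6 \cdot 3 = 84$
and $3 \cdot 2^3 + 3 \cdot 2^4 - 6 + 6 \cdot 2 = 78$, respectively.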
\vspace{3mm}
\noindent \textbf{Proof of Lemma \ref{lem:semi-complete}(i).}\, Since
the number of missing edges $r$ is less than both $|A|$ and $|B|$, the
vertices $x$ and $y$ of Lemma \ref{lem:subgraph-bipartite:q=3} must
exist. Therefore, its equality condition implies that the optimal
subgraphs are indeed semi-complete. \hfill $\Box$
\subsection{Structure of proof}
We will use several small constants with relative order of magnitude
$\epsilon_1 \ll \epsilon_2 \ll \epsilon_3$, related by $\epsilon_1 =
\epsilon_2^2 = \epsilon_3^3$. We do not send them to zero; rather, we
show that there is an eventual choice of the $\epsilon_i$, determined
by $q$ and $\kappa$, that makes our argument work. So, to avoid
confusion, the $O$, $\Theta$, and $o$ notation that we employ in this
proof will only mask constants depending on $q,\kappa$ alone. For
example, we will write $X = O(\epsilon_2 Y)$ when there is a constant
$C_{q,\kappa}$ such that $X \leq C_{q,\kappa} \epsilon_2 Y$ for
sufficiently large $m$ and $n$. Occasionally, we will use phrases
like ``almost all colorings have property $P$'' when
$(1-o(1))$-fraction of all colorings have that property.
\vspace{3mm}
\noindent \textbf{Proof of Theorem \ref{thm:main:sparse}.}\, Let $G =
(V, E)$ be an optimal graph with $n$ vertices and $m \leq \kappa n^2$
edges. We begin with a convenient technical modification: if $G$ has
an isolated edge $xy$, replace it with an edge between $x$ and another
non-isolated vertex of minimal degree. Do this only once, even if $G$
had multiple isolated edges. The number of colorings stays the same
because both graphs share the same partial colorings of $V \setminus
\{x\}$, and each of those has exactly $q-1$ extensions (in each graph)
to the degree-1 vertex $x$.
This adjustment will not compromise the uniqueness claim, because it
cannot create one of the optimal graphs listed in Theorem
\ref{thm:main:sparse}. Indeed, if it did, then the degree-1 vertex
$x$ would now have to be the center of the missing star of the
semi-complete subgraph $H \subset K_{a,b}$. But we made $x$
adjacent to a vertex of minimal degree, so $x$ must be on the smaller
side of $H$'s bipartition. Then the number of $K_{a,b}$-edges missing
from the semi-complete $H$ is precisely $b-d(x) = b-1$. This exceeds
$a$ for all optimal graphs listed in Theorem \ref{thm:main:sparse},
but our definition of semi-completeness required that the number of
missing edges was strictly less than the size of the smaller part.
This contradiction shows that we may assume without loss of generality
that if $G$ has an isolated edge $uv$, then it also contains a
degree-1 vertex $x \not \in \{u, v\}$.
Define $u_1 = \sqrt{m \cdot \log \frac{q}{q-1} / \log q}$ and $u_2 =
\sqrt{m \cdot \log q / \log \frac{q}{q-1}}$, and note that
$\frac{u_1}{u_2} = \log \frac{q}{q-1} / \log q$ and $u_1 u_2 = m$.
So, Proposition \ref{prop:asymp-sparse}(ii) gives disjoint subsets
$U_1, U_2 \subset V$ of size $|U_i| = \lceil u_i \rceil$, such that by
editing at most $\epsilon_1 m$ edges, we can transform $G$ into the
complete bipartite graph between $U_1$ and $U_2$, with all other
vertices isolated. Call that graph $G^*$.
Let $(V_1, V_2)$ be a max-cut partition of the \textbf{non-isolated}
vertices of $G$, such that $V_1$ contains at least as many vertices of
$U_1$ as $V_2$ does. We would like to show that this partition is
very close to $(U_1, U_2)$, so we keep track of the $U_i$ by defining
$U_i' = U_i \cap V_i$ and $U_i'' = U_i \cap V_{3-i}$ for each $i \in
\{1, 2\}$. To help us recognize vertices that are ``mostly correct,''
let $X_i \subset U_i'$ be the vertices that are adjacent to all but at
most $\epsilon_2\sqrt{m}$ vertices of $U_{3-i}'$.
The following series of claims will complete the proof of Theorem
\ref{thm:main:sparse}, since Proposition \ref{prop:asymp-sparse}(i)
already determined the asymptotic maximum number of colorings.
\begin{description}
\item[Claim 1.] For each $i$, $|U_i'|$ is within $O(\epsilon_1
\sqrt{m})$ of $u_i$, $|X_i|$ is within $O(\epsilon_2 \sqrt{m})$ of
$u_i$, and $|U_i''| \leq O(\epsilon_1 \sqrt{m})$.
\item[Claim 2.] Almost all colorings of $G$ are \emph{$(X_1,
X_2)$-regular}, which means that they only use one color on $X_1$,
and avoid that color on $X_2$.
\item[Claim 3.] At most one non-isolated vertex $v_0$ has degree $\leq
2 \epsilon_3 \sqrt{m}$. We use this to show that each $|V_i|$ is
within $O(\epsilon_2 \sqrt{m})$ of $u_i$. Let $V_0 = \{v_0\}$ if it
exists; otherwise, let $V_0 = \emptyset$. Let $V_i^* = V_i
\setminus V_0$.
\item[Claim 4.] Almost all colorings are \emph{$(V_1^*, V_2^*)$-regular},
i.e., use one color for $V_1^*$, and avoid it on $V_2^*$.
\item[Claim 5.] Each $V_i^*$ is an independent set, and $v_0$ (if it
exists) has neighbors in only one of the $V_i^*$. Hence $G$ is a
bipartite graph plus isolated vertices.
\item[Claim 6.] $G$ is a semi-complete subgraph of $K_{|V_1|, |V_2|}$
plus isolated vertices, correctly oriented if $q \geq 4$.
\end{description}
\subsection{Details of proof}
\label{sec:exact:sparse:details}
\noindent \textbf{Proof of Claim 1.}\, We know that by editing at most
$\epsilon_1 m$ edges, $G$ can be transformed into $G^*$, the complete
bipartite graph between $(U_1, U_2)$, plus isolated vertices. Since
$|U_i| = \lceil u_i \rceil = \Theta(\sqrt{m})$, all vertices in the
$U_i$ have degree $\Theta(\sqrt{m})$ in $G^*$. So, the number of
$U_i$-vertices that are isolated in $G$ is at most $\frac{\epsilon_1
m}{\Theta(\sqrt{m})} = O(\epsilon_1 \sqrt{m})$, implying in
particular that the number of $U_1$-vertices in $V_1 \cup V_2$ is at
least $|U_1| - O(\epsilon_1 \sqrt{m}) \geq \frac{2}{3} u_1$. (Recall
that $(V_1, V_2)$ is a max-cut partition of the \emph{non-isolated}
vertices of $G$.) Since at least as many $U_1$-vertices lie in $V_1$ as in
$V_2$, and $U_1' = U_1 \cap V_1$, we have $|U_1'| \geq \frac{1}{3} u_1
= \Theta(\sqrt{m})$.
Also, $G^*$ has at least $m$ edges crossing between $(U_1, U_2)$, so
$G$ has at least $m-\epsilon_1 m$ edges crossing between $(U_1, U_2)$,
and at least that many between its max-cut $(V_1, V_2)$. As $G$ has
only $m$ edges, this shows that each $G[V_i]$ spans at most
$\epsilon_1 m$ edges. But the sets $U_1', U_2'' \subset V_1$ are
complete to each other in $G^*$, so among the $\leq \epsilon_1 m$
edges of $G[V_1]$, at least $|U_1'| |U_2''| - \epsilon_1 m$ of them
must go between $U_1'$ and $U_2''$. Combining this with the above
result that $|U_1'| \geq \Theta(\sqrt{m})$, we obtain the desired
bound $|U_2''| \leq O(\epsilon_1 \sqrt{m})$.
Then $U_2'$, the set of $U_2$-vertices in $V_2$, has size at least
$u_2 - O(\epsilon_1 \sqrt{m}) \geq \Theta(\sqrt{m})$, because only
$O(\epsilon_1 \sqrt{m})$ of the $U_2$-vertices are isolated and
$|U_2''| \leq O(\epsilon_1 \sqrt{m})$ of them are in $V_1$. Repeating
the previous paragraph's argument with respect to $U_2'$ and $U_1''$,
we find that $|U_1''| \leq O(\epsilon_1 \sqrt{m})$, which then implies
that $|U_1'| \geq u_1 - O(\epsilon_1 \sqrt{m})$.
It remains to control $X_i$, which we recall to be the vertices of
$U_i'$ which had at most $\epsilon_2\sqrt{m}$ non-neighbors in
$U_{3-i}'$. The $U_i'$ are complete to each other in $G^*$, so each
vertex not in $X_i$ contributes at least $\epsilon_2\sqrt{m}$ to the
total edit distance of $\leq \epsilon_1 m$. We set $\epsilon_2^2 =
\epsilon_1$, so this implies that all but at most $\epsilon_2
\sqrt{m}$ vertices of $U_i'$ belong to $X_i$. Since $|U_i'|$ is
within $O(\epsilon_1 \sqrt{m})$ of $u_i$, this gives the desired
result. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 2.}\, We bound the number of
colorings that are not $(X_1, X_2)$-regular. For each partition $[q]
= C_0 \cup C_1 \cup C_2 \cup C_3$, we count the colorings which use
the colors $C_1$ in $X_1$ but not $X_2$, use $C_2$ in $X_2$ but not
$X_1$, use $C_3$ in both $X_1$ and $X_2$, and do not use $C_0$ in
either $X_1$ or $X_2$. Then we sum over all \emph{irregular}\/
partitions, which are all partitions with $|C_1| \geq 2$ or
$|C_3| \geq 1$. It suffices to
show that the result is of smaller order than the total number of
colorings of $G$.
For any given partition with $|C_i| = c_i$, we claim that the
corresponding number of colorings is at most $(|X_1| |X_2|)^{c_3}
\cdot c_1^{|X_1| - q \epsilon_2 \sqrt{m}} \cdot c_2^{|X_2| - q
\epsilon_2 \sqrt{m}} \cdot q^{n - 2c_3 - (|X_1| - q \epsilon_2
\sqrt{m}) - (|X_2| - q \epsilon_2 \sqrt{m})}$. The first factor
comes from choosing $c_3$ pairs of vertices $x_i \in X_1$, $y_i \in
X_2$ on which to use each color of $C_3$. Then, every vertex in the
common neighborhood of $\{y_i\}$ must avoid $C_3$ in order to produce
a proper coloring. By definition of $X_2$, the number of vertices of
$U_1'$ that are not in this common neighborhood is at most $|C_3|
\epsilon_2 \sqrt{m} \leq q \epsilon_2 \sqrt{m}$. Thus all but at most
$q \epsilon_2 \sqrt{m}$ vertices of $X_1 \subset U_1'$ are adjacent to
every $\{y_i\}$, and therefore restricted to colors in $C_1$. This
produces the second factor in our bound, and the third factor is
obtained analogously. Of course every vertex has at most $q$ color
choices, and we use that trivial bound for all remaining vertices,
producing our final factor. Using that each $|X_i|$ is within
$O(\epsilon_2 \sqrt{m})$ of $u_i = \Theta(\sqrt{m})$, we find that the
sum $\Sigma_1$ of this bound over all $\leq 4^q$ irregular partitions
is:
\begin{eqnarray*}
\Sigma_1 &=& \sum_{\text{irregular}} (|X_1| |X_2|)^{c_3} \cdot c_1^{|X_1|
- q \epsilon_2 \sqrt{m}} \cdot c_2^{|X_2| - q \epsilon_2 \sqrt{m}}
\cdot q^{n - 2c_3 - (|X_1| - q \epsilon_2 \sqrt{m}) - (|X_2| - q \epsilon_2
\sqrt{m})} \\
&\leq& e^{O(\epsilon_2 \sqrt{m})} \sum_{\text{irregular}} (\Theta(\sqrt{m}) \cdot \Theta(\sqrt{m}))^{c_3} \cdot c_1^{u_1} \cdot c_2^{u_2}
\cdot q^{n - u_1 - u_2} \\
&\leq& e^{O(\epsilon_2 \sqrt{m})} \cdot 4^q \cdot O(m^q) \cdot
\max_{c_1 \geq 2 \text{ or } c_3 \geq 1} \left\{ c_1^{u_1} c_2^{u_2} \right\} \cdot
q^{n - u_1 - u_2}.
\end{eqnarray*}
For any irregular partition with $c_1 + c_2 < q$, it is clear that
$c_1^{u_1} c_2^{u_2}$ increases when $C_1$ is replaced by $C_1 \cup
C_0 \cup C_3$, and $C_0$ and $C_3$ are reduced to $\emptyset$. It is
also clear that this procedure gives another irregular partition, but
this time with $c_1 + c_2 = q$.
Yet $\frac{u_2}{u_1} = \log q /
\log \frac{q}{q-1} \geq \log q / \log \frac{q-1}{q-2}$, so we may apply
Inequality \ref{ineq:partition-colors}(i), which gives
\begin{displaymath}
\max_{c_1 \geq 2 \text{ or } c_3 \geq 1} c_1^{u_1} c_2^{u_2}
\ \ = \ \
2^{u_1} (q-2)^{u_2}
\ \ \leq \ \
1.5^{-u_1} \cdot 1^{u_1} (q-1)^{u_2}
\ \ = \ \
e^{-\Theta(\sqrt{m})} \cdot (q-1)^{u_2}.
\end{displaymath}
Thus for small $\epsilon_2$, we have $\Sigma_1 \leq
e^{-\Theta(\sqrt{m})} \cdot (q-1)^{u_2} \cdot q^{n - u_1 - u_2}$.
On the other hand, Proposition \ref{prop:asymp-sparse}(i) shows that
the optimal graph has at least $\Sigma_0 := q^n e^{(-c-\epsilon_1)
\sqrt{m}}$ colorings, where $c = 2\sqrt{\log \frac{q}{q-1} \log
q}$. Since $u_1 = \sqrt{m \cdot \log \frac{q}{q-1} / \log q}$ and
$u_2 = \sqrt{m \cdot \log q / \log \frac{q}{q-1}}$, routine
algebra shows that $\Sigma_0$ is precisely $e^{-\epsilon_1 \sqrt{m}}
(q-1)^{u_2} q^{n-u_1-u_2}$. Therefore, for small $\epsilon_1$ we
have $\Sigma_1 / \Sigma_0 \leq e^{-\Theta(\sqrt{m})} = o(1)$, i.e.,
almost all colorings of $G$ are $(X_1, X_2)$-regular. \hfill $\Box$
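
For the record, the routine algebra behind the identity $\Sigma_0 =
e^{-\epsilon_1 \sqrt{m}} (q-1)^{u_2} q^{n-u_1-u_2}$ is
\begin{displaymath}
u_1 \log q \ = \ \sqrt{m \log \tfrac{q}{q-1} \log q} \ = \ u_2 \log \tfrac{q}{q-1} \ = \ \tfrac{c}{2} \sqrt{m},
\quad \text{whence} \quad
e^{-c\sqrt{m}} \ = \ q^{-u_1} \left(\tfrac{q-1}{q}\right)^{u_2} \ = \ q^{-u_1-u_2} (q-1)^{u_2}.
\end{displaymath}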
\vspace{3mm}
Before proving the next claim, it is convenient to establish the
following lemma, which should be understood in the context of Claim 3.
\begin{lemma}
\label{lem:exact:sparse:sum-degrees}
Let $x,y$ be a pair of non-isolated vertices of $G$, such that $xy$
is not an isolated edge. Then $d(x) + d(y) \geq |X_1| - 1$.
\end{lemma}
\noindent \textbf{Proof.}\, Suppose for contradiction that there is
such a pair $x,y$ with $d(x) + d(y) \leq |X_1| - 2$. Let $G'$ be the
graph obtained by deleting the $\leq |X_1|-2$ edges incident to $x$ or
$y$, and adding back as many edges between $x$ and $X_1 \setminus
\{x,y\}$. In $G'$, any $(X_1 \setminus \{x,y\}, X_2 \setminus
\{x,y\})$-regular partial coloring\footnote{A proper coloring of the
vertices $V \setminus \{x,y\}$, which uses only one color on $X_1
\setminus \{x,y\}$, and avoids that color on $X_2 \setminus
\{x,y\}$.} of $V \setminus \{x,y\}$ has exactly $q-1$ extensions to
$x$ since only one color appears on $N_{G'}(x) \subset X_1 \setminus
\{x,y\}$, and then exactly $q$ further extensions to the
newly-isolated vertex $y$.
On the other hand, since $x$ and $y$ both have degree at least 1 and do not form an isolated edge, one of them,
say $x$, has a neighbor in the rest of the graph. Therefore, in $G$
the same partial coloring has at most $q-1$ extensions to the vertex $x$,
and then at most $q-1$ further
extensions to the non-isolated vertex $y$. Yet by Claim 2, almost all
colorings of $G$ arise in this way, so for sufficiently large $m$, $G$
has fewer colorings than $G'$, contradiction. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 3.}\, Recall that our initial
technical adjustment allows us to assume that if $G$ contains an
isolated edge $uv$, then it also contains a degree-1 vertex $x \not
\in \{u,v\}$. This would give $d(x) + d(u) = 2 \ll |X_1| - 1$,
contradicting Lemma \ref{lem:exact:sparse:sum-degrees} because $xu$
cannot be an isolated edge. Hence $G$ in fact has no isolated edges.
But then the same lemma implies that at most one vertex $v_0$ has
degree $\leq 2 \epsilon_3 \sqrt{m}$, since $|X_1| = \Theta(\sqrt{m})$
by Claim 1.
It remains to show that each $|V_i|$ is within $O(\epsilon_2
\sqrt{m})$ of $u_i$. Recall that $U_1'$ and $U_2''$ are the
$U_1$- and $U_2$-vertices that are in $V_1$. All other vertices of
$V_1$ are isolated in the graph $G^*$ which is within edit-distance
$\epsilon_1 m$ of $G$. So by the previous paragraph, each of them
(except $v_0$ if it exists) has degree at least $2 \epsilon_3
\sqrt{m}$, and thus contributes at least $2 \epsilon_3 \sqrt{m}$ to
the edit distance between $G$ and $G^*$. Therefore, there are at most
$1 + \frac{\epsilon_1 m}{2 \epsilon_3 \sqrt{m}} \ll \epsilon_2
\sqrt{m}$ of them, where we used $\epsilon_3^3 = \epsilon_2^2 =
\epsilon_1$. Claim 1 controls $|U_i'|$ and $|U_i''|$, so we indeed
find that $|V_1|$ is within $O(\epsilon_2 \sqrt{m})$ of $u_1$. The
analogous result for $V_2$ follows by a similar argument. \hfill
$\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 4.}\, Since almost all colorings are
$(X_1, X_2)$-regular, it suffices to prove this claim only for those
colorings. So, we bound the $(X_1, X_2)$-regular colorings that
\textbf{(i)} use a common color on both $V_2^*$ and $V_1^*$, or
\textbf{(ii)} use at most $q-2$ colors on $V_2^*$. Note that every
$(X_1, X_2)$-regular coloring which avoids both (i) and (ii) must use
exactly $q-1$ colors on $V_2^*$ and only the remaining color on
$V_1^*$, and so is automatically $(V_1^*, V_2^*)$-regular. It therefore
suffices to show that these two types of colorings constitute
$o(1)$-fraction of all colorings. The key observation is that every
$v \in V_2^*$ has a neighbor in $X_1$. Indeed, $(V_1, V_2)$ is a
max-cut, so at least half of the $\geq 2 \epsilon_3 \sqrt{m}$
neighbors of $v$ must be in $V_1$. These cannot all avoid $X_1$,
because Claims 1 and 3 show that only $O(\epsilon_2 \sqrt{m})$
vertices of $V_1$ are outside $X_1$, and $\epsilon_2 \ll \epsilon_3$.
To bound the number of colorings of type (i) above, first choose a
color $c_1$ for all $X_1$. By the key observation, $c_1$ cannot
appear on $V_2^*$, so the shared color $c_2$ must be different. Hence
we have $q-1$ choices for $c_2$, and must pick a pair of vertices $x
\in V_1^* \setminus X_1$ and $y \in V_2^*$ to use it on. The $\geq
\epsilon_3 \sqrt{m}$ neighbors of $x$ in $V_2^*$ must avoid $c_2$ as
well as $c_1$, so they each have at most $q-2$ color choices. Every
other vertex of $V_2^*$ must still avoid $c_1$, so we use the bound of
$\leq q-1$ color choices there. Using the trivial bound $\leq q$ for
all other vertices, and the fact that $|X_i|$ and $|V_i^*|$ are within
$O(\epsilon_2 \sqrt{m})$ of $u_i = \Theta(\sqrt{m})$, we find that the
number of type-(i) colorings is at most:
\begin{eqnarray*}
\Sigma_2 &:=& q \cdot (q-1) \cdot |V_1^* \setminus X_1| |V_2^*| \cdot
(q-2)^{\epsilon_3 \sqrt{m}} \cdot (q-1)^{|V_2^*| - \epsilon_3 \sqrt{m}} \cdot
q^{n-|X_1|-|V_2^*|-1} \\
&\leq& O(m) \cdot \left( \frac{q-2}{q-1} \right)^{\epsilon_3 \sqrt{m}} \cdot
(q-1)^{|V_2^*|} \cdot q^{n-|X_1|-|V_2^*|-1} \\
&\leq& e^{O(\epsilon_2 \sqrt{m})} \cdot \left( \frac{q-2}{q-1} \right)^{\epsilon_3 \sqrt{m}} \cdot
(q-1)^{u_2} \cdot
q^{n-u_1-u_2}.
\end{eqnarray*}
On the other hand, we showed at the end of the proof of Claim 2 that
$G$ had at least $\Sigma_0 = e^{-\epsilon_1 \sqrt{m}} (q-1)^{u_2}
q^{n-u_1-u_2}$ colorings. Since $\epsilon_1 \ll \epsilon_2 \ll
\epsilon_3$, we have $\Sigma_2 / \Sigma_0 \leq e^{-\Theta(\epsilon_3
\sqrt{m})} = o(1)$, as desired.
The number of type-(ii) colorings is easily bounded by $\Sigma_3 := q
\cdot (q-1) \cdot (q-2)^{|V_2^*|} \cdot q^{n-|X_1|-|V_2^*|}$. The
four factors correspond to choosing a color for $X_1$, choosing
another color to avoid on $V_2^*$, coloring $V_2^*$, and coloring all
remaining vertices. Using that $|X_i|$ and $|V_i^*|$ are within
$O(\epsilon_2 \sqrt{m})$ of $u_i$, we obtain $\Sigma_3 \leq
e^{O(\epsilon_2 \sqrt{m})} (q-2)^{u_2} q^{n-u_1-u_2}$, so $\Sigma_3 /
\Sigma_0 \leq e^{O(\epsilon_2 \sqrt{m})}
\big(\frac{q-2}{q-1}\big)^{u_2}$. Since $u_2 = \Theta(\sqrt{m})$, for
small enough $\epsilon_2$ we indeed have $\Sigma_3 / \Sigma_0 \leq
e^{-\Theta(\sqrt{m})} = o(1)$, as desired. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 5.}\, Almost all colorings are
$(V_1^*, V_2^*)$-regular, so $G[V_1^*]$ spans no edges. We turn our
attention to $V_2^*$, and start by showing that all degrees within
$G[V_2^*]$ are at most $\epsilon_3 \sqrt{m}$. Indeed, suppose for
contradiction that some $x \in V_2^*$ has at least $\epsilon_3
\sqrt{m}$ neighbors in $V_2^*$. Then the number of $(V_1^*,
V_2^*)$-regular colorings is at most $\Sigma_4 := q \cdot (q-1) \cdot
(q-2)^{\epsilon_3 \sqrt{m}} \cdot (q-1)^{|V_2^*| - \epsilon_3
\sqrt{m}} \cdot q^{n-|V_1^*|-|V_2^*|}$. Here, the factors
correspond to choosing a color $c_1$ for $V_1^*$, choosing a color
$c_2$ for $x$, coloring $V_2^* \cap N(x)$ without $c_1$ or $c_2$,
coloring the rest of $V_2^*$ without $c_1$, and coloring the remaining
vertices. Using that each $|V_i^*|$ is within $O(\epsilon_2
\sqrt{m})$ of $u_i$, we find that
\begin{eqnarray*}
\Sigma_4 &\leq& e^{O(\epsilon_2 \sqrt{m})} \cdot q \cdot (q-1) \cdot
(q-2)^{\epsilon_3 \sqrt{m}} \cdot (q-1)^{u_2 - \epsilon_3
\sqrt{m}} \cdot q^{n - u_1 - u_2} \\
&\leq& e^{O(\epsilon_2 \sqrt{m})} \cdot \left(
\frac{q-2}{q-1} \right)^{\epsilon_3 \sqrt{m}} \cdot (q-1)^{u_2}
q^{n-u_1-u_2}.
\end{eqnarray*}
Yet we showed at the end of the proof of Claim 2 that $G$ had at least
$\Sigma_0 = e^{-\epsilon_1 \sqrt{m}} (q-1)^{u_2} q^{n-u_1-u_2}$
colorings, so using $\epsilon_1 \ll \epsilon_2 \ll \epsilon_3$, we
obtain $\Sigma_4 / \Sigma_0 \leq e^{-\Theta(\epsilon_3 \sqrt{m})}$.
This contradicts the fact that $\Sigma_4$ includes almost all
colorings. Therefore, all degrees within $G[V_2^*]$ are indeed at
most $\epsilon_3 \sqrt{m}$.
We now use this intermediate bound to show that all such degrees are
in fact zero. Suppose for contradiction that some $x \in V_2^*$ has
neighbors within $V_2^*$. Let $G'$ be the graph obtained by deleting
all edges between $x$ and $V_2^*$ and all edges incident to $v_0$ (if
it exists), and adding back as many edges between $V_1^*$ and some
formerly isolated vertex $z$.\footnote{Isolated vertices exist because
Claim 3 shows that each $|V_i|$ is within $O(\epsilon_2 \sqrt{m})$
of $u_i$, so the number of non-isolated vertices is $|V_1 \cup V_2|
\leq u_1 + u_2 + O(\epsilon_2 \sqrt{m})$. This is strictly below
$n$ for small $\epsilon_2$, because $u_1 + u_2 = \sqrt{m/\kappa_q}$,
and we assumed that $m \leq \kappa n^2$ with $\kappa < \kappa_q$.}
This is possible because $d(v_0) \leq 2\epsilon_3 \sqrt{m}$ and $x$
has at most $\epsilon_3 \sqrt{m}$ neighbors within $V_2^*$, while
$|V_1^*| = \Theta(\sqrt{m})$. Observe that any $(V_1^*, V_2^*
\setminus \{x\})$-regular partial coloring of $V \setminus
\{x,z,v_0\}$ has exactly $(q-1)^2 q^{|V_0|}$ extensions to all of
$G'$, because $x$ and $z$ only need to avoid the single color which
appears on $V_1^*$, and $v_0$ is now isolated, if it exists. On the
other hand, we claim that the same partial coloring has at most
$(q-2)q(q-1)^{|V_0|}$ extensions in $G$. Indeed, there are at most
$q-2$ extensions to $x$ because $x$ must avoid the color of $V_1^*$ as
well as some (different) color which appears on its neighbor in
$V_2^*$. Then, there are $q$ ways to color the isolated vertex $z$,
and finally at most $q-1$ further extensions to the non-isolated
vertex $v_0$ if it exists. Yet by Claim 4, almost all colorings of
$G$ arise in this way, so for sufficiently large $m$, $G$ has fewer
colorings than $G'$. This is impossible, so $V_2^*$ must indeed be an
independent set.
It remains to show that $v_0$, if it exists, has neighbors in only one
$V_i^*$. Suppose for contradiction that $v_0$ is adjacent to both
$V_1^*$ and $V_2^*$, and consider the graph $G'$ obtained by deleting all edges
incident to $v_0$, and replacing them with edges to $V_1^*$ only.
This is possible because $d(v_0) \leq 2\epsilon_3 \sqrt{m}$ and
$|V_1^*| = \Theta(\sqrt{m})$. Any partial $(V_1^*, V_2^*)$-regular
coloring of $G \setminus \{v_0\}$ has at most $q-2$ extensions to
$v_0$, because $v_0$'s neighbors in $V_2^*$ are colored differently
from its neighbors in $V_1^*$. Yet the same partial coloring has
exactly $q-1$ extensions with respect to $G'$, since it uses the same
color on all of $v_0$'s neighbors (now in $V_1^*$). So, for
sufficiently large $m$, $G'$ has more colorings than $G$, giving the
required contradiction. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 6.}\, First, consider the case when
$V_0$ is empty. Then all non-isolated vertices are already in the
bipartite graph $(V_1^*, V_2^*)$. If that subgraph is less than
$|V_1^*|$ edges away from being complete bipartite, then Lemma
\ref{lem:semi-complete} already implies\footnote{$V_1^*$ is the
smaller side of the bipartite graph $(V_1^*, V_2^*)$ because Claim 3
shows that $|V_1^*|$ is within $O(\epsilon_2 \sqrt{m})$ of $u_1 =
\sqrt{m \cdot \log \frac{q}{q-1} / \log q}$ and $|V_2^*|$ is within
$O(\epsilon_2 \sqrt{m})$ of $u_2 = \sqrt{m \cdot \log q / \log
\frac{q}{q-1}}$.} that $G[V_1^* \cup V_2^*]$ is semi-complete
(and correctly oriented if $q \geq 4$), so we are done. On the other
hand, if that subgraph has at least $|V_1^*|$ missing edges, then we
can construct an $n$-vertex graph $G'$ with at least $m$ edges by
taking $K_{|V_1^*|, |V_2^*| - 1}$ and adding enough isolated vertices.
Then, $G'$ has at least $q(q-1)^{|V_2^*|-1}q^{n-|V_1^*|-|V_2^*|+1}$
colorings because there are $q$ choices of a single color for the
$|V_1^*|$-side, $q-1$ color choices for each vertex on the other side,
and $q$ choices for each remaining (isolated) vertex. However, the
same counting shows that $G$ has exactly $q(q-1)^{|V_2^*|}
q^{n-|V_1^*|-|V_2^*|}$ colorings that are $(V_1^*, V_2^*)$-regular,
which includes almost all colorings by Claim 4. Hence for
sufficiently large $m$, $G'$ has more colorings, and this
contradiction completes the case when $V_0$ is empty.
Now suppose the vertex $v_0$ with degree $\leq 2\epsilon_3 \sqrt{m}$
exists. By counting $(V_1^*, V_2^*)$-regular colorings, we find that
$G$ has at most $\Sigma_5 := (1+o(1))
q(q-1)^{|V_2^*|}(q-1)q^{n-|V_1^*|-|V_2^*|-1}$ colorings. Here, the
factors correspond to choosing a color for $V_1^*$, coloring $V_2^*$,
coloring the non-isolated vertex $v_0$ which must avoid a neighbor's
color, and coloring the remaining vertices. Observe that if there
were at least $d(v_0)$ edges missing between $V_1^*$ and $V_2^*$, then
we could isolate $v_0$ by deleting its edges and adding back as many
between $V_1^*$ and $V_2^*$. The resulting graph would have at least
$q(q-1)^{|V_2^*|} q^{n-|V_1^*|-|V_2^*|}$ colorings, where the factors
correspond to choosing a color for $V_1^*$, coloring $V_2^*$, and
coloring the remaining (isolated) vertices. For sufficiently large
$m$, this exceeds the number of colorings of $G$, which is impossible.
Therefore, less than $d(v_0)$ edges are missing between $(V_1^*,
V_2^*)$.
By Claim 5, $v_0$ has neighbors in only one $V_i^*$. If it is
$V_1^*$, we must have $V_1 = V_1^*$ and $V_2 = V_2^* \cup \{v_0\}$
because $(V_1, V_2)$ is a max-cut. The previous paragraph then
implies that less than $|V_1|$ edges are missing between $(V_1, V_2)$,
so Lemma \ref{lem:semi-complete} shows that $G$ is indeed
semi-complete on its non-isolated vertices (and correctly oriented if
$q \geq 4$).
The only remaining case is when $v_0$ has neighbors only in $V_2^*$,
which we will show is impossible. This time, the max-cut gives $V_1 =
V_1^* \cup \{v_0\}$ and $V_2 = V_2^*$. Since $d(v_0) \leq 2\epsilon_3
\sqrt{m}$, there are at least $|V_2| - 2\epsilon_3 \sqrt{m}$ missing
edges between $(V_1, V_2)$. So, if we let $t = \big\lfloor
\frac{|V_2| - 2\epsilon_3 \sqrt{m}}{|V_1|} \big\rfloor = \big\lfloor
\frac{u_2}{u_1} - O(\epsilon_3) \big\rfloor = \big\lfloor \log q /
\log \frac{q}{q-1} - O(\epsilon_3) \big\rfloor$, we can construct an
$n$-vertex graph $G'$ with at least $m$ edges by taking $K_{|V_1|,
|V_2|-t}$ and adding enough isolated vertices. This graph has at
least $\Sigma_6 := q(q-1)^{|V_2|-t} q^{n-|V_1|-|V_2|+t}$ colorings, by
the same counting as earlier in this proof. Let us compare this with
the number of colorings $\Sigma_5$ of $G$, which we calculated above.
Since $|V_1^*| = |V_1| - 1$ and $|V_2^*| = |V_2|$, we have $\Sigma_6 /
\Sigma_5 \geq (1-o(1)) \big( \frac{q}{q-1} \big)^t \cdot
\frac{1}{q-1}$.
Crucially, $\log q / \log \frac{q}{q-1}$ is always irrational, because
any positive integral solution to $q^x = \big(\frac{q}{q-1}\big)^y$
would require $q$ and $q-1$ to have a nontrivial common factor. So,
by choosing our $\epsilon$'s sufficiently small in advance (based only
on $q$), we may ensure that $t \geq \log q / \log \frac{q}{q-1} - 1 +
c_q$ for some small positive constant $c_q$. Since
$\big(\frac{q}{q-1}\big)^{\log q / \log \frac{q}{q-1} - 1} \cdot
\frac{1}{q-1} = 1$, this gives $\Sigma_6 / \Sigma_5 \geq (1-o(1))
\big( \frac{q}{q-1} \big)^{c_q}$, which exceeds 1 for large $m$,
leaving $G'$ with more colorings than $G$. This contradiction
finishes our last case, and our entire proof. \hfill $\Box$
\section{Exact result for 3 colors}
\label{sec:exact:q=3}
Our arguments can be pushed further when only three colors are used.
In this section, we complete the proof of Theorem \ref{thm:main:q=3},
determining the precise structure of the graphs that maximize the
number of 3-colorings, for edge densities up to $m \leq \frac{1}{4}
n^2$ (i.e., up to the density of the complete bipartite graph). The
structure of this proof closely resembles that of the previous
section, so parts that are essentially the same are presented more briefly.
We would, however, like to draw attention to a new piece of notation.
Recall that, as defined in the previous section, a coloring is $(X,
Y)$-regular if it uses only one color on $X$ and the other $q-1$ on
$Y$. This time, we will also need a symmetric version of this
concept, which we denote with square brackets. We will say that a
coloring is \emph{$[X, Y]$-regular} if one of $X$ or $Y$ is monochromatic,
and the other avoids that color entirely. Note that this is
equivalent to having no colors shared between $X$ and $Y$, because
there are only 3 colors altogether.
\vspace{3mm}
\noindent \textbf{Proof of Theorem \ref{thm:main:q=3}.}\, Theorem
\ref{thm:main:sparse} already established our result for densities up
to $m \leq \kappa n^2$ for some constant $\kappa$, so we may assume
that $m = \Theta(n^2)$. Routine algebra verifies that Proposition
\ref{prop:solve-opt} and Theorem \ref{thm:asymp-number} establish the
claimed numbers of colorings in this theorem. This leaves us to
concentrate on the optimal graph structure. We use several constants
$\epsilon_1 \ll \epsilon_2 \ll \epsilon_3$, related by $\epsilon_1 =
\epsilon_2^2 = \epsilon_3^3$, and show that there is an eventual
choice that makes our argument work. To avoid confusion, our $O$,
$\Theta$, and $o$ notation will only mask absolute constants.
Let $G = (V, E)$ be an optimal graph whose density $m/n^2$ is between
$\kappa$ and $1/4$. Let $u_1 = \alpha_3 n$ and $u_2 = \alpha_{12} n$,
where the $\alpha$'s are determined by Proposition
\ref{prop:solve-opt} with density parameter $\gamma = m/n^2$. Note
that since $\kappa \leq \gamma \leq \frac{1}{4}$, each $u_i =
\Theta(n)$. Theorem \ref{thm:asymp-stability} gives disjoint subsets
$U_1, U_2 \subset V$ with $|U_i| \in \{\lfloor u_i \rfloor, \lceil u_i
\rceil\}$, such that by editing at most $\epsilon_1 n^2$ edges, we can
transform $G$ into the complete bipartite graph between $U_1$ and
$U_2$, plus isolated vertices. Call that graph $G^*$.
Let $(V_1, V_2)$ be a max-cut partition of the \textbf{non-isolated}
vertices of $G$, such that $V_1$ contains at least as many vertices of
$U_1$ as $V_2$ does. Define $U_i' = U_i \cap V_i$ and $U_i'' = U_i
\cap V_{3-i}$, and let $X_i \subset U_i'$ be the vertices that are
adjacent to all but at most $\epsilon_2 n$ vertices of $U_{3-i}'$.
The following series of claims will complete the proof of Theorem
\ref{thm:main:q=3}.
\begin{description}
\item[Claim 1.] For each $i$, $|U_i'|$ is within $O(\epsilon_1 n)$ of
$u_i$, $|X_i|$ is within $O(\epsilon_2 n)$ of $u_i$, and
$|U_i''| \leq O(\epsilon_1 n)$.
\item[Claim 2.] Almost all colorings of $G$ are \emph{$[X_1,
X_2]$-regular}, meaning that one $X_i$ is monochromatic, and the
other $X_{3-i}$ avoids that color entirely.
\item[Claim 3.] All nonzero degrees are at least $2 \epsilon_3 n$,
except possibly for either (i) only one isolated edge $w_1 w_2$, or
(ii) only one non-isolated vertex $v_0$. We use this to show that
each $|V_i|$ is within $O(\epsilon_2 n)$ of $u_i$. Let $V_0 =
\{w_1, w_2\}$ if exception (i) occurs, let $V_0 = \{v_0\}$ if (ii)
occurs, and let $V_0 = \emptyset$ otherwise. Let $V_i^* = V_i
\setminus V_0$.
\item[Claim 4.] Almost all colorings are $[V_1^*, V_2^*]$-regular.
\item[Claim 5.] Each $V_i^*$ is an independent set, and $v_0$ (if it
exists) has neighbors in only one of the $V_i^*$. Hence $G$ is a
bipartite graph plus isolated vertices.
\item[Claim 6.] $G$ is either a semi-complete subgraph of $K_{|V_1|,
|V_2|}$ plus isolated vertices, or a complete bipartite subgraph
$K_{|V_1^*|, |V_2^*|}$ plus a pendant edge to $v_0$.
\end{description}
\subsection{Supporting claims}
\noindent \textbf{Proof of Claim 1.}\, The sets $|U_i| = \Theta(n)$
are complete to each other in $G^*$, so all $U_i$-vertices have degree
$\Theta(n)$ in $G^*$. As $G$ is at most $\epsilon_1 n^2$ edges away
from $G^*$, the number of $U_i$-vertices that are isolated in $G$ is
at most $\frac{\epsilon_1 n^2}{\Theta(n)} = O(\epsilon_1 n)$. Since
$V_1$ received at least as many non-isolated $U_1$-vertices as $V_2$ did, we
must have $|U_1'| \geq \frac{1}{3} u_1 = \Theta(n)$. By Proposition
\ref{prop:construction-asymp-edges}, $G^*$ has at least $m - O(n)$
edges, all of which cross between $(U_1, U_2)$. So $G$ has at least
$m - O(n)-\epsilon_1 n^2$ edges there, and at least that many between
its max-cut $(V_1, V_2)$. As $G$ has only $m$ edges, this shows that
each $G[V_i]$ spans $O(\epsilon_1 n^2)$ edges. But the sets $U_1',
U_2'' \subset V_1$ are complete to each other in $G^*$, so $|U_1'|
|U_2''| - \epsilon_1 n^2 \leq e(G[V_1]) \leq O(\epsilon_1 n^2)$.
Using $|U_1'| = \Theta(n)$, we indeed obtain $|U_2''| \leq
O(\epsilon_1 n)$.
Then $|U_2'| \geq u_2 - O(\epsilon_1 n) \geq \Theta(n)$, because only
$O(\epsilon_1 n)$ of the $U_2$-vertices are isolated and $|U_2''| \leq
O(\epsilon_1 n)$ of them are in $V_1$. So, repeating the above with
respect to $U_2'$ and $U_1''$ instead of $U_1'$ and $U_2''$, we find
that $|U_1''| \leq O(\epsilon_1 n)$, which then implies that $|U_1'|
\geq u_1 - O(\epsilon_1 n)$.
To control $X_i$, observe that since the $U_i'$ are complete to each
other in $G^*$, each vertex not in $X_i$ contributes at least
$\epsilon_2 n$ to the total edit distance of $\leq \epsilon_1 n^2$
between $G$ and $G^*$. We set $\epsilon_2^2 = \epsilon_1$, so all but
at most $\epsilon_2 n$ vertices of $U_i'$ belong to $X_i$. Since
$|U_i'|$ is within $O(\epsilon_1 n)$ of $u_i$, this gives the desired
result. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 2.}\, For each partition $\{1, 2, 3\}
= C_0 \cup C_1 \cup C_2 \cup C_3$, we count the colorings which use
the colors $C_1$ in $X_1$ but not $X_2$, use $C_2$ in $X_2$ but not
$X_1$, use $C_3$ in both $X_1$ and $X_2$, and do not use $C_0$ in
either $X_1$ or $X_2$. Then we sum over all \emph{irregular}\/
partitions, which are all partitions with $|C_3| \geq 1$. Note that a
coloring is $[X_1, X_2]$-regular if and only if it does not use any
color on both $X_i$, so this sum will include all other colorings.
For any given partition with $|C_i| = c_i$, we have that the corresponding number
of colorings is at most $(|X_1| |X_2|)^{c_3} \cdot c_1^{|X_1| - 3
\epsilon_2 n} \cdot c_2^{|X_2| - 3 \epsilon_2 n} \cdot 3^{n - 2c_3 -
(|X_1| - 3 \epsilon_2 n) - (|X_2| - 3 \epsilon_2 n)}$, by the
calculation in Claim 2 of Section \ref{sec:exact:sparse:details} with
$q$ replaced by 3 and $\sqrt{m}$ replaced by $n$. Using that each
$|X_i|$ is within $O(\epsilon_2 n)$ of $u_i = \Theta(n)$ and all
irregular partitions have $c_3 \geq 1$, and hence $c_1+c_2 \leq 2$, we
find that the sum $\Sigma_1$ of this bound over all $\leq 4^3$
irregular partitions is:
\begin{eqnarray*}
\Sigma_1 &=& \sum_{\text{irregular}} (|X_1| |X_2|)^{c_3} \cdot c_1^{|X_1|
- 3 \epsilon_2 n} \cdot c_2^{|X_2| - 3 \epsilon_2 n}
\cdot 3^{n - 2c_3 - (|X_1| - 3 \epsilon_2 n) - (|X_2| - 3 \epsilon_2 n)} \\
&\leq& e^{O(\epsilon_2 n)} \sum_{\text{irregular}} (\Theta(n) \cdot \Theta(n))^{c_3} \cdot c_1^{u_1} \cdot c_2^{u_2}
\cdot 3^{n - u_1 - u_2} \\
&\leq& e^{O(\epsilon_2 n)} \cdot 4^3 \cdot O(n^6) \cdot
\max_{c_1 + c_2 \leq 2} \left\{ c_1^{u_1} c_2^{u_2} \right\} \cdot
3^{n - u_1 - u_2}
\ \ = \ \ e^{O(\epsilon_2 n)} \cdot 3^{n - u_1 - u_2}.
\end{eqnarray*}
On the other hand, Proposition \ref{prop:solve-opt}, Theorem
\ref{thm:asymp-number}, and routine algebra show that just as in the
sparse case, the optimal graph has at least $\Sigma_0 :=
e^{-\epsilon_1 n} \cdot 2^{u_2} \cdot 3^{n-u_1-u_2}$ colorings. Using
$u_2 = \Theta(n)$, we find that $\Sigma_1 / \Sigma_0 \leq
e^{-\Theta(n)} = o(1)$, i.e., almost all colorings of $G$ are $[X_1,
X_2]$-regular. \hfill $\Box$
\vspace{3mm}
Before proving the next claim, it is convenient to establish the
following lemma, which should be understood in the context of Claim 3.
\begin{lemma}
\label{lem:exact:q=3:sum-degrees}
Let $x,y$ be a pair of non-isolated vertices of $G$, such that $xy$
is not an isolated edge. Then $d(x) + d(y) \geq \min\{|X_1|,
|X_2|\} - 1$.
\end{lemma}
\noindent \textbf{Proof.}\, Suppose for contradiction that there is
such a pair $x,y$ with $d(x) + d(y) \leq \min\{|X_1|, |X_2|\} - 2$.
Also suppose that among the $[X_1 \setminus \{x,y\}, X_2 \setminus
\{x,y\}]$-regular partial colorings of $V \setminus \{x,y\}$,
at least half of them have $X_1 \setminus \{x,y\}$ monochromatic.
(The case when at least half have $X_2 \setminus \{x,y\}$
monochromatic follows by a similar argument.) Let $G'$ be the graph
obtained by deleting the $\leq |X_1|-2$ edges incident to $x$ or
$y$, and adding back as many edges between $x$ and $X_1 \setminus
\{x,y\}$.
Consider any $[X_1 \setminus \{x,y\}, X_2 \setminus \{x,y\}]$-regular
partial coloring of $V \setminus \{x,y\}$. If it is monochromatic in
$X_1$, which happens at least half the time by assumption, then in $G'$ it has
exactly 2 extensions to $x$, followed by 3 further extensions to the
newly-isolated vertex $y$. The rest of the time, the partial coloring
is monochromatic in $X_2$ and uses at most 2 colors in $X_1$. Then,
in $G'$ it has at least 1 extension to $x$, followed by 3 further
extensions to $y$.
On the other hand, since $x$ and $y$ both have degree at least 1 and do not form an isolated edge, one of them,
say $x$, has a neighbor in the rest of the graph. Therefore, in $G$
the same partial coloring has at most $2$ extensions to the vertex $x$,
and then at most $2$ further extensions to the non-isolated vertex $y$.
Yet by Claim 2, almost all colorings of $G$
arise in this way, so the ratio of $G'$-colorings to $G$-colorings is
at least $\frac{1}{2} \big(\frac{2 \cdot 3}{2 \cdot 2} + \frac{1 \cdot
3}{2 \cdot 2}\big) - o(1) = \frac{9}{8} - o(1) > 1$, contradiction.
\hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 3.}\, If there is an isolated edge
$w_1 w_2$, then Lemma \ref{lem:exact:q=3:sum-degrees} implies that any
other vertex $x$ has $d(x) + 1 = d(x) + d(w_1) \geq \min\{|X_1|,
|X_2|\} - 1 = \Theta(n)$, giving exception (i). Otherwise, the same
lemma implies there is at most one vertex $v_0$ of degree $\leq 2
\epsilon_3 n$, giving exception (ii). The rest of this claim, that
each $|V_i|$ is within $O(\epsilon_2 n)$ of $u_i$, follows by the same
argument as in Claim 3 of Section \ref{sec:exact:sparse:details}, but
with $\sqrt{m}$ replaced by $n$ throughout. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 4.}\, Note that a coloring is
$[V_1^*, V_2^*]$-regular if and only if it does not use any color on
both $V_i^*$. So, we bound the colorings that share a color on both
$V_i^*$, but \textbf{(i)} use only one color on $X_1$ and a subset of the other two on
$X_2$, or \textbf{(ii)} one on $X_2$ and a subset of the other two on $X_1$.
Since almost all colorings are $[X_1, X_2]$-regular, it suffices to
show that these two types of colorings constitute $o(1)$-fraction of
all colorings. The same calculation as in Claim 4 of Section
\ref{sec:exact:sparse:details}, with $q$ replaced by 3 and $\sqrt{m}$
replaced by $n$, shows that the number of type-(i) colorings is at
most:
\begin{eqnarray*}
\Sigma_2 &:=& 3 \cdot 2 \cdot |V_1^* \setminus X_1| |V_2^*| \cdot
1^{\epsilon_3 n} \cdot 2^{|V_2^*| - \epsilon_3 n} \cdot
3^{n-|X_1|-|V_2^*|-1} \\
&\leq& e^{O(\epsilon_2 n)} \cdot O(n^2) \cdot 2^{-\epsilon_3 n} \cdot 2^{u_2} \cdot 3^{n-u_1-u_2}.
\end{eqnarray*}
On the other hand, we showed at the end of the proof of Claim 2 that
$G$ had at least $\Sigma_0 = e^{-\epsilon_1 n} \cdot 2^{u_2} \cdot
3^{n-u_1-u_2}$ colorings. Since $\epsilon_1 \ll \epsilon_2 \ll
\epsilon_3$, we have $\Sigma_2 / \Sigma_0 \leq e^{-\Theta(\epsilon_3
n)} = o(1)$, as desired. The analogous result for type-(ii)
colorings follows by a similar argument. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Claim 5.}\, We first show that $v_0$ cannot
have neighbors in both $V_i^*$. Suppose for contradiction that this
is not the case. Almost all colorings are $[V_1^*, V_2^*]$-regular by
Claim 4, so there is $I \in \{1,2\}$ such that $V_I^*$ is
monochromatic in at least $\big(\frac{1}{2} - o(1)\big)$-fraction of
all colorings. Let $G'$ be obtained by deleting the $\leq 2
\epsilon_3 n$ edges incident to $v_0$, and replacing them with edges
to $V_I^*$ only; this is possible since $|V_I^*| = \Theta(n)$. Consider any partial $[V_1^*,
V_2^*]$-regular coloring of $V \setminus \{v_0\}$. If it uses only
one color on $V_I^*$ (which happens at least half the time by assumption), in $G'$
it has exactly 2 extensions to $v_0$. The rest of the time, it still
uses at most 2 colors on $V_I^*$, so there is at least 1 extension.
On the other hand, in $G$ the same partial coloring always has at most
1 extension to $v_0$, because $v_0$'s neighbors in $V_1^*$ are colored
differently from its neighbors in $V_2^*$. By Claim 4, almost all
colorings of $G$ arise in this way, so the ratio of number of
colorings of $G'$ to $G$ is at least $\frac{1}{2} \cdot
\big(\frac{2}{1} + \frac{1}{1}\big) - o(1) = \frac{3}{2} - o(1)$,
contradiction. Therefore, $v_0$ cannot have neighbors in both
$V_i^*$, as claimed.
It remains to show that both $G[V_i^*]$ are empty. Suppose for
contradiction that some $x \in V_2^*$ has neighbors within $V_2^*$.
(The analogous result for $V_1^*$ follows by a similar argument.)
Almost every coloring is $[V_1^*, V_2^*]$-regular, but $V_2^*$ can
never be monochromatic because it contains edges. So, almost all
colorings are in fact $(V_1^*, V_2^*)$-regular.\footnote{Recall that
round brackets denote ``ordered'' regularity, where $V_1^*$ is
monochromatic, and $V_2^*$ has the other two colors.} Therefore,
the same argument as in Claim 5 of Section
\ref{sec:exact:sparse:details}, with $q$ replaced by 3 and $\sqrt{m}$
replaced by $n$, shows that $x$ has at most $\epsilon_3 n$ neighbors
within $V_2^*$.
\vspace{2mm}
\textbf{Case 1: there is some $z_0 \in \boldsymbol{V_0}$.}\, Let $G'$
be obtained by deleting the $\leq \epsilon_3 n$ edges between $x$ and
$V_2^*$ and the $\leq 2 \epsilon_3 n$ edges incident to anything in
V_0$, and adding back as many edges between $z_0$ and $V_1^*$; this is
possible since $|V_1^*| = \Theta(n)$. Every $(V_1^*, V_2^* \setminus \{x\})$-regular partial
coloring of $V \setminus (V_0 \cup \{x\})$ has exactly $2 \cdot 2
\cdot 3^{|V_0|-1}$ extensions to all of $G'$, because $x$ and $z_0$
only need to avoid the single color which appears on $V_1^*$, and the
rest of $V_0$ (if any) is now isolated. On the other hand, in $G$ the
same partial coloring has at most 1 extension to $x$ because $x$ must
avoid the color of $V_1^*$ as well as some (different) color which
appears on its neighbor in $V_2^*$. Then, it has at most
$3^{|V_0|-1}$ further extensions to $V_0 \setminus \{z_0\}$ by the
trivial bound, and at most 2 further extensions to the non-isolated
vertex $z_0$. Note that all $(V_1^*, V_2^*)$-regular colorings of $G$
arise in this way, which is almost all of the total by our remark
before we split into cases. Hence for sufficiently large $m$, $G$ has
fewer colorings than $G'$, contradiction.
\vspace{2mm}
\textbf{Case 2: $\boldsymbol{V_0} = \emptyset$, but there is some
isolated vertex $\boldsymbol{z}$.}\, Define $G'$ by deleting the
$\leq \epsilon_3 n$ edges between $x$ and $V_2^*$, and adding back as
many edges between $z$ and $V_1^*$ (possible since $|V_1^*| = \Theta(n)$). By the same
arguments as in Case 1, all $(V_1^*, V_2^* \setminus \{x\})$-regular
partial colorings of $V \setminus \{x,z\}$ have exactly $2 \cdot 2$
extensions to $G'$, but in $G$ they have at most 1 extension to $x$,
followed by 3 further extensions to the isolated $z$. This produces
almost all colorings of $G$, so $G'$ has more colorings for large $m$,
contradiction.
\vspace{2mm}
\textbf{Case 3: $\boldsymbol{V_1^* \cup V_2^* = V}$.}\, We observed
that the edges in $V_2^*$ force almost all colorings to use only one
color for $V_1^*$ and the other two on $V_2^*$ (hence $G[V_2^*]$ is
bipartite). There are 3 color choices for $V_1^*$, so the number of
colorings of $G$ is $(3+o(1)) \cdot \#\{\text{2-colorings of
$V_2^*$}\}$. Recall that the number of 2-colorings of any bipartite
graph $F$ is precisely $2^r$, where $r$ is its number of connected
components.
We claim that the bipartite $G[V_2^*]$ has at most $|V_2^*| -
2\sqrt{t} + 1$ components, where $t$ is the number of edges in
$G[V_2^*]$. Indeed, for fixed $t$, the optimal configuration is to
have all isolated vertices except for a single nontrivial (bipartite)
component $C$. The sizes $a,b$ of the sides of that bipartite $C$
should minimize $a+b$ subject to the constraint $ab \geq t$, so by the
inequality of the arithmetic and geometric means, we have $a+b \geq
2\sqrt{t}$, as desired. Therefore, $G$ has at most $(3+o(1)) \cdot
2^{|V_2^*| - 2\sqrt{t} + 1}$ colorings.
Let $G'$ be the complete bipartite graph with sides $s$ and $n-s$,
such that $s$ is as large as possible subject to $s(n-s) \geq m$.
Note that $|V_1^*| \cdot |V_2^*| \geq m-t$ because all but $t$ of
$G$'s $m$ edges cross between the $V_i^*$, so Inequality
\ref{ineq:exact:q=3:claim5} routinely shows that $s \geq |V_2^*| -
\lceil \sqrt{t} \rceil$. Since $G'$ is complete bipartite, it has
exactly $3 \cdot 2^s + 3 \cdot 2^{n-s} - 6$ colorings, and thus our
bound on $s$ implies that $G'$ has strictly more than $3 \cdot 2^s
\geq 3 \cdot 2^{|V_2^*| - \lceil \sqrt{t} \rceil}$ colorings. Yet for
$t \geq 3$, one may check that $-\lceil \sqrt{t} \rceil \geq
(-2\sqrt{t} + 1) + 0.4$, giving $G'$ more colorings than $G$, which is
impossible.
We are left with the cases $t \in \{1,2\}$, but for these values there
is always a vertex $y \in V_2^*$ with exactly 1 neighbor $z$ in
$G[V_2^*]$. This forces all edges to be present between the $V_i^*$,
because otherwise we could increase the number of $(V_1^*,
V_2^*)$-regular colorings by a factor of 2 by deleting the edge $yz$
and adding one of the missing edges between the $V_i^*$. The presence
of the complete bipartite graph forces \emph{every}\/ coloring of $G$
to use exactly two colors on $V_2^*$, and the other on $V_1^*$.
Together with the observation that the maximum number of connected
components of $G[V_2^*]$ is $|V_2^*| - t$ when $t \in \{1,2\}$, we
find that $G$ has \emph{exactly}\/ $3 \cdot 2^r \leq 3 \cdot
2^{|V_2^*| - t}$ colorings. On the other hand, we showed above that
$G'$ had more than $3 \cdot 2^{|V_2^*| - \lceil \sqrt{t} \rceil}$
colorings. Since $t = \lceil \sqrt{t} \rceil$ for $t \in \{1,2\}$,
$G'$ has more colorings than $G$, contradiction. \hfill $\Box$
\vspace{3mm}
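As a quick aside, the closed formula $3 \cdot 2^s + 3 \cdot 2^t - 6$ for
the number of proper 3-colorings of the complete bipartite graph
$K_{s,t}$, which the surrounding proofs use repeatedly, is easy to
sanity-check by brute force. The following sketch (in Python;
illustrative only, and not part of the argument) verifies it for small
sides:
\begin{verbatim}
# Brute-force check (illustrative only) of the number of proper
# 3-colorings of the complete bipartite graph K_{s,t}.
from itertools import product

def colorings_K(s, t, q=3):
    count = 0
    for col in product(range(q), repeat=s + t):
        # K_{s,t} is properly colored iff the two sides share no color
        if not set(col[:s]) & set(col[s:]):
            count += 1
    return count

for s in range(1, 5):
    for t in range(1, 5):
        assert colorings_K(s, t) == 3 * 2**s + 3 * 2**t - 6
print("formula verified for 1 <= s, t <= 4")
\end{verbatim}
\vspace{3mm}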
\noindent \textbf{Proof of Claim 6.}\, Let $G_0 = G[V_1 \cup V_2]$ be
the graph formed by the non-isolated vertices of $G$, and let $n_0 =
|V_1 \cup V_2|$. Since the number of colorings of $G$ is precisely
$3^{n - n_0}$ times the number of colorings of $G_0$, the optimality
of $G$ implies that $G_0$ must also be optimal among $n_0$-vertex
graphs with $m$ edges. Furthermore, Claim 4 also implies that almost
all colorings of $G_0$ are $[V_1^*, V_2^*]$-regular.
\vspace{2mm}
\textbf{Case 1: $\boldsymbol{V_0}$ is empty.}\, Let $\{a,b\}$ be the
sizes of the $V_i^*$, with $a \leq b$. If there are less than $a$
missing edges between the $V_i^*$, then Lemma \ref{lem:semi-complete}
shows that $G_0$ is semi-complete, so we are done. On the other hand,
if there are at least $a$ missing edges, then $K_{a, b - 1}$ plus one
isolated vertex has $n_0$ vertices and at least $m$ edges, but also
exactly $(3 \cdot 2^a + 3 \cdot 2^{b-1} - 6) \cdot 3$ colorings. Yet
$G_0$ has no vertices outside $V_1^* \cup V_2^*$, and almost all
colorings are $[V_1^*, V_2^*]$-regular, so $G_0$ has at most $(1+o(1))
\cdot (3 \cdot 2^a + 3 \cdot 2^b )$ colorings, which is smaller,
contradiction. \hfill $\Box$
\vspace{2mm}
\textbf{Case 2: $\boldsymbol{V_0}$ is the single edge $\boldsymbol{w_1
w_2}$.}\, We show that this is impossible. Let $\{a,b\}$ be the
sizes of the $V_i^*,$ with $a \leq b$. Since there are always exactly
6 ways to color the endpoints $\{w_1, w_2\}$ of the isolated edge
independently of the rest of $V$, and almost all colorings are
$[V_1^*, V_2^*]$-regular, $G_0$ has $(6+o(1)) \cdot ( 3 \cdot 2^a + 3
\cdot 2^b )$ colorings. Let $G'$ be the complete bipartite graph
$K_{a-1, b+3}$, and let $G''$ be the complete bipartite graph $K_{a-1,
b+2}$ plus one isolated vertex. Both graphs have the same number of
vertices as $G_0$, so it suffices to show that at least one of them
has more edges and more colorings than $G_0$.
Claim 3 gives $\frac{a}{b} \geq \frac{u_1}{u_2} - O(\epsilon_2)$, and
Proposition \ref{prop:solve-opt} implies that $\frac{u_1}{u_2} \geq
\frac{\log 3/2}{\log 3} \approx 0.37$. So for small $\epsilon_2$ and
large $n$, we have that $ab + 3a - b - 3 > ab+1$, hence $G'$ has more
edges than $G_0$. Also, $G'$ has $3 \cdot 2^{b+3} = 24 \cdot 2^b$
colorings that use only one color on the $(a-1)$-side and the other
two on the $(b+3)$-side. We claim that this already exceeds the
number of colorings of $G_0$ whenever $b \geq a+2$. Indeed, then $2^a
\leq \frac{1}{4} \cdot 2^b$, so the number of colorings of $G_0$ is at
most:
\begin{displaymath}
(6+o(1)) \cdot ( 3 \cdot 2^a + 3 \cdot 2^b )
\ \ \leq \ \
(6+o(1)) \cdot \frac{5}{4} \cdot 3 \cdot 2^b
\ \ = \ \
(22.5 + o(1)) \cdot 2^b,
\end{displaymath}
which is indeed less than the number of colorings of $G'$.
It remains to consider $a \leq b \leq a+1$. Here, $G''$ has $ab + 2a
- b - 2 > ab+1$ edges, and exactly $(3 \cdot 2^{a-1} + 3 \cdot 2^{b+2}
- 6)\cdot 3$ colorings. Using $a \geq b-1$, this is at least
$(1-o(1)) \cdot \frac{17}{16} \cdot 3 \cdot 2^{b+2} \cdot 3 = (38.25 -
o(1)) \cdot 2^b$. On the other hand, using $a \leq b$, the number of
colorings of $G_0$ is at most $(36 + o(1)) \cdot 2^b$, which is smaller.
Therefore, $G''$ is superior on this range, and we are done. \hfill
$\Box$
\vspace{2mm}
\textbf{Case 3: $\boldsymbol{V_0}$ is the single vertex
$\boldsymbol{v_0}$.}\, Let $I$ be the index (unique by Claim 5) such
that $V_I^*$ contains neighbors of $v_0$. Let $J = 3-I$ be the other
index, and let $a = |V_I^*|$, $b = |V_J^*|$. Note that $G_0$ is
bipartite with partition $(V_I^*, V_J^* \cup \{v_0\})$. If at least
$d(v_0)$ edges are missing between $V_I^*$ and $V_J^*$, then we can
isolate $v_0$ while only adding edges between $V_I^*$ and $V_J^*$.
This increases the number of $[V_I^*, V_J^*]$-regular colorings by a
factor of at least $\frac{3}{2}$, and hence (by Claim 4) the total number
of colorings by a factor of at least $\frac{3}{2} - o(1) > 1$, which is
impossible. So, less than $d(v_0)$
edges are missing between $V_I^*$ and $V_J^*$, which implies that less
than $a$ edges are missing between $V_I^*$ and $V_J^* \cup \{v_0\}$.
Hence $G_0$ is a subgraph of $K_{a, b+1}$ with less than $a$ missing
edges.
When $a \leq b+1$, Lemma \ref{lem:semi-complete} shows that $G_0$ is
semi-complete, as desired. It remains to consider $a > b+1$. Some
vertex of the set $V_I^*$ of size $a$ is complete to $V_J^* \cup \{v_0\}$, because
less than $a$ edges are missing between $V_I^*$ and $V_J^* \cup
\{v_0\}$. But we also showed that less than $d(v_0) \leq 2 \epsilon_3
n \ll |V_J^*|$ edges are missing between $V_I^*$ and $V_J^*$, so some
vertex of $V_J^*$ must be complete to $V_I^*$. Thus, Lemma
\ref{lem:subgraph-bipartite:q=3} implies that since $G_0$ is an
optimal graph, the missing edges $E(K_{a,b+1}) \setminus E(G_0)$ form
a star, which must have center $v_0$ because $d(v_0) \leq 2 \epsilon_3
n \ll \min\{a,b\}$. In particular, the number of missing edges is
then exactly $a-d$, where $d = d(v_0)$, and then the same lemma shows
that $G_0$ has exactly $3 \cdot 2^a + 3 \cdot 2^{b+1} + 6 \cdot
(2^{a-d} - 2)$ colorings.
Consider the graph $G'$ obtained by removing a $(b-d)$-edge star from
the complete bipartite graph $K_{a+1,b}$. This has as many vertices
and edges as $G_0$, and $3 \cdot 2^{a+1} + 3 \cdot 2^b + 6 \cdot
(2^{b-d} - 2)$ colorings by Lemma \ref{lem:subgraph-bipartite:q=3}.
The difference between the numbers of colorings of $G'$ and $G_0$ is
\begin{displaymath}
3 \cdot 2^a - 3 \cdot 2^b + 6 \cdot (2^{b-d} - 2^{a-d})
\ \ = \ \ \left(3 - \frac{6}{2^d}\right) \cdot (2^a - 2^b),
\end{displaymath}
which exceeds zero for $d \geq 2$ because we are in the case $a >
b+1$. Optimality of $G_0$ thus forces $d(v_0) = 1$.
We showed there were less than $d(v_0)$ edges missing between the
$V_i^*$, so now we know that the non-isolated vertices of $G$ form a
complete bipartite subgraph $(V_1^*, V_2^*)$ plus a pendant edge to
$v_0$. Finally, observe that $G$ cannot have any isolated vertex $z$,
or else we could replace the pendant edge with the (isolated) edge
$v_0 z$, and this would not change the number of colorings because
every partial coloring of $V \setminus \{v_0\}$ would still have
exactly 2 extensions to the degree-1 vertex $v_0$. But the resulting
graph is not optimal by the same argument as in Case 2 of this claim.
Therefore, $G$ is only a complete bipartite subgraph plus a pendant
edge, with no isolated vertices. This completes the final case of our
final claim, and our entire proof. \hfill $\Box$
\section{Exact result for Tur\'an graphs}
\label{sec:exact:turan}
We now study the extremality of Tur\'an graphs. As we mentioned in
the introduction, Lazebnik conjectured that Tur\'an graphs $T_r(n)$
were the unique graphs that maximized the number of $q$-colorings
whenever $r \leq q$. Note that Theorem \ref{thm:main:q=3} implies
this result for $q=3$ and $r=2$ when $n$ is large, because it shows
that all optimal graphs are bipartite, and no other bipartite graph
has as many edges as $T_2(n)$. In this section, we prove Theorem
\ref{thm:exact:turan}, which confirms (for large $n$) Lazebnik's
conjecture when $r = q-1$, for all remaining $q$. Our proof relies on
the following special case of a result of Simonovits
\cite{Simonovits}. Let $t_r(n)$ denote the number of edges of the
$r$-partite Tur\'an graph $T_r(n)$ with $n$ vertices.
\begin{fact}
\label{fact:simonovits}
Let $F$ be a graph with chromatic number $r+1$. Suppose there is an
edge whose deletion makes $F$ $r$-colorable. Then for all
sufficiently large $n$, the Tur\'an graph $T_r(n)$ is the unique
$n$-vertex graph with at least $t_r(n)$ edges that does not contain
a subgraph isomorphic to $F$.
\end{fact}
We use this fact to prove the following lemma, which we will need
later.
\begin{lemma}
\label{lem:exact:turan}
Let $q \geq 4$ be fixed. The following holds for all sufficiently
large $n$. Let $G \neq T_{q-1}(n)$ have $n$ vertices, and at least
as many edges and $q$-colorings as $T_{q-1}(n)$. Let $\Delta$ be
the difference between the number of edges of $G$ and $T_{q-1}(n)$,
and let $n' = n-(q-1)$. Then there is an $n'$-vertex graph $H$ with
at least $\Delta + 1$ more edges than $T_{q-1}(n')$, and at least
half as many $q$-colorings as $G$ has.
\end{lemma}
\noindent \textbf{Proof.}\, We begin with a convenient technical
adjustment. If $G$ has $k \geq 2$ connectivity components $C_i$ that
are not isolated vertices, then choose vertices $v_i \in C_i$ and glue
the components together by merging all of the $v_i$ into a single
vertex $v$. Add $k-1$ isolated vertices $w_1, \ldots, w_{k-1}$ to
restore the vertex count, and let $G'$ be the resulting graph.
Clearly, $G'$ has as many edges as $G$, and it also is not
$T_{q-1}(n)$ because $G'$ has a vertex whose deletion increases the
number of components while $T_{q-1}(n)$ does not. Furthermore, we
claim that $G$ and $G'$ have the same number of colorings. Indeed, by
symmetry, for an arbitrary color $c$, the total number of colorings of
$G$ is precisely $q^k$ times the number of colorings of $G$ which use
$c$ for every $v_i$. The obvious correspondence gives a bijection
between these colorings and partial colorings of $G' \setminus \{w_1,
\ldots, w_{k-1}\}$ which use $c$ on the merged vertex $v$. Yet the
$w_i$ are isolated, so each of these partial colorings has exactly
$q^{k-1}$ extensions to all of $G'$. Again by symmetry, the total
number of colorings of $G'$ is precisely $q$ times the number that use
$c$ on $v$. Putting everything together, we find that $G$ and $G'$
indeed have the same number of colorings. Therefore, by replacing $G$
with $G'$, we may assume without loss of generality that $G$ has only
one nontrivial connectivity component.
Fact \ref{fact:simonovits} implies that for large $n$, $G$ has a
subgraph $F$ which is the complete $(q-1)$-partite graph on $V(F) =
X_1 \cup \ldots \cup X_{q-1}$ with each part $X_i = \{u_i, w_i\}$
consisting of two vertices, plus an extra edge $u_1 w_1$. Let
$U = \{u_1, \ldots, u_{q-1}\}$ and $W = \{w_1, \ldots, w_{q-1}\}$, and let
$A = U \cup \{w_1\}$.
Let $\delta$ be the difference between the number of edges of
$T_{q-1}(n)$ and $T_{q-1}(n')$. We claim that if there is a set $Y$
of $q-1$ vertices of $A$ such that the sum of their degrees is at most
$\delta + {q-1 \choose 2} - 1$, then $H = G - Y$ satisfies the lemma's
assertion. Clearly, $H$ has the correct number of vertices, and it
has the correct number of edges because $Y \subset A$ induces a
complete graph $K_{q-1}$, so the number of deleted edges is at most
$\delta - 1$. We now show that every $q$-coloring of $H$ extends to
at most two $q$-colorings of $G$.
If $Y = U$, since $\{u_1\} \cup W$ induces a $K_q$-subgraph in $G$,
every coloring of $H \supset W$ has at most 1 extension to $u_1$.
Then, every other $u_i$ has at most 1 choice because $\{u_1, u_i\}
\cup (W \setminus \{w_i\})$ induces a $K_q$-subgraph in which $u_i$ is
the only uncolored vertex. Thus when $Y = U$, every coloring of $H$
colors $W$ and hence has at most 1 extension to $G$. On the other
hand, up to a symmetry of $F$, the only other case is when $Y =
\{w_1\} \cup (U \setminus \{u_{q-1}\})$. As before, $\{u_1\} \cup W$
induces a $K_q$-subgraph in $G$, but this time $H$ contains neither
$u_1$ nor $w_1$ (although it contains the rest). Any partial coloring
of $q-2$ vertices of $K_q$ has only 2 completions, so there are at
most 2 ways to extend any coloring of $H$ to include $u_1$ and $w_1$.
But then every other $u_i$ has at most 1 choice because $\{u_1, u_i\}
\cup (W \setminus \{w_i\})$ induces a $K_q$-subgraph in which $u_i$ is
the only uncolored vertex. Therefore, every coloring of $H$ has at
most 2 extensions to $G$, as claimed.
It remains to consider the case when every set of $q-1$ vertices of
$A$ has degrees summing to at least $\delta + {q-1 \choose 2}$. We
will show that then $G$ has fewer colorings than $T_{q-1}(n)$, which
is impossible. Let $B = V(G) \setminus A$. By an averaging argument,
the sum of degrees of $A$ is at least $\frac{q}{q-1} \big[ \delta +
{q-1 \choose 2} \big]$. Since $|A|=q$, the number of edges between $A$ and $B$ is
at least $\frac{q}{q-1} \big[ \delta + {q-1 \choose 2} \big] - 2 {q
\choose 2}$.
Let $B_0$ be the set of isolated vertices of $G$, and for $2 \leq i
\leq q-1$, let $B_i$ be the set of vertices of $B$ that send $i$ edges
to $A$. Note that no vertex can send $q = |A|$ edges to $A$ because
that would create a $K_{q+1}$-subgraph, making $G$ not $q$-colorable.
So, if we let $B_1 = B \setminus (B_0 \cup B_2 \cup \cdots \cup
B_{q-1})$, then every vertex of $B_1$ either sends exactly 1 edge to
$A$, or it is a non-isolated vertex that sends no edges to $A$. Let
$b_i = |B_i|$. By counting the number of edges between $A$ and $B$,
we obtain:
\begin{equation}
\label{ineq:sum-ibi}
\sum_{i=1}^{q-1} i b_i
\ \ \geq \ \
\frac{q}{q-1} \left[ \delta + {q-1 \choose 2} \right]
- 2 {q \choose 2}.
\end{equation}
We now bound the number of $q$-colorings of $G$ in terms of the $b_i$.
There are exactly $q!$ ways to color $A$ because it induces $K_q$.
Then, there are exactly $q^{b_0}$ ways to extend this partial coloring
to $B_0$ because each isolated vertex has a free choice of the $q$
colors. Next, for every $i \in \{2, \ldots, q-1\}$, each vertex in
$B_i$ has at most $q-i$ color choices left because it is adjacent to
$i$ vertices in $A$, all of which received different colors since
$G[A] = K_q$. Finally, we color the vertices of $B_1$ by considering
them in an order such that whenever we color a vertex, it always has a
neighbor that we already colored. This is possible because our
initial technical adjustment allows us to assume that $G$ has only one
nontrivial connectivity component. Hence each vertex in $B_1$ will
have at most $q-1$ choices. Putting this all together, we find that
the number of $q$-colorings of $G$ is at most
\begin{displaymath}
q! \cdot \prod_{i=0}^{q-1} (q-i)^{b_i}
\ \ \leq \ \
q! \cdot \prod_{i=0}^{q-1} 2^{(q-i-1)b_i}
\ \ \leq \ \
q! \cdot 2^{(q-1)(n-q)} \cdot 2^{
-\frac{q}{q-1} \left[ \delta + {q-1 \choose 2} \right]
+ 2 {q \choose 2}},
\end{displaymath}
where we used the inequality $x+1 \leq 2^x$ for $x \in \mathbb{Z}$,
the identity $\sum b_i = n-q$ (since $\cup B_i = V(G) \setminus A$),
and the bound for $\sum i b_i$ from inequality \eqref{ineq:sum-ibi}.
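Spelled out, the exponent of $2$ in the last factor is
\begin{displaymath}
\sum_{i=0}^{q-1} (q-i-1) b_i
\ \ = \ \
(q-1) \sum_{i=0}^{q-1} b_i - \sum_{i=1}^{q-1} i b_i
\ \ \leq \ \
(q-1)(n-q) - \frac{q}{q-1} \left[ \delta + {q-1 \choose 2} \right]
+ 2 {q \choose 2},
\end{displaymath}
which is how inequality \eqref{ineq:sum-ibi} enters.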
Inequality \ref{ineq:exact:turan:routine} routinely verifies that this
final bound is always strictly less than the number of colorings of
$T_{q-1}(n)$, contradicting our assumption that $G$ had at least that
many colorings. \hfill $\Box$
\vspace{3mm}
\noindent \textbf{Proof of Theorem \ref{thm:exact:turan}.}\, Let $q
\geq 4$ be fixed, and let $N$ be the corresponding minimum number of
vertices for which Lemma \ref{lem:exact:turan} holds (it is valid only
for sufficiently large $n$). We will show that Theorem
\ref{thm:exact:turan} holds for all $n \geq q {N \choose 2}$. So,
suppose for contradiction that $G \neq T_{q-1}(n)$ is an $n$-vertex
graph with at least as many edges and $q$-colorings as $T_{q-1}(n)$.
Define a sequence of graphs as follows. Start with $G_0 = G$. If
$G_i$ is the current graph, stop if $G_i$ has fewer colorings than the
$(q-1)$-partite Tur\'an graph with $n-(q-1)i$ vertices. Otherwise,
let $G_{i+1}$ be the graph $H$ obtained by applying Lemma
\ref{lem:exact:turan} to $G_i$. We claim that this process terminates
before the graph $G_i$ has fewer than $N$ vertices, so we will always
be able to apply the lemma. Indeed, each $G_i$ has exactly $n-(q-1)i$
vertices, so it will take more than ${N \choose 2}$ iterations before
$G_i$ has fewer than $N$ vertices. Yet if $\Delta \geq 0$ is the
difference between the number of edges of $G$ and $T_{q-1}(n)$, then
each $G_i$ has at least $\Delta+i$ more edges than the $(q-1)$-partite
Tur\'an graph with $n-(q-1)i$ vertices. So, after ${N \choose 2}$
iterations, $G_i$ would certainly have more than the maximum number of
edges of an $N$-vertex graph, and we indeed can never reach a graph
with fewer than $N$ vertices.
Therefore, we stop at some $G_t$, which has $n' = n-(q-1)t$ vertices
and fewer colorings than $T_{q-1}(n')$, but at least $2^{-t}$ times as
many colorings as $G$. Divide $n$ by $q-1$, so that $n = s(q-1) + r$
with $0 \leq r < q-1$, and note that $n' = (s-t)(q-1) + r$. Lemma
\ref{lem:turan-number-colorings} calculates that $T_{q-1}(n')$ has
exactly $q! \cdot \big[ (q-1+r)2^{s-t-1} - q + 2 \big]$ colorings, so
$G$ has at most $2^t$ times that many, hence fewer than $q! \cdot
\big[ (q-1+r)2^{s-1} - q + 2 \big]$. Yet by the same lemma, that
final bound equals the number of colorings of $T_{q-1}(n)$. Thus $G$
has fewer colorings than $T_{q-1}(n)$, contradiction. \hfill $\Box$
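\vspace{3mm}
As a sanity check on the count just quoted, the formula of Lemma
\ref{lem:turan-number-colorings} is easy to verify computationally for
small parameters. The following brute-force sketch (in Python;
illustrative only, and not part of the argument) confirms that
$T_{q-1}(n)$, where $n = s(q-1)+r$ with $0 \leq r < q-1$, has exactly
$q! \cdot \big[ (q-1+r)2^{s-1} - q + 2 \big]$ colorings:
\begin{verbatim}
# Brute-force check (illustrative only) of the coloring count of the
# Turan graph T_{q-1}(n) quoted from the lemma above.
from itertools import product
from math import factorial

def turan_colorings(n, k, q):
    base, extra = divmod(n, k)
    # part_of[v] = index of the vertex class of T_k(n) containing v
    part_of = []
    for i in range(k):
        part_of += [i] * (base + (1 if i < extra else 0))
    count = 0
    for col in product(range(q), repeat=n):
        # proper iff vertices in different classes never share a color
        if all(col[u] != col[v]
               for u in range(n) for v in range(u + 1, n)
               if part_of[u] != part_of[v]):
            count += 1
    return count

q = 4
for n in range(3, 8):
    s, r = divmod(n, q - 1)
    assert turan_colorings(n, q - 1, q) == \
        factorial(q) * ((q - 1 + r) * 2**(s - 1) - q + 2)
print("formula verified for q = 4 and 3 <= n <= 7")
\end{verbatim}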
\section{Concluding remarks}
\begin{itemize}
\item We have developed an approach that we hope future researchers
can use to determine the graphs that maximize the number of
$q$-colorings. Theorems \ref{thm:asymp-number} and
\ref{thm:asymp-stability} reduce any instance of this problem to a
quadratically-constrained linear program, which can be solved for
any case of interest. Thus, thanks to modern computer algebra
packages, these theorems imply that for any fixed $q$, approximately
determining the extremal graphs amounts to a finite symbolic
computation.
The remaining challenge is to find analytic arguments which solve
the optimization problem for general $q$, and then refine the
approximate structure into precise results. We accomplished this
for low densities $m/n^2$, and the natural next step would be to
extend the result to the range $\frac{m}{n^2} \leq \frac{1}{4}$. In
this range, and for all $q$, we expect the solution to the
optimization problem to correspond to a bipartite graph plus
isolated vertices. This common form gives hope that perhaps one can
find a solution which works across all $q$.
\item For $q=3$, we also know the approximate form of the extremal
graphs when $\frac{m}{n^2} > \frac{1}{4}$, since Proposition
\ref{prop:solve-opt} solved the entire $q=3$ case of the
optimization problem. However, we did not pursue the precise
structure of the optimal graphs because it appears that their
description is substantially more involved, and this paper was
already quite long.
\item Our methods in Section \ref{sec:reduction-to-opt} can easily be
adapted to maximize the number of graph homomorphisms to an
arbitrary $H$ (not just $K_q$). The analogues of Theorems
\ref{thm:asymp-number} and \ref{thm:asymp-stability} show that for
any fixed $H$, the asymptotic maximum number of homomorphisms from
an $n$-vertex, $m$-edge graph to $H$ can be determined by solving a
certain quadratically-constrained linear program. Although this can
in principle be done, it appears that the computations become rather
messy even for graphs $H$ of small order.
However, in the interesting case when $H$ is the two-vertex graph
consisting of a single edge plus a loop, one can easily determine
the extremal graphs via a direct argument. As we mentioned in the
introduction, this corresponds to maximizing the number of
independent sets. By considering the complement of the graph, this
is equivalent to maximizing the number of cliques.
We claim that for any $n,m$, the same graph that Linial found to
minimize the number of colorings also happens to maximize the number
of cliques. This graph $G^*$ was a clique $K_k$ with an additional
vertex adjacent to $l$ vertices of the $K_k$, plus $n-k-1$ isolated
vertices, where $k,l$ are the unique integers satisfying $m = {k
\choose 2} + l$ with $k > l \geq 0$. We will show that for any
$t$, every $n$-vertex graph $G$ with $m$ edges has at most as many
$t$-cliques as $G^*$. The only nontrivial values of $t$ to check
are $2 \leq t \leq k$.
If $l+2 \leq t \leq k$, then $G^*$ has exactly ${k \choose t}$
cliques of size $t$. Suppose for contradiction that $G$ has more
$t$-cliques. Construct a $t$-uniform hypergraph with at least ${k
\choose t} + 1 = {k \choose t} + {t-1 \choose t-1}$ hyperedges by
defining a hyperedge for each $t$-clique. By the Kruskal-Katona
theorem (see, e.g., the book \cite{Kruskal-Katona}), the number of
2-sets that are contained in some hyperedge is at least ${k \choose
2} + {t-1 \choose 1} \geq {k \choose 2} + (l+1)$, which exceeds
the number of edges of $G$. This contradicts the definition of the
hyperedges, because each of these 2-sets must be an edge of $G$.
On the other hand, if $2 \leq t \leq l+1$, $G^*$ has exactly ${k
\choose t} + {l \choose t-1}$ cliques of size $t$. A similar
argument shows that if $G$ has at least ${k \choose t} + {l \choose
t-1} + 1 = {k \choose t} + {l \choose t-1} + {t-2 \choose t-2}$
cliques of size $t$, then $G$ must have at least ${k \choose 2} + {l
\choose 1} + {t-2 \choose 0} \geq {k \choose 2} + l + 1$ edges,
contradiction.
Therefore, $G^*$ indeed maximizes the number of cliques; a
brute-force sanity check of these clique counts is sketched after this list.
Furthermore, we can classify all extremal graphs, because our
argument shows that any other graph $G$ with as many cliques as
$G^*$ must also have exactly the same number of $t$-cliques for all
integers $t$. In particular, using $t=k$, we see that $G$ must also
contain a $K_k$. If $l \neq 1$, we can use $t=l+1$ to conclude that
the remaining edges form a star with all endpoints in the $K_k$.
Therefore, the maximizer is unique unless $l = 1$, in which case the
extremal graphs are $K_k$ plus an arbitrary edge (not necessarily
incident to the $K_k$).
\end{itemize}
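\vspace{3mm}
Finally, as promised above, here is a brute-force sanity check (in
Python; illustrative only, and not part of the argument) of the
$t$-clique counts ${k \choose t} + {l \choose t-1}$ of Linial's graph
$G^*$ used in the last item:
\begin{verbatim}
# Brute-force check (illustrative only) of the t-clique counts of G*:
# a clique K_k plus one extra vertex joined to l of its vertices.
# Isolated vertices are omitted: they lie in no t-clique for t >= 2.
from itertools import combinations
from math import comb

def linial_parameters(m):
    """The unique integers k > l >= 0 with m = C(k,2) + l."""
    k = 1
    while comb(k + 1, 2) <= m:
        k += 1
    return k, m - comb(k, 2)

def t_cliques(m, t):
    k, l = linial_parameters(m)
    def adjacent(u, v):
        u, v = min(u, v), max(u, v)
        return v < k or u < l    # vertex k is the extra vertex
    return sum(1 for S in combinations(range(k + 1), t)
               if all(adjacent(u, v) for u, v in combinations(S, 2)))

for m in range(1, 30):
    k, l = linial_parameters(m)
    for t in range(2, k + 1):
        assert t_cliques(m, t) == comb(k, t) + comb(l, t - 1)
print("clique counts of G* verified for 1 <= m < 30")
\end{verbatim}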
\section{Introduction}
\label{sec:intro}
Many problems in ergodic theory and dynamical systems involve properties of limits of sequences of invariant probability measures. If the phase space is compact then the space of invariant probability measures is also compact in the weak$^*$ topology, partly because convergence in this topology preserves mass. However, when the phase space is non-compact, the space of invariant probability measures might also be non-compact, and thus mass, as well as other quantities of interest, may escape in the limit. In this paper, we are principally interested in how the entropy of sequences of measures behaves in this setting.
More specifically, we consider countable Markov shifts (CMS) $(\Sigma,\sigma)$, which in general are not even locally compact. We discuss the difficulties with the various classical topologies in this context in the next section, where we also give details of the space of invariant sub-probability measures endowed with the so-called cylinder topology, introduced in \cite{iv}. This topology generalises the vague topology to a non-locally compact setting (see Section~\ref{cyl}). If $(\mu_n)_n$ is a sequence of $\sigma$-invariant probability measures that converges in the cylinder topology to the measure $\mu$ then
the total mass $|\mu|:=\mu(\Sigma) \in [0,1]$. In particular, this topology captures the
escape of mass. Moreover, $\mu$ is an invariant measure and the normalisation $\mu/|\mu|$ is an invariant probability measure (whenever $
\mu$ is not the zero measure). Denote by $h_{\nu}(\sigma)$ the entropy of the
invariant probability measure $\nu$ (see Section~\ref{sec:em} for details). Our first main result answers one of the classical
questions about sequences of measures: how does entropy change in the limit?
\begin{theorem}\label{thm:main} Let $(\Sigma,\sigma)$ be a transitive CMS with finite topological entropy. Let $(\mu_n)_{n}$ be a sequence of $\sigma$-invariant probability measures converging on cylinders to $\mu$. Then
\begin{equation} \label{eq:main}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\leqslant |\mu|h_{\mu/|\mu|}(\sigma)+(1-|\mu|)\delta_\infty.
\end{equation}
If the sequence converges on cylinders to the zero measure then the right hand side is understood as $\delta_\infty$.
\end{theorem}
The number $\delta_{\infty}$ is the \emph{topological entropy at infinity} of the system. Since it plays a crucial role in this article, we define it here, leaving background details to Section~\ref{sec:introh}. The idea is to measure how complicated the dynamics is near infinity. Of course, such a notion only makes sense for dynamical systems defined on non-compact phase spaces. As in the classical entropy theory, we will study two ways of measuring the complexity of the system near infinity: one topological in nature, the other measure theoretic.
\begin{definition} \label{def:ent_inf}
Let $(\Sigma, \sigma)$ be a CMS. Let $M, q \in {\mathbb N}$. For $n\in {\mathbb N}$ let $z_n(M, q)$ be the number of cylinders of the form $[x_0,\ldots,x_{n+1}]$, where $x_0\leqslant q$, $x_{n+1}\leqslant q$, and
\begin{equation*}
\# \left\{ i\in\{0,1,\ldots,n+1\}: x_i\leqslant q \right\} \leqslant \frac{n+2}{M}.
\end{equation*}
Define
\begin{equation*}
\delta_\infty(M,q):=\limsup_{n\to\infty}\frac{1}{n}\log z_n(M, q),
\end{equation*}
and
\begin{equation*}
\delta_\infty(q):=\liminf_{M\to\infty} \delta_\infty(M,q).
\end{equation*}
The \emph{topological entropy at infinity} of $(\Sigma,\sigma)$ is defined by $\delta_\infty:=\liminf_{q\to\infty}\delta_\infty(q)$.
\end{definition}
The measure theoretic counterpart is given by:
\begin{definition} \label{def:ent_meas_inf} Let $(\Sigma, \sigma)$ be a finite entropy CMS. The \emph{measure theoretic entropy at infinity} of $(\Sigma, \sigma)$ is defined by
\begin{equation}
h_\infty :=\sup_{(\mu_n)_n\to 0}\limsup_{n\to\infty}h_{\mu_n}(\sigma), \label{eq:mte}
\end{equation}
where $(\mu_n)_n\to 0$ means that the sequence $(\mu_n)_n$ converges on cylinders to the zero measure.
\end{definition}
Other authors have considered related concepts. Most notably, Buzzi \cite[Definition 1.13]{b} proposed a notion of entropy at infinity for CMS. His definition is given in terms of the graph $G$ which defines the CMS $(\Sigma, \sigma)$: \begin{equation*}
b_{\infty}:=\inf_{F} \inf_{\lambda >0} \sup \left\{h_{\mu}(\sigma) : \mu([F]) < \lambda \right\},
\end{equation*}
where $F$ ranges over the finite sub-graphs of $G$ and $[F]:= \left\{ x \in \Sigma : x_0 \in \mathcal{A}_F \right\}$, where $ \mathcal{A}_F$ denotes the set of symbols appearing as vertices of $F$. It turns out that Buzzi's notion coincides with ours. Indeed, our next result states that all three notions coincide.
\begin{theorem} \label{thm:vpinf} Let $(\Sigma,\sigma)$ be a CMS of finite topological entropy. Then
\begin{equation*}
\delta_{\infty}= h_\infty =b_\infty.
\end{equation*}
\end{theorem}
The equality $ \delta_{\infty}= h_\infty$ can be understood as a variational principle at infinity.
Einsiedler, Lindenstrauss, Michel and Venkatesh \cite[Lemma 4.4]{elmv} were the first to obtain an inequality similar to \eqref{eq:main}. It appeared in their ergodic theoretic proof of Duke's theorem on equidistribution of closed geodesics on the modular surface. After that, similar results in the context of homogeneous dynamics were obtained in \cite[Theorem 1.2]{ek} and \cite[Theorem A]{ekp}. For different classes of geodesic flows defined on non-compact manifolds of negative sectional curvature related results were obtained in \cite[Theorem 1.2]{irv} and \cite[Theorem 1.1]{rv}. In this context the most general result was obtained in \cite[Theorems 1.4 and 1.6]{ve} where an inequality like \eqref{eq:main}
was proved for the geodesic flow defined on an arbitrary complete Riemannian manifold with pinched negative sectional curvature. The manifolds studied are locally compact, thus the topology considered in the space of invariant measures is the vague topology. A more interesting and subtle point is the quantity playing the role of the entropy at infinity. Due to the geometric nature of the examples studied, the entropy at infinity is related to the critical exponent of the Poincar\'e series associated to the non-compact parts of the space (in the geometrically finite case this reduces to the critical exponent of the parabolic subgroups of the fundamental group). Let us mention that the topological entropy at infinity of the geodesic flow was also studied by Schapira and Tapie \cite{st} in their work about the rate of change of the topological entropy under perturbations of the metric.
A major difference with previous works is that in the context of CMS the behaviour of the orbits approaching infinity can be very complicated and that we do not assume the phase space to be locally compact. These are major difficulties that have to be overcome, making the analysis more technical. As a general principle we follow the method employed in \cite{ve} with appropriate modifications. Loosely speaking, the entropy at infinity of the geodesic flow counts geodesics that start and end at a given base point, but do not return near this point for intermediate times. In our setup the entropy at infinity counts orbits that might return near a base point many times, but whose number of returns becomes negligible on average, which can occur due to the lack of local compactness.
There are several interesting consequences of Theorem \ref{thm:main}; some of them are discussed in Section \ref{sec:app}. For example, in Theorem \ref{semicont} it is proved that the entropy map is upper semi-continuous for every transitive finite entropy CMS. The continuity properties of the entropy map have been studied for a long time. A major result in the area is that for expansive systems defined on compact metric spaces the entropy map is upper semi-continuous \cite[Theorem 8.2]{wa}. Another fundamental result is that if $f$ is a $C^{\infty}$ diffeomorphism defined on a smooth compact manifold then again the entropy map is upper semi-continuous \cite[Theorem 4.1]{n}. As explained in Remark \ref{rem:nousc}, for infinite entropy CMS the entropy map is not upper semi-continuous. In a recent article we proved \cite[Corollary 1.2]{itv} that if $(\Sigma, \sigma)$ is a finite entropy transitive CMS then the entropy map is upper semi-continuous when restricted to ergodic measures. A complete solution to the problem is obtained as a consequence of Theorem \ref{thm:main}. In Section \ref{sec:app} we also prove that the set of ergodic measures is `entropy dense' in the space of invariant probability measures. This result not only provides a fine description of the structure of the space of invariant probability measures but also provides an important tool to study Large Deviations in this setting.
There is a classification of transitive CMS in terms of their recurrence properties: they can be transient, null recurrent or positive recurrent (see Definition \ref{def:clas} for $\varphi=0$). Positive recurrent CMS are precisely those with a measure of maximal entropy. A particularly important role is played by strongly positive recurrent CMS (SPR), which form a sub-class of positive recurrent Markov shifts. The dynamical properties of this class of systems are similar to those of sub-shifts of finite type. Buzzi gave a characterisation of SPR shifts using $b_\infty$ in \cite[Proposition 6.1]{b}, based on the work of Gurevich-Zargaryan, Gurevich-Savchenko and Ruette. We note in Proposition \ref{prechar} that we can now restate this result as saying that $(\Sigma,\sigma)$ is SPR if and only if $\delta_\infty<h_{top}(\sigma)$, where $h_{top}(\sigma)$ is the Gurevich entropy of $(\Sigma,\sigma)$ (for precise definitions see Section \ref{sec:tf}). In Section \ref{sec:mme} we use Theorem \ref{thm:main} to obtain stability properties of the measure of maximal entropy for SPR CMS (recovering results from \cite{gs}). Similar arguments are used to prove the existence of equilibrium states for potentials in $C_0(\Sigma)$, the space of test functions for the cylinder topology (see Section \ref{sec:eqst}). To the authors' knowledge, this is the first result on the existence of equilibrium states for CMS that goes beyond regular potentials (e.g. with summable variations or the Walters property). Finally, in Theorem \ref{thm:em} we prove that for SPR systems it is possible to bound the amount of mass that escapes the system in terms of the entropy of the measures. Sequences of measures with large entropy cannot lose much mass.
The entropy at infinity has yet another important appearance in dynamics: it is related to the Hausdorff dimension of the set of points that escape on average (see \cite{aaekmu, ekp, kklm, kp}). These are points for which the frequency of visits to every cylinder equals zero. In particular, no invariant measure is supported on that set. This notion has been studied recently in the contexts of homogeneous and Teichm\"uller dynamics. The motivation comes from work of Dani \cite{da} in the mid 1980s, who proved that singular matrices are in one-to-one correspondence with certain divergent orbits of one parameter diagonal groups on the space of lattices. In Theorem \ref{thm:onave} we prove that the Hausdorff dimension of the set of recurrent points that escape on average is bounded above by $\delta_{\infty} / \log 2$, where the factor $\log 2$ comes from the metric in the symbolic space.
While our interest in this paper lies in the realm of Markov shifts, to provide context we mention some applications of this theory. Symbolic methods have been used to describe dynamical properties of a variety of systems since the $1898$ work of Hadamard on closed geodesics on surfaces of negative curvature, at the latest. Compact Markov shifts have been used to study uniformly hyperbolic dynamical systems defined on compact spaces, see for example the work of Bowen in \cite{bo2}. Many deep results can be obtained with this coding. Mostly after the work of Sarig \cite{sa4}, countable Markov partitions have been constructed for a wide range of dynamical systems. This gives a semiconjugacy between a relevant part of the dynamics, albeit not all of it, and a CMS. Examples of systems for which Markov partitions have been constructed include positive entropy diffeomorphisms defined on compact manifolds \cite{b2,ov, sa4} and Sinai and Bunimovich billiards \cite{lm}. Remarkable results have been proved making use of these codings, for example in \cite[Main Theorem]{bcs} it is shown that a positive entropy $C^{\infty}$ diffeomorphism of a closed surface admits at most finitely many ergodic measures of maximal entropy. Results in this paper apply to all the symbolic codings mentioned above. However, due to topologies possibly not being preserved by the coding, it is not clear that the results pass to the original systems.
In 1980 Katok \cite[Theorem 1.1]{ka} established a formula for the entropy of an invariant probability measure in analogy to the definition of topological entropy of a dynamical system \cite{bo,d}. This formula is now known as \emph{Katok's entropy formula}. An important assumption in \cite[Theorem 1.1]{ka} is the compactness of the phase space. In Section \ref{ent} we prove that Katok's entropy formula holds in the non-compact setting of CMS. We require this formula in the proof of Theorem \ref{thm:main}, but it is also of independent interest.
\section{Preliminaries}
\subsection{Basic definitions for CMS} \label{sec:defcms}
Let $\mathcal{S}$ be an alphabet of countably many symbols and $M$ an $\mathcal{S}\times \mathcal{S}$ matrix with entries $0$ or $1$. The symbolic space associated to $M$ is defined by
\begin{equation*}
\Sigma:=\left\{ (x_0, x_1, \dots) \in \mathcal{S}^{{\mathbb N}_0}: M(x_i, x_{i+1})=1 \text{ for every } i \in {\mathbb N}_0 \right\},
\end{equation*}
where ${\mathbb N}_0:={\mathbb N} \cup \{0\}$. We endow $\mathcal{S}$ with the discrete topology and $\mathcal{S}^{{\mathbb N}_0}$ with the product topology. On $\Sigma$ we consider the induced topology given by the natural inclusion $\Sigma\subset \mathcal{S}^{{\mathbb N}_0}$. We stress that, in general, this is a non-compact space.
The space $\Sigma$ is locally compact if and only if for every $i \in \mathcal{S}$ we have $\sum_{j \in \mathcal{S}} M(i,j ) <\infty$ (see \cite[Observation 7.2.3]{ki}).
The \emph{shift map} $\sigma:\Sigma \to \Sigma$ is defined by $(\sigma(x))_i=x_{i+1}$, where $x=(x_0, x_1, \dots ) \in \Sigma$. Note that $\sigma$ is a continuous map. The pair $(\Sigma,\sigma)$ is called a one sided \emph{countable Markov shift} (CMS). The matrix $M$ can be identified with a directed graph $G$ with no multiple edges (but allowing edges connecting a vertex to itself).
An \emph{admissible word} of length $N$ is a string ${\bf w} =a_0a_1\ldots a_{N-1}$ of letters in $\mathcal{S}$ such that $M(a_i,a_{i+1})=1$, for every $i\in\{0,\ldots,N-2\}$. We use bold letters to denote admissible words. The length of an admissible word ${\bf w}$ is $\ell({\bf w})$.
A \emph{cylinder} of length $N$ is the set
\begin{equation*}
[a_0,\ldots,a_{N-1}]:= \left\{ x=(x_0,x_1,\ldots)\in \Sigma : x_i=a_i \text{ for } 0 \leqslant i \leqslant N-1 \right\}.
\end{equation*}
If $a_0\ldots a_{N-1}$ is an admissible word then $[a_0,\ldots,a_{N-1}] \neq \emptyset$. We use the notation $C_n(x)$ to denote the cylinder of length $n$ containing $x$. Since a cylinder can be identified with an admissible word, we also denote the length of a cylinder $C$ by $\ell(C)$. Note that the topology generated by the cylinder sets coincides with that induced by the product topology.
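The following standard example will serve as a running illustration in later sections; the computations we carry out on it are elementary and are only sketched. The \emph{renewal shift} is the CMS with alphabet $\mathcal{S}={\mathbb N}$ and transition matrix given by $M(1,n)=1$ for every $n\in {\mathbb N}$, $M(n,n-1)=1$ for every $n\geqslant 2$, and $M(i,j)=0$ otherwise. In the associated graph the first-return loops at the vertex $1$ are precisely the paths $1\to n\to n-1\to \cdots\to 2\to 1$, one for each length $n\in{\mathbb N}$. The renewal shift is topologically mixing and satisfies the $\mathcal{F}-$property, but it is not locally compact, since the vertex $1$ has infinitely many outgoing edges.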
The space $\Sigma$ is metrisable. Indeed, let $d:\Sigma \times \Sigma \to {\mathbb R}$ be the function defined by
\begin{equation} \label{metric}
d(x,y):=
\begin{cases}
1 & \text{ if } x_0\ne y_0; \\
2^{-k} & \text{ if } x_i=y_i \text{ for } i \in \{0, \dots , k-1\} \text{ and } x_k \neq y_k; \\
0 & \text{ if } x=y.
\end{cases}
\end{equation}
The function $d$ is a metric and it generates the same topology as that of the cylinder sets. Moreover, the open ball $B(x,2^{-N})$ is precisely $C_{N+1}(x)$. Given $\varphi:\Sigma\to {\mathbb R}$, we define
\begin{equation*}
\text{var}_n(\varphi):=\sup \left\{|\varphi(x)-\varphi(y)|: x,y\in \Sigma \text{ and } d(x,y)\leqslant 2^{-n} \right\}.
\end{equation*}
A function $\varphi:\Sigma\to {\mathbb R}$ is said to have \emph{summable variations} if $\sum_{n\geqslant 2}\text{var}_n(\varphi)<\infty$. A function $\varphi$ is called \emph{weakly H\"older} if there exist $\theta \in (0,1)$ and a positive constant $O \in {\mathbb R}$ such that $\text{var}_n(\varphi)\leqslant O \theta^n$, for every $n\geqslant 2$. A weakly H\"older continuous function is H\"older if and only if it is bounded. The $C^0$-norm of $\varphi$ is $\|\varphi\|_0:=\sup_{x\in \Sigma}|\varphi(x)|$. We denote by
\begin{equation*}
S_n\varphi(x)=\sum_{k=0}^{n-1}\varphi(\sigma^k x),
\end{equation*}
the \emph{Birkhoff sum} of $\varphi$ at the point $x$.
We say that $(\Sigma,\sigma)$ is \emph{topologically transitive} if its associated directed graph $G$ is connected. We say that $(\Sigma,\sigma)$ is \emph{topologically mixing} if for each pair $a,b\in \mathcal{S}$, there exists a number $N(a,b)$ such that for every $n\geqslant N(a,b)$ there is an admissible word of length $n$ connecting $a$ and $b$. There is a particular class of CMS that will be of interest to us:
\begin{definition} \label{def:F}
A CMS $(\Sigma, \sigma)$ is said to satisfy the $\mathcal{F}-$\emph{property} if for every element of the alphabet
$a$ and natural number $n$, there are only finitely many admissible words of length $n$ starting and ending at
$a$.
\end{definition}
\begin{remark}
A CMS $(\Sigma, \sigma)$ satisfies the $\mathcal{F}-$\emph{property} if and only if, for every $a\in\mathcal{S}$ and $n\in{\mathbb N}$, there are only finitely many periodic orbits of length $n$ intersecting $[a]$. Note that every locally compact CMS satisfies the $\mathcal{F}-$\emph{property}.
\end{remark}
\begin{remark}
Definitions and properties equivalent to those discussed in this section can be given for \emph{two sided CMS}; in this case the acting group is ${\mathbb Z}$.
It turns out that, in general, thermodynamic formalism for two sided CMS can be reduced to the one sided case (see \cite[Section 2.3]{sabook}).
\end{remark}
\subsection{Topologies in the space of invariant measures} \label{sec:topo}
The space of invariant measures can be endowed with different topologies, some of which can account for the escape of mass phenomenon whereas others can not. In this section we not only fix notation for later use, but we also recall definitions and properties of several topologies in the space of measures. First note that in this article a measure is always a countably additive non-negative Borel measure defined in the symbolic space $\Sigma$. The mass of a measure $\mu$ is defined as $|\mu|:=\mu(\Sigma)$.
Denote by $\mathcal{M}(\Sigma,\sigma)$ the space of $\sigma$-invariant probability measures on $\Sigma$ and by $\mathcal{M}_{\leqslant 1}(\Sigma,\sigma)$ the space of $\sigma$-invariant measures on $\Sigma$ with mass in $[0,1]$. In other words, $\mathcal{M}_{\leqslant 1}(\Sigma,\sigma)$ is the space of $\sigma$-invariant sub-probability measures on $\Sigma$. Note that $\mathcal{M}(\Sigma,\sigma)\subset \mathcal{M}_{\leqslant 1}(\Sigma,\sigma)$. The set of ergodic probability measures is denoted by $\mathcal{E}(\Sigma, \sigma)$.
\subsubsection{The weak$^*$ topology} \label{weak*}
Denote by $C_b(\Sigma)$ the space of real valued bounded continuous function on $\Sigma$. A sequence of measures $(\mu_n)_n$ in $\mathcal{M}(\Sigma,\sigma)$ converges to a measure $\mu$ in the weak$^*$ topology if for every $f \in C_b(\Sigma)$ we have
\begin{equation*}
\lim_{n \to \infty} \int f d \mu_n = \int f d \mu.
\end{equation*}
Note that since the constant function equal to one belongs to $C_b(\Sigma)$ the measure $\mu$ is also a probability measure. A basis for this topology is given by the collection of sets of the form
\begin{align}\label{defbasis}
V(f_1,\ldots,f_k,\mu,\epsilon):= \left\{ \nu \in \mathcal{M}(\Sigma,\sigma) : \left|\int f_i d \nu - \int f_i d \mu \right| < \epsilon, \text{ for } i \in \{1, \dots, k\} \right\},
\end{align}
where $\mu \in \mathcal{M}(\Sigma,\sigma)$, $(f_i)_{i}$ are elements from $C_b(\Sigma) $ and $\epsilon >0$. Note that in this notion of convergence we can replace the set of test functions (bounded and continuous) by the space of bounded uniformly continuous functions (see \cite[8.3.1 Remark]{bg}).
Convergence with respect to the weak$^*$ topology can be characterised as follows, see \cite[Theorem 2.1]{bi}.
\begin{proposition}[Portmanteau Theorem] \label{port}
Let $(\mu_n)_n, \mu$ be probability measures on $\Sigma$. The following statements are equivalent.
\begin{enumerate}
\item The sequence $(\mu_n)_n$ converges to $\mu$ in the weak$^*$ topology.
\item If $O \subset \Sigma$ is an open set, then $\mu(O) \leqslant \liminf_{n \to \infty} \mu_n(O)$.
\item If $C \subset \Sigma$ is a closed set, then $\mu(C) \geqslant \limsup_{n \to \infty} \mu_n(C)$.
\item If $A \subset \Sigma$ has $\mu(\partial A)=0$, where $\partial A$ is the boundary of $A$, then $ \lim_{n \to \infty} \mu_n(A)= \mu(A)$.
\end{enumerate}
\end{proposition}
Note that the space $\mathcal{M}(\Sigma,\sigma)$ is closed in the weak$^*$ topology (\cite[Theorem 6.10]{wa}). If $\Sigma$ is compact then so is $\mathcal{M}(\Sigma,\sigma)$ with respect to the weak$^*$ topology (see \cite[Theorem 6.10]{wa}). If $\Sigma$ is not compact then $\mathcal{M}(\Sigma,\sigma)$ is, in general, not compact with respect to the weak$^*$ topology; this is the case, for instance, whenever the $\mathcal{F}$-property holds.
Finally, the space $\mathcal{M}(\Sigma,\sigma)$ is a convex set whose extreme points are ergodic measures (see \cite[Theorem 6.10]{wa}).
\subsubsection{The topology of convergence on cylinders} \label{cyl}
In this section we recall the definition and properties of the topology of convergence on cylinders. This topology was introduced and studied in \cite{iv} as a way to compactify $\mathcal{M}(\Sigma,\sigma)$ under suitable assumptions on $\Sigma$.
Let $(C^n)_n$ be an enumeration of the cylinders of $\Sigma$. Given $\mu, \nu\in \mathcal{M}_{\leqslant 1}(\Sigma,\sigma)$ we define $$\rho(\mu,\nu)=\sum_{n=1}^\infty \frac{1}{2^n}|\mu(C^n)-\nu(C^n)|.$$
It follows from the outer regularity of Borel measures on metric spaces that $\rho(\mu,\nu)=0$ if and only if $\mu=\nu$. Moreover, the function $\rho$ defines a metric on $\mathcal{M}_{\le1}(\Sigma,\sigma)$. The topology induced by this metric is called the \emph{topology of convergence on cylinders}. We say that a sequence $(\mu_n)_n$ in $\mathcal{M}_{\leqslant 1}(\Sigma,\sigma)$ \emph{converges on cylinders} to $\mu$ if $$\lim_{n\to\infty}\mu_n(C)=\mu(C),$$ for every cylinder $C\subset \Sigma$. Of course, $(\mu_n)_n$ converges on cylinders to $\mu$ if and only if $(\mu_n)_n$ converges to $\mu$ in the topology of convergence on cylinders.
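To illustrate how this topology detects the escape of mass, consider the renewal shift of Section \ref{sec:defcms} and, for each $n\in{\mathbb N}$, let $\nu_n$ be the unique invariant probability measure supported on the periodic orbit of the loop $1\to n\to n-1\to\cdots\to 2\to 1$; the following elementary computation is included only for orientation. Each symbol $a\in\{1,\ldots,n\}$ is visited exactly once per period, so $\nu_n([a])=1/n$ if $a\leqslant n$, and $\nu_n([a])=0$ otherwise. Since every cylinder is contained in a $1$-cylinder, we obtain $\lim_{n\to\infty}\nu_n(C)=0$ for every cylinder $C$: the sequence $(\nu_n)_n$ converges on cylinders to the zero measure. In contrast, no subsequence of $(\nu_n)_n$ converges in the weak$^*$ topology to an invariant probability measure, since any such limit would assign measure zero to every $1$-cylinder.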
The next lemma shows that when no mass escapes, weak$^*$ convergence and convergence on cylinders coincide.
\begin{lemma}{\cite[Lemma 3.17]{iv}}\label{restriction}
\label{equivtop} Let $(\Sigma,\sigma)$ be a CMS, $\mu$ and $(\mu_n)_n$ be invariant probability measures on $\Sigma$. The following assertions are equivalent.
\begin{enumerate}
\item The sequence $(\mu_n)_n$ converges in the weak$^*$ topology to $\mu$.
\item The sequence $(\mu_n)_n$ converges on cylinders to $\mu$.
\end{enumerate}
\end{lemma}
Let $\Sigma$ be a locally compact space and $(\mu_n)_n, \mu$ in $\mathcal{M}_{\le1}(\Sigma,\sigma)$. The sequence $(\mu_n)_n$ converges to $\mu$ in the \emph{vague topology} if $\lim_{n \to \infty} \int f d \mu_n = \int f d \mu,$ for every continuous function $f$ of compact support (note that the set of test functions can be replaced by the set of continuous functions vanishing at infinity). If $\Sigma$ is locally compact then the topology of convergence on cylinders coincides with the vague topology (see \cite[Lemma 3.18]{iv}). It is important to note that if $\Sigma$ is transitive and not locally compact then the only continuous function of compact support is the zero function, so the vague topology is of no use and the topology of convergence on cylinders is a suitable generalisation (see \cite[Remark 3.13]{iv}).
If $C$ is a cylinder of length $m$, we define \begin{equation*}
C(\geqslant n):= \left\{ x \in C : \sigma^m(x)\in \bigcup_{k\geqslant n}[k] \right\}.
\end{equation*}
For a non-empty set $ A\subset \Sigma$ we define
\begin{equation*}
var^A(f):=\sup \left\{ \left|f(x)-f(y) \right| : x, y \in A \right\}.
\end{equation*}
We declare $var^A(f)=0$ if $A$ is the empty set.
\begin{definition}\label{C_0} We say that $f$ belongs to $C_0(\Sigma)$ if the following three conditions hold:
\begin{enumerate}
\item $f$ is uniformly continuous.
\item $\lim_{n\to\infty}\sup_{x\in [n]}|f(x)|=0.$
\item $\lim_{n\to\infty}var^{C(\geqslant n)}(f)=0,$ for every cylinder $C\subset \Sigma$.
\end{enumerate}
In this case we say that $f$ \emph{vanishes at infinity}.
\end{definition}
The set $C_0(\Sigma)$ is the space of test functions for the cylinder topology (see \cite[Lemma 3.19]{iv}). In other words, a sequence $(\mu_n)_n$ in $\mathcal{M}_{\leqslant 1}(\Sigma,\sigma)$ converges in the cylinder topology to $\mu\in\mathcal{M}_{\leqslant 1}(\Sigma,\sigma)$ if and only if $\lim_{n\to\infty}\int f d\mu_n=\int fd\mu$, for every $f\in C_0(\Sigma)$. Since the cylinder topology generalises the vague topology for non-locally compact CMS, the space $C_0(\Sigma)$ is the natural substitute for the set of continuous functions that vanish at infinity.
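For example, the indicator function $1_{[w]}$ of any cylinder $[w]$ belongs to $C_0(\Sigma)$; we sketch the routine verification. It is uniformly continuous since it is constant on every cylinder of length $\ell(w)$. Condition (2) holds because $1_{[w]}$ vanishes on $[n]$ for every $n\neq w_0$. For condition (3), let $C$ be a cylinder of length $m$: if $m\geqslant \ell(w)$ then $1_{[w]}$ is constant on $C$, and if $m<\ell(w)$ then $1_{[w]}$ vanishes on $C(\geqslant n)$ for every $n>w_m$; in both cases $var^{C(\geqslant n)}(1_{[w]})=0$ for $n$ large. This is consistent with the fact that convergence on cylinders is tested precisely on indicator functions of cylinders.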
The following result was proved in \cite[Theorem 1.2]{iv}, and is an important ingredient for many of our applications.
\begin{theorem}\label{compact} Let $(\Sigma,\sigma)$ be a transitive CMS with the $\mathcal{F}-$property. Then $\mathcal{M}_{\leqslant 1}(\Sigma,\sigma)$ is a compact metrisable space with respect to the cylinder topology. Moreover, $\mathcal{M}_{\le1}(\Sigma,\sigma)$ is affine homeomorphic to the Poulsen simplex. \end{theorem}
We remark that, as shown in \cite[Proposition 4.19]{iv}, Theorem \ref{compact} is sharp in a strong sense: if $(\Sigma, \sigma)$ does not have the $\mathcal{F}-$property, then $\mathcal{M}_{\le1}(\Sigma,\sigma)$ is not compact. More precisely, there exists a sequence of periodic measures that converges on cylinders to a finitely additive measure which is not countably additive.
\subsection{Entropy of a measure} \label{sec:em}
In this section we recall the definition of the entropy of an invariant measure $\mu \in\mathcal{M}(\Sigma,\sigma)$ (see \cite[Chapter 4]{wa} for more details). We take the opportunity to fix the notation that will be used in what follows. A partition $\beta$ of a probability space $(\Sigma, \mu)$ is a countable (finite or infinite) collection of pairwise disjoint subsets of $\Sigma$ whose union has full measure.
The \emph{entropy} of the partition $\beta$ is defined by
\begin{equation*}
H_\mu(\beta):= - \sum_{P \in \beta} \mu(P) \log \mu(P),
\end{equation*}
where $0 \log 0 :=0$.
It is possible that $H_\mu(\beta)=\infty$. Given two partitions $\beta$ and $\beta'$ of $\Sigma$ we define the new partition
\begin{equation*}
\beta \vee \beta':= \left\{P \cap Q : P \in \beta , Q \in \beta' \right\}.
\end{equation*}
Let $\beta$ be a partition of $\Sigma$. We define the partition $\sigma^{-1}\beta:= \left\{ \sigma^{-1}P : P \in \beta \right\}$ and for $n \in {\mathbb N}$ we set
$\beta^n:=\bigvee_{i=0}^{n-1} \sigma^{-i}\beta$. Since the measure $\mu$ is $\sigma$-invariant, the sequence $H_{\mu}(\beta^n)$ is sub-additive.
The \emph{entropy of $\mu$ with respect to} $\beta$ is defined by
\begin{equation*}
h_{\mu}(\beta):= \lim_{n \to\infty} \frac{1}{n} H_{\mu}(\beta^n).
\end{equation*}
Finally, the \emph{entropy} of $\mu$ is defined by
\begin{equation*}
h_{\mu}(\sigma):= \sup \left\{h_{\mu}(\beta) : \beta\text{ a partition with } H_{\mu}(\beta) < \infty \right\}.
\end{equation*}
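To fix ideas we recall a classical example, of which we only sketch the standard computation. Let $\Sigma=\mathcal{S}^{{\mathbb N}_0}$ be a full shift and let $\mu_{\bf p}$ be the Bernoulli measure associated to a probability vector ${\bf p}=(p_1,p_2,\ldots)$. For the partition $\beta=\{[a]:a\in\mathcal{S}\}$ into $1$-cylinders, independence gives $H_{\mu_{\bf p}}(\beta^n)=nH_{\mu_{\bf p}}(\beta)$, and since $\beta$ is a generating partition one obtains
\begin{equation*}
h_{\mu_{\bf p}}(\sigma)=-\sum_{i}p_i\log p_i,
\end{equation*}
provided that $H_{\mu_{\bf p}}(\beta)=-\sum_i p_i\log p_i$ is finite.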
\begin{remark}
Krengel \cite[Remark p.166]{kr} observes that since the entropy of a finite invariant measure $\mu$ is usually defined as the entropy of the normalised measure $\mu/ |\mu|$, the linearity (in the standard sense) of the entropy map is destroyed. Following Krengel's line of thought, the number $ |\mu|h_{\mu/|\mu|}(\sigma)$ appearing in Theorem \ref{thm:main} can be understood as the entropy of the finite measure $\mu$ (see also \cite[Theorem 8.1]{wa} for example).
\end{remark}
\subsection{Thermodynamic formalism for CMS} \label{sec:tf}
Throughout this section we assume that $(\Sigma, \sigma)$ is topologically transitive and that every potential
$\varphi:\Sigma\to {\mathbb R}$ under consideration has summable variations. Let $A \subset \Sigma$ and denote by $1_{A}$ the characteristic function of the set $A$.
In this setting we define,
\begin{equation*}
Z_n(\varphi,a):=\sum_{\sigma^n x=x} e^{S_n \varphi(x)}1_{[a]}(x),
\end{equation*}
where $a\in \mathcal{S}$. The \emph{Gurevich pressure} of $\varphi$ is defined by
\begin{equation*}
P_G(\varphi):=\limsup_{n\to \infty} \frac{1}{n}\log Z_n(\varphi,a).
\end{equation*}
This definition was introduced by Sarig \cite{sa1}, based on the work of Gurevich \cite{gu2}. We remark that the right hand side in the definition of $P_G(\varphi)$ is independent of $a\in \mathcal{S}$, and that if $(\Sigma,\sigma)$ is topologically mixing, then the limsup can be replaced by a limit (see \cite[Theorem 1]{sa1} and \cite[Theorem 4.3]{sabook}).
This definition of pressure satisfies the variational principle (see \cite[Theorem 3]{sa1} and \cite[Theorem 2.10]{ijt}) and can be approximated by the pressure of compact invariant subsets \cite[Theorem 2 and Corollary 1]{sa1}. Indeed,
\begin{eqnarray*} \label{thm:vp}
P_G(\varphi) &=& \sup \left\{P_{\text{top}}(\varphi |K) : K \subset \Sigma \text{ compact and } \sigma^{-1}K=K \right\} \\
&=& \sup_{\mu\in \mathcal{M}(\Sigma,\sigma)} \left\{h_\mu(\sigma)+\int\varphi d\mu: \int \varphi d\mu>-\infty \right\},
\end{eqnarray*}
where $P_{\text{top}}( \cdot)$ is the classical pressure on compact spaces \cite[Chapter 9]{wa}. A measure $\mu \in \mathcal{M}(\Sigma, \sigma)$ is an \emph{equilibrium state} for $\varphi$ if $\int \varphi d\mu>-\infty$ and
\begin{equation*}
P_G(\varphi)=h_\mu(\sigma)+\int \varphi d\mu.
\end{equation*}
The following function will be of importance in this article.
\begin{definition} \label{def:ret}
Let $A \subset \Sigma$. Denote by $R_A(x):=1_{A}(x)\inf\{n\geqslant 1:\sigma^n x\in A\}$ the first return time map to the set $A$. In the particular case in which the set $A$ is a cylinder $[a]$ we write $R_a(x):= R_{[a]}(x)$.
\end{definition}
Sarig \cite[Section 4.2]{sa1} introduced the following:
\begin{equation*}
Z_n^*(\varphi,a):=\sum_{\sigma^n(x)=x} e^{S_n\varphi(x)}1_{[R_a=n]}(x),
\end{equation*}
where $[R_a =n]:=\left\{x \in \Sigma : R_a(x)=n \right\}$. Extending notions from the theory of Markov chains, Sarig \cite{sa1} classified potentials according to their recurrence properties.
\begin{definition}[Classification of potentials] \label{def:clas} Let $(\Sigma , \sigma)$ be a topologically transitive CMS and $\varphi$ a summable variation potential with finite Gurevich pressure. Define $\lambda=\exp \left( P_G(\varphi) \right)$ and fix $a\in \mathcal{S}$.
\begin{enumerate}
\item If $\sum_{n\geqslant 1}\lambda^{-n}Z_n(\varphi,a)$ diverges we say that $\varphi$ is \emph{recurrent}.
\item If $\sum_{n\geqslant 1}\lambda^{-n}Z_n(\varphi,a)$ converges we say that $\varphi$ is \emph{transient}.
\item If $\varphi$ is recurrent and $\sum_{n\geqslant 1}n\lambda^{-n}Z^*_n(\varphi,a)$ converges we say that $\varphi$ is \emph{positive recurrent}.
\item If $\varphi$ is recurrent but $\sum_{n\geqslant 1}n\lambda^{-n}Z^*_n(\varphi,a)$ diverges we say that $\varphi$ is \emph{null recurrent}.
\end{enumerate}
\end{definition}
Topological transitivity implies that the above definitions do not depend on the choice of the symbol $a$.
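As an illustration, we sketch the standard computation for the renewal shift introduced in Section \ref{sec:defcms}, taking $\varphi=0$ and $a=1$. Every first-return loop at the vertex $1$ has the form $1\to n\to n-1\to\cdots\to 2\to 1$, so $Z_n^*(0,1)=1$ for every $n\geqslant 1$; on the other hand, loops of length $n$ based at $1$ correspond to compositions of $n$, so $Z_n(0,1)=2^{n-1}$. Therefore $P_G(0)=\log 2$ and $\lambda=2$, and
\begin{equation*}
\sum_{n\geqslant 1}\lambda^{-n}Z_n(0,1)=\sum_{n\geqslant 1}\frac{1}{2}=\infty, \qquad \sum_{n\geqslant 1}n\lambda^{-n}Z^*_n(0,1)=\sum_{n\geqslant 1}\frac{n}{2^n}<\infty.
\end{equation*}
Hence the potential $\varphi=0$ is positive recurrent on the renewal shift.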
\begin{remark}
The classification in Definition \ref{def:clas} is invariant under the addition of coboundaries and constants. That is, if $\psi:\Sigma \to {\mathbb R}$ is of summable variations and $C \in {\mathbb R}$ we have that: the potential $\varphi$ is recurrent (resp. transient) if and only if the potential $\varphi + \psi - \psi \circ \sigma + C$ is recurrent (resp. transient). Moreover,
the potential $\varphi$ is positive recurrent (resp. null recurrent) if and only if the potential $\varphi + \psi - \psi \circ \sigma + C$ is positive recurrent (resp. null recurrent).
\end{remark}
The following result describes existence and uniqueness of equilibrium states. Parts (\ref{sumvar1}) and (\ref{sumvar2}) follow from Theorems 1.1 and 1.2 of \cite{bs}, respectively.
\begin{theorem} \label{clas} Let $(\Sigma , \sigma)$ be a topologically transitive CMS and $\varphi$ a summable variation potential with finite Gurevich pressure. Then
\begin{enumerate}
\item \label{sumvar1} There exists at most one equilibrium state for $\varphi$.
\item \label{sumvar2} If $\varphi$ has an equilibrium state then $\varphi$ is positive recurrent.
\end{enumerate}
\end{theorem}
In this article the potential $\varphi=0$ will play a particularly important role. The \emph{topological entropy} of $(\Sigma,\sigma)$, that we denote by $h_{top}(\sigma)$, is defined as the Gurevich pressure of the potential $\varphi=0$, that is
$$h_{top}(\sigma):=P_G(0).$$
We say that $(\Sigma,\sigma)$ is recurrent, transient, null recurrent or positive recurrent according to the corresponding properties of $\varphi=0$. If $(\Sigma,\sigma)$ is positive recurrent, then it admits a measure of maximal entropy, which is unique by Theorem \ref{clas}. This was first proved by Gurevich \cite{gu1}.
\begin{remark}
Note that every finite entropy, transitive CMS satisfies the $\mathcal{F}-$property (see Definition \ref{def:F}). Indeed, infinitely many admissible words of length $n$ starting and ending at some symbol $a$ would force $Z_{n-1}(0,a)=\infty$, and hence $h_{top}(\sigma)=\infty$.
\end{remark}
\subsection{Strongly positive recurrent CMS} Properties of CMS may be significantly different from those of sub-shifts of finite type defined on finite alphabets.
In this section we describe a special class of CMS with properties analogous to those of compact sub-shifts. This study is based on work of Vere-Jones \cite{v1,v2} developed during the 1960s, where he first introduced the analogous class in the setting of stochastic matrices. Several people have contributed to the understanding of this class, for example,
Salama \cite{s}, Gurevich and Savchenko \cite{gs}, Sarig \cite{sa3}, Ruette \cite{r}, Boyle, Buzzi and G\'omez \cite{bbg} and Cyr and Sarig \cite{cs}. In these works the following quantities, or related ones, have been defined and studied.
\begin{definition}
Let $(\Sigma, \sigma)$ be a topologically transitive CMS and $a \in \mathcal{S}$. Let
\begin{equation*}
\Delta_\infty ([a]):=\limsup_{n\to \infty} \frac{1}{n}\log Z_n^*(0,a),
\end{equation*}
and
\begin{equation*}
{\Delta}_\infty:=\inf_{a\in \mathcal{S}} \Delta_\infty([a]).
\end{equation*}
\end{definition}
\begin{remark}
The number $\Delta_\infty ([a])$ can depend on the symbol $a$, see \cite[Remark 2.1]{r}.
\end{remark}
\begin{definition}[Strongly positive recurrent CMS] \label{def:spro} Let $(\Sigma,\sigma)$ be a topologically transitive CMS with finite topological entropy. We say that $(\Sigma,\sigma)$ is \emph{strongly positive recurrent} (SPR) if ${\Delta}_\infty<h_{top}(\sigma)$.
\end{definition}
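For instance, for the renewal shift the computation sketched in Section \ref{sec:tf} gives $Z_n^*(0,1)=1$ for every $n\geqslant 1$, so $\Delta_\infty([1])=0$ and hence $\Delta_\infty\leqslant 0<\log 2=h_{top}(\sigma)$: the renewal shift is strongly positive recurrent.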
\begin{remark} \label{rem_conspr} A strongly positive recurrent CMS is positive recurrent. In particular it admits a unique measure of maximal entropy. Moreover, with respect to this measure the system $(\Sigma, \sigma)$ is exponentially recurrent (see \cite[Proposition 2.3]{bbg} for precise statements). The class of strongly positive recurrent CMS was intensively studied by Gurevich and Savchenko in \cite{gs}. Note, however, that in \cite{gs} these are called \emph{stable-positive recurrent}. We also remark that there exist CMS that are positive recurrent but not strongly positive recurrent (see \cite[Example 2.9]{r}).
\end{remark}
\begin{remark} \label{rem:spr}
Strongly positive recurrent CMS have the property that the entropy is concentrated inside the system and not near infinity. Indeed, let $(\Sigma, \sigma)$ be a CMS and $G$ its associated graph. Gurevich and Zargaryan \cite{gz} (see also \cite{gs}) showed that a condition equivalent to SPR is the existence of a finite connected subgraph $H \subset G$ such that there are more paths inside than outside $H$ (in terms of exponential growth). See \cite[Section 3.1]{r} for precise statements.
On the other hand, for graphs that are not strongly positive recurrent the entropy is supported by the infinite paths that spend most of the time outside a finite subgraph (see \cite[Proposition 3.3]{r}).
\end{remark}
Along the lines of the observations made in Remark \ref{rem:spr}, in the next section (see Proposition \ref{prechar}) we characterise SPR for CMS as those having entropy at infinity strictly smaller than the topological entropy.
Sarig \cite{sa3} generalised the notion of strong positive recurrence to potentials $\varphi$. Using his definition, we recover the topological notion in Definition~\ref{def:spro} for the potential $\varphi\equiv 0$, i.e., this potential is strongly positive recurrent if and only if $(\Sigma,\sigma)$ is SPR (see \cite[Remark 2.11]{r}).
For Sarig's class of potentials the associated thermodynamic formalism enjoys most of the properties of the thermodynamics for H\"older potentials on sub-shifts of finite type. In particular, the transfer operator corresponding to a strongly positive recurrent potential has a spectral gap (see \cite[Theorem 2.1]{cs}). This readily implies that the pressure function is analytic and there exist formulas for its derivatives (\cite[Theorems 3 and 4]{sa3} and \cite[Theorem 1.1]{cs}), there exists a unique equilibrium state and it has exponential decay of correlations and satisfies the Central Limit Theorem (\cite[Theorem 1.1]{cs}). Moreover, in the space of potentials strong positive recurrence is a robust property. Indeed, it was proved by Cyr and Sarig \cite[Theorem 2.2]{cs} that the space of strongly positive recurrent potentials is open and dense (with respect to the uniform metric) in the space of weakly H\"older potentials with finite pressure.
\subsection{Entropy at infinity}\label{sec:introh}
A fundamental consequence of Theorem \ref{thm:main} is that a great deal of dynamical information of the system is captured by its complexity at infinity. As discussed in the introduction, we have defined two different ways of quantifying this complexity. Namely, the topological entropy at infinity (Definition \ref{def:ent_inf}) and the measure theoretic one (Definition \ref{def:ent_meas_inf}).
In this section we will elaborate on these notions and put our results into context.
We first discuss the topological entropy at infinity of $(\Sigma,\sigma)$, given in Definition \ref{def:ent_inf}. Observe that if $M_1<M_2$, then $z_n(M_2, q)\leqslant z_n(M_1, q)$,
for every $n, q \in {\mathbb N}$, so
\begin{equation*}
\delta_\infty(q)=\inf_M \delta_\infty(M,q)=\lim_{M\to\infty}\delta_\infty(M,q).
\end{equation*}
If $(\Sigma,\sigma)$ is a transitive CMS then for every $a,b\in \mathcal{S}$,
\begin{equation}
\label{eq:deltainfagain}
\delta_\infty(M,q)=\limsup_{n\to\infty}\frac{1}{n}\log z_n(M,q,a,b),
\end{equation}
where $z_n(M,q,a,b)$ is the number of cylinders of the form $[x_0,\ldots,x_{n+1}]$, where $x_0=a$, $x_{n+1}=b$, and
\begin{equation*}
\# \left\{i\in\{0,\ldots,n+1\}:x_i\leqslant q \right\}\leqslant \frac{n+2}{M}.
\end{equation*}
Note that $q<q'$ implies the inequality $z_n(M,q',a,b)\leqslant z_n(M,q,a,b)$. In particular $(\delta_\infty(M,q))_q$ is decreasing in $q$. We conclude that
\begin{align}\label{infinf}
\delta_\infty=\inf_{q} \delta_\infty(q)=\inf_{M,q}\delta_\infty(M,q).
\end{align}
Since in our results we will usually assume that the symbolic space is transitive, we can consider \eqref{infinf} as
the definition of the topological entropy at infinity.
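To illustrate these quantities we sketch an estimate on the renewal shift of Section \ref{sec:defcms}; the details are routine. Fix $a=b=1$ and $M, q\in{\mathbb N}$. A cylinder counted by $z_n(M,q,1,1)$ corresponds to a concatenation of first-return loops at the vertex $1$, and each loop visits the vertex $1$, so a path made of $r$ loops has at least $r$ indices $i$ with $x_i\leqslant q$; the defining constraint then forces $r\leqslant (n+2)/M$. Since a concatenation of loops is determined by the lengths of its loops, $z_n(M,q,1,1)$ is bounded by the number of compositions of $n+1$ into at most $(n+2)/M$ parts, whose exponential growth rate tends to $0$ as $M\to\infty$. Hence $\delta_\infty=0$ for the renewal shift; in particular $\delta_\infty=0<\log 2=h_{top}(\sigma)$, in accordance with the characterisation of SPR shifts given in Proposition \ref{prechar} below.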
We now consider the measure theoretic entropy at infinity, defined for finite entropy CMS as
\begin{equation*}
h_\infty :=\sup_{(\mu_n)_n\to 0}\limsup_{n\to\infty}h_{\mu_n}(\sigma),
\end{equation*}
where $(\mu_n)_n\to 0$ means that the sequence $(\mu_n)_n$ converges on cylinders to the zero measure. Note that the finite entropy assumption, and more generally the $\mathcal{F}-$property, ensures the existence of sequences of measures converging on cylinders to the zero measure (see \cite[Lemma 4.16]{iv}). In particular, $h_{\infty}$ is well defined.
In \cite[Example 4.17]{iv}, an example of a CMS made of infinitely many loops of length two based at a common vertex is considered: it has infinite topological entropy and admits no sequence of invariant probability measures converging on cylinders to the zero measure, since every invariant probability measure gives weight at least $1/2$ to the cylinder corresponding to the common vertex.
\label{rmk:finite ent}
In Section \ref{entinf1} we will prove that the topological and the measure theoretic entropies at infinity coincide. This has several consequences; in particular we obtain that Theorem \ref{thm:main} is sharp. Indeed, $\delta_{\infty}$ is the smallest number for which inequality \eqref{eq:main} holds.
In the context of CMS the entropy at infinity was already investigated by Gurevich and Zargaryan \cite{gz}, Ruette \cite{r} and Buzzi \cite{b}. It is important to mention that they also considered two flavours of entropy at infinity, a topological and a measure theoretic version. It is proven by Ruette \cite{r} that both notions coincide (for a precise statement see \cite[Proposition 6.1]{b}). It turns out that the notions of entropy at infinity presented in this work coincide with theirs. Recall that if $G$ is the graph which determines $(\Sigma,\sigma)$, then
\begin{equation*}
b_{\infty}=\inf_{F} \inf_{\lambda >0} \sup \left\{h_{\mu}(\sigma) : \mu([F]) < \lambda \right\},
\end{equation*}
where $F$ ranges over the finite sub-graphs of $G$ and $[F]:= \left\{ x \in \Sigma : x_0 \in \mathcal{A}_F \right\}$, where $ \mathcal{A}_F$ denotes the set of symbols appearing as vertices of $F$. We first show the relation between $h_\infty$ and $b_\infty$.
\begin{lemma}
For a sequence $(\mu_n)_n$ in $\mathcal{M}(\Sigma, \sigma)$, the following are equivalent:
\begin{itemize}
\item[(a)] for any collection of cylinders $C^1, \ldots, C^N,$ and $\varepsilon>0$, there is $n_0\in {\mathbb N}$ such that $\mu_n(\bigcup_{i=1}^NC^i)<\varepsilon$ for all $n\geqslant n_0$;
\item[(b)] for any finite subgraph $F$ of $G$ and any $\varepsilon>0$, there is $n_1\in {\mathbb N}$ such that $\mu_n([F])<\varepsilon$ for all $n\geqslant n_1$.
\end{itemize}
\label{lem:cylBuz}
\end{lemma}
An easy consequence of the lemma is that convergence on cylinders in this setting corresponds to the type of limits appearing in the definition of $b_\infty$, and thus $b_\infty=h_\infty$.
\begin{proof}[Proof of Lemma~\ref{lem:cylBuz}]
Since $[F]$ is a finite union of $1$-cylinders, the fact that (a) implies (b) is clear. To prove the reverse implication, observe that if $C^1, \ldots, C^N$ is a collection of cylinders then we can take as our subgraph the finite subgraph $F$ spanned by the first coordinates of the $C^i$, since then $\bigcup_{i=1}^N C^i\subset [F]$.
\end{proof}
As previously mentioned, in Section \ref{entinf1} we will prove that $h_\infty=\delta_\infty$. This implies that the entropy at infinity defined in this section coincides with the previously defined one. One consequence is that, since \cite[Proposition 6.1]{b} shows that $b_{\infty}<h_{top}(\sigma)$ characterises SPR, we have the following alternative characterisation:
\begin{proposition}\label{prechar} A topologically transitive CMS $(\Sigma,\sigma)$ is SPR if and only if $h_{\infty}<h_{top} (\sigma)$ if and only if $\delta_{\infty}<h_{top} (\sigma)$.\end{proposition}
This result is consistent with the comments in Remark \ref{rem:spr}. Indeed, SPR systems are those for which the entropy is not concentrated at infinity; the inequality $\delta_{\infty}<h_{top} (\sigma)$ has a wealth of dynamical consequences (see Remark \ref{rem_conspr}).
From a slightly different point of view, it was not realised until recently that the entropy at infinity has a particularly important role in the regularity of the entropy map. In the context of homogeneous dynamics, for the diagonal action on $G/\Gamma$, where $G$ is an ${\mathbb R}$-rank 1 semisimple Lie group with finite centre and $\Gamma\leqslant G$ a lattice, a formula like Theorem \ref{thm:main} was obtained in \cite{ekp}. In that context the constant playing the role of the entropy at infinity is half the topological entropy of the flow. It was later proved in \cite{kp} that half the topological entropy is in fact sharp and equal to the measure theoretic entropy at infinity in that setup. The method employed in \cite{ekp} was used in \cite{rv} to prove that a similar result holds for the geodesic flow on a geometrically finite manifold. Unfortunately, an obstruction to running the method from \cite{ekp} is the existence of periodic orbits that escape to infinity. This issue was overcome in \cite{ve}, where the results in \cite{rv} were generalised to all complete negatively curved manifolds. For CMS the existence of periodic orbits that escape phase space is quite common, so our approach is similar to the one in \cite{ve}. Additional complications arise from the possible lack of local compactness of $\Sigma$. In Section \ref{mainine} and Section \ref{finalproof} we will address these issues and prove Theorem \ref{thm:main}.
The entropy at infinity has further applications to suspension flows, entropy density, the dimension of points which escape on average, existence of equilibrium states and bounds on mass escape, all of which we give in Section~\ref{sec:app}.
\section{Katok's entropy formula } \label{ent}
In the early 1970s Bowen \cite{bo} and Dinaburg \cite{d} provided a new definition of topological entropy of a dynamical system. Inspired by these results, Katok \cite{ka} established a formula for the measure theoretic entropy in analogy to the definition of topological entropy by Bowen and Dinaburg. We now recall his result in a particular context.
Let $(\Sigma, \sigma)$ be a CMS and let $d$ be the metric on $\Sigma$ defined in \eqref{metric}. The dynamical metric $d_n$ is defined by the formula
\begin{equation*}
d_n(x,y):=\max_{k\in \{0,\ldots,n-1\}}d(\sigma^kx,\sigma^ky).
\end{equation*}
The open ball of radius $r$ centred at $x$ with respect to the metric $d_n$ is denoted by $B_n(x,r)$. By the definition of the metric $d$ we know that $B_n(x, 2^{-N})=C_{n+N}(x)$. A ball of the form $B_n(x,r)$ is called an $(n,r)$-dynamical ball. The following theorem was proved in \cite[Theorem 1.1]{ka}.
\begin{theorem}\label{kat} Let $(\Sigma,\sigma)$ be a sub-shift of finite type defined on a finite alphabet and $\mu$ an ergodic $\sigma$-invariant probability measure. Then
\begin{equation} \label{eq:kat}
h_\mu(\sigma)=\lim_{\epsilon\to 0} \liminf_{n\to \infty}\dfrac{1}{n}\log N_\mu(n,\epsilon,\delta),
\end{equation}
where $N_\mu(n,\epsilon,\delta)$ is the minimum number of $(n,\epsilon)$-dynamical balls needed to cover a set of $\mu$-measure strictly bigger than $1-\delta$. In particular the limit above does not depend on $\delta\in (0,1)$.
\end{theorem}
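For orientation, consider the elementary example of the full shift on two symbols with the $(\frac{1}{2},\frac{1}{2})$-Bernoulli measure $\mu$. An $(n,1)$-dynamical ball is exactly a cylinder of length $n$ and has measure $2^{-n}$, so covering a set of measure strictly bigger than $1-\delta$ requires more than $(1-\delta)2^n$ such balls, while $2^n$ of them cover the whole space. Hence
\begin{equation*}
\lim_{n\to\infty}\frac{1}{n}\log N_\mu(n,1,\delta)=\log 2=h_\mu(\sigma),
\end{equation*}
independently of $\delta\in (0,1)$, in accordance with \eqref{eq:kat} and with Theorem \ref{katformula} below.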
The relation established in \eqref{eq:kat} is known as Katok's entropy formula. It turns out that Katok's proof is rather flexible. It was observed by Gurevich and S. Katok \cite[Section 4]{gk} and also by Riquelme \cite[Theorem 2.6]{ri} that the proof in \cite[Theorem 1.1]{ka} yields that if $(X,d)$ is a metric space (not necessarily compact) and $T: X \to X$ a continuous map then
\begin{equation*}
h_{\mu}(T) \leqslant \lim_{\epsilon \to 0} \liminf_{n \to \infty} \frac{\log N_{\mu}(n, \epsilon ,\delta)}{n}.
\end{equation*}
The compactness assumption on $X$ is used in the proof of the other inequality. It is routine to check that the compactness assumption can be replaced by the existence of a totally bounded metric.
This section is devoted to proving that formula \eqref{eq:kat} holds for CMS of finite topological entropy. Moreover, we will prove the limit is independent of $\epsilon$.
We prove:
\begin{theorem} \label{katformula} Let $(\Sigma, \sigma)$ be a CMS and $\mu$ an ergodic $\sigma$-invariant probability measure. Then for every $\delta\in (0,1)$ we have
$$h_\mu(\sigma)\leqslant \lim_{n\to \infty}\frac{1}{n}\log N_\mu(n,1,\delta).$$
If $(\Sigma,\sigma)$ has finite topological entropy, then
$$h_\mu(\sigma)= \lim_{n\to \infty}\frac{1}{n}\log N_\mu(n,1,\delta).$$
\end{theorem}
For simplicity we fix once and for all an identification of the alphabet $\mathcal{S}$ with ${\mathbb N}$ and we use ${\mathbb N}$ instead. Define the following collection of sets: for every $m \in {\mathbb N}$ let
\begin{equation} \label{def:km}
K_m:=\bigcup_{s=1}^m [s].
\end{equation}
Note that if $\Sigma$ is locally compact, then $K_m$ is compact for every $m\in {\mathbb N}$. To every sequence of natural numbers $(a_i)^\infty_{i=0}$ we associate the set
\begin{equation} \label{def:kai}
K((a_i)_i):= \Sigma\cap\prod_{i\geqslant 0} \{1,\ldots,a_i\}.
\end{equation}
Observe that $K((a_i)_i)$ is the intersection of a closed set with a compact set and is thus a compact subset of $\Sigma$. Moreover, every compact set $K\subset \Sigma$ is contained in a set of the form $K((a_i)_i)$. The following lemma follows directly from \cite[Theorem 3.2]{pa}. For concreteness we provide a simple proof of this general fact.
\begin{lemma}\label{lem:compact} Let $\mu$ be a Borel probability measure on $\Sigma$. For every $\epsilon>0$, there exists a sequence of natural numbers $(a_i)_i$ such that $\mu(K((a_i)_i))>1-\epsilon$.
\end{lemma}
\begin{proof} Fix a sequence $(b_i)_i$, with $b_i\in (0,1)$ for every $i\in{\mathbb N}$, satisfying
\begin{equation*}
\left(1-\frac{\epsilon}{2}\right)\prod_{i=1}^\infty b_i> 1-\epsilon
\end{equation*}
(such a sequence exists: take, for instance, $b_i:=\theta^{2^{-i-1}}$ with $\theta:=\frac{1-\epsilon}{1-\epsilon/2}$, so that $\prod_{i=1}^\infty b_i=\theta^{1/2}>\theta$). We will construct the sequence $(a_i)_i$ inductively. Choose $a_0$ such that $\mu(\bigcup_{i=1}^{a_0} [i])>1-\frac{\epsilon}{2}$. For every $i\in \{1,\ldots,a_0\}$ we choose $c(i)\in {\mathbb N}$ such that
\begin{equation*}
\mu\left(\bigcup^{c(i)}_{k= 1}[ik]\right)\geqslant \mu([i])b_1.
\end{equation*}
Let $a_1:=\max_{i\in \{1,\ldots,a_0\}} c(i)$. For $(i_1,i_2)\in \prod_{i=0}^1\{1,\ldots,a_i\}$ we define $c(i_1,i_2)$ such that
\begin{equation*}
\mu\left(\bigcup^{c(i_1,i_2)}_{k= 1}[i_1i_2k]\right)\geqslant \mu([i_1i_2])b_2.
\end{equation*}
Define $a_2=\max_{(i,j)\in \prod_{i=0}^1 \{1,\ldots,a_i\} } c(i,j)$. We continue this procedure inductively. It follows from the construction that
\begin{equation*}
\mu(K((a_i)_i))=\mu\left(\prod_{i=0}^\infty \{1,\ldots,a_i\} \right)\geqslant \left(1-\frac{\epsilon}{2}\right)\prod_{i=1}^\infty b_i>1-\epsilon,
\end{equation*}
as desired.
\end{proof}
\begin{remark}\label{rem:upbound} Katok proved \cite[Theorem 1.1]{ka} that if ${\mathcal P}$ is any finite partition of $\Sigma$ satisfying $\mu(\partial {\mathcal P})=0$, then for any $\delta \in (0,1)$
\begin{equation*}
h_\mu({\mathcal P})\leqslant \lim_{r\to 0}\liminf_{n\to\infty}\frac{1}{n}\log N_\mu(n,r,\delta).
\end{equation*}
For a CMS it is easy to check that the partitions
\begin{equation*}
{\mathcal P}_n=\left\{[1],\ldots,[n],\bigcup_{s>n}[s]\right\}
\end{equation*}
are such that $\partial {\mathcal P}_n=\emptyset$, and $\lim_{n\to\infty}h_\mu({\mathcal P}_n)=h_\mu(\sigma)$. From this we conclude that
\begin{equation*}
h_\mu(\sigma)\leqslant \lim_{r\to 0}\liminf_{n\to\infty}\frac{1}{n}\log N_\mu(n,r,\delta).
\end{equation*}
\end{remark}
Our next result is inspired by the proof of \cite[Theorems 2.10 and 2.11]{ri}. In our context we do not have local compactness of $\Sigma$: the finite entropy assumption is important in overcoming this issue.
\begin{lemma}\label{lem:katokeq} Let $(\Sigma,\sigma)$ be a CMS with finite topological entropy. If $\mu$ is an ergodic $\sigma$-invariant probability measure, then for every $\delta\in (0,1)$ we have
\begin{equation*}
h_\mu(\sigma)=\lim_{N\to \infty} \liminf_{n\to \infty}\frac{1}{n}\log N_\mu(n,2^{-N},\delta).
\end{equation*}
\end{lemma}
\begin{proof}
As observed in Remark \ref{rem:upbound} the inequality
\begin{equation*}
h_\mu(\sigma)\leqslant\lim_{N\to \infty} \liminf_{n\to \infty}\frac{1}{n}\log N_\mu(n,2^{-N},\delta)
\end{equation*}
is known to hold. For the converse inequality it suffices to prove that for every $\ell\in {\mathbb N}$ there exists a partition ${\mathcal P}={\mathcal P}(\ell)$ of $\Sigma$ and a subset $ K \subset \Sigma$ satisfying:
\begin{enumerate}
\item The partition ${\mathcal P}(\ell)$ has finite entropy with respect to $\mu$.
\item $\mu(K)>1-\frac{\delta}{6}$.
\item For every $n\in{\mathbb N}$ and every $x\in K\cap \sigma^{-n}K$ we have ${\mathcal P}^n(x)\subset B_n(x,2^{-\ell})$.
\end{enumerate}
In this situation a slight modification of the first part of the proof in \cite[Theorem 1.1]{ka} yields the desired inequality, as we show here.
Suppose that the partition ${\mathcal P}={\mathcal P}(\ell)$ has been constructed. Let $\epsilon >0$. Since the measure $\mu$ is ergodic, by the Shannon-McMillan-Breiman theorem there exists $N_0 \in {\mathbb N}$ such that the set
\begin{equation*}
A_{\epsilon, N_0}:=\left\{x\in \Sigma: \mu({\mathcal P}^n(x))\geqslant \exp(-n(h_\mu({\mathcal P})+\epsilon)) \text{ for all }n\geqslant N_0 \right\}
\end{equation*}
satisfies $\mu(A_{\epsilon,N_0})>1-\frac{\delta}{6}$. Let $n\geqslant N_0$ and $B_n:=A_{\epsilon,N_0}\cap K\cap \sigma^{-n}K$. Observe that $\mu(B_n)\geqslant 1-\frac{\delta}{2}$ and that if $x\in B_n$ then $x\in K\cap \sigma^{-n}K$, and therefore ${\mathcal P}^n(x)\subset B_n(x,2^{-\ell})$. The set $A_{\epsilon,N_0}$ requires at most $\exp(n(h_\mu({\mathcal P})+\epsilon))$ elements of the partition ${\mathcal P}^n$ to cover it. Therefore, $B_n$ requires at most $\exp(n(h_\mu({\mathcal P})+\epsilon))$ $(n,2^{-\ell})$-dynamical balls to cover it, where $\mu(B_n)>1-\frac{\delta}{2}$. We conclude that
\begin{equation*}
\limsup_{n\to\infty}\frac{1}{n}\log N_\mu(n,2^{-\ell},\delta)\leqslant h_\mu({\mathcal P})+\epsilon\leqslant h_\mu(\sigma)+\epsilon.
\end{equation*}
Since $\epsilon>0$ was arbitrary we obtain
\begin{equation*}
\lim_{\ell\to\infty}\limsup_{n\to\infty}\frac{1}{n}\log N_\mu(n,2^{-\ell},\delta)\leqslant h_\mu(\sigma),
\end{equation*}
concluding the proof of the lemma.
We now prove the existence of such a partition ${\mathcal P}={\mathcal P}(\ell)$. By Lemma \ref{lem:compact} there exists a sequence $(a_i)_i$ such that the compact set $K_0:=K((a_i)_i)$ satisfies $ \mu(K_0) \geqslant 1-\frac{\delta}{6}$.
Denote by $S$ the set of points in $\Sigma$ that enter $K_0$ infinitely many times under iterates of $\sigma$. It is a consequence of Birkhoff's Ergodic Theorem that $\mu(S)=1$. Define $K:=K_0\cap S$, and observe that $\mu(K)\geqslant 1-\frac{\delta}{6}$. Let
\begin{equation*}
R_K(x):=\inf\left\{j\geqslant 1: \sigma^j(x)\in K_0\right\} \text{ for } x\in K, \quad\text{and, for every } k\geqslant 1, \quad A_k:=\left\{x \in K: R_K(x)=k \right\}.
\end{equation*}
Partition $A_k$ using cylinders of length $k+\ell+1$ and denote the resulting partition by ${\mathcal Q}_k$. It is important to observe that $\#{\mathcal Q}_k$ is finite for all $k$. This follows from the definition of $K_0$ and the finite topological entropy of $(\Sigma,\sigma)$. Indeed, if $x= (x_0, x_1, \dots , x_k, \dots) \in A_k$, then $x_0 ,x_k \in \{1,\ldots,a_0\}$. Moreover, there are at most $C=\prod_{i=0}^{\ell} a_i$ cylinders of the form $[y_0 y_1\ldots y_\ell]$ intersecting $K$, so for $k$ large enough, $$\#{\mathcal Q}_k\leqslant Ce^{k(h_{top}(\sigma)+1)}.$$
Finally, consider the partition of $\Sigma$ defined by ${\mathcal P}= \{\mathcal{Q}\} \cup \left( \bigcup_{k=1}^{\infty} {\mathcal Q}_k \right)$, where $\mathcal{Q}:= \Sigma \setminus \bigcup_{k=1}^{\infty} A_k$.
We claim that this countable partition satisfies the remaining required properties, that is:
\begin{enumerate}
\item The partition ${\mathcal P}={\mathcal P}(\ell)$ has finite entropy with respect to $\mu$.
\item For every $n\in{\mathbb N}$ and every $x\in K\cap \sigma^{-n}K$ we have ${\mathcal P}^n(x)\subset B_n(x,2^{-\ell})$.
\end{enumerate}
The second property follows from the construction of ${\mathcal P}$. Indeed, let $z\in {\mathcal P}^n(x)$, where $x , \sigma^n(x) \in K$. We claim that $z\in B_n(x,2^{-\ell})$. For simplicity we will assume that $x$ has its first return to $K$ at time $n$ (the general case is just an iteration of the argument in this setting). Since $x \in A_n$ we have that ${\mathcal P}(x)$ is a cylinder of length $n+\ell+1$, which readily implies that $z\in B_n(x,2^{-\ell})$.
We now verify that $H_\mu({\mathcal P})<\infty$. For $r$ sufficiently large,
\begin{align*} H_\mu({\mathcal P})+R & = \sum_{k\geqslant r} \sum_{P\in {\mathcal Q}_k}-\mu(P)\log \mu(P)\\
&= \sum_{k\geqslant r} \mu(A_k)\left( \sum_{P\in {\mathcal Q}_k}-\frac{\mu(P)}{\mu(A_k)}\log \frac{\mu(P)}{\mu(A_k)}-\frac{\mu(P)}{\mu(A_k)} \log\mu(A_k)\right) \\
&\leqslant\sum_{k\geqslant r}\mu(A_k)\log(|{\mathcal Q}_k|)-\sum_{k\geqslant r}\mu(A_k)\log \mu(A_k) \\
&\leqslant \sum_{k\geqslant r}k\mu(A_k)\log\left(e^{h_{top}(\sigma)+1}C^{1/k}\right)-\sum_{k\geqslant r}\mu(A_k)\log \mu(A_k)\\
&\leqslant C'\sum_{k\geqslant r}k\mu(A_k)-\sum_{k\geqslant r}\mu(A_k)\log \mu(A_k),
\end{align*}
where $R=\mu({\mathcal Q})\log\mu({\mathcal Q})+\sum_{k=1}^{r-1}\sum_{P\in {\mathcal Q}_k}\mu(P)\log\mu(P)\in {\mathbb R}$. It follows from Kac's lemma that $\sum k\mu(A_k)=1$. This and the inequality
\begin{align*}\label{mane}
-\sum_{k\geqslant r}\mu(A_k)\log \mu(A_k)\leqslant \sum_{k\geqslant r}k\mu(A_k)+2e^{-1}\sum_{k\geqslant r}e^{-k/2},
\end{align*}
see \cite[Lemma 1]{m}, imply the finiteness of $H_\mu({\mathcal P})$. This concludes the proof.
\end{proof}
\begin{lemma}\label{lem:katokine} Let $(\Sigma, \sigma)$ be a CMS and $\mu$ an ergodic $\sigma$-invariant probability measure. Then for any $\delta\in (0,1)$, we have
\begin{equation*}
h_\mu(\sigma)\leqslant \liminf_{n\to\infty} \frac{1}{n}\log N_\mu(n,1,\delta).
\end{equation*}
\end{lemma}
\begin{proof}
Denote by $a_\mu(n,\delta)$ the minimum number of cylinders of length $n$ needed to cover a set of $\mu$-measure strictly bigger than $1-\delta$. Observe that
\begin{equation*}
N_\mu(n,2^{-t},\delta)=a_\mu(n+t,\delta),
\end{equation*}
and that
\begin{equation*}
\liminf_{n\to\infty}\frac{1}{n}\log a_\mu(n,\delta)=\liminf_{n\to\infty}\frac{1}{n}\log a_\mu(n+t,\delta),
\end{equation*}
for every $t\in{\mathbb N}$. In particular we have that $\liminf_{n\to\infty}\frac{1}{n}\log N_\mu(n,2^{-\ell},\delta)$ is independent of $\ell$.
From Remark \ref{rem:upbound} we conclude that
\begin{align*}h_\mu(\sigma)&\leqslant \lim_{t\to \infty}\liminf_{n\to \infty}\frac{1}{n}\log N_\mu(n,2^{-t},\delta)\\
&=\lim_{t\to \infty}\liminf_{n\to \infty}\frac{1}{n}\log a_\mu(n+t,\delta)\\
&=\liminf_{n\to \infty}\frac{1}{n}\log a_\mu(n,\delta).
\end{align*}
Since $N_\mu(n,1,\delta)=N_\mu(n,2^{-0},\delta)=a_\mu(n,\delta)$, this is precisely the claimed inequality.
\end{proof}
\begin{proof}[Proof of Theorem \ref{katformula}]
The proof follows combining Lemma \ref{lem:katokeq} and Lemma \ref{lem:katokine}.
\end{proof}
We now prove a result related to Lemma \ref{lem:katokine}. We say that two points $x, y \in \Sigma$ are $(n,r)$-separated if $d_n(x,y)\geqslant r$. In particular $x$ and $y$ are $(n,1)$-separated if they do not belong to the same cylinder of length $n$.
\begin{lemma}\label{lem:compshift} Let $X$ be a $\sigma$-invariant compact subset of $\Sigma$. Then
\begin{equation*}
h_{top}(\sigma |X)=\limsup_{n\to\infty} \frac{1}{n}\log N(X,n),
\end{equation*}
where $N(X,n)$ is the maximal number of $(n,1)$-separated points in $X$, and $h_{top}(\sigma|X)$ is the topological entropy of $(X,\sigma)$.
\end{lemma}
\begin{proof} By the definition of topological entropy for continuous maps of compact metric spaces we know that
\begin{equation*}
h_{top}(\sigma |X)=\lim_{k\to\infty}\limsup_{n\to\infty}\frac{1}{n}\log N(X,n,k),
\end{equation*}
where $N(X,n,k)$ is the maximal number of $(n, 2^{-k})$-separated points in $X$. Observe that being $(n,2^{-k})$-separated is the same as being $(n+k,1)$-separated. This implies that $N(X,n,k)=N(X,n+k)$. Note that
\begin{equation*}
\limsup_{n\to\infty}\frac{1}{n}\log N(X,n,k)=\limsup_{n\to\infty}\frac{1}{n}\log N(X,n+k)=\limsup_{n\to\infty}\frac{1}{n}\log N(X,n).
\end{equation*}
Therefore
\begin{equation*}
h_{top}(\sigma |X)=\lim_{k\to\infty}\limsup_{n\to\infty}\frac{1}{n}\log N(X,n,k)=\limsup_{n\to\infty} \frac{1}{n}\log N(X,n),
\end{equation*}
as desired.
\end{proof}
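\begin{remark}
As a sanity check, consider the simplest compact case: the full shift $X=\{1,\ldots,k\}^{{\mathbb N}}$ on $k$ symbols. With the metric used throughout, points in different cylinders of length $n$ are $(n,1)$-separated, so $N(X,n)=k^n$ and Lemma \ref{lem:compshift} recovers the familiar value
\begin{equation*}
h_{top}(\sigma|X)=\limsup_{n\to\infty}\frac{1}{n}\log k^n=\log k.
\end{equation*}
\end{remark}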
\section{Weak entropy density} \label{sec:wed}
In this section we describe the inclusion $\mathcal{E}(\Sigma, \sigma) \subset \mathcal{M}(\Sigma,\sigma)$, where $\mathcal{E}(\Sigma, \sigma)$ is the subset of ergodic measures. It is well known that, even in this non-compact setting, the set $\mathcal{E}(\Sigma, \sigma)$ is dense in $ \mathcal{M}(\Sigma,\sigma)$ with respect to the weak* topology (see \cite[Section 6]{csc}). We prove that any finite entropy measure can be approximated by an ergodic measure with entropy sufficiently large, see Proposition \ref{dense}. This result can be thought of as a weak form of entropy density. In Section \ref{sec:ed} we will make use of this result to prove that any invariant measure $\mu$ can be approximated by ergodic measures with entropy converging to $h_{\mu}(\sigma) $ (see Theorem \ref{teodense}). Moreover, Proposition \ref{dense} will be used in the proof of our main result (see Theorem \ref{thm:main}). Both the statement and the proof of Proposition \ref{dense} closely follow those of \cite[Theorem B]{ekw}, but modifications are required to deal with the non-compactness of the space $\Sigma$.
\begin{proposition} \label{dense} Let $(\Sigma,\sigma)$ be a transitive CMS. Then for every $\mu\in \mathcal{M}(\Sigma,\sigma)$ with $h_{\mu}(\sigma) <\infty$, $\epsilon>0$, $\eta>0$, and $f_1,\ldots,f_\ell\in C_b(\Sigma)$, there exists an ergodic measure $\mu_e\in V(f_1,\ldots,f_\ell,\mu,\epsilon)$ (see equation (\ref{defbasis})) such that $h_{\mu_e}(\sigma)>h_\mu(\sigma) -\eta$. We can moreover assume that $\mbox{\rm supp} (\mu_e)$ is compact.
\end{proposition}
Analogously to the proof of \cite[Theorem B]{ekw} we will use the following fact.
\begin{lemma} \label{entropyforergodic} Let $\mu\in \mathcal{E}(\Sigma, \sigma)$, $\alpha>0$, $\beta>0$, $f_1,\ldots,f_\ell\in C_b(\Sigma)$, and let $K \subset \Sigma$ be a set satisfying $\mu(K)>3/4$. Assume that $h_\mu(\sigma) <\infty$. Then there exists $n_0 \in {\mathbb N}$ such that for all $n\geqslant n_0$ there is a finite set $\mathcal{G}=\mathcal{G}(n)\subset\Sigma$ satisfying the following properties:
\begin{enumerate}
\item \label{a} $\mathcal{G}\subset K\cap \sigma^{-n}K$
\item \label{b} $d(x,y)>2^{-n}$, for every pair of distinct points $x,y\in \mathcal{G}$.
\item \label{c} $\#\mathcal{G}\geqslant \exp(n(h_\mu(\sigma) -\alpha))$.
\item \label{d} $|\frac{1}{n}\sum_{k=0}^{n-1} f_j(\sigma^k x)-\int f_j d\mu|<\beta$, for all $x\in \mathcal{G}$ and $j\in\{1,\ldots,\ell\}$.
\end{enumerate}
\end{lemma}
\begin{proof} Let $$A_{k,\beta}:=\left\{x\in \Sigma: \left|\frac{1}{n}\sum_{i=0}^{n-1} f_j(\sigma^i x)-\int f_j d\mu\right|<\beta, \forall j\in \{1,\ldots,\ell\} \text{ and }n\geqslant k\right\}.$$
By Birkhoff's Ergodic Theorem there exists $s_0 \in {\mathbb N}$ such that $\mu(A_{s_0,\beta})>3/4$. From Lemma \ref{lem:katokine} we have that
\begin{equation*}
h_\mu(\sigma) \leqslant \liminf_{n\to\infty} \frac{1}{n}\log N_\mu(n,1,3/4).
\end{equation*}
There exists $s_1 \in {\mathbb N}$ such that if $n\geqslant s_1$, then
\begin{equation*}
\exp(n(h_\mu(\sigma)-\alpha))\leqslant N_\mu(n,1,3/4).
\end{equation*}
Let $B_n:=K\cap \sigma^{-n}K\cap A_{s_0,\beta}$ and observe that $\mu(B_n)>1/4$. In what follows we assume that $n\geqslant n_0:=\max\{s_0,s_1\}$. Since $\mu(B_n)>1/4$, it follows from the definition of $N_\mu(n,1,3/4)$ that the minimal number of cylinders of length $n$ needed to cover $B_n$ is at least $N_\mu(n,1,3/4)$. More precisely, let $(C_i)_{i\in I}$ be a minimal collection of cylinders of length $n$ covering $B_n$. In particular for every $i\in I$ we have $C_i\cap B_n\ne \emptyset$. For every $i\in I$ choose a point $x_i\in C_i\cap B_n$. We claim that the set $(x_i)_{i\in I}$ satisfies the properties required of $\mathcal{G}$. Conditions \eqref{a} and \eqref{d} follow from the definition of $B_n$. Condition \eqref{b} follows from the fact that if $i\ne j$, then $x_i$ and $x_j$ are in different cylinders of length $n$. Condition \eqref{c} follows from the inequality $$\#I\geqslant N_\mu(n,1,3/4)\geqslant \exp(n(h_\mu(\sigma)-\alpha)).$$
\end{proof}
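\begin{remark}
With the metric used throughout, condition \eqref{b} says precisely that distinct points of $\mathcal{G}$ lie in different cylinders of length $n$; that is, $\mathcal{G}$ is an $(n,1)$-separated set. The lemma thus produces, for every large $n$, an $(n,1)$-separated set of cardinality at least $e^{n(h_\mu(\sigma)-\alpha)}$ whose points start in $K$, return to $K$ at time $n$, and have Birkhoff averages $\beta$-close to the integrals $\int f_j \,d\mu$.
\end{remark}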
\begin{proof}[Proof of Proposition~\ref{dense}]
Recall that we want to prove that given $\mu\in \mathcal{M}(\Sigma,\sigma)$, $\epsilon>0$, $\eta>0$, and $f_1,\ldots,f_\ell\in C_b(\Sigma)$, there exists an ergodic measure $\mu_e\in V(f_1,\ldots,f_\ell,\mu,\epsilon)$ such that $h_{\mu_e}(\sigma)>h_\mu(\sigma) -\eta$. In the following remarks we observe that this general situation can be simplified.
As observed in Section \ref{weak*} or in \cite[8.3.1 Remark]{bg} it suffices to consider the case in which the functions $(f_i)_i$ in Proposition \ref{dense} are uniformly continuous. Therefore, under this assumption, there exists $A=A(f_1,\ldots,f_\ell)\in {\mathbb N}$ such that if $d(x,y)<2^{-A}$, then $|f_i(x)-f_i(y)|<\frac{\epsilon}{4}$. Also define $W:=\max_{i\in \{1,\ldots,\ell\}}\|f_i\|_0$.
Since periodic measures are dense in $\mathcal{M}(\Sigma,\sigma)$, see \cite[Section 6]{csc}, we will assume that $h_\mu(\sigma) -\eta>0$, otherwise we can approximate $\mu$ by a periodic measure. By the affinity of the entropy map \cite[Theorem 8.1]{wa} and \cite[Lemma 6.13]{iv}
we can reduce the problem to the case in which $\mu=\frac{1}{N}\sum_{i=1}^N \mu_i$, where $\{\mu_i\}_{i=1}^N$ is a collection of ergodic measures.
Let $m \in {\mathbb N}$ be such that the set $K=K_m$, defined as in \eqref{def:km}, satisfies $\mu_i(K)>3/4$ for every $i \in \{1, \dots, N\}$.
Since $(\Sigma, \sigma)$ is transitive, there exists a constant $L=L(m)$ such that for each pair $(a,b)\in \{1,\ldots,m\}^2$ there exists an admissible word $a{\bf r }b$, where $\ell({\bf r})\leqslant L$. It follows from Lemma \ref{entropyforergodic}, setting $\beta=\epsilon/4$ and $\alpha=\eta/2$, that there exists $n' \in {\mathbb N}$ such that for every $n >n'$ and every measure $\mu_i$, with $i \in \{1, \dots , N\}$, there exist $(n,1)$-separated sets $\mathcal{G}_i\subset K\cap \sigma^{-n}K$ satisfying properties \eqref{a}, \eqref{b}, \eqref{c} and \eqref{d} of Lemma \ref{entropyforergodic}.
Denote by ${\bf a}(x)$ the word given by the first $(n+1)$ coordinates of $x \in \Sigma$. Given $\hat{x}=(x^1,x^2,\ldots,x^{MN})\in (\prod_{i=1}^N \mathcal{G}_i)^M$, we define an admissible word $w_0(\hat{x})={\bf a}(x^1) {\bf r_1} {\bf a}(x^2){\bf r_2}\ldots{\bf a}(x^{MN}){\bf r_{MN}}{\bf a}(x^1)$, where the ${\bf r_k}$ are words chosen so that $w_0(\hat{x})$ is an admissible word and $\ell({\bf r_k})\leqslant L$ (note that this is possible since $({\bf a}(x^i))_{0}$ and $({\bf a}(x^i))_{n}$ are in $\{1,\ldots,m\}$ by definition of $\mathcal{G}_i$).
The word $w_0(\hat{x})$ defines a periodic point in $\Sigma$ that we denote by $w(\hat{x})$. We have that
\begin{equation*}
w(\hat{x})=\overline{{\bf a}(x^1) {\bf r_1} {\bf a}(x^2){\bf r_2}\ldots{\bf a}(x^{MN}){\bf r_{MN}}}.
\end{equation*}
Let $\mathcal{G}:=\bigcup_{M\geqslant 1} (\prod_{i=1}^N \mathcal{G}_i)^M$. Following the same procedure of concatenation described above, for every $\hat{x} \in \mathcal{G}$ we define a point $w(\hat{x})\in \Sigma$. Define
\begin{equation*}
\Psi=\bigcup_{\hat{x}\in \mathcal{G}}O(w(\hat{x})),
\end{equation*}
where $O(w(\hat{x}))$ is the orbit of $w(\hat{x})$ and define $\Psi_0$ to be the topological closure of $\Psi$.
Note that the space $\Psi_0$ is a compact $\sigma$-invariant subset of $\Sigma$. Indeed, by definition the set $\Psi$ is $\sigma$-invariant, and hence so is its closure $\Psi_0$.
For compactness, observe that the number of symbols appearing in elements of $\Psi$ is finite: there are finitely many admissible words ${\bf a}(x^i)$ (recall that each $\mathcal{G}_i$ is a finite set) and only finitely many connecting words ${\bf r_i}$ are used. Therefore there exists $J \in {\mathbb N}$ such that $\Psi \subset \{1,\ldots,J\}^{\mathbb N}$. Thus, $\Psi_0$ is a closed subset of a compact set, and is therefore compact.
By property \eqref{d} of Lemma \ref{entropyforergodic}, and assuming that $n'$, which also depends on $A$, $W$ and $L$, is sufficiently large,
\begin{align} \label{eq:incl}
\Psi&\subset \left\{x\in \Sigma: \left|\frac{1}{n}S_n f_j(x)-\int f_j d\mu \right|\leqslant\epsilon,\forall j\in\{1,\ldots,\ell\} \right\}.
\end{align}
Since the set on the right hand side of \eqref{eq:incl} is closed, the same inclusion holds if $\Psi$ is replaced by $\Psi_0$. Also, since $\Psi_0$ is $\sigma$-invariant we have
\begin{align*} \Psi_0&\subset \left\{x\in \Sigma: \left|\frac{1}{n}S_n f_j(\sigma^s x)-\int f_j d\mu \right|\leqslant\epsilon, \forall j\in\{1,\ldots,\ell\}\text{ and }s\geqslant 0 \right\}\\
& \subset \left\{x\in \Sigma: \left|\frac{1}{nk}S_{nk} f_j( x)-\int f_j d\mu \right|\leqslant\epsilon, \forall j\in\{1,\ldots,\ell\}\text{ and }k\in {\mathbb N} \right\}.
\end{align*}
This implies that every ergodic measure supported in $\Psi_0$ belongs to $V(f_1,\ldots,f_\ell,\mu,\epsilon)$, as we now spell out using generic points.
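Indeed, if $\nu$ is an ergodic measure with $\nu(\Psi_0)=1$, then by Birkhoff's Ergodic Theorem we may pick a generic point $x\in \Psi_0$ for $\nu$, and for each $j\in\{1,\ldots,\ell\}$ the inclusion above yields
\begin{equation*}
\left|\int f_j \,d\nu-\int f_j\, d\mu\right|=\lim_{k\to\infty}\left|\frac{1}{nk}S_{nk} f_j(x)-\int f_j \,d\mu\right|\leqslant \epsilon,
\end{equation*}
which gives the claimed membership.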
By construction, if $x,y\in \left(\prod_{i=1}^N \mathcal{G}_i\right)^M$ and $x\ne y$, then $$d_{NM(n+1+L)}(w(x),w(y))=1.$$ In other words $\Psi_0$ contains a $(NM(n+1+L),1)$-separated set of cardinality at least
\begin{equation*}
\exp\left(nNM \left(\frac{1}{N}\sum_{k=1}^N h_{\mu_k} (\sigma)-\frac{\eta}{2} \right) \right).
\end{equation*}
Here we used property \eqref{c} of Lemma \ref{entropyforergodic} for our sets $\mathcal{G}_i$. It follows from Lemma \ref{lem:compshift} that
\begin{equation*}
h_{top}(\Psi_0)\geqslant \limsup_{M\to \infty} \dfrac{nNM(h_\mu(\sigma)-\frac{\eta}{2})}{NM(n+1+L)}= \dfrac{n(h_\mu(\sigma)-\frac{\eta}{2})}{(n+1+L)} > h_\mu(\sigma)-\eta.
\end{equation*}
The last inequality holds provided $n$ is chosen sufficiently large, since $h_\mu(\sigma)-\eta>0$. Finally, let $\mu_e$ be an ergodic measure supported in $\Psi_0$ with entropy greater than $h_\mu(\sigma)-\eta$ (which exists by the standard variational principle in the compact setting). Since we already proved that $\mu_e\in V(f_1,\ldots,f_\ell,\mu,\epsilon)$, this finishes the proof.
\end{proof}
\section{Main entropy inequality}\label{mainine}
This section is devoted to the proof of the main entropy inequality. This is stated in Theorem \ref{pre} and relates the entropy of a sequence of ergodic measures with the amount of mass lost and the topological entropy at infinity.
Recall that, as explained in \eqref{def:kai}, to every sequence of natural numbers $(a_i)_i$ we assign a compact set $K=K((a_i)_i) \subset \Sigma$.
The definition of $K$ implies that if $x\in K^c$, then $x_i>a_i$, for some $i\in {\mathbb N}_0$. For $x\in K^c$ we define $i: K^c \to {\mathbb N}_{0}$ by
\begin{equation} \label{def:i}
i(x):= \min \left\{ n \in {\mathbb N}_{0} : x_n > a_n \right\}.
\end{equation}
For $n\in {\mathbb N}$ we define
\begin{equation} \label{def:T}
T_n(K):=K_{a_0}\cap \sigma^{-1}K^c\cap\cdots\cap\sigma^{-n}K^c\cap \sigma^{-(n+1)}K_{a_0},
\end{equation}
where $K_{a_0}=\bigcup_{i=1}^{a_0}[i]$ (as defined in \eqref{def:km}). Let
\begin{equation} \label{def:That}
\widehat{T}_n(K):=\left\{x\in T_n(K): i(\sigma^k(x))\leqslant n-k,\text{ for every } k\in \{1,\ldots,n\} \right\}.
\end{equation}
Let $\hat z_n(K)$ be the minimal number of cylinders of length $(n+2)$ needed to cover $\widehat{T}_n(K)$ and define
\begin{equation} \label{def:di}
\hat\delta_\infty(K):=\limsup_{n\to\infty}\frac{1}{n}\log \hat z_n(K).
\end{equation}
The reason why we define $\hat\delta_\infty(K)$ covering the sets $\widehat{T}_n(K)$, and not $T_n(K)$, is to ensure that Lemma \ref{lem:Kineq2} holds. This allows us to relate $\hat\delta_\infty(K)$ with the topological entropy at infinity of $(\Sigma,\sigma)$.
Our next result is fundamental in this paper.
\begin{theorem} \label{pre} Let $(\Sigma,\sigma)$ be a finite entropy CMS. Let $(\mu_n)_n$ be a sequence of ergodic probability measures converging on cylinders to an invariant measure $\mu$. Let $(a_i)_i$ be an increasing sequence of natural numbers such that the corresponding compact set $K=K((a_i)_i)$ satisfies that $\mu_n(K)>0$, for all $n\in {\mathbb N}$. Then
\begin{equation*}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\leqslant |\mu|h_{\mu/|\mu|}(\sigma)+(1-\mu(Y))\hat\delta_\infty(K),
\end{equation*}
where $Y=\bigcup_{s=0}^\infty \sigma^{-s}K$.
\end{theorem}
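Two extreme cases of Theorem \ref{pre} are worth keeping in mind. If $\mu(Y)=1$ (so, in particular, $|\mu|=1$), the inequality reduces to the upper semi-continuity statement $\limsup_{n\to\infty} h_{\mu_n}(\sigma)\leqslant h_{\mu}(\sigma)$. At the other extreme, if all the mass escapes, that is $\mu=0$, it reads
\begin{equation*}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\leqslant \hat\delta_\infty(K),
\end{equation*}
so the entropy carried by the escaping mass is controlled by the complexity of the excursions away from the compact part.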
The proof of this theorem requires some propositions and lemmas, which we will prove first before completing the proof of the theorem at the end of this section.
The fact that $K \subset \sigma (K)$, which follows since $(a_i)_i$ is an increasing sequence, will be used several times here. Let $A_k:=\left\{ x \in K : R_K(x)=k \right\}$, where $R_K(x)$ is the first return time function to the set $K$ (see Definition \ref{def:ret}). For $x \in Y$ we define the following:
\begin{align*}
n_1(x)&:= \min \left\{ n \in{\mathbb N}_0 : \text{ there exists } y \in K \text{ such that } \sigma^{n}(y)=x \right\},\\
n_2(x)&:= \min \left\{ n \in{\mathbb N}_0 : \sigma^{n}x\in K \right\}.
\end{align*}
We emphasise that the function $n_1(x)$ is well defined. Indeed, observe that if $x\in Y$ then $\sigma^{n_2(x)}x\in K$. Since the sequence $(a_i)_i$ is increasing, we can choose $r \in {\mathbb N}$ with $r> n_2(x)-1$ and $a_r \geqslant \max\left\{ x_i : i\in\{0,\ldots,n_2(x)-1 \} \right\}$. For such $r$ we have $x\in \sigma^r(K)=\prod^\infty_{k=r}\{1,\ldots,a_k\}\cap \Sigma$, and therefore $n_1(x)$ is finite for every $x\in Y$. Let
\begin{eqnarray*}
n(x):=
\begin{cases}
n_1(x)+n_2(x) & \text{ if } x \in Y, \\
\infty & \text{ if } x\in \Sigma\setminus Y.
\end{cases}
\end{eqnarray*}
For $n\in{\mathbb N}_0 \cup \{\infty\}$ define
\begin{equation*}
\mathcal{C}_n:=\{x\in\Sigma: n(x)=n\}.
\end{equation*}
Note that $\mathcal{C}_0=K$ and $\mathcal{C}_1=\emptyset$. For $n \geqslant 2$ observe that $x\in \mathcal{C}_n$ only if it belongs to the orbit of a point in $A_n$. More precisely, for every $n\geqslant 2$ we have that $\mathcal{C}_n\subset \bigcup_{k=1}^{n-1}\sigma^k(A_n)$. We define the following sets,
\begin{equation*}
\alpha_{\leqslant N}:=\bigcup_{n=2}^{N} \mathcal{C}_n, \qquad \alpha_{N,M}:=\bigcup^{M}_{n=N+1} \mathcal{C}_n, \qquad \text{ and } \qquad \alpha_{>M}:=\left(\bigcup_{n> M} \mathcal{C}_n\right)\cup \mathcal{C}_\infty.
\end{equation*}
\begin{remark}
For every $L\in{\mathbb N}$, the set $\alpha_{\leqslant M}$ can be covered with finitely many cylinders of length $L$. Indeed, observe that for every $n\geqslant 2$ we have
\begin{equation*}
\mathcal{C}_n \subset \bigcup_{s=1}^{n-1}\sigma^s(A_n) \subset \bigcup_{s=1}^{n-1}\sigma^s(K) \subset \sigma^{n-1}(K).
\end{equation*}
Therefore,
\begin{equation*}
\alpha_{\leqslant M}=\bigcup_{n=2}^{M}\mathcal{C}_n \subset \sigma^{M-1}(K)=\prod^\infty_{s=M-1}\{1,\ldots,a_s\}\cap \Sigma.
\end{equation*}
Since the set $\prod^\infty_{s=M-1}\{1,\ldots,a_s\}\cap \Sigma$ can be covered with at most $\prod_{s=M-1}^{M-2+L}a_s$ cylinders of length $L$, the same holds for $\alpha_{\leqslant M}$.
\end{remark}
Observe that it follows directly from the definition of $\hat\delta_\infty(K)$ (see \eqref{def:di}) that for every $\epsilon>0$, there exists $N_0=N_0(\epsilon) \in {\mathbb N}$ such that for every $n\geqslant N_0$ we have
\begin{equation*}
\hat z_n(K)\leqslant e^{n(\hat\delta_\infty(K)+\epsilon)}.
\end{equation*}
At this point we fix $\epsilon >0$ and $k , N \in {\mathbb N}$ large enough so that $kN\geqslant N_0(\epsilon)$: these will appear explicitly in the proof of Theorem~\ref{pre}.
Given $A\subset \Sigma$ and $t\in {\mathbb N}$ we define
\begin{equation*}
U_t(A):=\left\{x\in \Sigma: d(x,A)\leqslant 2^{-t} \right\}.
\end{equation*}
Now let
\begin{align*}
K(k,N)&:=U_{kN+2}(K),\\
\gamma_{\leqslant N}&:=U_{(k+1)N+2}(\alpha_{\leqslant N})\setminus K(k,N),\\
G_{k,N}&:=K(k,N)\cup \gamma_{\leqslant N},
\end{align*}
and
\begin{align*}
\gamma_{N,kN}&:=U_{2(k+1)N+2}(\alpha_{N,kN})\setminus G_{k,N},\\
\gamma_{>kN}&:=\Sigma\setminus (G_{k,N}\cup \gamma_{N,kN}),\\
B_{k,N}&:=\gamma_{N,kN}\cup\gamma_{>kN}.
\end{align*}
Denote by ${\mathcal Q}_1(k,N)$ the minimal cover of $K(k,N)$ by cylinders of length $kN+2$. Similarly, denote by ${\mathcal Q}_2'(k,N)$ the minimal cover of $\alpha_{\leqslant N}$ by cylinders of length $(k+1)N+2$. Observe that every element in ${\mathcal Q}_2'(k,N)$ is either disjoint from or contained in an element of ${\mathcal Q}_1(k,N)$. In particular $\gamma_{\leqslant N}$ is a finite union of cylinders of length $(k+1)N+2$; this collection of cylinders is denoted by ${\mathcal Q}_2(k,N)$. Define
\begin{equation} \label{def:beta'}
\beta_{k,N}':={\mathcal Q}_1(k,N)\cup {\mathcal Q}_2(k,N)
\end{equation}
and observe that $\beta_{k,N}'$ is a partition of the set $G_{k,N}$. Define the following partition of $\Sigma$,
\begin{equation} \label{def:beta}
\beta_{k,N}:=\{\gamma_{>kN},\gamma_{N,kN}\}\cup \beta_{k,N}'.
\end{equation}
Recall that the refinement $ \beta_{k,N}^n$ follows as in Section \ref{sec:em}. \\
\emph{Notation:} We use the following notation for an interval of integers: $[a,b):= \{ n \in {\mathbb N}_0 : a \leqslant n < b\}$ and $|[a, b)|=b-a$.
\begin{definition}
Let $Q\in \beta_{k,N}^n$ be such that $(Q\cup \sigma^{n-1}{Q})\subset G_{k,N}$. An interval $[r,s)\subset [0,n)$ is called an \emph{excursion} of $Q$ into $\gamma_{>kN}$ (resp. $B_{k,N}$) if $\sigma^t Q\subset \gamma_{>kN}$ (resp. $\sigma^t Q \subset B_{k,N}$) for every $t\in [r,s)$ and $(\sigma^{r-1} Q\cup \sigma^{s}Q)\subset G_{k,N}$.
An excursion $[r,s)$ of $Q$ into $B_{k,N}$ is said to \emph{enter} $\gamma_{>kN}$ if there exists $i \in [r,s)$ such that $\sigma^i Q \subset \gamma_{>kN}$.
\end{definition}
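To illustrate the definition on a schematic example: if $n=7$ and the itinerary of $Q\in\beta_{k,N}^7$ satisfies $\sigma^t Q\subset G_{k,N}$ for $t\in\{0,3,6\}$ and $\sigma^t Q\subset B_{k,N}$ for $t\in\{1,2,4,5\}$, then the excursions of $Q$ into $B_{k,N}$ are exactly $[1,3)$ and $[4,6)$; such an excursion enters $\gamma_{>kN}$ only if at least one of the corresponding iterates of $Q$ lands in $\gamma_{>kN}$ rather than in $\gamma_{N,kN}$.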
The next three lemmas are preparation for the proof of Proposition \ref{prop:goodcover}. They give us control on the return times to $K(k,N)$ and on the length of excursions into $B_{k,N}$.
\begin{lemma}\label{lem:A}
If $[r,r+s)$ is an excursion of $Q$ into $B_{k,N}$ that does not enter $\gamma_{>kN}$ then $s<kN$.
\end{lemma}
\begin{proof} Since the excursion does not enter $\gamma_{>kN}$ we have that $\sigma^rQ\subset \gamma_{N,kN}$. Fix $x\in \sigma^{r}Q$. By the definition of $\gamma_{N,kN}$ there exists $x_0\in \alpha_{N,kN}$ such that $d(x,x_0)\leqslant 2^{-(2(k+1)N+2)}$. Since $x_0 \in \alpha_{N,kN}$ we have that $n(x_0)\leqslant kN$, and therefore $t:=n_2(x_0)< kN$ and $\sigma^{t}(x_0)\in K$. Observe that $$d(\sigma^t (x),\sigma^t (x_0))\leqslant 2^{-(2(k+1)N+2)+t}\leqslant 2^{-(kN+2)}.$$ This readily implies that $\sigma^t(x)\in U_{kN+2}(K)=K(k,N)\subset G_{k,N}$. We conclude that $\sigma^{r+t} Q\subset G_{k,N}$, and therefore $s<kN$.
\end{proof}
\begin{lemma}\label{lem:B}
If $Q\subset G_{k,N}$ then there exists $t \in [0, N)$ such that $\sigma^tQ\subset K(k,N)$.
\end{lemma}
\begin{proof}
If $Q\subset K(k,N)$ there is nothing to prove. Assume that $Q\subset \gamma_{\leqslant N}$. Let $x\in Q$ and $y \in \alpha_{\leqslant N}$ such that $d(x,y)\leqslant 2^{-((k+1)N+2)}$.
Since $y \in \alpha_{\leqslant N}$ we have that $\sigma^t(y)\in K$, for some $t< N$. Observe that
\begin{equation*}
d(\sigma^t (x), \sigma^t (y))\leqslant 2^{-((k+1)N+2)}2^{t}<2^{-(kN+2)}.
\end{equation*}
We conclude that there exists $t \in [0, N)$ such that $\sigma^t (x)\in U_{kN+2}(K)=K(k,N)$. This implies that for some $t<N$ we have $\sigma^tQ\subset K(k,N)$.
\end{proof}
\begin{lemma}\label{lem:C} If $[r,r+s)$ is an excursion of $Q$ into $\gamma_{>kN}$ with $s\geqslant N$, then $\sigma^{r-1}Q\subset K(k,N)$.
\end{lemma}
\begin{proof} From the definition of an excursion, the set $Q_0:=\sigma^{r-1}Q$ must lie in $G_{k, N}$, so to derive a contradiction we will assume that $Q_0\subset \gamma_{\leqslant N}$. Let $x\in Q_0$. By the construction of $\gamma_{\leqslant N}$ there exists $y \in \alpha_{\leqslant N}$ such that $d(x,y)\leqslant 2^{-((k+1)N+2)}$. Since $y\in \alpha_{\leqslant N}$ there exists $1\leqslant t\leqslant N$ such that $\sigma^t(y)\in K$. Therefore
\begin{equation*}
d(\sigma^t(x),\sigma^t (y))\leqslant 2^{-((k+1)N+2)+t}\leqslant 2^{-(kN+2)}.
\end{equation*}
We conclude that $\sigma^t(x)\in U_{kN+2}(K)=K(k,N)$. Since $1\leqslant t\leqslant N\leqslant s$, the time $r-1+t$ belongs to the excursion $[r,r+s)$, and so $\sigma^{r-1+t}Q\subset \gamma_{>kN}$, which is disjoint from $K(k,N)$; this is a contradiction.
\end{proof}
\begin{definition}
Denote by $m_{n,k,N}(Q)$ the number of excursions of $Q$ into $B_{k,N}$ of length greater than or equal to $kN$ that enter $\gamma_{>kN}$, and let
\begin{equation*}
E_{n,k,N}(Q) :=\# \left\{i \in[0,n): \sigma^i Q\subset B_{k,N} \right\}.
\end{equation*}
\end{definition}
The following result shows that an atom $Q\in \beta_{k,N}^n$ such that $Q\subset K(k,N)\cap \sigma^{-(n-1)}K(k,N)$ can be covered by cylinders of length $n$ in a controlled way. This is an estimate closely related to \cite[Lemma 7.4]{ekp} (see also \cite[Proposition 4.5]{ve}). The constant $\hat\delta_\infty(K)$ naturally appears when we try to control the time spent in the `bad' part $B_{k,N}$.
\begin{proposition} \label{prop:goodcover} Let $\beta_{k,N}$ be the partition defined in \eqref{def:beta}. Then an atom $Q\in \beta_{k,N}^n$ such that $Q\subset K(k,N)\cap \sigma^{-(n-1)}K(k,N)$, can be covered by at most
\begin{equation*}
e^{E_{n,k,N}(Q)(\hat\delta_\infty(K)+\epsilon)}e^{m_{n,k,N}(Q)N(\hat\delta_\infty(K)+\epsilon)}
\end{equation*}
cylinders of length $n$.
\end{proposition}
\begin{proof}
To simplify notation we drop the sub-indices $N$ and $k$. The proof of Proposition \ref{prop:goodcover} is by induction along a decomposition of the time interval. First decompose $[0,n-1]$ into
\begin{equation*}
[0,n-1]=W_1\cup V_1\cup W_2\cup\cdots \cup V_s\cup W_{s+1},
\end{equation*}
according to the excursions into $B_{k,N}$ that contain at least one excursion into $\gamma_{>kN}$. More precisely, let $V_i=[m_i,m_i+h_i)$ and $W_i=[l_i,l_i+L_i)$ with $l_i+L_i=m_i$ and $m_i+h_i=l_{i+1}$. The segment $V_i$ denotes an excursion into $B_{k,N}$ that contains an excursion into $\gamma_{>kN}$. Given $i\in {\mathbb N}$ define $J_i:= \sum_{j=1}^i |V_j|1_{[kN,\infty)}(|V_j|),$
where $1_{[kN,\infty)}$ is the characteristic function of the interval $[kN,\infty)$. Similarly define $H_i:= \sum_{j=1}^i 1_{[kN,\infty)}(|V_j|).$ Observe that $Q\subset K(k,N)$ implies that $Q$ is already contained in a cylinder of length $kN+2$. \\
\emph{Step 1:} Assume that $Q$ has been covered with $c_i$ cylinders of length $l_i$, where
\begin{equation*}
c_i \leqslant e^{J_i \left(\hat\delta_\infty(K)+\epsilon \right)}e^{NH_i \left(\hat\delta_\infty(K)+\epsilon \right)}.
\end{equation*}
(As mentioned above, the set $Q$ is contained in a single cylinder, so we may take $c_1=1$.) We claim that the same number of cylinders of length $(l_i+L_i)$ suffices to cover $Q$.
Observe that by hypothesis $\sigma^{l_i}Q$ is contained in an element of $\beta'$, therefore $\mbox{\rm diam} (\sigma^{l_i}Q)\leqslant 2^{-(kN+2)}$. Since the elements of $\beta'$ all have diameter at most $2^{-(kN+2)}$, the same holds if $Q$ spends some extra time in $\beta'$. By Lemma \ref{lem:A}, if $Q$ has an excursion into $B_{k,N}$ that does not enter $\gamma_{>kN}$, then it must come back to $\beta'$ before $kN$ iterates. In particular, if the excursion into $B_{k,N}$ is $[p_i,p_i+q_i)$, then $q_i< kN$. Observe that $\mbox{\rm diam} (\sigma^{p_i-1}Q)\leqslant 2^{-(kN+2)}$ implies that $\mbox{\rm diam} (\sigma^{p_i+t}Q)\leqslant 2^{-2}$ for every $t\in [0,kN)$; in particular the same holds for $t\in [0,q_i]$. Repeating this process we conclude that $\mbox{\rm diam} (\sigma^{t}Q)\leqslant 2^{-2}$ for every $t\in [l_i,l_i+L_i)$. This immediately implies that $\sigma^{l_i}Q$ is contained in a cylinder of length $L_i$, which proves our claim. We now go to Step 2.
\\
\emph{Step 2:} Assume we have covered $Q$ with $c_i$ cylinders of length $m_i$, where
\begin{equation*}
c_i\leqslant e^{J_i(\hat\delta_\infty(K)+\epsilon)}e^{NH_i(\hat\delta_\infty(K)+\epsilon)}.
\end{equation*}
We want to estimate the number of cylinders of length $(m_i+h_i)$ needed to cover $Q$. Define $Q_i:=\sigma^{m_i-1}Q$. If we are able to cover $Q_i$ with $R$ cylinders of length $(h_i+1)$, then we will be able to cover $Q$ with $Rc_i$ cylinders of length $(m_i+h_i)$. We will separate into two cases:\\
\emph{Case 1}: $h_i< kN$.\\
Observe that $Q_i\subset G_{k,N}$ and is therefore contained in an element of $\beta'$, which implies $\mbox{\rm diam} (Q_i)\leqslant 2^{-(kN+2)}$. This implies that $Q_i$ is contained in a cylinder of length $(kN+2)$. Since $h_i<kN$, this implies that $Q_i$ can be covered with one cylinder of length $(h_i+1)$. We conclude that
\begin{equation*}
c_{i+1}= c_i \leqslant e^{J_i(\hat\delta_\infty(K)+\epsilon)}e^{NH_i(\hat\delta_\infty(K)+\epsilon)}= e^{J_{i+1}(\hat\delta_\infty(K)+\epsilon)}e^{NH_{i+1}(\hat\delta_\infty(K)+\epsilon)}.
\end{equation*}
\emph{Case 2}: $h_i\geqslant kN$.\\
By Lemma \ref{lem:C}, $Q_i=\sigma^{m_i-1}Q\subset K(k,N)$. Observe that, by the definition of an excursion, $\sigma^{h_i+1}Q_i\subset G_{k,N}$. By Lemma \ref{lem:B} there exists $0\leqslant t_i<N$ such that $\sigma^{h_i+1+t_i}Q_i\subset K(k,N)$ (we assume $t_i$ is the smallest such number).
We conclude that every $x\in Q_i$ satisfies $x\in K_{a_0}$, $\sigma^{h_i+1+t_i}(x)\in K_{a_0}$, and $\sigma^s x\in K^c$, for every $s\in \{1,\ldots, h_i+t_i\}$. In other words $Q_i\subset T_{h_i+t_i}(K)$. We now claim that $Q_i\subset \widehat{T}_{h_i+t_i}(K)$. Observe that if $x\in Q_i$, then $\sigma^{h_i+t_i+1}(x)\in K(k,N),$ and $\sigma^k(x)\in K(k,N)^c$, for every $k\in \{1,\ldots, h_i+t_i\}$. We argue by contradiction and suppose that $i(\sigma^k(x))>(h_i+t_i)-k$ for some $k\in \{1,\ldots, h_i+t_i\}$. This implies that $(\sigma^k(x))_j\leqslant a_j$, for $j\in \{0,\ldots, h_i+t_i-k\}$. Observe that $(\sigma^k(x))_{h_i+t_i-k+j+1}=(\sigma^{h_i+t_i+1}(x))_{j}$, and for $j\in\{0,\ldots,kN+1\}$ we have $(\sigma^{h_i+t_i+1}(x))_{j}\leqslant a_j$. We conclude that $(\sigma^k(x))_{h_i+t_i-k+j+1}\leqslant a_j$, for $j\in \{0,\ldots,kN\}$. In particular we have that $(\sigma^k(x))_j\leqslant a_j$, for every $j\in \{0,\ldots,kN+1\}$, which contradicts that $\sigma^k(x)\in K(k,N)^c$, completing the proof of our claim.
This implies, from the definition of $\hat\delta_\infty(K)$, that $Q_i$ can be covered by at most $e^{(h_i+t_i)(\hat\delta_\infty(K)+\epsilon)}$ cylinders of length $(h_i+1+t_i)$; and by at most $e^{(h_i+N)(\hat\delta_\infty(K)+\epsilon)}$ cylinders of length $(h_i+1)$. We conclude that $Q$ can be covered by at most $c_{i+1}$ cylinders of length $(m_i+h_i)$, where
\begin{align*}c_{i+1}\leqslant & e^{(h_i+N)(\hat\delta_\infty(K)+\epsilon)}(e^{J_i(\hat\delta_\infty(K)+\epsilon)}e^{NH_i(\hat\delta_\infty(K)+\epsilon)})\\
&= e^{J_{i+1}(\hat\delta_\infty(K)+\epsilon)}e^{NH_{i+1}(\hat\delta_\infty(K)+\epsilon)}.
\end{align*}
Adding these steps together and noting that $J_s\leqslant E_{n,k,N}(Q)$ and $H_s= m_{n,k,N}(Q)$ completes the proof of the proposition.
\end{proof}
The idea now is to use Proposition \ref{prop:goodcover} to compare the entropy of a measure with its entropy relative to the partition $\beta_{k,N}$. This is natural: the map $\mu\mapsto h_\mu(\beta_{k,N})$ is typically better behaved along sequences of measures; at this point we crucially use that the partition $\beta_{k,N}$ is finite.
\begin{proposition}\label{prop:ineq} Let $\beta_{k,N}$ be the partition defined in \eqref{def:beta} and $\mu$ an ergodic $\sigma$-invariant probability measure satisfying $\mu(K(k,N))>0$. Then
\begin{equation*}
h_{\mu}(\sigma)\leqslant h_{\mu}(\beta_{k,N})+\left( \mu(B_{k,N}) + \frac{1}{k} \right) (\hat\delta_\infty(K)+\epsilon).
\end{equation*}
\end{proposition}
\begin{proof}
To simplify notation we denote the partition $\beta_{k,N}$ by $\beta$. We will apply Theorem \ref{katformula}, so the main task is to estimate $N_\mu(n,1,\delta)$ for some $\delta\in (0, 1)$.
Since $\mu$ is an ergodic measure such that $\mu(K(k,N))>0$, there exist $\delta_1>0$ and an increasing sequence $(n_i)_{i}$ satisfying
\begin{equation*}
\mu(K(k,N)\cap \sigma^{-n_i}K(k,N)) > \delta_1,
\end{equation*}
for every $i\in {\mathbb N}$. Given $\epsilon_1>0$, by the Shannon-McMillan-Breiman theorem the set
\begin{equation*}
{\mathcal D}_{\epsilon_1,N}= \left\{x\in \Sigma : \forall n\geqslant N, \ \mu(\beta^n(x))\geqslant \exp(-n(h_\mu(\beta)+\epsilon_1)) \right\},
\end{equation*}
satisfies
\begin{equation*}
\lim_{N \to \infty} \mu \left( {\mathcal D}_{\epsilon_1,N}\right) =1.
\end{equation*}
By Birkhoff's Ergodic Theorem there exists a set $W_{\epsilon_1} \subset \Sigma$ satisfying $\mu(W_{\epsilon_1})>1-\frac{\delta_1}{4}$ and $n(\epsilon_1) \in {\mathbb N}$ such that
for every $x\in W_{\epsilon_1}$ and $n\geqslant n(\epsilon_1)$,
\begin{equation*}
\frac{1}{n}\sum_{i=0}^{n-1} 1_{B_{k,N}}(\sigma^i x) < \mu(B_{k,N})+\epsilon_1.
\end{equation*}
Define
\begin{equation*}
X_i:= W_{\epsilon_1}\cap {\mathcal D}_{\epsilon_1,n_i}\cap K(k,N)\cap \sigma^{-n_i}K(k,N).
\end{equation*}
By construction, for sufficiently large values of $i \in {\mathbb N}$ we have that $\mu(X_i)>\frac{\delta_1}{2}$. In what follows we assume that $i$ is large enough to satisfy this condition.
By definition of ${\mathcal D}_{\epsilon_1,n_i}$ the set $X_i$ can be covered by $\exp(n_i(h_\mu(\beta)+\epsilon_1))$ many elements of $\beta^{n_i}$.
We will make use of Proposition \ref{prop:goodcover} to efficiently cover each of those atoms by cylinders. Let $Q\in \beta^{n_i}$ be an atom intersecting $X_i$. In particular $Q\subset K(k,N)\cap\sigma^{-(n_i-1)}K(k,N)$, so Proposition \ref{prop:goodcover} applies to $Q$.
It follows from the definition of $W_{\epsilon_1}$ that
\begin{equation*}
E_{n_i,k,N}(Q) <\left(\mu(B_{k,N})+\epsilon_1 \right)n_i.
\end{equation*}
Moreover,
\begin{equation*}
m_{n_i,k,N}(Q)\leqslant \frac{1}{kN}n_i.
\end{equation*}
Indeed, each of the excursions counted in $m_{n_i,k,N}$ has length at least $kN$, which implies that the number of such excursions cannot be larger than $\frac{1}{kN}n_i$.
Therefore Proposition \ref{prop:goodcover} implies that
\begin{equation*}
N_\mu\left(n_i, 1,1-\frac{\delta_1}{2} \right) \leq
e^{n_i(h_\mu(\beta)+\epsilon_1)}e^{n_i(\hat\delta_\infty(K)+\epsilon)(\mu(B_{k,N})+\epsilon_1)}e^{\frac{1}{kN}n_iN(\hat\delta_\infty(K)+\epsilon)}.
\end{equation*}
It now follows from Katok's entropy formula (see Theorem \ref{katformula}) that
\begin{equation*}
h_\mu(\sigma) \leqslant h_\mu(\beta_{k,N})+\epsilon_1+(\hat\delta_\infty(K)+\epsilon)(\mu(B_{k,N})+\epsilon_1)+\frac{1}{k}(\hat\delta_\infty(K)+\epsilon).
\end{equation*}
Since $\epsilon_1>0$ was arbitrary the proof is complete.
\end{proof}
As in Proposition \ref{prop:ineq} we denote the partition $\beta_{k,N}$ by $\beta$. We may assume, possibly after refining the partition, that
$$\beta=\{C^1,\ldots,C^q,R\},$$ where each $C^i$ is a cylinder and $R=\gamma_{>kN}$ is the complement of a finite collection of cylinders.
For simplicity we still denote this partition by $\beta$.
We emphasise that Proposition \ref{prop:ineq} still holds for this new partition.
For every $m\in{\mathbb N}$ define $F_m:=\bigcap_{i=0}^{m-1}\sigma^{-i} R$.
We will require the following continuity result.
\begin{proposition}\label{prop:atom} Suppose that $(\mu_n)_n$ is a sequence of ergodic probability measures converging on cylinders to an invariant measure $\mu$, where $\mu(\Sigma)>0$. For every $P\in \beta^m\setminus \{F_m\},$ we have
\begin{equation*}
\lim_{n\to\infty}\mu_n(P)=\mu(P).
\end{equation*}
\end{proposition}
\begin{proof} In order to prove the proposition we will need the following fact.
\begin{claim}\label{claim:preatom} Let $(H_i)_i$ be a collection of cylinders and $(p_i)_i$ a sequence of natural numbers. Then $H_0\cap \sigma^{-p_1}H_1\cap \cdots\cap \sigma^{-p_k}H_k$ is either a finite union of cylinders or the empty set.
\end{claim}
\begin{proof} We begin with the case of two cylinders; in other words, we will prove that if $C$ and $D$ are cylinders, then for every $p\in {\mathbb N}$ the set $C\cap \sigma^{-p}D$ is a finite union of cylinders or the empty set. If the length of $C$ is larger than or equal to $p$, then $C\cap \sigma^{-p}D$ is empty or a cylinder. If $p$ is larger than the length of $C$, then we use that there are only finitely many admissible words of a given length connecting two fixed symbols. More precisely, if $C=[x_0,\ldots,x_{h-1}]$ and $D=[y_0,\ldots,y_{t-1}]$, then there are finitely many admissible words of length $p-h+2$ connecting $x_{h-1}$ and $y_0$. We conclude that $C\cap \sigma^{-p}D$ is a finite union of cylinders or the empty set. Iterating this argument gives the claim for arbitrary $k$.
\end{proof}
Let $P=S_0\cap \sigma^{-1}S_1\cap\cdots\cap\sigma^{-(m-1)}S_{m-1}$, where $S_i\in \beta$, and set $P_k:=\bigcap_{i=k}^{m-1}\sigma^{-(i-k)}S_i$. Define $B=B(P):=\{i\in\{0,\ldots,m-1\}: S_i=R\}$, $G=G(P):=\{0,\ldots,m-1\}\setminus B$, and $k=k(P):=(\min G)-1$. By assumption we know that $G\ne \emptyset$. Let $Q_0=Q_0(P):=\bigcap_{i=0}^k \sigma^{-i}R$, $Q_1=Q_1(P):=\bigcap_{i\in G} \sigma^{-i}S_i$, and $Q_2=Q_2(P):=\bigcap_{i\in B\cap (k,\infty)}\sigma^{-i}S_i$. We will first consider the case $k=-1$, in which $P=Q_1\cap Q_2$.
\begin{claim}\label{claim:-1} Let $P=\bigcap_{i=0}^{m-1}\sigma^{-i}S_i$, where $S_0\in \{C^1,\ldots,C^q\}$. Then $$\lim_{n\to\infty} \mu_n(P)=\mu(P).$$\end{claim}
\begin{proof}
Since $Q_1$ is the disjoint union of $P=(Q_1\cap Q_2)$ and $(Q_1\cap Q_2^c)$, for every $n\in {\mathbb N}$ we obtain that
\begin{equation*}
\mu_n(P)=\mu_n(Q_1)-\mu_n(Q_1\cap Q_2^c).
\end{equation*}
Observe that
\begin{align*} Q_1\cap Q_2^c&= \left(\bigcap_{j\in G}\sigma^{-j}S_j \right) \cap \left(\bigcup_{i\in B} \sigma^{-i}R^c \right)=\bigcup_{i\in B} \left(\sigma^{-i}R^c\cap \bigcap_{j\in G}\sigma^{-j}S_j \right).
\end{align*}
From Claim \ref{claim:preatom} we conclude that for every $i\in B$ the sets $Q_1$ and $\sigma^{-i}R^c\cap \bigcap_{j\in G}\sigma^{-j}S_j$ are finite unions of cylinders (possibly empty). Therefore, $Q_1$ and $Q_1\cap Q_2^c$ are finite unions of cylinders (possibly empty). From this we immediately obtain that
\begin{align*}\lim_{n\to\infty}\mu_n(P)&=\lim_{n\to\infty}\mu_n(Q_1)-\lim_{n\to\infty}\mu_n(Q_1\cap Q_2^c)=\mu(Q_1)-\mu(Q_1\cap Q_2^c)=\mu(P),\end{align*}
which proves the claim.
\end{proof}
We now explain how to reduce the case $k\geqslant 0$ to Claim \ref{claim:-1}. Observe that $P=R\cap \sigma^{-1}P_1$, therefore $\sigma^{-1}P_1$ is the disjoint union of $P$ and $R^c\cap\sigma^{-1}P_1=\bigcup_{i=1}^q(C^i\cap \sigma^{-1}P_1)$. Thus,
\begin{align*} \mu_n(P)&=\mu_n(\sigma^{-1}P_1)- \mu_n(R^c\cap\sigma^{-1}P_1)=\mu_n(P_1)-\sum_{i=1}^q \mu_n(C^i\cap\sigma^{-1}P_1).
\end{align*}
By Claim \ref{claim:-1} we know that $\lim_{n\to\infty}\mu_n(C^i\cap\sigma^{-1}P_1)=\mu(C^i\cap\sigma^{-1}P_1)$. Therefore it suffices to prove that $\lim_{n\to\infty}\mu_n(P_1)=\mu(P_1)$. Applying the above argument $k$ times we obtain that the original problem is reduced to proving that $\lim_{n\to\infty}\mu_n(P_{k+1})=\mu(P_{k+1})$.
Since $P_{k+1}=S_{k+1}\cap \sigma^{-1}P_{k+2}$, where $S_{k+1}\in \{C^1,\ldots,C^q\}$, we conclude the proof of the proposition by applying Claim \ref{claim:-1}.
\end{proof}
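\begin{remark}
The finiteness in Claim \ref{claim:preatom} genuinely uses the $\mathcal{F}-$property. For instance, in the full shift on the alphabet ${\mathbb N}$, which does not satisfy the $\mathcal{F}-$property, we have
\begin{equation*}
[1]\cap\sigma^{-2}[1]=\bigcup_{j\in{\mathbb N}}[1\, j\, 1],
\end{equation*}
which is an infinite union of cylinders and cannot be reduced to a finite one.
\end{remark}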
\begin{proof}[Proof of Theorem~\ref{pre}]
We first consider the case in which not all the mass escapes, that is, we assume that $\mu(\Sigma)>0$. Let $\varepsilon_0>0$.
Choose $m \in {\mathbb N}$ sufficiently large such that
\begin{equation*}
h_{\frac{\mu}{|\mu|}}(\sigma)+\varepsilon_0>\frac{1}{m}H_{\frac{\mu}{|\mu|}}(\beta^{m}), \quad \frac{2e^{-1}}{m}<\frac{\varepsilon_0}{2}, \quad \text{ and } \quad -\frac{1}{m}\log \mu(\Sigma)<\varepsilon_0.
\end{equation*}
Then
\begin{equation*}
|\mu| h_{\frac{\mu}{|\mu|}}(\sigma)+2\varepsilon_0>-\frac{1}{m}\sum_{P\in \beta^m} \mu(P)\log\mu(P).
\end{equation*}
It follows from Proposition \ref{prop:atom} that
\begin{equation*}
\lim_{n\to\infty} \sum_{Q\in \beta^{m}\setminus\{F_m\}}\mu_n(Q)\log\mu_n(Q) = \sum_{Q\in \beta^{m}\setminus\{F_m\}}\mu(Q)\log\mu(Q).
\end{equation*}
For sufficiently large $n \in {\mathbb N}$ we have the inequality
\begin{equation*}
|\mu|h_{\frac{\mu}{|\mu|}}(\sigma)+3\varepsilon_0\geqslant \frac{1}{m}H_{\mu_n}(\beta^m).
\end{equation*}
By Proposition \ref{prop:ineq}, we have that
\begin{align*}
|\mu| h_{\frac{\mu}{|\mu|}}(\sigma)+3\varepsilon_0 > & \frac{1}{m}H_{\mu_n}(\beta^{m})\geq h_{\mu_n}(\sigma,\beta)\\
\geqslant & h_{\mu_n}(\sigma)-(\hat\delta_\infty(K)+\epsilon)\mu_n(B_{k,N})-\frac{1}{k}(\hat\delta_\infty(K)+\epsilon),
\end{align*}
and therefore
\begin{align}\label{forA} \limsup_{n\to\infty} h_{\mu_n}(\sigma)\leq |\mu| h_{\frac{\mu}{|\mu|}}(\sigma)+(\hat\delta_\infty(K)+\epsilon)(1-\mu(G_{k,N})) +\frac{1}{k}(\hat\delta_\infty(K)+\epsilon).
\end{align}
We stress that Proposition \ref{prop:ineq} can be applied for arbitrary $k, N \in {\mathbb N}$: since $\mu_n(K)>0$ and $K\subset K(k,N)$, we have $\mu_n(K(k,N))>0$.
Finally, letting $k\to \infty$ and $\epsilon\to 0$ we obtain the inequality
\begin{equation*}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\leqslant |\mu|h_{\mu/|\mu|}(\sigma)+\left(1-\sup_{k,N}\mu(G_{k,N}) \right)\hat\delta_\infty(K).
\end{equation*}
Observe that $Y\subset \bigcup_{k,N} G_{k,N}$, therefore $\mu(Y)\leqslant \mu \left(\bigcup_{k,N} G_{k,N} \right)=\sup_{k,N}\mu(G_{k,N})$.
We conclude that
\begin{equation*}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\leqslant |\mu|h_{\mu/|\mu|}(\sigma)+(1-\mu(Y))\hat\delta_\infty(K).
\end{equation*}
The case $\mu(\Sigma)=0$ follows directly from Proposition \ref{prop:ineq} since $h_{\mu_n}(\sigma,\beta)\to 0$ and $\mu_n(B_{k,N})\to 1$ as $n\to \infty$.
\end{proof}
\section{Proof of Theorem \ref{thm:main}}\label{finalproof}
In this section we prove our main result. We start with a simple result we will need later.
\begin{lemma}\label{lem:Kineq}
Let $(a_j)_j$ and $(b_j)_j$ be sequences of natural numbers such that $a_0=b_0$ and $a_j\leqslant b_j$ for every $j\in{\mathbb N}_0$. Then $\hat\delta_\infty(K((b_j)_j))\leqslant \hat\delta_\infty(K((a_j)_j)).$
\end{lemma}
\begin{proof} Denote by $K_1:=K((a_j)_j)$ and $K_2:=K((b_j)_j)$. Recall that associated to each compact set defined in this way there is a function $i$ (see \eqref{def:i} for the definition). Denote the function $i$ associated to $K_1$ (resp. $K_2$) by $i_1$ (resp. $i_2$). It follows from the hypothesis that $K_1\subset K_2$. In particular we have that $K_2^c\subset K_1^c$ and therefore $T_n(K_2)\subset T_n(K_1)$ (see \eqref{def:T} for the definition of $T$). Moreover, we have that
\begin{equation*}
\widehat{T}_n(K_2)\subset \widehat{T}_n(K_1),
\end{equation*}
(see \eqref{def:That} for the definition of $\widehat{T}$). Indeed, let $x\in \widehat{T}_n(K_2)$; then $i_2(\sigma^k(x))\leqslant n-k$ for every $k\in \{1,\ldots,n\}$. In particular
\begin{equation*}
(\sigma^k(x))_{i_2(\sigma^k(x))}>b_{i_2(\sigma^k(x))}\geqslant a_{i_2(\sigma^k(x))}.
\end{equation*}
We conclude that $i_1(\sigma^k(x))\leqslant i_2(\sigma^k(x))\leqslant n-k$, and therefore $x\in \widehat{T}_n(K_1)$. Thus $\widehat{T}_n(K_2)\subset \widehat{T}_n(K_1)$, which readily implies that for every $n \in {\mathbb N}$ we have $\hat z_n(K_2)\leqslant \hat z_n(K_1)$, and hence $\hat\delta_\infty(K_2)\leqslant \hat\delta_\infty(K_1)$.
\end{proof}
In the next lemma we establish a relation between the quantities $\hat\delta_\infty(K)$ and $\delta_\infty(q)$, which in turn is necessary to relate Theorem \ref{pre} with Theorem \ref{thm:main}. As mentioned before, in the definition of $\hat\delta_\infty(K)$ we covered the sets $\widehat{T}_n(K)$ (and not $T_n(K)$ which may seem more natural) in order to ensure this result.
\begin{lemma}\label{lem:Kineq2} Let $(\Sigma, \sigma)$ be a CMS satisfying the $\mathcal{F}-$property, and $M , q \in {\mathbb N}$. Then there exists a sequence of natural numbers $(a_i)_i$ such that $a_0=q$, and
\begin{equation*}
\hat\delta_\infty(K)\leqslant \delta_\infty(M,q),
\end{equation*}
where $K=K((a_i)_i)$.
\end{lemma}
\begin{proof}
Let $i \in {\mathbb N}$. Since $\Sigma$ satisfies the $\mathcal{F}-$property, there are finitely many cylinders of the form $[x_0,\ldots,x_n]$, where $x_0\leqslant q$, $x_n\leqslant q$, and $n\leqslant iM$. Thus, only a finite collection of symbols from the alphabet is used in this collection of cylinders. Denote by $r_i \in {\mathbb N}$ the largest symbol in this collection. Define $(a_i)_i \subset {\mathbb N}$ inductively by $a_0=q$ and, for every $i\geqslant 1$,
\begin{equation*}
a_{i} >a_{i-1} \quad \text{ and } \quad a_i > r_i.
\end{equation*}
We now prove that the set $K=K((a_i)_i)$ is such that $\hat z_n(K)\leqslant z_n(M,q)$, for every $n\in {\mathbb N}$. Recall that $\hat z_n(K)$ is the minimal number of cylinders of length $(n+2)$ needed to cover $\widehat{T}_n(K)$. Let $x=(x_0, x_1, \dots) \in \widehat{T}_n(K)$,
\begin{equation*}
E:= \left\{k\in \{0,\ldots,n+1\}:x_k\leqslant a_0 \right\},
\end{equation*}
and $B:=\{0,\ldots,n+1\}\setminus E$. For $k\in E$ we define $p_k:=i(\sigma^k(x))$. We emphasise that since $k\in E$ then $x_k\leqslant a_0$, thus $p_k=i(\sigma^k(x))\geqslant 1$. Let $r \in E$ and observe that $x_{p_r+r}=(\sigma^r(x))_{p_r}>a_{p_r}$, where $p_r\leqslant n-r$.
Because of the choice of $a_{p_r}$, there is no admissible word of length less than or equal to $p_rM$ connecting $x_{p_r+r}$ and a symbol in the set $\{1, \dots, q\}$. Since $x_{n+1}\leqslant q$, this means that we must have $p_r+r+(p_rM)\leqslant n+1$. Moreover, for every $0\leqslant m< p_rM$
we have that $p_r+r+m\in B$. In other words, the interval $[r,r+p_r+p_rM)$ has at least $p_rM$ elements in $B$, equivalently, at most $p_r$ elements in $E$. Since this argument holds for every $r\in E$ we conclude that $M\#E\leqslant n+2$ and therefore
\begin{equation} \label{eq:ele}
\#E\leqslant \frac{n+2}{M}.
\end{equation}
From \eqref{eq:ele} it follows that every $x\in \widehat{T}_n$ belongs to a cylinder of the form $[x_0,\ldots,x_{n+1}]$, where $x_0\leqslant q$, $x_{n+1}\leqslant q$ and
\begin{equation*}
\#\{i\in\{0,1,\ldots,n+1\}: x_i\leqslant q\}\leqslant \frac{n+2}{M}.
\end{equation*}
This implies that $\hat z_n(K)\leqslant z_n(M,q)$, for every $n\in {\mathbb N}$. Therefore $\hat\delta_\infty(K)\leqslant \delta_\infty(M,q)$.
\end{proof}
Define $\hat{\delta}_\infty(q):=\inf_{(a_i)_i:a_0=q} \hat\delta_\infty(K((a_i)_i))$.
\begin{corollary}\label{cor:deltaineq}
For every $q\in {\mathbb N}$ we have $\hat{\delta}_\infty(q)\leqslant \delta_\infty(q)$.
\end{corollary}
\begin{proof} Combine Lemma \ref{lem:Kineq} with Lemma \ref{lem:Kineq2}.
\end{proof}
We now prove Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
Let $(a_i)_i$ be an increasing sequence of natural numbers and $K:=K((a_i)_i)$ the corresponding compact set. We assume that $K$ is large enough that there exists a periodic measure $\mu_p$ with $\mu_p(K)>0$. We will prove that
\begin{align}\label{11}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\leqslant |\mu|h_{\mu/|\mu|}(\sigma)+(1-\mu(K))\hat\delta_\infty(K).
\end{align}
Let $\mu'_{n}:=(1-\frac{1}{n})\mu_n+\frac{1}{n}\mu_p$. Observe that for every $n\in{\mathbb N}$ we have $\mu_n'(K)>0$. It follows from Proposition \ref{dense} that there exists an ergodic measure $\nu_n$ arbitrarily close in the weak$^*$ topology to $\mu_{n}'$ such that $h_{\nu_n}(\sigma)>h_{\mu'_n}(\sigma)-\frac{1}{n}$. In particular, we can assume that $\nu_n(K(n,n))>0$ and that $(\nu_n)_n$ converges on cylinders to $\mu$.
Let $k , N \in {\mathbb N}$. If $n>\max\{k,N\}$ then $K(n,n)\subset K(k,N)$, therefore $\nu_n(K(k,N))>0$. It now follows from \eqref{forA} that
\begin{equation*}
\limsup_{n\to\infty} h_{\nu_n}(\sigma)\leq |\mu| h_{\frac{\mu}{|\mu|}}(\sigma)+(\hat\delta_\infty(K)+\epsilon)(1-\mu(G_{k,N})) +\frac{1}{k}(\hat\delta_\infty(K)+\epsilon).
\end{equation*}
Letting $k$ tend to infinity and $\epsilon$ to zero we obtain
\begin{equation*}
\limsup_{n\to\infty} h_{\nu_n}(\sigma)\leq |\mu| h_{\frac{\mu}{|\mu|}}(\sigma)+(1-\mu(K))\hat\delta_\infty(K).
\end{equation*}
Since $h_{\nu_n}(\sigma)>h_{\mu'_n}(\sigma)-\frac{1}{n}=(1-\frac{1}{n})h_{\mu_n}(\sigma)-\frac{1}{n}$ (recall that the periodic measure $\mu_p$ has zero entropy), we obtain
\begin{equation*}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\leqslant \limsup_{n\to \infty} h_{\nu_n}(\sigma),
\end{equation*}
from which \eqref{11} follows.
The argument above also holds for every set $K'=K((b_i)_i)$, where $a_0=b_0$ and $a_i\leqslant b_i$. Observe that $\sup_{(b_i)_i:b_0=a_0} \mu(K((b_i)_i))=\mu(K_{a_0})$. Thus, it is a consequence of Corollary \ref{cor:deltaineq} that
\begin{align*} \limsup_{n\to \infty} h_{\mu_n}(\sigma)&\leqslant |\mu|h_{\mu/|\mu|}(\sigma)+(1-\mu(K_{a_0}))\hat{\delta}_\infty(a_0)\\
& \leqslant |\mu|h_{\mu/|\mu|}(\sigma)+(1-\mu(K_{a_0}))\delta_\infty(a_0).
\end{align*}
Letting $a_0$ tend to infinity concludes the proof of Theorem \ref{thm:main}.
\end{proof}
\section{Variational principle for the entropies at infinity}\label{entinf1}
In this section we prove Theorem \ref{thm:vpinf}. That is, we prove a variational principle at infinity: the measure theoretic entropy at infinity coincides with its topological counterpart.
For each pair $(i,j)\in{\mathbb N}^2$ choose a non-empty cylinder $w(i,j)$ of length $\ell(i,j)+1$ such that
\begin{equation*}
w(i,j):=[i, \dots, j]= [w(i,j)_0, \dots , w(i,j)_{\ell(i,j)}],
\end{equation*}
that is, $w(i,j)_0=i$ and $w(i,j)_{\ell(i,j)}=j$; such cylinders exist since $(\Sigma,\sigma)$ is transitive.
Let $\varphi: \Sigma \to {\mathbb R}$ be a potential and define
\begin{equation*}
Z_n(\varphi, a,b):=\sum_{x:\sigma^{n+\ell(b,a)}(x)=x}\exp \left(S_{n+\ell(b,a)}\varphi(x) \right)1_{[a]\cap \sigma^{-n}w(b,a)}(x).
\end{equation*}
In the following lemma we show that the Gurevich pressure can be computed by means of the partition function $Z_n(\varphi, a,b)$; this will be used in Lemma \ref{lem:testineq}.
\begin{lemma}\label{lem:equivgur} Let $(\Sigma,\sigma)$ be a transitive CMS and $\varphi: \Sigma \to {\mathbb R}$ a bounded potential with summable variations. Then for every pair $(a,b) \in {\mathbb N}^2$ we have that
\begin{equation*}
P_{G}(\varphi)=\limsup_{n\to\infty}\frac{1}{n}\log Z_n(\varphi,a,b).
\end{equation*}
\end{lemma}
\begin{proof} Let $C=\|\varphi\|_0$ and $D=\sum_{k=2}^\infty \text{var}_k(\varphi)$. It follows from the definition of $Z_n(\varphi,a,b)$ that
\begin{equation*}
Z_{n+\ell(b,a)}(\varphi,a)=\sum_{x:\sigma^{n+\ell(b,a)}(x)=x}\exp(S_{n+\ell(b,a)}\varphi(x))1_{[a]}(x) \geqslant Z_n(\varphi,a,b).
\end{equation*}
In particular we obtain that
\begin{equation*}
P_{G}(\varphi)=\limsup_{n\to\infty}\frac{1}{n}\log Z_n(\varphi,a)\geqslant \limsup_{n\to\infty}\frac{1}{n}\log Z_n(\varphi,a,b).
\end{equation*}
Let $\mathbb{P}_n:=w(a,b)\cap \sigma^{-n}w(b,a)$. Note that
\begin{align*}
Z_n(\varphi,a,b)&\geqslant \sum_{x:\sigma^{n+\ell(b,a)}(x)=x}\exp \left(S_{n+\ell(b,a)}\varphi(x) \right)1_{\mathbb{P}_n}(x)\\
& \geqslant e^{-(\ell(a,b)+\ell(b,a))C} \sum_{x:\sigma^{n+\ell(b,a)}(x)=x} \exp \left(S_{n -\ell(a,b)}\varphi(\sigma^{\ell(a,b)}x) \right)1_{\mathbb{P}_n}(x).\\
\end{align*}
Observe that if $x=(x_0, x_1, \dots) \in \mathbb{P}_n$, then $x_{\ell(a,b)}=x_{n}=b$. Define the periodic point $y(x):=\overline{x_{\ell(a,b)}\ldots x_{n-1}}$. The function $y$ establishes a one-to-one correspondence between the points $x\in \mathbb{P}_n$ such that $\sigma^{n+\ell(b,a)}(x)=x$ and the periodic points of period $n-\ell(a,b)$ in $[b]$. Moreover, note that if $x\in\mathbb{P}_n$, then
\begin{equation*}
\left |S_{n-\ell(a,b)} \left(\varphi(\sigma^{\ell(a,b)}x) \right)-S_{n-\ell(a,b)} \left(\varphi(y(x)) \right) \right|\leqslant D.
\end{equation*}
We conclude that
\begin{eqnarray*}
\sum_{x:\sigma^{n+\ell(b,a)}(x)=x} \exp \left(S_{n -\ell(a,b)}\varphi(\sigma^{\ell(a,b)}x) \right)1_{\mathbb{P}_n}(x) \geqslant &\\
e^{-D} \sum_{y:\sigma^{n-\ell(a,b)}(y)=y} \exp \left(S_{n -\ell(a,b)}\varphi(y) \right)1_{[b]}(y).
\end{eqnarray*}
That is, $Z_n(\varphi,a,b)\geqslant e^{-(\ell(a,b)+\ell(b,a))C-D}Z_{n-\ell(a,b)}(\varphi,b)$ and therefore
\begin{equation*}
\limsup_{n\to\infty}\frac{1}{n}\log Z_n(\varphi,a,b)\geqslant P_{G}(\varphi).
\end{equation*}
\end{proof}
\begin{remark} Note that in Lemma \ref{lem:equivgur} the assumption $\|\varphi\|_0<\infty$ is too strong for what is required: it suffices to assume that
for every $n\in {\mathbb N}$ we have $\sup_{x\in [n]}|\varphi(x)|<\infty$.
\end{remark}
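\begin{remark}
Taking $\varphi\equiv 0$ in Lemma \ref{lem:equivgur} is already instructive: in that case $Z_n(0,a,b)$ simply counts the periodic points of period $n+\ell(b,a)$ lying in $[a]\cap \sigma^{-n}w(b,a)$, and, recalling that for a transitive CMS the Gurevich pressure $P_G(0)$ is the Gurevich entropy, which coincides with $h_{top}(\sigma)$, the lemma yields
\begin{equation*}
h_{top}(\sigma)=\limsup_{n\to\infty}\frac{1}{n}\log\#\left\{x\in [a]\cap \sigma^{-n}w(b,a): \sigma^{n+\ell(b,a)}(x)=x \right\}.
\end{equation*}
\end{remark}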
We say that a point $x\in\Sigma$ belongs to the set $Per(q,M,n)$ if the following properties hold:
\begin{enumerate}
\item $\sigma^n(x)=x$.
\item If $x\in [x_0,\ldots,x_{n-1}]$, then $x_0\leqslant q$, and $\#\{k\in\{0,\ldots,n-1\}:x_k\leqslant q\}\leqslant \frac{n}{M}$.
\end{enumerate}
The following lemma is important in our proof of Theorem \ref{thm:vpinf} as it will allow us to find a sequence of invariant probability measures which converges to the zero measure and whose entropies approach the topological entropy at infinity.
\begin{lemma}\label{lem:testineq} Let $\varphi: \Sigma \to {\mathbb R}$ be a bounded potential of summable variations such that
\begin{equation*}
\lim_{n\to\infty}\sup_{x\in [n]} |\varphi(x)|=0.
\end{equation*}
Then $P_G(\varphi)\geqslant \delta_\infty$.
\end{lemma}
\begin{proof}
For every $\epsilon>0$ there exists $N_0=N_0(\epsilon) \in {\mathbb N}$ such that $\sup_{x \in [n]} |\varphi(x)| \leqslant \epsilon$, for every $n\geqslant N_0$. By Lemma \ref{lem:equivgur}, for each pair $(a,b)\in \{1,\ldots,N_0\}^2$ we have $\limsup_{n\to\infty}\frac{1}{n}\log Z_n(\varphi,a,b)= P_G(\varphi)$. Since the sum below has finitely many terms, it follows that
\begin{equation*}
P_G(\varphi)\geqslant \limsup_{n\to\infty} \frac{1}{n}\log\sum_{(a,b)\in \{1,\ldots,N_0\}^2} Z_n(\varphi,a,b).
\end{equation*}
Define
\begin{equation*}
\mathbb{T}_n(a,b):=\sum_{x\in Per(N_0,M,n+\ell(b,a))} \exp \left(S_{n+\ell(b,a)}\varphi(x) \right)1_{[a]\cap \sigma^{-n}w(b,a)}(x),
\end{equation*}
and observe that $Z_n(\varphi,a,b)\geqslant \mathbb{T}_n(a,b).$ Recall that $x\in Per(N_0,M,n+\ell(b,a))$ implies that
\begin{equation*}
\# \left\{k\in\{0,\ldots,n+\ell(b,a)-1\}:x_k\leqslant N_0 \right\} \leqslant \frac{n+\ell(b,a)}{M}.
\end{equation*}
It follows from the choice of $N_0$ that
\begin{equation*}
S_{n+\ell(b,a)}\varphi(x)\geqslant -(n+\ell(b,a))\epsilon-\frac{\|\varphi\|_0}{M}(n+\ell(b,a)).
\end{equation*}
In particular
\begin{equation*}
\mathbb{T}_n(a,b)\geqslant \# \left\{ Per(N_0,M,n+\ell(b,a))\cap [a]\cap\sigma^{-n}w(b,a) \right\} e^{-(n+\ell(b,a))(\epsilon+\frac{\|\varphi\|_0}{M})}.
\end{equation*}
Denote by $\mathcal{W}_n(a,b,N_0,M)$ the collection of cylinders of the form $[x_0,\ldots,x_{n}]$, where $x_0=a$, $x_n=b$, and $\#\{k\in\{0,\ldots,n\}:x_k\leqslant N_0\}\leqslant \frac{n+1}{2M}$.
In order to estimate the number of these using periodic points, to each cylinder $[x_0,\ldots,x_n]\in \mathcal{W}_n(a,b,N_0,M)$ we associate the cylinder
\begin{equation*}
D=[y_0,\ldots,y_{n+\ell(b,a)}]=[x_0,\ldots,x_n,(w(b,a))_1,\ldots,w(b,a)_{\ell(b,a)}].
\end{equation*}
Observe that $y_0=a$, $y_{n+\ell(b,a)}=a$, and $$\#\{k\in \{0,\ldots,n+\ell(b,a)\}:y_k\leqslant N_0\}\leqslant \frac{n+1}{2M}+\ell(b,a).$$ For $n \in {\mathbb N}$ sufficiently large we can assume that $\frac{n+1}{2M}+\ell(b,a)\leqslant \frac{n+\ell(b,a)}{M}$. In particular the periodic point associated to $D$ belongs to $Per(N_0,M,n+\ell(b,a))\cap[a]\cap\sigma^{-n}w(b,a)$. We conclude that
\begin{equation*}
\#\mathcal{W}_n(a,b,N_0,M)\leqslant \#\left\{Per(N_0,M,n+\ell(b,a))\cap[a]\cap\sigma^{-n}w(b,a)\right\}.
\end{equation*}
Observe that $\sum_{(a,b)\in \{1,\ldots,N_0\}^2} \#\mathcal{W}_n(a,b,N_0,M)=z_{n-1}(2M,N_0)$, which implies
\begin{equation*}
z_{n-1}(2M,N_0)\leqslant \sum_{(a,b)\in \{1,\ldots,N_0\}^2} \#\left\{Per(N_0,M,n+\ell(b,a))\cap[a]\cap\sigma^{-n}w(b,a) \right\}.
\end{equation*}
Hence, writing $\ell_{N_0} := \max_{(a,b)\in \{1,\ldots,N_0\}^2}\ell(b,a)$, we obtain that
\begin{equation*}
\sum_{(a,b)\in \{1,\ldots,N_0\}^2}Z_n(\varphi,a,b)\geqslant \sum_{(a,b)\in \{1,\ldots,N_0\}^2}\mathbb{T}_n(a,b)\geqslant z_{n-1}(2M,N_0) e^{-(n+\ell_{N_0})(\epsilon+\frac{\|\varphi\|_0}{M})},
\end{equation*}
and therefore
\begin{equation*}
P_G(\varphi)\geqslant \limsup_{n\to\infty}\frac{1}{n}\log \sum_{(a,b)\in \{1,\ldots,N_0\}^2}Z_n(\varphi,a,b)\geqslant \delta_\infty(2M,N_0)-\epsilon-\frac{\|\varphi\|_0}{M}.
\end{equation*}
Letting $M\to\infty$ we obtain that $P_G(\varphi)\geqslant \delta_\infty(N_0)-\epsilon$. Choosing $N_0$ sufficiently large we can make $\epsilon$ arbitrarily small, and we conclude that $P_G(\varphi)\geqslant \delta_\infty$.
\end{proof}
Recall that the measure theoretic entropy at infinity of a transitive CMS of finite entropy $(\Sigma,\sigma)$ is defined by
\begin{equation*}
h_\infty :=\sup_{(\mu_n)_n\to 0}\limsup_{n\to\infty}h_{\mu_n}(\sigma),
\end{equation*}
where the supremum is taken over all sequences of invariant probability measures converging on cylinders to the zero measure. An immediate consequence of Theorem \ref{thm:main} is the following upper bound for the measure theoretic entropy at infinity of $(\Sigma,\sigma)$:
\begin{align} \label{eq:ineinf}
h_\infty \leqslant \delta_\infty.
\end{align}
Indeed, if a sequence of invariant probability measures converges on cylinders to the zero measure, then the limit measure satisfies $|\mu|=0$ and Theorem \ref{thm:main} yields $\limsup_{n\to\infty}h_{\mu_n}(\sigma)\leqslant \delta_\infty$.
We will now prove that in fact equality holds. This is equivalent to the sharpness of the inequality in Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:vpinf}]
As observed in \eqref{eq:ineinf}, it suffices to prove the inequality $\delta_\infty\leqslant h_\infty$. Let $\varphi: \Sigma \to {\mathbb R}$ be a bounded, strictly negative, locally constant potential depending only on the first coordinate such that
\begin{equation*}
\lim_{n\to\infty}\sup_{x\in [n]} |\varphi(x)|=0.
\end{equation*}
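One concrete choice, for instance, is $\varphi(x):=-\frac{1}{x_0}$ for $x=(x_0,x_1,\dots)\in\Sigma$: this potential is bounded, strictly negative and depends only on the first coordinate (so $\text{var}_k(\varphi)=0$ for every $k\geqslant 2$, and in particular it has summable variations), and it satisfies $\sup_{x\in[n]}|\varphi(x)|=\frac{1}{n}\to 0$.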
By Lemma \ref{lem:testineq}, for every $t \in {\mathbb R}$ we have $P_G(t\varphi)\geqslant \delta_\infty$. Now consider a sequence of measures $(\mu_n)_{n}$ such that
\begin{equation*}
h_{\mu_n}(\sigma)+n\int \varphi ~d\mu_n>P_G(n\varphi)-\frac{1}{n}.
\end{equation*}
The existence of such a sequence of invariant probability measures is guaranteed by the variational principle. Then
\begin{equation*}
h_{\mu_n}(\sigma)+n\int \varphi~d\mu_n>\delta_\infty-\frac{1}{n}.
\end{equation*}
Since the potential $\varphi$ is strictly negative and bounded, and since $h_{\mu_n}(\sigma)\leqslant h_{top}(\sigma)<\infty$, the displayed inequality forces $\int \varphi \,d\mu_n\to 0$; as $\varphi$ is bounded away from zero on each cylinder $[a]$, we conclude that the sequence $(\mu_n)_{n}$ converges on cylinders to the zero measure. Moreover, since $\int\varphi\, d\mu_n<0$ we have $h_{\mu_n}(\sigma)\geqslant \delta_\infty-\frac{1}{n}$, and therefore
\begin{equation*}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\geqslant \delta_\infty.
\end{equation*}
In particular, $\delta_\infty\leqslant h_\infty$.
\end{proof}
\section{Applications} \label{sec:app}
In this section we discuss several consequences of Theorem \ref{thm:main}. Among the consequences we obtain the upper semi-continuity of the entropy map, the entropy density of the space of ergodic measures, the stability of the measure of maximal entropy in the SPR case, existence of equilibrium states for potentials in $C_0(\Sigma)$, a relationship between the entropy at infinity and the dimension of the set of recurrent points that escape on average and a bound on the amount of mass that can escape for measures with large entropy.
\subsection{Upper semi-continuity of the entropy map} \label{sec:usc}
Starting in the early 1970s with the work of Bowen \cite{bo1} many results describing the continuity properties of the entropy map have been obtained. More precisely, given a dynamical system $T:X \to X$, the map $\mu \mapsto h_{\mu}(T)$ defined on the space $ \mathcal{M}(X,T)$ endowed with the weak$^*$ topology is called the \emph{entropy map}. In general it is not continuous \cite[p.184]{wa}. However, it was soon realised that if $X$ is compact and $T$ expansive then the entropy map is upper semi-continuous \cite[Theorem 8.2]{wa}. This result has been extended to a wide range of dynamical systems exhibiting weak forms of expansion or hyperbolicity, but always assuming the compactness of $X$. Indeed, there exist examples of expansive maps $T$ defined on non-compact spaces for which the entropy map is not upper semi-continuous. We discuss some of them in this section (see Remark \ref{rem:nousc}). We recently proved in \cite[Corollary 1.2]{itv} that if $(\Sigma, \sigma)$ is a finite entropy transitive CMS then the entropy map is upper semi-continuous when restricted to ergodic measures. The method of proof used in \cite{itv} does not seem to generalise to handle the non-ergodic case. However, the general case can be obtained directly as a corollary of Theorem \ref{thm:main}.
\begin{theorem} \label{semicont} Let $(\Sigma,\sigma)$ be a transitive CMS of finite topological entropy and $(\mu_n)_{n}$ a sequence of $\sigma$-invariant probability measures converging weak$^*$ to $\mu$. Then
\begin{equation*}
\limsup_{n\to \infty} h_{\mu_n}(\sigma)\leqslant h_\mu(\sigma).
\end{equation*}
That is, the entropy map is upper semicontinuous.
\end{theorem}
The proof follows immediately from Theorem \ref{thm:main}, the fact that $|\mu|=1$ and Lemma \ref{restriction}.
\begin{remark}\label{rem:nousc}
We now describe the situation in the infinite entropy case.
\begin{enumerate}
\item[(a)]
Without the finite entropy assumption, Theorem~\ref{semicont} is false, as we demonstrate here. If $(\Sigma , \sigma)$ is a topologically transitive infinite entropy CMS then there exists a sequence $(\nu_n)_n$ and $\mu$ in $\mathcal{M}(\Sigma, \sigma)$ such that $h_{\mu}(\sigma) < \infty$, and $\lim_{n \to \infty} h_{\nu_n}(\sigma)= \infty$. Let $(\mu_n)_n$ be the sequence of invariant probability measures defined by
\begin{equation*}
\mu_n:= \left(1-\frac{1}{\sqrt{h_{\nu_n}(\sigma)}} \right)\mu+\frac{1}{\sqrt{h_{\nu_n}(\sigma)}} \nu_n.
\end{equation*}
Notice that $\mu_n$ is well defined for large enough $n$. Then $(\mu_n)_n$ converges weak$^*$ to $\mu$ and, since the entropy map is affine,
\begin{equation*}
h_{\mu_n}(\sigma)=\left(1-\frac{1}{\sqrt{h_{\nu_n}(\sigma)}} \right)h_{\mu}(\sigma)+\sqrt{h_{\nu_n}(\sigma)}\longrightarrow\infty,
\end{equation*}
as $n\to\infty$. In particular
\begin{equation*}
h_{\mu}(\sigma)<\lim_{n \to \infty} h_{\mu_n}(\sigma)= \infty.
\end{equation*}
Therefore, the entropy map is not upper semi-continuous at any finite entropy measure.
\item[(b)] Examples of sequences of ergodic measures with uniformly bounded finite entropy converging weak$^*$ to an ergodic measure (with finite entropy) in the full-shift on a countable alphabet, for which the entropy map is not upper semi-continuous, can be found in \cite[p.774]{jmu} and \cite[Remark 3.11]{itv}.
\item[(c)] The entropy map is trivially upper semi-continuous at any measure of infinite entropy.
\end{enumerate}
\end{remark}
We conclude this subsection with a consequence of Theorem \ref{thm:main} and Remark \ref{rem:nousc}.
\begin{proposition}\label{prop:iff} Let $(\Sigma,\sigma)$ be a transitive CMS. Then $h_{top}(\sigma)$ is finite if and only if $\delta_\infty$ is finite.
\end{proposition}
\begin{proof} We only need to prove that if $\delta_\infty$ is finite, then $h_{top}(\sigma)$ is finite; the other direction follows directly from the inequality $\delta_\infty\leqslant h_{top}(\sigma)$.
First assume that $(\Sigma,\sigma)$ does not satisfy the $\mathcal{F}-$property. It follows directly from the definition of $\delta_\infty$ that in this situation we have $\delta_\infty=\infty$, so there is nothing to prove in this case.
Now assume that $(\Sigma,\sigma)$ satisfies the $\mathcal{F}-$property. In the proof of Theorem \ref{thm:main} we did not use the fact that the topological entropy of $(\Sigma,\sigma)$ is finite; we only used that our CMS has the $\mathcal{F}-$property and that $\delta_\infty$ is finite (both follow trivially under the finite entropy assumption). The $\mathcal{F}-$property is crucially used in Proposition \ref{prop:atom} and Lemma \ref{lem:Kineq2}. If $\delta_\infty$ is finite, then Theorem \ref{thm:main} implies that the entropy map is upper semi-continuous, which would contradict Remark \ref{rem:nousc} if $h_{top}(\sigma)$ were infinite. We conclude that the topological entropy of $(\Sigma,\sigma)$ is finite.
\end{proof}
\subsection{Suspension flows}
Let $(\Sigma, \sigma)$ be a transitive, finite entropy CMS and $\tau: \Sigma \to {\mathbb R}^+$ a potential bounded away from zero. Let $$Y:= \left\{ (x,t)\in \Sigma \times {\mathbb R} \colon 0 \leqslant t \leqslant\tau(x) \right\},$$
with the points $(x,\tau(x))$ and $(\sigma(x),0)$ identified for each $x\in \Sigma $. The \emph{suspension flow} over $\Sigma$
with \emph{roof function} $\tau$ is the semi-flow $\Phi= (\varphi_t)_{t \in {\mathbb R}_{\geqslant 0}}$ on $Y$ defined by
$ \varphi_t(x,s)= (x,s+t)$ whenever $s+t\in[0,\tau(x)]$. Denote by $\mathcal{M}(Y,\Phi)$ the space of flow invariant probability measures. In this section we prove that the entropy map is upper semi-continuous in this continuous-time, non-compact setting as well. This generalises \cite[Proposition 5.2]{itv} in which upper semi-continuity of the entropy map was proven for ergodic measures. Let
\begin{equation}
\mathcal{M}_\sigma(\tau):= \left\{ \mu \in \mathcal{M}(\Sigma,\sigma): \int \tau ~d \mu < \infty \right\}.
\end{equation}
A result by Ambrose and Kakutani \cite{ak} implies that the map $M \colon \mathcal{M}_\sigma(\tau) \to \mathcal{M}(Y,\Phi)$, defined by
\begin{equation*} \label{eq:R map}
M(\mu)=\frac{(\mu \times \text{Leb})|_{Y} }{(\mu \times \text{Leb})(Y)},
\end{equation*}
where \text{Leb} is the one-dimensional Lebesgue measure, is a bijection.
The following result proved in \cite[Lemma 5.1]{itv} describes the relation between weak$^*$ convergence in $\mathcal{M}(Y,\Phi)$ with that in $\mathcal{M}(\Sigma, \sigma)$.
\begin{lemma} \label{lem:weak}
Let $(\nu_n), \nu \in \mathcal{M}(Y,\Phi)$ be flow invariant probability measures such that
\begin{equation*}
\nu_n=\frac{(\mu_n \times \text{Leb})|_Y}{\int \tau~d \mu_n} \quad \text{ and } \quad \nu= \frac{(\mu \times \text{Leb})|_Y}{\int \tau~d \mu}
\end{equation*}
where $(\mu_n)_n , \mu \in \mathcal{M}(\Sigma, \sigma)$. If the sequence $(\nu_n)_n$ converges weak$^*$ to $\nu$ then
$(\mu_n)_n$ converges weak$^*$ to $\mu$ and $\lim_{n \to \infty} \int \tau~d \mu_n = \int \tau~d\mu$.
\end{lemma}
\begin{proposition} \label{thm:susp}
Let $(\Sigma,\sigma)$ be a transitive CMS of finite topological entropy. Let $\tau$ be a potential bounded away from zero and $(Y,\Phi)$ the suspension flow of $(\Sigma,\sigma)$ with roof function $\tau$. Then the entropy map of $(Y, \Phi)$ is upper semi-continuous.
\end{proposition}
The proof follows directly from Abramov's formula \cite{ab}, which gives $h_{M(\mu)}(\Phi)=h_\mu(\sigma)/\int \tau~d\mu$, together with Lemma \ref{lem:weak} and Theorem \ref{thm:main}. Because of the similarities between the geodesic flow and the suspension flow over a Markov shift it is reasonable to expect that, under suitable assumptions on the roof function $\tau$, the suspension flow also satisfies an entropy inequality like Theorem \ref{thm:main}. This is in fact the case and will be discussed in \cite{ve2}. The space of invariant measures for the suspension flow was already investigated and described in \cite[Section 6]{iv}.
\subsection{Entropy density of ergodic measures} \label{sec:ed}
The structure of the space of invariant measures for finite entropy (non-compact) CMS was studied in \cite{iv}. In this non-compact setting it is well known that the space of ergodic measures is still dense in $\mathcal{M}(\Sigma,\sigma)$ (see \cite[Section 6]{csc}). A natural question is whether the approximation by ergodic measures can be arranged so that the corresponding entropies also converge.
If this is the case we say that the set of ergodic measures is \emph{entropy dense}. More precisely,
\begin{definition} \label{def:edense}
A subset $\mathcal{L} \subset \mathcal{M}(\Sigma,\sigma)$ is \emph{entropy dense} if for every measure $\mu \in \mathcal{M}(\Sigma,\sigma)$ there exists a sequence $(\mu_n)_n$ in $\mathcal{L}$ such that
\begin{enumerate}
\item $(\mu_n)_n$ converges to $\mu$ in the weak$^*$ topology.
\item $\lim_{n\to\infty} h_{\mu_n}(\sigma)=h_\mu(\sigma)$.
\end{enumerate}
\end{definition}
Results proving that certain classes of measures are entropy dense have been obtained for different dynamical systems defined on compact spaces by Katok \cite{ka}, Orey \cite{or}, F\"ollmer and Orey \cite{fo}, Eizenberg, Kifer and Weiss \cite{ekw} and by Gorodetski and Pesin \cite{gp} among others. In this section we prove, for the non-compact setting of finite entropy CMS, that the set of ergodic measures ${\mathcal E}(\Sigma,\sigma)$ is entropy dense.
\begin{theorem}\label{teodense}
Let $(\Sigma, \sigma)$ be a finite entropy, transitive CMS and $\mu \in \mathcal{M}(\Sigma,\sigma)$. Then there exists a sequence $(\mu_n)_{n }$ of ergodic measures such that
$(\mu_n)_n$ converges to $\mu$ in the weak$^*$ topology and $\lim_{n\to\infty} h_{\mu_n}(\sigma)=h_\mu(\sigma)$, i.e., ${\mathcal E}(\Sigma, \sigma)$ is entropy dense. Moreover, it is possible to choose the sequence so that each $\mu_n$ has compact support.
\end{theorem}
The proof of this result follows directly by combining Theorem \ref{semicont}, where the upper semi-continuity of the entropy map is proved, and Proposition \ref{dense}, where a weak form of entropy density of the set of ergodic measures was proved. Note that the entropy density property of ergodic measures is an important tool in proving large deviations principles via the orbit-gluing technique (see, for example, \cite{ekw} and \cite{fo}).
\subsection{Points that escape on average}
In this section we relate the Hausdorff dimension of the set of recurrent points that escape on average with the entropy at infinity of $(\Sigma,\sigma)$. Recall we have fixed an identification of the alphabet of $(\Sigma,\sigma)$ with ${\mathbb N}$.
\begin{definition}
Let $(\Sigma, \sigma)$ be a CMS. The set of points that \emph{escape on average} is defined by
\begin{equation*}
E:=\left\{ x \in \Sigma : \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} 1_{[a]}(\sigma^i x)=0, \text{ for every } a \in {\mathbb N} \right\}.
\end{equation*}
We say that $x\in\Sigma$ is a \emph{recurrent point} if there exists an increasing sequence $(n_k)_k$ such that $\lim_{k\to\infty}\sigma^{n_k}(x)=x$. The set of recurrent points is denoted by $\mathcal{R}$.
\end{definition}
A version of the set $E$ has been considered in the context of homogeneous dynamics. Interest in that set stems from the work of Dani \cite{da} in the mid-1980s, who proved that singular matrices are in one-to-one correspondence with certain divergent orbits of one parameter diagonal groups on the space of lattices. For example, Einsiedler and Kadyrov \cite[Corollary 1.7]{ek} computed the Hausdorff dimension of that set in the setting of $SL_3({\mathbb Z}) \backslash SL_3({\mathbb R})$. In the context of unimodular $(n+m)-$lattices an upper bound for the Hausdorff dimension of the set of points that escape on average has been obtained in \cite[Theorem 1.1]{kklm}. More recently, for the Teichm\"uller geodesic flow, in \cite[Theorem 1.8]{aaekmu} the authors prove an upper bound for the Hausdorff dimension of directions in which Teichm\"uller geodesics escape on average in a stratum. In all the works mentioned above, either explicitly or not, the bounds are related to the entropy at infinity of the system. Our next result establishes an analogous result for CMS. In this case the upper bound is the entropy at infinity divided by $\log 2$. This latter constant comes from the metric we consider in the space (see \eqref{metric}) and can be thought of as the Lyapunov exponent of the system.
\begin{theorem}\label{thm:onave}
Let $(\Sigma, \sigma)$ be a finite entropy transitive CMS. Then
\begin{equation*}
\dim_H(E\cap \mathcal{R}) \leqslant \frac{\delta_{\infty}}{\log 2},
\end{equation*}
where $\dim_H$ denotes the Hausdorff dimension with respect to the metric \eqref{metric}.
\end{theorem}
Before initiating the proof of Theorem \ref{thm:onave} let us set up some notation. Given natural numbers $a, b, q, m$ and $N$ we define $S^q_{a,b}(N,m)$ as the collection of cylinders of the form $[x_0,...,x_{L-1}]$, where $L\geqslant Nm$, $x_0=a, x_{L-1}=b,$ and the number of indices $i\in\{0,...,L-1\}$ such that $x_i\leqslant q$ is exactly $N$. It will be convenient to define
$$H_{a,b}^q(n,m):=\bigcup_{N\geqslant n}S_{a,b}^q(N,m).$$
Finally define
$$\mathcal{L}_b:=\{x \in\Sigma: \exists (n_k)_k \text{ strictly increasing such that } \sigma^{n_k}(x)\in[b], \forall k\in{\mathbb N}\},$$
and $\mathcal{L}=\bigcup_{b\in {\mathbb N}} \mathcal{L}_b$.
\begin{remark}\label{rem:enddd} Let $a, b, q$ and $m$ be natural numbers. Assume that $q\geqslant b$. Note that if $x\in \left(E\cap\cL_b\cap [a] \right)$, then there exists $s_0 \in {\mathbb N}$ such that
$$\# \left\{i\in\{0,...,s-1\}: x_i\leqslant q \right\}\leqslant \frac{s}{m},$$
for every $s\geqslant s_0$. Moreover, there exists an increasing sequence $(n_k)_{k}$ such that $x_{n_k}=b$. Define ${\mathcal T}_k(x)=\#\{i\in\{0,...,n_k-1\}: x_i\leqslant q\}$. Since $q\geqslant b$ we get that ${\mathcal T}_k(x)\geqslant k$. Observe that if $n_k\geqslant s_0$, then
$$m\mathcal{T}_k(x)= m\#\{i\in\{0,...,n_k-1\}: x_i\leqslant q\}\leqslant n_k.$$
We conclude that $$[x_0,...,x_{n_k-1}]\in S^q_{a,b}(\mathcal{T}_k(x),m)\subset \bigcup_{p\geqslant k}S^q_{a,b}(p,m)=H_{a,b}^q(k,m).$$ This gives us the inclusion
\begin{align}\label{eq:cov}
\left( E\cap \cL_b\cap [a] \right) \subset \bigcup_{C\in H_{a,b}^q(k,m)} C,\end{align}
for every $k\in {\mathbb N}$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:onave}] First observe that $\left(E\cap\cR \right)\subset \bigcup_{b\in{\mathbb N}} \left(E\cap\cL_b\cap [b] \right)$. In particular it suffices to prove that $\dim_H(E\cap \mathcal{L}_b\cap[a])\leqslant \delta_\infty / \log 2$, for every pair of natural numbers $a$ and $b$. Fix $t> \delta_\infty/\log 2$. Recall that $\delta_\infty=\inf_{m,q}\delta_\infty(m,q)$ (see equation \eqref{infinf}). Choose $m$ and $q$ large enough so that $t>\delta_\infty(m,q)/\log 2$ and such that $q\geqslant \max\{a,b\}$. Observe that we are now in the same setup as in Remark \ref{rem:enddd}.
In order to estimate the Hausdorff dimension of $E\cap \cL_b\cap[a]$ we will use the covering given by \eqref{eq:cov}. Thus, it is enough to bound $\sum_{C\in H_{a,b}^q(k,m)} \mbox{\rm diam} (C)^t$. First observe that since $q\geqslant\max\{a,b\}$,
a cylinder $C\in H_{a,b}^q(k,m)$ has length $\ell(C)\geqslant k$. Recall that $\mbox{\rm diam} (C)\leqslant 2^{-\ell(C)}=e^{-(\log 2)\ell(C) }$. Therefore, as $k \in {\mathbb N}$ increases the diameter of the covering given by \eqref{eq:cov} converges to zero. Now observe that
\begin{align*}\sum_{C\in H_{a,b}^q(k,m)} \mbox{\rm diam} (C)^t&\leqslant \sum_{C\in H_{a,b}^q(k,m)} e^{-t(\log 2)\ell(C)}\\
&=\sum_{\ell\geqslant k}e^{-t(\log 2) \ell}\,\#\{C\in H_{a,b}^q(k,m): \ell(C)=\ell\}\\
&\leqslant\sum_{\ell\geqslant k} e^{-t(\log 2) \ell}z_{\ell-2}(m,q). \end{align*}
In the last inequality we used that $$\#\{C\in H_{a,b}^q(k,m): \ell(C)=\ell\}\leqslant z_{\ell-2}(m,q).$$ Indeed, if $C\in H_{a,b}^q(k,m)$ and $\ell(C)=\ell$, then $C$ is a cylinder of the form $[x_0,...,x_{\ell-1}]$ where $x_0=a$, $x_{\ell-1}=b$, and $$\#\{i\in\{0,...,\ell-1\}: x_i\leqslant q\}=N\leqslant \frac{\ell}{m},$$ for some $N\geqslant k$.
Since $\max\{a,b\}\leqslant q$ we conclude that $C$ is one of the cylinders counted in the definition of $z_{\ell-2}(m,q)$ (see Definition \ref{def:ent_inf}).
By the definition of $\delta_\infty(m,q)$ the series $Z(s):=\sum_{\ell=2}^\infty e^{-s\ell}z_{\ell-2}(m,q)$ is convergent for $s>\delta_\infty(m,q)$. In particular since $t\log 2>\delta_\infty(m,q)$ we have that $Z(t\log 2)$ is finite. This implies that the tail of $Z(t\log 2)$ converges to zero. We conclude that $\sum_{C\in H_{a,b}^q(k,m)} \mbox{\rm diam} (C)^t$ goes to zero as $k\to\infty$. This implies that $\dim_H(E\cap \cL_b\cap[a])\leqslant t$; since $t$ was an arbitrary number larger than $\delta_\infty / \log 2$, the result follows.\end{proof}
\begin{remark} It is proved in \cite[Theorem 3.1]{i} that if $(\Sigma,\sigma)$ is a transitive CMS with finite topological entropy, then $\dim_H(\cR)=h_{top}(\sigma)/\log 2$. In particular if $(\Sigma,\sigma)$ is SPR, then $\dim_H(E\cap\cR)<\dim_H(\cR)$.
\end{remark}
\subsection{Measures of maximal entropy}\label{sec:mme}
An invariant measure $\mu \in \mathcal{M}(\Sigma, \sigma)$ is called a \emph{measure of maximal entropy} if $h_{\mu}(\sigma)= h_{top}(\sigma)$.
It follows from work by Gurevich \cite{gu1,gu2} that if $h_{top}(\sigma)<\infty$ then there exists at most one measure of maximal entropy. Note that a direct consequence of the variational principle (see \cite{gu2} or
Theorem \ref{thm:vp}) is that there exists a sequence of invariant probability measures $(\mu_n)_n$ such that $\lim_{n \to\infty} h_{\mu_n}(\sigma)= h_{top}(\sigma)$.
Moreover, if the sequence has a weak$^*$ accumulation point $\mu$ then it follows from the upper semi-continuity of the entropy map, see Theorem \ref{semicont}, that $h_{\mu}(\sigma)=h_{top}(\sigma)$. Since the space $\mathcal{M}(\Sigma, \sigma)$ is not compact there are cases in which the sequence $(\mu_n)_n$ does not have an accumulation point. In fact, there exist transitive finite entropy CMS that do not have measures of maximal entropy (see \cite{ru2} for a wealth of explicit examples). Our next result follows directly from Theorem \ref{thm:main} and Theorem \ref{compact}. Recall that $(\Sigma,\sigma)$ is SPR if and only if $\delta_\infty<h_{top}(\sigma)$ (see Proposition \ref{prechar}).
\begin{theorem} \label{thm:mme}
Let $(\Sigma, \sigma)$ be a SPR CMS and $(\mu_n)_{n}$ a sequence of $\sigma$-invariant probability measures such that
\begin{equation*}
\lim_{n\to\infty}h_{\mu_n}(\sigma)=h_{top}(\sigma).
\end{equation*}
Then the sequence $(\mu_n)_{n }$ converges in the weak$^*$ topology to the unique measure of maximal entropy.
\end{theorem}
\begin{proof} Note that the inequality $\delta_\infty<h_{top}(\sigma)$ immediately implies that $(\Sigma,\sigma)$ has finite topological entropy (see Proposition \ref{prop:iff}). Since $\mathcal{M}_{\le1}(\Sigma,\sigma)$ is compact (see Theorem \ref{compact}) there exists a subsequence $(\mu_{n_k})_k$ which converges on cylinders to $\mu\in \mathcal{M}_{\le1}(\Sigma,\sigma)$. It follows directly from Theorem \ref{thm:main} that
\begin{equation*}
h_{top}(\sigma)= \limsup_{k\to \infty} h_{\mu_{n_k}}(\sigma)\leqslant |\mu|h_{\mu/|\mu|}(\sigma)+(1-|\mu|)\delta_\infty.
\end{equation*}
Recall that $\delta_{\infty} < h_{top}(\sigma)$. If $|\mu| <1$ then the right hand side of the equation is a convex combination of numbers, one of which is strictly smaller than $h_{top}(\sigma)$. Since this is not possible we have that $|\mu|=1$. In particular
\begin{equation*}
h_{top}(\sigma) \leqslant h_{\mu}(\sigma).
\end{equation*}
That is, $\mu$ is a measure of maximal entropy. We conclude that $(\Sigma,\sigma)$ has a measure of maximal entropy. The same argument holds for every subsequence of $(\mu_n)_n$; this implies that the entire sequence $(\mu_n)_n$ converges in the weak$^*$ topology to the unique measure of maximal entropy.
\end{proof}
In fact Theorem \ref{thm:main} also gives a complete description of the case in which strong positive recurrence fails, as follows. Some of these results were originally proved in \cite[Theorem 6.3]{gs} by different methods.
\begin{theorem}\label{thm:mme2}
Let $(\Sigma, \sigma)$ be a transitive CMS of finite entropy.
\begin{enumerate}
\item Suppose $(\Sigma,\sigma)$ does not admit a measure of maximal entropy. Let $(\mu_n)_{n }$ be a sequence of $\sigma$-invariant probability measures such that $\lim_{n\to\infty}h_{\mu_n}(\sigma)=h_{top}(\sigma)$. Then $(\mu_n)_{n }$ converges on cylinders to the zero measure and $\delta_\infty=h_{top}(\sigma)$.
\item Suppose that $(\Sigma,\sigma)$ is positive recurrent, but $h_{top}(\sigma)=\delta_\infty$. Let $(\mu_n)_{n}$ be a sequence of $\sigma$-invariant probability measures such that $\lim_{n\to\infty}h_{\mu_n}(\sigma)=h_{top}(\sigma)$. Then the accumulation points of $(\mu_n)_{n}$ lie in the set $\{ \lambda \mu_{max}: \lambda \in [0,1]\}$, where $\mu_{max}$ is the measure of maximal entropy. Moreover, every measure in $\{ \lambda \mu_{max}:\lambda\in [0,1]\}$ can be realised as such a limit.
\end{enumerate}
\end{theorem}
\begin{proof}
Note that part (1) directly follows from Theorem \ref{thm:main}. Indeed, if a sequence $(\mu_n)_{n }$ with $\lim_{n\to\infty}h_{\mu_n}(\sigma)=h_{top}(\sigma)$ converges on cylinders to a measure $\mu \in \mathcal{M}_{\leq 1}(\Sigma, \sigma)$ different from the zero measure then $\mu/|\mu|$ would be a measure of maximal entropy. This argument also gives us the first part of (2), that is, the accumulation points of $(\mu_n)_n$ lie in $\{\lambda\mu_{max}:\lambda\in[0,1]\}$. As for the second part of (2), by Theorem \ref{thm:vpinf} there exists a sequence $(\mu_n)_n$ in $\mathcal{M}(\Sigma,\sigma)$ with $\lim_{n \to \infty} h_{\mu_n}(\sigma)=h_{top}(\sigma)$ such that $(\mu_n)_n$ converges on cylinders to the zero measure. Since there exists a measure of maximal entropy $\nu$ we have that for every $\lambda \in [0,1]$ the sequence $\rho_n:=\lambda \nu +(1-\lambda)\mu_n$ converges on cylinders to $\lambda\nu$ and $\lim_{n \to \infty} h_{\rho_n}(\sigma) =h_{top}(\sigma)$.
\end{proof}
\subsection{Existence of equilibrium states}\label{sec:eqst}
In this section we will always assume that $(\Sigma,\sigma)$ is a transitive CMS with finite entropy. In Section~\ref{sec:tf} we described the thermodynamic formalism developed by Sarig in the setting of CMS and functions (potentials) of summable variations. It turns out that the same methods can be extended and thermodynamic formalism can be developed for functions with weaker regularity assumptions (for example functions satisfying the Walters condition \cite{sabook}). However, these methods cannot be extended much further. In this section we propose an alternative definition of pressure that generalises the Gurevich pressure to the space of functions $C_0(\Sigma)$ (see Definition \ref{C_0}). We stress that these functions are merely uniformly continuous. Making use of Theorem \ref{thm:main} we can ensure the existence of equilibrium states.
The following result is a direct consequence of Theorem \ref{thm:main} and the continuity of the map $\mu\mapsto \int Fd\mu$, when $F\in C_0(\Sigma)$ and $\mu$ ranges in $\mathcal{M}_{\le1}(\Sigma,\sigma)$ endowed with the cylinder topology.
\begin{theorem}\label{ineqC_0} Let $(\Sigma,\sigma)$ be a transitive CMS with finite entropy and $F\in C_0(\Sigma)$. Let $(\mu_n)_n$ be a sequence in $\mathcal{M}(\Sigma,\sigma)$ converging on cylinders to $\lambda \mu$, where $\lambda\in [0,1]$ and $\mu\in \mathcal{M}(\Sigma,\sigma)$. Then
$$\limsup_{n\to\infty}\left( h_{\mu_n}(\sigma)+\int Fd\mu_n\right)\leqslant \lambda \left(h_{\mu}(\sigma)+\int Fd\mu\right)+(1-\lambda)\delta_\infty.$$
\end{theorem}
For a continuous, bounded potential $F$ define the \emph{(variational) pressure} of $F$ by
\begin{equation*}
P_{var}(F):=\sup_{\mu\in \mathcal{M}(\Sigma,\sigma)}\left(h_\mu(\sigma)+\int Fd\mu\right).
\end{equation*}
A measure $\mu$ is an equilibrium state for $F$ if $P_{var}(F)=h_\mu(\sigma)+\int Fd\mu$. Recall that since $F$ need not be of summable variations, the classification of potentials (see Definition \ref{def:clas}) and the uniqueness of equilibrium states (Theorem \ref{clas}) do not necessarily hold.
Note that if $F\in C_0(\Sigma)$, then $P_{var}(F)\geqslant \delta_\infty$. Indeed, let $(\mu_n)_n$ be a sequence of measures in $\mathcal{M}(\Sigma,\sigma)$ converging on cylinders to the zero measure and such that $\lim_{n\to\infty}h_{\mu_n}(\sigma)=\delta_\infty$. Since $F\in C_0(\Sigma)$, then $\lim_{n\to\infty}\int Fd\mu_n=0$. We conclude that $$P_{var}(F)\geqslant \limsup_{n\to\infty}\left(h_{\mu_n}(\sigma)+\int Fd\mu_n\right)=\delta_\infty.$$
Our next result follows directly from Theorem \ref{ineqC_0} and Theorem \ref{compact}, as Theorem \ref{thm:mme} follows from Theorem \ref{thm:main} and Theorem \ref{compact}.
\begin{theorem}\label{thm:sta} Let $(\Sigma,\sigma)$ be a transitive CMS with finite entropy and $F\in C_0(\Sigma)$. Assume that $P_{var}(F)> \delta_\infty$. Then there exists an equilibrium state for $F$. Moreover, if $(\mu_n)_n$ is a sequence in $\mathcal{M}(\Sigma,\sigma)$ such that $$\lim_{n\to\infty}\left(h_{\mu_n}(\sigma)+\int Fd\mu_n\right)=P_{var}(F),$$
then every limiting measure of $(\mu_n)_n$ is an equilibrium state of $F$.
\end{theorem}
In Theorem \ref{thm:sta}, if we further assume that $F$ has summable variations, then the sequence $(\mu_n)_n$ converges in the weak$^*$ topology to the unique equilibrium state of $F$. For the description of the pressure map $t\mapsto P_{var}(tF)$ we refer the reader to \cite[Theorem 5.7]{rv}.
\subsection{Entropy and escape of mass}
In this subsection we show that
for a SPR CMS $(\Sigma,\sigma)$ it is possible to bound the escape of mass of sequences of measures with sufficiently large entropy. In the setting of homogeneous flows an analogous result was proven in \cite[Corollary of Theorem A]{ekp}.
\begin{theorem} \label{thm:em} Let $(\Sigma,\sigma)$ be a SPR CMS. Let $(\mu_n)_n$ be a sequence in $\mathcal{M}(\Sigma,\sigma)$ such that $h_{\mu_n}(\sigma)\geqslant c$ for every $n\in{\mathbb N}$, where $c\in (\delta_\infty,h_{top}(\sigma))$. Then every limiting measure $\mu$ of $(\mu_n)_n$ with respect to the cylinder topology satisfies
\begin{equation*}
\mu(\Sigma)\geqslant \frac{c-\delta_\infty}{h_{top}(\sigma)-\delta_\infty}.
\end{equation*}
\end{theorem}
\begin{proof}
From Theorem \ref{thm:main} we have that
\begin{align*}
c \leqslant \limsup_{n \to \infty} h_{\mu_n}(\sigma) \leqslant \mu(\Sigma) h_{\mu / |\mu|}(\sigma) + (1 - \mu(\Sigma)) \delta_{\infty} \leqslant
\mu(\Sigma) (h_{top}(\sigma) - \delta_{\infty}) + \delta_{\infty}.
\end{align*}
Rearranging the outer inequality gives $\mu(\Sigma)\geqslant (c-\delta_\infty)/(h_{top}(\sigma)-\delta_\infty)$.
\end{proof}
|
1,116,691,497,912 | arxiv | \section{Introduction} \label{S1Intr}
The ultraviolet (UV) continuum spectrum of star-forming galaxies is characterized by the spectral index $\beta$ in the form of $f_{\lambda} \propto \lambda^{\beta}$ \citep{Calz94}. The $\beta$ value is related to physical quantities such as the age, metallicity, and dust extinction of a galaxy. In the case of less dust attenuation, younger age, and lower metallicity, a galaxy has a more negative (bluer) $\beta$ value (e.g., \cite{Bouw10}; \cite{Stan16}). Since it is relatively simple to measure $\beta$ values even for objects at high redshift, the $\beta$ index is a useful tool to probe their physical quantities.
The typical value of $\beta$ found in previous works is $\sim -1.7$ at $z \sim 4$ for $\sim L_{*}$ galaxies with $M_{\mathrm{UV}} \sim -21.0$ (\cite{Bouw12}; \cite{Fink12}), and it is bluer at higher redshift up to $z \sim 7$ ($\beta$--\textit{z} relation: e.g., \cite{Wil11}). At fainter magnitudes (e.g., $M_{\mathrm{UV}} \sim -19.0$), the observed $\beta$ value still has large uncertainties, and the relation between $\beta$ and UV absolute magnitude ($\beta$--$M_{\mathrm{UV}}$ relation) has been a subject of debate for several years (e.g., \cite{Bouw12}; \cite{Bouw14}; \cite{Dunl12}; \cite{Dunl13}; \cite{Fink12}; \cite{Rog14}). \authorcite{Bouw12} and \authorcite{Rog14} report that bright galaxies have redder $\beta$ values and faint galaxies have bluer $\beta$ values, while \authorcite{Dunl13} and \authorcite{Fink12} report that $\beta$ values are constant over the observed magnitude range. This inconsistency in the $\beta$--$M_{\mathrm{UV}}$ relation can be caused by both large photometric errors for faint galaxies and selection bias. In order to reveal the true $\beta$--$M_{\mathrm{UV}}$ relation, a larger sample of objects with small photometric uncertainties is needed, and/or it is necessary to assess the incompleteness of the observed $\beta$ distribution. Recently, \citet{Dunc15} and \citet{Bouw14} have shown that the $\beta$ value decreases with the $M_{\mathrm{UV}}$ value. \authorcite{Dunc15} finds the trend by combining the results of the previous literature (\cite{Bouw14}; \cite{Dunc14}; \cite{Dunl12}; \cite{Dunl13}; \cite{Fink12}; \cite{Rog14}; \cite{Wil11}) and \authorcite{Bouw14} discusses the trend by assessing the observational bias (incompleteness) of the observed $\beta$ distribution in the faint magnitude range.
The $\beta$--$M_{\mathrm{UV}}$ relation is understood as another aspect of the mass--metallicity relation seen in star-forming galaxies, since the $\beta$ value depends on the dust extinction (\cite{Bouw09}, \yearcite{Bouw12}, \yearcite{Bouw14}). Moreover, it is suggested that there is a ``knee'' in the $\beta$--$M_{\mathrm{UV}}$ relation at $M_{\mathrm{UV}} \sim -19.0$, and the dependence of $\beta$ on $M_{\mathrm{UV}}$ becomes weaker at $M_{\mathrm{UV}} \gtrsim -19.0$ than at $M_{\mathrm{UV}} < -19.0$ \citep{Bouw14}. The change of the dependence is interpreted as a change of the dependence of the dust extinction on the UV luminosity or the stellar mass (e.g., \cite{Pann09}; \cite{Redd10}). The semi-analytic model predicts a sudden change of the dust-to-gas mass ratio at a critical metallicity, and the dust mass rapidly increases at metallicities above the critical metallicity (e.g., \cite{Hira11}). The change of the dependence of $\beta$ on $M_{\mathrm{UV}}$ perhaps indicates the existence of the critical metallicity. The redshift dependence of $\beta$ is interpreted in terms of the dust attenuation history or the dust production history in star-forming galaxies. Interestingly, the dust attenuation history at $z \gtrsim 3$ estimated from the redshift dependence of $\beta$ is smoothly connected with the dust attenuation history at $z \lesssim 3$ estimated from direct measurements of both IR and UV luminosity \citep{Burg13}. The dust attenuation history is also used for revealing the history of the true (dust-corrected) cosmic Star Formation Rate (SFR) density in the high-z universe because it is still difficult to obtain the IR luminosity of high-z star-forming galaxies (e.g., \cite{Bouw09}, \yearcite{Bouw12}; \cite{Mada14}). Currently, the $\beta$--\textit{z} relation has been used for considering the sources of cosmic reionization, assuming that the $\beta$ value represents the production rate of hydrogen ionizing photons, since the production rate is sensitive to the stellar population (e.g., \cite{Dunc15}; \cite{Bouw15}, \yearcite{Bouw16a}).
We note that the samples in most of the literature overlap and are not independent, since the GOODS-South/HST or HUDF/HST data sets have mostly been used in these works. Due to the small observed area, the number of bright objects at $z = 4$ in the field is limited (except for \cite{Rog14}), and these previous studies focus on relatively faint galaxies at high redshift. The $\beta$ distribution of luminous objects is, however, also important since such a population provides important clues to understand the early star-formation history in the universe (e.g., \cite{Cucc12}; \cite{Hatf18}). In fact, by using stacked images, \citet{LeeKS11} investigate the $\beta$ value of ultra-luminous star-forming galaxies ($19.46 \leq I < 24.96$) at \textit{z} $\sim$ 4, and they find that the $\beta$ value of the stacked star-forming galaxies tends to be redder toward the brighter magnitude range. Taking all the previous works into consideration, an investigation of individual, ``normal'' luminous galaxies ($L \sim L_{*}$) is clearly needed.
Recent Atacama Large Millimeter/submillimeter Array (ALMA) observations have revealed the dust properties of high redshift star-forming galaxies through the relation between the ratio of IR to UV luminosity (the so-called IRX) and the UV slope $\beta$ (IRX--$\beta$ relation; e.g., \cite{Cpk15}; \cite{Bouw16}; \cite{McLu18}). In order to interpret these results, it is very important to investigate the detailed relation between the UV slope $\beta$ and the stellar population which is hidden by dust extinction. On the other hand, there is difficulty in studying the intrinsic values of $\beta$, $M_{\mathrm{UV}}$, and the stellar population of luminous and massive galaxies, since the effect of dust extinction on their color degenerates with the age and metallicity of their stellar population: more massive systems are on average older and more metal rich. It is essential to resolve this degeneracy and understand the intrinsic properties of star formation and the effects of dust extinction.
In this paper, we present the results on the UV slope $\beta$ and stellar population of relatively bright galaxies with $M_{\mathrm{UV}} \lesssim -20$ at $z \sim 4$ in the relatively wide-area and deep Subaru/XMM-Newton Deep Survey (SXDS) field. Wide area coverage is essential to sample rare luminous galaxies. In section 2, we explain the data and sample selection. In section 3, we describe our method to evaluate the $\beta$ value. In section 4, we show the observed $\beta$--$M_{UV}$ relation and assess the incompleteness of our sample selection. In section 5, we discuss the intrinsic $\beta$--$M_{UV}$ relation, the most active star-forming galaxies, and the dust attenuation law for $z \sim 4$ galaxies. In section 6, we give our conclusions and summarize this work. In regard to the cosmological parameters, we assume $\Omega_{m,0} = 0.3$, $\Omega_{\Lambda,0} = 0.7$, $H_{\mathrm{0}} = 70\>\mathrm{km\>s^{-1}\>Mpc^{-1}}$. Finally, throughout this work, we use the AB magnitude system (\cite{OkeGunn}; \cite{Fkgt96}).
\section{ Data and sample selection } \label{S2Dtss}
\subsection{ Data } \label{S2s1da}
In our analysis, we select Lyman Break Galaxies (LBGs) at $z \sim 4$ in the SXDS field, which is partially covered by other surveys (i.e., UDS-UKIDSS/UKIRT, UDS-CANDELS/HST, and SEDS/Spitzer). Figure \ref{fig1} shows the field map of each survey and indicates that the UDS-CANDELS and SEDS surveys do not cover the entire field of SXDS or UDS-UKIDSS.
Our catalog includes \textit{B}, \textit{V}, \textit{R}, \textit{i'}, \textit{z'}, and updated-\textit{z'} from Subaru/Suprime-Cam, \textit{J}, \textit{H}, and \textit{K} from UKIRT/WFCAM, F125W and F160W from HST/WFC3, and 3.6$\>\micron$ and 4.5$\>\micron$ from Spitzer/IRAC. The \textit{B}, \textit{V}, \textit{R}, \textit{i'}, \textit{z'} band images are taken from the archived SXDS data \citep{Furu08}. The CCDs of Subaru/Suprime-Cam were replaced with new Hamamatsu Photonics CCDs in 2008, and the total response function improved, especially at longer wavelengths. The Subaru/Suprime-Cam \textit{z'} band images were taken again after the replacement, and we call these images the updated-\textit{z'} band. The limiting magnitude of the updated-\textit{z'} images is $\sim 0.5\>\mathrm{mag}$ deeper than that of the archived SXDS data \citep{Furu16}. The \textit{J}, \textit{H}, and \textit{K} band images are taken from the UKIRT Infrared Deep Sky Survey (UKIDSS: \cite{Law07}) DR10, and the F125W and F160W band images are taken from the Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS: \cite{Grog11}; \cite{Koek11}). For the data from the \textit{B} to \textit{K} band, we use 2''-diameter aperture magnitudes measured by using SExtractor\footnote{$\langle$http://www.astromatic.net/software/sextractor$\rangle$} ver.2.5.0 \citep{BeAr96}. The images with smaller PSFs are convolved with Gaussian kernels so that the stellar FWHM (1''.0) matches that of the original updated-\textit{z'} band image. On the other hand, the 3.6$\>\micron$ and 4.5$\>\micron$ photometry is taken from the Spitzer Extended Deep Survey (SEDS) catalog \citep{Ash13}, and we apply an aperture correction. Finally, we pick up the objects in the overlapping region covered by both the SXDS and UDS-UKIDSS fields (see figure \ref{fig1}: the overlapping region is filled with yellow slanting lines) because we need both optical and NIR photometry for estimating the UV slope $\beta$ value of \textit{z} $\sim$ 4 galaxies. The information of the imaging data is summarized in table \ref{tab1}.
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Pic.Region_SXDS_new1812.eps}
\end{center}
\caption{ Field map of the imaging data used in our analysis. The blue solid line shows the sky coverage of the SXDS field; the green dot-dash line represents the sky coverage observed with Subaru/updated-\textit{z'}; the red dashed line represents the sky coverage of the UDS-UKIDSS field; the black dotted line represents the sky coverage of the UDS-CANDELS field; and the brown two-dot-dash line represents the sky coverage of the SEDS field. Our catalog consists of the objects within the area covered by all of the SXDS, updated-\textit{z'}, and UDS fields, since we use the \textit{i'}, \textit{z'}, updated-\textit{z'}, and \textit{J} band photometry for estimating the UV slope $\beta$ value. This area is filled with the yellow slanting lines. } \label{fig1}
\end{figure}
\begin{table*}
\tbl{Summary of survey data.}{%
\begin{tabular}{lllccc}
\hline
Field Name & Instrument & Band & Limiting Mag. & PSF FWHM & Reference\footnotemark[$*$] \\
(Survey Name) & & & ($5\,\sigma$, 2''$\phi$ , AB) & (arcsec) & \\
\hline
SXDS & Subaru/Suprime-Cam & \textit{B} & 27.5 & 0.8 & (1) \\
SXDS & Subaru/Suprime-Cam & \textit{V} & 27.1 & 0.8 & (1) \\
SXDS & Subaru/Suprime-Cam & \textit{R} & 27.0 & 0.8 & (1) \\
SXDS & Subaru/Suprime-Cam & \textit{i'} & 26.9 & 0.8 & (1)\\
SXDS & Subaru/Suprime-Cam & \textit{z'} & 25.8 & 0.7 & (1) \\
& Subaru/Suprime-Cam & updated-\textit{z'} & 26.5 & 1.0 & (2) \\
UDS-UKIDSS & UKIRT/WFCAM & \textit{J} & 25.5 & 0.8 & (3) \\
UDS-UKIDSS & UKIRT/WFCAM & \textit{H} & 24.9 & 0.8 & (3) \\
UDS-UKIDSS & UKIRT/WFCAM & \textit{K} & 25.2 & 0.8 & (3) \\
UDS-CANDELS & HST/WFC3 & F125W & 25.6 & 0.12 & (4), (5) \\
UDS-CANDELS & HST/WFC3 & F160W & 25.6 & 0.18 & (4), (5) \\
SEDS & Spitzer/IRAC & 3.6$\>\micron$ & 24.75\footnotemark[$\dag$] & 1.8 & (6) \\
SEDS & Spitzer/IRAC & 4.5$\>\micron$ & 24.8\footnotemark[$\dag$] & 1.8 & (6) \\
\hline
\end{tabular}}\label{tab1}
\begin{tabnote}
\footnotemark[$*$] (1)\citet{Furu08}; (2)\citet{Furu16}; (3)\citet{{Law07}}; (4)\citet{Grog11}; (5)\citet{Koek11}; (6)\citet{Ash13}; \\
\footnotemark[$\dag$] The values show the total magnitude where the completeness of the source detection is 50\%.
\end{tabnote}
\end{table*}
\subsection{ Sample selection and SED fitting } \label{S2s2ss}
From the photometric sample, we select the objects satisfying all the following criteria.
\begin{itemize}
\setlength{\itemsep}{-2pt}
\item[(1)] $i' \leq 26.0$
\item[(2)] Subaru/\textit{z'} or Subaru/updated-\textit{z'} $\geq$ 2$\,\sigma$
\item[(3)] UKIRT/\textit{J} or HST/F125W $\geq$ 2$\,\sigma$
\item[(4)] $B - R > 1.2$, $R - i' < 0.7$, and $B - R > 1.6(R - i') + 1.9$
\item[(5)] $3.5 \leq z_{phot} < 4.5$ with reduced $\chi^2 \leq 2$
\end{itemize}
Criterion (1) is applied so as to select galaxies bright enough to have small photometric errors, and the magnitude threshold corresponds to $S / N \gtrsim 11.5$. Criteria (2) and (3) are required to estimate the $\beta$ value accurately. Due to stellar spikes and/or saturated pixels, some objects are not detected in the deeper Subaru/updated-\textit{z'} or HST/F125W imaging but are detected in the shallower Subaru/\textit{z'} or UKIRT/\textit{J} imaging. Therefore, we use criteria (2) and (3) in this form. Criterion (4) is the \textit{BRi'}--LBG selection investigated by \citet{Ouch04} for the Subaru/Suprime-Cam filter set, and this criterion is intended to pick up star-forming galaxies at \textit{z} $\sim$ 4. In \citet{Ouch04}, the detectability of \textit{z} $\sim$ 4 galaxies and the contamination rate from low-\textit{z} galaxies are discussed, and readers should refer to that paper for more details. After applying criteria (1)--(4), the total number of objects is $\sim$ 2100. Criterion (5) is applied so as to select reliable galaxies at \textit{z} $\sim$ 4. The reduced $\chi^{2}$ value is calculated for each galaxy as $\chi^2 / \mathrm{d.o.f}$, in which d.o.f $=$ (number of observed broad-band filters for each galaxy) $-$ (number of free parameters in the fitting). In the selection procedure, our concern is only the photometric redshift, and thus we adopt the number of free parameters $=$ 1. As a result, our catalog contains $\sim$ 1800 objects, which are visually checked in order to avoid stellar spikes and/or saturated pixels.
We describe in detail the $\sim$ 300 objects excluded by criterion (5). Among them, (i) $\sim$ 130 objects have reduced $\chi^2 > 2$ at $3.5 \leq z_{phot} < 4.5$, (ii) $\sim$ 90 objects are low-z interlopers at $z_{phot} < 3.0$, and (iii) $\sim$ 80 objects are slightly lower/higher redshift objects at $3.0 \leq z_{phot} < 3.5$ or $4.5 \leq z_{phot} < 5.0$. First of all, it is reasonable to exclude the objects in sub-sample (ii) because they are possible contamination in our study. On the other hand, the objects in sub-samples (i) and (iii) are potential LBGs at $3.5 \leq z_{phot} < 4.5$. We check the influence of sub-samples (i) and (iii) on the UV slope $\beta$, the UV magnitude, and the other quantities, and consequently we confirm that the $\sim$ 300 objects excluded by criterion (5) do not change our results and that criterion (5) is reasonable.
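For concreteness, criteria (1), (4), and (5) amount to a vectorized cut on the catalog. The following Python sketch illustrates them under the assumption of hypothetical \texttt{numpy} arrays of aperture magnitudes and photometric-redshift outputs; the signal-to-noise criteria (2) and (3) are omitted for brevity, and this is not the exact code of our pipeline.
\begin{verbatim}
import numpy as np

def bri_lbg_mask(B, R, i, z_phot, chi2_nu):
    """Criteria (1), (4), and (5): i' magnitude cut, the BRi'
    dropout cuts of Ouchi et al. (2004), and the photo-z cut."""
    BR, Ri = B - R, R - i
    color = (BR > 1.2) & (Ri < 0.7) & (BR > 1.6 * Ri + 1.9)
    photz = (z_phot >= 3.5) & (z_phot < 4.5) & (chi2_nu <= 2.0)
    return (i <= 26.0) & color & photz
\end{verbatim}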
\begin{table*}
\tbl{Summary of parameters for SED fitting analysis}{%
\begin{tabular}{ccc}
\hline
Parameter & \citet{Bruz03} & STARBURST99 \\
\hline
IMF & Chabrier & Kroupa \\
SFH & Single Burst with finite time duration ($10^{7}\>\mathrm{yr}$) & Instantaneous Burst \\
& Continuous Constant & Continuous Constant \\
& Exponentially Decline with $\tau = 0.1$, $2$, and $5\>\mathrm{Gyr}$ & \\
Age & $5\>\mathrm{Myr}$--$15\>\mathrm{Gyr}$ & $2\>\mathrm{Myr}$--$150\>\mathrm{Myr}$ \\
Metallicity ($Z$) & $0.02\,Z_{\solar}$, $0.2\,Z_{\solar}$, and $Z_{\solar}$ & $0.02\,Z_{\solar}$, $0.2\,Z_{\solar}$, $0.4\,Z_{\solar},$ and $Z_{\solar}$ \\
Dust (Av) & $0.0$--$3.0$ & $0.0$--$3.0$ \\
Dust extinction curve & \citet{Calz00} & \citet{Calz00} \\
Redshift (z) & $0.0$--$6.0$ & $0.0$--$6.0$ \\
Nebular continuum & Not included & Included and Not included \\
\hline
\end{tabular}}\label{tab2}
\begin{tabnote}
\end{tabnote}
\end{table*}
For the investigation of photometric redshift and stellar population, we use the Hyperz\footnote{$\langle$http://webast.ast.obs-mip.fr/hyperz/$\rangle$} photometric redshift code ver.1.1 \citep{Bolz00} with the \citet{Bruz03} templates\footnote{$\langle$http://www.bruzual.org/bc03/$\rangle$} (hereafter BC03) and the STARBURST99\footnote{$\langle$http://www.stsci.edu/science/starburst99/docs/default.htm$\rangle$} \citep{Leit99} templates (hereafter SB99). The BC03 templates are chosen as ``typical galaxy'' models and are constructed from five different star formation histories (SFHs), thirty age values, and three metallicity values with the Chabrier Initial Mass Function (IMF). The SB99 templates are adopted as ``young star-forming galaxy'' models and are constructed from two SFHs, thirty age values, four metallicity values, and two extreme nebular continuum cases with the Kroupa IMF. In the run of Hyperz, the dust attenuation value, Av, ranges from 0.0 to 3.0 with $\Delta \mathrm{Av} = 0.1$, assuming the \citet{Calz00} attenuation law for the dust extinction curve. The details of the parameters are summarized in table \ref{tab2}. The motivation to use the BC03 and SB99 model templates simultaneously is that the BC03 templates are not sufficient to describe young star-forming galaxies: the time step of the SB99 computation, which is critical for producing the spectra of young galaxies, is much smaller than that of the BC03 computation. Although the ages of the SB99 template set are very young, we consider that our template sets are optimal for fitting the spectra of young star-forming galaxies.
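As a schematic illustration of how a reduced $\chi^{2}$ of the form $\chi^{2}/\mathrm{d.o.f}$ is obtained for a single redshifted template, the template normalization can be minimized analytically. The following Python sketch is generic and is not the internal implementation of the Hyperz code; the array names are hypothetical.
\begin{verbatim}
import numpy as np

def reduced_chi2(f_obs, f_err, f_model):
    """chi^2/d.o.f. of one redshifted template. The amplitude
    minimizing chi^2 is solved analytically; d.o.f. = N_bands - 1
    because redshift is the only free parameter counted here."""
    a = (np.sum(f_obs * f_model / f_err**2)
         / np.sum(f_model**2 / f_err**2))
    chi2 = np.sum(((f_obs - a * f_model) / f_err) ** 2)
    return chi2 / (f_obs.size - 1)
\end{verbatim}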
We compare the photometry and the UV slope $\beta$ values used in this work with those of the published catalog. In the UDS-CANDELS field (the small area enclosed by the black dotted line in figure \ref{fig1}), a multi-wavelength photometric catalog has been published by the CANDELS collaboration \citep{Gala13}, and the catalog contains the total fluxes of Subaru/\textit{BVRi'z'}, UKIRT/\textit{JHK}, HST/F125W and F160W, and Spitzer/3.6$\>\micron$ and 4.5$\>\micron$. Their photometry in the Subaru and UKIRT bands is consistent with ours if we take account of the difference in the measurement method. The UV slope $\beta$ values of our catalog are also comparable to those of the published catalog when applying the same measurement method for the UV slope $\beta$ to both catalogs. However, the difference in the HST and Spitzer bands is slightly larger than expected. For the HST data, we consider that the difference is attributable to the very large PSF correction factor applied to match the Subaru images. For the Spitzer data, the difference is due to the uncertainty in the aperture correction. We remark that our catalog tends to have slightly bluer \textit{K} $-$ [3.6] and \textit{K} $-$ [4.5] colors than the published one.
\subsection{ Example results of SED fitting } \label{S2s3exs}
We show two examples in figures \ref{fig2} and \ref{fig3} in order to demonstrate the validity of our SED fitting analysis. The first figure shows a red LBG observed with Spitzer ($\beta_{\mathrm{obs}} = -1.27$ and $M_{\mathrm{UV,obs}} = -20.38$), and the second figure shows a blue LBG observed without Spitzer ($\beta_{\mathrm{obs}} = -2.39$ and $M_{\mathrm{UV,obs}} = -20.32$). In the top nine panels of each figure, we show the stamps of the imaging data from Subaru/\textit{B} to UKIRT/\textit{K}. The center of each image is the detected position, and the two green straight lines of 1'' length in each image are placed 1'' from the detected position. In the bottom left panel, we show the observed photometry and the best-fit SED. The blue plus points with the error bars represent the observed photometry and its uncertainty, and the red solid line represents the best-fit SED. The black open circle with the arrow represents the upper limit of the photometry at the 2\,$\sigma$ level. In the bottom right three panels, we show the $\chi^{2}$ map of our SED fitting analysis along one-dimensional slices of the parameter space. The vertical axis represents the $\chi^{2}$ value and the horizontal axes represent the photometric redshift, dust attenuation, and age. The horizontal black dashed line in each panel represents the minimum $\chi^{2}$ value. We emphasize that the two examples shown here are among the faintest objects in our sample, and most of the other objects are fitted better than these examples. Our SED fitting procedure performs well thanks to the deep imaging data of UKIRT/\textit{HK}, which cover the wavelength of the Balmer break for LBGs at \textit{z} $\sim$ 4.
\begin{figure*}
\begin{center}
\includegraphics[width=140mm]{im10051_comb.eps}
\end{center}
\caption{ Example object of the red LBGs observed with Spitzer, which has $\beta_{\mathrm{obs}} = -1.27$ and $M_{\mathrm{UV,obs}} = -20.38$. Top: The panels show the 5'' $\times$ 5'' stamps from Subaru/\textit{B} (left) to UKIRT/\textit{K} (right). The center of the images is the position of the detection. The two green straight lines in each stamp have 1'' length and are drawn 1'' from the center. Bottom left: The blue plus points with the error bars show the measured aperture photometry with the 1\,$\sigma$ errors from Subaru/\textit{B} (left) to Spitzer/3.6$\>\micron$ (right). The red solid line shows the best-fit SED model template. Bottom right: We show the $\chi^{2}$ map as a function of the photometric redshift, dust attenuation, and age parameters. The horizontal black dashed line in each panel shows the minimum $\chi^{2}$ value. } \label{fig2}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=140mm]{im10643_comb.eps}
\end{center}
\caption{ Same as figure \ref{fig2}, but the example object is a blue LBG observed without Spitzer, which has $\beta_{\mathrm{obs}} = -2.39$ and $M_{\mathrm{UV,obs}} = -20.32$. } \label{fig3}
\end{figure*}
\section{ Measurement method of UV slope $\beta$ } \label{S3Mmuvb}
According to the original definition of \citet{Calz94}, the UV slope $\beta$ value should be estimated from the Spectral Energy Distribution (SED) from 1250\,\AA\ to 2600\,\AA\ through the 10 fitting windows. However, spectroscopic observation generally requires a long exposure time for measuring the continuum flux of high-z galaxies due to their faint continuum. Furthermore, the number of objects which can be observed at a time is limited by the slit configuration. Therefore, it is impractical to accurately measure the continuum flux from spectroscopic data for all the $\sim$ 1800 targets in our sample. Instead, we apply a simple power-law fit to the broad-band photometry with the following functional form,
\begin{equation}
M(\lambda_{x}) = -2.5 (\beta + 2) \log \lambda_{x} + Const
\label{eq1}
\end{equation}
\noindent where $\lambda_{x}$ is the effective wavelength of the $x$th broad-band filter, $M$($\lambda_{x}$) is the measured magnitude in the $x$th broad-band filter, and {\it Const} is a constant value. This method is suitable since the bias in the $\beta$ estimation is small (\cite{Fink12}; \cite{Rog13}). For the fitting, we conduct least-squares fitting to the \textit{i'}, \textit{z'}, updated-\textit{z'}, and \textit{J} (or F125W) band photometry, which covers the rest-frame wavelength from $\sim$ 1500\,\AA\ to $\sim$ 2500\,\AA\ for objects at \textit{z} $=$ 3.5--4.5. In our analysis, the uncertainty of the $\beta$ value is the uncertainty of the least-squares fit, taking into account the photometric errors of the broad-band filters.
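In practice, the fit of equation (\ref{eq1}) reduces to a weighted linear regression of magnitude against $\log_{10}\lambda_{x}$. The following Python sketch illustrates this step; the array names and the quoted effective wavelengths are indicative only and do not reproduce our pipeline in detail.
\begin{verbatim}
import numpy as np

def fit_beta(lam_eff, mags, mag_errs):
    """Least-squares fit of equation (1),
    m(lam) = -2.5*(beta + 2)*log10(lam) + Const.
    With weights 1/sigma, cov='unscaled' returns the formal
    covariance implied by the photometric errors."""
    x = np.log10(lam_eff)
    coef, cov = np.polyfit(x, mags, 1, w=1.0 / mag_errs,
                           cov='unscaled')
    beta = -coef[0] / 2.5 - 2.0
    beta_err = np.sqrt(cov[0, 0]) / 2.5
    return beta, beta_err

# Approximate effective wavelengths (in AA) of i', z', and J:
lam = np.array([7700.0, 9000.0, 12500.0])
\end{verbatim}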
Although using a larger number of photometric data points results in a more accurate determination of the UV slope $\beta$ value, we need to select the optimal broad-band filters for the fitting so as to avoid strong spectral features. In the rest-frame UV and optical wavelength range, the redshifted Ly$\alpha$ ($\lambda$ $=$ 1216\,\AA) line (or Lyman break) and the Balmer break ($\lambda$ $\sim$ 3600\,\AA) can contaminate the broad-band photometry. Figure \ref{fig4} illustrates the positions of the Lyman and Balmer breaks in the observed frame, and the filter profiles of the optical and NIR broad bands. For the sake of clarity, we show three model spectra with clear Lyman and Balmer breaks at \textit{z} $=$ 3, 4, and 5, and we omit the filter profiles of the Subaru/updated-\textit{z'}, HST/F125W, F160W, Spitzer/3.6$\>\micron$, and 4.5$\>\micron$ bands. This figure indicates that the \textit{R} ($\lambda$ $\sim$ 6000--7000\,\AA) and \textit{H} ($\lambda$ $\sim$ 15000--17000\,\AA) band filters are probably affected by the Lyman or Balmer breaks, and the wavelength coverage from the \textit{i'} to \textit{J} band (black horizontal line in the upper panel of figure \ref{fig4}) is suitable for calculating the UV slope $\beta$ value of \textit{z} $\sim$ 4 LBGs. Consequently, we use the Subaru/\textit{i'}, \textit{z'}, updated-\textit{z'}, UKIRT/\textit{J}, and HST/F125W band filters.
Another critical point is that the robustness of the $\beta$ measurement for faint and blue galaxies is strongly influenced by the depth of the imaging data at longer wavelengths in our analysis. As mentioned above, we select the $i' \leq 26.0$ LBGs, and we use the Subaru/\textit{i'}, \textit{z'}, updated-\textit{z'}, UKIRT/\textit{J}, and HST/F125W band filters for the $\beta$ measurement. Under this condition, for instance, galaxies with $i' = 26.0$ and $\beta < -2.0$ have larger photometric uncertainties in the \textit{z'}, updated-\textit{z'}, \textit{J}, and F125W band filters than galaxies with $i' = 26.0$ and $\beta \geq -2.0$, and hence bluer galaxies have larger uncertainties in $\beta$. Moreover, the sample completeness of extremely blue galaxies such as $\beta \sim -3.0$ is also influenced by the depth of the imaging data, although such extremely blue objects are rare. From equation \ref{eq1}, the case of $\beta = -2.0$ corresponds to $M(\lambda_{x}) = Const$, namely, all the broad-band magnitudes are the same. For an appropriate $\beta$ measurement over a wide $\beta$ range, it is required that the magnitude threshold of the \textit{i'} band is brighter than the $\sim$2--3$\,\sigma$ limiting magnitudes of the \textit{z'}, updated-\textit{z'}, \textit{J}, and F125W bands. The selection criteria described in section \ref{S2s2ss} are applied with this point taken into consideration. We use conservative criteria and consider that our selection does not cause a strong bias in the $\beta$ distribution except for extremely faint and blue objects. In order to quantify and discuss this influence, we estimate the recovery fraction in section \ref{S4s2RF}.
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Pic.speccomp5-multi.eps}
\end{center}
\caption{ Model spectra and important broad-band filters. In the top panel, the green, blue, and red solid lines represent the model spectra at \textit{z} $=$ 3, 4, and 5, respectively, which clearly show the Lyman and Balmer breaks. The length of the black horizontal line denotes the wavelength coverage of the Subaru/\textit{i'}, Subaru/\textit{z'}, and UKIRT/\textit{J} band filters, which are used for calculating the UV slope $\beta$. In the bottom panel, we show the Subaru/\textit{BVRi'z'} and UKIRT/\textit{JHK} band filters from left to right. For the sake of clarity, the filter responses of UKIRT/\textit{JHK} are multiplied by an arbitrary constant. } \label{fig4}
\end{figure}
\section{Results} \label{S4Rslt}
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Pic2-Official.SEDSBetaMuv+1sig-v1cal.v1BobsMobs.eps}
\end{center}
\caption{ Observed UV slope $\beta$ vs. UV absolute magnitude at rest-frame 1500\,\AA\ ($\beta$--$M_{\mathrm{UV}}$ relation). The UV absolute magnitude is calculated by integrating the flux of the best-fit SED model template from rest-frame 1450\,\AA\ to 1550\,\AA. The green filled circles represent the individual objects in the area of the SEDS (Spitzer field) and the blue filled squares represent the individual objects out of the area. The magenta open circles with the error bars represent the mean UV slope $\beta$ value, the standard deviation (left side), and the mean error (right side) of the UV slope $\beta$ for each magnitude bin; these values are summarized in table \ref{tab3}. } \label{fig5}
\end{figure}
\subsection{Relation between $\beta$ and $M_{UV}$} \label{S4S1BM}
Figure \ref{fig5} shows the obtained distribution of the UV slope $\beta$ as a function of the UV absolute magnitude at rest-frame 1500\,\AA\ ($\beta$--$M_{\mathrm{UV}}$ relation). The UV absolute magnitude for each object is calculated from the best-fit SED model template by integrating the flux from rest-frame 1450\,\AA\ to 1550\,\AA. The green filled circles show the objects in the area observed with Spitzer and the blue ones show the objects out of the area. There seems to be no notable systematic difference between the distributions of the samples with and without the Spitzer data, and the lack of information about 3.6$\>\micron$ and 4.5$\>\micron$ does not strongly influence the selection of \textit{z} $\sim$ 4 star-forming galaxies. In fact, we conduct the Kolmogorov--Smirnov test (K--S test) for the whole sample and for some magnitude sub-samples. The null hypothesis of the K--S test is that the samples with (green) and without (blue) the SEDS/Spitzer data are derived from the same distribution. The p-values of all the tests are $\gg 5\,\%$, and the null hypothesis is not rejected at the 5\,$\%$ significance level. Therefore, there is no evidence that the information about 3.6$\>\micron$ and 4.5$\>\micron$ influences the $\beta$--$M_{\mathrm{UV}}$ relation.
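For reference, the test is the standard two-sample K--S comparison. A minimal Python sketch, assuming hypothetical arrays \texttt{beta\_in} and \texttt{beta\_out} holding the $\beta$ values inside and outside the SEDS area, is the following.
\begin{verbatim}
from scipy.stats import ks_2samp

def same_beta_distribution(beta_in, beta_out, alpha=0.05):
    """Two-sample K-S test. Null hypothesis: both samples are
    drawn from the same underlying beta distribution."""
    stat, pval = ks_2samp(beta_in, beta_out)
    return pval > alpha  # True: null not rejected at level alpha
\end{verbatim}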
The magenta open circles with the error bars indicate the mean $\beta$ value and the mean $M_{\mathrm{UV}}$ value for each magnitude bin. The standard deviation of the $\beta$ distribution is indicated by the error bars toward the left side, and the typical uncertainty in the $\beta$ value for the individual objects is shown by the error bars toward the right side. For the mean values, we apply a simple unweighted mean without taking account of the individual uncertainties in $\beta$ of the individual objects. This is because the mean $\beta$ value can be biased toward positive values if we weight the $\beta$ values by the individual uncertainties of the objects, i.e., the uncertainty is not symmetric and becomes smaller toward positive $\beta$ values than the opposite. The mean $\beta$ value, standard deviation, typical uncertainty, and mean $M_{\mathrm{UV}}$ value for each bin are summarized in table \ref{tab3}. In addition, other useful information such as the median values is also summarized.
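The binned quantities in table \ref{tab3} follow from a straightforward computation. A minimal Python sketch of this bookkeeping, with hypothetical input arrays, is the following.
\begin{verbatim}
import numpy as np

def binned_beta_stats(muv, beta, edges):
    """Unweighted mean beta, scatter, and standard deviation
    of the mean in M_UV bins (cf. table 3)."""
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (muv >= lo) & (muv < hi)
        m, b = muv[sel], beta[sel]
        rows.append((m.mean(), b.mean(), b.std(),
                     b.std() / np.sqrt(b.size)))
    return rows

# Bin edges of table 3: -22.0 to -19.5 mag in 0.5 mag steps.
edges = np.arange(-22.0, -19.0, 0.5)
\end{verbatim}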
\begin{table*}
\tbl{Summary of $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ Relation}{%
\begin{tabular}{cccccccr}
\hline
$M_{\mathrm{UV}}$ bin & $M_{\mathrm{UV,mean}}$ & $M_{\mathrm{UV,median}}$ & $\beta_{\mathrm{mean}}$ & $\beta_{\mathrm{median}}$ & $\sigma_{\beta}$ & Mean Err. of $\beta$ & $N_{\mathrm{obj}}$ \\
\hline
$-$22.0 to $-$21.5 & $-$21.71 & $-$21.72 & $-$1.66 & $-$1.69 & 0.25 & 0.11 & 37 \\
$-$21.5 to $-$21.0 & $-$21.18 & $-$21.15 & $-$1.73 & $-$1.71 & 0.35 & 0.18 & 204 \\
$-$21.0 to $-$20.5 & $-$20.73 & $-$20.72 & $-$1.76 & $-$1.75 & 0.41 & 0.28 & 563 \\
$-$20.5 to $-$20.0 & $-$20.26 & $-$20.26 & $-$1.73 & $-$1.74 & 0.47 & 0.41 & 861 \\
$-$20.0 to $-$19.5 & $-$19.89 & $-$19.91 & $-$1.49 & $-$1.52 & 0.52 & 0.45 & 140 \\
\hline
\end{tabular}}\label{tab3}
\end{table*}
The standard deviation is clearly larger than the typical uncertainty in the $\beta$ value except for the two faintest magnitude bins. Therefore, in the magnitude range of $M_{\mathrm{UV}} \lesssim -20.5$, the observed $\beta$ distribution is more scattered than expected from the typical uncertainty, and the observed scatter represents the variation of the stellar population and dust extinction among the sample. In the magnitude range of $M_{\mathrm{UV}} \gtrsim -20.5$, the typical uncertainty in $\beta$ becomes as large as the standard deviation, and in particular the objects with $M_{\mathrm{UV}} \gtrsim -20.0$ are not uniformly distributed. Although we do not add any further criteria to our sample, since our purpose is to investigate the bright LBGs with \textit{i'} $\leq$ 26.0, we check the sample completeness in section \ref{S4s2RF} and discuss the results with some $M_{\mathrm{UV}}$ sub-samples.
The mean $\beta$ value shows little dependence on the mean UV magnitude. In order to quantify the $\beta$--$M_{\mathrm{UV}}$ trend, we conduct least-squares linear fitting to our mean $\beta$ and $M_{\mathrm{UV}}$ values listed in table \ref{tab3}. We use the four data points in $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$, and the slope of the fitted linear relation becomes $-0.02\,\pm\,0.02$, which is nearly zero compared with the values in the previous works at similar redshift, $-0.13\,\pm\,0.02$ \citep{Bouw14} and $-0.10\,\pm\,0.03$ \citep{Kurc14}. We note that our targets are relatively bright LBGs, and the dynamic range of $M_{\mathrm{UV}}$ in our study is smaller than that in the previous works, $-22.0 \leq M_{\mathrm{UV}} \leq -15.5$ \citep{Bouw14} and $-21.0 \leq M_{\mathrm{UV}} \leq -15.0$ \citep{Kurc14}. When calculating the slope of the $\beta$--$M_{\mathrm{UV}}$ relation, we use the \textit{standard deviation of the mean} as the uncertainty of each mean $\beta$ value. This uncertainty is calculated as the standard deviation divided by the square root of the number of galaxies in each bin ($= \sigma_{\beta} / \sqrt{N_{\mathrm{obj}}}$) listed in table \ref{tab3}, and thus the uncertainty of the slope of the $\beta$--$M_{\mathrm{UV}}$ relation is much smaller than the mean error of $\beta$.
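This fit can be reproduced directly from table \ref{tab3}; a minimal sketch with SciPy, using the four bins in $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$ and the standard deviation of the mean as weights, recovers a slope consistent with the quoted $-0.02\,\pm\,0.02$.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Mean values of the four bins with -22.0 <= M_UV <= -20.0 (table 3).
m_uv  = np.array([-21.71, -21.18, -20.73, -20.26])
beta  = np.array([-1.66, -1.73, -1.76, -1.73])
sigma = np.array([0.25, 0.35, 0.41, 0.47])
n_obj = np.array([37, 204, 563, 861])
err   = sigma / np.sqrt(n_obj)      # standard deviation of the mean

def line(x, a, b):
    return a * x + b

popt, pcov = curve_fit(line, m_uv, beta, sigma=err, absolute_sigma=True)
print(f"slope = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f}")
\end{verbatim}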
In figure \ref{fig6}, we show a direct comparison of our result with the results for \textit{z} $\sim$ 4 super-luminous stacked LBGs (\cite{LeeKS11}, red diamonds) and \textit{z} $\sim$ 4 faint LBGs (\cite{Bouw14}, green triangles). The red data points are calculated by us from the photometry described in \citet{LeeKS11}, since not all of the $\beta$ values and their uncertainties are described in their paper. The error bars denote the typical uncertainty in the $\beta$ value. The error bars of \citet{Bouw14} show the sum of the random and systematic errors described in their paper. The blue circles with the error bars show our result, which is the same as the magenta points in figure \ref{fig5} except that the error bars are the typical uncertainty. In the magnitude range of $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$, our results are consistent with the previous works. Although the dynamic range of $M_{\mathrm{UV}}$ is smaller than in the previous works, the luminous star-forming galaxies at \textit{z} $\sim$ 4, which are selected from the ground-based wide-field images, seem to show a weaker $\beta$--$M_{\mathrm{UV}}$ relation over the magnitude range of $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$.
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Pic.liter-comp-new1812.v1.eps}
\end{center}
\caption{ Comparison of this work with the previous studies at similar redshift. For the sake of clarity, the vertical scale is changed from figure \ref{fig5}. The red open diamonds and green open triangles with the error bars show the results from \citet{LeeKS11} and \citet{Bouw14}, respectively. The blue open circles with the error bars show the result from our sample, which is the same as the magenta points in figure \ref{fig5} except that the error bars are the typical uncertainty. } \label{fig6}
\end{figure}
It seems that the distribution of the objects in figure \ref{fig5} is truncated and that the shape of the distribution looks like a ``triangle''. This must result from some physical constraint, from some sample selection bias, or from both. For example, a galaxy at \textit{z} $=$ 4.5 with the \textit{i'}-band magnitude \textit{i'} $= 26.0$ has $M_{UV} \lesssim -20.1$, and therefore the number of objects selected by our criteria decreases in the region around $M_{UV} \sim -20$ and $\beta \sim -2.5$. By using only our three data points in $-22.0 \leq M_{\mathrm{UV}} \leq -20.5$, the slope of the relation indeed becomes $-0.09\,\pm\,0.04$, which is quite similar to the values of the previous works. However, the number of objects in the brightest (leftmost) bin is much smaller than in the other bins (see column 8 in table \ref{tab3}), and the slope is effectively estimated from only two data points. We need to assess the incompleteness of the observed $\beta$ distribution in order to take our selection bias into account, which is discussed in the next section.
\subsection{Recovery Fraction} \label{S4s2RF}
As shown in figure \ref{fig5}, the observed $\beta$ distribution is restricted to the ``triangle'' zone. There seem to be three truncations, namely, (a) at the top left side, (b) at the bottom left side, and (c) at the bottom right side. In order to discuss the reasons for these truncations and evaluate the validity of our results, we calculate the recovery fraction, which is the number ratio of recovered objects to input objects, by using a Monte Carlo method.
First, we make a uniform input distribution in $\beta$--$M_{\mathrm{UV}}$ space for the quantitative discussion. For this purpose we consider $8 \times 13 = 104$ grid cells with $\Delta \beta = 0.5$ and $\Delta M_{UV} = 0.25$, and we generate 300 mock galaxy spectra whose $\beta$ and $M_{UV}$ values fall in each small cell (so the total number of spectra is $104 \times 300 = 31,200$). The mock spectra are constructed from the BC03 or SB99 model templates, which are similar template sets to those described in section \ref{S2s2ss}. All of the parameters such as the SFH, dust attenuation value, age, metallicity, and source redshift are determined at random. We note that the ranges of the dust attenuation value and source redshift for the mock galaxies are different from the ranges described in section \ref{S2s2ss}: they are $0.0 \leq {\rm Av} \leq 1.5$ and $3.5 \leq z_{s} \leq 4.5$. The number of age steps is also changed from 30 to 15. If a resultant spectrum does not fall in the designated cell, we repeat the procedure until the desired $\beta$ and $M_{\mathrm{UV}}$ values are obtained.
Second, we calculate the apparent magnitudes in the broad-band filters for each mock spectrum, and we put the artificial galaxies on the real observed images from Subaru/\textit{B} to UKIRT/\textit{K} by using the IRAF mkobjects task. Since we checked in section \ref{S4S1BM} that the impact of Spitzer/3.6$\>\micron$ and 4.5$\>\micron$ on the $\beta$--$M_{\mathrm{UV}}$ relation is negligible, we omit this information in our simulation. The sizes and shapes of the mock galaxies are also determined at random so that the size distribution of our simulated objects reproduces the observed size distribution.
Finally, these embedded mock galaxies are re-detected, re-measured, and re-compiled in the same manner as described in section \ref{S2s2ss}. In the SED fitting procedure, however, we change the number of age steps from 30 to 15 in order to save computational resources. After the compilation, we count the number of finally recovered objects in each cell and calculate the number ratio of recovered to input objects. The final result includes the impact of the image quality, the magnitude criteria, the LBG selection, and the photo-\textit{z} selection. Note that the prepared objects are constrained only by the rest-frame {\it UV} information, and thus the rest-frame {\it optical} information, such as the Balmer break, is purely determined at random.
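A minimal sketch of the bookkeeping behind the recovery-fraction map is given below; the grid layout ($8 \times 13$ cells with $\Delta \beta = 0.5$ and $\Delta M_{UV} = 0.25$) follows the text, while the absolute grid ranges and the stand-in selection used here are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Illustrative edges: 8 beta bins (width 0.5), 13 M_UV bins (width 0.25).
beta_edges = -3.0 + 0.5 * np.arange(9)
muv_edges = -22.5 + 0.25 * np.arange(14)

def hist2d(beta, muv):
    counts, _, _ = np.histogram2d(beta, muv, bins=[beta_edges, muv_edges])
    return counts

# Stand-ins for the 31,200 input mock galaxies and for the subset that
# survives re-detection, re-measurement, and the full sample selection.
rng = np.random.default_rng(1)
beta_in = rng.uniform(-3.0, 1.0, 31200)
muv_in = rng.uniform(-22.5, -19.25, 31200)
recovered = rng.random(31200) < 0.4

n_in = hist2d(beta_in, muv_in)
n_out = hist2d(beta_in[recovered], muv_in[recovered])
recovery_fraction = np.where(n_in > 0, n_out / np.maximum(n_in, 1), np.nan)
\end{verbatim}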
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Pic-Official.BC5.BetaRF.v3.eps}
\end{center}
\caption{ Recovery fraction, i.e., the number ratio of recovered to input objects in $\beta$--$M_{\mathrm{UV}}$ space. The black grid lines represent the area where we prepare the input objects uniformly throughout the $\beta$--$M_{\mathrm{UV}}$ space; a total of 31,200 mock galaxies are distributed. The colored area represents the region where we find recovered objects. The white area represents the non-detection region where we cannot find any recovered objects. }\label{fig7}
\end{figure}
Figure \ref{fig7} shows the final recovery fraction map as the color-coded area. The vertical axis is the UV slope $\beta$ and the horizontal axis is the absolute magnitude at rest-frame 1500\,\AA. Although the UV absolute magnitude for the input objects is given as a total magnitude, the UV absolute magnitude for the recovered objects is calculated from 2''-diameter aperture photometry. Therefore we convert the total magnitudes of the input objects into 2'' aperture magnitudes by the aperture correction: $M_{\mathrm{UV,aperture}} = M_{\mathrm{UV,total}} + 0.352$. The black lattice lines indicate the cells where $\sim$ 300 mock galaxies (input objects) are distributed, except for the faintest and brightest magnitude bins where $\sim$ 150 mock galaxies are distributed. The white area represents the non-detection region, which means that there are no recovered objects there.
We find that the relative value of the recovery fraction is roughly homogeneous over the area where the observed objects are distributed ($-2.5 \lesssim \beta \lesssim -0.5$ and $-22.0 \lesssim M_{\mathrm{UV}} \lesssim -20.0$) and does not depend on the UV magnitude except for the area around truncation (c). The rough homogeneity of the relative value indicates that the observed $\beta$--$M_{\mathrm{UV}}$ distribution is not strongly biased, at least over the area of $-2.5 \leq \beta \leq -0.5$ and $-22.0 \leq M_{\mathrm{UV}} \leq -20.0$, and that our measurement of the $\beta$--$M_{\mathrm{UV}}$ relation described in section \ref{S4S1BM} is reasonable. In other words, we find no evidence that truncations (a) and (b) are artificial, and they must be caused by some other reasons. On the other hand, this figure also shows that truncation (c) is artificially caused by our sample selection. Our simulation indicates that truncation (c) is attributed to the selection criteria of detection in Subaru/\textit{z'} or Subaru/updated-\textit{z'}, and UKIRT/\textit{J} or HST/F125W.
The figure also indicates that the recovery fraction locally peaks around $\beta \sim -2.0$ and that there are some fluctuating peaks at $\beta \sim -0.25$. Qualitatively, prominent spectral features are easily identified by the SED fitting procedure, and then the recovery fraction may become higher than at other $\beta$ values. Since the Lyman break technique preferentially selects blue galaxies with $\beta \sim -2.0$, our simulation indeed reflects our sample selection rather than the assumption about the input objects. In addition, red galaxies with $\beta \sim -0.25$ have a clear Balmer break if their red color is due to an aged stellar population. In our simulation the rest-frame optical information is determined at random, and hence too many input galaxies with a prominent Balmer break are probably generated around $\beta \sim -0.25$. In conclusion, the inhomogeneity of the recovery fraction seen in figure \ref{fig7} is due to our sample selection criteria adopting the Lyman break technique and photo-\textit{z} estimation.
Truncations (a) and (b) can be interpreted as follows: the observed number of LBGs decreases toward brighter UV magnitudes, and the average $\beta$ value converges to $\beta \sim -1.7$. The decrease of LBGs with UV magnitude is explained by the drop of the UV luminosity function, since the characteristic luminosity of \textit{z} $\sim$ 4 LBGs is $M_{\mathrm{UV}}^{*} = -21.14$ \citep{Yoshi06}. However, it is not clear why the average UV slope $\beta$ converges to $\beta \sim -1.7$. Qualitatively, galaxies with $\beta \gtrsim -1.7$ should contain a large amount of dust, and their UV magnitudes become fainter due to the dust obscuration. Therefore, red and bright galaxies are a rare (or almost impossible) population, which causes truncation (a). On the other hand, galaxies with $\beta \lesssim -1.7$ contain less dust and can remain bright in UV magnitude. Since we cannot find blue and bright galaxies in figure \ref{fig5}, such objects are indeed a rare population in the observational data.
In summary, we conclude that truncations (a) and (b) are not caused solely by our sample selection and are most likely caused by some physical requirements, while truncation (c) is clearly caused by our sample selection. In order to understand what makes the blue and bright galaxies rare and to reveal the reason for truncation (b), we discuss the underlying stellar population of the LBGs in our sample in section \ref{S5Dis}.
\section{Discussion} \label{S5Dis}
\subsection{Relation between Intrinsic $\beta$ and Intrinsic $M_{\mathrm{UV}}$} \label{S5s1IBM}
We here estimate the dust-corrected $\beta$ (hereafter the {\it intrinsic} UV slope, $\beta_{\mathrm{int}}$) and the dust-corrected $M_{\mathrm{UV}}$ (hereafter the {\it intrinsic} UV absolute magnitude, $M_{\mathrm{UV,int}}$). In section \ref{S4s2RF}, our simulation indicates that the observed distribution in $\beta$--$M_{\mathrm{UV}}$ space is caused by some physical reasons. Both the observed $\beta$ and $M_{\mathrm{UV}}$ values strongly depend on the dust attenuation value, and hence it is helpful to investigate the $\beta$--$M_{\mathrm{UV}}$ distribution before dust reddening. In our discussion, we assume that reasonable best-fit physical quantities are estimated from the SED fitting analysis, in which the observed photometry covers the wavelength range between rest-frame $\sim$ 900\,\AA\ and $\sim$ 4400\,\AA\ (or $\sim$ 9000\,\AA\ in part) for \textit{z} $\sim$ 4 objects.
\begin{figure*}
\begin{center}
\includegraphics[width=140mm]{Pic.ConvBetaMuv-new1812.v1.eps}
\end{center}
\caption{ Comparison of the distributions of (a) the observed UV slope $\beta$ vs. the observed absolute magnitude at rest-frame 1500\,\AA\ (left, same as figure \ref{fig5}) and (b) the \textit{intrinsic} UV slope $\beta$ vs. the \textit{intrinsic} absolute magnitude (right). In both panels, the best-fit dust attenuation values for the individual objects are expressed by the blue, green, and red color-coding, which indicate Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. }\label{fig8}
\end{figure*}
We calculate the intrinsic UV slope by equation \ref{eq1} from the intrinsic magnitudes in the Subaru/\textit{i'}, Subaru/updated-\textit{z'}, and UKIRT/\textit{J} band filters. For estimating the intrinsic magnitudes, we convolve the intrinsic SED, which is reproduced with the best-fit physical quantities without any dust extinction, with the three broad-band filters. We note that the intrinsic UV slope depends on the prepared model templates in the SED fitting (i.e., SFH, age, and metallicity) and takes discrete values in our discussion.
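A minimal sketch of this measurement, assuming the power-law form $f_{\lambda} \propto \lambda^{\beta}$ behind equation \ref{eq1}, is given below; the effective wavelengths are approximate placeholder values, and the same routine applies to both the intrinsic (dust-free) and the observed magnitudes.
\begin{verbatim}
import numpy as np

# Approximate effective wavelengths of i', updated-z', and J [Angstrom].
lam_eff = np.array([7700.0, 9100.0, 12500.0])

def beta_from_ab_mags(mags):
    """Slope of log f_lambda vs. log lambda for three AB magnitudes."""
    f_nu = 10.0 ** (-0.4 * (mags + 48.6))      # AB mag -> f_nu [cgs]
    f_lam = f_nu * 2.998e18 / lam_eff ** 2     # f_lambda = f_nu c / lambda^2
    slope, _ = np.polyfit(np.log10(lam_eff), np.log10(f_lam), 1)
    return slope

# Feed magnitudes synthesized from the dust-free best-fit template to
# obtain beta_int, or the observed aperture magnitudes for beta_obs.
print(beta_from_ab_mags(np.array([25.0, 25.1, 25.2])))
\end{verbatim}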
Figure \ref{fig8} shows the conversion from the observed to the intrinsic values of $\beta$ and $M_{\mathrm{UV}}$. The left panel (a) of figure \ref{fig8} shows the \textit{observed} $\beta$ as a function of the \textit{observed} $M_{\mathrm{UV}}$ ($\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation, same as figure \ref{fig5}). The right panel (b) of figure \ref{fig8} shows the \textit{intrinsic} $\beta$ as a function of the \textit{intrinsic} $M_{\mathrm{UV}}$ ($\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation). The blue filled circles, green filled triangles, and red filled squares represent individual objects with best-fit dust attenuation values of Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. In panel (a), we confirm that the objects with higher dust attenuation values are distributed in the upper area where the $\beta_{\mathrm{obs}}$ value becomes redder. This trend is natural and is not inconsistent with the previous studies reporting the relation between $\beta_{\mathrm{obs}}$ and the dust attenuation value (the IRX--$\beta$ relation: e.g., \cite{Calz94}; \cite{Meur99}; \cite{Take12}). In panel (b), due to the large dust correction, the objects with higher dust attenuation values and redder $\beta_{\mathrm{obs}}$ values tend to be distributed in the bottom left area where the $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$ values become bluer and brighter. Moreover, the trend of the distribution is different from that in panel (a); namely, the slope of the $\beta$--$M_{\mathrm{UV}}$ relation is nearly constant or positive. We discuss this distribution for different sub-samples in the following.
Both panels of figure \ref{fig910} show the same $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ distribution, but the color-coding represents different sub-samples classified according to the SEDS/Spitzer coverage (left) and $M_{\mathrm{UV,obs}}$ (right). In the left panel, the red filled squares and blue filled circles indicate the objects with and without SEDS/Spitzer data, respectively. In the right panel, the blue filled circles, green filled triangles, and red filled squares indicate the objects with $M_{\mathrm{UV,obs}} > -20.5$, $-20.5 \geq M_{\mathrm{UV,obs}} > -21.0$, and $M_{\mathrm{UV,obs}} < -21.0$, respectively. The large open circle, triangle, and square with the error bars in each panel represent the median value and the median uncertainty for each sub-sample. We calculate the dust attenuation value at $\chi^{2}_{min} + 1$ for the individual objects as the uncertainty of the dust attenuation, and then the uncertainty in Av is converted into the uncertainties in $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$. Therefore, the error bars in figure \ref{fig910} denote the uncertainty in Av. We also show the histograms of $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$ for each sub-sample.
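The $\chi^{2}_{min} + 1$ interval can be sketched as follows; the Av grid and the $\chi^{2}$ profile below are toy placeholders for the profile obtained by minimizing $\chi^{2}$ over the remaining parameters at each Av step.
\begin{verbatim}
import numpy as np

# Toy chi^2(Av) profile; in practice this comes from the SED fitting.
av_grid = np.arange(0.0, 3.01, 0.1)
chi2 = 12.0 + ((av_grid - 1.2) / 0.35) ** 2

inside = av_grid[chi2 <= chi2.min() + 1.0]   # Delta chi^2 = 1 region
av_best = av_grid[np.argmin(chi2)]
print(f"Av = {av_best:.2f} "
      f"(-{av_best - inside.min():.2f}/+{inside.max() - av_best:.2f})")
\end{verbatim}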
\begin{figure*}
\begin{center}
\includegraphics[width=70mm]{Pic.BintMint-new1812.v3SEDS.eps}
\includegraphics[width=70mm]{Pic.BintMint-new1812.v4Mobs.eps}
\end{center}
\caption{ The $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ distribution of sub-samples classified according to with and without SEDS/Spitzer (left) and $M_{\mathrm{UV,obs}}$ (right). The open circle, triangle, and square with the error bars in each panel represent the median value and the median uncertainty for each sub-sample. The uncertainty in $M_{\mathrm{UV,int}}$ and $\beta_{\mathrm{int}}$ is estimated from the uncertainty in Av. }\label{fig910}
\end{figure*}
From figure \ref{fig910}, there seems to be no systematic difference in the $\beta_{\mathrm{int}}$ distribution. From the left panel, we find that the Spitzer data do not influence the estimation of the $\beta_{\mathrm{int}}$ value, although the uncertainties in $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$ for the sub-sample with Spitzer tend to be smaller than those for the sub-sample without Spitzer. From the right panel, we find that the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ distributions of the $M_{\mathrm{UV,obs}}$ sub-samples are almost parallel to each other. When calculating the slope of the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation for each sub-sample in the same manner as described in section \ref{S4S1BM}, we obtain values of $0.12 \pm 0.01$ ($M_{\mathrm{UV,obs}} > -20.5$), $0.14 \pm 0.01$ ($-20.5 \geq M_{\mathrm{UV,obs}} > -21.0$), and $0.16 \pm 0.02$ ($M_{\mathrm{UV,obs}} < -21.0$). This means that the variation of the $M_{\mathrm{UV,obs}}$ value does not significantly affect the shape of the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ distribution. Although faint objects such as those with $M_{\mathrm{UV,obs}} > -20.5$ have $\beta_{\mathrm{obs}}$ values with large uncertainties, the $\beta_{\mathrm{int}}$ values are reasonably well evaluated after the SED fitting using all the photometric data points.
\begin{figure*}
\begin{center}
\includegraphics[width=140mm]{Pic.BintMint-new1812.v1Av-Bobs.eps}
\end{center}
\caption{ The $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ distribution of sub-samples classified according to Av and $\beta_{\mathrm{obs}}$. The open circle, triangle, and square with the error bars in each panel represent the median value and the median uncertainty for each sub-sample. The uncertainty in $M_{\mathrm{UV,int}}$ and $\beta_{\mathrm{int}}$ is estimated from the uncertainty in Av. }\label{fig11}
\end{figure*}
Figure \ref{fig11} shows the same $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ distribution as figure \ref{fig910}, but the color-coding represents the sub-samples classified according to Av and $\beta_{\mathrm{obs}}$. In the left panel, the blue filled circles, green filled triangles, and red filled squares indicate the objects with Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. In the right panel, the blue filled circles and red filled squares indicate the objects with $\beta_{\mathrm{obs}} \leq -1.73$ and $\beta_{\mathrm{obs}} > -1.73$, respectively; $\beta_{\mathrm{obs}} = -1.73$ is the median $\beta_{\mathrm{obs}}$ value of our whole sample. The open circle, triangle, and square with the error bars in each panel represent the median value and the median uncertainty for each sub-sample. As mentioned above, the uncertainties in $\beta_{\mathrm{int}}$ and $M_{\mathrm{UV,int}}$ are estimated from the uncertainty in Av.
Figure \ref{fig11}, interestingly, shows that the objects which are dusty and redder in the observed $\beta$ tend to be bluer in the intrinsic $\beta$ and brighter in the intrinsic $M_{\mathrm{UV}}$. In addition, the intrinsic $\beta$ value slightly increases with the intrinsic $M_{\mathrm{UV}}$ value, and this trend is opposite to that of the $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation. Our result can be interpreted as follows. The galaxies with more intense ongoing star formation, whose {\it intrinsic} $\beta$ and $M_{\mathrm{UV}}$ values are bluer and brighter, generate and/or contain a large amount of dust, and their {\it observed} $\beta$ and $M_{\mathrm{UV}}$ values end up redder and fainter due to the dust attenuation. Then, the nearly constant $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ distribution observed in our analysis is formed by the galaxies with blue $\beta_{\mathrm{int}}$ and bright $M_{\mathrm{UV,int}}$ values, because they are distributed in the region of red $\beta_{\mathrm{obs}}$ and faint $M_{\mathrm{UV,obs}}$ values.
According to our SED fitting analysis, a young-age stellar population is responsible for the bluest $\beta_{\mathrm{int}}$ values. In other words, there are some young-age galaxies with the bluest $\beta_{\mathrm{int}}$ values in the brightest $M_{\mathrm{UV,int}}$ range, but there are no intermediate-age or old-age galaxies with the bluest $\beta_{\mathrm{int}}$ values in that range. This is not surprising because the intrinsic UV luminosity is expected to be sensitive to the age of the stellar population. It is hard to sustain a very high star formation rate over intermediate and long durations due to rapid gas depletion. The UV luminosity is dominated by the stars at the ``turn-off point'' on the Hertzsprung--Russell diagram, which is an age indicator of the young stellar population. We emphasize, however, that other parameters such as the metallicity and/or the IMF can also explain the bluest $\beta_{\mathrm{int}}$ values. Indeed, some studies argue that dusty star-forming galaxies have a ``top-heavy'' IMF, although the discussion still continues (\cite{Bau05}, \cite{Tacc08}, \cite{Bas10}). Under a top-heavy IMF, hot and massive stars are formed more abundantly, and bluer $\beta_{\mathrm{int}}$ values are easily produced. Otherwise, among the galaxies with the bluest $\beta_{\mathrm{int}}$ values, there may be post-primordial starbursts dominated by extremely metal-poor (or PopI\hspace{-.1em}I\hspace{-.1em}I) stars.
In order to investigate the star formation activity, we plot the star formation rate (SFR) of the individual objects as a function of their stellar mass in figure \ref{fig12}. For the estimation of the SFR, we convolve the best-fit template with the GALEX/FUV filter response curve and use the calibration for the FUV luminosity (\cite{Hao11}; \cite{Kenn12}). For estimating the stellar mass, we multiply the output of the BC03 model template by the best-fit normalization factor. Figure \ref{fig12} shows the dust-corrected SFR as a function of the stellar mass; the blue circles, green triangles, and red squares represent the individual objects with dust attenuation values of Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. The large open circle, triangle, and square with the error bars show the median value and median uncertainty of each sub-sample. The uncertainty in the SFR is estimated from the uncertainty in Av, and the uncertainty in the stellar mass is estimated from the photometric uncertainty of the \textit{K} band, since the estimation of the stellar mass is almost entirely determined by the \textit{K}-band photometry. We also plot the previous results for the SFR--$\mathrm{M_{*}}$ relation, called the main sequence of star-forming galaxies, at \textit{z} $\sim$ 4 (\cite{Spe14}, \cite{Ste14}, \cite{Cap17}).
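A minimal sketch of the FUV-based SFR step is shown below; it assumes the \citet{Kenn12} calibration $\log \mathrm{SFR} = \log(\nu L_{\nu})_{\mathrm{FUV}} - 43.35$ (with $L$ in erg\,s$^{-1}$), and the input luminosity is a hypothetical dust-corrected value synthesized through the GALEX/FUV filter.
\begin{verbatim}
import numpy as np

def sfr_fuv(nu_l_nu):
    """SFR [Msun/yr] from the dust-corrected FUV luminosity [erg/s],
    assuming log SFR = log(nu L_nu) - 43.35 (Kennicutt & Evans 2012)."""
    return 10.0 ** (np.log10(nu_l_nu) - 43.35)

print(f"SFR = {sfr_fuv(3.0e45):.0f} Msun/yr")  # hypothetical luminosity
\end{verbatim}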
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{Pic.SFRMs-new1812.v2AvSingle.eps}
\end{center}
\caption{ Dust-corrected SFR vs. stellar mass estimated from the BC03 model templates. The SFR is estimated from the luminosity in the \textit{GALEX}/FUV filter by using the \citet{Hao11} calibration. The stellar mass is calculated by multiplying the output of the BC03 model by the normalization factor. The color-coding represents the best-fit dust attenuation value. We also draw the SFR--$\mathrm{M_{*}}$ relation at \textit{z} $\sim$ 4 reported by the previous works (\cite{Spe14}, \cite{Ste14}, \cite{Cap17}). } \label{fig12}
\end{figure}
The figure shows that the most intense star-forming galaxies in our sample have SFR $\gtrsim$ a few $\times\ 10^{2}\,\mathrm{M_{\solar}}\>\mathrm{yr^{-1}}$, and they are the objects with Av $\geq$ 1.0. Since most of the objects with Av $\geq$ 1.0 have $\beta_{\mathrm{obs}} > -1.73$ and $\beta_{\mathrm{int}} \leq -2.42$ ($\beta_{\mathrm{int}} = -2.42$ is the median $\beta_{\mathrm{int}}$ value of our whole sample) from figure \ref{fig11}, our analysis indicates that the highly dust-attenuated and intensely star-forming galaxies at \textit{z} $\sim$ 4 tend to have $\beta_{\mathrm{obs}} > -1.73$ and $\beta_{\mathrm{int}} \leq -2.42$. When comparing our result with the previous works, the median values of our Av $<$ 0.5 and 0.5 $\leq$ Av $<$ 1.0 sub-samples are consistent with the relation from the previous works, although the distribution of our sample is significantly scattered. The sub-sample with Av $\geq$ 1.0 tends to be distributed above the relation from the previous works, and the deviation of the median value from the relation is larger than the uncertainty in the SFR. Because the galaxies distributed above the star formation main sequence are classified as being in the starburst phase (e.g., \cite{Cap17}; \cite{Bisi18}), we consider that the objects with Av $\geq$ 1.0 are indeed in the starburst phase and that our result is not inconsistent with the previous works. In conclusion, we find some highly dust-attenuated (Av $\geq$ 1.0) and intensely star-forming (SFR $\gtrsim$ a few $\times\ 10^{2}\,\mathrm{M_{\solar}}\>\mathrm{yr^{-1}}$) galaxies at \textit{z} $\sim$ 4 which have $\beta_{\mathrm{obs}} > -1.73$ and $\beta_{\mathrm{int}} \leq -2.42$.
Finally, we consider a simple case in which the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ trend continues monotonically into the fainter magnitude range. According to \citet{Bouw14}, the $\beta_{\mathrm{obs}}$ value becomes bluer as the $M_{\mathrm{UV,obs}}$ value becomes fainter, but the slope of the $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation becomes flatter at $M_{\mathrm{UV,obs}} \gtrsim -19.0$. In order to establish both the observed and intrinsic $\beta$--$M_{\mathrm{UV}}$ relations without contradiction, the $\beta_{\mathrm{int}}$ value is expected to become redder and converge to a certain $\beta$ value toward the fainter magnitude range. When we extrapolate the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation faintward below our sample magnitude limit, we find the intersection point of the observed and intrinsic $\beta$--$M_{\mathrm{UV}}$ relations. Since the dust attenuation value becomes smaller toward the fainter magnitude range along the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation, the intersection (or convergence) point represents the position of the appearance of the nearly dust-free population. Our $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation gives $\beta_{\mathrm{int}} = 0.61 + 0.14 M_{\mathrm{UV,int}}$, obtained in the same manner as described in section \ref{S4S1BM}, and the $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation from \citet{Bouw14} gives $\beta_{\mathrm{obs}} = -4.39 - 0.13 M_{\mathrm{UV,obs}}$ for $M_{\mathrm{UV,obs}} \leq -18.8$. As a result, both relations intersect at $M_{\mathrm{UV}} = -18.9$ and $\beta = -1.94$, and this point corresponds to the break point of the $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation at $M_{\mathrm{UV}} = -18.8$ and $\beta = -1.95$ reported by \citet{Bouw14}. Therefore, the transition of the $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation around $M_{\mathrm{UV}} \sim -18.8$ indicates that we really see the almost dust-free population at $M_{\mathrm{UV}} > -18.8$, and the apparently bluest star-forming galaxies at \textit{z} $\sim$ 4 are distributed around $\beta \sim -2.0$.
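For reference, the intersection follows from equating the two linear relations,
\[
0.61 + 0.14\,M_{\mathrm{UV}} = -4.39 - 0.13\,M_{\mathrm{UV}}
\;\;\Longrightarrow\;\;
M_{\mathrm{UV}} = -\frac{0.61 + 4.39}{0.14 + 0.13} \simeq -18.5,
\]
when the rounded coefficients quoted above are used; because $|M_{\mathrm{UV}}| \approx 19$ amplifies the rounding of the slopes, the unrounded fit coefficients presumably yield the values $M_{\mathrm{UV}} = -18.9$ and $\beta = -1.94$ adopted here.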
\subsection{ Case of fixed star formation history and SMC attenuation law } \label{S5s2fhsmc}
In this section and the following sections (sections \ref{S5s3zjkd} and \ref{S5s4iasfg}), we verify our results by using different and somewhat independent methods. We emphasize that these verifications are intended not only to check the results of our SED fitting analysis but also to strengthen our suggestion, i.e., that we find dusty galaxies with ongoing active star formation at \textit{z} $\sim$ 4.
First of all, we repeat the SED fitting analysis (1) by fixing the SFH parameter and (2) by using the SMC attenuation law for the dust extinction curve from \citet{Pre84} and \citet{Bouch85}. Figures \ref{fig13} and \ref{fig14} show the results of cases (1) and (2), respectively. The figures show the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation, and the fixed SFH parameter or dust extinction curve used in the SED fitting is labeled at the top of each panel. In figure \ref{fig13}, the first and second rows show the results of the BC03 model templates and the third row shows the results of the SB99 model templates. In all the panels except for the case of the SMC attenuation law, the blue, green, and red points represent the individual objects with best-fit dust attenuation values of Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. In the case of the SMC attenuation law, the blue, green, and red points represent the objects with Av $<$ 0.3, 0.3 $\leq$ Av $<$ 0.6, and Av $\geq$ 0.6, respectively. In figure \ref{fig14}, the large diamonds with the error bars represent the median value and the median uncertainty for each sub-sample. Although the error bars are quite large for the objects with Av $\geq$ 0.6 in the right panel, this is caused by the small number of objects in the sub-sample.
\begin{figure*}
\begin{center}
\includegraphics[width=140mm]{Pic-Official.zpBetaMuv_SFH.v2Av.eps}
\end{center}
\caption{ Intrinsic UV slope $\beta$ distribution for the fixed SFH models. The top three panels and the middle two panels show the results of the BC03 model templates, and the bottom left two panels show the results of the SB99 model templates. The SFHs used for the SED fitting analysis are labeled in each panel: (a) instantaneous burst, (b) continuous constant, (c) exponentially declining with $\tau = 0.1\>$Gyr, (d) exponentially declining with $\tau = 2\>$Gyr, (e) exponentially declining with $\tau = 5\>$Gyr, (f) instantaneous burst, and (g) continuous constant, from top to bottom. In all of the panels, the best-fit dust attenuation values for the individual objects are expressed by the blue, green, and red color-coding, which indicate Av $<$ 0.5, 0.5 $\leq$ Av $<$ 1.0, and Av $\geq$ 1.0, respectively. } \label{fig13}
\end{figure*}
From figure \ref{fig13}, we find that the global trend of the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation does not change significantly among the SFH parameters, which supports our interpretation described in section \ref{S5s1IBM}. We note that $\beta_{\mathrm{int}}$ takes discrete values and forms discrete sequences, especially in panel (c). This is attributed to the age steps of the prepared model templates in the SED fitting, and a larger number of age steps would dilute the discrete sequences. However, this is not critical when taking the moderate uncertainty in the photometry into account.
In brief, the effect of dust attenuation significantly distorts the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation, which is probably positive, and the $\beta$--$M_{\mathrm{UV}}$ relation then results in the negative $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation reported by the previous works. In $-22 \lesssim M_{\mathrm{UV,obs}} \lesssim -20$, however, the $\beta_{\mathrm{obs}}$ value seems to be constant with respect to the $M_{\mathrm{UV,obs}}$ value (a flat $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation) due to the existence of the dusty, actively star-forming population.
\begin{figure*}
\begin{center}
\includegraphics[width=130mm]{Pic-Official.AvBintMint_Comp_Cal-SMC.unique1.eps}
\end{center}
\caption{ Comparison of the $\beta_{int}$--$M_{UV,int}$ relation obtained from the \authorcite{Calz00} attenuation law (left) and the SMC attenuation law (right). The left panel is the same as the left panel of figure \ref{fig11}. The right panel shows the case of the SMC attenuation law for the dust extinction curve in the SED fitting analysis, and the blue, green, and red color-coding represent the objects with Av $<$ 0.3, 0.3 $\leq$ Av $<$ 0.6, and Av $\geq$ 0.6, respectively. The large diamonds with the error bars represent the median value and the median uncertainty. Although the error bars are quite large for the objects with Av $\geq$ 0.6 in the right panel, this is caused by the small number of objects in the sub-sample. } \label{fig14}
\end{figure*}
Figure \ref{fig14} shows that the best-fit Av value from the SMC attenuation law becomes much smaller than that from the \authorcite{Calz00} attenuation law, because the slope of the SMC dust extinction curve is much steeper than that of the \authorcite{Calz00} curve. Consequently, we cannot identify the intrinsically active star-forming galaxies which show high dust attenuation (Av $>$ 1.0), blue $\beta_{\mathrm{int}}$ values ($\beta_{\mathrm{int}} < -2.42$), and red $\beta_{\mathrm{obs}}$ values ($\beta_{\mathrm{obs}} > -1.7$), although we again find that the intrinsic $\beta$ value slightly increases with the intrinsic $M_{\mathrm{UV}}$ value. Actually, recent Atacama Large Millimeter/submillimeter Array (ALMA) observations report that the SMC dust attenuation law is appropriate for normal star-forming galaxies at high redshift (e.g., \cite{Cpk15}; \cite{Bouw16}). On the other hand, as discussed in section \ref{S5s4iasfg}, a \authorcite{Calz00}-like attenuation law is partly required to reproduce the results of the Submillimeter Common User Bolometer Array 2 (SCUBA2) from \citet{Copp15} and \citet{Kopr18}.
\subsection{ \textit{zJK}-diagram } \label{S5s3zjkd}
In this section, we compare the observed colors of the \textit{z'JK} band photometry with the predicted colors estimated from a model simulation. Since our sample tends to have larger photometric errors in the broad-band filters at longer wavelengths owing to the depth of the imaging data, the weight of these filters becomes smaller than that of the filters at shorter wavelengths in the SED fitting analysis. It is possible that the photometry of the \textit{z'JHK} band filters does not place a strong constraint on the best-fit SED. We therefore focus on the observed colors of the \textit{z'JK} band photometry and compare the observed values with the predicted values more directly in color--color space.
For the model simulation, we calculate the colors of two SFH model templates under certain conditions: one is the BC03 instantaneous burst model (hereafter IB), and the other is the BC03 continuous constant star formation model (hereafter CSF). We consider the IB and CSF SFH models to be the two most extreme cases of star formation activity, and the models are helpful for interpreting the observed results. For the sake of simplicity, we fix the metallicity at $Z = 0.2 Z_{\solar}$ and the redshift at \textit{z} $=$ 3.5 and 4.5. In order to clarify the variation of the colors depending on dust and age, we calculate the colors of the IB and CSF model templates with (a) a fixed age but variable dust attenuation ranging from Av $=$ 0.0 to 3.0, and (b) a variable age ranging from 10$\>\mathrm{Myr}$ to 15.0$\>\mathrm{Gyr}$ but a fixed dust attenuation.
\begin{figure*}
\begin{center}
\includegraphics[width=140mm]{Pic-Official.color_zJHK.v3obj-mdl.eps}
\end{center}
\caption{ $z - J$ vs. $J - K$ color diagram. The blue filled triangles and the red filled circles denote the objects with $\beta_{\mathrm{obs}} \leq -1.73$ and $\beta_{\mathrm{obs}} > -1.73$ in our sample, respectively. The blue and red large circles with the error bars denote the median value and median uncertainty in the observed colors. (a): the green lines represent the CSF model template with age $=$ 10$\>\mathrm{Myr}$, and the purple lines represent the IB model template with age $=$ 100$\>\mathrm{Myr}$. The solid and dashed lines represent the case of \textit{z} $=$ 3.5 and 4.5, respectively, and the space between the lines is filled with the shaded area. The solid circles on each line indicate the dust attenuation value of Av $=$ 0.0, 1.0, 2.0, and 3.0 from bottom left to top right, and the given value is labeled beside the circles. (b): Same as panel (a), but the green lines represent the CSF model template with Av $=$ 1.0, and the purple lines represent the IB model template with Av $=$ 0.0. The solid circles on each line indicate the age value of 10$\>\mathrm{Myr}$, 100$\>\mathrm{Myr}$, 500$\>\mathrm{Myr}$, and 1$\>\mathrm{Gyr}$ from bottom left to top right, and the given value is labeled beside the circles. We note that we omit the label of 100$\>\mathrm{Myr}$ for the green line. } \label{fig15}
\end{figure*}
Figure \ref{fig15} shows the $z - J$ vs. $J - K$ color--color diagram. The vertical axis is the $z - J$ color and the horizontal axis is the $J - K$ color. The $J - K$ color can trace the Balmer break of galaxies at \textit{z} $\sim$ 4, and the $z - J$ color represents the observed UV slope $\beta$. The blue filled triangles and the red filled circles denote the objects with $\beta_{\mathrm{obs}} \leq -1.73$ and $\beta_{\mathrm{obs}} > -1.73$ in our sample, respectively. In this figure, we only use the objects satisfying all of the following conditions, $z' > 3\,\sigma$, $J > 3\,\sigma$, and $K > 3\,\sigma$, in order to calculate reliable colors. The blue and red large circles with the error bars denote the median value and median uncertainty of the observed colors. In the left panel (a), the green lines represent the CSF model template with age $=$ 10$\>\mathrm{Myr}$, and the purple lines represent the IB model template with age $=$ 100$\>\mathrm{Myr}$. The solid and dashed lines represent the cases of \textit{z} $=$ 3.5 and 4.5, respectively, and the space between the lines is filled with the shaded area. The solid circles on each line indicate dust attenuation values of Av $=$ 0.0, 1.0, 2.0, and 3.0 from bottom left to top right, and the given value is labeled beside the circles. In the right panel (b), the green lines represent the CSF model template with Av $=$ 1.0, and the purple lines represent the IB model template with Av $=$ 0.0. The solid and dashed lines again represent the cases of \textit{z} $=$ 3.5 and 4.5, respectively, with the space between the lines filled with the shaded area. The solid circles on each line indicate age values of 10$\>\mathrm{Myr}$, 100$\>\mathrm{Myr}$, 500$\>\mathrm{Myr}$, and 1$\>\mathrm{Gyr}$ from bottom left to top right, and the given value is labeled beside the circles. We note that we omit the label of 100$\>\mathrm{Myr}$ for the green line in panel (b), since the corresponding point is placed under the median value and cannot be seen.
The figure indicates that the observed distribution of the $\beta_{\mathrm{obs}} > -1.73$ sub-sample tends to be reproduced by a star-forming, dusty, and very young (i.e., bluer $\beta_{\mathrm{int}}$) population. Although we only show extreme and somewhat arbitrary cases in the figure, we can deduce other possibilities from these examples, such as a star-forming, less dusty, and middle-age population. However, even when we take the other possibilities into consideration, the above interpretation does not change, because the directions of the increase in age and dust are different. We consider that the observed $J - K$ color of the $\beta_{\mathrm{obs}} > -1.73$ sub-sample is not sufficiently red, and thus middle-age and old-age populations are not preferred in the SED fitting analysis. The observed distribution of the $\beta_{\mathrm{obs}} \leq -1.73$ sub-sample tends to be reproduced by a less star-forming, less dusty, and young-age population, although it can also be reproduced by a star-forming, less dusty, and middle-age population. We note that there are some outliers in our sample, but most of them have a lower signal-to-noise ratio ($S/N \sim 3$--$5$) in the \textit{J} and/or \textit{K} band than the other objects. In summary, the interpretation from the \textit{z'JK} color--color diagram is consistent with that from our SED fitting analysis, and therefore some of the star-forming galaxies at \textit{z} $\sim$ 4 in our sample are indeed classified as a dusty star-forming population.
\subsection{ Expected features of most active star-forming galaxies at \textit{z} $\sim$ 4 } \label{S5s4iasfg}
At the end of this paper, we present two estimates of the IR features of the active star-forming galaxies at \textit{z} $\sim$ 4: one is the luminosity ratio of IR to UV, the so-called IRX, and the other is the flux density at observed-frame 850$\>\micron$, $S_{850}$. Our sample does not have rest-frame IR information for the individual objects, and therefore we use approximate conversions. For estimating the IRX value, we apply the empirical conversion between ${\rm IRX_{TIR-FUV}}$ and ${\rm A_{FUV}}$ for low-\textit{z} galaxies reported by \citet{Burg05}: ${\rm A_{FUV}} = -0.028[{\rm log_{10}}L_{\rm TIR}/L_{\rm FUV}]^3 + 0.392[{\rm log_{10}}L_{\rm TIR}/L_{\rm FUV}]^2 + 1.094[{\rm log_{10}}L_{\rm TIR}/L_{\rm FUV}] + 0.546$. For estimating the $S_{\mathrm{850}}$ value, we first calculate the total (bolometric) IR luminosity by utilizing the non-dust-corrected FUV luminosity and the IRX value, and then we convert the total IR luminosity into the flux density at observed-frame 850$\>\micron$. In the conversion, we use a modified blackbody $+$ power-law formula for the dust thermal emission, and the total IR luminosity is estimated by integrating the modeled spectrum from 8$\>\micron$ to 1000$\>\micron$ in the rest frame. The formula is,
\begin{equation}
S (\nu, T_{d}) \propto \left\{
\begin{array}{ll}
\frac{\nu^{\beta_{\mathrm{dust}}}\nu^{3}}{e^{h\nu/kT_{d}}-1} & (\nu \leq \nu_{c}); \\
\nu^{-\alpha} & (\nu > \nu_{c}),
\end{array}
\label{eq2}
\right.
\end{equation}
where $S(\nu, T_{d})$ is the flux density at $\nu$ for a dust temperature $T_{d}$ in units of Jy and $\beta_{\mathrm{dust}}$ is the dust emissivity index. The connecting frequency, $\nu_{c}$, is calculated from,
\begin{equation}
\Biggl. \frac{d\ln S}{d\ln \nu} \Biggr|_{\nu=\nu_{c}} = -\alpha.
\label{eq3}
\end{equation}
For the sake of simplicity, we fix all the above parameters and the source redshift following \citet{Copp15}: the dust temperature $T_{d} = 38\>$K, the dust emissivity index $\beta_{\mathrm{dust}} = 1.5$, the power-law index $\alpha = 1.7$, and the source redshift $z = 3.87$. We emphasize that cautious treatment is required for the comparison between our result and the previous results presented in this paper.
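A minimal end-to-end sketch of this conversion is given below: it inverts the \citet{Burg05} polynomial for the IRX, builds the spectral shape of equations (\ref{eq2})--(\ref{eq3}) with the fixed parameters above, and predicts $S_{850}$; the input $L_{\mathrm{FUV}}$ and $\mathrm{A_{FUV}}$ values and the flat $\Lambda$CDM cosmology ($H_{0} = 70$, $\Omega_{m} = 0.3$) are hypothetical assumptions.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from astropy.cosmology import FlatLambdaCDM

H_OVER_K = 6.626e-27 / 1.381e-16     # h/k [K s]
C_CGS = 2.998e10                     # speed of light [cm/s]
T_D, BETA_D, ALPHA, Z = 38.0, 1.5, 1.7, 3.87

def irx_from_afuv(a_fuv):
    """Invert A_FUV(y), y = log10(L_TIR/L_FUV) (Burgarella et al. 2005).
    The bracket is valid for roughly 0 < A_FUV < 9."""
    poly = lambda y: -0.028*y**3 + 0.392*y**2 + 1.094*y + 0.546 - a_fuv
    return 10.0 ** brentq(poly, -1.0, 4.0)

def log_slope(nu):
    """d ln S / d ln nu of the modified-blackbody part of eq. (2)."""
    x = H_OVER_K * nu / T_D
    return (3.0 + BETA_D) - x * np.exp(x) / np.expm1(x)

# Connecting frequency (eq. 3): the MBB log-log slope reaches -alpha.
NU_C = brentq(lambda nu: log_slope(nu) + ALPHA, 1e11, 1e15)

def shape(nu):
    """Piecewise spectral shape of eq. (2), continuous at NU_C."""
    mbb = lambda n: n**(3.0 + BETA_D) / np.expm1(H_OVER_K * n / T_D)
    return mbb(nu) if nu <= NU_C else mbb(NU_C) * (nu / NU_C)**(-ALPHA)

L_FUV, A_FUV = 3.0e44, 2.5                 # hypothetical inputs [erg/s, mag]
L_TIR = irx_from_afuv(A_FUV) * L_FUV       # erg/s

# Normalize so the 8--1000 micron rest-frame integral equals L_TIR.
norm, _ = quad(shape, C_CGS/0.1, C_CGS/8e-4, points=[NU_C], limit=200)

nu_obs = C_CGS / 0.085                     # observed-frame 850 micron [Hz]
l_nu_rest = L_TIR * shape(nu_obs * (1 + Z)) / norm        # erg/s/Hz
d_l = FlatLambdaCDM(H0=70, Om0=0.3).luminosity_distance(Z).to("cm").value
s_850 = (1 + Z) * l_nu_rest / (4 * np.pi * d_l**2) * 1e26  # mJy
print(f"S_850 = {s_850:.2f} mJy")
\end{verbatim}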
\begin{figure*}
\begin{center}
\includegraphics[width=52mm]{Pic2-Official.IRXBeta.v01Av.eps}
\includegraphics[width=52mm]{Pic2-Official.IRXBeta-v2smc.v01Av.eps}
\includegraphics[width=52mm]{Pic2-Official.IRXBeta-Comp.v1med.eps}
\end{center}
\caption{ Predicted distribution of the IRX--$\beta$ relation in the case of the \authorcite{Calz00} attenuation law (left) and the SMC attenuation law (middle), and the comparison with previous works (right). The IRX value is estimated by the empirical conversion between $\mathrm{IRX_{TIR-FUV}}$ and $\mathrm{A_{FUV}}$ reported by \citet{Burg05}. The blue, green, and red points in the left panel (middle panel) represent best-fit dust attenuation values of Av $<$ 0.5 (0.3), 0.5 (0.3) $\leq$ Av $<$ 1.0 (0.6), and Av $\geq$ 1.0 (0.6), respectively. The large squares with the error bars denote the median value and median uncertainty. In the right panel, we show the median values of our result, and the magenta squares, red triangles, and orange circles represent the results from \citet{Alva16}, \citet{Fuda17}, and \citet{Bouw16}, respectively. In the left and right panels, the black solid and dashed lines show the relations based on the \authorcite{Calz00} attenuation law from \citet{Meur99} and \citet{Take12}, respectively. In the middle and right panels, the black dot-dashed line shows the relation based on the SMC attenuation law from \citet{Bouw16}.} \label{fig161718}
\end{figure*}
Figure \ref{fig161718} shows the IRX--$\beta$ relation obtained from the \authorcite{Calz00} attenuation law (left) and the SMC attenuation law (middle). The vertical axis is the predicted IRX value, and the horizontal axis is the observed UV slope $\beta$. The small dots represent the individual objects, and the color-coding is the same as in figure \ref{fig14}. The large blue, green, and red squares with the error bars represent the median value and median uncertainty for each sub-sample. In the right panel, we show the median values of our result and the previous works from \authorcite{Alva16} (2016: AM16, magenta squares), \authorcite{Fuda17} (2017: F17, red triangles), and \authorcite{Bouw16} (2016: B16, orange circles). The sample of AM16 consists of LBGs at \textit{z} $\sim$ 3 in the COSMOS field, and the IR luminosity is obtained from the stacked images of Herschel and AzTEC. The sample of F17 consists of massive star-forming galaxies at \textit{z} $\sim$ 3.2 in the COSMOS field, which are distributed on the main sequence of star formation, and the IR luminosity is obtained from the stacked ALMA images. We note that both samples consist of relatively more massive ($\mathrm{M_{*} \gtrsim 10^{10} M_{\solar}}$) and lower-redshift LBGs compared with our sample. The sample of B16 consists of LBGs at \textit{z} $=$ 4--10 in the Hubble Ultra Deep Field, and the IR luminosity is obtained from the stacked ALMA images. For B16, the data points in this panel represent the 2$\,\sigma$ upper limits of the formal uncertainty for the $\mathrm{M_{*} < 10^{9.75} M_{\solar}}$ sample described in table 13 of their paper, and thus their sample consists of relatively less massive (and possibly higher-redshift) LBGs compared with ours. The black solid and dashed lines show the relations based on the \authorcite{Calz00} attenuation law from \citet{Meur99} and \citet{Take12}, respectively. The black dot-dashed line shows the relation based on the SMC attenuation law from \citet{Bouw16}.
In the case of the \authorcite{Calz00} attenuation law (left panel), our sample shows a systematically bluer UV slope $\beta$, and the systematic offset becomes larger at larger IRX values. According to previous works for lower-redshift star-forming galaxies (e.g., \cite{Redd06}; \cite{Hein13}; \cite{Oteo13}; \cite{Alva16}), normal star-forming galaxies are distributed along the IRX--$\beta$ relation, while IR-luminous galaxies such as luminous infrared galaxies (LIRGs; $L_{\mathrm{TIR}} > 10^{11}\,L_{\solar}$) or ultra-luminous infrared galaxies (ULIRGs; $L_{\mathrm{TIR}} > 10^{12}\,L_{\solar}$) are distributed above the IRX--$\beta$ relation. The offset of our red points implies the presence of IR-excess galaxies at \textit{z} $\sim$ 4, analogous to local LIRGs/ULIRGs, although the systematic shift could be attributed to the uncertainty of the IRX, which comes from the conversion from $A_{\mathrm{FUV}}$ to $\mathrm{IRX_{TIR-FUV}}$, and/or to a failure of the SED fitting analysis. In the case of the SMC attenuation law (middle panel), our sample also shows a systematically bluer UV slope $\beta$, especially at larger IRX values. Most of our sample, however, shows moderate IRX values (IRX $\leq$ 10), and we find only a few IR-excess galaxies in our sample. In conclusion, our sample indicates the presence of IR-excess galaxies at \textit{z} $\sim$ 4.
When comparing with the previous works (right panel of figure \ref{fig161718}), our sample with the \authorcite{Calz00} attenuation law tends to have bluer UV slopes $\beta$ at larger IRX values than all the previous works, while our sample with the SMC attenuation law is comparable to those of AM16 and F17. Our results from both attenuation laws are not consistent with the result of B16. We note that the difference in the stellar mass of the samples is critical for the IRX--$\beta$ relation, since both the IRX and $\beta$ values depend on the stellar mass (e.g., \cite{Alva16}; \cite{Bouw16}; \cite{Fink12}; \cite{Fuda17}), and we consider that the inconsistency between our results and B16 is attributed to the difference in stellar mass. The data point from F17 at $\beta \sim -1.7$ and IRX $\sim$ 10 (the leftmost point) is comparable to our result with the \authorcite{Calz00} attenuation law, although the other data points from F17 are comparable to those with the SMC law. The authors mention that the leftmost point is uncertain because of the small sample size of that bin. Therefore, our result with the SMC attenuation law is not inconsistent with the previous works.
\begin{figure*}
\begin{center}
\includegraphics[width=60mm]{Pic-Official.IRX-850um.v1cal.v3K5sigSecure-scat.eps}
\includegraphics[width=60mm]{Pic-Official.IRX-850um.v2smc.v6K5sig-scat.eps}
\end{center}
\caption{ Predicted flux density at observed-frame 850$\>\micron$ for our sample. The left panel shows the case of the \authorcite{Calz00} attenuation law, and the right panel shows the case of the SMC attenuation law. The vertical axis is the predicted $S_{\mathrm{850}}$ value and the horizontal axis is the predicted IRX value. The blue open diamonds represent the individual objects detected at $> 5\,\sigma$ level in \textit{K} band photometry. The green filled circle with the orange error bars denotes the median value and median uncertainty estimated from the uncertainty in Av. The horizontal magenta solid line denotes the flux density measured in \citet{Copp15}. } \label{fig1920}
\end{figure*}
For further verification of our result, figure \ref{fig1920} shows the prediction of $S_{\mathrm{850}}$ for the cases of the \authorcite{Calz00} attenuation law (left) and the SMC attenuation law (right). The vertical axis is the predicted $S_{850}$ value and the horizontal axis is the predicted IRX value. The blue open diamonds represent the individual objects in our sample detected at the $> 5\,\sigma$ level in the \textit{K}-band photometry. The green filled circle with the orange error bars denotes the median value and median uncertainty of the whole sample; the uncertainty is again estimated from the uncertainty in Av. The horizontal magenta solid line denotes the flux density of the stacked LBGs at \textit{z} $\sim$ 4 measured by \citet{Copp15}, whose sample is quite similar to ours. Since the sample of \citet{Copp15} consists of \textit{K}-band-detected objects, we only show the $K > 5\,\sigma$ objects in the figure. According to \citet{Copp15}, the flux density measured for the stacked image is $S_{\mathrm{850}} = 0.411 \pm 0.064\>$mJy. We note that the result of \citet{Copp15} is derived from the SED template library constructed by \citet{Swin14}, and the modified blackbody $+$ power-law formula is used just for checking the validity of their SED fitting analysis.
The figure shows that the predicted $S_{\mathrm{850}}$ flux from the SMC attenuation law is insufficient to reproduce the stacking result of \citet{Copp15}, while the result from the \authorcite{Calz00} attenuation law is consistent with it. This comparison indicates that some \textit{z} $\sim$ 4 LBGs are indeed significantly dust-attenuated and that there must be IR-luminous star-forming galaxies in our sample. Alternatively, at least the SMC attenuation law is unsuitable for high-\textit{z}, \textit{K}-detected LBGs. However, the difference between the \citet{Copp15} result and ours may be due to the fact that the stacking result is a luminosity-weighted average while our median values are not. Since the red LBGs in our sample can be easily detected and measured with ALMA, future ALMA observations aiming at individual detections will potentially resolve the discrepancy.
We consider a possible interpretation of our optical/NIR-based IRX--$\beta$ relation. The IRX--$\beta$ relation is expressed as $\log_{10} \mathrm{IRX} = \log_{10}(10^{0.4\,\mathrm{c_{1}}\,(\beta - \beta_{0})} - 1.0) + \mathrm{c_{2}}$, where $\mathrm{c_{1}}$, $\mathrm{c_{2}}$, and $\beta_{0}$ are constants. The $\mathrm{c_{1}}$ value is the slope of the relation between the dust attenuation Av and the observed UV slope $\beta$, $d \mathrm{Av}/d \beta$, which is specified by the dust extinction curve. The $\mathrm{c_{2}}$ value represents the bolometric correction, because the observed UV and IR luminosities are not the representative values and we need a correction factor for the observed values. The $\beta_{0}$ value is the intrinsic UV slope $\beta$ as investigated in this paper. For example, the well-known relation of \citet{Meur99}, $A_{1600} = 4.43 + 1.99\,\beta$, corresponds to $\mathrm{c_{1}} \approx 1.99$ and $\beta_{0} = -4.43/1.99 \approx -2.23$. In short, the IRX--$\beta$ relation assumes that the extinction curve and the stellar population hidden by dust do not vary significantly with the physical quantities of the star-forming galaxies.
In the previous works on the IR-based IRX--$\beta$ relation, using a fixed $\beta_{0}$ value ($\sim -2.2$), the authors discuss the extinction curve suitable for reproducing the IRX--$\beta$ relation seen in high-redshift galaxies (e.g., \cite{Cpk15}; \cite{Alva16}; \cite{Bouw16}). \citet{Redd18} explain the IRX--$\beta$ relation of \textit{z} $\sim$ 2 galaxies by using the SMC attenuation law and a bluer $\beta_{0}$ value ($\sim -2.6$), which is derived from a recent stellar population synthesis model. Moreover, \citet{LeeKS12} and \citet{Redd12} discuss the variation of the extinction curve according to the observed UV magnitude and the age of star-forming galaxies.
From our analysis, assuming a fixed dust extinction curve, the observed properties are not represented by the IRX--$\beta$ relation with a fixed $\beta_{0}$ value; rather, a variation of the intrinsic $\beta$ value, of the extinction curve, or of both, depending on the physical quantities of the star-forming galaxies, is required. The prediction of the $S_{850}$ flux indicates that our sample is expected to include highly dust-attenuated and IR-luminous galaxies which are explained by the \authorcite{Calz00} attenuation law. Therefore, while the less dusty galaxies can be characterized by either the \authorcite{Calz00} or the SMC attenuation law, the highly dust-attenuated galaxies are most likely characterized by the \authorcite{Calz00} attenuation law and bluer intrinsic $\beta$ values. Although it is difficult to confirm the variation from our results alone, there seems to be a variation of the intrinsic $\beta$ value or of the extinction curve according to the physical quantities of the star-forming galaxies.
\section{Conclusion} \label{S6Conc}
In this work, we investigate the UV slope $\beta$ and stellar population of bright star-forming galaxies at \textit{z} $\sim$ 4 in the SXDS field, which is a wide-area and deep survey field. We use the imaging data of Subaru/\textit{BVRi'z'} updated-\textit{z'}, UKIRT/\textit{JHK}, HST/F125W F160W, and Spitzer/3.6$\>\micron$ 4.5$\>\micron$, and we construct the sample of star-forming galaxies at \textit{z} $\sim$ 4 by both the Lyman Break technique and photometric redshift selection. The UV slope $\beta$ is calculated by a simple power-law fit, and the stellar population is estimated from the optical and NIR photometry through the SED fitting analysis. Consequently, we find a sign that some star-forming galaxies, which experience on-going active star formation and suffer heavy dust attenuation, really exist in the \textit{z} $\sim$ 4 universe. We list our main results below.
\begin{itemize}
\item
There seems to be little dependence of the observed UV slope $\beta$ on the observed UV absolute magnitude $M_{\mathrm{UV}}$ in the range of $-22.0 \lesssim M_{\mathrm{UV}} \lesssim -20.0$, although the dynamic range of $M_{\mathrm{UV}}$ is limited. The slope of the $\beta$--$M_{\mathrm{UV}}$ relation is $-$0.02 $\pm$ 0.02, which is shallower than in the previous studies for similar redshift but fainter LBGs ($-$0.13 $\pm$ 0.02 from \cite{Bouw14} and $-$0.10 $\pm$ 0.03 from \cite{Kurc14}).
\item
To investigate the dependence of the UV slope $\beta$ on the dust attenuation, age, metallicity, and SFH, we calculate the \textit{intrinsic} (dust-corrected) UV slope, $\beta_{\mathrm{int}}$, and the \textit{intrinsic} UV absolute magnitude, $M_{\mathrm{UV,int}}$, by using the results of the SED fitting analysis. The star-forming galaxies with the bluest $\beta_{\mathrm{int}}$ and brightest $M_{\mathrm{UV,int}}$ values are the dusty star-forming population which is observed with red $\beta_{\mathrm{obs}}$ values. The dusty star-forming population has $\beta_{\mathrm{obs}} > -1.73$, $\mathrm{Av} \geq 1.0$, $\beta_{\mathrm{int}} \leq -2.42$, and SFR $\gtrsim$ a few $\times\ 10^{2}\,\mathrm{M_{\solar}}\>\mathrm{yr^{-1}}$, and the flat $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ distribution is attributed to this population.
\item
We find the intersection point of the $\beta_{\mathrm{int}}$--$M_{\mathrm{UV,int}}$ relation and the $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation by extrapolating our relation toward the fainter magnitude range. The intersection point represents the position where the nearly dust-free population appears, and it is at $\beta = -1.94$ and $M_{UV} = -18.88$, which is close to the break point of the $\beta_{\mathrm{obs}}$--$M_{\mathrm{UV,obs}}$ relation reported by \citet{Bouw14}.
\item
Our result does not depend on the SFHs used in the SED fitting analysis. However, it does depend on the assumed attenuation law. The best-fit dust attenuation value assuming the SMC attenuation law is found to be smaller than that obtained with the \authorcite{Calz00} attenuation law. The trend that the intrinsic $\beta$ value increases with the intrinsic $M_{\mathrm{UV,int}}$ value appears in both cases.
\item
We compare the observed colors of the \textit{zJK} broad-band filters with the expected colors. Since the \textit{z-J} color traces the UV slope $\beta$ and the \textit{J-K} color traces the Balmer break of \textit{z} $\sim$ 4 LBGs, we can also infer the stellar population from the observed quantities. The observed color of the $\beta_{\mathrm{obs}} > -1.73$ sub-sample of the \textit{z} $\sim$ 4 star-forming galaxies is well reproduced by a star-forming, dusty, and young-age (blue $\beta_{\mathrm{int}}$) population.
\item
We estimate the IRX ($= L_{\mathrm{TIR}}/L_{\mathrm{FUV}}$) value and the flux density at observed-frame 850$\>\micron$, $S_{\mathrm{850}}$, from the optical and NIR imaging data alone. The optical/NIR-based IRX--$\beta$ relation indicates a variation of the intrinsic $\beta$ value or of the dust attenuation law, or both, according to the physical quantities of the star-forming galaxies. The $S_{\mathrm{850}}$ value estimated from the SMC attenuation law is not consistent with the stacking results of \citet{Copp15}, and thus the \authorcite{Calz00} attenuation law is preferable for the \textit{z} $\sim$ 4 intrinsically luminous LBGs.
\item
Our analysis indicates that a significant fraction of \textit{z} $\sim$ 4 LBGs are highly dust attenuated and IR luminous populations such as ULIRGs/LIRGs. This population has not been well recognized in previous analyses but is important in understanding the early phase of galaxy formation, possibly linking the typical blue LBGs and the much redder sub-mm selected galaxies.
\end{itemize}
\begin{ack}
We appreciate M. Kajisawa, A. Inoue, K. Mawatari, and T. Hashimoto for helpful comments and discussions.
This work is mainly based on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
The UKIDSS project is defined in \citet{Law07}. UKIDSS uses the UKIRT Wide Field Camera (WFCAM; \cite{Cas07}). The photometric system is described in \citet{Hew06}, and the calibration is described in \citet{Hodg09}. The pipeline processing and science archive are described in Irwin et al (2009, in prep) and \citet{Hamb08}. We used UKIDSS data release 10.
This work is based on observations taken by the CANDELS Multi-Cycle Treasury Program with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
This work is based in part on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
Data analysis was in part carried out on the open use data analysis computer system at the Astronomy Data Center, ADC, of the National Astronomical Observatory of Japan. We used the interactive analysis servers (anam[01-16]), the batch processing servers (bapm[01-06]), the terminal workstations (new-r[01-13]), and the disk space (home and mfst).
This work was supported by JSPS KAKENHI Grant Number JP26400217.
\end{ack}
|
1,116,691,497,913 | arxiv | \section{ Introduction}
The new data of TOTEM collaboration\cite{TOTEMRHO1,TOTEMRHO2,TOTEMRHO3,
TOTEMRHO4}
drew attention to the state with negative signature and with an
intercept which is close to unity
(see Refs.\cite{KMRO,BJRS,TT,MN,BLM,SS,KMRO1,KMRO2,GLP,CNPSS}).
This state is known as the Odderon, and it appears naturally in
perturbative QCD (see Ref.\cite{KOLEB} for the review)
with the intercept $\alpha_{\rm Odd}\Lb t=0\right)\,\,=\,\,1$\cite{BLV,KS}.
In a number of papers it was shown that such a state could be
helpful for describing the experimental
data\cite{KMRO,BJRS,TT,MN,BLM,SS,KMRO1,KMRO2,GLP,CNPSS}.
However, in perturbative QCD the dependence of the Odderon on energy
is crucially affected by the shadowing corrections, which lead to a
substantial decrease of the Odderon contribution
with increasing energy\cite{KS,KOLEB,CLMS}.
In this paper we wish to study the shadowing corrections to the
Odderon contribution using the model that we proposed in
Ref.\cite{GLPPM}. The model is based (i) on
Pomeron calculus in 1+1 space-time, suggested in Ref. \cite{KLL},
and
(ii) on
simple assumptions of hadron structure, related to the impact
parameter
dependence of the scattering amplitude. This parton model stems from QCD,
assuming that the unknown non-perturbative corrections lead to
determining
the
size of the interacting dipoles. The advantage of this approach is that
it
satisfies both the $t$-channel and $s$-channel unitarity, and can be
used
for summing all diagrams of the Pomeron interaction, including Pomeron
loops. In other words, we can use this approach for all possible
reactions of parton systems: dilute-dilute (hadron-hadron), dilute-dense
(hadron-nucleus) and dense-dense (nucleus-nucleus) scattering.
The model gives a fairly good description of
three
experimental
observables: $\sigma_{\rm tot}$,$\sigma_{\rm el}$ and $B_{\rm el}$ for proton-proton scattering,
in the eikonal model for
the structure of hadrons at high energy.
The goal of this paper is to study the influence of the shadowing
corrections on the Odderon contribution in our model.
\section{ The model (brief review)}
Our model includes three essential ingredients: (i) the new parton
model for the dipole-dipole scattering amplitude that has been discussed
above; (ii) the simplified one channel model that enables us to
take
into account diffractive production in the low mass region, and (iii)
the assumptions for impact parameter dependence of the initial
conditions.
\subsection{New parton model.}
The model that we employ \cite{GLPPM,KLL} is based on
three
ingredients:
1. The Colour Glass Condensate
(CGC) approach (see Ref.\cite{KOLEB} for a review), which can be
re-written in an equivalent form as the interaction of BFKL
Pomerons\cite{AKLL} in a limited range of rapidities
( $Y \leq Y_{\rm max}$):
\begin{equation} \label{RAPRA}
Y \,\leq\,\frac{2}{\Delta_{\mbox{\tiny BFKL}}}\,\ln\Lb
\frac{1}{\Delta^2_{\mbox
{\tiny BFKL}}}\right)
\end{equation}
$\Delta_{\mbox{\tiny BFKL}}$ denotes the intercept of the BFKL
Pomeron\cite{BFKL}. In our model $ \Delta_{\mbox{\tiny BFKL}}\,
\approx\,0.2 - 0.25$ leading to $Y_{max} = 20 - 30$, which covers
all collider energies.
2. The following Hamiltonian:
\begin{equation}\label{HNPM}
{\cal H}_{\rm NPM}=-\frac{1}{\gamma}\bar PP\end{equation}
where NPM stands for ``new parton model''. $P$ and $\bar P$ denote
the BFKL
Pomeron fields.
The fact that it is self-dual is evident. This Hamiltonian in the limit
of small $\bar P$ reproduces the Balitsky-Kovchegov Hamiltonian
${\cal H}_{\rm BK}$
( see Ref.\cite{KLL} for details). This condition is
important for determining the form of
${\cal H}_{\rm NPM}$. $\gamma$ in \eq{HNPM} denotes the dipole-dipole
scattering amplitude, which in QCD is proportional to $\bar{\alpha}_S^2$.
3. The new commutation relations:
\begin{equation}\label{CRCOR}
\Big(1\,\,-\,\,P\Big)\Big(1\,\,-\,\,\bar P \Big)\,\,=\,\,(1-\gamma)\Big(1\,\,-\,\,\bar P\Big) \Big(1\,\,-\,\,P\Big)
\end{equation}
For small $\gamma$ and in the regime where $P$ and $\bar P$ are also
small, we obtain
\begin{equation}
[P,\bar P]=-\gamma +...
\end{equation}
consistent with the standard BFKL Pomeron calculus (see Ref.\cite{KLL}
for details) .
In Ref.\cite{KLL}, it was shown that the scattering matrix
for the model
is
given by
\begin{eqnarray}\label{classs}
S^{\rm NPM}_{m\bar n}(Y)&=&e^{\frac{1}{\gamma} \int_0^Yd\eta\left[
\ln(1-p)\frac{\partial}{\partial \eta}\ln (1-\bar p)
+\bar pp\right]}[1-p(Y)]^m[1-\bar p(0)]^{\bar n}|_{p(0)=1-e^{-\gamma
\bar n};\ \bar p(Y)=1-e^{-\gamma m}}\nonumber\\
&=&[1-p(Y)]^m\,e^{\frac{1}{\gamma}\int_0^Yd\eta \left[\ln(1-\bar p)+\bar
p\right]p}
\end{eqnarray}
where $p(\eta)$ and $\bar p(\eta)$ are solutions of the classical equations
of motion and have the form:
\begin{equation} \label{H03}
P (\eta)\,=\,\frac{ \alpha +\beta e^{ (1 - \alpha) \eta} }{1 + \beta e^{ (1
- \alpha) \eta}}; \ \ \ \ \bar P(\eta)= \frac{ \alpha (1+\beta e^{
(1 - \alpha) \eta}) }{\alpha + \beta e^{ (1 - \alpha) \eta}};
\end{equation}
where the parameters $\beta$ and $\alpha$ should be determined from
the
boundary conditions:
\begin{equation} \label{H0BC}
P (\eta= 0)\,=\,p_0;\,\,\,\,\,\,\,\, \bar P (\eta= Y)\,=\,\frac{\alpha}{P
(\eta= Y)}\,=\,\bar p_0
\end{equation}
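For given $p_0$, $\bar p_0$ and $Y$, the two conditions of \eq{H0BC} fix the constants $\alpha$ and $\beta$; a minimal numerical sketch (Python with SciPy; the input values below are purely illustrative) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def boundary_eqs(x, p0, pbar0, Y):
    alpha, beta = x
    e = np.exp((1.0 - alpha) * Y)
    # P(eta = 0) = p0  and  Pbar(eta = Y) = alpha / P(Y) = pbar0
    return [(alpha + beta) / (1.0 + beta) - p0,
            alpha * (1.0 + beta * e) / (alpha + beta * e) - pbar0]

alpha, beta = fsolve(boundary_eqs, x0=[0.1, 0.05], args=(0.3, 0.3, 10.0))
\end{verbatim}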
\subsection{ Eikonal approximation }
In the eikonal approximation we neglect the contribution of the
diffractive production and assume that the hadron wave function
diagonalizes the interaction matrix. In this model
the unitarity constraints take the form
\begin{equation} \label{UNIT}
2\,\mbox{Im}\,A\left(s,b\right)=|A\left(s,b\right)|^2
+G^{in}(s,b),
\end{equation}
where $G^{in}$ denotes the contribution of all inelastic processes.
In \eq{UNIT}
$\sqrt{s}=W$ denotes the energy of the colliding hadrons and $b$ denotes
the
impact parameter. In our approach we used the solution to \eq{UNIT}
given by \eq{classs} and
\begin{equation} \label{EIK}
A \,=\,1 - S^{\rm NPM}(Y, b)\,\,\,\equiv\,\,\,1\,\,-\,\,\exp\Big(- \Omega\Lb Y,b\right)\Big)
\end{equation}
\subsection{ The general formulae.}
{\it Initial conditions:}
Following Ref.\cite{GLPPM} we chose the initial conditions in the form:
\begin{equation} \label{IC}
p(b') = p_{0 } \,S(b',m)~~~\mbox{with}~~S(b,m)= m b K_1(m\, b );~~~~~
\bar{p}(\vec{b} - \vec{b}') = p_{0} S( \vec{b} - \vec{b}',m) ~~~~~~~z_m = e^{\Delta(1 - p_{0})Y}
\end{equation}
Both $p_{0} $ and mass $m$, as well as the Pomeron intercept
$\Delta$, are parameters of the model, which are determined by
fitting to the relevant data. Note, that
$S\Lb b, m\right) \xrightarrow{m\,b \gg 1}\,\exp\Lb - m\,b\right)$
in accord with the Froissart theorem\cite{FROI}.
From \eq{IC} we find that
\begin{eqnarray}
a(b,b')\,\equiv\, a\Lb p, \bar{p}, z_m\right) &=&\,\frac{1}{2}\Lb p + \bar{p}\right) \,+\,\frac{1}{2\,z_m}\Lb (1-p)(1-\bar{p} ) \,-\, D\right);\label{ALEQ}\\
b(b,b') \,\equiv\,b\Lb p, \bar{p}, z_m\right) \,\,&=&\, \frac{1}{2} \frac{p- \bar{p}}{1 - p} -\frac{1}{2 z_m (1 - p)}\Lb (1- p) (1 - \bar{p}) - D\right);\label{BEEQ}\\
~~
D &=& \sqrt{4 p (1 - p) (1- \bar{p}) z_m - \Lb (1 - p) (1- \bar{p}) - (p - \bar{p}) z_m\right)^2};\label{D}
\end{eqnarray}
These equations are the explicit solutions to \eq{H03} and \eq{H0BC}.
\par{\it Amplitudes:}
In the following equations $ p \equiv p ( b')$ and $\bar{p}
\equiv \bar{p}(\vec{b} - \vec{b} ')$.
~
$$z = e^{\Delta\,(1- p_{0})\,y}$$
~
$S (a,b,z) \equiv S(a(b,b'),b(b,b'),z_m)$, \,\, $ X( a, b,z) \equiv X(a(b,b'),b(b,b'),z_m)$
\begin{equation}
X(a,b,z) = \frac{a + b z}{1 + b z}
\end{equation}
\begin{eqnarray}
&&SS(a,b,z)=\\
&&-(a-1) \text{Li}_2(-b z)+a
\text{Li}_2\left(-\frac{b
z}{a}\right)+(a -1)
\text{Li}_2\left(\frac{a+b
z}{a -1}\right)+\frac{1}{2} a \log
^2((1-a) b z)\nonumber\\
&& -(a -1) \log (b z+1)
\log ((1-a) b z)
-\left(a \log
(z)-(a-1) \log \left(-\frac{b
z+1}{a -1}\right)\right) \log (a + b
z)\nonumber\\
&& +a \log (z) \log \left(\frac{b
z}{a+1}\right)\nonumber
\end{eqnarray}
\begin{equation} \label{FIN}
S(a,b,z) \,\,=\,\,SS(a, b, z) \,-\,SS(a , b,z=1) \end{equation}
~
The amplitude is given by
\begin{eqnarray} \label{AIK}
\hspace{-1cm}&&A(s, b)\,\,\,=\,\,\,1\,\,\,-\,\,\,e^{ - \Omega\Lb W, b\right) }\,\,=\\
\hspace{-1cm}&&\,1 - \exp\Bigg( \frac{1}{p_{0}}\int \frac{m^2 d^2 b'}{4 \pi} \Big( S(a,b,z_m) \,\,+\,\, a(b,b') \Delta (1 - p_0) Y\Big) - \int \frac{m^2 d^2 b'}{4 \pi} \bar{p}( \vec{b} - \vec{b}',m)\,X(a, b,z_m) \Bigg)\nonumber
\end{eqnarray}
\section{The Odderon contribution}
\subsection{ Odderon exchange}
As has been mentioned, we view the Odderon as a reggeon with negative
signature and with the intercept
$\alpha_{\rm Odd}(t=0)=1$. Generally speaking, its contribution to the
scattering amplitude has the following form:
\begin{equation} \label{ODD1}
O_{i k}(s,b)\,\,=\,\,\eta_{-}(t) \,g^i_{\rm Odd}(b)\,g^k_{\rm Odd}(b)\,\,s^{\alpha_{\rm Odd}(t)\,\,-\,\,1}
\end{equation}
where $\eta_{-}$ is a signature factor $\eta\,\,=\,\,\tan\Lb \frac{1}{2} \pi\,
\alpha_{\rm Odd}(t)\right)\,\,-\,\,i$
, $g^i_{\rm Odd}$ is the vertex for the interaction of the Odderon with
state $i$, and $\alpha_{\rm Odd}$ denotes the trajectory. The Odderon
appears naturally in perturbative QCD. As one can see from
\fig{odqcd} the QCD Odderon describes the exchange of three
gluons and all the interactions between them. The QCD Odderon has
the trajectory with the intercept equal to 1 and which does
not depend on $t$\cite{BLV,KS}. Hence, the Odderon only contributes
to the real part of the scattering amplitude. For an
estimate we will use the following form of the Odderon contribution:
\begin{equation} \label{ODD2}
O_{i k}(s,b)\,\,=\,\,\pm \, \sigma_0 e^{ - \frac{b^2}{4 \,B}}
\end{equation}
where the plus sign corresponds to proton-antiproton scattering, while
the minus sign describes proton-proton collisions.
The value of $\sigma_0$ was evaluated in Ref.\cite{RYODD} (see also
Ref.\cite{LERY90}) in the framework of perturbative QCD. It turns
out that $\sigma_0\,\,=\,\,20.6\,\bar{\alpha}_S^3\,mb$. In perturbative QCD,
we expect that $B$ is smaller than for the elastic scattering. We
choose $ B=5.6-6 \,GeV^{-2}$ for our estimates\cite{GLPPM,KMRO}.
In \eq{ODD2} we assume that $g^i_{\rm Odd}(b)$
in \eq{ODD1} does not depend on $i$.
\begin{figure}
\centering
\includegraphics[width=14cm]{OddQCD.pdf}
\caption{QCD Odderon for two dipoles scattering: the wavy
lines describe gluons and the solid lines correspond to quarks. }
\label{odqcd}
\end{figure}
\subsection{Shadowing corrections}
In the eikonal model the elastic amplitude is equal to
(see \fig{sc}-a)
\begin{eqnarray} \label{SC1}
A_{\rm el}\Lb s, b\right) \,\,\,&=&\,\,\,\,1\,\,- \,\,\exp\Big( -\,\,\Omega\Lb s, b\right)\Big)
\end{eqnarray}
\eq{SC1} is the series whose general term is proportional to
$\Omega^n/n!$. In the case of Odderon exchange we
need to replace one of $\Omega$ by $O(s,b)$. Hence
$\Omega^n/n!$
should be replaced by $ O(s,b) \,n\, \Omega^{n - 1}/n!
\,\,=\,\,O(s,b) \Omega^{n - 1}/(n - 1)! $. Finally, we
have (see also Ref.\cite{GLP})
\begin{equation} \label{SC2}
O^{\rm SC}\Lb s, b\right) \,\,\,=\,\,\,\, \,O(s,b)\,e^{- \Omega\Lb s, b\right)} \,\,=\,\,
\,\,O(s,b)\,\Bigg( 1\,\,-\,\,A_{\rm el}\Lb s, b\right)\Bigg)
\end{equation}
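To make \eq{SC2} concrete, the suppression can be evaluated numerically; in the sketch below (Python) the opacity profile is a toy stand-in for our $\Omega\Lb W,b\right)$, and all parameter values are illustrative only.
\begin{verbatim}
import numpy as np

def odderon_bare(b, sigma0=0.1, B=6.0):      # Eq. (ODD2), B in GeV^-2
    return sigma0 * np.exp(-b * b / (4.0 * B))

def Omega_toy(b):                            # stand-in for Omega(W, b)
    return 2.0 * np.exp(-b * b / 10.0)

def odderon_screened(b):                     # Eq. (SC2): O * exp(-Omega)
    return odderon_bare(b) * np.exp(-Omega_toy(b))

b = np.linspace(0.0, 8.0, 5)                 # impact parameter, GeV^-1
print(odderon_screened(b) / odderon_bare(b)) # suppression factor 1 - A_el
\end{verbatim}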
\begin{figure}
\centering
\includegraphics[width=12cm]{OddEik.pdf}
\caption{Shadowing corrections to the Odderon exchange:
\fig{sc}-a: elastic amplitude in the two channel model.
\fig{sc}-b: the shadowing corrections in our model.
The wavy lines describe the Pomeron exchanges while
the zigzag line corresponds to the exchange of the Odderon. }
\label{sc}
\end{figure}
\subsection{Numerical estimates}
In this section we make estimates using our model
for $\Omega$, with parameters that are given by Table I.
\begin{figure}
\centering
\includegraphics[width=12cm]{Bdep.pdf}
\caption{$O\Lb W, b\right)$ versus $b$ for different energies.
The red line corresponds to the contribution of \eq{ODD2}. }
\label{b}
\end{figure}
In \fig{b} we plot the $b$ dependence of the Odderon contribution.
One can see that the shadowing corrections lead to a considerable
suppression of the Odderon contribution at small $b$ in comparison
with \eq{ODD2} (see the red line in \fig{b}). This suppression
is much smaller than in our CGC-based approach\cite{GLP}. The reason
for this is that in our model the value of $A_{\rm el}\Lb s,
b\right)$ turns out to be smaller than 1 even at very high energies.
Due to this, $O\Lb W, b=0\right)\,\,\neq\,\,0$ even at $W \approx 20\,TeV$.
In \fig{rho} we plot the contribution of the Odderon to the ratio
$\rho\,\,=\,\,{\rm Re}/{\rm Im}$ of the scattering amplitude as a
function of energy. One sees the influence of the shadowing
corrections, which induce an energy dependence of this ratio.
\eq{ODD2} shows that, without these corrections, the Odderon does
not depend on energy. This induced energy dependence
turns out to be rather large, causing a decrease of $\rho$
in the energy range W = 0.5 $\div$ 20 TeV. However, this effect is
much smaller than in our previous estimates \cite{GLP}, and the value
of $\rho$ does not contradict the experimental data
\cite{TOTEMRHO1,TOTEMRHO2,TOTEMRHO3, TOTEMRHO4}.
The shadowing corrections have a remarkable
effect on the $t$-dependence
of the scattering amplitude (see \fig{q}). We see that the shadowing
corrections lead to a narrower distribution over $t$ than the
input
given by \eq{ODD2}, which is shown in \fig{q} by the red line.
\begin{figure}
\centering
\includegraphics[width=12cm]{RhovsW1.pdf}
\caption{$\rho = {\rm Re}/{\rm Im}$ due to the Odderon
contribution versus W in our model. The solid line presents the estimates,
using \eq{SC2}, while the dashed line describes the evaluation in the two
channel model of Ref.\cite{GLPPM2}. }
\label{rho}
\end{figure}
\begin{table}[h]
\begin{tabular}{|l|l|l|l|}
\hline
\hline
$\Delta_{\rm dressed}$ & $p_{0}$ & $m(GeV)$ & $\chi^2$/d.o.f\\
\hline
0.331 $\pm$ 0.030 & 0.483 $\pm$ 0.030 &0.867 $\pm$ 0.005& 1.3\\
\hline
\hline
\end{tabular}
\caption{Fitted parameters. $\Delta_{\rm dressed} = \Delta\Lb 1 - p_{0}\right)$.}
\label{t2}
\end{table}
\begin{figure}
\centering
\includegraphics[width=8cm]{Qdep.pdf}
\caption{$O\Lb W,q=\sqrt{|t|}\right)$ versus $q =
\sqrt{|t|}$ for different energies. The red line corresponds
to the contribution of \eq{ODD2}. }
\label{q}
\end{figure}
\begin{boldmath}
\section{Dependence of the elastic cross sections on $t$ and the Odderon}
\end{boldmath}
In this section we look at another facet of the Odderon
contribution: it could contribute to the real part of the
scattering amplitude at $t=t_{min}$, where $d
\sigma_{\rm el}/dt$ has a minimum.
We attempt to describe the elastic cross section for $|t|=0 \div
1\,GeV^2$. Our model predicts the existence of a
minimum
in the elastic cross section; however, its position occurs at
$|t|\,\approx\,0.3\,GeV^2$, which is much smaller than
observed experimentally by the TOTEM collaboration\cite{TOTEMLT}.
Assuming that this discrepancy is due to the simplified
form of the
$b$ dependence of our amplitude, which is given by \eq{IC}, we
changed the initial conditions of \eq{IC} to the following equations
\begin{equation} \label{IC1}
p(b') = p_{0} \,S(b',m,\mu,\kappa)~~~\mbox{with}
~~S(b,m,\mu,\kappa)= \Lb 1 - \kappa\right) \Lb m\,b\right)^{\nu_1} K_{\nu_1}(m \,b )\,\,+\,\,\kappa \frac{\Lb \mu \,b\right)^{\nu_2} K_{\nu_2}(\mu \,b )}{2^{\nu_2\,-\,1} \,\Gamma\Lb \nu_2\right)}
\end{equation}
In Table II we present the parameters found in the fit.
\fig{dsdt} shows the comparison with the TOTEM data
of Ref.\cite{TOTEMLT}. One can see that we obtain good agreement
with the experimental data for $|t|\,<\,|t|_{min}$ and for
$|t|\,>\, |t|_{min}$. However, for $ |t| \approx\,|t|_{min}$ the real
part
of the scattering amplitude turns out to be small, and we obtain
a value of $d \sigma_{el}/d \,t$ approximately an order of
magnitude smaller than the experimental one. It should be stressed that
we do not use any of the simplified approaches to estimate the
real part of
the amplitude; instead, using our general expression of \eq{AIK} for $A_{i
k}\Lb s,t\right)$, we consider the sum $A_{ik}\Lb s + i \epsilon, t\right) +
A_{ik}\Lb u - i \epsilon,t\right)$, which corresponds to positive signature,
and calculate the real part of this sum.
In \fig{dsdt} we also estimate the contribution of the $\omega$-reggeon,
using the
description taken from Ref.\cite{DOLA} (note the difference
between the green dashed line and the blue solid curve). This
contribution is small, and can be neglected.
\begin{figure}[ht]
\centering
\leavevmode
\includegraphics[width=11cm]{DSDT.pdf}
\caption{$d \sigma_{el}/dt$ versus $t$. The green line
describes the result of our fit. The dashed line corresponds to the
contribution of the imaginary part of the scattering amplitude to the
elastic cross section. The dotted line relates to the real part of our
amplitude. The red solid line takes into account the contribution of the
odderon to the real part of the $p p$ amplitude, as is shown in
\eq{ODD}. The data, shown in grey, include systematic errors. They are
taken from Ref.\cite{TOTEMLT}. } \label{dsdt}
\end{figure}
To evaluate the real part of the amplitude we use the relation:
\begin{equation} \label{DSDT1}
{\rm Re}A_{11}(s,t)\,\,=\,\,\frac{1}{2} \,\pi\,\frac{\partial}{\partial\,\ln\Lb
s/s_0\right)}\, {\rm Im}A_{11}\Lb s,t\right)|_{\eq{AIK}}
\end{equation}
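Numerically, the derivative in \eq{DSDT1} can be taken by finite differences on a grid in $\ln s$; a minimal sketch (Python; the tabulated imaginary part is assumed to come from \eq{AIK}) is:
\begin{verbatim}
import numpy as np

def re_from_im(ln_s, im_A):
    # Eq. (DSDT1): Re A = (pi/2) d(Im A)/d ln(s/s0),
    # with Im A tabulated on a grid in ln s (central differences)
    return 0.5 * np.pi * np.gradient(im_A, ln_s)
\end{verbatim}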
\eq{DSDT1} correctly describes the real part of the amplitude only for
small $\rho={\rm Re}A/{\rm Im} A$. In \fig{dsdt1} we plot
$d \sigma/dt$ with these estimates for the real part.
The real part from \eq{DSDT1} turns
out to be almost
twice as large as the experimental data in the vicinity of $t_{min}$.
Therefore, at the minimum, where
${\rm Im}\, A \,\ll\,{\rm Re}A$, \eq{DSDT1} cannot be used
for the real part. However, replacing \eq{DSDT1} by
\begin{equation} \label{DSDT2}
{\rm Re}A_{11}(s,t)\,\,=\,\,\tan\Lb\rho\right)\, {\rm Im}A_{11}\Lb s,t\right)|_{ \eq{AIK}}
\end{equation}
we obtain the same result: the real part of the amplitude turns
out to be too large. Actually, \eq{DSDT2} assumes that the scattering
amplitude depends on energy as a power, $A\Lb s,
t\right)\,\propto\,s^{2\,\rho/\pi}$. Our amplitude
is a rather complex function of energy, and depends
on $\ln(s)$.
\begin{figure}[ht]
\centering
\leavevmode
\includegraphics[width=10cm]{DSDT1.pdf}
\caption{$d \sigma_{el}/dt$ versus $t$. The solid line
describes the result of our fit. The dotted line corresponds
to the contribution of the real part of the scattering amplitude
to the elastic cross section, which is calculated using \eq{DSDT1},
with the added contribution of the
exchange of the $\omega$-reggeon, which is taken from
Ref.\cite{DOLA}. We do not show the contribution of the real part
without the $\omega$-reggeon, as it coincides with the dotted
line. The dashed line is the contribution of the imaginary part
of the amplitude.
The data are taken from Ref.\cite{TOTEMLT}. }
\label{dsdt1}
\end{figure}
\begin{table}[h]
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
Variant of the fit &$\Delta_{\rm dressed}$ & $p_{0}$ & $m$
(GeV) &$\mu$(GeV)& $\nu_1$& $\nu_2$&$\kappa$\\
\hline
one channel model &0.48 $\pm$ 0.01&0.8 $\pm$ 0.05&0.860 &7.6344&0.9&0.1 &0.48\\\hline
\hline
\end{tabular}
\caption{Fitted parameters for the $d \sigma_{el}/d t$
dependence. $\Delta_{\rm dressed} = \Delta\Lb 1 - p_{0}\right)$.}
\label{t3}
\end{table}
Concluding, we see that to describe the TOTEM experimental data in
the framework of our model, a contribution to
the real part of the amplitude from the exchange of the odderon\cite{ODD}
is needed.
Hence, our
estimates confirm the conclusions of Ref.\cite{ODDSC}. In
\fig{dsdt} we plot the description of the elastic cross section in
which we have added the odderon contribution to the amplitude of
\eq{AIK}
(red solid curve in \fig{dsdt}):
\begin{equation} \label{ODD}
f\Lb s,t\right)\,\,=\,\,f\Lb s,t; \eq{AIK}\right)\,\,\pm\,\, O(s,t) \end{equation}
where we consider a QCD odderon\cite{ODD}: the state with odd signature and
with the intercept $\alpha_{\rm odd}(t=0)=1$, which contributes only
to the real part of the scattering amplitude. $O(s,t)$ is given by
\eq{ODD2}. Our odderon parameters are in
accord with the estimates in Ref.\cite{KMR}. The amplitude $f(s,t)$ is
related to $a_{el}\Lb s,b\right)$ as
\begin{equation} \label{OBSEL}
\frac{d \,\sigma_{el}}{d t}\,\,=\,\,\pi\,|f(s,t)|^2;\,\,\,\,\,a_{el}(s,b)\,\,=\,\,\frac{1}{2 \,\pi}\int d^2 q\, e^{- i \vec{q}\cdot\vec{b}}\,f\Lb s,t\right),\,\,\mbox{where}\,\, t\,=\,- q^2
\end{equation}
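For an azimuthally symmetric amplitude, \eq{OBSEL} reduces to a Fourier--Bessel transform, $f\Lb s,t\right) = \int_0^\infty b\,d b\, J_0(q\,b)\, a_{el}(s,b)$; a schematic numerical version (Python; the $a_{el}$ profile below is a toy stand-in for \eq{AIK}) is:
\begin{verbatim}
import numpy as np
from scipy.special import j0

b = np.linspace(0.0, 60.0, 6000)                  # impact parameter, GeV^-1
a_el = 1.0 - np.exp(-2.0 * np.exp(-b**2 / 60.0))  # toy a_el(s, b)

def f_of_t(q):
    # f(s,t) = int_0^inf b db J0(q b) a_el(s,b), with t = -q^2
    return np.trapz(b * j0(q * b) * a_el, b)

q = 0.5                                           # GeV
ds_dt = np.pi * f_of_t(q) ** 2                    # Eq. (OBSEL), up to units
\end{verbatim}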
In \fig{dsdt2} we show the prediction for proton-antiproton scattering.
One can conclude that in our model the measurements of the elastic cross
sections for $p\,p$ and $\bar{p} p$ scattering can provide estimates
of
the odderon contribution. It should be stressed that the contribution
of the $\omega$-reggeon is negligible at
$W= 7\,TeV$ (see \fig{dsdt1}).
\begin{figure}[ht]
\centering
\leavevmode
\includegraphics[width=10cm]{DSDT2.pdf}
\caption{$d \sigma_{el}/dt$ versus $t$.
The solid line describes the elastic cross sections
for $p p$-scattering with the odderon contribution
(see \eq{ODD}), while the dashed line shows the
elastic cross section for $\bar{p} p$-scattering
using \eq{ODD}. The data are taken from Ref.\cite{TOTEMLT}.
}
\label{dsdt2}
\end{figure}
~
~
\section{Conclusions}
In this paper we discussed the Odderon contribution in
our model\cite{GLPPM}, which provides a fairly good description of
$\sigma_{\rm tot}$, $\sigma_{\rm el}$
and $B_{\rm el}$, especially as related to the energy dependence
of these observables. We showed that the shadowing
corrections are large and induce a considerable energy dependence
of the Odderon contribution, which
in perturbative QCD is energy independent. However, this energy
dependence does not contradict the experimental data for
$\rho = {\rm Re}/{\rm Im}$, if we assume that the Odderon
gives a contribution of about $1 \div 4$ mb at W = 7 TeV (see \fig{rhoexp}).
\begin{figure}
\centering
\includegraphics[width=8cm]{RhoExp.pdf}
\caption{ $\rho$ = Re/Im for proton-proton scattering
versus $W =\sqrt{s}$. Data are taken from PDG \cite{PDG} and
from the TOTEM papers \cite{TOTEMRHO1,TOTEMRHO2,TOTEMRHO3,TOTEMRHO4}. The
solid line shows the
predictions of our model, while the dashed line presents the estimates
for the value of $\rho$ when an odderon contribution of $4\,mb$
at W = 13 TeV is added to our model. }
\label{rhoexp}
\end{figure}
This fact is
in striking contrast to our estimates for the CGC-based
model\cite{GLP}.
The reason for this difference is that the elastic scattering amplitude
in our eikonal model does not reach the unitarity limit ($A_{el}\Lb
W, b=0\right)=1$), even at very high energies. The contrast turns out
to be more pronounced in the two channel model\cite{GLPPM2}, which is shown
by the dashed lines in \fig{rho}. However, we need to point out here
that
the
comparison with the experimental data in the two channel model is worse
than in the eikonal one, especially for $\sigma_{el}$.
Our attempt to describe the $t$-dependence of the elastic
cross section shows that we can reproduce the main features of
the $t$-dependence that are measured experimentally: the slope
of the elastic cross section at small $t$; the existence of the
minimum in the $t$-dependence, which is located at $|t|_{min} = 0.52\,GeV^2$
at W = 7 TeV; and the behaviour of the cross section at $|t|\,>\,|t|_{min}$.
It should be stressed that
we do not use any of the simplified approaches to estimate the
real part of
the amplitude, since we have shown, in our model, that they do not
reproduce
the real part of the amplitude correctly at large $t$.
In our model the real
part turns out to be much smaller
than the experimental one. Consequently, to achieve a description of the
data, it is necessary to add an
odderon
contribution. Hence, our model
corroborates the conclusion of Ref.\cite{ODDSC}.
Summarizing, in this paper we have presented estimates
resulting from a simple eikonal model,
which provides a fair description of the data on total and elastic cross
sections.
We are aware that this is a simplified approach, which could be a good
first approximation, but we need to go further. We plan to
develop a model which will also describe diffractive production.
We have made a first effort \cite{GLPPM2}, but we consider it
not very successful, and we need to continue our search. We also
plan to re-visit our model based on the CGC approach \cite{GLP}, with
the goal of improving it so that we can also introduce
the Odderon contribution. We believe that we can base these attempts
on the results for the QCD Odderon \cite{KOLEB,BLV,KS,LRRW,YHH,CLMS}.
{\it Acknowledgements.} \\
We thank our colleagues at Tel Aviv University and UTFSM for
encouraging discussions. Our special thanks go to
Tamas Cs\"org\H o and Jan Kasper for discussion of the odderon contribution
and elastic scattering during the Low x'2019 WS.
This research was supported by
CONICYT PIA/BASAL FB0821 (Chile) and Fondecyt (Chile) grants
1170319 and 1180118.
|
1,116,691,497,914 | arxiv | \section*{Introduction}
With the rapid development of the internet and network science, the explosive growth of information volumes in different fields is accompanied by an urgent need in scientific research and industry for improved data processing capabilities. Complex networks, which take big data and the complex associations among data as the research object, are the basic algorithmic framework for modeling data\cite{complex_network, big_data}. Link prediction in complex networks has been widely regarded as one of the most interesting problems in the information field\cite{lp_df}.
The mission of link prediction is to predict the connection possibility of nodes that are not yet connected in the network, including the recovery of missing links and the formation of future links. The major difference between the aforementioned tasks is that the latter mainly focuses on dynamic networks, in which links emerge at different times.
For example, for protein networks as static data\cite{pp_net}, due to the insufficiency of our empirical knowledge, the prediction of an interaction between two proteins can be thought of as the restoration of missing links. Static link prediction focuses on the completeness of the graph, while dynamic link prediction mainly predicts the formation of future links in order to simulate network evolution. It is well established that networks are highly dynamic objects with inherent dynamic properties\cite{dynamic_properties}. Temporal link prediction aims to capture those properties in dynamic networks. It intends to extract the implicit driving force in the network and achieve the goal of network evolution analysis\cite{dynamic_lp}. Its most important application is in recommender systems\cite{recommender}, which have been widely used in many fields, such as e-commerce and social networks\cite{electronic_commerce, social_network}.
Among all successful link prediction methods, the similarity method is one of the most commonly used. However, traditional similarity methods solely consider the current static state of the network, such as its topology, while ignoring the temporal evolution pattern of complex networks\cite{similarity}. This type of method is not suitable for temporal networks, where edges are annotated with timestamps indicating the time they emerged. With the increasing demands of various applications in temporal networks, it is imperative to design a general temporal network link prediction method that effectively captures the temporal characteristics of network evolution. Several temporal link prediction methods have attempted to couple spatial and temporal information. LIST\cite{LIST} characterized the network dynamics as a function of time, using a matrix factorization technique that integrated the spatial topology of the network at each timestamp. By extracting target link features, SSF\cite{SSF} used an exponential function to specify the influence of historical edges, and then combined it with the network structure to acquire the predictions.
However, temporal link prediction methods based on exponential decay ignore the life cycle of information: newly added edges in the network remain active for a certain period of time, after which the link information decays to a stable state. Besides, many real-world networks are sparse, and a majority of existing structure-based similarity methods are common-neighbor related\cite{CN,CAR,CCLP,JI,RA},
which might lower the performance of these methods. In addition, due to the irregular connection characteristics of the network, each node has its unique local topology. Therefore, when considering the local structure of the target link, the high-order structure of the two endpoints should also be reflected.
To address these issues, we first utilize the characteristics of the sigmoid function to systematically modify the demerits of the exponential function\cite{Sigmoid}. We propose the adjusted sigmoid function (\textit{ASF}) to quantify temporal information based on the simplified life cycle of information.
Then, owing to the powerful mathematical representation of simplex in algebraic topology, we come up with hidden node set and latent matrix sequence, which solve the dilemma that some node pairs do not have common neighbors due to network sparsity.
Finally, considering the endpoint asymmetry together with the simplex structure, the method fully represents the topology information surrounding the target links. Combining these ingredients to achieve consistency of temporal and structural information, the link prediction model TLPSS is proposed for general dynamic networks. The main contributions of this paper are as follows:
\begin{itemize}
\item Based on the active, decay and stable states of information, we propose a new time decay mode, \textit{ASF}, which adequately considers the decay time and rate of different network information.
\item We define the latent matrix sequence composed of simplex high-order structures to reveal the spatial topology of the network. The richer high-order topological information in latent edges alleviates the problem that traditional similarity methods are affected by the lack of common neighbors due to the sparsity of the network.
\item Coupling temporal and structural information, we introduce a temporal link prediction metric TLPSS induced by the hidden node asymmetry, and it is consistently feasible for various dynamic networks.
\item We evaluate the TLPSS model on six real-world datasets and show that it outperforms other baseline models, which demonstrates the effectiveness of our proposed method.
\end{itemize}
\section*{Problem Description}
A dynamic network is defined as a graph $G_t = (V^t,E^t)$, where $V^t$ is the node set and $E^t$ is the set of links. A temporal link is denoted by $e^t(u,v)$, which means that the nodes $u$ and $v$ are connected at time $t\in \{1, 2, ..., T\}$. Since this paper focuses on link prediction, we only consider the change of edge connections with time, and fix the node set at different times as $V$. Note that node pairs are allowed to have multiple edges generated at different timestamps, and only undirected networks are considered in this paper.
For temporal link prediction, a temporal network $G_t$ can be divided into a series of snapshots $G_t = \{G^1, G^2, ..., G^T\}$ at discrete time frames $\{1, ..., T\}$. For $t,s\in T$, $t < s$, $G^t$ can be regarded as the historical information of $G^s$; they are strongly correlated and governed by the same evolution mechanism. When a set of network snapshots is given within the time period $[1, T]$, the temporal link prediction method aims to learn a function $f(\cdot)$ to predict the set of edges $E^{T+1} = \{e^t(u,v)| u,v \in V, t = T+1 \}$ created at time $T+1$. The problem is illustrated in Fig. \ref{diagram}. Main notations in this paper are introduced in Tab. \ref{notations} for future reference.
\begin{figure}[hp]
\centering
\includegraphics[width=15 cm]{temporal--lp.pdf}
\caption{Schematic diagram of temporal link prediction.}
\label{diagram}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{Notation} & \textbf{Description} \\ \hline
\textbf{$G^t$} & graph snapshot at timestamp $t$ \\ \hline
\textbf{$A^t$} & adjacency matrix at timestamp $t$ \\ \hline
\textbf{$B^t$} & latent matrix at timestamp $t$ \\ \hline
\textbf{$D^t$} & degree matrix at timestamp
$t$ \\ \hline
$V$ & node set \\ \hline
$E^t$ & edge set at timestamp $t$ \\ \hline
$(x\sim h)$ & latent edge, $x$ and $h$ are nodes \\ \hline
$p,q$ & hyperparameters in adjusted sigmoid function \\ \hline
\end{tabular}
\caption{Main notations.}
\label{notations}
\end{table}
\section*{Literature Review}
A large number of static link prediction methods have been proposed, and these methods can be divided into three categories\cite{lp_three_category}. The first category is the method based on probability statistics\cite{probability}. The basic idea of these methods is to build a parametric probabilistic model and use optimization strategies such as maximum likelihood estimation to find the optimal parameters. This type of model usually acquires satisfying results by assuming that the network has a known structure or obeys a specific distribution. But the great computational cost usually makes them unsuitable for large-scale networks; representative works include \cite{probability-lp1,probability-lp2,probability-lp3}. The second category is machine learning-based methods. The link prediction problem in the network can be regarded as a classification problem in machine learning\cite{machine_learning}, and the related methods work on massive training data to achieve high prediction accuracy in large-scale networks, though explainable features are difficult to extract. Furthermore, inspired by the superiority of deep learning and graph representation learning in capturing node feature representations\cite{graph_represent}, the link prediction task can be transformed into computing distances between nodes to reveal the underlying correlation. The advantage of this type of method is that, with the iterative update of representation learning algorithms such as deepwalk\cite{deepwalk}, node2vec\cite{node2vec} and their derivatives, the link prediction accuracy can be gradually improved, but the prediction mechanism is difficult to explain in an explicit way. The third category is the similarity-based method\cite{similarity}, which is based on the assumption that the connection probability of nodes is positively correlated with the similarity between them\cite{similarity_assumption, assumption2}. Such methods assign a score to each pair of nodes by defining a similarity function, and higher scored node pairs will have more potential to be linked together.
Recently, more complicated metrics based on temporal and structural information have been proposed for link prediction. Yu et al.\cite{LIST} proposed the link prediction model LIST with a spatio-temporal consistency rule, which described the dynamic characteristics of the network as a function of time, and integrated the spatial topology of the network at each time point with the temporal characteristics of network evolution. Chen et al.\cite{STEP} proposed the temporal link prediction model STEP, which integrated structural and temporal information, and transformed the link prediction problem into a regularized optimization problem by constructing a sequence of high-order matrices to capture the implicit relationships of node pairs. Li et al.\cite{SSF} proposed a structure subgraph feature model SSF based on link feature extraction. This method effectively represented the topological features around the target link and normalized the influence of multiple links and different timestamps in structural subgraphs.
Complex networks have become the dominant paradigm for the dynamic modeling of interacting systems. However, networks are inherently limited to describing the interactions of node pairs, while real-world complex systems are often characterized by high-order structures. Furthermore, structure-based interaction behaviors take place among more than two nodes at once \cite{PNAS-LP}. Empirical studies reveal that high-order interactions are ubiquitous in many complex systems, and such community behaviors play key roles in the physiological, biological, and neurological fields. In addition, high-order interactions can greatly affect the dynamics of networked systems, ranging from diffusion\cite{diffusion} and synchronization\cite{synchronization} to social evolution processes\cite{social_evolutionary}, and may lead to the emergence of explosive transitions between node states. For a deeper understanding of the network pattern structure, we can model it via set systems from the perspective of algebraic topology. For example, high-order structures such as hypergraphs and simplicial complexes are better tools for characterizing many social and biological systems\cite{hyper_and_Simplex}. In addition to recognizing the high-order structure in the network, it is important to measure the interaction information of the different structures. Applying the simplex structure to complex networks, owing to its powerful mathematical representation, and fully quantifying the structural interaction information is the key to the performance of link prediction.
\section*{Methods}
\subsection*{Time Decay Function}
The most crucial part of the temporal network link prediction task is to effectively process historical information. Based on the accumulation of historical data, network evolution analysis pays attention to the overall changes of the network and the complex behavior of dynamic networks. Similarly, the purpose of link prediction is to understand how these characteristics will evolve over time. Link prediction makes use of temporal information to reveal the relationship between the current state of the network and its most recent states. The basic principle of dynamic link prediction is temporal smoothing\cite{time_smoothing}, which assumes that, in general, the current state of a network should not change dramatically from its most recent history. Several researchers adopt the exponential function as the time decay function\cite{SSF,LIST},
\begin{equation}
f(s,t) = e^{-\theta (t - s)},
\end{equation}
which gives the remaining influence at present time $t$ of a historical link $l$ with timestamp $s$; $\theta \in(0,1)$ is a damping factor that controls the speed of decay, and, as a parameter, $\theta$ needs to be pre-learned.
Choosing the exponential function as the time decay function has improved some link prediction algorithms\cite{SSF,LIST}, and it can be regarded as one mode of information decay. Besides, scholars have long discussed society as an information- and knowledge-based society. By giving an insight into them, information resources can be clarified with the life cycle model\cite{life_cycle_1,life_cycle_2,life_cycle_3}. The life cycle phases consist of generation, institutionalization, maintenance, enhancement, and distribution. Inspired by this theory, we assume that newly generated edges tend to remain active for a certain period of time, and then decay to a stable state. For example, consider the 2022 Grammy song of the year \textit{Leave the door open}: its Billboard chart history follows the above hypothesis, in that it remained popular (top five) for 14 weeks, then gradually lost its position in the following 25 weeks, and finally fell off the chart in week number 40. Based on this assumption, we find that the sigmoid function used in neural networks\cite{Sigmoid} possesses such properties. By parameterizing the sigmoid function, we obtain the adjusted sigmoid function (\textit{ASF}), which satisfies the assumption, as the temporal information decay function. It can be divided into an active state, a decay state and a stable state. The formal definition of \textit{ASF} is as follows.
\begin{equation}
ASF(x) = \frac{\frac{1}{1 + exp\{x/p - a\}} + q}{q+1},
\end{equation}
in which the parameter $p$ represents the active period of the information; an increase in $p$ means that the information stays active for a longer time. The parameter $q$ controls the decay range of the information; a larger $q$ means that the lower bound of the link information gets greater.
As shown in Fig. \ref{ASF}, the influence of the parameter $p$ is mainly reflected in the first stage of the \textit{ASF} function. Compared with the upper right panel, the lower right one has a longer active time owing to the larger $p$. Besides, the role of the parameter $q$ is reflected in the value range of the \textit{ASF} function. A comparison of the upper right and lower left panels indicates that more information is retained in the stable state for larger $q$. Unlike the former parameters, the role of the parameter $a$ is only to fix the position of \textit{ASF}; in the experiments, we set $a = 5$. It is obvious that the lower bound of \textit{ASF} is $q/(q+1)$, which means that the remaining temporal information of all links in the network is greater than this value. The sigmoid function and the variation of \textit{ASF} with its parameters are illustrated in Fig. \ref{ASF}.
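For reference, \textit{ASF} is straightforward to implement; a minimal sketch in Python follows (the parameter values are illustrative only).
\begin{verbatim}
import numpy as np

def asf(x, p, q, a=5.0):
    # Eq. (2): active plateau for x << a*p, decay around x ~ a*p,
    # and a stable floor q/(q+1) for x >> a*p
    return (1.0 / (1.0 + np.exp(x / p - a)) + q) / (q + 1.0)

ages = np.array([0.0, 5.0, 20.0, 100.0])  # link ages (time units)
print(asf(ages, p=2.0, q=0.25))           # decays toward q/(q+1) = 0.2
\end{verbatim}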
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{sigmoid_asf.pdf}
\caption{Sigmoid and ASF function. The upper left figure shows the original sigmoid function. The comparison of the upper right and lower left figures shows that the more information is remained with the larger parameter $q$. The comparison of the upper right and lower right figures shows that the active period of link information is determined by the parameter $p$.
}
\label{ASF}
\end{figure}
\subsection*{Simplex Structure in Link Prediction}
The basic premise of the network model is to represent the elements of a system as nodes and to use links to capture the pairwise relationships. High-order interactions mean that more than two entities are involved at once\cite{PNAS-LP}, which is ubiquitous in various scenes\cite{h-o_scene1,h-o_scene2,h-o_scene3}. Capturing and characterizing high-order structures in networks is helpful for revealing network evolution mechanisms. Motivated by the significance of triangular structures in network clustering and the theory of triadic closure in social networks\cite{triadic_closure}, we employ this theory via increasing structural order. Similar to the definition in algebraic topology, a set of $k$ nodes is called a $(k-1)$-simplex, and the set with all possible connections is called a $k$-clique in graph theory. Likewise, a simplicial complex is a finite set of simplices\cite{hyper_and_Simplex, PNAS-LP}. As shown in Fig. \ref{simplex}, 0-simplices are nodes, 1-simplices are edges, and 2-simplices are triangles. The simplicial complex $J$ is composed of two 2-simplices.
\begin{figure}[ht]
\centering
\includegraphics[width=12 cm]{simplex1.pdf}
\caption{Examples of different types of simplex structures in networks.}
\label{simplex}
\end{figure}
Researchers apply high-order structures to link prediction to capture the topology information around the target link. For example, the similarity metrics CAR\cite{CAR} and CCLP\cite{CCLP} use the triangle structure, which gives some insight into the mechanism of high-order interaction. But such methods mainly focus on the quantity information of triangles, while ignoring the interaction information between different structures. The simplex has been used in the study of complex dynamical systems due to its powerful mathematical representation\cite{hyper_and_Simplex}.
In this paper, we introduce the concept of the latent edge; its detailed definition is given in the following subsection. Thus, the 2-simplices around the target link form a simplicial complex structure. We measure the interaction information of these two 2-simplices to capture the local topology structure around the target link.
\subsection*{Proposed Algorithm}
Since the dynamic network $G_t$ consists of a series of snapshots, the adjacency matrix sequence can be expressed as $A_t =\{A^1, A^2, ..., A^T\}$, $A^t = \left[a^t_{i,j}\right]_{N\times N}$, $a^t_{i,j}\in [0,1]$, where $N = |V|$ is the total number of nodes. Given a time $t$, $a^t_{i,j}\neq 0$ means that node $i$ and node $j$ are connected, and the value is the quantification of the corresponding time information by the \textit{ASF} function. The smaller the value is, the earlier the edge was generated; $a^t_{i,j}= 0$ means that node $i$ and node $j$ are unconnected.
In general networks, the degree of a node is defined as the number of its neighbor nodes. In our study, the elements in the adjacency matrix are no longer just 0 or 1, so the degree of a node is no longer an integer but a continuous number. The adjacency matrix can be regarded as a weighted matrix, and the weighted adjacency matrix is different at each moment. Therefore, based on the adjacency matrix sequence, the node degree information also varies over time. The formal definition of the degree matrix sequence (\textit{DMS}) is as follows.
\textbf{Definition 1.} (Degree Matrix Sequence) Given the adjacency matrix sequence, the degree information of nodes changes as the network evolves, and it can be calculated from the adjacency matrix at the corresponding snapshots. We can define \textit{DMS} as $D_t = \{D^1, D^2, ..., D^T \}$, and each degree matrix is obtained by the following calculation formula.
\begin{equation}
D^t = [w^t(v)]_{N\times 1}, w^t(v) = \sum\limits_{z \in \Gamma(v)}A^t(v,z),
\end{equation}
in which $\Gamma(v)$ is the set of neighbors of node $v \in V$.
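In matrix form, each $D^t$ is simply a row sum of the ASF-weighted adjacency matrix; a one-line sketch (Python with NumPy, dense matrices assumed) is:
\begin{verbatim}
import numpy as np

def degree_matrix(A_t):
    # Definition 1: w^t(v) = sum_z A^t(v, z) over the weighted row
    return A_t.sum(axis=1)
\end{verbatim}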
The core of link prediction is correlation analysis, which reveals the intrinsic similarity between objects. A higher score of the link prediction metric indicates a higher probability of forming a link. Methods based on node centrality or common neighbors and their relevant variants have indeed achieved good results\cite{CI,CN,JI}. For example, the Resource Allocation index (RA) \cite{RA} considers that each node in the network has a certain amount of resources and distributes the resources equally to its neighbors. Besides, the RA index shows good performance with low time complexity and high accuracy on some datasets. The formula of the RA index is as follows.
\begin{equation}
RA(x,y) = \sum\limits_{z \in \Gamma(x) \cap \Gamma(y)}\frac{1}{|\Gamma(z)|}.
\end{equation}
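In code, the RA index amounts to a sum over common neighbors; a minimal sketch (Python with NetworkX assumed, unweighted graph) is:
\begin{verbatim}
import networkx as nx

def ra_index(G, x, y):
    # Eq. (4): each common neighbor z contributes 1 / |Gamma(z)|
    common = set(G[x]) & set(G[y])
    return sum(1.0 / G.degree(z) for z in common)
\end{verbatim}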
However, this method only considers the transmission of resources through common-neighbor paths, while ignoring the potential resources transmitted through local paths between the two endpoints.
As shown in Fig. \ref{hidden nodes}, for example, the RA index only uses the 2-simplices $\{x, z_i, y\}, i = 1, 2, 3$, which ignores the importance of the neighbor nodes of $y$ that are not directly connected to $x$. However, these nodes participate in forming the high-order structure $J = \{x, k_i, h, y\}, i = 1, 2$ around the target link. It is assumed that the resources of a node are allocated to its neighbors according to the importance of the nodes.
The role of common neighbors in information transmission is important. But we should also pay attention to those nodes that are only directly connected to one endpoint of the target link, such as node $h$. Combining the above analysis, we define the hidden node set (\textit{HNS}), which is crucial in information transmission, for the endpoints.\\
\begin{figure}[ht]
\centering
\includegraphics[width=10cm]{hidden_node.pdf}
\caption{Schematic diagram of the topology around the target link. In this figure, the node pair $x$ and $y$ is to be predicted, $\{z_1, z_2, z_3\}$ are their common neighbors, and they form 2-simplices $\{x, z_i, y\}, i=1,2,3$. $h$ is the hidden node of endpoint $x$, and $(x\sim h)$ is the latent edge. The simplicial complex is composed by two 2-simplices $\{x, k_1, h\}$ and $\{x, h, y\}$. Symmetrically, $k_1$ and $k_2$ are hidden nodes of node $y$.}
\label{hidden nodes}
\end{figure}
\textbf{Definition 2.} (Hidden Node Set) For each endpoint of a target link, we define its hidden nodes as the nodes that are connected to the opposite endpoint and to at least one neighbor of the endpoint itself. Given a node pair $x$ and $y$, the \textit{HNS} of endpoint $x$ can be formulated as follows,
\begin{equation}
H_x = \{h | h \in \Gamma(y), h\notin \Gamma(x), \Gamma(x) \cap \Gamma(h)\neq \varnothing \}.
\end{equation}
Vice versa, for endpoint $y$,
\begin{equation}
H_y = \{h | h \in \Gamma(x), h\notin \Gamma(y), \Gamma(y) \cap \Gamma(h)\neq \varnothing \}.
\end{equation}
Based on the definition of \textit{HNS}, we can divide the neighbors of an endpoint into three categories according to their topological significance: the first is the common neighbors, the second is the hidden nodes, and the third is the remaining nodes. The consideration of hidden nodes makes the link prediction method take higher-order structure into account, beyond traditional common-neighbor-based similarity methods.
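The two hidden node sets of Definition 2 can be extracted directly from the neighbor sets; a sketch (Python; \texttt{G} can be a NetworkX graph or a plain adjacency dictionary) is:
\begin{verbatim}
def hidden_nodes(G, x, y):
    # Definition 2: H_x = {h in Gamma(y): h not in Gamma(x),
    #                      Gamma(x) & Gamma(h) nonempty}; H_y symmetric
    Nx, Ny = set(G[x]), set(G[y])
    Hx = {h for h in Ny - Nx if h != x and set(G[h]) & Nx}
    Hy = {h for h in Nx - Ny if h != y and set(G[h]) & Ny}
    return Hx, Hy
\end{verbatim}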
With the help of hidden nodes, we assume that there is a high probability that a hidden node will be connected to the endpoint. Therefore, we call such an edge, which is temporarily unconnected in the network but carries target link information, a latent edge; obviously, it is composed of an endpoint and a hidden node. Hidden nodes and latent edges play an important role in improving link prediction performance because they participate in forming a simplicial complex structure around the target link. As shown in Fig. \ref{hidden nodes}, the simplicial complex structure $J$ is composed of two 2-simplices $\{x, k_1, h\}$ and $\{x, h, y\}$. Besides, their intersection is the latent edge, which contains certain information about the endpoints. We give the formal definition of the latent edge (\textit{LE}) as follows.
\textbf{Definition 3.} (Latent Edge) The latent edge represents the intersection of two 2-simplices in a simplicial complex structure of the target link. As shown in Fig. \ref{hidden nodes}, the latent edge is denoted by $(x\sim h)$, in which $x$ is the endpoint and $h$ is the hidden node.
Moreover, the \textit{LE} effectively reveals the spatial topology information of the target link. It can be seen that the latent edges contain non-negligible topological information about the target node pair, but the quantification of their significance remains unsolved. Here we define the quantification strategy for such edges through the definition of the latent matrix sequence (\textit{LMS}) as follows.
\textbf{Definition 4.} (Latent Matrix Sequence) The connection state of the network at each snapshot can be represented by the adjacency matrix sequence $A_t =\{A^1, A^2, ..., A^T\}$. We define the latent matrix sequence as $B_t =\{B^1, B^2, ..., B^T\}$; the elements of $B^t$ are the latent edge weights. Latent edges use the simplicial complex structure to fully consider the information transmission between the endpoints. The value of a latent edge is calculated as follows.
\begin{gather}
B^t(i,j) = \inf\{ASF\} \cdot \mathrm{SF}(i,j) \label{latent matrix},\\
\inf\{ASF\} = \frac{q}{q+1} \label{inf},\\
\mathrm{SF}(i,j) = \frac{1}{\min\{d(i), d(j)\}} \cdot \sum\limits_{z \in \Gamma(i) \cap \Gamma(j)}\frac{A^t(i,z)+A^t(z,j)}{m(i,z)+m(z,j)}, \label{scale factor}
\end{gather}
where $\mathrm{SF}(i,j)$ denotes the scale factor, $m(i,z)$ is the number of multi-edges between node $i$ and node $z$ created at different timestamps, $A^t(i,j) = 0$ (latent weights are only assigned to node pairs that are not directly connected), and $d(i)$ is the degree of a node in the traditional sense.
These operations ensure that the weight of a latent edge is less than the weight of any existing edge in the network, as we briefly prove below.
\begin{proof}
\vspace{-0.36cm}
The first factor $\inf\{ASF\}$ in Eq. \ref{latent matrix} is the lower bound of the \textit{ASF} function, and it quantifies the temporal information of the connected edges in the network.
In Eq. \ref{scale factor}, the numerator of each summand is at most its denominator, so the sum is at most $|\Gamma(i) \cap \Gamma(j)|$. Since $|\Gamma(i) \cap \Gamma(j)| \leqslant \min\{d(i), d(j)\}$, we obtain $\mathrm{SF}(i,j) \leqslant 1$. The product of the two factors in Eq. \ref{latent matrix} therefore ensures that the weight of a latent edge is at most $\inf\{ASF\}$, and thus less than the weight of any existing edge in the network.
\vspace{-0.40cm}
\end{proof}
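The computation of a latent weight can be sketched as follows; the sparse dictionary representation is an assumption for illustration, with the decayed weights $A^t$, multi-edge counts $m$ and degrees $d$ supplied as inputs.
\begin{verbatim}
# Hedged sketch of Eqs. (latent matrix)-(scale factor). A[i][z] holds the
# ASF-decayed weight between i and z (symmetric storage assumed), m[i][z]
# the multi-edge count, deg[i] the ordinary degree, q the ASF parameter.
def latent_weight(A, m, deg, i, j, q):
    common = A[i].keys() & A[j].keys()     # Gamma(i) intersect Gamma(j)
    if not common:                         # no 2-simplex support: weight 0
        return 0.0
    inf_asf = q / (q + 1)                  # lower bound of ASF
    sf = sum((A[i][z] + A[j][z]) / (m[i][z] + m[j][z])
             for z in common) / min(deg[i], deg[j])
    return inf_asf * sf                    # stays below inf{ASF} <= any edge
\end{verbatim}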
The \textit{LMS} further characterizes the topological information around the target link by using simplicial complex structures. Moreover, since the hidden node sets of the two endpoints generally differ completely, it is important to introduce endpoint-asymmetric topological information into the link prediction mechanism.
After the above analysis, based on the network history from time $1$ to $T$, we can predict the generation of new links at time $T+1$. By mixing the 2-simplex information in the adjacency matrix and the latent matrix of each endpoint, we obtain the endpoint similarity scores. For endpoint $x$,
\begin{equation}
\label{s_xy}
score(x\rightarrow y)=\sum\limits_{z_i \in \Gamma(x) \cap \Gamma(y)}\frac{A^T(x,z_i)}{w^T(z_i)} + \sum\limits_{z_i \in H_x}\frac{B^T(x,z_i)}{w^T(z_i)}.
\end{equation}
Similarly, for endpoint $y$,
\begin{equation}
\label{s_yx}
score(y\rightarrow x)=\sum\limits_{z_i \in \Gamma(x) \cap \Gamma(y)}\frac{A^T(y,z_i)}{w^T(z_i)} + \sum\limits_{z_i \in H_y}\frac{B^T(y,z_i)}{w^T(z_i)}.
\end{equation}
Finally, we obtain the temporal link prediction method TLPSS, which integrates 2-simplex topological structures, endpoint asymmetry and the \textit{ASF} time decay paradigm,
\begin{equation}
TLPSS(x,y)=\frac{1}{2}(score(x\rightarrow y)+score(y\rightarrow x)).
\end{equation}
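A minimal Python sketch of Eqs. \ref{s_xy}--\ref{s_yx} and the combined score follows; the dict-of-dicts matrices and precomputed neighbor sets are our own illustrative assumptions.
\begin{verbatim}
# A and B are the decayed adjacency and latent matrices at snapshot T,
# w[z] is the weighted degree w^T(z); common = Gamma(x) & Gamma(y), and
# H_x, H_y are the hidden node sets of Definition 2.
def directed_score(A, B, w, x, common, hidden):
    s = sum(A[x][z] / w[z] for z in common)            # real 2-simplices
    s += sum(B[x].get(z, 0.0) / w[z] for z in hidden)  # latent 2-simplices
    return s

def tlpss(A, B, w, x, y, common, H_x, H_y):
    return 0.5 * (directed_score(A, B, w, x, common, H_x)
                  + directed_score(A, B, w, y, common, H_y))
\end{verbatim}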
\begin{figure}[ht]
\centering
\includegraphics[width=15 cm]{model_frame.pdf}
\caption{Diagram of the proposed TLPSS model. The model consists of pre-processing, graph construction and prediction steps. In the first step, the data are processed and decayed by \textit{ASF}. Then, from the network snapshots, we obtain the adjacency matrix sequence and the latent matrix sequence. Finally, coupling the temporal and structural information, the temporal link prediction method TLPSS is obtained.}
\label{model_frame}
\end{figure}
Based on the above analysis, the proposed algorithm can be divided into three steps, shown schematically in Fig. \ref{model_frame}. First, the data must be pre-processed, because real data are always noisy: we remove records with missing temporal information and sort the rest according to time evolution; for large-scale networks, a subgraph extraction strategy is used to reduce the computational cost; link information then decays with historical time according to \textit{ASF}. Second, from the processed data we construct the weighted adjacency matrices at the different timestamps and, on this basis, the latent matrices. Third, taking node asymmetry into account, the data are fed into our link prediction model to evaluate the generation of new links in the next time period. A skeleton of these steps is sketched below.
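\begin{verbatim}
# Skeleton of the TLPSS pipeline (pre-process, construct, predict). Every
# helper name is a placeholder standing in for the components sketched
# earlier in this section, not an interface from our implementation.
def tlpss_pipeline(events, T, p, q):
    # Step 1: drop records with missing timestamps and sort by time; for
    # large networks a subgraph would be extracted first. ASF decay with
    # parameters (p, q) is applied when the snapshots are built.
    events = sorted((e for e in events if e[2] is not None),
                    key=lambda e: e[2])           # e = (u, v, timestamp)
    A = build_decayed_snapshots(events, T, p, q)  # placeholder: A^1..A^T
    # Step 2: latent matrix of the last snapshot (uses latent_weight above).
    B = build_latent_matrix(A[T], q)
    # Step 3: score every candidate (unconnected) pair for time T+1.
    w = weighted_degrees(A[T])
    return {(x, y): tlpss(A[T], B, w, x, y,
                          A[T][x].keys() & A[T][y].keys(),
                          hidden_node_set(neighbor_sets(A[T]), x, y),
                          hidden_node_set(neighbor_sets(A[T]), y, x))
            for (x, y) in candidate_pairs(A[T])}
\end{verbatim}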
\section*{Experimental Setting}
In this section, we conduct experiments to evaluate the effectiveness of the proposed approach on six real-world datasets for link prediction tasks, and compare its performance with six baseline algorithms. First, we briefly introduce the datasets, which come from different domains.
\subsection*{Datasets Description}
\begin{itemize}
\item \textbf{Contact}\cite{contact}: This network represents contacts between people, which is measured by carried wireless devices. Each node represents a person, and an edge between two persons shows that there was a contact between them.
\item \textbf{DBLP}\cite{DBLP}: This is the citation network of DBLP, a database of scientific publications such as papers and books. Each node in the network is a publication, and each edge represents a citation of a publication by another publication.
\item \textbf{Digg}\cite{Digg}: This is the reply network of the social news website Digg. Each node in the network is a user of the website, and each edge denotes that one user replied to another user.
\item \textbf{Enron}\cite{Enron}: The Enron email network consists of emails sent between employees of Enron. Nodes in the network are individual employees and edges are individual emails. It is possible to send an email to oneself, and thus this network contains loops.
\item \textbf{Facebook}\cite{Facebook}: This network contains friendship data of Facebook users. A node represents a user and an edge represents a friendship between two users.
\item \textbf{Prosper}\cite{Prosper}: This network represents loans between members of the peer-to-peer lending network Prosper.com. The network is directed from lender to borrower. Each edge is tagged with the timestamp at which the loan occurred.
\end{itemize}
All of these datasets are dynamic networks, i.e., each edge is annotated with a timestamp showing its formation time. Since our main concern is whether there will be an edge between two nodes, edge direction is ignored in the experiments. Tab. \ref{data info} shows the main statistics of these datasets. \textit{Total duration} is the length of the time span of the dynamic network; \textit{h, d, w, m} and \textit{y} stand for \textit{hour, day, week, month} and \textit{year} respectively. \textit{Snapshot number} denotes the number of snapshots, i.e., the total duration divided by the time-information decay period, which is determined by the edge distribution of each dataset.
Besides, we normalize the time attribute so that the network timestamps start from 1. In the link prediction evaluation stage, the set of existing links $E_t$ is divided into a train set $E(T)$ and a test set $E(P)$ according to time evolution, with a ratio of about 9:1, as sketched below.
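\begin{verbatim}
# Temporal 9:1 split: order edges by (normalized) timestamp and hold out
# the most recent ~10% as the test set E(P). Edges are assumed to be
# (u, v, timestamp) triples.
def temporal_split(edges, train_ratio=0.9):
    edges = sorted(edges, key=lambda e: e[2])
    cut = int(len(edges) * train_ratio)
    return edges[:cut], edges[cut:]            # E(T), E(P)
\end{verbatim}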
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{Dataset} & Node number & Edge number & Ave. Degree & Start date & End date & Total duration & Snapshot number \\ \hline
\textbf{Contact} & 273 & 28227 & 206.78 & 1970/1/1 & 1970/1/4 & 70h & 70/h \\ \hline
\textbf{DBLP} & 1169 & 10667 & 18.24 & 1986/1/1 & 1996/1/1 & 10y & 10/y \\ \hline
\textbf{Digg} & 3159 & 17661 & 11.18 & 2008/11/3 & 2008/11/11 & 8d & 192/h \\ \hline
\textbf{Enron} & 883 & 31092 & 70.42 & 2000/2/15 & 2000/6/14 & 4m & 17/w \\ \hline
\textbf{Facebook} & 3877 & 30480 & 15.72 & 2007/11/30 & 2008/8/26 & 9m & 270/d \\ \hline
\textbf{Prosper} & 2561 & 46540 & 36.34 & 2006/10/10 & 2006/12/11 & 2m & 60/d \\ \hline
\end{tabular}
\caption{Network datasets statistics.}
\label{data info}
\end{table}
\subsection*{Baseline Methods and Evaluation Metrics}
\textbf{Baseline Methods.} We compare our proposed model TLPSS with the following link prediction methods. These methods are usually applied to static networks, but they can also be applied to time-varying networks by aggregating all edges from different timestamps into one network. We make improvements over the traditional baselines: the number of common neighbors of the target link and the number of triangular closures around them are weighted by the edge weights. The specific definitions are shown in Tab. \ref{baseline}, and a sketch of two of these indices follows the table.
\begin{table}[h]
\centering
\footnotesize
\begin{tabular}{|m{3cm}|m{7cm}|m{6cm}|}%
\hline
\textbf{Baseline Methods} &\textbf{Description} &\textbf{Definition} \\ \hline
Common Neighbors (CN) & The algorithm uses the number of common neighbors as an indicator to measure the possibility of establishing a link between two nodes \cite{CN}. & $CN\_ASF(x,y)=\frac{1}{2}\sum\limits_{z \in \Gamma(x) \cap \Gamma(y)}(A^T_{x,z}+A^T_{y,z})$ \\ \hline
Jaccard Index (JA) & This algorithm also evaluates the probability of connecting edges by measuring the number of common neighbors; it is the normalized version of $CN\_ASF$ \cite{JI}. & $JA\_ASF(x,y)=CN\_ASF(x,y)/(w^T(x)+w^T(y))$ \\ \hline
Preferential Attachment (PA) & In this algorithm, the probability that the target link is connected is proportional to the product of the degrees of the two endpoints; it is a hub-promoted method \cite{PA}. & $PA\_ASF(x,y)=w^T(x) \cdot w^T(y)$ \\ \hline
Resource Allocation (RA) & Common neighbors serve as a medium for resource transfer, and the weight of a common neighbor is inversely proportional to its degree \cite{RA}. & $RA\_ASF(x,y)=\sum\limits_{z \in \Gamma(x) \cap \Gamma(y)}\frac{1}{w^T(z)}$ \\ \hline
Cannistraci Alanis Ravasi (CAR) & The algorithm utilizes the links between common neighbors, along with common neighbor information, where $LCL'(x,y)$ is the total weight of links between common neighbors \cite{CAR}. & $CAR\_ASF(x,y)=CN\_ASF(x,y)\cdot LCL'(x,y)$ \\ \hline
Clustering Coefficient-based Index (CCLP) & This metric employs the clustering coefficient of common neighbors to reflect the density of triangles within a local network environment, where $\Delta'$ is the total weight of weighted triangles among common neighbors \cite{CCLP}. & $CCLP\_ASF(x,y)=\sum\limits_{z \in \Gamma(x) \cap \Gamma(y)}\frac{\Delta'}{d(z)\cdot (d(z)-1)/2}$ \\ \hline
\end{tabular}
\caption{Baseline link prediction methods for temporal networks.}
\label{baseline}
\end{table}
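As a hedged illustration, two of the indices in Tab. \ref{baseline} translate directly into code; the dict-of-dicts matrix representation is our own assumption.
\begin{verbatim}
# ASF-weighted Common Neighbors and Resource Allocation; A is the decayed
# adjacency matrix A^T and w the weighted degree w^T.
def cn_asf(A, x, y):
    common = A[x].keys() & A[y].keys()
    return 0.5 * sum(A[x][z] + A[y][z] for z in common)

def ra_asf(A, w, x, y):
    common = A[x].keys() & A[y].keys()
    return sum(1.0 / w[z] for z in common)
\end{verbatim}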
\noindent\textbf{Evaluation Metrics.} We use two commonly adopted evaluation metrics, AUC \cite{AUC} and precision \cite{precision}, to systematically evaluate the performance of the aforementioned methods. AUC can be interpreted as the probability that the similarity score of a randomly chosen new link is greater than that of a randomly chosen nonexistent link; a larger AUC value means better model performance. AUC measures the accuracy of an algorithm from a global perspective, while sometimes we care about how many positive items appear in the top part of a method's output. Precision considers whether the edges in the top-$L$ positions are accurately predicted. A minimal sketch of both metrics is given below.
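\begin{verbatim}
import random

# Sampled AUC: probability that a true new link outscores a randomly
# chosen nonexistent link; ties contribute 0.5 (a common convention we
# assume here).
def auc(score, positives, negatives, n_samples=10000):
    hits = 0.0
    for _ in range(n_samples):
        sp = score(random.choice(positives))
        sn = score(random.choice(negatives))
        hits += 1.0 if sp > sn else (0.5 if sp == sn else 0.0)
    return hits / n_samples

# Precision@L: fraction of the top-L ranked pairs that appear in the
# test set E(P).
def precision_at_L(ranked_pairs, test_set, L):
    return sum(pair in test_set for pair in ranked_pairs[:L]) / L
\end{verbatim}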
\section*{Results and Discussions}
In this section, we verify the effectiveness of TLPSS in real-world datasets with different evaluation metrics.
Tab. \ref{AUC performance} shows the AUC performance of the different approaches on six dynamic networks, with the best performance on each dataset highlighted. The proposed TLPSS model consistently outperforms all baselines across all six dynamic networks.
On average, TLPSS outperforms the other baseline methods by about 15\%; on the Digg and Prosper datasets in particular, our model leads by 20\% and 30\% respectively. TLPSS can be regarded as an asymmetric modification of RA if we remove the latent edge terms in Eq. \ref{s_xy} and Eq. \ref{s_yx}. The experimental results illustrate that capturing the local structure around each endpoint separately improves link prediction performance.
Besides, a further reason for the superiority of TLPSS is that considering latent edges addresses the cold-start problem of traditional common-neighbor-based link prediction methods, since the sparsity of the network may lead to a lack of common neighbors. In conclusion, the clear dominance of the TLPSS index indicates that a deep understanding of the life cycle of information and of topological information can be converted into an outstanding link prediction algorithm.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{AUC} & \textbf{CN} & \textbf{JA} & \textbf{PA} & \textbf{RA} & \textbf{CAR} & \textbf{CCLP} & \textbf{TLPSS} \\ \hline
\textbf{Contact} & 0.9525 & 0.8611 & 0.9020 & 0.9324 & 0.9334 & 0.8495 & \textbf{0.9751} \\ \hline
\textbf{DBLP} & 0.8627 & 0.8591 & 0.5201 & 0.8718 & 0.7748 & 0.8105 & \textbf{0.9183} \\ \hline
\textbf{Digg} & 0.6426 & 0.6445 & 0.5089 & 0.6472 & 0.6525 & 0.6119 & \textbf{0.8905} \\ \hline
\textbf{Enron} & 0.8872 & 0.8730 & 0.4681 & 0.8852 & 0.8745 & 0.8277 & \textbf{0.9014} \\ \hline
\textbf{Facebook} & 0.7505 & 0.7518 & 0.4731 & 0.7520 & 0.5798 & 0.7453 & \textbf{0.9093} \\ \hline
\textbf{Prosper} & 0.4018 & 0.4102 & 0.4639 & 0.3907 & 0.4993 & 0.4036 & \textbf{0.7493} \\ \hline
\end{tabular}
\caption{Comparison of the AUC value between TLPSS and baseline methods.}
\label{AUC performance}
\end{table}
Tab. \ref{precision performance} reports the precision values of TLPSS and the other similarity algorithms. Due to the different scales of the datasets, for \textit{Contact, DBLP, Digg, Enron, Facebook} and \textit{Prosper} we set $L = \{100, 100, 1000, 100, 1000, 2500\}$ respectively. The table shows that the proposed method TLPSS is superior to the other methods and provides the highest accuracy on most datasets. On the Contact dataset, several competing methods (CN, RA, CAR and CCLP) achieve comparable or better performance, which shows that in a densely connected network, triangular-closure-based methods such as CN are already sufficient. Besides, JA performs much worse than CN on the Contact dataset, which demonstrates that the normalization operation does not always help. To sum up, the TLPSS model has better accuracy on most sparse networks, which indicates that the consideration of hidden nodes and latent edges properly reveals the structural information around the target link.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{Precision} & \textbf{CN} & \textbf{JA} & \textbf{PA} & \textbf{RA} & \textbf{CAR} & \textbf{CCLP} & \textbf{TLPSS} \\ \hline
\textbf{Contact} & \textbf{0.9900} & 0.3700 & 0.6600 & 0.9700 & \textbf{0.9900} & 0.9600 & 0.9600 \\ \hline
\textbf{DBLP} & 0.2100 & 0.1700 & 0.0000 & 0.0210 & 0.1400 & 0.0800 & \textbf{0.3600} \\ \hline
\textbf{Digg} & 0.0670 & 0.0530 & 0.0000 & 0.0240 & 0.0830 & 0.0070 & \textbf{0.0910} \\ \hline
\textbf{Enron} & 0.6500 & 0.6500 & 0.0000 & 0.2400 & 0.6100 & 0.1900 & \textbf{0.6800} \\ \hline
\textbf{Facebook} & 0.0080 & 0.0090 & 0.0000 & 0.0030 & 0.0050 & 0.0080 & \textbf{0.0100} \\ \hline
\textbf{Prosper} & 0.0004 & 0.0008 & 0.0024 & 0.0004 & 0.0024 & 0.0028 & \textbf{0.0032} \\ \hline
\end{tabular}
\caption{Comparison of the Precision value between TLPSS and baseline methods.}
\label{precision performance}
\end{table}
\subsection*{Sensitivity Test of Parameter \textit{p} in ASF}
We first study the impact of different settings of the parameter $p$ in Eq. \ref{ASF}, with the parameter $q = 1$. Fig. \ref{comparision of p} shows the performance of the different methods for varying $p$ on six real-world datasets. For the \textit{Contact, DBLP, Digg, Enron, Facebook} and \textit{Prosper} datasets, we obtain optimal values of $p = \{3, 1, 10, 2.5, 5, 7\}$ respectively. Several interesting phenomena emerge. First, TLPSS outperforms all other methods in most cases, which can be interpreted as the consideration of hidden nodes and latent edges unveiling the spatial structure around the target link. Second, the performance of most methods drops quickly when $p$ takes a large value on the \textit{DBLP} and \textit{Digg} datasets; we hold that a large value of $p$ results in a longer decay time and hence inadequate utilization of temporal information, because the weights of new and historical edges become almost equal. Third, the optimal $p$ differs for each dataset, which indicates that \textit{ASF} can reveal the decay rate of datasets from different domains. Fourth, on the \textit{Digg} and \textit{Facebook} datasets, whose average degrees are low, the common-neighbor-based methods have similar performance.
Unlike other approaches, TLPSS takes historical temporal information and simplex structure into account and thus further improves the overall performance on temporal networks. The sweep itself is a plain grid search, as sketched below.
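\begin{verbatim}
# Grid search over the decay parameter p with q fixed at 1, mirroring the
# experiment of this subsection; evaluate_auc is a placeholder wrapping
# the pipeline and the AUC estimator sketched above.
p_grid = [0.5, 1, 2, 2.5, 3, 5, 7, 10]
results = {p: evaluate_auc(train_edges, test_edges, p=p, q=1)
           for p in p_grid}
best_p = max(results, key=results.get)  # the optimum differs per dataset
\end{verbatim}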
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{AUC_score.pdf}
\caption{Performance comparison for varying parameter $p$ on different dynamic networks. All methods are based on the same temporal information decayed by \textit{ASF}. The performance of TLPSS is superior to the other baseline methods.}
\label{comparision of p}
\end{figure}
\subsection*{Performance of Latent Matrix Sequence in Real-World Networks}
Based on the definitions of \textit{ASF} and \textit{LMS}, we can conclude that the weight of a latent edge in the latent matrix is closely related to the parameter $q$, and its upper bound is the lower bound of \textit{ASF}, namely $q/(q + 1)$. To further understand the mechanism of the proposed TLPSS model, the influence of the parameter $q$ on the AUC value is demonstrated in Fig. \ref{comparision of q}.
We set the parameter $p$ for each dataset to the optimal value found in the previous experiment, and vary $q$ from 0 to 10 with step size 1 to compute the AUC values of the different algorithms. From Fig. \ref{comparision of q}, the AUC value of the TLPSS model fluctuates greatly as $q$ increases from 0 to 1. The special case $q=0$ means that no latent edges are considered, according to Eq. \ref{latent matrix}: removing the latent edge terms in Eq. \ref{s_xy} and Eq. \ref{s_yx} reduces TLPSS to the asymmetric modification of RA. This deprives the 2-simplex structures composed of endpoints and hidden nodes of their effect and harms the performance of TLPSS. Evidence can be found at the initial points of the curves in Fig. \ref{comparision of q}, which show that at $q=0$ the AUC values of the TLPSS and RA indices are almost equal, at a lower level.
The AUC value of the TLPSS model increases significantly as $q$ varies from 0 to 1, which confirms that considering latent edges in the network addresses the cold-start problem faced by traditional common-neighbor-based link prediction methods, since the sparsity of the network can lead to a lack of common neighbors.
It is clear that as $q$ increases from 0 to 10, the prediction accuracy of TLPSS increases up to an optimal value, after which it remains stable. In the stable regime, none of the methods is sensitive to changes of $q$, because the similarity scores of positive and negative samples grow proportionally and their ordinal relationship remains unchanged. Moreover, compared with TLPSS, the other similarity methods have lower average performance. It is evident that TLPSS benefits from the latent edges composed of higher-order simplex structures, which enrich the topological information around the target link. The $q=0$ reduction can also be verified directly, as sketched below.
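\begin{verbatim}
# At q = 0 the lower bound of ASF is 0/(0+1) = 0, so every latent weight
# vanishes and the latent terms in the scores drop out: TLPSS reduces to
# the asymmetric RA-style sum over common neighbors only.
q = 0
assert q / (q + 1) == 0.0
# latent_weight(A, m, deg, i, j, q=0) == 0.0 for every pair (i, j),
# hence B^T is the zero matrix and tlpss(...) keeps only the A-terms.
\end{verbatim}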
To sum up, large values of $q$ cause little fluctuation in TLPSS. According to the experimental results, we recommend setting the parameter $q = 1$.
\begin{figure}[h]
\centering
\includegraphics[width=12 cm]{AUC_q_analysis.pdf}
\caption{Performance comparison for varying parameter $q$ on different dynamic networks. The AUC curve of the TLPSS model has a rising stage followed by a stable stage; the sharp rise illustrates the effectiveness of the latent edges built from 2-simplex structures.}
\label{comparision of q}
\end{figure}
\section*{Conclusion}
In this paper, we concentrate on the link prediction problem and design a general framework for temporal networks. We first provide a new time decay function, \textit{ASF}, to quantify the remaining information of links created at different timestamps. Next, \textit{HNS} and \textit{LE} are introduced for the target link to extract the surrounding 2-simplex higher-order structures. Besides, \textit{LMS} effectively quantifies the weights of latent edges in the network, which alleviates the lack of common neighbors that traditional similarity methods suffer from in sparse networks. Finally, from the perspective of node asymmetry, we propose the temporal link prediction method TLPSS by combining the 2-simplex structural information of the adjacency matrix and the latent matrix. We theoretically analyze the optimality and validity of the parameters in the model. Extensive experiments on multiple datasets from different fields demonstrate the superiority of our model TLPSS over the baseline approaches.
Our future work will focus on link prediction in directed temporal networks, combining the life cycle of information with higher-order structures. The combination of \textit{ASF} with other types of structures extracted by deep learning methods is also left for future research.
\section*{Data Availability}
All datasets in this paper are available at http://konect.cc/networks/.